ITD: Load Balancing, Traffic Steering and Clustering using Nexus 5k/6k/7k/9k
Cisco Intelligent Traffic Director (ITD) is an innovative solution that bridges the performance gap between a multi-terabit switch and gigabit servers and appliances. It is a hardware-based, multi-terabit layer-4 load-balancing, traffic-steering, and clustering solution on the Nexus 5k/6k/7k/9k series of switches.
It allows customers to deploy servers and appliances from any vendor with no network or topology changes. With a few simple configuration steps on a Cisco Nexus switch, customers can create an appliance or server cluster and deploy multiple devices to scale service capacity with ease. The servers or appliances do not have to be directly connected to the Cisco Nexus switch.
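As a sketch of those configuration steps, a minimal ITD setup on NX-OS looks roughly like the following. The device-group name, node addresses, and ingress interface are illustrative, and exact syntax can vary by platform and software release:

```
! Enable the ITD feature
feature itd

! Define the cluster of servers/appliances to load-balance across
itd device-group WEB-FARM
  node ip 10.10.10.11
  node ip 10.10.10.12
  probe icmp

! Create the ITD service and attach it to the ingress interface
itd WEB-SERVICE
  device-group WEB-FARM
  ingress interface Ethernet1/1
  no shutdown
```

Traffic arriving on the ingress interface is then hashed across the nodes in the device group, with the ICMP probe monitoring node health.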
ITD won the Best of Interop 2015 award in the Data Center category.
With its patent-pending algorithms, ITD (Intelligent Traffic Director) supports IP stickiness, resiliency, consistent hashing, exclude access-lists, NAT (EFT), VIPs, health monitoring, sophisticated failure-handling policies, N+M redundancy, IPv4, IPv6, VRFs, weighted load-balancing, bi-directional flow-coherency, and IP SLA probes including DNS. No service module or external appliance is needed. ITD provides order-of-magnitude CAPEX and OPEX savings for customers and is far superior to legacy solutions such as PBR, WCCP, ECMP, port-channels, and layer-4 load-balancer appliances.
ITD provides:
Hardware based multi-terabit/s L3/L4 load-balancing at wire-speed.
Zero latency load-balancing.
CAPEX savings: no service module or external L3/L4 load-balancer is needed. Every Nexus port can be used as a load-balancer.
Redirects line-rate traffic to any device, for example web cache engines, Web Accelerator Engines (WAE), video caches, etc.
Capability to create clusters of devices, for example firewalls, Intrusion Prevention Systems (IPS), Web Application Firewalls (WAF), or Hadoop clusters.
Resilient, consistent hashing (similar to resilient ECMP).
VIP-based L4 load-balancing.
NAT (available for EFT/PoC). Allows non-DSR deployments.
Load-balances to a large number of devices/servers.
ACLs can be applied along with redirection and load balancing simultaneously.
Bi-directional flow-coherency: traffic from A to B and from B to A goes to the same node.
Order-of-magnitude OPEX savings: reduced configuration and ease of deployment.
Order-of-magnitude CAPEX savings: wiring, power, rack-space, and cost savings.
The servers/appliances don't have to be directly connected to the Nexus switch.
Health monitoring of servers/appliances.
N + M redundancy.
Automatic failure handling of servers/appliances.
VRF support, vPC support, VDC support
Supported on all linecards of Nexus 9k/7k/6k/5k series.
Supports both IPv4 and IPv6
Cisco Prime DCNM Support
No certification, integration, or qualification needed between the devices and the Cisco NX-OS switch.
The feature does not add any load to the supervisor CPU.
ITD uses orders of magnitude less hardware TCAM resources than WCCP.
Handles unlimited number of flows.
Load-balance traffic to 256 servers of 10Gbps each.
Load-balance to a cluster of firewalls. ITD is far superior to PBR.
Scale IPS, IDS and WAF by load-balancing to standalone devices.
Scale the NFV solution by load-balancing to low cost VM/container based NFV.
Scale the WAAS / WAE solution.
Scale the VDS-TC (video-caching) solution.
Scale the Layer-7 load-balancer, by distributing traffic to L7 LBs.
ECMP/port-channels cause re-hashing of flows. ITD is resilient and does not cause re-hashing on node add/delete/failure.
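Several of the capabilities above (VIPs, weighted load-balancing, health probes, and N+M redundancy) map directly onto the ITD CLI. The following is a hedged sketch with illustrative names, addresses, and weights; the exact sub-commands available depend on the NX-OS release:

```
! Firewall cluster with a TCP health probe, per-node weights,
! and a standby node for N+M redundancy
itd device-group FW-CLUSTER
  probe tcp port 22
  node ip 10.20.20.11
    weight 2
    standby ip 10.20.20.21
  node ip 10.20.20.12
    weight 1

! Service with a virtual IP (VIP) fronting the cluster
itd FW-SERVICE
  device-group FW-CLUSTER
  virtual ip 192.0.2.100 255.255.255.255
  ingress interface Ethernet1/10
  no shutdown
```

Here the node with weight 2 receives roughly twice the traffic of the weight-1 node, and the standby node takes over only if its primary fails the probe.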