Environments where multiple networks share a common VLAN sometimes appear in legacy network deployments and in environments where a legacy load balancer is being replaced with the Cisco ACE. The Cisco ACE, acting as a load balancer, decides which server should serve each client request.
Before the ACE can receive traffic from the supervisor engine in the Catalyst 6500 series switch or in a Cisco 7600 series router, you must create VLAN groups on the supervisor engine, and then assign the groups to the ACE. After you configure the VLAN groups on the supervisor engine for the ACE, you can configure the VLAN interfaces on the ACE. You cannot assign the same VLAN to multiple groups; however, you can assign multiple groups to an ACE. VLANs that you want to assign to multiple ACEs, for example, can reside in a separate group from VLANs that are unique to each ACE.
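For reference, the supervisor engine side of this setup might look something like the sketch below; the module slot (3), VLAN group number (1), and VLAN IDs (20 and 40) are example values only and must match your own environment.
Cat6k-1(config)# svclc multiple-vlan-interfaces
Cat6k-1(config)# svclc vlan-group 1 20,40
Cat6k-1(config)# svclc module 3 vlan-group 1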
Clients send application requests through the Multilayer Switch Feature Card (MSFC), which routes them to a virtual IP address (VIP) within the Cisco Application Control Engine (ACE). The VIP used in this example resides in an ACE context that is configured with a client VLAN and a server VLAN. Client requests arrive at the VIP, and the ACE picks the appropriate server and then uses destination Network Address Translation (NAT) to send the client request to the server. The servers can reside in one to four subnets sharing a common VLAN. The ACE supports secondary IP addressing, which allows servers on any of these subnets to use the ACE as their default gateway on the common interface VLAN. The ACE then changes the source IP to the VIP and forwards the response to the client via the MSFC. As in IOS, secondary IP addresses are treated the same way as primary addresses: they appear in the ARP, routing, and other tables simply as IPs, with no special marking to indicate that an address is secondary. The Cisco ACE supports up to four secondary IPs per interface VLAN, with a system-wide maximum of 1024 secondary IP addresses.
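As a point of reference, the MSFC side of the client VLAN could be configured roughly as follows; the device name and the 172.16.1.1 gateway address are assumptions for this example and are not part of the ACE configuration shown later.
MSFC-1(config)# interface Vlan20
MSFC-1(config-if)# description Client-side VLAN facing the ACE
MSFC-1(config-if)# ip address 172.16.1.1 255.255.255.0
MSFC-1(config-if)# no shutdown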
The Cisco ACE requires access control lists (ACLs) to permit traffic into the ACE data plane. After the ACL checks are made, a service policy applied to the interface classifies traffic destined for the VIP. The VIP is associated with a load-balancing action within the multi-match policy. The load-balancing action tells the ACE how to handle traffic that has been directed to a VIP. In this example, all traffic is sent to a server farm, where it is distributed in round-robin fashion to one of the real servers. The ACE configuration is built in layers, starting from the real server IPs and ending with the VIP applied to an interface. Because of this layered structure, it is easiest to build the configuration by working backward from the way a flow is processed.
Thus, to enable server load balancing you need to do the following:
1) Enable ACLs to allow data traffic through the ACE device, as it is denied by default.
2) Configure the IPs of the servers (define rservers).
3) Group the real servers (create a server farm).
4) Define the virtual IP address (VIP).
5) Define how traffic is to be handled as it is received (create a policy map for load balancing).
6) Associate the VIP with a handling action (create a multimatch policy map [a service policy]).
7) Create client- and server-facing interfaces and apply a secondary IP address to the server interface.
8) Apply the VIP and ACL permitting client connections to the interface (apply access group and service policy to interface).
To begin the configuration, create an access list for permitting client connections.
ACE-1(config)# access-list everyone extended permit ip any any
ACE-1(config)# access-list everyone extended permit icmp any any
Although this example shows a “permit any any,” it is recommended that ACLs permit only the traffic you want to allow through the Cisco ACE. In the past, server load-balancing (SLB) devices relied on the VIP and port alone to protect servers. Within the ACE, ACLs are processed first, so dropping traffic with an ACL requires fewer resources than dropping it after it passes the ACLs and reaches the VIP.
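For instance, a tighter ACL for this example might permit only HTTP traffic to the VIP defined later in this document; the name “web-only” is an arbitrary choice.
ACE-1(config)# access-list web-only extended permit tcp any host 172.16.1.100 eq 80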
The Cisco ACE needs to know the IP addresses of the servers available to handle client connections. The rserver command is used to define the IP address of each real server. In addition, each rserver must be placed in service before it can be used. The benefit of this design is that, no matter how many applications or services an rserver hosts, the entire real server can be removed from the load-balancing rotation by issuing a single “no inservice” or “no inservice-standby” command at the rserver level. This is very useful when upgrading or patching an rserver, because you no longer have to go to each application and remove each instance of the rserver.
ACE-1(config)# rserver lnx1
ACE-1(config-rserver-host)# ip add 192.168.1.11
ACE-1(config-rserver-host)# inservice
ACE-1(config-rserver-host)# rserver lnx2
ACE-1(config-rserver-host)# ip add 192.168.1.12
ACE-1(config-rserver-host)# inservice
ACE-1(config-rserver-host)# rserver lnx3
ACE-1(config-rserver-host)# ip add 192.168.1.13
ACE-1(config-rserver-host)# inservice
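As mentioned above, pulling a host out of rotation for maintenance takes only a single command at the rserver level; a quick sketch for lnx1 would be:
ACE-1(config)# rserver lnx1
ACE-1(config-rserver-host)# no inservice
Issuing “inservice” again returns the server to rotation in every server farm that references it.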
Now group the rservers that will handle client connections into a server farm. Again, each rserver must be placed in service within the server farm; this is what allows a single instance of an rserver to be manually removed from rotation in one farm without affecting others.
ACE-1(config)# serverfarm web
ACE-1(config-sfarm-host)# rserver lnx1
ACE-1(config-sfarm-host-rs)# inservice
ACE-1(config-sfarm-host-rs)# rserver lnx2
ACE-1(config-sfarm-host-rs)# inservice
ACE-1(config-sfarm-host-rs)# rserver lnx3
ACE-1(config-sfarm-host-rs)# inservice
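Round robin is the default predictor on the ACE, so nothing further is needed for this example. If a different distribution were desired, a predictor could be set explicitly under the server farm; “leastconns” below is shown purely as an illustration.
ACE-1(config)# serverfarm web
ACE-1(config-sfarm-host)# predictor leastconns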
Use a class map to define the VIP to which clients will send their requests. In this example, the VIP is considered L3 (Layer 3) because there is a match on any port. If the VIP were to match only HTTP traffic, the match would be bound to port 80 and considered an L4 (Layer 4) VIP. (For example, “match virtual-address 172.16.1.100 tcp eq 80”).
ACE-1(config)# class-map slb-vip
ACE-1(config-cmap)# match virtual-address 172.16.1.100 any
Next, define the action to take when a new client request arrives. In this case, all traffic will be sent to the “web” serverfarm. This type of load balancing is considered L4, since only class-default is used.
ACE-1(config)# policy-map type loadbalance http first-match slb
ACE-1(config-pmap-lb)# class class-default
ACE-1(config-pmap-lb-c)# serverfarm web
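For comparison, an L7 policy would add HTTP-aware class maps ahead of class-default. The class map name, URL expression, and “web-images” server farm below are hypothetical and are shown only to illustrate the structure.
ACE-1(config)# class-map type http loadbalance match-any images
ACE-1(config-cmap-http-lb)# match http url /images/.*
ACE-1(config)# policy-map type loadbalance http first-match slb
ACE-1(config-pmap-lb)# class images
ACE-1(config-pmap-lb-c)# serverfarm web-images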
Since the VIPs and load-balancing actions are defined independently, they must be associated so that the Cisco ACE knows how to handle traffic destined for a VIP. The association is made using a multimatch policy map. Keep in mind that multimatch policy maps are applied to interfaces as service policies.
ACE-1(config)# policy-map multi-match client-vips
ACE-1(config-pmap)# class slb-vip
ACE-1(config-pmap-c)# loadbalance policy slb
ACE-1(config-pmap-c)# loadbalance vip inservice
At this point the interface VLANs can be created to interconnect the Cisco ACE to the client side of the network and to the servers.
ACE-1(config)# interface vlan 20
ACE-1(config-if)# description "Client Side"
ACE-1(config-if)# ip address 172.16.1.5 255.255.255.0
ACE-1(config-if)# no shutdown
ACE-1(config-if)# interface vlan 40
ACE-1(config-if)# description "Default gateway of real servers"
ACE-1(config-if)# ip address 192.168.1.1 255.255.255.0
ACE-1(config-if)# no shutdown
In addition to the primary IP address, the server-side interface needs a secondary IP address so that it can also be part of the 10.10.10.0/24 network.
ACE-1(config-if)# interface vlan 40
ACE-1(config-if)# ip address 10.10.10.1 255.255.255.0 secondary
The last step is to apply the ACL and service policy (“policy-map multi-match”) to the client-side interface. Both the access group and service policy are applied on the input side of the interface.
ACE-1(config)# interface vlan 20
ACE-1(config-if)# access-group input everyone
ACE-1(config-if)# service-policy input client-vips
There is no need to add an access group to the server side, as the Cisco ACE automatically creates pinholes to allow server response traffic to pass back to the client.
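At this point the setup can be spot-checked from the ACE itself; exact output varies by software release.
ACE-1# show service-policy client-vips
ACE-1# show serverfarm web
ACE-1# show rserver lnx1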
Cisco has a new solution called ITD:
http://blogs.cisco.com/datacenter/itd-load-balancing-traffic-steering-clustering-using-nexus-5k6k7k
ITD (Intelligent Traffic Director) is a hardware-based, multi-Tbps Layer 4 load-balancing, traffic-steering, and clustering solution on the Nexus 5K/6K/7K series of switches. It supports IP stickiness, resiliency, NAT (EFT), VIPs, health monitoring, sophisticated failure-handling policies, N+M redundancy, IPv4, IPv6, VRFs, weighted load balancing, bidirectional flow coherency, and IP SLA probes including DNS. No service module or external appliance is needed.
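A rough ITD sketch on NX-OS might look like the following; the device-group and service names, node addresses, and ingress interface are placeholders, and the exact syntax varies by platform and NX-OS release.
N7K-1(config)# feature itd
N7K-1(config)# itd device-group WEB-SERVERS
N7K-1(config-device-group)# node ip 10.10.10.11
N7K-1(config-device-group)# node ip 10.10.10.12
N7K-1(config-device-group)# probe icmp
N7K-1(config)# itd WEB
N7K-1(config-itd)# device-group WEB-SERVERS
N7K-1(config-itd)# ingress interface ethernet 1/1
N7K-1(config-itd)# no shutdown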