I'm looking for ideas on weighting, tracking, and priority for a scenario where we have a pair of 6500s in one data center and another pair of 6500s in a second data center, all present on a single network segment (using VPLS), along with a pair of forwarding firewalls in Active/Standby mode. Like this...
...all 6500s in the same GLBP group
...by default, all 6500s round-robin load-balance
...DCs are geographically diverse
        DC 1                         DC 2
   6500      6500              6500      6500
     |         |                 |         |
-----------------Outside VPLS Span------------------
 DC 1 ASA (active)                  DC 2 ASA (standby)
-----------------Inside VPLS Span-------------------
NOTE: there are production servers and other resources in both DCs, some operating as failover infrastructures and others operating independently. Layer 2 connectivity between the DCs is therefore a requirement, not an option. VPLS provides the L2-over-L3 connectivity and works just fine.
My concern is that, by default, all four 6500s in the GLBP group will round-robin load-balance. Because the DCs are geographically diverse (max 100 ms delay between DCs, typically sub-10 ms), if all four 6500s are forwarding, part of a TCP conversation could come back through the DC 1 distribution pair while other parts of the same conversation come back through the DC 2 distribution pair. While all traffic would still traverse the same active FW, out-of-sequence TCP from the two geographically diverse pairs of 6500s, with the associated buffering and re-assembly delays, could be a problem.
Ideally it would be nice to have the DC1 6500s act as the forwarding GLBP pair and, if any problem took out both of those distribution routers, have the DC2 GLBP pair automatically take over - never load-balancing across all four group members at once, so that out-of-sequence TCP would never be a problem.
I don't see a way to make that work with GLBP, though I admit I don't understand the object-tracking feature well enough to know whether it's a possible solution.
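For what it's worth, the closest GLBP itself gets to this is weighted load balancing with object tracking. A rough IOS sketch of what I've been looking at (interface names, addresses, group number, and thresholds are all made up for illustration):

```
! DC1 6500 #1 - highest priority so it becomes the AVG; weighted
! load balancing with an uplink tracked so its weight drops on failure.
track 1 interface TenGigabitEthernet1/1 line-protocol
!
interface Vlan100
 ip address 10.1.100.2 255.255.255.0
 glbp 1 ip 10.1.100.1
 glbp 1 priority 110
 glbp 1 preempt
 glbp 1 load-balancing weighted
 glbp 1 weighting 100 lower 70 upper 90
 glbp 1 weighting track 1 decrement 40
```

The catch, as I understand it, is that GLBP has no "standby AVF" role: any up member whose weight is above its lower threshold still forwards, so the DC2 pair could be de-weighted but not fully idled.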
Anybody ever done something like this before?
Advanced GLBP config guide?
>> by default all 6500's round-robin load-balance
If there were multiple clients, GLBP would spread them out. But since GLBP is used on the outside VPLS VLAN, the active ASA is the only client in that VLAN, and the ASA has a default route with next-hop = GLBP VIP, only one C6500 will be used until the ASA ARPs again for the VIP address.
This can happen as seldom as every 4 hours (the default ARP timeout).
You could try to rely on proxy ARP (a static route with no IP next-hop, just an outgoing interface), but it doesn't pay: it would cause the ASA's ARP table to become very big and force a lot of process switching.
This is the important point: GLBP provides advantages only on real client VLANs where tens to hundreds of clients are connected, not in your scenario.
GLBP load balancing simply means giving different answers to ARP requests for the GLBP VIP address.
You could use multiple static routes with next-hop tracking instead. The issue is that if the ASA performs failover and the standby ASA becomes active, it will still try to use the C6500s in DC1 (ASA configurations in a failover pair are identical).
So you could have two primary static routes toward the DC1 C6500s and two floating static routes (with AD 201, for example) toward the C6500s in DC2.
But, as noted above, in case of an ASA switchover the new active ASA would still try to use the remote C6500s first.
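A rough ASA sketch of the tracked-plus-floating approach (all addresses, SLA/track numbers, and interface names are assumptions, and equal-cost static route support varies by ASA version, so treat this as illustrative only):

```
! Probe the DC1 C6500 next-hop; the primary default route is withdrawn
! if the probe fails, letting the AD 201 floating route toward DC2 take over.
sla monitor 10
 type echo protocol ipIcmpEcho 10.1.100.2 interface outside
sla monitor schedule 10 life forever start-time now
!
track 10 rtr 10 reachability
!
route outside 0.0.0.0 0.0.0.0 10.1.100.2 1 track 10
route outside 0.0.0.0 0.0.0.0 10.1.100.4 201
```

A second tracked primary and second floating route would mirror these lines with their own SLA/track numbers. And as noted, after an ASA failover the new active unit inherits the same routes, so it still prefers the DC1 next-hop while it is reachable.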
So, briefly, I don't see any use for GLBP in this scenario.
Hope to help
That's about where I came out as well - hence my confusion. And I do understand your point about only one client on the segment using GLBP (the active firewall).
First we're going to performance-test the infrastructure to confirm whether there are any re-assembly issues. If not, then I think the solution holds, as its primary purpose is to provide continuation of service through the remote DC 2 should the building containing DC 1 blow up. I think the four-member group holds up in that scenario - again, assuming no re-assembly performance issues. I'll note that most of the traffic traversing the FW travels through the Internet in any case, and through two different providers (additional perimeter FWs using ASRs).
We are not overly concerned about partial device failures, such as a single distribution router at either DC going down, or a FW failover event where all traffic from the newly active DC 2 FW might have to forward back through the DC 1 GLBP pair. Really bad things are happening in that scenario, and we could accept the temporarily sub-optimal forwarding paths.
However, if there is a statistically significant performance impact from the four-member GLBP group across VPLS-connected data centers, then I think in the end - since the majority of traffic occurs at DC 1 and the default active FW is at DC 1 - we make the DC 1 distribution pair one GLBP group, make the DC 2 distribution pair a second GLBP group, and apply floating routes in case both DC 1 distribution routers go away.
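To make that concrete, the split-group design I have in mind would look roughly like this (group numbers, VLAN, VIPs, and addresses are all placeholders):

```
! DC1 6500s: group 1, VIP .1 (primary path)
interface Vlan100
 ip address 10.1.100.2 255.255.255.0
 glbp 1 ip 10.1.100.1
 glbp 1 preempt
!
! DC2 6500s: group 2, VIP .254 (backup path)
interface Vlan100
 ip address 10.1.100.5 255.255.255.0
 glbp 2 ip 10.1.100.254
 glbp 2 preempt
!
! ASA: tracked primary default route to the DC1 VIP,
! floating (AD 201) route to the DC2 VIP.
route outside 0.0.0.0 0.0.0.0 10.1.100.1 1 track 10
route outside 0.0.0.0 0.0.0.0 10.1.100.254 201
```

That keeps each conversation's return path within one DC pair while still giving first-hop redundancy inside each DC.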
Does that make sense?
>> we make the DC1 distribution pair a GLBP group, make the DC 2 distribution pair a second GLBP group, and apply floating routes in case both DC 1 distributions were to go away.
I agree this is a better choice
Hope to help