02-03-2009 06:11 PM - edited 03-06-2019 03:51 AM
I have two equal paths between my data center and disaster recovery site. They are both 100 Mb/s Metro Ethernet connections. Due to limitations on the carrier's equipment I'm running layer 3 between the sites rather than layer 2; basically, the carrier can't pass VLAN tags on their network.
I thought EIGRP would provide some load balancing between the two lines, but my MRTG graphs certainly aren't showing that. At times one path carries double the load of the other. Am I wrong in my assumption about how EIGRP handles multiple equal-cost routes?
02-04-2009 12:47 AM
Hello Roland,
per-destination load balancing works well when there are many flows with different source and destination IP addresses.
Traffic between two data centers is likely made up of a few very high-volume flows
(for example, synchronization of two databases: a lot of traffic between two IP addresses).
In a case like yours, if no VoIP is involved, you could consider per-packet load sharing to get a fairer use of the links.
In other words, what you see is not a limitation of the routing protocol; it is how CEF destination-based load balancing works.
Hope to help
Giuseppe
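Giuseppe's point can be illustrated with a toy model. The addresses and rates below are invented, and real CEF uses its own hash function (not CRC32), but the effect is the same: with per-destination sharing, every packet of a given source/destination pair follows the same path, so two large replication flows can easily land on the same link.

```python
import zlib

def pick_path(src: str, dst: str, n_paths: int = 2) -> int:
    """Deterministically map a (src, dst) flow to one of n_paths links,
    roughly mimicking CEF per-destination load sharing."""
    return zlib.crc32(f"{src}->{dst}".encode()) % n_paths

def link_loads(flows, n_paths: int = 2):
    """flows: iterable of (src, dst, mbps). Returns per-link totals."""
    loads = [0] * n_paths
    for src, dst, mbps in flows:
        loads[pick_path(src, dst, n_paths)] += mbps
    return loads

# Two big replication flows between fixed address pairs (hypothetical):
flows = [("10.1.1.10", "10.2.1.10", 60),
         ("10.1.1.20", "10.2.1.20", 40)]
# Depending on how the two flows hash, the split can be 60/40 or 100/0;
# it never rebalances, because each flow is pinned to one path.
print(link_loads(flows))
```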
02-04-2009 02:58 AM
Hi Roland,
Could you please share the output of the following command.
sh cef interface
regards
02-04-2009 03:57 AM
Hi, here is the output from the two CEF interfaces in the data center
Core1
Vlan254 is up (if_number 56)
Corresponding hwidb fast_if_number 56
Corresponding hwidb firstsw->if_number 56
Internet address is 10.90.254.49/30
ICMP redirects are always sent
Per packet loadbalancing is disabled
IP unicast RPF check is disabled
Inbound access list is not set
Outbound access list is not set
IP policy routing is disabled
BGP based policy accounting is disabled
Hardware idb is Vlan254
Fast switching type 21, interface type 103
IP CEF switching enabled
IP Feature Fast switching turbo vector
IP Feature CEF switching turbo vector
Input fast flags 0x0, Output fast flags 0x0
ifindex 54(54)
Slot 0 Slot unit 254 VC -1
Transmit limit accumulator 0x0 (0x0)
IP MTU 1500
Core2
Vlan264 is up (if_number 66)
Corresponding hwidb fast_if_number 66
Corresponding hwidb firstsw->if_number 66
Internet address is 10.90.254.62/30
ICMP redirects are always sent
Per packet loadbalancing is disabled
IP unicast RPF check is disabled
Inbound access list is not set
Outbound access list is not set
IP policy routing is disabled
BGP based policy accounting is disabled
Hardware idb is Vlan264
Fast switching type 21, interface type 103
IP CEF switching enabled
IP Feature Fast switching turbo vector
IP Feature CEF switching turbo vector
Input fast flags 0x0, Output fast flags 0x0
ifindex 64(64)
Slot 0 Slot unit 264 VC -1
Transmit limit accumulator 0x0 (0x0)
IP MTU 1500
Yes, per-packet load balancing isn't enabled. I just noticed that.
02-04-2009 04:07 AM
Hi
Enable per-packet load balancing on the interface using the command
ip load-sharing per-packet (an interface command) and let me know the result
Regards
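Applied to the VLAN interfaces shown in the earlier output, the configuration would look something like the sketch below. Per-packet CEF support varies by platform and IOS version, so verify it is available on your hardware before relying on it.

```
! Core1
interface Vlan254
 ip load-sharing per-packet
!
! Core2
interface Vlan264
 ip load-sharing per-packet
```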
02-04-2009 06:18 AM
Since the layer 2 and layer 3 interfaces reside on two different chassis, will this be beneficial?
02-04-2009 06:37 AM
You would want to enable per-packet load sharing at the point in the network where the traffic streams split. But... and this is a HUGE but...
I wouldn't do it. If these are TCP-based streams (and I assume they are), there's a good possibility that packet reordering caused by differing queuing and serialization delays will be counted as lost segments, sending TCP into slow start and effectively reducing application performance across the board. The load-sharing numbers might look a lot better, but in reality the applications might be doing much worse.
Be very careful with per-packet load sharing. The first answer you received on the thread, that this is probably what you are going to get with such a low number of source/destination pairs, is correct. The one option you might try is to enable the tunnel option on CEF load sharing. I can't remember the exact command, but it tries to account for low numbers of source/destination pairs, if possible, and it might split the traffic better for you without disturbing the per-flow semantics.
:-)
Russ
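For reference, the knob Russ is describing is most likely the global CEF load-sharing algorithm setting. This is my assumption; the exact syntax and the set of available algorithms vary by IOS release, so check the documentation for your platform.

```
! Global configuration on each core router.
! "tunnel" biases the hash for environments with few src/dst pairs;
! "universal" (the default on most releases) is plain per-flow hashing.
ip cef load-sharing algorithm tunnel
```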
02-04-2009 08:11 AM
I'm having a similar issue in that I have 2 Metro Ethernet links to a replica site, but I'm only ever using one of them to its full capacity.
I was thinking of using PBR and pairing sets of master/replicas to force, say, Pair1 to use LinkA and Pair2 to use LinkB.
I still wouldn't get more than 100Mb between the pairs of servers, but that wouldn't be an issue as my network in the replica site is only fast ethernet.
Just my €.02
Kevin
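A minimal sketch of the PBR approach Kevin describes, with invented server addresses. The next-hop addresses are assumed to be the far ends of the two /30s from the CEF output earlier in the thread, and the source VLAN interface is hypothetical, so substitute your own values.

```
! Pair1 replication traffic -> LinkA, Pair2 -> LinkB (addresses hypothetical)
access-list 101 permit ip host 10.1.1.10 host 10.2.1.10
access-list 102 permit ip host 10.1.1.20 host 10.2.1.20
!
route-map REPLICA-SPLIT permit 10
 match ip address 101
 set ip next-hop 10.90.254.50
route-map REPLICA-SPLIT permit 20
 match ip address 102
 set ip next-hop 10.90.254.61
!
interface Vlan10
 ip policy route-map REPLICA-SPLIT
```

One nice property of this design: if a specified next hop becomes unreachable, IOS falls back to the normal routing table, so it degrades gracefully when a link fails.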