EIGRP Equal Cost Load Balancing doesn't increase bandwidth?

GuoPatrick02
Level 1

In a previous experiment, my computer was behind a router connected over a T1 link to another router, and that far-end router was connected to the internet. I went on speedtest.net and it reported a download speed of about 1.46 Mbps, which is pretty accurate considering the T1 (the slowest link in my network) is 1.544 Mbps.

In the experiment I did today, I was doing equal-cost load balancing across 4 T1 links. The routing table shows that the route to 0.0.0.0 (the path to speedtest.net) uses all 4 paths, each with an equal metric, so I know equal-cost load balancing is in use to reach the internet. I predicted the speed test would show about 6 Mbps download, since each link is 1.5 Mbps and there are 4 of them equally load balanced, so 1.5 Mbps per link would give 6 Mbps. When I ran the test, it reported the same thing as before: 1.46 Mbps. I tried shutting down 3 of my 4 links and retesting, and it gave me the same result. Does this mean that with EIGRP, equal-cost load balancing doesn't increase your bandwidth, and instead just divides the speed of one link among the links (0.375 Mbps per link when load balancing across 4 T1s)?
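(In case it helps, the four paths can be confirmed straight from the routing table:

show ip route 0.0.0.0

With equal-cost load balancing in place, the routing entry for 0.0.0.0/0 should list four descriptor blocks, one per serial interface, all with the same EIGRP metric.)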

I haven't tried OSPF equal-cost load balancing or EIGRP unequal-cost load balancing yet, so if anyone could give me info on those, that would help too.

Thanks

P.S. The internet speed I ordered is 12 Mbps. With no T1s in my network I usually get a speed test result of 22 Mbps or more because my ISP allows bursting. At the same time as this experiment, I had another computer connected to the internet with no T1s in the path, and speedtest reported that computer's download speed as 23.44 Mbps. Other than the T1s, every cable is Ethernet, and all my routers have FastEthernet interfaces at 100 Mbps.


Reza Sharifi
Hall of Fame

The behavior you are seeing is correct. With equal-cost load balancing you basically have more pipes to use; it doesn't mean that packet forwarding will be divided equally among the 4 links. If you send, for example, 100 packets, it doesn't mean each T1 will carry exactly 25: one T1 may send 30, another 20, and the other two 25 each. So it means you have 4 T1s of 1.5 Mbps each, not a single 6 Mbps pipe.

HTH

The link from my far-end router to the edge router is a FastEthernet link. If traffic is load balanced equally at 1.5 Mbps per link, and each T1 is 1.5 Mbps, shouldn't the far-end router be receiving and forwarding traffic at 6 Mbps, since it's getting 1.5 Mbps on each of the 4 links at the same time? Even if the links aren't perfectly equal, with one sending more than another, shouldn't the far-end router still be receiving about 6 Mbps in total?

Hi,

When there are multiple paths to the same destination installed in the routing table, the switching process takes care of the load sharing. By default, transit traffic is CEF switched, which means it is switched per flow (src-dst IP). So if you are doing a speed test from one source IP to the same destination IP, the CEF algorithm will send all the packets out the same link. What you are observing is normal behaviour.
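You can see this on the router itself: CEF will tell you which of the equal-cost links a given flow will use (the addresses below are placeholders for your PC and the test server):

show ip cef exact-route <source-ip> <destination-ip>

For one fixed source/destination pair it always returns the same outgoing interface, which is why a single download never goes above one T1's worth of bandwidth.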

Regards.

Alain

Don't forget to rate helpful posts.


Hi Alain,

Is there any Cisco paper that explains this behavior in more depth? I thought that in this scenario, with multiple paths to the same destination, the router would use a round-robin method to switch packets and so would use all the links in the bundle.

THANK YOU ALAIN!

I needed to configure the "ip load-sharing per-packet" command on each serial interface, since the default is per-destination and speedtest.net is a single destination, so it wouldn't load balance otherwise.
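For anyone repeating this, it's just one command under each T1 interface (CEF has to be enabled, and the interface numbers here are only examples; use whatever your serial links are called):

interface Serial0/0
 ip load-sharing per-packet
interface Serial0/1
 ip load-sharing per-packet
interface Serial0/2
 ip load-sharing per-packet
interface Serial0/3
 ip load-sharing per-packet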

Here are my new results from speedtest.net:

Using all 4 T1 links: 5.73 Mbps downstream

Using 3 T1 links (one serial interface shut down): 4.50 Mbps downstream

Using 2 T1 links (two serial interfaces shut down): 2.93 Mbps downstream

Using 1 T1 link (three serial interfaces shut down): 1.47 Mbps downstream

Conclusion:

Load balancing does increase total bandwidth to a single destination, but not by default
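That works out to roughly 1.43-1.50 Mbps of usable throughput per active T1 (5.73/4, 4.50/3, 2.93/2, 1.47/1), which is about what a 1.544 Mbps circuit delivers after framing and protocol overhead, so the scaling is essentially linear.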


CEF per-packet load balancing will take advantage of multiple paths for a single flow. The downside, though, is that packet sequencing is lost, and that is often very harmful to many flow types. In other words, you generally don't want to enable CEF per-packet.
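If you do enable it for a test and want to restore the default afterwards, it's the per-interface command below (the interface name is just an example):

interface Serial0/0
 ip load-sharing per-destination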

I can see why Cisco made per-destination the default load-sharing option. Thanks for adding that, Joseph.
