Understanding EIGRP link bundle + load balancing vs bandwidth increase

SJ K
Level 5

14 Replies

Richard Burts
Hall of Fame

Noob

 

The environment that you describe is very odd. Perhaps there is some additional information in this study book that would help us understand how they have set it up. In a normal configuration of an 1841 router, IOS will not accept both fa0/0 and fa0/1 being in the same subnet. Are they perhaps running something like Integrated Routing and Bridging (IRB)? IRB uses bridging logic on the interfaces and a virtual interface to hold the IP address, so with IRB I would expect only a single IP address on each router. If it is not IRB then I do not understand how they get both interfaces into the same subnet.

 

Aside from the issue of how both interfaces are in the same subnet, this is how I would expect things to work. When you issue the ping command on an IOS router, the router looks for the interface to use for the outbound traffic. If there are two interfaces that are equally good (and load sharing is enabled), the router will choose one of them. Note that specifying the source address does not affect the router's choice of outbound interface. Having chosen the outbound interface, R0 will send the ping. R1 receives the ping. In processing the inbound packet and determining how to forward it, the router determines that the destination is the router itself and sends the content of the packet to the router CPU for processing. Note that if the packet is received on fa0/0 and the destination address is fa0/1's, there is no need to forward the packet to fa0/1; the router just processes the response to the packet.

 

HTH

 

Rick


Hi Rick,

Having chosen the outbound interface, R0 will send the ping. R1 receives the ping. In processing the inbound packet and determining how to forward it, the router determines that the destination is the router itself and sends the content of the packet to the router CPU for processing. Note that if the packet is received on fa0/0 and the destination address is fa0/1's, there is no need to forward the packet to fa0/1; the router just processes the response to the packet.

Ah.. so upon receiving the inbound packet, the router knows that the destination is itself and doesn't need to route it to fa0/1 again.

Thanks!

 

Regards,
Noob

Noob

 

Correct. Once the router has determined that it is the destination of the packet there is no need to forward it to fa0/1.

 

In your other post there may be a clue about what is going on. If the bundle results in a connection with 3 MB, then it suggests that the interfaces are Serial interfaces and not FastEthernet. It is an interesting quirk that IOS will allow two Serial interfaces to have IP addresses in the same subnet. So if it were Ser0/0 192.168.1.1 and Ser0/1 192.168.1.3, it would be a legitimate configuration. And in that case my description fits pretty well. If the router pings 192.168.1.4, it will discover that there are two interfaces with equal metric that reach the destination. The router will choose one interface and send the ping on it. If the router chooses Ser0/1, it is easy to see that the ping is received on 192.168.1.4 and is processed. The more interesting case is if the router chooses Ser0/0 and the ping is received on 192.168.1.2. It is as I described: the receiving router will determine that it is the destination and will process the ping without forwarding it to 192.168.1.4.

 

HTH

 

Rick


Hi Rick,

Thanks for the fast confirmation! And sorry for the late reply earlier..

 

Yeap, you are right, the setup was done on a serial link, but I am not sure why it couldn't be done on an Ethernet interface too, so I did mine on Ethernet (but again, from the forum, it seems PT won't work though)..

 

As per the author as mentioned
"... the router now has a 3MB pipe through that line instead of just two 1.5Mbps T1 links..."

 

What's the difference between load balancing over 2 x 1.5 Mbps equal-cost links vs a 3MB pipe? Does it mean one frame will be split across the two links and sent together at the same time -- but as what Peter mentioned, that would be resource intensive and not common.

And if it is per packet or per destination, then only one link will be utilized at a time, so between a 3MB pipe and 2 x 1.5 Mbps - does it make any difference?

 

Also, why is it a 3MB and not a 3Mb pipe? I guess it's a typo.. (as I followed it word for word)..

 

Hope to hear your advice soon.

 

Regards,
Noob

 

 

Noob

 

I do not have a good explanation of why, but I can tell you from lots of experience that an IOS router will reject a configuration where two Ethernet interfaces have IP addresses in the same subnet, while two Serial interfaces configured with IP addresses in the same subnet are allowed. That is probably the case in PT and it is absolutely the case on live routers.

 

I am quite puzzled by the author's comment about a single 3 MB pipe rather than two 1.5 Mbps pipes. I wonder if there is some other detail about how he set up the configuration in the book that we do not know.

 

Without knowing what he had done, my first idea would be that perhaps he was talking about something like MLPPP (Multilink PPP). As I mentioned in another response in this thread, with MLPPP you have multiple physical interfaces that are part of a single layer 3 interface. If it were MLPPP it would be quite appropriate to talk about a 3 MB pipe. But with MLPPP there would be only a single IP address on each router. Since he clearly has separate IP addresses on each link, it cannot be MLPPP. If there are two links, each interface has an IP address, and we are running EIGRP, then it has to be load sharing over two links, not over one 3 MB pipe.

 

To understand how it works with two pipes, let us consider this example. Two routers are connected by two links and we have enabled per-flow load sharing. A web server connected to R0 sends a web page of 6000 bytes to a PC on the LAN of R1. So now there are multiple packets to transmit from R0 to R1, and they will all be sent on the same interface. While R0 is busy transmitting the web page packets, some FTP packets arrive from a device on R0's side destined for R1. R0 can use the second link to transmit the FTP traffic while it is using the first link for the web page packets.
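Rick's two-flow example can be sketched in a few lines of Python (illustrative only: the addresses and the hash are made up, this is not how CEF actually hashes):

```python
# Per-flow load sharing sketch: every packet of a flow hashes to the
# same outbound link, so the web transfer and the FTP transfer can use
# the two links between R0 and R1 concurrently.

def select_link(src_ip: str, dst_ip: str, num_links: int = 2) -> int:
    """Pick an outbound link index from the flow's address pair."""
    return hash((src_ip, dst_ip)) % num_links

# Every packet of the web flow maps to one and the same link...
web_links = {select_link("10.0.0.10", "192.168.2.20") for _ in range(5)}

# ...while a flow between a different host pair may ride the other link.
ftp_link = select_link("10.0.0.99", "192.168.2.20")
```

Because the hash key is the address pair, the single 6000-byte page download never goes faster than one link, but two distinct flows can fill both links at once.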

 

When we think about load sharing, there are two reasons we might want this feature. As we have just discussed, one reason is that we can transmit more data between the two routers. But sometimes the justification is not increased bandwidth but increased reliability through redundancy. This is particularly the case with EtherChannel. When we use EtherChannel between switches, what we frequently want to achieve is not necessarily greater bandwidth but the ability to fail over quickly and automatically if there is a problem with one of the links.

 

You make a good point about 3 MB vs 3 Mb. Technically you are correct that lower case b (bits) is accurate here and upper case B (bytes) is not. But many people use the terms loosely and usually the correct meaning is understood. When accuracy is important we should use "b".

 

HTH

 

Rick


Hi Rick,

 

Thanks for the reply! I have got to admit, I really enjoyed these discussions even though it's 3.00am here now, and I have benefited greatly.

 

Sorry that I posted on your earlier reply while you were posting this. ;)

Just in case you missed them, it's over here below:

For Q1) Duly noted

For Q2) Understood. EIGRP just provides the routing table with the available routes; it is not the process/protocol that decides how load balancing between the 2 routes is carried out.

Hence, can I say that in this scenario the router will be using either the per-destination method or the round-robin method? But how do I check?

By using the traffic share count parameter in show ip route?
if it is per destination, then the share count is one-sided.
if it is per packet/round robin, then the share count will even out.

For Q3) Noted. My guess is right: if there is no way to detect out-of-order frames or packets, then it will be up to the higher-level protocol to do so.. Thanks for the confirmation ;) So can I say - if using round robin + UDP - the chances of a corrupted application are higher?

======================================================

For the comments on Mark ->

In the term "traffic utilization" - I have understood it to be caused by the imbalanced use of the available links -> meaning 10 packets are sent using one link, but the other link has only 5 packets sent.

So if I am using round robin, is it possible that one link can still be under more utilization than the other? (not considering the total traffic size involved)

 

Regards,
Noob

Mark Malone
VIP Alumni

EIGRP has 2 forms of load balancing: equal cost and unequal cost. For equal-cost balancing to work, there must be more than one route to the destination that matches the other route in terms of the metrics (K1-K5) in the routing table. Only 4 routes by default (up to 32 with maximum-paths) can equally load balance at the same time. This provides more resiliency and redundancy and shares the traffic over multiple links, easing congestion in busy networks. The scenario above would not work: you cannot have multiple interfaces under the same IP address space, as it causes overlapping subnets on the device. Routers never switch traffic; they only route by IP. EIGRP bases its load balancing on its metrics; they are very important in EIGRP as they make it very flexible for the administrator.

Unequal-cost LB allows you to share traffic across paths that do not show up in the IP routing table but are in your EIGRP topology table. This allows sharing over up to 6 paths and can be manipulated further using the traffic-share command so that your stronger links get a better ratio than weaker links - say a Gigabit versus a serial link: you would want more traffic going across the Gigabit link, but you may still want to use the serial link as you are paying for it.

q2) Yes, if you have more links you will get more traffic through, as there is more bandwidth available with equal-cost LB - multiple Gigabit links, for example, going to the same destination with equal-cost paths. But in reality, even though the links may be equal cost, one link could be under a lot more utilization than its similar link, depending on your local traffic patterns on each router.

 

This explains the two types well. It takes a bit of practice to get the variance working right; build bigger labs with multiple paths and routers using EIGRP and you will see it working a lot better.

http://ericleahy.com/index.php/eigrp-equal-and-unequal-cost-load-sharing/
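As a rough illustration of the variance mechanism behind unequal-cost load balancing (simplified integer metrics, not the real composite metric; the interface names and numbers are hypothetical):

```python
# EIGRP unequal-cost sketch: a path is installed when it is either the
# successor (best metric) or a feasible successor (reported distance
# below the feasible distance) whose metric fits within variance * FD.

def eligible_paths(paths, variance):
    """paths: list of (name, metric, reported_distance) tuples."""
    fd = min(metric for _, metric, _ in paths)   # feasible distance
    chosen = []
    for name, metric, rd in paths:
        is_successor = metric == fd
        is_feasible = rd < fd                    # feasibility condition
        within_variance = metric <= variance * fd
        if is_successor or (is_feasible and within_variance):
            chosen.append(name)
    return chosen

paths = [("Gig0/0", 100, 0),     # successor (best metric)
         ("Ser0/0", 250, 90),    # feasible, fits within variance 3
         ("Ser0/1", 400, 150)]   # fails the feasibility condition
```

With variance 1 (the default) only Gig0/0 is used; with variance 3 Ser0/0 joins in, and traffic-share then controls the ratio between them.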

Hi Mark,

q2) Yes, if you have more links you will get more traffic through, as there is more bandwidth available with equal-cost LB - multiple Gigabit links, for example, going to the same destination with equal-cost paths. But in reality, even though the links may be equal cost, one link could be under a lot more utilization than its similar link, depending on your local traffic patterns on each router.

Q1) Why is that so? How can one link be under more utilization than the other if it is equal cost? And as per the website that you gave me, it seems like the load balancing/sharing is per packet, not per destination.

http://ericleahy.com/index.php/eigrp-equal-and-unequal-cost-load-sharing/#sthash.euzQx8QD.dpuf

"When performing equal cost load balancing, one packet is sent on each individual path."

 

Regards,
Noob

Noob

 

I have two comments here.

 

Having just posted a response making the point that EIGRP does not really do load sharing, I want to comment on Mark's response, which discusses EIGRP's two forms of load balancing. The real point here is about how routing protocols determine what they will put into the routing table. Most protocols (OSPF for example) will put multiple routes into the table only if all of the routes have equal metric. EIGRP gives us an option to install multiple routes only when they have equal metric, but also an option to install multiple routes when they have unequal metric. In either case, if the packet forwarding process finds multiple routes in the routing table, it will do load sharing.

 

You can certainly get a link that is utilized less than the other even when doing round robin. Consider an example where two routers are connected over two links and 10 packets are transmitted over some period of time: 5 of the packets go over link 1 and the other 5 over link 2. By coincidence all 5 packets on link 1 are 1500 bytes long and all 5 packets on link 2 are 200 bytes long. So in this period link 1 has transmitted 7500 bytes while link 2 has transmitted only 1000 bytes.
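Rick's numbers are easy to verify (same 10 packets, strict alternation between the links):

```python
# Round robin spreads packet *counts* evenly, not bytes: ten packets
# alternate over two links, but the sizes happen to differ per link.
packet_sizes = [1500, 200] * 5            # interleaved arrivals
bytes_per_link = [0, 0]
for i, size in enumerate(packet_sizes):
    bytes_per_link[i % 2] += size         # strict round robin

# bytes_per_link is now [7500, 1000]: equal packet counts, 7.5x the bytes.
```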

 

HTH

 

Rick 


Peter Paluch
Cisco Employee

Hi Koh,

Without knowing how the network was precisely set up, this whole exercise is rather vague.

A router won't allow you to configure two physical ports to be in the same IP subnet. It is impossible for a Cisco router to be configured as, for example, Router0 where Fa0/0 = 192.168.1.1/24 and Fa0/1 = 192.168.1.3/24. There is an option of configuring two physical routed interfaces of a router into a bundle, called an EtherChannel, and have them treated by the router as a single link and a single port, but in that case, there would still be just one logical interface representing the bundle, and a single IP address again.

So it is unclear what the exercise was intended to mean.

q2) I have seen threads discussing about having load balancing != bandwidth increase. I have not studied much into detailed yet, but hope to have the idea on a general level

This strongly depends on what technology is being used to utilize multiple links.

One of the commonly used technologies is the EtherChannel I've mentioned before. EtherChannel works simply by "spreading" the traffic across the links so that some frames are carried by one link and other frames by another link in the bundle. The key used to choose a particular link is derived from a combination of the address fields in the frame - for example, the source MAC address is fed into a hash function and its value points to a particular link in the bundle. Another approach would be to use the destination MAC, or (SRC XOR DST) MAC, or even look into L3 (and L4) addressing to get data for the hashing function that selects the link to carry a particular frame. Note that in this scenario, all frames of a single flow (all frames from the same source if SRC is used, all frames to the same destination if DST is used, all frames between a particular pair of hosts if SRC XOR DST is used, ...) get carried by a single link. This is why on an EtherChannel a single data flow will not "go any faster": it is still carried by a single link only. However, if you have multiple flows between different hosts, they will get spread across the links in the bundle. So with EtherChannel the individual per-flow bandwidth is not higher, but the aggregate bandwidth is, in ideal conditions, equal to the speed of a single link multiplied by the number of links (in EtherChannel, all links must operate at the same speed).

You could ask why EtherChannel avoids using a round-robin scheme, in which even a single flow could experience a bandwidth increase. The answer is rather simple: with round robin, frames could experience reordering. Because this phenomenon would not be observable on plain Ethernet, and EtherChannel must act as an entirely transparent technology, it must not introduce frame reordering, either.
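A toy version of the hashing Peter describes (made-up MAC addresses; real switches use fixed hardware hash polynomials, not Python integer math):

```python
# EtherChannel-style link selection: hash (SRC XOR DST) MAC and take it
# modulo the number of links, so one host pair always uses one link.

def channel_link(src_mac: str, dst_mac: str, num_links: int) -> int:
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    return (src ^ dst) % num_links

# Same pair of hosts -> same link on every frame, so no reordering.
a = channel_link("00:11:22:33:44:55", "66:77:88:99:aa:bb", 2)
b = channel_link("00:11:22:33:44:55", "66:77:88:99:aa:bb", 2)
```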

Another technology that can utilize the combined bandwidth of multiple links is Multilink PPP. In this technology, each frame is split into as many fragments as there are links in the bundle, and each fragment is carried over one of the links. On the other end of these links, the device defragments the frame and forwards the contents further. This technology makes much better use of all available bandwidth even for a single flow, but because it is much more computationally intensive, you are not going to see it in Ethernet networks - it would require too much processing power and would introduce too great a latency.
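The fragment-and-reassemble idea can be sketched like this (illustrative only; real Multilink PPP adds sequencing headers and runs per RFC 1990):

```python
# MLPPP-style sketch: one frame is split into as many sequenced chunks
# as there are links, and the far side rejoins them in sequence order,
# so even a single flow uses the combined bandwidth of the bundle.

def fragment(frame: bytes, num_links: int):
    """Split a frame into num_links sequenced chunks, one per link."""
    size = -(-len(frame) // num_links)          # ceiling division
    return [(seq, frame[seq * size:(seq + 1) * size])
            for seq in range(num_links)]

def reassemble(fragments):
    """Far end: order the chunks by sequence number and rejoin them."""
    return b"".join(chunk for _, chunk in sorted(fragments))

frame = b"one frame over two links"
frags = fragment(frame, 2)
```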

In routed networks (and in TRILL and FabricPath), you can also see equal-cost multipathing (ECMP). This is the usual process of utilizing multiple paths to the same destination. The way this is performed depends on the device, but Cisco routers usually perform per-destination load balancing, meaning that packets toward a particular destination go over a particular path. This is again done to prevent packet reordering. Aggregate flows to different end hosts in the same destination network will be spread across different paths, but a single flow will not experience any improvement in bandwidth, as it is delivered across a single path.

So in most of these technologies, a single flow is carried over a single path and thus does not profit from any increased bandwidth. Only when multiple flows are carried will they - as an aggregated whole - experience an improvement.

Best regards,
Peter

Dear all,

Thank you for the feedback and advice. I got the topology above from the book

"CCNA Network Associate Study Guide" Edition 6 by Todd Lammle (it was 10 years back :P )

Inside the book on page 437, it says:

"... the router now has a 3MB pipe through that line instead of just two 1.5Mbps T1 links..."

I googled and there isn't much information on EIGRP link bundle except on Lammle forum at

http://www.lammle.com/discussion/showthread.php?5075-EIGRP-Bundling-links

http://www.lammle.com/discussion/showthread.php?1918-Bundle-Links-on-EIGRP

But again, I am not sure if by having a "3MB" pipe, it means having the traffic of one flow going over two physical links concurrently... or?

=======================================

Hi Peter,

Thanks for the insight -

Q1) So can I say that, in fact, there are 3 ways of load balancing:

courtesy of acampbell on the link (how does load balancing work)
http://www.cisco.com/c/en/us/support/docs/ip/border-gateway-protocol-bgp/5212-46.html?referring_site=bodynav

1) per flow: all traffic for a single flow, based on a certain algorithm (e.g. destination address), will go through 1 physical link -- is this the per-destination method?

2) round robin: frames/packets from a single flow will be round-robined between the available physical links -- is this the per-packet method?

3) what you mentioned for Multilink PPP -> a single frame will be fragmented into pieces and sent across the available physical links

 

Q2) In that case, with EIGRP load sharing over 2 equal-cost links, which of the above methods is it using and how do I check?

courtesy of Mark, http://ericleahy.com/index.php/eigrp-equal-and-unequal-cost-load-sharing/, it seems like EIGRP load sharing is using method 2? - am I right?

 

Q3) What happens if there is an out-of-order issue? Will there be any re-sequencing of layer 2 frames (like in TCP)? (e.g. frames sent out in order f1, f2, f3, receiving side receives f1, f3, f2 - what will happen? Will the frames be processed, with the reordering left to higher-layer logic, e.g. TCP if present? If there isn't any, then the data might not be understandable by the application - just guessing..)


Regards,
Noob

Noob

 

Q1) In discussing load sharing, people usually talk about two methods: per flow/per destination and round robin. You make an interesting point that Multilink PPP could technically be considered a third method, but you will not find many discussions of the topic that include MLPPP.

 

Q2) In the case of EIGRP load sharing, I would suggest that there are only two alternatives to consider. From the perspective of EIGRP, MLPPP is a single interface, so for EIGRP, MLPPP is not load sharing; the single interface carries the entire traffic (it is only at layer 2 and layer 1 that the traffic is split over multiple physical links).

 

I do not want to be too pedantic, but another point to make is that EIGRP itself does not do load sharing. EIGRP (and other routing protocols) create entries in the IP routing table. A separate process in IOS forwards packets, and it is this process that decides whether to use per-flow or round-robin load sharing.

 

Q3) This is particularly interesting. At layer 3 there is no way to identify packets that are out of order, and there are very few layer 2 protocols that can detect out-of-order frames. And if we cannot detect out-of-order delivery, there is no possibility of correcting it at layer 2 or layer 3. It is only in the higher layers (especially TCP and the application layer) that we can detect and attempt to correct out-of-order packets.
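A minimal sketch of the higher-layer recovery Rick describes (TCP-like sequence numbers, heavily simplified, with no retransmission logic):

```python
# Layers 2/3 deliver whatever arrives; a TCP-like receiver restores
# byte order from sequence numbers before handing data to the app.
# UDP has no such numbers, so out-of-order data reaches the app as-is.

def deliver_in_order(segments):
    """segments: (seq, payload) pairs that may arrive out of order."""
    stream = b""
    for seq, payload in sorted(segments):
        # a gap here would mean loss: a real TCP would wait/retransmit
        assert seq == len(stream)
        stream += payload
    return stream

# Frames sent f1, f2, f3 but received f1, f3, f2 still come out in order.
segs = [(0, b"f1 "), (6, b"f3"), (3, b"f2 ")]
```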

 

HTH

 

Rick


Hi Rick,

 

Thanks for replying on these issues as well.

 

For Q1) Duly noted

 

For Q2) Understood. EIGRP just provides the routing table with the available routes; it is not the process/protocol that decides how load balancing between the 2 routes is carried out.

Hence, can I say that in this scenario the router will be using either the per-destination method or the round-robin method? But how do I check?

By using the traffic share count parameter in show ip route?
if it is per destination, then the share count is one-sided.
if it is per packet/round robin, then the share count will even out.

 

For Q3) Noted. My guess is right: if there is no way to detect out-of-order frames or packets, then it will be up to the higher-level protocol to do so.. Thanks for the confirmation ;) So can I say - if using round robin + UDP - the chances of a corrupted application are higher?

 

======================================================

For the comments on Mark ->

In the term "traffic utilization" - I have understood it to be caused by the imbalanced use of the available links -> meaning 10 packets are sent using one link, but the other link has only 5 packets sent.

So if I am using round robin, is it possible that one link can still be under more utilization than the other? (not considering the total traffic size involved)

 

Regards,
Noob