433 Views, 15 Helpful, 8 Replies

Load balancing again. Please help

dathaide
Level 1

Hi,

I have two routers at Site A. One connection is a 128K satellite link and the second is a 56K satellite link to Site B. Site B has only one router, terminating both the 128K and 56K connections. EIGRP is the routing protocol of choice, and I am using ip cef on both sides. When I use the per-destination option and the 128K line fills up, traffic does not go over the 56K circuit. I also tried a route-map statement, applied it on the Fast Ethernet interface, and pointed traffic to the 56K circuit, but did not see traffic go over it:

route-map video permit 20

match ip address 2

set interface Serial0/0.1

Any suggestions?

Although I have one 128K and one 56K link, can I configure the bandwidth statement on both to be 56K so that EIGRP thinks both are equal-cost paths and load balances?

thanks

8 Replies

saimbt
Level 1

Hi,

Load balancing can be achieved with the "variance" command.

EIGRP calculates the total metric by scaling the bandwidth and delay metrics.

metric = 256 * ((10,000,000 / minimum bandwidth in Kbit) + sum of delays in tens of microseconds)

Now calculate the metric for both links and select a variance; all routes whose metric is less than or equal to the variance times the FD (and which meet the feasibility condition) will take part in load balancing.
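To make the arithmetic concrete, here is a small sketch (Python, not part of the original post) of the classic EIGRP composite metric with default K-values, checked against the two serial paths that appear later in this thread:

```python
def eigrp_metric(min_bw_kbit, total_delay_usec):
    """Classic EIGRP composite metric with default K-values
    (K1 = K3 = 1, K2 = K4 = K5 = 0)."""
    bw = 10_000_000 // min_bw_kbit     # scaled inverse of minimum bandwidth
    delay = total_delay_usec // 10     # delay expressed in tens of microseconds
    return 256 * (bw + delay)

# The two paths from the topology output quoted later in the thread:
print(eigrp_metric(64, 20100))   # 40514560 (the successor)
print(eigrp_metric(56, 20100))   # 46228736 (the alternate path)
```

Both results match the composite metrics in the show output, which is a good sanity check on the formula.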

Moreover, you can use the command "no ip route-cache" so that the router does per-packet load balancing.

BEWARE: PER-PACKET SWITCHING IS CPU HUNGRY AND CAN CRASH A LOW-GRADE ROUTER. AFTER ENABLING IT, OBSERVE THE CPU LOAD.

First, I wouldn't do per-packet load sharing across unequal-cost links. Per-packet load sharing might increase link utilization, but it's likely to decrease application performance across the links, due to out-of-order packets.

Second, if the platform supports CEF, you don't have to turn off CEF to get per-packet load sharing, and I wouldn't. If I really wanted per-packet load sharing, I'd use CEF with per-packet load sharing.
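For reference, CEF per-packet load sharing is enabled per interface with the ip load-sharing per-packet command. A sketch, using the subinterface names from this thread (verify against your platform's IOS release):

```
ip cef
!
interface Serial0/0.1
 ip load-sharing per-packet
!
interface Serial0/1.1
 ip load-sharing per-packet
```

This keeps CEF switching (avoiding the process-switching CPU hit of "no ip route-cache") while still spraying packets across both paths.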

:-)

Russ.W

Thanks for all your replies. Since they are unequal, I am planning on using the variance command. Do I need it at both ends, corporate and remote? I am planning on using a variance of 2; thus my config will be ip cef per destination, with EIGRP as the routing protocol.

Please check if a variance of 2 will work and traffic will be balanced:

State is Passive, Query origin flag is 1, 1 Successor(s), FD is 40514560

Routing Descriptor Blocks:

10.61.248.25 (Serial0/1.1), from 10.61.248.25, Send flag is 0x0

Composite metric is (40514560/28160), Route is Internal

Vector metric:

Minimum bandwidth is 64 Kbit

Total delay is 20100 microseconds

Reliability is 254/255

Load is 7/255

Minimum MTU is 1500

Hop count is 1

10.61.248.21 (Serial0/0.1), from 10.61.248.21, Send flag is 0x0

Composite metric is (46228736/28160), Route is Internal

Vector metric:

Minimum bandwidth is 56 Kbit

Total delay is 20100 microseconds

Reliability is 255/255

Load is 59/255

Minimum MTU is 1500

Hop count is 1
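Taking the numbers from the topology output above, a quick check (a sketch, not from the original thread) of whether variance 2 would bring the 56K path into the routing table:

```python
fd_successor = 40514560      # composite metric via Serial0/1.1 (64 Kbit path)
metric_alternate = 46228736  # composite metric via Serial0/0.1 (56 Kbit path)
rd_alternate = 28160         # reported distance from the 56K neighbor

variance = 2

# Feasibility condition: the neighbor's reported distance must be
# less than the feasible distance through the successor.
feasible = rd_alternate < fd_successor

# Variance condition: the alternate metric must be below variance * FD.
within_variance = metric_alternate < variance * fd_successor

print(feasible and within_variance)  # True -> variance 2 is sufficient
```

Since 46228736 is only about 1.14 times 40514560, even a much smaller margin than 2 would do, as long as the feasibility condition holds.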

I have a follow up question

EIGRP provides unequal-cost load balancing with the use of the variance command. Depending on the switching path, it can be either process (per packet) or fast (per destination). When I use the variance command with per-destination, i.e. fast, switching, how does the variance command help?

thanks

I would think you'd be doing CEF, not fast, switching, but this is how it works (prepare for a long answer)... :-) This isn't exactly how it works, but it will provide enough information for you to understand the mechanism.

There is a "round robin counter" in the routing table (RIB), which is used for load sharing in both fast and process switching. If you do a show ip route and see several routes toward the same destination, you'll notice one of them has a star next to it. This is the route that will be used to switch the next packet. Suppose you have one path with a traffic share count of 3 and another with a traffic share count of 1, and you are switching packets to the first path at the moment. Each packet that is switched increments a "packet counter" on the route; when it reaches three, the round robin counter is incremented, so the star now points to the other route, the higher-cost one with a traffic share count of 1. The first packet switched along this path then causes its packet counter to hit 1, which is its traffic share count, and the round robin counter increments again, pointing back to the first path.

Thus, for every three packets switched along one path, one will be switched along the other path.

Now, consider fast switching. How are fast switching cache entries built? By a packet being process switched. So, suppose you have the same two paths as above. The first packet comes in, and we find we have no cache entry. So, we process switch the packet, incrementing the counters just like in process switching. We build the destination cache entry to the individual host address, rather than the network address, since we are load sharing.

Each packet which is process switched causes the counters to increment, so, if you think about it, the result is that for every three route cache entries built along the first path, one will be built along the second path. The same 3-to-1 ratio applies.
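The round-robin pointer described above can be sketched as follows (Python, illustrative only; the real RIB implementation differs in detail):

```python
def round_robin(traffic_share, n_packets):
    """Simulate the RIB round-robin pointer: each path handles
    traffic_share[i] packets before the star moves to the next path."""
    current = 0   # index of the path currently marked with the star
    count = 0     # packets switched on the current path so far
    out = []
    for _ in range(n_packets):
        out.append(current)
        count += 1
        if count == traffic_share[current]:  # share count exhausted
            count = 0
            current = (current + 1) % len(traffic_share)
    return out

# Traffic share counts 3 and 1: three packets (or cache entries) on
# path 0 for every one on path 1.
print(round_robin([3, 1], 8))   # [0, 0, 0, 1, 0, 0, 0, 1]
```

With fast switching, each of these "packets" is really the process-switched packet that builds a host-route cache entry, so the ratio applies to cache entries rather than to every packet.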

For CEF, look here:

http://www.cisco.com/en/US/tech/tk827/tk831/technologies_tech_note09186a0080094806.shtml

as a start. The best explanation you're likely to find of how it really works will be found in Inside Cisco IOS Software Architecture, though.

Hope that helps.

:-)

Russ.W

Thank you for your answer.

So, to conclude: in the case of fast switching with variance, like you said, the first packet has to be process switched, and the counter incremented, to build the route cache entry for the destination.

Thus traffic to that destination will always go over that link until either the route cache is cleared or the connection to the destination is terminated. And since the traffic share count is 3, this will happen three times before switching to the link with share count 1.

right?

Yes--in theory. :-) In practice, it almost never works out perfectly, for various reasons. For instance, locally originated traffic bumps the counters but doesn't build a cache entry, and packets that are process switched and can't be fast switched have the same effect. I've never seen fast or CEF switching run a perfect ratio.

:-)

Russ.W

It should, since 40514560 * 2 is larger than 46228736, and the reported distance is the same on both routes (which means one is the successor and the other is a feasible successor).

:-)

Russ.W
