I have set up a sample lab with EIGRP, and I have the following output in my lab setup:
R1#show ip route 126.96.36.199
Routing entry for 188.8.131.52/32
Known via "eigrp 100", distance 90, metric 330240, type internal
Redistributing via eigrp 100
Last update from 184.108.40.206 on FastEthernet0/0, 00:43:03 ago
Routing Descriptor Blocks:
220.127.116.11, from 18.104.22.168, 00:43:03 ago, via FastEthernet0/0
Route metric is 330240, traffic share count is 1
Total delay is 5200 microseconds, minimum bandwidth is 12987 Kbit
Reliability 255/255, minimum MTU 1500 bytes
Loading 1/255, Hops 2
* 22.214.171.124, from 126.96.36.199, 00:43:03 ago, via FastEthernet1/0
Route metric is 165120, traffic share count is 2
Total delay is 5200 microseconds, minimum bandwidth is 80000 Kbit
Reliability 255/255, minimum MTU 1500 bytes
Loading 1/255, Hops 2
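For reference, the two metrics in that output line up with the classic EIGRP composite formula using the default K values (K1 = K3 = 1, others 0): metric = 256 × (10^7 / min-bandwidth-in-Kbit + delay-in-tens-of-microseconds). A quick sketch in Python to check the figures and the resulting traffic share counts:

```python
# EIGRP composite metric with default K values (K1 = K3 = 1, K2 = K4 = K5 = 0):
# 256 * (10^7 / min_bandwidth_kbit + total_delay_usec / 10), integer arithmetic.
def eigrp_metric(min_bw_kbit, total_delay_usec):
    return 256 * (10**7 // min_bw_kbit + total_delay_usec // 10)

fa0_0 = eigrp_metric(12987, 5200)   # path via FastEthernet0/0
fa1_0 = eigrp_metric(80000, 5200)   # path via FastEthernet1/0
print(fa0_0, fa1_0)                 # 330240 165120, matching the output above

# Traffic share count: worst metric divided by each path's own metric.
shares = [max(fa0_0, fa1_0) // m for m in (fa0_0, fa1_0)]
print(shares)                       # [1, 2]
```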
With CEF enabled by default, when I use ICMP to send packets to the 188.8.131.52 network/address, I can see that the ICMP messages only go via 184.108.40.206; 220.127.116.11 is never used.
However, if I disable CEF and send ICMP to network/address 18.104.22.168 again, the traffic is now balanced in a 2:1 ratio.
My question is:
1. Why do I need to disable CEF?
2. Is disabling CEF only applicable when using ICMP to load balance? Will having CEF enabled by default still load balance my other traffic? Or do I really need to disable CEF to achieve true load balancing?
1) The explanation for what you have experienced is that the default behavior for CEF is to balance traffic by flow. So all packets in one flow will take the same path, and packets in other flows may take other paths. Assuming that there is diversity in the source addresses and in the destination addresses, this results in a good balancing of traffic.
But with a single source and a single destination, as in your test, all of the traffic will take the same path and there is no balancing.
Disabling CEF is one way to get around this, but it is a drastic solution and not something that you would want to do. When you disable CEF, all packets being forwarded must be process switched, and this puts a heavier load on the router CPU. The better solution is to change the forwarding behavior of CEF: there is an option to specify that CEF should balance packet by packet. That is what you should do rather than disabling CEF.
2) Enabling or disabling CEF is not just about ICMP; it affects all of the traffic being forwarded by the router. As stated above, you should not disable CEF. If you want load balancing when you have limited sources or destinations, you should specify per-packet balancing rather than per-flow.
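If you did want per-packet behavior on those two links, the change is made per interface rather than by disabling CEF. A minimal sketch, assuming the interface names from the output above:

```
interface FastEthernet0/0
 ip load-sharing per-packet
!
interface FastEthernet1/0
 ip load-sharing per-packet
```

The default is ip load-sharing per-destination, so removing these lines restores per-flow balancing.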
Very good explanation, Richard, I got your idea there.
Now assume I have two WAN links, one 1 Mbps and the other 2 Mbps. We currently plan to add video technology; we don't have VoIP today, but in the future we surely will. Users range from 300 to 500.
What is the recommended load balancing technique when using the EIGRP variance command with CEF? Is it per flow or per packet?
Also, is my traffic share count correct: 1 for the 1 Mbps link and 2 for the 2 Mbps link? That means it will send traffic in a 1:2 ratio per destination.
Your traffic share count is correct, and the recommended setting is to leave the default (per-flow load balancing), which will be OK for you, as you'll have a lot of users and so lots of different flows for EIGRP load balancing.
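For reference, the unequal-cost setup itself is enabled with the variance command under the EIGRP process. In the lab output above, the worse path's metric (330240) is exactly twice the better one (165120), so a variance of 2 is enough to install both routes:

```
router eigrp 100
 variance 2
```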
I am in agreement with the comments made by Alain that your settings are ok and that with many users you are likely to get a good balance of traffic using the default of per flow balancing.
And I will state the conclusion even more strongly: if you are doing video and may start doing voice, DO NOT enable per-packet balancing! The (small) improvement you may see in equalizing the traffic load on the links will be more than offset by the negative impact of out-of-order packets. Video and voice are two of the applications most affected by out-of-order packets.
Thanks, Rick and Alain, for your inputs. I think per-destination/flow load balancing is the one we will use.
I do have another concern, however. How can I check that traffic is being balanced per destination/flow? One way I know is to issue show ip route <ip address> and check the traffic share count ratio, then issue show ip cef <ip address> internal to check the hash bucket distribution ratio.
But aside from that, is there any other way to verify unequal-cost load balancing with per-destination load sharing? I tried using Wireshark to sniff the packets: with per-packet load balancing I can see the packets distributed across the links according to the traffic share ratio, but when I sniff with per-destination/flow load balancing, the output just confuses me. Is there an easier way to test that per-destination/flow load balancing is really happening? Or, if the hash bucket output of show ip cef <ip address> internal shows a 2:1 ratio, can I trust that without checking the packet flow?
To verify the CEF load-balancing method you can use the following command:
sh ip cef exact-route
On the two end routers of the two paths you could apply a policy-map counting packets (or an ACL with the log keyword). If you do a
ping -n 20 from the same source to the same destination and you only see hits on one router, then you know it's not per-packet. Then ping from two different src-dst pairs,
and the number of hits on each router will give you the ratio count.
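A minimal sketch of the ACL-with-log counting approach, with the ACL number (150) and interface name assumed (adjust to your topology); the logged entry counts the test pings while the final entry passes everything else:

```
access-list 150 permit icmp any any echo log
access-list 150 permit ip any any
!
interface FastEthernet0/0
 ip access-group 150 in
```

show access-lists 150 then shows the hit counts on each router.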
Thanks Alain, will try that when I get home later.
By the way, below is the topology I'm working on, where I will apply all my queries.
As you can see, I have two routers in each branch connected to the same LAN segment, which uses GLBP for load balancing on the LAN side. My concern is that I want all traffic to have a 2:1 ratio, where Router 2 carries a ratio of 2 and Router 1 a ratio of 1, regardless of which gateway the traffic enters (whether it goes to the gateway with the 1 Mbps connection or the gateway with the 2 Mbps connection). Is this doable? I tried simulating this in a lab environment, and I'm only able to get a 2:1 ratio when the packets enter Router 2 first. But I cannot find the right metric (bandwidth/delay) setup to get a 2:1 ratio when packets enter Router 1 first: either I succeed but my traffic share count on Router 2 breaks, or I do not succeed at all.
Is this setup feasible?
1. If yes what are the metrics that I should change?
2. If not what are my options after all?
There are two things here that make it difficult to maintain the 2:1 ratio. Having 2 routers at each location with each router having one link to the other side makes it difficult to balance traffic to maintain the ratio. The first part of the difficulty is how to manage the metrics so that each router will make forwarding decisions that maintain the ratio. If you solve that question then the next part of the difficulty is that it would be easy to make the routing decision when traffic is received by the router directly from a host on the LAN interface. But how can you be sure that if a router receives a packet from its neighbor at the site that it will make the routing decision that the first router expects (if router2 receives a packet from router1 how can you be sure that it will forward that packet toward the remote site and not forward it back to router1 - if router 2 were to forward back to router1 how would you resolve the resulting loop?). Also having GLBP makes it more difficult for routers to forward traffic maintaining the desired traffic ratio.
There are a couple of potential solutions that come to mind:
- if it were possible to configure the GLBP so that it passed packets in the 2:1 ratio, then you could just configure each router to prefer to forward to the remote site using its own connection which would maintain the 2:1 ratio.
- if the LAN used HSRP rather than GLBP you could configure 2 HSRP groups at each site. You could make router1 be primary/active in one group and make router2 primary/active in the other group. By assigning twice as many clients to use one virtual address as the other you could come close to maintaining the 2:1 ratio.
- you could perhaps configure some kind of Policy Based Routing to achieve the ratio (PBR so that web traffic uses link1 while email uses link2, etc.).
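A minimal sketch of the PBR idea, with all specifics assumed for illustration (ACL 101 matching an RTP-style UDP port range for video, next hop 10.0.0.2, inbound LAN interface FastEthernet0/1):

```
access-list 101 permit udp any any range 16384 32767
!
route-map VIDEO-LINK permit 10
 match ip address 101
 set ip next-hop 10.0.0.2
!
interface FastEthernet0/1
 ip policy route-map VIDEO-LINK
```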
I got your point; it is really hard to implement a 2:1 ratio on each router with EIGRP UCLB when GLBP uses round robin as its load balancing. So, based on your advice, I've come up with a solution: use GLBP weighted load balancing to achieve the 2:1 ratio toward the two EIGRP routers; each EIGRP router will then simply forward traffic over its directly connected link to the other site. When one router at a site goes down, I can just move the failed router's link to the running router and have a ready EIGRP variance setup to achieve 2:1 on the one remaining router. The same setup will apply at the other site.
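A minimal sketch of the weighted GLBP part, with the group number, virtual IP, and weights all assumed; weights of 200 and 100 approximate the 2:1 split (Router 2 shown; Router 1 would use glbp 1 weighting 100):

```
interface FastEthernet0/1
 glbp 1 ip 192.168.1.254
 glbp 1 load-balancing weighted
 glbp 1 weighting 200
```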
As additional detail: each location has only one subnet block, the GLBP load balancing will be weighted with a 2:1 ratio, and the routing will be EIGRP. If I want a rule where specific traffic, such as video, always goes to the same WAN link for monitoring purposes, what should I use? Can this be achieved via policy routing with the setup I mentioned above?
If yes, can you give me some ideas and tips?
If not, can you suggest a solution?