ip cef per-flow load sharing

24madaeve1
Level 1

Can anyone guide me on how to use per-flow load sharing between two data centers connected through two MPLS links from different ISPs? Or, if you could, suggest some load-sharing mechanism across both WAN links. I have configured my WAN links statically; I can't use any dynamic routing protocol.

9 Replies

Richard Burts
Hall of Fame

If you configure two static routes on the router for the remote destination, one using ISP 1 as the next hop and the other using ISP 2, then you should achieve per-flow load sharing.
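As a sketch, the two equal-cost static routes might look like this (the remote subnet and next-hop addresses below are hypothetical examples, not from the original post):

```
! 10.2.0.0/16 = example remote data-center subnet
! 192.0.2.1   = example next hop toward ISP 1
! 198.51.100.1 = example next hop toward ISP 2
ip route 10.2.0.0 255.255.0.0 192.0.2.1
ip route 10.2.0.0 255.255.0.0 198.51.100.1
```

With both routes installed at the same administrative distance, CEF's default per-destination (per-flow) load sharing will distribute flows across the two next hops.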

 

The other option that occurs to me is to configure two GRE tunnels connecting the sites to each other, with one tunnel going through ISP 1 and the other through ISP 2. You could then run EIGRP over the tunnels and advertise the remote site networks and subnets. By configuring the bandwidth of the tunnels appropriately and using EIGRP variance, you could achieve per-flow load sharing that approximates the available bandwidth of each connection.
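A minimal sketch of that design on the site-A router might look like the following. All interface names, tunnel addressing, and AS number are assumptions for illustration; the 30/50 Mbps figures come from later in the thread:

```
interface Tunnel1
 description via ISP 1 (30 Mbps path)
 ip address 172.16.1.1 255.255.255.252
 bandwidth 30000
 tunnel source GigabitEthernet0/0
 tunnel destination 192.0.2.2
!
interface Tunnel2
 description via ISP 2 (50 Mbps path)
 ip address 172.16.2.1 255.255.255.252
 bandwidth 50000
 tunnel source GigabitEthernet0/1
 tunnel destination 198.51.100.2
!
router eigrp 100
 network 172.16.1.0 0.0.0.3
 network 172.16.2.0 0.0.0.3
 network 10.1.0.0 0.0.255.255
 variance 2
```

`variance 2` lets EIGRP install the higher-metric 30 Mbps tunnel path alongside the 50 Mbps path, giving unequal-cost per-flow load sharing roughly proportional to the metrics.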

 

HTH

 

Rick


Thanks for your reply, Richard. The problem is that I have only one subnet at each data center, and those two subnets talk to each other. The default per-destination load sharing therefore utilizes only one link while the other sits idle. Do you have an article I could use to understand per-flow load sharing? As far as I know, per-flow load sharing entails port-based load sharing? But I will definitely try your EIGRP scheme.

Unless you have a single source host sending to a single destination host, per-flow load sharing should send some traffic over both links.
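If it helps, once two equal-cost routes are installed you can check which link a particular source/destination pair hashes to (the addresses here are purely illustrative):

```
! Shows the exact CEF path chosen for one source/destination pair
show ip cef exact-route 10.1.0.5 10.2.0.9
```

Running this for several host pairs would confirm whether your traffic mix actually produces more than one flow.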

 

If you are not satisfied with the results of per-flow load sharing, there is another alternative you can consider. Set up a default route that sends traffic over the 50M link, which will carry most of your traffic. Then configure Policy Based Routing to steer certain types of traffic (perhaps file transfers or other types that generate considerable load) over the 30M link. That should give you distribution of traffic over both links.
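A hypothetical sketch of that approach, using FTP as the example "bulk" traffic class (the next-hop addresses and interface names are assumptions):

```
! Default route over the 50M link (example next hop)
ip route 0.0.0.0 0.0.0.0 192.0.2.1
!
! Match the traffic class to steer; FTP is just an example
access-list 110 permit tcp any any eq ftp
access-list 110 permit tcp any any eq ftp-data
!
route-map STEER-BULK permit 10
 match ip address 110
 set ip next-hop 198.51.100.1   ! example next hop on the 30M link
!
! Apply PBR on the LAN-facing interface
interface GigabitEthernet0/2
 ip policy route-map STEER-BULK
```

Traffic matching the access list is policy-routed to the 30M link; everything else follows the default route over the 50M link.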

 

HTH

 

Rick


Thanks, Richard. The problem is that we are sending only one type of data (server logs) across the links, so we would not be able to differentiate based on the type of data either. However, I have realized one thing lately: whatever mechanism I apply to share the load across both links, it will always lead to out-of-order delivery of packets. What do you say? Even the variance method would lead to out-of-order delivery.

 

I was thinking of one solution: I will create two GRE tunnels, tunnel 1 sourced and destined through ISP 1 and tunnel 2 through ISP 2. Then I will add two routes with equal AD values to send packets out via these two tunnels. But I think that would also lead to out-of-order delivery of packets? Does it mean I can't do any sort of load sharing across these links?

I think that you are confusing how the routing table is built with how packets are forwarded. Using static routes, EIGRP, or any other dynamic routing protocol is how the routing table is built, and there is no possibility of out-of-order packets while we are building the routing table.

 

It is when we are forwarding data packets that there is the possibility of out of order packets. If you use per flow (or per destination) forwarding there will not be out of order packets because we always send #1 before we send #2. And if they are using the same forwarding path then how can #2 arrive before #1?

 

It is when you use per packet forwarding then there is the possibility of out of order packets. With per packet forwarding we send #1 on a forwarding path (ISP 1) and send #2 on a different forwarding path (ISP 2). Perhaps there are some delays in ISP 1 and #2 reaches the destination before #1.

 

So out-of-order packets are possible with per-packet forwarding and are not possible with per-flow/per-destination forwarding.
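For reference, the forwarding mode is set per interface in IOS; per-destination is the CEF default (interface name below is an example):

```
! Switch a WAN interface to per-packet load sharing
! (use with caution, per the reordering discussion above)
interface GigabitEthernet0/0
 ip load-sharing per-packet
!
! Return to the default per-destination (per-flow) behavior
interface GigabitEthernet0/0
 ip load-sharing per-destination
```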

 

HTH

 

Rick


Suppose I have only one server each at site A and site B, and they talk to each other over the WAN links. I create two GRE tunnels at both sites, one for each WAN link. Now I add two static routes at both sites with the same AD value, with the next hop being the tunnel's other end. So packet #1 from site A goes out through tunnel 1 (30 Mbps) and packet #2 goes through tunnel 2 (50 Mbps), but packet #2 arrives at site B earlier than packet #1. Isn't that out-of-order delivery?

Yes, that would be out-of-order delivery, and it can happen only if you enable per-packet load sharing.

 

If you have one host at site A talking to one host at site B then load sharing is a challenge - and in fact may not be helpful in that particular situation.

 

Let me tell you about an actual experience in a live network. I worked with a customer who had a situation similar to what you are describing. They had a server at one site which backed up data to a server at a remote site (their DR site). The job that was backing up the data was taking about 4 hours every night, and they anticipated that the load might increase. They had multiple links between the sites, but the backup was using only a single link because it was doing per-flow load sharing. They enabled per-packet load sharing in the belief that using multiple links would make the backup go faster. They were absolutely amazed the next night when the backup took 7 hours. Some applications are very sensitive to out-of-order packets. Every time their backup process received an out-of-order packet it would go through an error routine which included a request for retransmission of the "missing" packet. So shifting from per-flow to per-packet load sharing resulted in almost doubling the run time for this job.

 

Perhaps your application is not sensitive to out-of-order packets. If out-of-order delivery is acceptable, then perhaps you would benefit from choosing per-packet load sharing. But I would go into that very cautiously.

 

HTH

 

Rick


Your experience is mine too. I also enabled per-packet load sharing between my sites and, as you pointed out in your last post, the backup process took even longer to finish. My application is also sensitive to the order of packet delivery, so I had to switch back from per-packet to per-destination. I am really frustrated these days about how to distribute the load across both links.
