Multilink PPP Under Load

jrunham
Level 1

A customer site is connected to a PE using 2 x E1 circuits aggregated with Multilink PPP. The CE is a 1760 and the PE is a 7206. This is the customer's HQ site.

Connected to the same PE are three other sites from the same customer. Each is connected with a single E1, and again each CE is a 1760.

During some testing the customer ran large file transfers concurrently from each of the three remote sites into the HQ, so we had data coming into the PE via the three E1s (6 Mbps total) and then being routed onto the 4 Mbps Multilink PPP interface towards the HQ, which is a potential bottleneck. This test failed miserably, with packets being dropped on the PE's output interface facing the HQ.

But if the Multilink PPP encapsulation is removed and we connect the HQ using just one E1 with HDLC encapsulation, the test works, even though we have half the bandwidth to the HQ and therefore an even greater potential bottleneck.

I can't figure out why this should be.
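For reference, a minimal sketch of the kind of bundle configuration in use on the PE side. The interface numbers and addressing are illustrative assumptions, not taken from the actual routers, and exact syntax varies by IOS release:

! Assumed example only - the two E1s bundled into one Multilink PPP interface
interface Multilink1
 ip address 192.168.1.1 255.255.255.252
 ppp multilink
!
interface Serial1/0
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
!
interface Serial1/1
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1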

5 Replies

ivillegas
Level 6

You are missing some steps in the Multilink PPP configuration. Check the following URL for configuring Multilink PPP:

http://www.cisco.com/univercd/cc/td/doc/product/aggr/10000/10ksw/mlppos.htm

If you have configured the IP address directly on the multilink interface, try configuring a loopback and assigning the IP address to it. The following URL with a sample configuration will give you an idea:

http://www.cisco.com/warp/public/131/7.html
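As a rough illustration of the loopback suggestion above (the interface numbers and addressing are assumptions, and exact syntax varies by IOS release), the multilink interface can borrow its address from a loopback instead of holding one directly:

! Assumed example only - bundle borrowing its IP address from a loopback
interface Loopback0
 ip address 10.1.1.1 255.255.255.255
!
interface Multilink1
 ip unnumbered Loopback0
 ppp multilink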

jsalminen
Level 1

Well, considering you are using high-speed lines between the routers and MLPPP originated for low-speed links, you should turn off at least MLPPP fragmentation (this greatly reduces router CPU utilization). Also, if you are using MLPPP interleaving, you probably don't need that either.

(no ppp multilink fragmentation, no ppp multilink interleave)
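A minimal sketch of where those two commands would go, assuming the bundle interface is Multilink1 (the interface number is an assumption, and the exact fragmentation keyword differs between IOS releases):

! Assumed example only - disabling fragmentation and interleaving on the bundle
interface Multilink1
 no ppp multilink fragmentation
 no ppp multilink interleave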

Thanks for your post; initially I thought this could be down to CPU issues.

So we set up the following test. We had four Cisco 1760 routers connected to a Cisco 7206. Three were single-attached via a 2 Mbps E1 circuit each. The fourth was dual-attached via two 2 Mbps E1 circuits, with Multilink PPP configured across those two links.

We connected four ports of a performance analyser to the Ethernet ports of all four Cisco 1760 routers. We then sent a stream of data into each of the single-attached routers and collected the data received on the Ethernet interface of the dual-attached router.

If the rates of all three streams added up to 4 Mbps or less, we saw no packet loss. As soon as we increased the rates so that their sum exceeded 4 Mbps (even if only slightly), we saw massive packet loss and the throughput dropped to virtually nothing.

I concluded that if the CPU did not have the processing power to handle Multilink PPP across two E1 circuits, we would have seen packet loss when the rates of the three streams added up to 4 Mbps. But we didn't.

It seems that as soon as the router has to drop packets, because we are sending more than 4 Mbps of data down a 4 Mbps pipe, the throughput on the Multilink PPP bundle drops to virtually nothing rather than staying at 4 Mbps.
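For anyone reproducing this, a few standard commands worth watching while the test runs, assuming the bundle interface is Multilink1 (the interface number is an assumption):

! Assumed example only - where to look while the bundle is congested
! (lost-fragment and reorder counters, output drops on the bundle, and CPU load)
show ppp multilink
show interfaces Multilink1
show processes cpu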

Very interesting thread. I am also having performance problems with a 6 x T1 MLPPP line. The site complains, but I never see 100% utilisation on the line. The PE is managed by our SP, so I don't have visibility into it. The performance complaints are also in the PE -> CE direction, and it is quite possible that congestion occurs on the PE. I would expect to see the line at at least 90% utilisation at that point, but this discussion suggests otherwise... Any ideas on this?

PS: we have fragmentation disabled on the line.

You are responding to a thread that is 10 years old.

Please open a new thread for a new problem, including full documentation such as the configuration, show command output, etc.
