Adding T1 to MLPPP decreases bandwidth


I have 4 T1s going to a site, 2 of which are new (all 4 are with the same provider).

I initially noticed that throughput of the 4xT1 bundle seemed a bit off, so I did additional testing.

I found that whenever a certain T1 is in the bundle (whether it's a 2xT1, 3xT1, or 4xT1 bundle), throughput drops drastically.

There are no errors on the circuit, so my provider is being difficult about "repairing" it since they also see no errors.

Is there anything I can specifically mention to them that might help the issue?

Here's how the T1s and Multilink are configured:

interface Serial n/0
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 8577

interface Multilink 8577
 ip unnumbered Loopback0
 ppp multilink
 ppp multilink group 8577

Pretty straightforward.

As an example, running a speedtest over the 3 "good" T1s, I get ~4.5 Mb/s each way, as expected.

Adding the 4th T1 to the bundle, I get speeds of only 3.8 Mb/s.
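Even when the carrier sees no errors, the router's own per-bundle and per-link counters may show what is wrong (for example, lost or reordered fragments caused by one misbehaving member link). A few standard IOS show commands worth capturing and sending to the provider (the interface numbers below are placeholders for your own):

```
! Per-bundle MLPPP statistics: look for lost fragments, reordered
! fragments, and uneven load across the member links
show ppp multilink

! Per-member-link statistics: replace n/0 with the suspect T1
show interfaces serial n/0
show controllers t1 n/0
```

Output showing fragment loss or reordering concentrated on one member link, or line-code violations and slips on its controller, gives the provider something concrete to chase.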


Hall of Fame Master

MLPPP is not a good solution: it is obsolete, awkward, and expensive, with poor performance. Get an Ethernet-based service if you want to avoid all that.


I do not agree with the previous answer. With Ethernet encapsulation you lose at least 14 percent to overhead (more as the proportion of small packets goes up). Unless the Ethernet service is delivered over fiber, you also lose visibility into layer 2 errors on the T1 service modules being bonded to provide your service; those errors would then only be visible on the carrier devices that convert the bonded T1s to Ethernet. Also, in many areas bonded T1 is still cheaper.

To address your problem, apply a percentage-based shaper to your multilink interface, as in this example:

class-map match-any EF
 description VOICE
 match ip dscp ef

policy-map QOS-POLICY
 class EF
  ! Note: make sure the carrier is provisioned to allow 700k of EF from the site
  priority 700
  police 700000
 class class-default
  fair-queue 512
  queue-limit 256

! ------- Multilink interface config template
policy-map Shaper_<INT>
 class class-default
  shape average percent 100
  service-policy QOS-POLICY

interface <MU_INTERFACE>
 service-policy output Shaper_<INT>

As the number of in-service T1s in the bundle changes, the shaper will adjust its bandwidth dynamically. For PPP this is computed by multiplying the number of in-service T1s by 1.536 Mbit/s.
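As a sanity check on the arithmetic: 3 in-service T1s gives 3 x 1.536 = 4.608 Mbit/s (consistent with the ~4.5 Mb/s speedtest above), while 4 gives 4 x 1.536 = 6.144 Mbit/s. You can confirm the rate the shaper is actually using (substituting the multilink interface name from the template above) with:

```
! Displays the shaper's current target rate and the child
! QOS-POLICY queue statistics
show policy-map interface <MU_INTERFACE>
```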

Also, make sure you use percentages everywhere except your EF rate. The EF rate should always be policed by the carrier to a specific rate that is not based on total bandwidth, since total bandwidth changes with the number of in-service T1s.
