maximum number of T1 serial interfaces via MLPPP?

blackladyJR
Level 1

Hello,

If I have a 2800 series or a 3800 series router, what is the maximum number of T1 serial interfaces I can bundle together to form one logical multilink port?

I usually bundle 4xT1 (about 6 Mbps) via two VWIC2-2MFT-T1/E1 interface cards, but I wonder whether it is okay to bundle 8xT1 or even more into a single logical port.
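For reference, bundling T1s into one logical port is done with an MLPPP multilink group. A minimal sketch of a 4xT1 bundle on an ISR (interface numbers and addressing below are purely illustrative, not from my actual setup):

interface Multilink1
 ip address 192.168.1.1 255.255.255.252
 ppp multilink
 ppp multilink group 1
!
interface Serial0/0/0:0
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
!
! ... repeat the serial stanza for each remaining T1 member

Going from 4 to 8 members would just mean four more serial stanzas in the same group; the practical constraint is usually router CPU, since MLPPP sequencing and reassembly are done in software on the 2800/3800.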

thanks.

24 Replies

Joyce,

Yes, if you say something to the customer and it doesn't work, it will be bad indeed. The way you put it is very realistic, and for that reason not many people will dare to answer this with complete confidence. In such cases, talking to Cisco is the only safe way to get a definite answer. Even if the answer turns out not to be perfectly true, you can say you did your homework and did the best you could to make this work, but there were "unforeseen technical difficulties" that even Cisco could not foresee. This is more politics than technical advice, but I have seen it work as well.

Kind Regards,

Maria

Also, if things for some reason turn out looking bad, consider trying two multilinks (4 T1s each) with per-packet load sharing between the multilinks as your last resort. As I said previously (edited post), this might actually be a balance between the two solutions (MLPPP and CEF per-packet load balancing). It might be the case that 8 links in a single bundle carry more MLPPP protocol overhead than two multilinks with 4 links each.

I will try to explain this last point further. Arithmetically, 2 x 4 = 1 x 8. However, "Multilink PPP keeps track of packet sequencing and buffers packets that arrive early. Multilink PPP preserves packet order across the entire NxT1/E1 bundle with this ability." as the first document says. It might be harder to preserve packet order and balance traffic across 8 links in one bundle than across 8 links in two bundles. With 8 links in 2 bundles, MLPPP does lighter work for each of the bundles, and CEF, being lighter than MLPPP, does the rest of the work to make the two multilinks cooperate. This is not a rigorous argument, since I don't know the implementation details, but it is not unreasonable either.

p.s. You could also try per-destination CEF instead of per-packet for the two bundles, in case it works for you. As other people said here, you need to think about the requirements of your services.
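To sketch what I mean (addresses and interface numbers are made up for illustration): build two multilink groups of 4 T1s each, then let CEF balance between them over two equal-cost routes.

interface Multilink1
 ip address 192.168.1.1 255.255.255.252
 ip load-sharing per-packet
 ppp multilink
 ppp multilink group 1
!
interface Multilink2
 ip address 192.168.2.1 255.255.255.252
 ip load-sharing per-packet
 ppp multilink
 ppp multilink group 2
!
! put 4 T1 serial members in group 1 and 4 in group 2, then add
! two equal-cost routes to the far end, one via each multilink:
ip route 10.0.0.0 255.0.0.0 Multilink1
ip route 10.0.0.0 255.0.0.0 Multilink2

Omit "ip load-sharing per-packet" on the multilink interfaces to get the default per-destination behavior mentioned in the p.s.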

"Is she brilliant or what?"

Maybe. As you point out, if you use CEF per-packet you risk out-of-sequence delivery, which, as you also note, MLPPP prevents. However, if you only run per-packet CEF across two links, whether real links or bundles, most TCP stacks wait for 3 duplicate ACKs before considering a packet lost, so most TCP flows should be fine. (The impact on non-TCP traffic depends on the application.)

PS:

On another aspect of T-3s, don't forget you can sometimes purchase fractional T-3. Also, it's becoming more common to find WAN Ethernet handoffs at attractive pricing. In other words, there might be other options besides 8 or more T-1s or a full T-3 for the 12 Mbps or so of bandwidth you're seeking.

Thanks for Maria's and Joe's ideas.

Everything makes sense.

I have already tried WAN Ethernet, but it's not available in this location's area. The PTT only sells full DS3, and on the MPLS side I can have a frac-DS3 port rate instead of a full DS3 port. But that is still way more expensive than approximately 6 T1s.

As a rule of thumb, I always push for a DS3 loop with a frac-DS3 port for any site requiring more than 4 T1s. This one just happens to be a case where the pricing and availability work out better with more T1s, which prompted me to look into the maximum number of interfaces MLPPP can support.

thanks everyone again.

Joyce

Looks like you've covered all the bases for private PTT. Fully understand the issue of limited options at any one location.

Have you also investigated running a logical PTT VPN across the Internet? Properly configured, I've often found they can perform very well, and they're often very cost-effective for bandwidth.

You also mention MPLS; are there no other options with FR or ATM? With the latter, the IMA T-1 modules work fairly well in the ISRs and don't place the same load on the router as MLPPP. I recall, though, that there is a cap of 8 links (a hardware limitation?), whereas with MLPPP the limitation should be an IOS cap.

Hi Joe,

FR and ATM are already retired products at the public carriers, and MPLS is pretty much the norm for any big customer connection. MPLS is so much better: no dealing with n(n-1)/2 PVCs to build a full mesh (e.g. 10 sites would need 45 PVCs), and there are all those BGP attributes that can be used on the PE to manipulate routes.

A VPN certainly is an option, but the main drawback is the lack of a QoS guarantee in the cloud, while MPLS has the guarantee to ensure business quality.

Speaking of QoS, since you are the expert :), I saw some really strange behavior yesterday, opened a TAC case, and am waiting for a reply.

I have just an LLQ class, an AF31 class, and a default class. (Forget everything we said in the previous QoS topic, as this one does not have the FR PVC CIR limitation when trying to do bandwidth allocation.)

On a T1, half of it, i.e. 768k, is LLQ. Then I want everything else to be AF31, so my ACL is "permit ip any any".

Max reserved is 90%.

I gave AF31 90% of the remaining bandwidth and the default class 10%.

The strange part is that under very light traffic (no voice going on, just about 200 kbps of AF31 traffic), I saw an 88k "5 min drop rate" and packets dropping. Very strange, because the T1 is not even heavily loaded (200k of traffic on a T1), yet I have packets dropping in AF31. So I'm really not sure if it is an IOS bug or what. The 200k rate and drop rate came from the "show policy-map int s0/0" output. From the "show int s0/0" output, the Tx and Rx load is only about 17/255. So I really don't understand why I am dropping packets.
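In case it helps, the policy as I described it looks approximately like this (the exact lines are reconstructed from memory, and the class/ACL names are approximate):

class-map match-any EF-Voice
 match dscp ef
class-map match-any AF31
 match access-group name AF31
!
ip access-list extended AF31
 permit ip any any
!
policy-map QoS
 class EF-Voice
  priority 768
  set dscp ef
 class AF31
  bandwidth remaining percent 90
  set dscp af31
 class class-default
  bandwidth remaining percent 10
!
interface Serial0/0
 max-reserved-bandwidth 90
 service-policy output QoS

(Note that a CBWFQ class like AF31 defaults to a 64-packet queue limit, so a short burst can tail-drop even when the 5-minute average rate is far below line rate.)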

Any ideas?

Oops, behind the times again; I didn't realize you can't obtain FR or ATM any longer. (The client I'm working with at the moment uses MPLS, but the PE-to-CE handoffs are ATM IMA and some, I thought, FR. Of course, that's not the same as end-to-end FR or ATM.)

On the issue of VPN and QoS, yes, a common lament, but I've found the biggest congestion issues appear to be at Internet ingress/egress. Ingress to the Internet is easy; egress from the Internet can be controlled much as I've done with FR or ATM (at Internet ingress, for traffic directed to the remote site). It usually works/performs very well.

As noted above, the client I'm working with uses MPLS in a big way. However, when they were on FR and ATM, I recommended against MPLS, since its QoS is much less granular than what you can do yourself (i.e. vendor-supported QoS, not what is technically possible). I predicted worse and erratic performance and a need to buy more bandwidth. Both have come true!

The QoS issue you saw: was it perhaps on a very late 12.4T release? I believe I saw a similar problem which disappeared when I dropped down to 12.4(15)T. I haven't written it up yet.

Joe,

In the old days, FR or ATM was indeed used to get to an MPLS PE. Now most companies have "direct" access to the MPLS PE. The encapsulation itself is usually either PPP or FR. Using FR as the encapsulation method allows more traffic-shaping commands to be applied. We can also do MLFR (FRF.16) besides MLPPP. I don't know if other companies provide ATM encapsulation for "direct" MPLS PE access, though. Indirect, yes; I still have a client site going to ATM first and then a PVC from the ATM cloud to the MPLS PE.
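For anyone curious, an FRF.16 multilink frame relay bundle is configured roughly along these lines (interface numbers, addressing, and DLCI are illustrative only):

interface MFR1
 no ip address
!
interface MFR1.1 point-to-point
 ip address 192.168.3.1 255.255.255.252
 frame-relay interface-dlci 100
!
interface Serial0/0/0:0
 encapsulation frame-relay MFR1
!
interface Serial0/0/1:0
 encapsulation frame-relay MFR1

Each member serial joins the bundle via its encapsulation command, and the MFR interface then behaves like a single frame relay interface for shaping and QoS purposes.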

As for QoS, it's c2800nm-adventerprisek9-mz.123-14.T5, so not the high 12.4T.

Router1>sh policy-map int s0/0/0

 Serial0/0/0

  Service-policy output: QoS

    Class-map: EF-Voice (match-any)
      0 packets, 0 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: dscp ef (46)
        0 packets, 0 bytes
        5 minute rate 0 bps
      Match: dscp cs5 (40)
        0 packets, 0 bytes
        5 minute rate 0 bps
      Match: ip precedence 5
        0 packets, 0 bytes
        5 minute rate 0 bps
      Match: access-group name EF-VoiceGold
        0 packets, 0 bytes
        5 minute rate 0 bps
      QoS Set
        dscp ef
          Packets marked 0
      Queueing
        Strict Priority
        Output Queue: Conversation 264
        Bandwidth 768 (kbps) Burst 19200 (Bytes)
        (pkts matched/bytes matched) 0/0
        (total drops/bytes drops) 0/0

    Class-map: AF31 (match-any)
      48286 packets, 4885745 bytes
      5 minute offered rate 30000 bps, drop rate 10000 bps
      Match: dscp af31 (26)
        10 packets, 640 bytes
        5 minute rate 0 bps
      Match: access-group name AF31
        48276 packets, 4885105 bytes
        5 minute rate 30000 bps
      QoS Set
        dscp af31
          Packets marked 48293
      Queueing
        Output Queue: Conversation 265
        Bandwidth remaining 90 (%) Max Threshold 64 (packets)
        (pkts matched/bytes matched) 4100/1575375
        (depth/total drops/no-buffer drops) 0/580/0

    Class-map: class-default (match-any)
      181 packets, 11349 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: any
      Queueing
        Output Queue: Conversation 266
        Bandwidth remaining 10 (%) Max Threshold 64 (packets)
        (pkts matched/bytes matched) 0/0
        (depth/total drops/no-buffer drops) 0/0/0

If you have a maintenance contract, I would recommend moving off 12.3T and on to 12.4.