10-20-2003 09:58 AM - edited 03-02-2019 11:07 AM
Regards to All!
7206VXR with PA-MC-T3 channelized T3 port adapter.
256MB DRAM
IOS 12.2(5d)
The issue is lousy performance whenever I try to bundle circuits from the same channelized DS3 card with PPP and a Multilink interface:
ip cef
!
interface Multilink 1
 description PPP bundled T1 links
 ip address 10.0.0.2 255.255.255.252
 ppp multilink
 multilink-group 1
!
interface Serial1/0/5:1
 no ip address
 encapsulation ppp
 ppp multilink
 multilink-group 1
!
interface Serial1/0/6:1
 no ip address
 encapsulation ppp
 ppp multilink
 multilink-group 1
!
interface Serial1/0/7:1
 no ip address
 encapsulation ppp
 ppp multilink
 multilink-group 1
This resulted in a very "lossy" link, with gobs of dropped fragments (sh ppp multi) and ping drop rates of 20+%. Note that all the interfaces involved are really channels in the SAME channelized line/interface. To use these three T1s bonded in this manner, I had to take them out of the DS3 and have them cross-connected to me in copper runs...aggravating to say the least.
We completed all testing of the setup and of the physical lines themselves; we checked EVERYTHING we could think of before we took them out of the channelized DS3.
I believe the interface was process switching and suffering from exhausted buffers.
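For anyone wanting to check the same thing, roughly these commands will show the switching path, buffer misses, and fragment drops on the bundle (just a sketch of where I'd look, not the exact output I had in front of me):
! is the bundle traffic being process switched or fast/CEF switched?
sh interfaces Multilink 1 stats
! look for misses/failures in the buffer pools
sh buffers
! watch for lost/discarded fragments on the bundle
sh ppp multilink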
Now, I have recently upgraded the router to IOS 12.2(14)S3. Should I expect the same behavior?
I am asking in advance because I once again have a reason to bond two channels. I am willing to try it again on the newer IOS, but I want to know why it didn't work before (or why it may not work now) so that I have some workarounds ready come turn-up time.
I appreciate any and all assistance. Thank you in advance for your time.
Regards,
Brian Carroll
Network Architect
Director of Professional Services
Air Net Link, LLC
10-20-2003 11:29 AM
Hi Brian,
You might want to configure the ppp multilink interleave command on your multilink and serial interfaces; I'm not sure if you have tried that yet.
Here is a link to how it works:
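In short, the relevant commands would look roughly like this on the bundle (a sketch only; the fragment-delay value below is just an example, and the exact syntax varies a bit between IOS releases):
interface Multilink 1
 ppp multilink
 ppp multilink interleave
 ! example value; tune to your link speeds and delay budget
 ppp multilink fragment-delay 10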
Regards,
Georg
10-20-2003 02:02 PM
Georg,
Thanks, but I did try that (and exactly off that document too, LOL). In the config I posted I went fairly "bare bones" just to demonstrate that we had a solid MLP configuration.
During testing I did try LFI, but I did not mess with the timing or create an RTP queue, as this is an Internet router for me. LFI caused a noticeable hit on the CPU (not a whole lot, but you could look at the MRTG graph of the CPU and tell when I had LFI on or off), yet it had no other noticeable effect.
Is anyone else doing this out there? Has anyone had success or failure trying to do what I am doing?
Thanks again for your reply Georg!
Regards,
Brian
10-20-2003 02:13 PM
You may want to verify the status of CEF on your Multilink interface: "sh cef interface Multilink 1". You will probably notice that it is disabled by default. Try enabling CEF by typing "ip route-cache cef" under the Multilink interface. I could be wrong. Good luck.
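For clarity, that boils down to something like this (a sketch based on the commands above; verify against your own output):
! check the switching path on the bundle
sh cef interface Multilink 1
! if it reports CEF disabled, enable it under the bundle interface
interface Multilink 1
 ip route-cache cef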