01-17-2011 09:45 AM - edited 03-04-2019 11:06 AM
I'm currently working on bonding multiple DS3s (five). I've migrated about 150 Mbps of traffic to it so far, and it seems to work fine except when testing the speed across the link: a speedtest maxes out at around 10-15 Mbps. I'm also running a transparent bridge across this link. I'm concerned I've done something wrong that is limiting the maximum speed of an individual stream, such as a speedtest. Snippets of my config are below, very basic. I've got the bridge type set to IRB. I was thinking I should be able to pull nearly the full bandwidth across it with one stream (around 200 Mbps if the link were idle).
interface Multilink1
ip address 192.168.100.1 255.255.255.0
ppp multilink
ppp multilink group 1
bridge-group 1
!
interface GigabitEthernet0/1
no ip address
duplex auto
speed auto
media-type rj45
no negotiation auto
bridge-group 1
!
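For completeness, IRB bridging on IOS also requires the global bridging commands, which aren't shown in the snippets above. A minimal sketch, assuming bridge group 1 as in the interface config (verify against your IOS release):

bridge irb
bridge 1 protocol ieee
!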
01-17-2011 01:05 PM
Cisco doesn't officially support MLPPP on links faster than 2 Mbps, and it is known to work poorly, as you have found out.
01-17-2011 01:35 PM
It appears to work excellently; CPU is fairly low at 150 Mbps sustained. Only the maximum single-stream speed is slower than I would expect (10-15 Mbps); is that what you mean by "works poorly"? Since my posting, I opened a trouble ticket with the ILEC, and they did find that two of the DS3 circuits are generating errors on one of the Cisco endpoints (both on the same PA-2T3+).
What is Cisco's recommended method of transparent bridging over multiple DS3 circuits to aggregate the bandwidth? I really didn't see any warning not to use MLPPP over DS3, and IOS accepted the configuration fine, so I'm not sure where that advice comes from.
01-17-2011 07:23 PM
As mentioned above, you are running an unsupported, untested, and non-marketed configuration and application; consequently, whatever result you achieve, good or poor, you are on your own.
If you doubt my word, have a talk with Cisco TAC, and report here.
01-18-2011 07:06 AM
Hello Michael,
running MLPPP over 5 DS3s is not good practice; in the past it has led to problems.
I guess you have a C7200 VXR with an NPE-G1 or NPE-G2. The best choice would be to get a GE link between the two locations, even a sub-rate one, and to run EoMPLS over it.
Among the reasons for high resource usage is that MLPPP has to reassemble the PPP fragments (if fragmentation occurs) to deliver the original packet, so MLPPP over links like DS3s is not recommended.
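If a GE circuit ever becomes available, the EoMPLS approach would carry the bridged Ethernet as a pseudowire instead of bridging over a multilink bundle. A rough sketch only; the interface names, loopback addresses, and VC ID here are hypothetical, and the MPLS core configuration (IGP, LDP peering) is assumed to be in place:

interface Loopback0
 ip address 10.0.0.1 255.255.255.255
!
mpls label protocol ldp
!
interface GigabitEthernet0/2
 description attachment circuit carrying the bridged LAN
 no ip address
 xconnect 10.0.0.2 100 encapsulation mpls
!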
Hope to help
Giuseppe
01-18-2011 09:24 AM
Heh, if fiber were an option I would have done so; there is no fiber available where I'm trying to set up this bridge. I'm trying to bridge the fiber segments together, and DS3 is the largest circuit available through the ILECs in this area. I do have an NPE-G1. What alternatives are there for bonding the DS3s with Cisco gear? Like I said, it does appear to be working as it should, except for the speedtest issue (all of our customers are below that speed anyway). My CPU utilization is 60% at 200 Mbps, which is less than I expected (I'm also running BGP/OSPF on this unit on the other Ethernet ports).
If the max speed is the major issue, I would just like to confirm that; I'm not concerned about out-of-order packets or anything else at this time. I've turned off fragmentation, am not interleaving, and am just using FIFO. It may be the upstream that is limiting the max speed; I will check that also.
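For reference, disabling fragmentation and interleaving on the bundle is done on the multilink interface; a sketch of what that looks like, assuming bundle 1 as in the config above (command availability varies by IOS release, so verify on your train):

interface Multilink1
 ppp multilink fragment disable
 no ppp multilink interleave
!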
I am running a bridge over this, is a circuit-group an option? I couldn't find much information on that as to whether it aggregates or just load balances.
01-18-2011 01:13 PM
Got it. I tracked the problem down to a pair of Gigabit NICs on my gateways generating frame errors, not noticeable at low speed but really bad at high speed. Multilink PPP over the DS3s is working fine.