10-23-2009 03:51 PM - edited 03-04-2019 06:29 AM
Summary of the issue: we replaced a customer's single T1 with a circuit of two bonded T1s and encountered multiple problems. Web pages would only partially load, users reported slower Internet performance, and there were multiple POP3 download issues. Returning the circuit to the single T1 immediately resolved the problems. The configuration is attached to the post. One strange symptom: when we removed the "ppp multilink interleave" command, users could not load web pages at all, which leads me to believe there is a multilink configuration issue. If anyone can provide some assistance, it would be greatly appreciated. Thank you.
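For context, a typical MLPPP bundle for two bonded T1s on this class of router looks roughly like the sketch below. This is a hedged reconstruction, not the attached config: the interface numbers match the show output later in the thread, but the IP address and multilink group number are assumptions.

```
interface Multilink1
 ip address 192.0.2.1 255.255.255.252   ! example address, not from the attached config
 ppp multilink
 ppp multilink interleave               ! the command discussed in this thread
 ppp multilink group 1
!
interface Serial0/2/0
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
!
interface Serial0/3/0
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
```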
10-23-2009 06:31 PM
Can you post the output of show interface s0/2/0 and show interface s0/3/0?
Regards,
jerry
10-26-2009 07:51 AM
Here you go...
Voce_BSI02#show int S0/2/0
Serial0/2/0 is up, line protocol is up
Hardware is GT96K with integrated T1 CSU/DSU
Description: 360 Order# 601738
MTU 1500 bytes, BW 1544 Kbit, DLY 20000 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation PPP, LCP Open, multilink Open
Link is a member of Multilink bundle Multilink1, loopback not set
Keepalive set (10 sec)
Restart-Delay is 0 secs
Last input 00:00:00, output 00:00:00, output hang never
Last clearing of "show interface" counters 3d22h
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 20
Queueing strategy: fifo
Output queue: 0/40 (size/max)
30 second input rate 4000 bits/sec, 9 packets/sec
30 second output rate 3000 bits/sec, 8 packets/sec
4617298 packets input, 1478569214 bytes, 0 no buffer
Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
2 input errors, 2 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
4577519 packets output, 1094659099 bytes, 0 underruns
0 output errors, 0 collisions, 2 interface resets
0 output buffer failures, 0 output buffers swapped out
5 carrier transitions
DCD=up DSR=up DTR=up RTS=up CTS=up
Voce_BSI02#show int S0/3/0
Serial0/3/0 is up, line protocol is up
Hardware is GT96K with integrated T1 CSU/DSU
Description: 360 Order# 601739
MTU 1500 bytes, BW 1544 Kbit, DLY 20000 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation PPP, LCP Open, multilink Open
Link is a member of Multilink bundle Multilink1, loopback not set
Keepalive set (10 sec)
Restart-Delay is 0 secs
Last input 00:00:00, output 00:00:00, output hang never
Last clearing of "show interface" counters 3d22h
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 20
Queueing strategy: fifo
Output queue: 0/40 (size/max)
30 second input rate 4000 bits/sec, 10 packets/sec
30 second output rate 5000 bits/sec, 10 packets/sec
4610939 packets input, 1471322904 bytes, 0 no buffer
Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
4574250 packets output, 1094684298 bytes, 0 underruns
0 output errors, 0 collisions, 2 interface resets
0 output buffer failures, 0 output buffers swapped out
5 carrier transitions
DCD=up DSR=up DTR=up RTS=up CTS=up
Voce_BSI02#
10-23-2009 07:02 PM
You might try setting both "ip load-sharing per-destination" and "fair-queue" under each serial interface, removing "ppp multilink interleave" under the multilink interface, and changing "ip route 0.0.0.0 0.0.0.0 Multilink1" to point at the far-end multilink IP address instead.
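Applied to the interfaces shown in the output above, those suggestions would look roughly like this. This is a sketch only: the next-hop address is a placeholder for the provider side's multilink IP, which is not given in the thread.

```
interface Serial0/2/0
 fair-queue
 ip load-sharing per-destination
!
interface Serial0/3/0
 fair-queue
 ip load-sharing per-destination
!
interface Multilink1
 no ppp multilink interleave
!
no ip route 0.0.0.0 0.0.0.0 Multilink1
ip route 0.0.0.0 0.0.0.0 198.51.100.1   ! placeholder for the far-end multilink IP
```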
10-23-2009 11:57 PM
Joseph
Does the load-sharing command on the serial interfaces have any effect on multilink interfaces? I had always thought multilink has its own algorithm to distribute packets across links.
Johny
You might try disabling multilink fragmentation
ppp multilink fragment disable
Narayan
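That command is applied under the bundle interface. Assuming the Multilink1 bundle from the show output, it would be roughly:

```
interface Multilink1
 ppp multilink fragment disable
```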
10-24-2009 04:13 AM
Narayan,
"Does the load-sharing command on the serial interfaces have any effect on multilink interfaces?"
I wouldn't think so, but this is an unusual case. That, along with documentation that mentions the need for FQ for some multilink operations, is why I suggested its removal.
"You might try disabling multilink fragmentation"
Good suggestion! Something I had thought to suggest and then promptly forgot to add to my post. In theory (I believe), it would offer slightly less throughput to any one flow, but since a flow's packets alternate between links it should make little net difference to throughput, while reducing the performance impact of fragmentation on both ends of the multilink.
10-26-2009 07:51 AM
Thanks for your suggestions. I will try those commands the next time I am on site, since I locked myself out of the router once before when I entered the "ppp multilink interleave" command. Have you or anyone else seen these strange symptoms when working with multilink interfaces? Thanks again for your help.