06-20-2007 08:34 AM - edited 03-03-2019 05:31 PM
When copying a 2 GB file on a multilink that has 5942 Kb available, only 1.5 Kb is used
07-11-2007 10:13 AM
Implemented "ppp multilink fragment delay 20" and am performing a test copy.
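For reference, a hedged sketch of where that command would sit in the configuration (the interface name is hypothetical; the delay value is the one from this thread):

```
interface Multilink1
 ppp multilink
 ppp multilink fragment delay 20
```

The delay is a serialization-delay budget in milliseconds; IOS derives the actual fragment size from it and the available bandwidth.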
07-11-2007 11:26 AM
I implemented it on both routers. It has not made a difference. Do you want to see any files during the copy? It has been running for 1.5 hours.
07-11-2007 12:39 PM
I think the fragment size is still too large, that is, the router is not fragmenting at all.
To confirm that, do a "clear counters", wait for a little traffic to pass, then divide output bytes by output packets to get an average packet size.
Then either reduce the "fragment delay" until you see an improvement, or upgrade to the latest 12.4T, where the fragment size can be set directly.
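As a rough sketch of that arithmetic (the bandwidth-times-delay relationship is an approximation of how IOS sizes fragments, and the counter values below are made up for illustration):

```python
# Rough MLPPP diagnostics, as suggested above. The delay-to-fragment-size
# relationship is approximate: size (bytes) ~= bandwidth (bps) * delay (ms) / 8000.

def fragment_size_bytes(bandwidth_bps: int, delay_ms: int) -> int:
    """Approximate fragment size for 'ppp multilink fragment delay <ms>'."""
    return bandwidth_bps * delay_ms // 8000

def avg_packet_size(output_bytes: int, output_packets: int) -> float:
    """Average packet size from interface counters after 'clear counters'."""
    return output_bytes / output_packets

# With ~5942 kbps available and a 20 ms delay budget, the fragment size
# works out far above a 1500-byte MTU, so packets are never fragmented:
print(fragment_size_bytes(5_942_000, 20))  # 14855 bytes

# An average packet size near the MTU also indicates no fragmentation:
print(avg_packet_size(14_000_000, 10_000))  # 1400.0 bytes (made-up counters)
```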
07-11-2007 02:15 PM
You are right, it is not fragmenting. We will see if we can upgrade to 12.4T.
07-11-2007 03:25 PM
Hi again.
I'm thinking about this all over again.
The thing is that, according to the manuals and to people who have done real testing, there is no need for fragmentation to allow a single TCP session to use all the bandwidth in an MLPPP bundle. And this makes sense to me.
So, would you please try disabling fragmentation entirely:
ppp multilink fragment disable
Then, could you try measuring the performance using iperf? It is very simple software to use:
http://dast.nlanr.net/Projects/Iperf/
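For what it's worth, a minimal iperf (version 2) run needs only two commands; the server address below is a placeholder:

```
# on the receiving host:
iperf -s

# on the sending host (-t sets the test length in seconds):
iperf -c <server-ip> -t 30
```

Note that iperf's default 16 KB TCP window can itself limit throughput over a long-RTT link; the "-w" option raises it.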
I'm very puzzled by all this and suspect it may be an issue with the host OS.
07-12-2007 05:40 AM
I saw that too. I don't like disagreeing with the documentation, but I tried disabling fragmentation and it did not work. I have another multilink with T1 lines that does work, so I have been comparing all my results with it. When I copy files from my laptop to a server on the other end of the T1 multilink, not only do I get fragmentation, I get full bandwidth.
I have also tested each E1 line individually and I get full bandwidth. But as soon as I bundle any 2 or 3, my bandwidth gets degraded. Sounds like a fragmentation problem to me.
Please comment.
07-12-2007 12:25 PM
Interesting. Possibly something is messing up TCP in this case. What else is there between the hosts doing the copy?
Can you do the test using iperf? You will need it on both sides of the link.
How did you see that fragmentation was active in the T1 case?
07-13-2007 06:43 AM
Just to answer your questions
1: We plugged directly into the DMZ, so no firewall, nothing else, and our router was only sending less than 2 Mbps. We had a monitor on the multilink.
2: I used the same method to calculate packet size during a large file copy, and it varied between 106 and 560 bytes, whereas the E1s stayed fairly consistent at just under 1400 bytes.
07-18-2007 07:20 AM
ran another test:
------------------------------------------------------------
Client connecting to 10.13.1.5, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local firewall port 34684 connected with India server port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.7 sec 624 KBytes 479 Kbits/sec
traceroute to 10.13.1.5 (10.13.1.5), 30 hops max, 38 byte packets
1 firewall to Us Internal router Internal Serial Interface(10.7.1.2) 1.032 ms 0.574 ms 0.448 ms
2 Internal router External Interface to India(192.168.1.2) 254.841 ms 255.043 ms 258.965 ms
3 10.13.1.5 (10.13.1.5) 254.733 ms 258.659 ms 256.609 ms
It seems to me that the path from the internal router interface to the external router interface is where the slowdown occurs.
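One piece of arithmetic worth noting here (an observation on the numbers above, not something from the thread): a single TCP session can never exceed window size divided by round-trip time, and with iperf's default 16 KB window over the ~255 ms RTT shown in the traceroute, that ceiling lands close to the measured 479 Kbits/sec.

```python
# Single-session TCP throughput ceiling: window / RTT.
# Figures taken from the iperf and traceroute output above.
window_bytes = 16 * 1024   # iperf default TCP window: 16.0 KByte
rtt_seconds = 0.255        # ~255 ms round trip per the traceroute

ceiling_kbits = window_bytes * 8 / rtt_seconds / 1000
print(round(ceiling_kbits))  # ~514 kbit/s; iperf measured 479 Kbits/sec
```

If that ceiling is the limiting factor, raising the TCP window (e.g. iperf's "-w" option) should raise the measured bandwidth regardless of fragmentation settings.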
07-23-2007 12:31 PM
We are in the process of upgrading to 12.4T. I will let you know the progress after setting the fragment size on both routers, which is waiting on the remote site upgrade.
07-06-2007 12:57 PM
There is no such option under the latest version.
07-10-2007 05:24 AM
hi
Use 2 or 3 FTP servers (on different machines), try downloading to different machines, and find the collective bandwidth.
You will get the result.
regards
07-10-2007 05:37 AM
Hi,
the thing is that multilink PPP is supposed to make the full bandwidth available to a single session.
Otherwise you would be using regular load sharing, without the need for MLPPP.
07-15-2007 02:35 PM
hi
the fragmentation command will not be much help, as by default each packet will be fragmented across the number of physical interfaces
I suppose something is missing
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 833000 bits/sec, 79 packets/sec <<<<<<<< (should be no more than 6 Mbps)
5 minute output rate 37000 bits/sec, 44 packets/sec
131629 packets input, 170700329 bytes, 0 no buffer
Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
78386 packets output, 10296326 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 output buffer failures, 0 output buffers swapped out
0 carrier transitions
Try:
1. Clear the counters.
2. Use different queueing, like WFQ; check what the result is, and then revert back to FIFO.
Also paste the other end router's statistics and config.
regards
vinay verma
07-15-2007 05:07 PM
That is incorrect. Unless fragmentation is configured, MLPPP will not fragment, as the posts above demonstrate.
Also, changing the queueing scheme at random doesn't have much chance of influencing MLPPP behavior.