
throughput through private line

gavin han
Level 1

Hi, we have one OC-24 private line between our data centers. We are looking to get the best throughput, but we get a maximum average throughput of 300Mbps with peaks of 800Mbps. For example, we transferred 2TB of data over this link and got an average throughput of 300Mbps with peaks of 800Mbps.

We should at least be getting 800Mbps throughput since we have an OC-24 (1244Mbps) private line. We contacted our ISP, but they said there isn't any problem with the private line on their side. What can we do to increase the average throughput?

10 Replies

milan.kulik
Level 10

Hi,

I suppose you are running multiple data flows in parallel to transfer the data?

BR,

Milan

Hi, only one data flow in this case. We have one server in each data center and we are transferring 2TB from the server located in one data center to the server located in the second data center.

If your single flow is using TCP, a common performance bottleneck is that the receiving host's receive buffer isn't large enough to support the BDP (bandwidth-delay product). Is it?

If the data centers are far apart, the additional distance-based latency will slow how fast TCP "ramps up". This can be especially a problem if there are any drops that cause the TCP flow to go into congestion avoidance, where it ramps up very slowly.

Another, not uncommon, issue is whether the full path MTU is being used.
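
For illustration, here is a minimal Python sketch of sizing a socket's receive buffer to the path's bandwidth-delay product. The numbers are illustrative only: the OC-24 line rate from the original post and the roughly 18 ms RTT reported later in the thread.

import socket

bandwidth_bps = 1_244_000_000      # OC-24 line rate (from the original post)
rtt_seconds = 0.018                # ~18 ms RTT, mentioned later in this thread
bdp_bytes = int(bandwidth_bps * rtt_seconds / 8)   # ~2.8 MB of data "in flight"

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Ask for a receive buffer large enough to hold a full BDP; the kernel may
# clamp the request to net.core.rmem_max, so read it back to see what you got.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bdp_bytes)
print("requested", bdp_bytes, "bytes, got",
      sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))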

Try setting up a couple of web servers with big file content, such as the 4 GB ISO files of a Linux distribution.
Then test with multiple clients on the other side using download accelerators like DAP etc.

You can also use tools like the ones listed below.

http://en.m.wikipedia.org/wiki/Measuring_network_throughput
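
If no dedicated tool is at hand, a single-flow throughput probe can also be improvised. The sketch below is only an illustration (port number, chunk size, and duration are arbitrary choices); a purpose-built tool such as iperf is the better option for serious testing.

import socket, sys, time

PORT = 5201          # arbitrary test port (assumption)
CHUNK = 1 << 20      # 1 MiB per send/recv call
DURATION = 10        # seconds the client transmits

def server():
    # Receive whatever the client sends and report the achieved rate.
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            total, start = 0, time.time()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                total += len(data)
            elapsed = time.time() - start
            print(f"{total * 8 / elapsed / 1e6:.1f} Mbps over {elapsed:.1f}s")

def client(host):
    # Blast zero-filled buffers at the server for DURATION seconds.
    payload = b"\0" * CHUNK
    deadline = time.time() + DURATION
    with socket.create_connection((host, PORT)) as sock:
        while time.time() < deadline:
            sock.sendall(payload)

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[1])

Run it with "server" on one host and "client <server_ip>" on the other.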

Sent from Cisco Technical Support Android App

shivjain
Cisco Employee

Hi Durga

As you know, throughput is governed by the bandwidth-delay product. So let's work through a simple example: I have a 1 Gigabit Ethernet link from A to B with a round-trip latency of 30 milliseconds. If I try to transfer a large file from a server in A to a server in B using FTP, what is the best throughput I can expect?

First, let's convert the TCP window size from bytes to bits. In this case we are using the standard 64KB TCP window size of a Windows machine.

64KB = 65536 Bytes.   65536 * 8 = 524288 bits

Next, let's take the TCP window in bits and divide it by the round-trip latency of our link in seconds. So if our latency is 30 milliseconds we will use 0.030 in our calculation.

524288 bits / 0.030 seconds = 17476266 bits per second throughput = 17.47 Mbps maximum possible throughput

So, although you may have a 1GE link between these locations, you should not expect any more than about 17Mbps when transferring a file between two servers, given that TCP window size and latency.

What can you do to make it faster?   Increase the TCP window size and/or reduce latency.

To increase the TCP window size you can make manual adjustments on each individual server to negotiate a larger window size, or you can use a WAN optimization device such as Cisco WAAS, which helps you achieve the maximum throughput.
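
To make that arithmetic easy to replay with your own numbers, here is the same calculation in a few lines of Python; the 64KB window, 30 ms RTT, and 1 Gbps target are the figures from the example above, not measured values.

# Replaying the example above: a 64KB window over a 30 ms round trip.
window_bytes = 64 * 1024            # 65,536 bytes
rtt_seconds = 0.030                 # 30 ms round-trip latency

max_bps = window_bytes * 8 / rtt_seconds
print(f"max single-flow throughput: {max_bps / 1e6:.2f} Mbps")     # ~17.5 Mbps

# The same formula inverted: window needed to fill a 1 Gbps link at this RTT.
target_bps = 1_000_000_000
needed_window = target_bps * rtt_seconds / 8
print(f"window needed for 1 Gbps: {needed_window / 1024:.0f} KB")  # ~3662 KB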

regards

shivlu jain

http://www.mplsvpn.info

Hi,

In a case of huge round-trip delay, simply increasing the TCP window is not an effective solution.

I faced a similar problem some years ago when my colleagues were transferring big files from India to Germany via FTP.

What really helped there was a client supporting multisession file transfer.

With 300Mbps on a single-session file transfer in this case, I guess the limit might not be the RTD (round-trip delay) only.

In any case, I'd recommend using several servers to run multiple flows between different servers on each side.

This way it should be easier to load the line.
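
As a rough illustration of that approach, the sketch below pushes data over several parallel TCP flows to one receiver. The host, port, stream count, and duration are placeholders, and the receiver side is assumed to accept multiple connections (for example, an adaptation of the probe sketched earlier in the thread).

import socket, time
from concurrent.futures import ThreadPoolExecutor

HOST, PORT = "192.0.2.10", 5201     # placeholder receiver address/port
STREAMS = 8                         # number of parallel TCP flows
CHUNK = b"\0" * (1 << 20)           # 1 MiB per send call
DURATION = 30                       # seconds to transmit

def one_flow(_):
    # One TCP connection sending as fast as it can for DURATION seconds.
    sent = 0
    deadline = time.time() + DURATION
    with socket.create_connection((HOST, PORT)) as sock:
        while time.time() < deadline:
            sock.sendall(CHUNK)
            sent += len(CHUNK)
    return sent

with ThreadPoolExecutor(max_workers=STREAMS) as pool:
    totals = list(pool.map(one_flow, range(STREAMS)))

print(f"aggregate: {sum(totals) * 8 / DURATION / 1e6:.0f} Mbps over {STREAMS} flows")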

BR,

Milan

In a case of huge round-trip delay, simply increasing the TCP window is not an effective solution.

I faced a similar problem some years ago when my colleagues were transferring big files from India to Germany via FTP.

I can say that, years ago, I had a single TCP flow between the USA and Germany whose throughput increased 4x after increasing the receiving host's receive buffer 4x. In your case, do you know the flow had absolutely no drops?

What really helped there was a client supporting multisession file transfer.

Indeed it can, as multiple TCP flows can "ramp up" concurrently.  A few data transfer utilities can do this automatically.

PS:

BTW, newer TCP stacks often advertise much, much larger receive windows than they did years ago, i.e., manual tuning of the host's TCP receive window often isn't necessary. However, it's often worthwhile to confirm this is the case.
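
On a Linux sender or receiver, one quick way to confirm this is to read the kernel's TCP tuning knobs. The sketch below only reads the standard procfs paths; it changes nothing.

# Linux-only sketch: confirm receive-window autotuning and window scaling are
# actually on before blaming the circuit.
def read(path):
    with open(path) as f:
        return f.read().strip()

rmem = read("/proc/sys/net/ipv4/tcp_rmem")                  # "min default max" in bytes
autotune = read("/proc/sys/net/ipv4/tcp_moderate_rcvbuf")   # "1" = autotuning enabled
scaling = read("/proc/sys/net/ipv4/tcp_window_scaling")     # "1" = RFC 1323 scaling enabled

print("tcp_rmem (min/default/max):", rmem)
print("receive-buffer autotuning:", "on" if autotune == "1" else "off")
print("window scaling:", "on" if scaling == "1" else "off")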

Hi Joseph,

In your case, do you know the flow had absolutely no drops?

Of course not, there have always been drops on our circuits in India.

That's another aspect which IMHO makes using huge TCP window values problematic.

BR,

Milan

Hi shivlu, gavin's original post says he has already got 300Mbps using a single server on each side. So adding a couple of servers and a few clients capable of multiple sessions would enable him to test.


Sent from Cisco Technical Support Android App

Thanks all for your kind advice.

Round-trip time is about 18 milliseconds.

We used multiple servers to test this link.

With one set of servers we got about 300Mbps.

With the other set of servers we got about 32Mbps.

Single TCP flow each time.

There wasn't any other traffic on this link.

Does your post mean our first set of servers must be using a big TCP window size while the second set of servers must be using a small TCP window size?

These were not Windows servers; they are Linux/Unix servers that we used for this test.

So would the best option to get the maximum throughput be to set the maximum TCP window size? I.e., if the round-trip time is 18 milliseconds, then the TCP window size should be (if I want to get 900Mbps throughput):

TCP window size / 0.018 seconds = 900 Mbps

so TCP window size = 16,200,000 bits = 2,025,000 bytes ≈ 2025KB. Is that right?
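
A quick check of that arithmetic, using the 900Mbps target and 18 ms RTT from the post above:

# Checking the arithmetic: window needed for 900 Mbps at an 18 ms RTT.
target_bps = 900_000_000
rtt_seconds = 0.018

window_bits = target_bps * rtt_seconds        # 16,200,000 bits
window_bytes = window_bits / 8                # 2,025,000 bytes ~= 2025 KB (~1.93 MiB)
print(window_bits, window_bytes)

So roughly 2MB, which matches the 2025KB figure. Advertising a window that large requires TCP window scaling (RFC 1323), which Linux normally enables by default.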

Would MTU size matter in this case, since the standard MTU size is 1500 bytes? How about jumbo frames?

Sorry for throwing so many questions at you... =)
