03-31-2015 12:14 AM - edited 03-05-2019 01:08 AM
Hi,
Customers have asked us many times questions like "how much time would it take to transfer 100 Gb of data from Australia to the US?" What would be the correct answer to give in such cases? I understand that this can never be exact, but how could we calculate some rough figures?
This question may have been asked many times before, but I was not able to find good concrete data on it.
Thanks,
N.
03-31-2015 05:39 AM
For a rough estimate (of the best case), divide the quantity of data by the end-to-end bandwidth, with an allowance for L2 and L3 overhead.
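To put numbers on that, here's a quick back-of-the-envelope sketch in Python. The link speed and the ~6% L2/L3 overhead are assumed figures for illustration only, and I'm reading the question's 100 Gb as gigabytes:

# Rough best-case estimate: data size / usable end-to-end bandwidth.
# All figures are illustrative assumptions, not measurements.
data_gb = 100                      # the 100 Gb from the question, read as gigabytes
link_mbps = 100                    # assumed end-to-end bottleneck bandwidth, Mb/s
l2_l3_overhead = 0.06              # assumed ~6% Ethernet/IP/TCP header overhead

data_bits = data_gb * 10**9 * 8
usable_bps = link_mbps * 10**6 * (1 - l2_l3_overhead)
hours = data_bits / usable_bps / 3600
print(f"Best-case estimate: {hours:.1f} hours")   # ~2.4 hours on these assumptions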
The rough estimate will likely be better, possibly much, much better, than what is actually realized. This is because, assuming you're using TCP, many (especially older) hosts' TCP stacks are not configured for LFNs (long fat networks), assuming you have one, and/or TCP slow start, and especially its packet-loss recovery, is slowed by RTT latency.
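To see why an untuned stack hurts on an LFN, here's the classic per-flow ceiling of window size divided by RTT. The 64 KB window (no RFC 1323 window scaling) and the 200 ms RTT are assumed values, picked to roughly match an Australia-to-US path:

# Single-flow TCP throughput is capped at roughly window / RTT, no matter
# how fat the pipe is. Assumed values for illustration only.
window_bytes = 64 * 1024           # classic maximum without RFC 1323 window scaling
rtt = 0.200                        # assumed ~200 ms round trip, e.g. Australia <-> US

ceiling_mbps = window_bytes * 8 / rtt / 10**6
print(f"Per-flow ceiling: {ceiling_mbps:.2f} Mb/s")          # ~2.62 Mb/s

# Bandwidth-delay product: the window needed to actually fill the pipe.
link_bps = 100 * 10**6             # assumed 100 Mb/s path
bdp_kib = link_bps * rtt / 8 / 1024
print(f"Window needed to fill the pipe: {bdp_kib:.0f} KiB")  # ~2441 KiB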
Unfortunately, without knowing much more, it's very difficult to provide an accurate estimate.
As an example of the difference the above can make: years ago, I dealt with a case of slow data transfer between the US and Europe. By making one configuration change on the receiving host, the transfer rate improved by 5x.
03-31-2015 06:30 AM
Hi,
I remember a similar case from several years ago: the task was to speed up an FTP file transfer from India to Europe.
Initially, the guys in India had doubled their line bandwidth and were disappointed because it did not decrease the time necessary to transfer their huge files at all.
Finally, we realised the RTD (over 200 ms) was the root cause of the problem: the transfer spent most of its time waiting for FTP ACK packets.
And the solution was an FTP client that was able to send files using multiple parallel threads.
Another possibility would be using some WAN accelerators (Riverbeds, for example) which would send "local ACKs" to the clients on both sides and improve the efficiency of the WAN file transfer.
Best regards,
Milan
03-31-2015 07:10 AM
"And the solution was using an FTP client which was able to send files using multiple parallel threads."
Yep, although that assumes you have multiple files (for which it can work very well).
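For the multiple-files case, something like this minimal Python sketch would do: one ftplib session per thread, so several TCP flows share the long-RTT path. The host, credentials, and file names are hypothetical placeholders.

from concurrent.futures import ThreadPoolExecutor
from ftplib import FTP

FILES = ["data1.bin", "data2.bin", "data3.bin", "data4.bin"]

def fetch(name):
    ftp = FTP("ftp.example.com")   # each thread opens its own connection/flow
    ftp.login("user", "password")
    with open(name, "wb") as out:
        ftp.retrbinary(f"RETR {name}", out.write)
    ftp.quit()
    return name

with ThreadPoolExecutor(max_workers=4) as pool:
    for done in pool.map(fetch, FILES):
        print("finished", done)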
For a single large file, a variation of what Milan described is using a utility that can slice and dice (and reassemble) a single file across multiple TCP flows.
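Here's a minimal sketch of that slice-and-reassemble idea, using HTTP Range requests just because they make the byte-range mechanics easy to show (the utility Milan used was FTP-based). The URL and file size are hypothetical, and the server must support Range requests:

from concurrent.futures import ThreadPoolExecutor
from urllib.request import Request, urlopen

URL = "http://example.com/big.iso"   # hypothetical single large file
TOTAL = 400 * 1024 * 1024            # assumed file size: 400 MiB
FLOWS = 4
CHUNK = TOTAL // FLOWS

def fetch_slice(i):
    start = i * CHUNK
    end = TOTAL - 1 if i == FLOWS - 1 else start + CHUNK - 1
    req = Request(URL, headers={"Range": f"bytes={start}-{end}"})
    with urlopen(req) as resp:       # each slice travels on its own TCP flow
        return start, resp.read()

with ThreadPoolExecutor(max_workers=FLOWS) as pool:
    with open("big.iso", "wb") as out:
        for start, blob in pool.map(fetch_slice, range(FLOWS)):
            out.seek(start)          # reassemble slices at their offsets
            out.write(blob)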
"Another possibility would be using some WAN accelerators (like Riverbeds, e.g.) which would sent "local ACKs" to the clients on both sides and improve the efficiency of the WAN file transfer."
Yep, again, and such products can even "transfer" faster than the line rate. That is possible because they generally compress in-line and also use local caching (negating the need to send some data). One of their issues is that their transfer rates can be highly variable and inconsistent.
03-31-2015 07:23 AM
Hi Joseph,
ad "For a single large file, a variation of what Milan described, is using a utility that can slice and dice (and reassemble) a single file across multiple TCP flows."
That's exactly what the FTP client was doing!
I might find the name of that client if I search my archive. But that was several years ago, so I believe there are newer, more efficient tools available nowadays.
Probably also for other file transfer protocols.
Best regards,
Milan
03-31-2015 08:13 AM
Ah, thanks for the clarification. I've seen some that will take multiple files and run transfers for those in parallel, but slicing and dicing a single file is (or was) rarer. (I recall we had some utility that would also try to compress, too.)