11-18-2004 07:30 AM - edited 03-02-2019 08:02 PM
Hi guys,
I need your help on this one. I know T3 throughput is 5.625MB/s:
(45 Mb/s) * (1 byte / 8 bits) = 5.625 MB/s
And I used extended ping to figure out my RTT delay: 70 ms.
So to factor in the delay, how do I calculate how much data I can transfer one way through a T3 that has a 35 ms one-way delay? Thanks.
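One way to put the delay and the line rate together is the bandwidth-delay product: the amount of data that has to be "in flight" to keep the T3 full. This is just a hedged sketch using the numbers from the post (45 Mb/s line rate, 70 ms RTT), not anything vendor-specific:

```python
# Sketch: bandwidth-delay product for a T3 with a 70 ms RTT.
# Both numbers come from the post; nothing here is measured on real gear.

link_rate_bps = 45_000_000        # T3 line rate in bits per second
rtt_s = 0.070                     # round-trip time from extended ping

# Bits that must be in flight to keep the pipe full
bdp_bits = link_rate_bps * rtt_s
bdp_bytes = bdp_bits / 8

print(f"Bandwidth-delay product: {bdp_bits:,.0f} bits ({bdp_bytes / 1024:.0f} KB)")
# -> about 3,150,000 bits, roughly 385 KB
```

If the sender's window is at least that large, the delay by itself does not reduce throughput; if it is smaller, throughput drops to roughly window / RTT.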
11-18-2004 09:03 AM
That amount of delay isn't going to have any real effect on your throughput. The real factors are the performance of the server producing the stream of data, what type of data it is, and whether it requires any acknowledgements. You could have a 10 second delay going to the moon, but if it is a one-way stream then you can still get a 10Gig data stream to a receiver. The best way is just to test it out with the software and equipment in hand.
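To illustrate the acknowledgement point with a rough sketch: a window-based protocol like TCP is capped at about window / RTT per connection, while a one-way stream is not. The 64 KB window below is an assumed default, not something from the post:

```python
# Sketch: link-limited vs. window-limited throughput on the T3.
# Assumed values: 64 KB TCP window (hypothetical default), 70 ms RTT from the post.

window_bytes = 64 * 1024                 # assumed TCP receive window
rtt_s = 0.070                            # 70 ms round-trip time

window_limited_Bps = window_bytes / rtt_s    # bytes/s for one TCP connection
link_limited_Bps = 45_000_000 / 8            # T3 line rate in bytes/s

print(f"Window-limited: {window_limited_Bps / 1e6:.2f} MB/s "
      f"({window_limited_Bps * 8 / 1e6:.1f} Mb/s)")
print(f"Link-limited:   {link_limited_Bps / 1e6:.3f} MB/s (45 Mb/s)")
# A single connection with a 64 KB window tops out near 0.94 MB/s (~7.5 Mb/s),
# well below the T3 rate; a stream that needs no acknowledgements is unaffected.
```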
11-18-2004 09:20 AM
Well, right now I am just in the planning stage. So before doing anything, I would like to calculate the theoretical transfer time. My goal is to transfer 1TB of data from the west coast to the east coast; one-way traffic with a 35 ms delay. I just want to know "on paper" how long that would take, factoring in the delay. Without factoring in the delay, I calculated about 50 hours to transfer 1TB of data.
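A quick sketch of the "on paper" numbers: the link-limited case reproduces the ~50 hour figure above, and the window-limited case shows what a single TCP connection would do under the assumed 64 KB window and 70 ms RTT used earlier (both assumptions, not measurements):

```python
# Sketch: theoretical time to move 1 TB over the T3.
# link-limited = raw 45 Mb/s; window-limited = assumed single TCP connection,
# 64 KB window, 70 ms RTT.

data_bytes = 1_000_000_000_000             # 1 TB (decimal)
link_rate_Bps = 45_000_000 / 8             # 5.625 MB/s
window_limited_Bps = 64 * 1024 / 0.070     # ~0.94 MB/s

link_hours = data_bytes / link_rate_Bps / 3600
window_hours = data_bytes / window_limited_Bps / 3600

print(f"Link-limited:   {link_hours:.1f} hours")     # ~49.4 hours
print(f"Window-limited: {window_hours:.0f} hours")   # ~297 hours (~12 days)
```

So the 35 ms one-way delay does not change the ~50 hour floor by itself; it only hurts if the transfer protocol's window is too small, in which case multiple parallel connections or a larger window close the gap.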