
New Circuit Latency Increase and Upload Speed

I recently did an upgrade with AT&T from an older ADI/MIS 45M/45M DS3 circuit to an ADI/MIS 100M/100M circuit. Both are provided over fiber run into my office, and both are serviced by my local LEC, Frontier Communications. In the process of test and turn-up for the new circuit, I've noticed the following issues.

1. My old circuit pings to the gateway at 7-8 ms, whereas the new circuit pings at 33-34 ms.

2. My old circuit would speed test at 40M down / 40M up, whereas the new circuit speed tests at 95M down and only 60-80M up.

I've opened a trouble ticket with AT&T but at this point have been unable to get any answers to these issues.

What I think I know:
I believe that the old circuit is a DS3 that rides over an OC3. The hardware in use at my office is a Cisco DS3+ card in a Cisco 7206VXR router with an NPE-G1. I believe that the circuit goes from Tuscola, IL to Champaign, IL to Indianapolis, IN, to Chicago, IL.

When talking to the AT&T engineer regarding the trouble ticket he stated that it was provisioned down to St. Louis. From there I assume it goes to Indianapolis.

The AT&T engineer stated that the latency was due to the circuit being Ethernet rather than a dedicated serial circuit. I just don't know enough about it to be able to dispute that, but as a small ISP with microwave wireless internal WAN links that average 9 ms through Ethernet connections with 10 routers in between, I would have thought the new circuit over fiber would do better.


Traceroutes for each circuit below:


Tracing route to over a maximum of 30 hops

1 33 ms 33 ms 33 ms
2 41 ms 47 ms 47 ms
3 43 ms 43 ms 43 ms
4 39 ms 40 ms 39 ms
5 39 ms 39 ms 39 ms
6 * * * Request timed out.
7 39 ms 39 ms 39 ms

Trace complete.



Tracing route to over a maximum of 30 hops

1 <1 ms <1 ms <1 ms
2 13 ms 15 ms 15 ms
3 15 ms 15 ms 21 ms
4 13 ms 15 ms 15 ms
5 12 ms 12 ms 13 ms
6 12 ms 13 ms 13 ms
7 * * * Request timed out.
8 12 ms 12 ms 12 ms

Trace complete.

Below is a sampling of speed tests:

10/12/19 - 4:29 PM:   89D/56U, 94D/64U
10/12/19 - 9:41 PM:   90D/56U, 96D/65U
10/13/19 - 12:24 AM:  91D/69U, 96D/60U
10/13/19 - 12:58 PM:  91D/66U, 95D/63U
10/13/19 - 10:50 PM:  91D/60U, 94D/59U; DSL Reports: 96D/55U
10/14/19 - 11:14 AM:  91D/62U, 95D/67U; AT&T: 96D/80U; DSL Reports: 96D/56U
10/14/19 - 3:00 PM:   91D/66U, 95D/59U; AT&T: 96D/80U; DSL Reports: 96D/68U
10/14/19 - 8:13 PM:   58D/65U, 86D/65U; AT&T: 96D/85U; DSL Reports: 95D/49U
10/15/19 - 12:53 AM:  90D/67U, 95D/67U; AT&T: 96D/58U; DSL Reports: 95D/56U


So what I'm wondering is: what questions should I be asking the AT&T/Frontier engineers to get answers on the latency and asymmetrical upload issues?

VIP Expert

Re: New Circuit Latency Increase and Upload Speed

First, perhaps a quick review of what impacts "speed".

When sending a packet or frame, the leading edge of the packet or frame arrives after about the same amount of time regardless of the bandwidth, for copper or fiber. An electrical signal on copper and a light signal in fiber both propagate at a roughly constant speed ratio relative to light in a vacuum. It doesn't matter whether you're sending at 300 bps or 100G; only overall distance matters.
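To put a rough number on the distance-only component, here's a minimal sketch. The ~2/3 velocity factor for fiber and the 300 km example distance are assumptions for illustration, not figures from this thread:

```python
# Propagation delay depends only on distance and the medium's velocity
# factor, not on the link's bit rate.
C_VACUUM_KM_S = 299_792        # speed of light in a vacuum, km/s
FIBER_VELOCITY_FACTOR = 2 / 3  # typical ballpark for fiber (assumed)

def propagation_ms(distance_km: float) -> float:
    """One-way propagation delay over fiber, in milliseconds."""
    return distance_km / (C_VACUUM_KM_S * FIBER_VELOCITY_FACTOR) * 1000

# A hypothetical 300 km fiber path:
print(round(propagation_ms(300), 2))  # ~1.5 ms one way
```

So even a few hundred extra kilometers of routing (say, via St. Louis instead of a more direct path) adds only a few ms each way; distance alone rarely explains a 25 ms jump by itself.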

Higher bandwidth (i.e. bit rate) allows more information to be sent in the same time.

So, everything else being equal, if you ping using a minimal-size packet at 300 bps vs. 100G, and then ping at MTU, again at 300 bps vs. 100G, you'll see a bigger difference in time for the MTU pings.

In other words, in your case, pinging across 100 Mbps vs. 45 Mbps, you should see a decrease in ping times, with more of a delta in those times as you increase the ping packet's size.
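A quick sketch of those serialization times (packet sizes are illustrative; a real ICMP packet has some header overhead on top):

```python
def serialization_ms(packet_bytes: int, link_bps: float) -> float:
    """Time to clock one packet onto the wire, in milliseconds."""
    return packet_bytes * 8 / link_bps * 1000

# Minimal ping payload vs. a near-MTU ping, at 45 Mbps vs. 100 Mbps.
for size in (64, 1500):
    t45 = serialization_ms(size, 45e6)
    t100 = serialization_ms(size, 100e6)
    # The absolute saving grows with packet size (~0.006 ms at 64 bytes,
    # ~0.147 ms at 1500 bytes) - though both are tiny next to a 33 ms RTT.
    print(size, round(t45, 3), round(t100, 3), round(t45 - t100, 3))
```

Note how small these numbers are: serialization alone can't account for the observed 25 ms difference, which points at the path and switching architecture instead.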

In the above, I mentioned everything else being equal. In your case, what you had, and now have, might be very different.

For a dedicated L2 link, you may have had a virtual circuit. I.e. as you modulate your signal, it's sent along at media speed; basically about as fast as if you actually had a single physical wire/fiber running exactly the same way. (BTW, shared/hub Ethernet works this way too.)

"Modern" Ethernet is often switched. Switched Ethernet, generally, is store and forward (NB: there's also "cut-through" forwarding, which mitigates, but doesn't eliminate, this). I.e. every hop adds latency while waiting for the frame to be fully received before a newly constructed frame can be transmitted. Further, with switched Ethernet, ports have queues, so besides physical latency due to the actual reception and transmission at each hop, packets might be further delayed by port congestion. (In past postings, I've described port congestion as when you cannot immediately transmit because there's traffic ahead of yours, even if there's only one frame being physically transmitted just then.)
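A back-of-the-envelope sketch of the store-and-forward component alone (queuing and processing delay excluded; the hop count and frame size are assumptions for illustration):

```python
def store_and_forward_ms(frame_bytes: int, link_bps: float, hops: int) -> float:
    """Cumulative delay from fully receiving a frame at each hop before
    it can be retransmitted (ignores queuing and processing delay)."""
    per_hop_ms = frame_bytes * 8 / link_bps * 1000
    return per_hop_ms * hops

# A 1500-byte frame crossing 10 store-and-forward hops at 100 Mbps:
print(round(store_and_forward_ms(1500, 100e6, 10), 2))  # ~1.2 ms
```

By itself that's only a millisecond or so; it's the combination with queuing under load and a longer physical path that tends to produce the bigger jumps.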

The prior (basically the difference between a virtual circuit and a switched circuit) likely accounts for the increased ping times. This might be what the AT&T engineer was trying to explain.

If the foregoing does explain your jump in latency, that jump in latency might also explain much in transmission speeds too. Many network protocols rely on some form of ACK from receiver to inform the transmitter how things are going. Increase the time for that, and the effective transfer rate decreases. (BTW, additional WAN latency is why network applications often operate slower across a WAN regardless of "bandwidth".)
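One way to see the RTT effect on throughput is the classic per-flow ceiling of one window per round trip. This sketch assumes a 64 KiB receive window with no TCP window scaling (modern hosts normally negotiate much larger effective windows), so treat the numbers as illustrative only:

```python
def tcp_ceiling_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound for one TCP flow: at most one window in flight per RTT."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1e6

# Classic 64 KiB window, at the two round-trip times from this thread:
print(round(tcp_ceiling_mbps(65535, 8), 1))   # ~8 ms RTT  -> ~65.5 Mbps
print(round(tcp_ceiling_mbps(65535, 33), 1))  # ~33 ms RTT -> ~15.9 Mbps
```

The exact ceiling depends on the negotiated window, but the direction is the point: quadruple the RTT and a window-limited flow's maximum rate drops to a quarter, which is why tools like iperf often use larger windows or parallel streams to fill a higher-latency pipe.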

All the above is a long way of saying, what you're seeing might be "normal", and you need to live with it. I.e. it's the nature of the beast.

However, I cannot say, with 100% certainty, that there are no issues. On the whole, the big service providers especially get it right, but sometimes they too have issues that you need to rub their nose in.

An example, incidentally with the same vendor: I had one circuit (also Ethernet) I didn't think was quite right (when our load neared 100% of CIR, I wasn't getting 100% on the far side - I was shaping, so we didn't go past 100% of CIR). I was on them for about three months. Eventually they even complained to my management that I wouldn't accept there wasn't some issue on their side. Fortunately my management supported me. Also fortunately, they did find the problem. It turned out to be out-of-date firmware on a line card. The buggy firmware would drop packets (a low percentage, under 1%) when their port neared 100% utilization. Less loaded, no drops.


Re: New Circuit Latency Increase and Upload Speed

Hi Joseph, thanks so much for all the detail in your reply.  That gives me much more to go on than I had before.  So when you go to order a new circuit from "the same vendor", do you ask for a particular latency in ms, or ask for cut-through switching or a virtual circuit?  What is your list of requirements when you buy a new circuit?  How much, if at all, does that impact the cost of the circuit?


I run a small ISP and try my best to keep my customers' "perception of reality" as close as I can get it to how they like the Internet to perform.  Do you think this jump from 8-10 ms latency to 33-34 ms latency is going to impact that perception much?


Once again I'm just trying to update my own best practices moving forward.



VIP Expert

Re: New Circuit Latency Increase and Upload Speed

Most concern themselves with price for the same amount of bandwidth. It's possible, if you go to a WAN vendor and note you have a requirement for low(er) latency, they might have some options to provide it (although likely at additional cost).

As to your customers noticing the increased latency, some may. Much depends on what they are doing on a network (and how observant they are).