How would you know that you have latency issues on your network?

F Martinez
Level 1

Hello,

 

How would you know that you have latency issues on your network?

Thanks in advance for your response!


12 Replies

Hi,

Normally you can use the ping command to check latency between two endpoints. If you describe your specific scenario, you can get a clearer answer.

Regards,
Please rate this and mark as solution/answer, if this resolved your issue
Good luck
KB
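As a rough illustration of measuring latency without ICMP ping (the function name and approach are my own sketch, not a standard tool): the time a TCP handshake takes to a reachable port is dominated by one network round trip, so it can serve as a quick latency probe when ping is blocked.

```python
import socket
import time

def tcp_connect_rtt_ms(host, port, timeout=2.0):
    """Time a TCP three-way handshake as a rough round-trip latency probe.

    This is not ICMP ping, but connect time is dominated by one network
    round trip, so it approximates latency when ICMP is filtered.
    Raises OSError if the port is unreachable.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        return (time.perf_counter() - start) * 1000.0
```

For example, `tcp_connect_rtt_ms("example.com", 443)` times one handshake to a public web server; repeating it and comparing samples gives a feel for latency variability.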

balaji.bandi
Hall of Fame

To understand network speed, we have to consider two elements: bandwidth and latency.

 

Bandwidth is just one element of what a person perceives as the speed of a network.

 

Latency is another element that contributes to network speed.

 

The term latency refers to any of several kinds of delays typically incurred in processing of network data. A so-called low latency network connection is one that generally experiences small delay times, while a high latency connection generally suffers from long delays.

 

Although the theoretical peak bandwidth of a network connection is fixed according to the technology used, the actual bandwidth you will obtain varies over time and is affected by high latencies. Excessive latency creates bottlenecks that prevent data from filling the network pipe, thus decreasing effective bandwidth. The impact of latency on network bandwidth can be temporary (lasting a few seconds) or persistent (constant) depending on the source of the delays.
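The point that latency prevents data from "filling the pipe" can be made concrete with the window-limited throughput of a protocol like TCP: at most one window of data can be in flight per round trip, so for a fixed window, throughput falls as round-trip time rises. A minimal sketch (function name is mine):

```python
def window_limited_throughput_bps(window_bytes, rtt_seconds):
    """Upper bound on throughput for a window-based protocol such as TCP:
    at most one window of data can be in flight per round trip."""
    return window_bytes * 8 / rtt_seconds

# The same fixed 64 KiB window across a 1 ms LAN vs a 100 ms WAN:
lan = window_limited_throughput_bps(65535, 0.001)   # ~524 Mbps
wan = window_limited_throughput_bps(65535, 0.100)   # ~5.2 Mbps
```

Same window, same link bandwidth, yet a hundredfold difference in achievable throughput; this is why high latency shows up as "low bandwidth" to users (and why TCP window scaling exists).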

 

Network tools like ping tests and traceroute measure latency by determining the time it takes a given network packet to travel from source to destination and back, the so-called round-trip time. Round-trip time is not the only way to specify latency, but it is the most common.
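To summarize a set of round-trip-time samples the way ping's closing `min/avg/max/mdev` line does, something like the following sketch works (the function is my own illustration; mean deviation is used here as a simple jitter measure):

```python
def rtt_summary(samples_ms):
    """Summarize round-trip-time samples: min/avg/max plus mean
    deviation (mdev), which ping reports as a simple jitter measure."""
    avg = sum(samples_ms) / len(samples_ms)
    mdev = sum(abs(s - avg) for s in samples_ms) / len(samples_ms)
    return {"min": min(samples_ms), "avg": avg,
            "max": max(samples_ms), "mdev": mdev}
```

A high mdev relative to the average is exactly the "variable latency" symptom described elsewhere in this thread.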

BB

***** Rate All Helpful Responses *****

How to Ask The Cisco Community for Help

Could you please give us the output of a ping or traceroute that shows the latency?

Here is an example:

 

Pinging google.co.uk [216.58.206.99] with 32 bytes of data:
Reply from 216.58.206.99: bytes=32 time=33ms TTL=54

Pinging 1.1.1.1 with 32 bytes of data:
Reply from 1.1.1.1: bytes=32 time=30ms TTL=57

 

 

tracert 1.1.1.1

Tracing route to one.one.one.one [1.1.1.1]
over a maximum of 30 hops:

1 <1 ms <1 ms <1 ms bthub [192.168.1.254]
2 * * * Request timed out.
3 * 13 ms 13 ms 31.55.186.177
4 13 ms 14 ms 14 ms 31.55.186.176
5 13 ms 13 ms 14 ms core4-hu0-8-0-5.faraday.ukcore.bt.net [195.99.127.56]
6 13 ms 13 ms 13 ms peer8-et-7-0-3.telehouse.ukcore.bt.net [62.172.103.174]
7 14 ms 13 ms 14 ms 109.159.253.253
8 13 ms 13 ms 13 ms one.one.one.one [1.1.1.1]

Trace complete.

BB

***** Rate All Helpful Responses *****

How to Ask The Cisco Community for Help

Thanks for the response. I appreciate it. So how do you know that there's latency in this output?

My concern is: are there any documents about latency thresholds (how long packets take to get from point A to point B) and bandwidth?

Hi,
Normally, applications define the round-trip delay and bandwidth they require for successful operation in their documentation; it depends on the application. You can then optimize the network according to those requirements.

For example, voice networks need specific delay thresholds to work without issues. Check the link below:
https://www.voip-info.org/qos/

Each application has its own network requirements like this. Hope this gives you some idea.
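The idea of per-application thresholds can be sketched as a simple lookup; note these budget figures are illustrative rules of thumb (the VoIP number follows the common ITU-T G.114 guidance of at most 150 ms one-way delay, the others are not from any standard), and all names here are my own:

```python
# Illustrative one-way delay budgets in milliseconds. The VoIP figure
# follows common ITU-T G.114 guidance (<= 150 ms one way); the other
# values are rules of thumb, not standards.
DELAY_BUDGET_MS = {
    "voip": 150,
    "online_gaming": 50,
    "web_browsing": 100,
}

def within_budget(app, one_way_delay_ms):
    """True if the measured one-way delay fits the application's budget."""
    return one_way_delay_ms <= DELAY_BUDGET_MS[app]
```

So a 120 ms path is fine for voice but already past a typical gaming budget, which matches the point that each application has its own characteristics.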

*** Pls rate and mark solved all useful responses ***
Good Luck
Please rate this and mark as solution/answer, if this resolved your issue
Good luck
KB

Joseph W. Doherty
Hall of Fame
Latency issues are usually noticed when they are variable, i.e. when network applications respond faster or slower at different times, sometimes only seconds apart.

Latency issues might also be noticed where there's some form of network monitoring.

BTW, network application latency variability isn't always caused by actual network latency; it might also be caused by lost packets and/or application latency on a server.

Actual network latency generally has two causes: congestion at certain points in the network, or physical distance. The former is generally very variable, while the latter is a constant. The latter often explains why an application that works "fast" on a LAN works slowly when several thousand miles separate the hosts, even when the available bandwidth is the same.
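The distance component can be quantified: light in optical fiber travels at roughly two-thirds the speed of light in vacuum, about 200 km per millisecond, so distance alone sets a hard floor on round-trip time that no amount of bandwidth can remove. A minimal sketch (names and the 200 km/ms figure are my own framing of that rule of thumb):

```python
# Light in optical fiber travels at roughly 2/3 of c, about 200 km per ms.
FIBER_KM_PER_MS = 200.0

def min_rtt_ms(path_km):
    """Lower bound on round-trip time imposed by distance alone:
    out and back over the fiber path, ignoring queuing, processing,
    and the fact that real routes are rarely straight lines."""
    return 2 * path_km / FIBER_KM_PER_MS

# e.g. a ~5600 km transatlantic fiber path cannot beat ~56 ms RTT
```

This is why the same application feels fast on a LAN and slow across an ocean even when the available bandwidth is identical.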

Thank you for all your responses!

 

Found a good article about network latency:

 

How Much Lag Is Too Much?

The impact of lag depends on what a person is doing on the network and, to some degree, the level of network performance they have grown accustomed to. Users of satellite internet expect very long latencies and tend not to notice a temporary lag of an additional 50 or 100 ms.

 

Dedicated online gamers, on the other hand, strongly prefer their network connection to run with less than 50 ms of latency and will quickly notice any lag above that level. In general, online applications perform best when network latency stays below 100 ms and any additional lag will be noticeable to users.

Reference: https://www.lifewire.com/lag-on-computer-networks-and-online-817370

I'd just like to add something that took me a while to learn when I first started: when reviewing a traceroute, each hop is probed individually (traceroute sends probes with increasing TTL values, and each router along the path returns a reply when the TTL expires), so you get a latency figure for every hop. This matters because I often get latency reports with traceroutes where, say, the 5th hop shows 300 ms but the final hop is at 20 ms. That is not a problem, because the actual round-trip time to the destination is only 20 ms.

 

The reason you see a big difference at one hop is usually the control plane: because the device is generating the response itself rather than simply forwarding the traffic through, the reply is handled at a lower priority on most devices due to control-plane configuration. Other factors can influence the response time as well, of course.

Example:


1 1 ms <1 ms <1 ms 10.1.1.1
2 6 ms 5 ms 5 ms 10.1.1.2
3 6 ms 5 ms 5 ms 111.111.111.123
4 6 ms 5 ms 6 ms 111.111.111.124
5 30 ms 321 ms 6 ms 108.170.246.161 
6 6 ms 6 ms 6 ms 72.14.234.155
7 6 ms 5 ms 5 ms google-public-dns-a.google.com [8.8.8.8]

Trace complete.
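The rule of thumb above (an isolated mid-path spike that doesn't persist to the final hop is usually just slow control-plane responses) can be sketched as a small parser. This is my own illustrative helper, assuming plain "N ms" probe times on each hop line and ignoring lines without them:

```python
import re

def suspicious_hops(traceroute_text, factor=5.0):
    """Return hop numbers whose worst probe time greatly exceeds the
    final hop's worst time. Per the discussion above, such isolated
    spikes usually reflect deprioritized control-plane replies at that
    router, not latency on the forwarding path."""
    hops = []
    for line in traceroute_text.strip().splitlines():
        times = [float(t) for t in re.findall(r"(\d+) ms", line)]
        if times:
            hops.append(max(times))
    final = hops[-1]
    return [i + 1 for i, worst in enumerate(hops) if worst > factor * final]
```

Run against the trace above, only hop 5 is flagged; since the final hop stays at ~6 ms, the 321 ms reading is the router answering slowly, not path latency.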

Although you're correct that network devices might be slow to respond to a ping or traceroute, since responding is generally a low-priority task, 300+ ms is a large jump from 5 or 6 ms, and might still indicate a burst of transit latency at that hop. A strong clue to whether that device is just occasionally slow to respond, or whether there is transient path latency, is whether any downstream devices, including the final one, also show occasional jumps of about the same size in their latency. (NB: we don't see that in the sample traceroute you've posted, but a single default traceroute provides little data when hunting occasional latency issues.)

Transient latency can be somewhat difficult to capture, as you're trying to detect a pattern in something that only happens occasionally and adds only a fraction of a second of delay. (A common cause of such latency is "micro bursts".)
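One crude way to hunt for such transient spikes in a series of RTT samples is to flag samples that jump well above the series median (which, unlike the mean, isn't dragged up by the spikes themselves). The function below is my own sketch, not a standard tool:

```python
def transient_spikes(samples_ms, threshold_ms=50.0):
    """Indices of RTT samples more than threshold_ms above the series
    median: a crude detector for occasional micro-burst-style delay.
    The median is used as the baseline because it is robust to the
    very spikes we are trying to find."""
    ordered = sorted(samples_ms)
    median = ordered[len(ordered) // 2]
    return [i for i, s in enumerate(samples_ms) if s - median > threshold_ms]
```

Fed with a long series of periodic probes, a detector like this shows whether spikes cluster at certain times of day, which is often the first clue toward a congestion or micro-burst cause.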

BTW, the best way to measure latency across a path is to use dedicated measurement devices that sit in parallel with the two hosts. Unfortunately, they're often difficult to obtain.

Agreed indeed. This is where I would use MTR or similar, as traceroute obviously isn't the best tool for this sort of diagnosis. I'm actually facing an issue right now where the IP input process is at 20% CPU on a core router, pushing overall processing close to 80%. I've got a maintenance window scheduled to fix it, but in the meantime, during diagnostics, I've been closely comparing transit traffic with traffic directed at the router itself. The latency of direct traffic is sometimes under 1 ms and sometimes over 500 ms, but through traffic is never more than 5 ms and averages under 1 ms (note that I'm testing on an ISP backbone network, hence the low latency). So agreed: it can be sporadic, or it can be something like my current scenario, where transit traffic is unaffected because it's switched via CEF rather than process switching.

And with regard to the IBM study, this, in my opinion, is 100% true: it's the variability in latency that matters, even for gamers! If you have high but steady latency, you can mentally adjust; sporadic jitter is what causes all sorts of issues.

BTW, decades ago IBM published a study about how users perceive application delays and how those delays impact them.

The long and short of that report: users are most impacted by highly variable response times, as they never know what to expect. Even constant multi-minute responses were okay, because users would come to expect them and might think, "if I make this request, I'll have time to get a cup of coffee."

That report even suggested slowing "too fast" responses so that all responses to the same input took the same amount of time. I've never done that, but on networks I've found QoS policies, like fair-queue, often reduce how variable applications' response times are, and this does seem to be better accepted by users.

Personally, though, I say "computers should wait on people, people shouldn't wait on computers" is the ideal. Indeed, users can get used to using satellite Internet, but even if they've never experienced anything better, ask them how much they like it.

Back in ye olden times, when I started in the industry, I used 300 baud dial-up and direct terminal connections at 1200 baud, neither was much of a joy to use. (Though, I did become proficient doing intra-line editing - if you don't know what that is - you're fortunate - laugh.)
