01-10-2019 11:27 PM
Hello,
How would you know that you have latency issues on your network?
Thanks in advance for your responses!
Solved! Go to Solution.
Labels: Other Routing
Accepted Solutions
01-12-2019 11:25 AM
Latency issues might also be noticed where there's some form of network monitoring.
BTW, variability in network application latency isn't always caused by actual network latency; it can also be caused by lost packets and/or application latency on a server.
Actual network latency generally has two causes: congestion at certain points in the network, or physical distance. The former is highly variable, while the latter is a constant. Distance, though, often explains why an application that works "fast" on a LAN works slowly when several thousand miles separate the hosts, even when the available bandwidth is the same.
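The distance component has a hard floor that's easy to estimate: light in fiber travels at roughly two-thirds of c, or about 200 km per millisecond. A quick back-of-the-envelope sketch (the constant is approximate, and the helper name is my own):

```python
# Estimate the one-way propagation delay over fiber for a given
# distance. Light in fiber covers roughly 200,000 km/s (~2/3 of c);
# this is a floor, before any queuing or processing delay.
def fiber_delay_ms(distance_km: float) -> float:
    SPEED_IN_FIBER_KM_PER_S = 200_000  # approximate
    return distance_km / SPEED_IN_FIBER_KM_PER_S * 1000

# A ~4,800 km (3,000 mile) path adds about 24 ms each way,
# so ~48 ms of round-trip time from distance alone.
print(round(fiber_delay_ms(4800), 1))
```

That constant floor is why no amount of extra bandwidth makes a transcontinental link feel like a LAN.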
01-15-2019 10:05 PM
Thank you for all your responses!
Found a good article about network latency:
How Much Lag Is Too Much?
The impact of lag depends on what a person is doing on the network and, to some degree, the level of network performance they have grown accustomed to. Users of satellite internet expect very long latencies and tend not to notice a temporary lag of an additional 50 or 100 ms.
Dedicated online gamers, on the other hand, strongly prefer their network connection to run with less than 50 ms of latency and will quickly notice any lag above that level. In general, online applications perform best when network latency stays below 100 ms and any additional lag will be noticeable to users.
Reference: https://www.lifewire.com/lag-on-computer-networks-and-online-817370
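The article's rough thresholds can be captured in a small helper; the cutoffs come from the quoted text above, but the category labels are my own:

```python
# Rough latency classification based on the thresholds quoted above:
# under 50 ms suits even gamers; under 100 ms is fine for most online
# applications; beyond that, lag becomes noticeable to users.
def classify_latency(rtt_ms: float) -> str:
    if rtt_ms < 50:
        return "good"
    if rtt_ms < 100:
        return "acceptable"
    return "noticeable lag"

print(classify_latency(75))
```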
01-16-2019 03:55 AM
I'd just like to add something that took me a while to learn when I first started: a traceroute works by probing each hop along the path and timing the response, so you get a latency figure for every hop. This matters because I often get latency reports with traceroutes attached where, say, the 5th hop shows 300 ms while the final hop is at 20 ms. That is not an issue, because the actual round-trip time is only 20 ms.
The big difference between hops is usually down to the control plane: when a device has to respond itself, rather than just forward traffic through, that response is handled at lower priority on most devices because of control-plane policing. Other factors can influence the response time too, of course.
Example:
1 1 ms <1 ms <1 ms 10.1.1.1
2 6 ms 5 ms 5 ms 10.1.1.2
3 6 ms 5 ms 5 ms 111.111.111.123
4 6 ms 5 ms 6 ms 111.111.111.124
5 30 ms 321 ms 6 ms 108.170.246.161
6 6 ms 6 ms 6 ms 72.14.234.155
7 6 ms 5 ms 5 ms google-public-dns-a.google.com [8.8.8.8]
Trace complete.
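To illustrate the point, here is a small sketch that parses tracert-style lines like the ones above and pulls out the figure that actually matters, the final hop's RTT. The parser assumes probes formatted as "N ms", "<N ms", or "*", and the excerpt is abridged from the trace above:

```python
import re

# Abridged excerpt of the tracert output above; hop 5 shows a
# 321 ms probe, but the destination answers in ~5 ms.
TRACE = """\
1     1 ms    <1 ms    <1 ms  10.1.1.1
2     6 ms     5 ms     5 ms  10.1.1.2
5    30 ms   321 ms     6 ms  108.170.246.161
7     6 ms     5 ms     5 ms  google-public-dns-a.google.com [8.8.8.8]
"""

def best_rtts(trace: str) -> dict:
    """Map hop number -> best (minimum) RTT in ms."""
    hops = {}
    for line in trace.splitlines():
        m = re.match(r"\s*(\d+)\s+(.*)", line)
        if not m:
            continue
        rtts = [int(x) for x in re.findall(r"<?(\d+) ms", m.group(2))]
        if rtts:
            hops[int(m.group(1))] = min(rtts)
    return hops

hops = best_rtts(TRACE)
# The 321 ms probe at hop 5 is control-plane noise, not path latency;
# only the final hop's RTT reflects the end-to-end round trip.
print("end-to-end RTT:", hops[max(hops)], "ms")
```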
01-10-2019 11:37 PM
Normally you can use the ping command to check latency between two endpoints, but be more specific about your scenario to get a clearer answer.
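If ICMP is blocked somewhere along the path, you can get a rough RTT figure by timing a TCP handshake instead. A minimal sketch (the demo connects to a local listener so it runs anywhere; in practice you'd point it at a real host and port of your choosing):

```python
import socket
import time

# Rough RTT estimate: time how long a TCP three-way handshake takes.
# Useful where ICMP ping is filtered; host and port are up to you.
def tcp_rtt_ms(host: str, port: int, timeout: float = 2.0) -> float:
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000

# Demo against a local listener so the sketch is self-contained.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
rtt = tcp_rtt_ms("127.0.0.1", listener.getsockname()[1])
listener.close()
print(f"{rtt:.2f} ms")
```

Note this measures handshake time, which includes a little host-side processing on top of pure network delay.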
regards,
Good luck
KB
01-10-2019 11:46 PM - edited 01-10-2019 11:47 PM
In order to understand network latency, we have to consider two things: bandwidth and latency.
Bandwidth is just one element of what a person perceives as the speed of a network.
Latency is another element that contributes to network speed.
The term latency refers to any of several kinds of delays typically incurred in processing of network data. A so-called low latency network connection is one that generally experiences small delay times, while a high latency connection generally suffers from long delays.
Although the theoretical peak bandwidth of a network connection is fixed according to the technology used, the actual bandwidth you will obtain varies over time and is affected by high latencies. Excessive latency creates bottlenecks that prevent data from filling the network pipe, thus decreasing effective bandwidth. The impact of latency on network bandwidth can be temporary (lasting a few seconds) or persistent (constant) depending on the source of the delays.
Network tools like ping tests and traceroute measure latency by determining the time it takes a given network packet to travel from source to destination and back, the so-called round-trip time. Round-trip time is not the only way to specify latency, but it is the most common.
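The "latency prevents data from filling the pipe" effect can be put in numbers with the bandwidth-delay product: with a fixed TCP window, throughput is capped at window / RTT no matter how fast the link is. A quick sketch of the arithmetic:

```python
# With a fixed TCP window, throughput cannot exceed window / RTT,
# regardless of link bandwidth (the bandwidth-delay product limit).
def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    return window_bytes * 8 / (rtt_ms / 1000) / 1_000_000

# A classic 64 KB window over a 100 ms path tops out near 5 Mbps,
# even on a gigabit link.
print(round(max_throughput_mbps(65536, 100), 2))
```

This is why the same transfer that saturates a LAN crawls over a long-distance path of the same nominal bandwidth.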
01-11-2019 12:03 AM
Could you please give me an output of ping or traceroute that shows latency?
01-11-2019 12:13 AM
Here is an example:
Pinging google.co.uk [216.58.206.99] with 32 bytes of data:
Reply from 216.58.206.99: bytes=32 time=33ms TTL=54
Pinging 1.1.1.1 with 32 bytes of data:
Reply from 1.1.1.1: bytes=32 time=30ms TTL=57
tracert 1.1.1.1
Tracing route to one.one.one.one [1.1.1.1]
over a maximum of 30 hops:
1 <1 ms <1 ms <1 ms bthub [192.168.1.254]
2 * * * Request timed out.
3 * 13 ms 13 ms 31.55.186.177
4 13 ms 14 ms 14 ms 31.55.186.176
5 13 ms 13 ms 14 ms core4-hu0-8-0-5.faraday.ukcore.bt.net [195.99.127.56]
6 13 ms 13 ms 13 ms peer8-et-7-0-3.telehouse.ukcore.bt.net [62.172.103.174]
7 14 ms 13 ms 14 ms 109.159.253.253
8 13 ms 13 ms 13 ms one.one.one.one [1.1.1.1]
Trace complete.
01-11-2019 05:42 PM
Thanks for the response. I appreciate it. So how do you know that there's latency in this output?
My concern is: are there any documents about latency thresholds (how long packets should take to get from point A to point B) and bandwidth?
01-11-2019 11:52 PM
Normally, applications define the round-trip delay or bandwidth required for successful operation in their documentation; it depends on each application, and you can optimize the network according to those requirements.
For example, voice networks need specific delay thresholds to work without issues. Check the link below:
https://www.voip-info.org/qos/
Each application has its own network requirements like this. Hope this gives you some idea.
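As an illustration, commonly cited VoIP targets are a one-way delay of at most ~150 ms (per ITU-T G.114), jitter under ~30 ms, and loss under ~1%. A small sketch with those figures as illustrative defaults:

```python
# Check measured path metrics against commonly cited VoIP targets:
# one-way delay <= 150 ms (ITU-T G.114), jitter <= 30 ms, loss <= 1%.
# Treat these defaults as illustrative, not authoritative.
def voice_ready(one_way_delay_ms: float, jitter_ms: float,
                loss_pct: float) -> bool:
    return (one_way_delay_ms <= 150
            and jitter_ms <= 30
            and loss_pct <= 1.0)

ok = voice_ready(one_way_delay_ms=80, jitter_ms=10, loss_pct=0.2)
print(ok)
```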
*** Pls rate and mark all useful responses as solved ***
Good luck
KB
01-16-2019 06:09 AM
Transient latency can be somewhat difficult to capture, as you're trying to detect a pattern in something that happens only occasionally, with only a fraction of a second of additional delay. (A common cause of such latency issues is "microbursts".)
BTW, the best way to measure latency across a path is to use dedicated measurement devices that sit in parallel with the two hosts. Unfortunately, those are often difficult to obtain.
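Lacking dedicated probes, one cheap way to catch transient spikes is to sample RTT continuously and flag samples that jump well above the recent baseline. A sketch (the factor-of-3 threshold and 10-sample window are arbitrary choices of mine):

```python
import statistics

# Flag transient latency spikes in a stream of RTT samples by
# comparing each sample to the median of the preceding window.
# `factor` and `window` are illustrative defaults, not tuned values.
def find_spikes(samples_ms, factor=3.0, window=10):
    spikes = []
    for i in range(window, len(samples_ms)):
        baseline = statistics.median(samples_ms[i - window:i])
        if samples_ms[i] > baseline * factor:
            spikes.append(i)
    return spikes

# A steady ~5 ms path with one 48 ms transient at index 10.
rtts = [5, 6, 5, 6, 5, 5, 6, 5, 6, 5, 48, 6, 5]
print(find_spikes(rtts))
```

A median baseline is deliberately robust: one earlier spike in the window won't drag the threshold up and mask the next one.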
01-16-2019 09:05 AM
And with regard to the IBM study, this, in my opinion, is 100% true; it's the variability in latency that matters, even for gamers! With low jitter but high latency you can mentally adjust, whereas sporadic jitter causes all sorts of issues.
01-16-2019 06:22 AM - edited 01-16-2019 06:27 AM
BTW, decades ago IBM published a study about how application response delays impacted users' perception of those applications.
The long and short of that report: users are most impacted by highly variable response times, because they never know what to expect. Even constant multi-minute responses were okay, because users came to expect them and might think, "if I make this request, I'll have time to get a cup of coffee".
That report even suggested slowing "too fast" responses so that all responses to the same input took the same amount of time. I've never done that, but on networks I've found that QoS policies like fair-queue often decrease how variable applications' response times are, and this does seem to be better accepted by users.
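That "slow the too-fast responses" idea can be sketched as a wrapper that pads every call up to a fixed floor (the floor value here is an arbitrary illustration, not a recommendation):

```python
import time

# Pad every response up to a fixed floor so users see consistent
# timing, per the report's suggestion. The floor is illustrative.
def equalized(fn, floor_s: float):
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        remaining = floor_s - (time.perf_counter() - start)
        if remaining > 0:
            time.sleep(remaining)  # slow the "too fast" responses
        return result
    return wrapper

fast_lookup = equalized(lambda x: x * 2, floor_s=0.05)
t0 = time.perf_counter()
answer = fast_lookup(21)
elapsed = time.perf_counter() - t0
print(answer, f"{elapsed:.3f}s")  # always takes at least the floor
```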
Personally, though, I say the ideal is "computers should wait on people; people shouldn't wait on computers." Indeed, users can get used to satellite Internet, but even if they've never experienced anything better, ask them how much they like it.
Back in ye olden times, when I started in the industry, I used 300 baud dial-up and direct terminal connections at 1200 baud, neither was much of a joy to use. (Though, I did become proficient doing intra-line editing - if you don't know what that is - you're fortunate - laugh.)
