02-22-2017 05:02 AM - edited 03-08-2019 09:27 AM
Is it advisable to keep different arp-timeout values on the two sides of a connected interface? I have two distribution layer switches connected through an L3 link with a /30 mask. I get ping times of 70-80 ms most of the time, even when I am pinging a directly connected IP. Any thoughts?
02-22-2017 05:25 AM
Hi
You won't get a consistent reply when pinging an IP interface on a switch, because ICMP destined to the switch itself is handled by the CPU/backplane rather than forwarded in hardware. It isn't prioritised or given dedicated resources, so if the switch is under any pressure it's your ICMP packets that get delayed, making it look like there's latency.
What exactly is the issue you're having? If you're testing with pings, it should be from end device to end device, not from, say, a PC to a loopback or VLAN interface, as the results will be skewed for the reason above. Use IP SLA end to end for real latency tests.
Unless there's a specific reason to change the ARP timeout, it should be left at the default on both switches.
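A minimal IP SLA sketch of the kind suggested above (the operation number, source/destination addresses and frequency here are placeholders, not taken from the thread):

```
! On the switch that will source the probe
ip sla 10
 icmp-echo x.x.x.2 source-ip x.x.x.1
 frequency 30
ip sla schedule 10 life forever start-time now
!
! After a few probe cycles, check the measured RTT
show ip sla statistics 10
```

Note that an icmp-echo operation aimed at the far switch is still ICMP punted to that switch's CPU; for a truer end-to-end figure, probe between devices behind each switch, or use a udp-jitter operation with `ip sla responder` configured on the far side.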
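If you want to confirm the ARP timeout is at default, a quick check along these lines should work (the interface name is a placeholder):

```
! Default ARP timeout on IOS is 14400 seconds (4 hours);
! the value appears in the interface detail output
show interfaces GigabitEthernet1/1 | include ARP
!
! Return an interface to the default if it was changed
interface GigabitEthernet1/1
 no arp timeout
```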
02-22-2017 07:52 AM
The problem is this: let's say I have a connected link, x.x.x.1 on one switch and x.x.x.2 on the other (/30 mask). When I ping x.x.x.2 from the first switch I get 70 ms RTT. That is the problem.
02-22-2017 07:56 AM
Is the second switch's CPU under pressure at all, or is there any CoPP in place for ICMP traffic?
If you ping from a PC connected to switch 1, across that link, to a PC connected to switch 2, are the results the same? Don't ping switch to switch, as that won't give accurate results; it depends on what's happening on the switch at the time.
It does sound a bit high for a direct connection between two switches. Is it always 70 ms, or just spiking to that?
02-22-2017 08:01 AM
I get that 60-70% of the time, so it's not exactly spiking. And neither switch is under pressure; CPU utilization is fine on both.
02-22-2017 08:04 AM
It could be the backplane processing the ICMP traffic. Try testing it with two laptops going across the same link and see if the response is the same, or use IP SLA between the switches to get a properly accurate measurement rather than a standard ICMP check.
What type of switches are they?
02-23-2017 01:23 AM
They are 6840 switches.
02-23-2017 01:29 AM
Did you get a chance to test by passing traffic over the link, rather than pinging directly between the switches, or to use IP SLA between them?
02-23-2017 02:53 AM
Nope, I did not get a chance to do that, and I'm afraid I won't get one at all. This is for one of our customers and we do not have unhindered access to their network.
Could nested QoS explain latency like that?
service-policy type lan-queuing input INGRESS_EGRESS_QUEUING
service-policy type lan-queuing output INGRESS_EGRESS_QUEUING
!
policy-map type lan-queuing INGRESS_EGRESS_QUEUING
 class type lan-queuing AIG_RTP
  priority
 class type lan-queuing AIG_VIDEO
  bandwidth remaining percent 20
 class type lan-queuing AIG_CON_SGL
  bandwidth remaining percent 5
 class type lan-queuing AIG_BUSINESS
  bandwidth remaining percent 30
  random-detect dscp-based
  random-detect dscp 18 percent 80 100
  random-detect dscp 20 percent 70 100
  random-detect dscp 22 percent 60 100
 class type lan-queuing AIG_HTD
  bandwidth remaining percent 10
  random-detect dscp-based
  random-detect dscp 10 percent 80 100
  random-detect dscp 12 percent 70 100
  random-detect dscp 14 percent 60 100
 class class-default
  random-detect dscp-based
  random-detect dscp 8 percent 80 100
!
02-23-2017 04:32 AM
QoS can indeed cause latency for ICMP, but it would need to be applied to the particular interface you're crossing, and class-default is usually where ICMP lands if it isn't classified specifically somewhere in the QoS markings.
CoPP can be the cause as well, since ICMP toward the CPU is rate-limited on the backplane to save resources and for security.
QoS should only be an issue if the interface is getting maxed out and other traffic is being pushed through ahead of the ICMP.
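To check whether CoPP or a hardware rate limiter is policing the ICMP punted to the CPU, something along these lines should show per-class counters (the class names depend on the policy actually deployed):

```
! Show the control-plane service policy and its
! per-class conform/exceed counters
show policy-map control-plane
!
! On 6500/6800-class platforms, hardware rate limiters
! can also drop punted traffic such as ICMP
show mls rate-limit
```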