09-23-2021 12:43 AM
Hi All,
Is there any possibility that drops on the output queue affect the IP SLA probe when it sends icmp-echo? The drops happen randomly, and when I ping point-to-point I'm not able to detect any loss.
LOGS:

05:00:20.847: %TRACK-6-STATE: 11 ip sla 11 reachability Up -> Down
05:00:20.847: %TRACK-6-STATE: 21 ip sla 21 reachability Up -> Down
05:00:20.847: %TRACK-6-STATE: 31 ip sla 31 reachability Up -> Down
05:00:55.899: %TRACK-6-STATE: 21 ip sla 21 reachability Down -> Up
05:01:00.899: %TRACK-6-STATE: 11 ip sla 11 reachability Down -> Up
05:01:00.899: %TRACK-6-STATE: 31 ip sla 31 reachability Down -> Up
06:38:26.360: %TRACK-6-STATE: 11 ip sla 11 reachability Up -> Down
06:38:26.360: %TRACK-6-STATE: 21 ip sla 21 reachability Up -> Down
06:38:26.360: %TRACK-6-STATE: 31 ip sla 31 reachability Up -> Down
06:39:06.408: %TRACK-6-STATE: 11 ip sla 11 reachability Down -> Up
06:39:06.408: %TRACK-6-STATE: 21 ip sla 21 reachability Down -> Up
06:39:06.408: %TRACK-6-STATE: 31 ip sla 31 reachability Down -> Up

11   ip sla 11 reachability   Up   00:48:28
21   ip sla 21 reachability   Up   00:48:28
31   ip sla 31 reachability   Up   00:48:28

#ping 10.1.2.1 source 10.1.2.2 repeat 5000 size 1500 df-bit
(cut)
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Success rate is 100 percent (5000/5000), round-trip min/avg/max = 1/2/32 ms

#sh policy-map int g0/1
 GigabitEthernet0/1
  Service-policy output: etm-GrupoSalinas-Elektra-lima
    Class-map: class-default (match-any)
      261661475791 packets, 146841546500637 bytes
      30 second offered rate 11024000 bps, drop rate 0000 bps
      Match: any
      Queueing
      queue limit 4096 packets
      (queue depth/total drops/no-buffer drops) 0/1011428/0   <--- not incrementing as of now
      (pkts output/bytes output) 261660459030/152073955468691
      shape (average) cir 30000000, bc 140000, be 0
      target shape rate 30000000
      Overhead Accounting Enabled

There was no congestion at the time the total drop counter increased.. could be bursty traffic..
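For reference, to correlate the track flaps with the drop counter I have been checking roughly the following (probe 11 and G0/1 are taken from the output above; the other probes are checked the same way):

show track
show ip sla statistics 11
show policy-map interface GigabitEthernet0/1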
Here's the QOS policy:
Router in question:

policy-map parent-policy
 class class-default
  shape average 30000000 140000 0 account user-defined 20
  queue-limit 4096 packets
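As a rough sanity check on the burstiness angle (my own back-of-the-envelope arithmetic from the Bc/CIR above, so please correct me if I am reading the shaper wrong):

Tc = Bc / CIR = 140,000 bits / 30,000,000 bps ≈ 4.7 ms
Credit per Tc = 140,000 bits ≈ 17,500 bytes, i.e. about 11-12 full 1500-byte frames per interval

So any burst bigger than that per interval has to wait in the 4096-packet queue and could be dropped if the queue ever overflows.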
At the other site, nested QoS is implemented. Should I replicate that on this site? Is the purpose of the child policy to prioritize the traffic and smooth out the connection?
Working site:

policy-map parent-policy
 class class-default
  shape average 150000000 600000 0
  service-policy child-policy   <-- Child
!
policy-map child-policy
 class realtime
  priority
 class priority
  bandwidth remaining percent 40
  random-detect dscp-based
 class missioncritical
  bandwidth remaining percent 39
  random-detect dscp-based
 class transactional
  bandwidth remaining percent 16
  random-detect dscp-based
 class general
  bandwidth remaining percent 1
  random-detect dscp-based
 class class-default
  bandwidth remaining percent 4
  random-detect dscp-based

10   ip sla 10 reachability   Up   1w2d   <-- Stable
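If I were to replicate it here, I imagine it would look roughly like this (just a sketch: the child-policy and its class-maps would be copied from the working site, and I am not sure whether the queue-limit stays under the parent class or has to move into the child classes):

policy-map parent-policy
 class class-default
  shape average 30000000 140000 0 account user-defined 20
  service-policy child-policy   ! same child-policy as the working site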
Thank you
09-23-2021 01:32 AM
It depends on where the QoS policy is applied. Are you using the same interface as the IP SLA source?
09-23-2021 03:00 AM
@balaji.bandi , yes, the source interface of the IP SLA is the one with the QoS drops.
09-23-2021 08:18 AM
Just to make the issue clearer to understand: remove the QoS policy for testing. What was the outcome?
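For example, something along these lines (assuming the policy is attached outbound on G0/1 as shown in your output; reapply it once the test is done):

interface GigabitEthernet0/1
 no service-policy output etm-GrupoSalinas-Elektra-lima
 ! ... run the ping / IP SLA tests ...
 service-policy output etm-GrupoSalinas-Elektra-lima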
09-23-2021 09:16 AM
That I need to check. BTW, are the values that I'm using still acceptable?
ip sla 1
icmp-echo x.x.x.x source-interface x/y
threshold 1000
timeout 2000
frequency 3
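For completeness, this is roughly how the probe is scheduled and tracked on my side; the delay values below are just an example to dampen brief flaps, not what is configured right now:

ip sla schedule 1 life forever start-time now
!
track 1 ip sla 1 reachability
 delay down 10 up 20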
09-23-2021 07:18 AM
"Is there any possibility that that drops on output queue affects the IP SLA probe when sending icmp-echo?"
Sure, but often the idea behind SLA is to "see" issues happening to your normal/production traffic too.
"Drop is randomly happening and when I ping point-to-point I'm not able to detect any drops."
The "random" part is likely why you don't catch it with your manual ping tests, another purpose of SLA is to catch "random", or more likely occasionally, issues. Also, is it possible SLA drops are not on interface, but further downstream and/or on return path?
"From other site, Nested QOS is being implemented. Should I replicate this on this site and this purpose of this child policy is prioritize the traffic and smoothen the connection?"
That depends on the QoS policy you wish to support. The policy with the child policy does prioritize different kinds of traffic differently. As to "smoothing", not so much with that policy, except for some of the higher-prioritized classes. Even then, WRED can be quirky. I generally recommend against using it unless you're a QoS expert, as WRED isn't as simplistic as its usual documentation would lead you to believe.
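If you do replicate the child policy but want to leave WRED out, a minimal variant might look like this (same class names and percentages as your working site, minus random-detect; just a sketch, not a recommendation of specific values):

policy-map child-policy
 class realtime
  priority
 class priority
  bandwidth remaining percent 40
 class missioncritical
  bandwidth remaining percent 39
 class transactional
  bandwidth remaining percent 16
 class general
  bandwidth remaining percent 1
 class class-default
  bandwidth remaining percent 4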
09-23-2021 10:47 AM
Hello,
just to be sure (and I hope I am reading this correctly): you are shaping to 30 Mbps on a GigabitEthernet link? 30 Mbps seems low by today's standards...