05-01-2020 07:33 AM
Hello. At our head end we have a 100 Mbps MPLS circuit, with 18 remote MPLS sites at varying speeds, several of them recently upgraded to 50 and 20 Mbps. On the head-end MPLS interface, SolarWinds reporting shows at most 10 Mbps outbound to those sites, and an iperf test confirms the same. Egress from the remote sites shows the correct outbound TCP speeds, but ingress maxes out at 10 Mbps. Our vendor, CTL, has pointed the finger at packet loss/drops on our head end. When I do a show policy-map on the head-end MPLS interface, the class-default class-map shows quite a few flowdrops, while the other class-maps show none.
I thought QoS only goes into effect once the interface reaches capacity? We are currently only seeing about 11% utilization down and 16% up, so I wasn't expecting any flowdrops. Can someone help me understand this better? Below are the show policy-map output for our MPLS interface (after clearing the counters and letting it run for 5 minutes) and our QoS policy.
at-scmn-hqdc-mpls-rt01#sh policy-map int gi0/0/1
 GigabitEthernet0/0/1

  Service-policy output: wanedge

    Class-map: voicefromlan (match-any)
      2231140 packets, 448019011 bytes
      5 minute offered rate 5051000 bps, drop rate 0000 bps
      Match: ip dscp ef (46)
      Queueing
      queue limit 104 packets
      (queue depth/total drops/no-buffer drops) 0/0/0
      (pkts output/bytes output) 2231140/448019011
      bandwidth 25% (25000 kbps)
      QoS Set
        precedence 5
          Marker statistics: Disabled

    Class-map: cntrfromlan (match-any)
      10706 packets, 4037643 bytes
      5 minute offered rate 52000 bps, drop rate 0000 bps
      Match: ip dscp cs3 (24)
      Match: ip dscp af31 (26)
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 0/0/0
      (pkts output/bytes output) 10706/4037643
      bandwidth 10% (10000 kbps)
      QoS Set
        precedence 4
          Marker statistics: Disabled

    Class-map: rdpfromlan (match-any)
      2554174 packets, 590033055 bytes
      5 minute offered rate 6783000 bps, drop rate 0000 bps
      Match: access-group 101
      Queueing
      queue limit 145 packets
      (queue depth/total drops/no-buffer drops) 0/0/0
      (pkts output/bytes output) 2554174/590033055
      bandwidth 35% (35000 kbps)
      QoS Set
        precedence 2
          Marker statistics: Disabled

    Class-map: class-default (match-any)
      536367 packets, 452882473 bytes
      5 minute offered rate 4657000 bps, drop rate 116000 bps
      Match: any
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops/flowdrops) 12/10326/0/10326
      (pkts output/bytes output) 526031/441180043
      Fair-queue: per-flow queue limit 16 packets
      shape (average) cir 10000000, bc 100000, be 100000
      target shape rate 10000000
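A note on the class-default section above, since it turns out to be the crux of the thread: the shaper numbers shown there come straight from the wanedge policy below, and unlike the bandwidth percent statements (which only come into play when the interface is actually congested), a shaper is enforced at all times. The arithmetic, using only values already shown in this post:

! Worked numbers (nothing new measured here):
!   interface "bandwidth" statement : 100000 kbps
!   shape average percent 10        : 0.10 x 100000 kbps = 10000 kbps = 10 Mbps
! which is exactly the "cir 10000000 / target shape rate 10000000" shown above,
! and matches the ~10 Mbps outbound ceiling seen in SolarWinds and iperf.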
class-map match-any voicefromlan
 match ip dscp ef
class-map match-any rdpfromlan
 match access-group 101
class-map match-any voicefromwan
 match ip precedence 5
class-map match-any cntrfromwan
 match ip precedence 4
class-map match-any cntrfromlan
 match ip dscp cs3
 match ip dscp af31
!
policy-map lanedge
 class voicefromwan
  bandwidth percent 25
  set dscp ef
 class cntrfromwan
  bandwidth percent 10
  set dscp cs3
 class class-default
  fair-queue
policy-map wanedge
 class voicefromlan
  bandwidth percent 25
  set precedence 5
 class cntrfromlan
  bandwidth percent 10
  set precedence 4
 class rdpfromlan
  bandwidth percent 35
  set precedence 2
 class class-default
  fair-queue
  shape average percent 10
interface GigabitEthernet0/0/1
 description CID- CenturyLink ETH1000-XXXX
 bandwidth 100000
 ip flow monitor NetFlowMonitor1 input
 ip flow monitor NetFlowMonitor1 output
 ip address X.X.X.X X.X.X.X
 negotiation auto
 service-policy output wanedge
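Since shape average percent is calculated against the interface's bandwidth 100000 statement, changing either value moves the class-default ceiling. If a fixed ceiling is preferred over a percentage, the class could be shaped to an explicit rate instead; a minimal sketch, assuming 20 Mbps (20000000 bps) is the ceiling actually wanted, which is my assumption for illustration rather than anything stated in the thread:

policy-map wanedge
 class class-default
  ! replace the percentage-based shaper with a fixed rate (fair-queue stays as is)
  no shape average percent 10
  shape average 20000000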
05-01-2020 07:35 AM
I will also note that when I do a packet capture I see a lot of TCP fast retransmissions and duplicate acknowledgments, which I think lines up with the QoS policy dropping packets.
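If it helps to confirm this on the router itself rather than from an end host, IOS-XE Embedded Packet Capture can grab the same traffic at the WAN edge. A rough sketch only; exact EPC syntax varies by release, and the capture name CAP and export filename are placeholders:

monitor capture CAP interface GigabitEthernet0/0/1 both
monitor capture CAP match any
monitor capture CAP start
! ... reproduce the iperf test, then ...
monitor capture CAP stop
monitor capture CAP export flash:CAP.pcap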
05-01-2020 08:03 AM
I also tried adjusting the queue limit to 600 packets (the change is sketched after the output below), and I'm still not seeing anywhere near the throughput I would expect: about 2 Mbps outbound via an iperf test to a site with a 20 Mbps circuit.
Class-map: class-default (match-any)
  143663 packets, 144814868 bytes
  5 minute offered rate 2992000 bps, drop rate 3000 bps
  Match: any
  Queueing
  queue limit 600 packets
  (queue depth/total drops/no-buffer drops/flowdrops) 14/204/0/204
  (pkts output/bytes output) 143456/144543620
  Fair-queue: per-flow queue limit 150 packets
  shape (average) cir 10000000, bc 100000, be 100000
  target shape rate 10000000
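For reference, the 600-packet change described above would look roughly like this under the wanedge policy; this is reconstructed from the output (the per-flow limit of 150 is a quarter of the 600-packet class limit, consistent with the earlier 64/16 pair), not the exact commands that were entered:

policy-map wanedge
 class class-default
  fair-queue
  queue-limit 600 packets
  shape average percent 10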
05-01-2020 10:08 AM
I was able to resolve this by adjusting the default class fair-queue to use an average shape percent of 20. I'm now getting my expected outbound traffic speeds.
However, I still have the final question: doesn't the QoS policy only start to work when the interface's bandwidth is reached? We are only using about 25% up/down on this interface throughout the day.
class class-default
 fair-queue
 shape average percent 20
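To put numbers on the fix: 20% of the interface's bandwidth 100000 statement is 20,000 kbps, so class-default is now shaped to 20 Mbps instead of 10 Mbps, enough to fill the 20 Mbps remote circuits. On the remaining question: the bandwidth percent statements are minimum guarantees that only matter under congestion, but shape average is a ceiling that is enforced no matter how idle the interface is, which is why flowdrops appeared at roughly 25% utilization. Worth keeping in mind that untagged traffic toward the recently upgraded 50 Mbps sites would still be capped at 20 Mbps by this class. One way to confirm the new ceiling took effect is to check that target shape rate now reads 20000000 (standard IOS syntax):

at-scmn-hqdc-mpls-rt01#show policy-map interface GigabitEthernet0/0/1 output class class-default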