Assistance Adjusting QoS fair-queue

Travis-Fleming
Level 1

Hello. We have a 100 Mbps MPLS circuit at our head end and 18 remote MPLS sites with varying speeds, several of which were recently upgraded to 50 and 20 Mbps. Both SolarWinds reporting on the head-end MPLS interface and an iperf test show that we can send at most 10 Mbps outbound to those sites. Egress from the remote sites shows the correct outbound TCP speeds, but ingress maxes out at 10 Mbps. Our vendor, CTL, has pointed the finger at packet loss/drops on our head end. When I do a show policy-map on our head-end MPLS interface, the class-default class map shows quite a bit of flowdrops, while the other classes in the policy do not.

 

I thought QoS only goes into effect when the interface reaches capacity? We are currently seeing only 11% down and 16% up, so I wasn't expecting to see flowdrops. Can someone help me understand this better? Below are our QoS policy and the show policy-map output for our MPLS interface, taken after clearing the counters and letting it run for 5 minutes.

 

at-scmn-hqdc-mpls-rt01#sh policy-map int gi0/0/1
GigabitEthernet0/0/1

Service-policy output: wanedge

Class-map: voicefromlan (match-any)
2231140 packets, 448019011 bytes
5 minute offered rate 5051000 bps, drop rate 0000 bps
Match: ip dscp ef (46)
Queueing
queue limit 104 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 2231140/448019011
bandwidth 25% (25000 kbps)
QoS Set
precedence 5
Marker statistics: Disabled

Class-map: cntrfromlan (match-any)
10706 packets, 4037643 bytes
5 minute offered rate 52000 bps, drop rate 0000 bps
Match: ip dscp cs3 (24)
Match: ip dscp af31 (26)
Queueing
queue limit 64 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 10706/4037643
bandwidth 10% (10000 kbps)
QoS Set
precedence 4
Marker statistics: Disabled

Class-map: rdpfromlan (match-any)
2554174 packets, 590033055 bytes
5 minute offered rate 6783000 bps, drop rate 0000 bps
Match: access-group 101
Queueing
queue limit 145 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 2554174/590033055
bandwidth 35% (35000 kbps)
QoS Set
precedence 2
Marker statistics: Disabled

Class-map: class-default (match-any)
536367 packets, 452882473 bytes
5 minute offered rate 4657000 bps, drop rate 116000 bps
Match: any
Queueing
queue limit 64 packets
(queue depth/total drops/no-buffer drops/flowdrops) 12/10326/0/10326
(pkts output/bytes output) 526031/441180043
Fair-queue: per-flow queue limit 16 packets
shape (average) cir 10000000, bc 100000, be 100000
target shape rate 10000000

 

class-map match-any voicefromlan
match ip dscp ef
class-map match-any rdpfromlan
match access-group 101
class-map match-any voicefromwan
match ip precedence 5
class-map match-any cntrfromwan
match ip precedence 4
class-map match-any cntrfromlan
match ip dscp cs3
match ip dscp af31
!
policy-map lanedge
class voicefromwan
bandwidth percent 25
set dscp ef
class cntrfromwan
bandwidth percent 10
set dscp cs3
class class-default
fair-queue
policy-map wanedge
class voicefromlan
bandwidth percent 25
set precedence 5
class cntrfromlan
bandwidth percent 10
set precedence 4
class rdpfromlan
bandwidth percent 35
set precedence 2
class class-default
fair-queue
shape average percent 10

 

interface GigabitEthernet0/0/1
description CID- CenturyLink ETH1000-XXXX
bandwidth 100000
ip flow monitor NetFlowMonitor1 input
ip flow monitor NetFlowMonitor1 output
ip address X.X.X.X X.X.X.X
negotiation auto
service-policy output wanedge
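
For reference when reading the show output above: shape average percent is taken as a percentage of the configured interface bandwidth, so the numbers already shown line up as follows (a sketch using only values present in the config and output above):

interface GigabitEthernet0/0/1
bandwidth 100000
! 100,000 kbps = 100 Mbps of configured interface bandwidth
!
policy-map wanedge
class class-default
fair-queue
shape average percent 10
! 10% of 100,000 kbps = 10,000,000 bps, which matches the
! "target shape rate 10000000" (10 Mbps) reported for class-default above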

Accepted Solution

I was able to resolve this by adjusting the default class to use shape average percent 20. I'm now getting my expected outbound traffic speeds.

 

However, I still have one final question: doesn't the QoS policy only start to take effect when the interface bandwidth is reached? We are only using about 25% up/down on this interface throughout the day.

 

class class-default
fair-queue
shape average percent 20
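
Assuming the same 100,000 kbps interface bandwidth as in the original post, percent 20 works out to a 20 Mbps shaper; a hypothetical equivalent using an explicit rate would be:

policy-map wanedge
class class-default
fair-queue
shape average 20000000
! 20% of 100,000 kbps = 20,000,000 bps (20 Mbps)

Either way, the class-default shaper remains the effective ceiling for any traffic not matched by the voice, control, or RDP classes.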


4 Replies

Travis-Fleming
Level 1

I will also note that when I do a packet capture I see a lot of TCP fast retransmissions and duplicate acknowledgements, which I think lines up with the QoS policy dropping packets.

I also tried adjusting our queue limit to 600 packets and am still not seeing anywhere near the throughput I would expect: about 2 Mbps outbound via an iperf test to a site with a 20 Mbps circuit. The resulting class-default output is below, and the configuration change itself is sketched after it.

 

Class-map: class-default (match-any)
143663 packets, 144814868 bytes
5 minute offered rate 2992000 bps, drop rate 3000 bps
Match: any
Queueing
queue limit 600 packets
(queue depth/total drops/no-buffer drops/flowdrops) 14/204/0/204
(pkts output/bytes output) 143456/144543620
Fair-queue: per-flow queue limit 150 packets
shape (average) cir 10000000, bc 100000, be 100000
target shape rate 10000000
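
For completeness, the queue-limit change referenced above is made under class-default in the wanedge policy; a minimal sketch (values taken from the output above):

policy-map wanedge
class class-default
fair-queue
queue-limit 600
! class queue depth of 600 packets; per the output above, the per-flow
! fair-queue limit follows at one quarter of this (600 / 4 = 150 packets)
shape average percent 10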


Joseph W. Doherty
Hall of Fame
Queue drops happen on millisecond (or shorter) time scales, so looking at "average" 5-minute utilization tells you very little. (Well, perhaps it does reveal a low average utilization due to such a high level of drops, or perhaps the TCP flows never really take advantage of the available bandwidth.)

You mention 100 Mbps and remote sites. Depending on your latency, you might be running into LFN (long fat network) issues, which often prevent TCP flows from obtaining the bandwidth that is available.

Looking at your first posted stats:

Class-map: class-default (match-any)
536367 packets, 452882473 bytes
5 minute offered rate 4657000 bps, drop rate 116000 bps
Match: any
Queueing
queue limit 64 packets
(queue depth/total drops/no-buffer drops/flowdrops) 12/10326/0/10326
(pkts output/bytes output) 526031/441180043
Fair-queue: per-flow queue limit 16 packets
shape (average) cir 10000000, bc 100000, be 100000
target shape rate 10000000

We see a 5-minute average drop rate, we see (at this instant) 12 packets enqueued, and we see per-flow queues of just 16 packets (likely way, way too small for 100 Mbps and non-LAN latency; are you familiar with BDP, the bandwidth delay product?).
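
To put a rough number on BDP (the round-trip time here is an assumption for illustration, not a measurement from this thread):

! Hypothetical example: assume ~30 ms RTT from the hub to a remote site.
! BDP at 20 Mbps:  20,000,000 bps x 0.030 s / 8  = ~75,000 bytes  (~50 x 1500-byte packets in flight)
! BDP at 100 Mbps: 100,000,000 bps x 0.030 s / 8 = ~375,000 bytes (~250 x 1500-byte packets in flight)
! A 16-packet per-flow queue can overflow long before a single TCP flow
! reaches the shaped rate, which shows up as the flowdrops above.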

"I thought QoS only goes into affect if the interface reaches capacity?"

It may, if that's what the QoS is tied to. However, traffic can also be impacted by shapers, which can "congest" on their own, well below the physical interface rate. That is why, when you increased the shaper's average bandwidth, you saw an improvement. (It was as if you provided a physical interface with twice the bandwidth.)

BTW, your QoS could be much improved, but how much depends on whether your MPLS vendor can offer any QoS support on egress from the MPLS cloud toward your sites. If not, you could still accomplish a great deal, assuming all of your traffic flows only between hub and spokes. If you can have spoke-to-spoke traffic, you really want MPLS QoS support too.
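
One common hub-and-spoke approach along those lines, sketched purely for illustration (the class name, ACL number, child policy name, and rate below are hypothetical, not taken from this thread), is to shape hub egress per destination site to that spoke's circuit rate:

class-map match-any site-a-traffic
match access-group 110
! hypothetical ACL matching Site A's destination subnets
!
policy-map wanedge-per-site
class site-a-traffic
shape average 20000000
! Site A's (hypothetical) 20 Mbps circuit rate
service-policy wanedge-child
! hypothetical child policy carrying the per-class queuing (voice, control, RDP)
class class-default
fair-queue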