
QoS packet drops

We have a 3945 router connected to our service provider's MPLS cloud. A 100 Mb circuit is delivered via Ethernet and terminated on the Gi0/0 interface, which is configured for 100 Mb/full duplex and has QoS applied. We are noticing a large number of dropped packets on the interface, and they are continually incrementing. After looking at the policy map, we noticed that our QoS class maps show packet drops, and when added up they equal the packet drop total on the interface. Our monitoring tools show interface utilization staying in the 30% to 40% range, so congestion does not appear to be an issue.

What could be the cause of this? Could it be the default queue limit of 64 packets on the class maps? Should that number be increased? TAC wants us to remove QoS from the interface to see if the packet drops stop, but we don't want to risk causing a larger issue. I have provided some of the configuration below for details; you can see the large number of packet drops. Any suggestions would be great.

interface GigabitEthernet0/0
 bandwidth 100000
 ip address 192.168.X.X 255.255.255.252
 no ip proxy-arp
 load-interval 30
 duplex full
 speed 100
 service-policy output outside-interface

------------------------------------------------------------------------------------------------------------------
GigabitEthernet0/0 is up, line protocol is up 
  Internet address is 192.168.X.X/30
  MTU 1500 bytes, BW 100000 Kbit/sec, DLY 100 usec, 
     reliability 255/255, txload 77/255, rxload 39/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full Duplex, 100Mbps, media type is RJ45
  output flow-control is unsupported, input flow-control is unsupported
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:00, output 00:00:00, output hang never
  Last clearing of "show interface" counters never
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 6184419
  Queueing strategy: Class-based queueing
  Output queue: 0/1000/6183594 (size/max total/drops)
  30 second input rate 15641000 bits/sec, 8770 packets/sec
  30 second output rate 30238000 bits/sec, 9094 packets/sec
     26022084196 packets input, 7636292617079 bytes, 0 no buffer
     Received 721265 broadcasts (0 IP multicasts)
     0 runts, 0 giants, 0 throttles 
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog, 687008 multicast, 0 pause input
     29836277613 packets output, 13811234076080 bytes, 0 underruns
     0 output errors, 0 collisions, 12 interface resets
     687003 unknown protocol drops
     0 babbles, 0 late collision, 0 deferred
     10 lost carrier, 0 no carrier, 0 pause output
     0 output buffer failures, 0 output buffers swapped out

-------------------------------------------------------------------------------------------------------------------

GigabitEthernet0/0 

  Service-policy output: outside-interface

    queue stats for all priority classes:
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 0/0/0
      (pkts output/bytes output) 299068133/22978214688

    Class-map: VOICE_LIST (match-all)  
      299068136 packets, 22978214836 bytes
      30 second offered rate 113000 bps, drop rate 0000 bps
      Match:  dscp ef (46)
      Match: access-group name VOICE_PORT_LIST
      Priority: 10% (10000 kbps), burst bytes 250000, b/w exceed drops: 0
      

    Class-map: SIGNALING_LIST (match-any)  
      20247018 packets, 11952090960 bytes
      30 second offered rate 38000 bps, drop rate 0000 bps
      Match: ip dscp cs3 (24)
        5500848 packets, 7190829433 bytes
        30 second rate 0 bps
      Match: access-group name SIGNALING_PORT_LIST
        14746170 packets, 4761261527 bytes
        30 second rate 38000 bps
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 0/261/0
      (pkts output/bytes output) 20246748/11951729025
      bandwidth 5% (5000 kbps)

    Class-map: QOS_COS4_APP (match-any)  
      20217105080 packets, 7703169856381 bytes
      30 second offered rate 14219000 bps, drop rate 3000 bps
      Match: access-group name COS4_APP
        20217105074 packets, 7703169886153 bytes
        30 second rate 14219000 bps
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 0/1104630/0
      (pkts output/bytes output) 20216000046/7702297837698
      QoS Set
        dscp af21
          Packets marked 20217105086
      bandwidth 25% (25000 kbps)

    Class-map: CONTROL_DATA_LIST (match-any)  
      2355131 packets, 142344105 bytes
      30 second offered rate 0000 bps, drop rate 0000 bps
      Match: access-group name CONTROL_DATA_LIST
        2355131 packets, 142344105 bytes
        30 second rate 0 bps
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 0/0/0
      (pkts output/bytes output) 2355131/142344105
      QoS Set
        dscp af22
          Packets marked 2355131
      bandwidth 5% (5000 kbps)

    Class-map: class-default (match-any)  
      9304431490 packets, 6080631255346 bytes
      30 second offered rate 10813000 bps, drop rate 8000 bps
      Match: any 
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 0/5079115/0
      (pkts output/bytes output) 9299351923/6074164654781
      QoS Set
        dscp default
          Packets marked 9301920701
      bandwidth 30% (30000 kbps)


12 Replies

Joseph W. Doherty
Hall of Fame


First, I want to comment that monitoring showing only 30 to 40% utilization often indicates little, because the monitoring period is usually measured in minutes while congestion can happen in microseconds to milliseconds.

What's of interest in your posted stats is that two classes show active drops but neither shows queued packets, so you might be dealing with microburst packet loss. If so, increasing your queue limits might decrease the drop rates. (NB: 64 packets is probably too shallow for 100 Mbps. For optimal TCP transmission rates, I've often found a queue limit that supports up to about half the BDP works well.)
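(For reference, the queue limit is changed per class under the policy-map; the 128 below is only there to illustrate the syntax, not a recommended value:)

policy-map outside-interface
 class class-default
  queue-limit 128 packets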

 

I understand what you are saying about microburst packet loss. We do want to look at adjusting the queue size; however, we don't want to take a chance on depleting resources on the router. What concerns are there around that? We don't want to just begin increasing the queue without any direction. You mentioned that you've often found a queue limit that supports up to about half the BDP works well; can you elaborate on that? I appreciate all your input.


BDP (bandwidth delay product)

Assuming you had about 50 ms end-to-end latency, your BDP would be 100 Mbps * 0.05 s = 5 Mb = 625 KB; 625 KB / 1,500 B ≈ 417 packets; 417 / 2 ≈ (about) 208 packets. (NB: That would be roughly the maximum you might need, in aggregate, across all your class queues.)

As to resources, as long as you have sufficient free RAM, you should be able to allocate what you need.
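Written as a general rule of thumb, that same calculation is roughly:

queue limit (packets) ≈ (link rate in bps * RTT in seconds) / (8 * MTU in bytes) / 2
e.g. (100,000,000 * 0.05) / (8 * 1,500) / 2 ≈ 208 packets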

I have 5 class queues, so would I take the 208 in your example above and divide it out across my queues? Do you usually split it as equally as possible across all queues, or do you give certain queues more depth? This has been really helpful.


I usually assign them where I expect or find they are needed; in your case, the two classes showing the highest drop rates. (NB: You can also increase the queue depth for your SIGNALING_LIST class. I would expect that to be light bandwidth usage, but packets you really want to avoid dropping. Don't concern yourself so much with "sharing" its queue limit with the classes that have much heavier bandwidth consumption.)
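(Building on the earlier snippet, and purely to illustrate putting depth where it's needed, that might look something like the following; the actual numbers are the subject of the rest of this thread, not these:)

policy-map outside-interface
 class QOS_COS4_APP
  queue-limit 128 packets
 class class-default
  queue-limit 128 packets
 class SIGNALING_LIST
  queue-limit 96 packets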

Final question: I did the math for our circuit and came up with 230 packets. Based on what you stated, would I divide that amongst the two classes that are having issues and increase both of them to 115? This would leave all the others at 64. Is that in line with what you suggest?


It's an engineering judgement call.

What you really want to do cannot be configured.

To allow TCP in any one class to hit full rate, you may need your full 230 packets. The problem is, you have more than one class. If you split the packets across classes, you guarantee the "right size" for the aggregate traffic, but rarely do all classes demand their full allocations, and when they don't, other classes that could use the packets don't get them.

Conversely, if you distribute more than 230 packets, it's possible for the aggregate to queue packets that should be dropped.

So buffer allocation is an oversubscription type game. Best for any one class would be 230 packets. Best for all classes, as an aggregate, is 230 divided between them. Best for real-world use is usually somewhere between those two extremes.

You just need to experiment.  Determine what appears to work best for you, most of the time, between the two extremes.
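To put illustrative numbers on those extremes, using the 230-packet figure and the two classes showing drops (these are not recommendations, just the range to experiment within):

  one extreme:   queue-limit 230 on each dropping class (best case for any single class, but the aggregate can over-buffer)
  other extreme: 230 split between them, e.g. 115 each (aggregate capped at the BDP-based figure, but neither class can use the full depth alone)
  middle ground: somewhere between those values, found by experiment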

BDP would be 100 Mbps * 0.05 s = 5 Mb = 625 KB; 625 KB / 1,500 B ≈ 417 packets; 417 / 2 ≈ (about) 208 packets.

 

Can you please explain the above mathematics? I really want to understand how the value ends up in KB and then why the total packet count is divided by 2.

I appreciate your response. In my case I am stuck in the same scenario and want to calculate the same thing.

 

Thanks in advance.

100 Mbps times 50 ms: 100,000,000 bits in 0.05 seconds = 5,000,000 bits
5,000,000 bits divided by 8 bits per byte = 625,000 bytes
625,000 bytes divided by 1,500 bytes per packet = 416 and 2/3 packets
The reason for halving 417 (416 and 2/3 rounded up) is that I believe (although I might be mistaken) the TCP sender will have sent only half the BDP's worth of packets right before it opens its transmission window to the full BDP. If that's correct, a router should not see more than half the BDP's number of packets in its transmission queue.

This does not take into account the difference between how fast the router receives packets and how fast it can transmit them. The transmission rate will be slower (otherwise packets wouldn't queue), but how quickly packets queue depends on the delta between the receive and send rates.
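As a purely hypothetical example of that delta: if traffic arrives toward the interface at 120 Mbps for 10 ms while the line drains at 100 Mbps, roughly (120,000,000 - 100,000,000) bits/s * 0.01 s / (8 * 1,500 B) ≈ 17 packets build up in the queue during that burst.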

Thank you for all your input. Cisco TAC basically told me to just double the queue size for the classes, as long as I do not exceed the queue size of the interface. For example, they said to increase the default size to 128, then 256, and so on, but to make sure the aggregate does not exceed the interface's 1000-packet queue size; otherwise I would need to increase the queue size of the interface as well. They also stated I should look into adding shaping to reduce the bursts and limit transmission to 100 Mbps. Your approach is much more methodical and just makes more sense. I will be taking your approach and proceeding with caution.

Here was part of TAC's response:

 

  • The queue-limit needs to be adjusted depending on how many packets the class is receiving, so it can be tuned to the proper size.

We recommend doubling the queue-limit, and eventually the drops should disappear; you can go ahead and increase the queue for these 2 class-maps.

 

  • When you apply QoS to the interface, the queueing strategy changes to class-based queueing and the interface can queue up to 1000 packets. If the total size of your QoS class queues will exceed 1000, you need to adjust the interface queue higher than the total of the class queues; you can increase it to 2000, for instance (see the example after this list).
  • Shaping is a method to adjust the link's transmit rate. Currently the link does not have this limitation, so the interface thinks it can transmit at more than 100 Mbps. Shaping also smooths the traffic to reduce bursts; any packets that would exceed 100 Mbps are delayed and transmitted later.
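(For reference, the interface-level output queue TAC refers to is adjusted with the hold-queue command; the 2000 value is simply their example figure:)

interface GigabitEthernet0/0
 hold-queue 2000 out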


TAC must have missed the fact that your gig interface is already running at 100 Mbps.  Shaping would only make sense if the interface's physical bandwidth is more than some logical bandwidth cap.
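(For completeness, in the sub-rate case where shaping would make sense, e.g. a port running at 1 Gbps carrying a 100 Mbps contracted service, a hierarchical shaper would look roughly like the sketch below. The policy name is made up, and again this is not needed here, since the port itself is already locked to 100 Mbps:)

policy-map SHAPE-100M
 class class-default
  shape average 100000000
  service-policy outside-interface
!
interface GigabitEthernet0/0
 service-policy output SHAPE-100M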

As to their suggestion to keep doubling your queue depths, that's a quick and dirty technique, but it does have some disadvantages.

If you create too large a queue, packets will be queued beyond your BDP. This just adds latency, and it can force massive drops if a TCP flow is in slow start and doubles its transmission window size.

Although we often like to see no drops, drops provide traditional TCP's flow management control. Zero drops is not always the ideal, although we want as few as possible, i.e. the minimum while still providing full-rate transfers.
