1732 Views · 9 Helpful · 6 Replies

QoS-drops

Anukalp S
Level 1

Hi, I have configured a service policy on a tunnel interface, but I am seeing packets being dropped in the default class even though bandwidth is not fully utilized.

I am running a 20 Mbps MPLS circuit and have configured a GRE tunnel between two locations over the MPLS circuit. I have assigned 15 Mbps to FTP traffic.

Please tell me why there are drops even though link bandwidth is mostly available.

-----------------------------------------------------------------------

class-map match-any ftp
 match ip dscp af21
!
!
policy-map police
 class ftp
    bandwidth 15000
policy-map shape
 class class-default
    shape average 20000000
  service-policy police
 

interface Tunnel1
 service-policy output shape

-------------------------------------------------------------------------------

Router#sh policy-map interface t1
 Tunnel1

  Service-policy output: shape

    Class-map: class-default (match-any)
      2056697 packets, 1360260464 bytes
      5 minute offered rate 566000 bps, drop rate 0 bps
      Match: any
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 0/424/0
      (pkts output/bytes output) 2075327/1394463992
      shape (average) cir 20000000, bc 80000, be 80000
      target shape rate 20000000

      Service-policy : police

        Class-map: ftp (match-any)
          0 packets, 0 bytes
          5 minute offered rate 0 bps, drop rate 0 bps
          Match: ip dscp af21 (18)
            0 packets, 0 bytes
            5 minute rate 0 bps
          Queueing
          queue limit 64 packets
          (queue depth/total drops/no-buffer drops) 0/0/0
          (pkts output/bytes output) 0/0
          bandwidth 15000 kbps

        Class-map: class-default (match-any)
          2056697 packets, 1360260464 bytes
          5 minute offered rate 566000 bps, drop rate 0 bps
          Match: any

          queue limit 64 packets
          (queue depth/total drops/no-buffer drops) 0/424/0
          (pkts output/bytes output) 1952029/1304014921

----------------------------------------------------------------------------------------

 

 


6 Replies

Hello.

You might be seeing the drops due to traffic bursts.

Under class-default you have only a 64-packet queue, so whenever it fills, you will observe packet loss. You may increase the queue size, but doing so would increase queueing delay during congestion.

PS: it is expected to see some drops when you configure QoS.
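If you do decide to enlarge the queue, a minimal sketch of the change (the value 128 is just an illustration; this assumes your IOS version accepts queue-limit under an MQC class — verify on your platform):

-----------------------------------------------------------------------

policy-map police
 class class-default
  queue-limit 128

-----------------------------------------------------------------------

You can confirm the change afterwards with "show policy-map interface Tunnel1" — the "queue limit" line under class-default should then report 128 packets instead of 64.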

Thanks Joseph & Vasilii. So you don't suggest increasing the queue size in this case, since packet drops in this amount are expected?

Hello.

You are running a 20 Mbps link, and the maximum packet size is 1518 bytes.

So if the whole queue is filled with those 1518-byte packets, the queueing delay would be 64 * 1518 * 8 / 20,000,000 ≈ 39 ms (for BE traffic only), and about 156 ms if class-default has to compete with FTP and is left with only 5 Mbps (though I see no matches for the ftp class).

If you set the queue size to 128, the maximum queueing delay would become 78 ms (for BE traffic only).

If a 78 ms queueing delay suits you, you may increase the queue size, but as Joseph wrote, 424 drops out of more than 2 million packets is an excellent result, and there is no good reason to "improve" it.

Disclaimer

The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.

Liability Disclaimer

In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.

Posting

As Vasilii has described, you can compute the additional delay you might impose by increasing the queue limit (NB: keep in mind, though, that queue depth is counted in packets, so if the same number of packets were smaller in size, the maximum latency would be lower).

If you want to allow a single TCP flow to be able to use all the bandwidth, you'll need to ensure your queue can handle about half the BDP (bandwidth-delay product).

Doing so, again, risks increasing the possible maximum latency. However, using QoS, you can sometimes get the best of both: low latency for lightweight flows and maximum transfer rate for bandwidth hogs.
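As a rough worked example of that sizing (the 100 ms RTT here is an assumption — measure your own path): BDP = 20,000,000 b/s * 0.1 s = 2,000,000 bits = 250,000 bytes, which at 1518-byte packets is about 165 packets, so half the BDP is roughly 82 packets:

-----------------------------------------------------------------------

policy-map police
 class class-default
  queue-limit 82

-----------------------------------------------------------------------

With a shorter RTT the half-BDP figure shrinks proportionally, so the default 64-packet queue may already be adequate on low-latency paths.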

Joseph W. Doherty
Hall of Fame


Just to add to what Vasilii has already posted.

Although you have drops, they seem to be at a very low percentage.

A 64-packet queue is probably too small for 20 Mbps over anything but a LAN. Often you want at least about half the BDP (bandwidth-delay product).

Anukalp S
Level 1

Thanks Joseph & Vasilii for making things clear to me.
