
Output drops and Service policies

joshhboss
Level 1

I have a port config that I've been using for a while (admittedly, I didn't write the service policies or the port config), and lately I've been paying more attention to the interfaces. I was wondering whether the settings on this port make sense given the high level of output drops. I'm going to post the service policies as they were written, the port config, and the show interface output for the port in question.


Thanks in advance 


class-map match-all default_traffic
  description this is to mark traffic for a policy map
 match access-group 111
!
policy-map 20Megs_in_policy
 class default_traffic
  police 20000000 1000000 exceed-action drop
policy-map 3Megs_in_policy
 class default_traffic
  police 3000000 1000000 exceed-action drop
policy-map 30Megs_in_policy
 class default_traffic
  police 30000000 1000000 exceed-action drop
policy-map 15Megs_in_policy
 class default_traffic
  police 15000000 1000000 exceed-action drop
policy-map 50Megs_in_policy
 class default_traffic
  police 50000000 1000000 exceed-action drop
policy-map 5Megs_in_policy
 class default_traffic
  police 5000000 1000000 exceed-action drop
policy-map 100Megs_in_policy
 class default_traffic
  police 100000000 1000000 exceed-action drop
policy-map 1Meg_in_policy
 class default_traffic
  police 1000000 1000000 exceed-action drop
policy-map 150Megs_in_policy
 class default_traffic
  police 150000000 1000000 exceed-action drop
policy-map 25Megs_in_policy
 class default_traffic
  police 25000000 1000000 exceed-action drop
policy-map 10Megs_in_policy
 class default_traffic
  police 10000000 1000000 exceed-action drop
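(Note: ACL 111, which class-map default_traffic matches, isn't included above. For illustration only, a hypothetical catch-all version that would subject all IP traffic to the policers might look like this:)

! Hypothetical stand-in for ACL 111 -- the actual ACL wasn't posted.
access-list 111 permit ip any any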

interface GigabitEthernet1/0/1
 description DHCP-Er6
 switchport access vlan 130
 switchport mode access
 switchport protected
 bandwidth 100000
 ip device tracking maximum 1
 speed auto 10 100
 srr-queue bandwidth share 1 255 1 1
 srr-queue bandwidth limit 35
 queue-set 2
 storm-control broadcast level 1.00
 storm-control multicast level 10.00
 storm-control action shutdown
 storm-control action trap
 service-policy input 50Megs_in_policy

GigabitEthernet1/0/1 is up, line protocol is up (connected) 
  Hardware is Gigabit Ethernet, address is c8f9.f99a.a681 (bia c8f9.f99a.a681)
  Description: DHCP-Er6
  MTU 1500 bytes, BW 100000 Kbit/sec, DLY 100 usec, 
     reliability 255/255, txload 41/255, rxload 2/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 100Mb/s, media type is 10/100/1000BaseTX
  input flow-control is off, output flow-control is unsupported 
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input never, output 00:00:01, output hang never
  Last clearing of "show interface" counters 00:05:52
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 4447
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 958000 bits/sec, 618 packets/sec
  5 minute output rate 16397000 bits/sec, 1621 packets/sec
     207626 packets input, 39594902 bytes, 0 no buffer
     Received 0 broadcasts (0 multicasts)
     0 runts, 0 giants, 0 throttles 
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog, 0 multicast, 0 pause input
     0 input packets with dribble condition detected
     543870 packets output, 677681713 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 unknown protocol drops
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 pause output
     0 output buffer failures, 0 output buffers swapped out

Is there something wrong, or is the port doing exactly what it should? I was seeing whether the drops were more than 1%, and it looks like they are pretty dang close. Thank you.
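(From the counters above: 4447 total output drops out of 543870 packets output is roughly 0.8%, accumulated over the 5 minutes 52 seconds since the counters were cleared.)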

5 Replies

Joseph W. Doherty
Hall of Fame

Do your QoS policies make sense? I cannot say unless you define what your QoS goals are.

As to reviewing the config, it would be helpful if you described the platform. It appears to be a switch config, but the exact model and the software it's running determine which QoS features are available (these can vary greatly between switch platforms).

BTW, you're asking about the impact of your policies, which are ingress policies, but the drops are, I believe, for egress, not ingress. Further, on your device, if it's a switch, we need additional detailed egress queue stats to see where your total egress drops are happening.
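For example, if QoS ("mls qos") is enabled, commands along these lines should report per-queue enqueue and drop counters (interface name taken from your paste):

show mls qos
show mls qos interface gigabitEthernet 1/0/1 statistics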


Those policies are just to regulate the end clients that are hard-wired in. We only get a finite amount of bandwidth for the entire event, so my friend the "engineer" lol provided me with those configs so that I could limit the upward traffic, and the srr-queues are there to do the same.

And I'm using old school gear: a WS-C2960S-48LPS-L running C2960S-UNIVERSALK9-M.

But I love them, and I've been able to learn a great deal with them. Yes, they are a tad older, but they really do run great, even for a beginner like me.

In this case, if the interface is for a client, the srr-queue limit command would mainly control your "down" bandwidth, i.e. it would have little impact on your "upward" traffic. However, the ingress policers would regulate that. Also, rather than regulating "upward" bandwidth at each client's ingress port, possibly a better approach would be to regulate the aggregate ingress bandwidth from clients (unsure whether the 2960 supports that) or to manage egress on the actual uplink port (to the event); a sketch follows.
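As a minimal sketch of the uplink idea, reusing the same egress shaper already on your client port (the port number and percentage here are hypothetical, not from your config):

! Hypothetical uplink port toward the event feed.
! srr-queue bandwidth limit shapes total egress to a percentage (10-90)
! of the port's line rate, e.g. 50 on a 100 Mb/s link is roughly 50 Mb/s.
interface GigabitEthernet1/0/48
 srr-queue bandwidth limit 50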

Just a few other considerations.  For 2960 QoS configuration statements to work, QoS needs to be enabled.
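For reference, QoS is toggled globally with "mls qos" and verified with "show mls qos" (output abbreviated):

SiteOpsOfficeSw204(config)#mls qos
SiteOpsOfficeSw204(config)#end
SiteOpsOfficeSw204#show mls qos
QoS is enabled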

"srr-queue bandwidth limit 35" is an egress port "shaper", although somewhat imprecise.   Basically, egress rate will be "shaped" at that % of physical interface bandwidth.

"srr-queue bandwidth share 1 255 1 1"  This command allocates the greatest ratio of egress bandwidth to the 2nd queue, but if, in fact, all (or almost) traffic uses just that queue, this command doesn't matter.  Often of greater benefit is increasing logical queue limits to max and/or adjusting how buffers are allocated (more important on a 2960 as they have less physical buffer space then the similar vintage 3560/3750s).

Increased buffer allocations often exchange some additional queuing latency for reduced drops.


joshhboss
Level 1

It's now over 1%. I'm going to have to try to do something tomorrow morning. I can't even TDR-test the cable right now; they are still working.

GigabitEthernet1/0/1 is up, line protocol is up (connected) 
  Hardware is Gigabit Ethernet, address is c8f9.f99a.a681 (bia c8f9.f99a.a681)
  Description: DHCP-Er6
  MTU 1500 bytes, BW 100000 Kbit/sec, DLY 100 usec, 
     reliability 255/255, txload 12/255, rxload 1/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 100Mb/s, media type is 10/100/1000BaseTX
  input flow-control is off, output flow-control is unsupported 
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input never, output 00:00:01, output hang never
  Last clearing of "show interface" counters 06:53:17
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 560132
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 65000 bits/sec, 89 packets/sec
  5 minute output rate 5014000 bits/sec, 447 packets/sec
     15806560 packets input, 2625940698 bytes, 0 no buffer
     Received 0 broadcasts (0 multicasts)
     0 runts, 0 giants, 0 throttles 
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog, 0 multicast, 0 pause input
     0 input packets with dribble condition detected
     39000236 packets output, 48842499762 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 unknown protocol drops
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 pause output
     0 output buffer failures, 0 output buffers swapped out
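(That's 560132 drops out of 39000236 packets output, roughly 1.4%, over the 6 hours 53 minutes since the counters were last cleared.)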

I honestly don't know how to get the stats.

Maybe...

SiteOpsOfficeSw204#show interfaces gigabitEthernet 1/0/1 stats 
GigabitEthernet1/0/1
          Switching path    Pkts In   Chars In   Pkts Out  Chars Out
               Processor          0          0      16623    1537626
             Route cache          0          0          0          0
                   Total          0          0      16623    1537626
SiteOpsOfficeSw204#show interfaces gigabitEthernet 1/0/1 counters 

Port            InOctets    InUcastPkts    InMcastPkts    InBcastPkts 
Gi1/0/1       2626914352       15817048              0              0 

Port           OutOctets   OutUcastPkts   OutMcastPkts   OutBcastPkts 
Gi1/0/1      48914682475       38467641         238499         347677 
SiteOpsOfficeSw204#show interfaces gigabitEthernet 1/0/1 stats    
GigabitEthernet1/0/1
          Switching path    Pkts In   Chars In   Pkts Out  Chars Out
               Processor          0          0      16625    1537754
             Route cache          0          0          0          0
                   Total          0          0      16625    1537754
SiteOpsOfficeSw204#

Again, determine whether QoS is enabled. If so, you can obtain more detailed stats.

BTW, without egress QoS tuning, having QoS disabled often works "better". (This is because, on these devices, when QoS is enabled, it "reserves" buffers for egress queues and other interfaces [which is why QoS tuning might be of benefit].)
