20Gbps Port channel discards - Nexus 7k Default Policy

Sam Almeida
Level 1

Hi, I have 2x 10 Gbps links set up as a 20 Gbps port-channel. When the port-channel hits about 50 percent utilization - around 10 Gbps - I see discards on the interface. It looks like the traffic is being dropped by the Nexus 7k default queuing policy. What would be the recommended way to get this fixed?

Config

interface port-channel123
  switchport
  switchport access vlan 123
  mtu 9216

interface Ethernet7/1
  switchport
  switchport access vlan 123
  mtu 9216
  channel-group 123 mode active
  no shutdown

interface Ethernet1/1
  switchport
  switchport access vlan 123
  mtu 9216
  channel-group 123 mode active
  no shutdown

Nexus7k# show int po123
port-channel123 is up
admin state is up
  Hardware: Port-Channel, address: e02f.6da0.2abd (bia e02f.6da0.2abd)
  MTU 9216 bytes, BW 20000000 Kbit, DLY 10 usec
  reliability 255/255, txload 3/255, rxload 1/255
  Encapsulation ARPA, medium is broadcast
  Port mode is access
  full-duplex, 10 Gb/s
  Input flow-control is off, output flow-control is off
  Auto-mdix is turned off
  Switchport monitor is off
  EtherType is 0x8100
  Members in this channel: Eth1/1, Eth7/1
  Last clearing of "show interface" counters never
  2 interface resets
  Load-Interval #1: 30 seconds
    30 seconds input rate 59298680 bits/sec, 8040 packets/sec
    30 seconds output rate 301193920 bits/sec, 19097 packets/sec
    input rate 59.30 Mbps, 8.04 Kpps; output rate 301.19 Mbps, 19.10 Kpps
  Load-Interval #2: 5 minute (300 seconds)
    300 seconds input rate 341378536 bits/sec, 32048 packets/sec
    300 seconds output rate 552364904 bits/sec, 41829 packets/sec
    input rate 341.38 Mbps, 32.05 Kpps; output rate 552.36 Mbps, 41.83 Kpps
  RX
    11996078021 unicast packets  51496 multicast packets  70349 broadcast packets
    11996173986 input packets  19137490181632 bytes
    367490606 jumbo packets  0 storm suppression packets
    0 runts  0 giants  0 CRC/FCS  0 no buffer
    0 input error  0 short frame  0 overrun   0 underrun  0 ignored
    0 watchdog  0 bad etype drop  0 bad proto drop  0 if down drop
    0 input with dribble  1531155 input discard
    0 Rx pause
  TX
    9399605287 unicast packets  2172924 multicast packets  6 broadcast packets
    9401752337 output packets  14036625859282 bytes
    216013568 jumbo packets
    0 output error  0 collision  0 deferred  0 late collision
    0 lost carrier  0 no carrier  0 babble  0 output discard
    0 Tx pause

Nexus7k# show policy-map interface po123 input type queuing


Global statistics status :   enabled

port-channel123

  Service-policy (queuing) input:   default-4q-8e-in-policy
    SNMP Policy Index:  301991781

    Class-map (queuing):   2q4t-8e-in-q1 (match-any)
      queue-limit percent 10
      bandwidth percent 50
      queue dropped pkts : 0
      queue dropped bytes : 0
      queue transmit pkts: 302723181  queue transmit bytes: 48848337084

    Class-map (queuing):   2q4t-8e-in-q-default (match-any)
      queue-limit percent 90
      bandwidth percent 50
      queue dropped pkts : 1531155
      queue dropped bytes : 2710366321
      queue transmit pkts: 362980779040  queue transmit bytes: 289420756491895

Nexus7k# show class-map type queuing 2q4t-8e-in-q-default

  Type queuing class-maps

  ========================

    class-map type queuing match-any 2q4t-8e-in-q-default

      Description: Classifier for Ingress queue 2 of type 4q2t8e

      match cos 0-4

      match dscp 0-39

4 Replies

Manoj Papisetty
Cisco Employee

Hi Sam,

If this is a regular port-channel and you have a single stream carrying 10G+ of traffic, the drops are expected. In a port-channel, traffic is load-balanced per flow according to the configured load-balance algorithm, so any single flow is pinned to one member link.

You can check which member port a given packet takes by using this command - http://www.cisco.com/web/techdoc/dc/reference/cli/nxos/commands/l2/show_port-channel_load-balance.html
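For example, something along these lines (the 10.x addresses here are only placeholders for your real source/destination pair, and the exact parameters vary slightly by release):

Nexus7k# show port-channel load-balance forwarding-path interface port-channel 123 src-ip 10.0.0.1 dst-ip 10.0.0.2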

If you are using multiple flows and still see the same issue, try changing your load-balance algorithm and things should return to normal.
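For instance, a hash that also includes the Layer 4 ports will usually spread multiple flows between the same two hosts across both members. A sketch (keyword availability depends on the NX-OS release and line cards, so verify on your platform):

Nexus7k# configure terminal
Nexus7k(config)# port-channel load-balance src-dst ip-l4port

You can confirm the result afterwards with "show port-channel load-balance".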

PS: This analysis is based purely on the fact that you hit the issue at exactly 50% with two links - 50% of the 20 Gbps aggregate is 10 Gbps, i.e. one member at line rate. Hence I do not suspect a QoS issue here.

Regards,

Manoj 

Hi Manoj,

Thanks for your reply. I had a look and we are using the load balancing below:

Module 1:
  Non-IP: src-dst mac
  IP: src-dst ip rotate 0
Module 7:
  Non-IP: src-dst mac
  IP: src-dst ip rotate 0

I believe the traffic was one flow between 2 devices. Do you think it's worth creating a new policy map, changing 'bandwidth percent 50' to somewhere around 80 percent, and applying it to the port-channel?
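For reference, my understanding is that the default queuing policies can't be edited in place, so it would mean cloning one first - roughly like this (just a sketch; I'd still need to verify the 'qos copy' syntax and the generated policy name on our release, and the bandwidth values across the two queues have to add up to 100):

qos copy policy-map type queuing default-4q-8e-in-policy prefix my-
policy-map type queuing my-default-4q-8e-in-policy
  class type queuing 2q4t-8e-in-q-default
    bandwidth percent 80
  class type queuing 2q4t-8e-in-q1
    bandwidth percent 20
interface port-channel123
  service-policy type queuing input my-default-4q-8e-in-policy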

Thanks

Sam

Hi Sam,

If it is a single flow, you will not be able to cross 10G. One flow can only go via one link, not both, in a port-channel.

You can check the specific member interface the port-channel is using by giving the source and destination addresses in the same command. Refer to the examples at that link.

Also, the command "show port-channel traffic interface port-channel <pc-no>" will show you how the traffic is actually distributed across the member links.
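For example, with your port-channel:

Nexus7k# show port-channel traffic interface port-channel 123

The output lists the rx/tx percentage each member carries.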

Regards,

Manoj

Thanks Manoj. It actually looks like it's not a single flow.

I see around 9-10 Gbps of traffic on the port-channel, and on the two member ports the traffic is divided roughly 55/45 percent.
