2764 Views · 0 Helpful · 8 Replies
Beginner

N5K/2KFEX 10Gb ports High TX pause counts on low bandwidth throughput

G'day All,

I am relatively new to the Nexus family, and I'm currently chasing a performance issue reported on a server enclosure that is connected to a couple of 2K FEXs, which in turn uplink to a 5596.

The physical connections from the 5K to the 2Ks show no TX pause hits, but the 2K physical interfaces facing the server enclosure show very high TX pause counts.

It's a pretty simple setup: two 10Gb interfaces from each module in the enclosure connect to the 2Ks in an LACP port channel. The switch config is a pretty standard trunk config.

Looking at the physical interfaces on the switch as I type, the rate is around 13Mbps input and 18Mbps output, so by no means pushing a 10Gb interface hard. The port-channel interface is around 15Mbps input and 26Mbps output, yet I see the TX pause counter steadily rising on the port-channel interface and on the physical switch interfaces.

The server enclosure has been looked at by the vendor, and they are pointing at the high TX pause count as a sign of congestion, so I am trying to work out what is going on from the network side.

Looking at the basics, it all looks OK to me, but I am clearly missing something: I can't see why the N5K would be sending out TX pause frames when the interfaces look like they are pushing under 1% of their 10Gb capacity.

I've attached the sh int and sh run int of the ports in question, to save flooding the post with the full config.
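For what it's worth, this is how I'm watching the counters climb (the interface numbers are just examples from my setup):

sh interface ethernet 113/1/22 | include pause
sh interface port-channel 113 | include pause

The TX pause figure in that output is the one that keeps rising.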

Any help would be greatly appreciated. Thanks

 

Thank You.

 

JS

 

8 REPLIES
Beginner

Could you confirm that the jumbo MTU setting is enabled on your switches? Run the following command; it will show you whether jumbo frames are allowed:

show queuing interface eth 1/1

or use the following link.

http://routing-bits.com/2011/04/05/jumbo-mtu-on-nexus-5000/
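If it isn't enabled, on the 5000/5500 platform jumbo MTU is configured globally through a network-qos policy map, along these lines (9216 is the usual maximum; adjust for your platform and NX-OS release):

policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo

Then re-run show queuing interface; the HW MTU line should reflect the new value.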

Beginner

Hi,

Thanks for the reply.

It doesn't look to me like the jumbo MTU setting is enabled. Below is the output from sh queuing int e1/1:

 sh queuing int eth 1/1
Ethernet1/1 queuing information:
  TX Queuing
    qos-group  sched-type  oper-bandwidth
        0       WRR            100

  RX Queuing
    qos-group 0
    q-size: 469760, HW MTU: 1500 (1500 configured)
    drop-type: drop, xon: 0, xoff: 469760
    Statistics:
        Pkts received over the port             : 1124463263
        Ucast pkts sent to the cross-bar        : 1057473509
        Mcast pkts sent to the cross-bar        : 66989754
        Ucast pkts received from the cross-bar  : 267110687
        Pkts sent to the port                   : 2973124559
        Pkts discarded on ingress               : 0
        Per-priority-pause status               : Rx (Inactive), Tx (Inactive)

  Total Multicast crossbar statistics:
    Mcast pkts received from the cross-bar      : 2706013872

In newer nx-os images, do jumbos still need to be configured using a policy map?

 

Thanks for all the help.

 

Cheers,

Beginner

Have you tried changing the jumbo frame size using a policy map?

Cisco Employee

James,

We (Cisco) have created a document to help with troubleshooting this issue:

http://www.cisco.com/c/en/us/support/docs/switches/nexus-2000-series-fabric-extenders/200260-Troubleshooting-Tx-Pauses-on-Nexus-2232.html

It would be great to get feedback on whether it helps you or if it's missing anything.

-Raj

Beginner

Hi Raj,

Thanks for this. I will go through it and see what result it yields and I'll report back.

Thanks again.

JS

Beginner

I haven't yet. I am going to look at doing that shortly, and I'll report back if it fixes the issue.

Hall of Fame Expert

Hi,

What model FEX are you using? TX pauses on the 2232 FEX model are a known issue; these FEXs come with a very small amount of buffer.
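If it does turn out to be the 2232 buffering, note that on recent NX-OS releases the per-FEX queue limit can be tuned from the parent 5K, roughly like this (the FEX ID, card-type keyword, and value below are only examples; check the exact syntax for your release):

fex 113
  hardware N2232P queue-limit 327680

A bigger queue limit trades latency for fewer pauses, so test it before rolling it out broadly.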

HTH

Beginner

Hi,

The FEXs are 2232s. Is this an official bug or just a known issue? Is there a known workaround?

When I look at the sh queuing int output, the buffer size is actually bigger than on other interfaces on the same FEXs that aren't having this issue:

sh queuing int e113/1/22
if_slot 54, ifidx 0x1f790140
Ethernet113/1/22 queuing information:
  Input buffer allocation:
  Qos-group: 0
  frh: 8
  drop-type: drop
  cos: 0 1 2 3 4 5 6
  xon       xoff      buffer-size
  ---------+---------+-----------
  0         142080    151040

 

Thanks for all the help...
