I am relatively new to the Nexus family, and I am currently investigating performance issues reported on a server enclosure that is connected to a couple of 2K FEXs, which in turn uplink to a 5596.
The physical connections from the 5K to the 2K show no TX pause hits, but the 2K physical interfaces facing the server enclosure show very high TX pause counts.
It appears to be a pretty simple setup: two 10Gb interfaces from each module in the enclosure connect to the 2Ks in an LACP port channel. The switch config is a pretty standard trunk config.
Looking at the physical interfaces on the switch as I type, the rate is around 13Mbps input and 18Mbps output, so by no means pushing a 10Gb interface hard. The port channel interface is around 15Mbps input and 26Mbps output, but I can see the TX pause counter steadily rising on the po interface and on the physical switch interfaces.
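For what it's worth, I've been watching the counters climb with a filtered show interface, along these lines (eth1/1 and po1 here are just placeholders for our actual interfaces):

show interface ethernet 1/1 | include pause
show interface port-channel 1 | include pause

Running that a few seconds apart makes the steady increase in the Tx pause count obvious.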
The server enclosure has been looked at by the vendor, and they are pointing at the high TX pause count as a sign of congestion, so I am trying to work out what is going on from the network side.
Looking at the basics, it looks OK to me, but I am clearly missing something, as I can't see why the N5K would be sending TX pause frames when the interfaces are pushing under 1% of their 10Gb capacity.
I've attached the sh int and sh run int output for the ports in question, to save flooding the post with the entire config.
Any help would be greatly appreciated. Thanks
Could you confirm whether the jumbo MTU setting is enabled on your switches? Run the following command; it will show you whether jumbo frames are allowed:
show queuing interface eth 1/1
Or use the following link.
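The line to look for in that output is the HW MTU value; with jumbo frames enabled it should read 9216 rather than the default 1500. You can pull out just that line with something like:

show queuing interface eth 1/1 | include MTU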
Thanks for the reply.
It doesn't look to me like the jumbo MTU setting is enabled. Below is the output from sh queuing int e1/1:
sh queuing int eth 1/1
Ethernet1/1 queuing information:
qos-group sched-type oper-bandwidth
0 WRR 100
q-size: 469760, HW MTU: 1500 (1500 configured)
drop-type: drop, xon: 0, xoff: 469760
Pkts received over the port : 1124463263
Ucast pkts sent to the cross-bar : 1057473509
Mcast pkts sent to the cross-bar : 66989754
Ucast pkts received from the cross-bar : 267110687
Pkts sent to the port : 2973124559
Pkts discarded on ingress : 0
Per-priority-pause status : Rx (Inactive), Tx (Inactive)
Total Multicast crossbar statistics:
Mcast pkts received from the cross-bar : 2706013872
In newer NX-OS images, do jumbo frames still need to be configured using a policy map?
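My understanding is that on the 5K jumbo frames are still set system-wide through a network-qos policy rather than per interface, roughly like the sketch below (the policy name "jumbo" is just an example, and 9216 is the usual jumbo MTU value):

policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo

After applying that, the HW MTU in show queuing should change from 1500 to 9216.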
Thanks for all the help.
We (Cisco) have created a document to help with troubleshooting this issue.
It would be nice to get feedback on whether it helps you or whether it's missing anything.
The FEXs are 2232s. Is this an official bug or just a known issue? Is there a known workaround?
When I look at the sh queuing int output, the buffer size shown is bigger than on other interfaces on the same FEXs that aren't having this issue.
sh int e113/1/22
sh queuing int e113/1/22
if_slot 54, ifidx 0x1f790140
Ethernet113/1/22 queuing information:
Input buffer allocation:
cos: 0 1 2 3 4 5 6
xon xoff buffer-size
0 142080 151040
Thanks for all the help...