Jumbo Frames On Nexus

Steven Williams
Level 4

I have an ESX host connected to a port. The port is not configured for jumbo frames (I still need to check the host side once I get access), so why am I seeing jumbo packets on the port regardless?

3k-01# show queuing interface eth1/18
Ethernet1/18 queuing information:
qos-group sched-type oper-bandwidth
0 WRR 100
qos-group 0
HW MTU: 1500 (1500 configured)
drop-type: drop, xon: 0, xoff: 0
Statistics:
Ucast pkts sent over the port : 154866699773
Ucast bytes sent over the port : 137202695582974
Mcast pkts sent over the port : 966595686
Mcast bytes sent over the port : 81594718844
Ucast pkts dropped : 269
Ucast bytes dropped : 353807
Mcast pkts dropped : 713
Mcast bytes dropped : 845457

Pkts dropped by RX thresholds : 0
Bytes dropped by RX thresholds : 0

!

!
3k-01# show int eth1/18
Ethernet1/18 is up
Dedicated Interface
Hardware: 10/100/1000 Ethernet, address: f4cf.e2fd.c639 (bia f4cf.e2fd.c639)
Description: bnapvmw009 VMware ES vmnic2
MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA
Port mode is trunk
full-duplex, 1000 Mb/s
Beacon is turned off
Input flow-control is off, output flow-control is off
Switchport monitor is off
EtherType is 0x8100
Last link flapped 35week(s) 5day(s)
Last clearing of "show interface" counters 128w0d
54 interface resets
30 seconds input rate 28580664 bits/sec, 3362 packets/sec
30 seconds output rate 21519648 bits/sec, 2845 packets/sec
Load-Interval #2: 5 minute (300 seconds)
input rate 28.87 Mbps, 3.36 Kpps; output rate 19.48 Mbps, 2.64 Kpps
RX
91848149055 unicast packets 19238480 multicast packets 27230868 broadcast packets
91894626879 input packets 68785471861302 bytes
20565577906 jumbo packets 0 storm suppression bytes
0 runts 0 giants 0 CRC 0 no buffer
0 input error 0 short frame 0 overrun 0 underrun 0 ignored
0 watchdog 0 bad etype drop 0 bad proto drop 0 if down drop
0 input with dribble 12 input discard
8476 Rx pause
TX
154893186195 unicast packets 381195997 multicast packets 883070416 broadcast packets
156157452608 output packets 137310063179497 bytes
51360561731 jumbo packets
0 output errors 0 collision 0 deferred 0 late collision
0 lost carrier 0 no carrier 0 babble 982 output discard
0 Tx pause

3k-01#
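For the host-side check once access is available, the vmnic and vSwitch MTU can be read from the ESXi shell. A rough sketch, assuming standard vSwitches (exact output columns vary by ESXi version):

esxcli network nic list                  (physical vmnic MTU)
esxcli network vswitch standard list     (per-vSwitch MTU)
esxcli network ip interface list         (vmkernel interface MTU)

A vmkping -d -s 8972 <target> from the host is also a quick end-to-end test of whether 9000-byte frames pass without fragmentation; 8972 allows for the 28 bytes of IP and ICMP headers.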


17 Replies

Steven Williams
Level 4
The vmnics are set to 1500 MTU, so not 9000...

As far as your output is concerned, it is set up with MTU 1500. Do you have a global jumbo configuration in the system config?

Or post the configuration so we can verify.
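If it needs to be enabled, on many Nexus 3000/5000 releases the global L2 jumbo setting is applied through a network-qos policy rather than per interface. A minimal sketch, assuming the usual class-default approach (the policy name "jumbo" is just an example; check the config guide for your exact platform and release):

policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo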


BB

***** Rate All Helpful Responses *****

How to Ask The Cisco Community for Help

There is no policy defined as network-qos, so it's not enabled for sure.

Did you do a show run all | i MTU in case it's hidden by default? It shouldn't be, but it's worth checking.

No results returned.

Off one of my lab Nexus 3172 switches; it might be hidden all right:

xxxxx# sh run all | i mtu
system jumbomtu 9216
class-map type control-plane match-any copp-s-l3mtufail
class copp-s-l3mtufail


# sh run |i mtu
class-map type control-plane match-any copp-s-l3mtufail
class copp-s-l3mtufail


With mtu lowercase, this is my result:
BNA-HSA-3k-01# show run all | inc mtu
system jumbomtu 9216
class-map type control-plane match-any copp-s-l3mtufail
class copp-s-l3mtufail
mtu 1500
mtu 1500
mtu 1500
mtu 1500
<output snipped: "mtu 1500" repeated, one line for every interface>
BNA-HSA-3k-01#

So system-wide jumbo is enabled by default in the background for L2, then. Odd, because my 5/7/9Ks don't do this; maybe it's something specific to the 3Ks.

But that statement would suggest it's ready to support jumbo frames, so if something is sending them, the ports are ready to receive them.

system jumbomtu 9216

It's just strange because all ports are configured for 1500 MTU. I hate "unicorn" things like this.

Yes, that caught me before, but if you read the docs, some say this too on the 5Ks:
Note: For L2, you cannot set the MTU per interface, only system wide.

Now that's changed on my 9Ks, you can do both; I recently looked into it because we have an enclave, a DC within a DC, and someone wanted storage connected to those switches, so I had to check it wasn't going to break the other systems connected. Some ports can be 9216 and some 1500, or it can be global. But once it's set system-wide at L2 on switches that only support that, all ports will accept 9216 yet may still show as 1500.
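For reference, the per-interface form on platforms that support it is just the mtu command under the interface, something like this (interface and value are examples only):

interface Ethernet1/1
  mtu 9216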

I agree; once it's set globally, it should show the same on all ports.

I have nothing but 1 Gb links from my ESX hosts to these ports, so it's just strange to see anything jumbo, because I would never change the MTU on a vmnic to 9000 or 9216 since they are 1 Gb. I would be interested to see what the packet size is from the ESX host. I guess I would need to SPAN that port and capture it.
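If it comes to that, a local SPAN session on the Nexus is quick to set up. A rough sketch, assuming Eth1/20 is a spare port with the capture machine attached (port and session numbers are examples):

interface ethernet 1/20
  switchport monitor
!
monitor session 1
  source interface ethernet 1/18 both
  destination interface ethernet 1/20
  no shut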

Yes, Wireshark is probably the best way to pinpoint it. Jumbo can still be enabled on gigabit interfaces too, but it may not suit your setup, as you said.
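Once the capture is running, filtering on frame size should isolate the senders. A sketch (interface name and thresholds are examples, based on the standard 1518-byte maximum untagged Ethernet frame; on a trunk the dot1q tag adds 4 bytes, so the cutoff may need to be 1522 instead):

tcpdump -i eth0 -s 0 -w jumbo.pcap greater 1519

or, in Wireshark, the display filter frame.len > 1518.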

https://www.cisco.com/en/US/products/ps5989/products_configuration_guide_chapter09186a00806ec94e.html#wp1895752

Jumbo frames are larger than standard frames and require fewer frames. Therefore, you can reduce the CPU processing overhead by using jumbo frames with your network interfaces. ... Jumbo frames can be used for all Gigabit and 10 Gigabit Ethernet interfaces that are supported on your storage system.
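As a rough back-of-the-envelope example of why (my numbers, not from that doc): a fully loaded 1 Gb/s link carrying 1500-byte frames works out to roughly 81,000 frames per second (125,000,000 bytes/s divided by about 1538 bytes per frame on the wire), while 9000-byte frames come to roughly 13,800 frames per second (125,000,000 / ~9038), so the end hosts handle close to six times fewer per-packet interrupts and protocol lookups for the same throughput.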

Interesting comment there. So is this to say I can reduce the CPU time within my hosts by enabling jumbo frames? Or are we talking more storage-array oriented?

Yes, that's particular to NetApp storage traffic and very specific applications; the traffic can be packaged and moved in a way that reduces CPU load.

https://library.netapp.com/ecmdocs/ECMP1368834/html/GUID-D3AB10A1-D15A-490D-8DCE-34BE73C3DACF.html