
UCS MTU question

This question arises from a UCS environment in our datacenter where I'm trying to track down a fragmentation issue. We have Nexus 7018 switches with 6296 Fabric Interconnects used to connect to ESX hosts. The interfaces on the 7018 have an MTU of 9216, while the Fabric Interconnects have an MTU of 1500. As I understand it, the FI should discard frames larger than 1500 - but is that right? If so, how should the MTU be set up? We have the MTU set to 9216 on the L3 uplinks leaving the switch and up at distribution (a standard L3 buildout to the core).

If the Fabric Interconnects are dropping frames, how can I check this?

6 Replies

Keny Perez
Level 8

You will need to set a "QoS System Class" and define the "CoS" for the priority you decide to enable, as seen here:

https://brianragazzi.files.wordpress.com/2014/11/03-ucs-system-class.jpg

Note: Keep in mind that if there is traffic coming to the FIs that does not match an enabled "CoS", that traffic will go through the "Best Effort" queue, and if that queue does not have jumbo frames enabled, the traffic is dropped. So it is important to also know what CoS your source is sending back to the FIs.
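
If you work from the UCSM CLI rather than the GUI, a sketch along these lines raises the Best Effort class MTU so unmatched traffic isn't dropped (scope paths and class names are recalled from the UCS Manager CLI configuration guide, and "UCS-A" is just a placeholder prompt, so verify against your UCSM release):

UCS-A# scope eth-server
UCS-A /eth-server # scope qos
UCS-A /eth-server/qos # scope eth-best-effort
UCS-A /eth-server/qos/eth-best-effort # set mtu 9216
UCS-A /eth-server/qos/eth-best-effort* # commit-buffer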

The above applies to the FIs; for the blades' NICs, you need to create a "QoS Policy" like this:

http://bradhedlund.s3.amazonaws.com/2010/ucs-qos-vmware/qos_policy1-bh-small.png

and then you apply it as shown in this image (at the bottom):

http://ine-workbooks.s3.amazonaws.com/ucs/images/5.3.12.png
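
The same QoS policy can also be sketched from the UCSM CLI. The policy name "jumbo-qos" and the priority are placeholders, and the exact object names may vary by UCSM release, so treat this as an outline rather than a verified config:

UCS-A# scope org /
UCS-A /org # create qos-policy jumbo-qos
UCS-A /org/qos-policy* # create egress-policy
UCS-A /org/qos-policy/egress-policy* # set prio best-effort
UCS-A /org/qos-policy/egress-policy* # commit-buffer

The resulting policy is then what gets selected on the vNIC, as in the last image.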

 

To see if the fabric is dropping packets, you would use "show queuing interface e#/#" and check for ingress queuing discards.

 

I hope that helps; if it does, please rate the answer.

 

-Kenny

So if I understand correctly, if I have:

[screenshot omitted]

and I see:

FICUCI011-A(nxos)# sho queuing int eth2/3
Ethernet2/3 queuing information:
  TX Queuing
    qos-group  sched-type  oper-bandwidth
        0       WRR             60
        1       WRR             40

  RX Queuing
    qos-group 0
    q-size: 360640, HW MTU: 9000 (9000 configured)
    drop-type: drop, xon: 0, xoff: 360640
    Statistics:
        Pkts received over the port             : 4180309176
        Ucast pkts sent to the cross-bar        : 3643501840
        Mcast pkts sent to the cross-bar        : 536803115
        Ucast pkts received from the cross-bar  : 367087643
        Pkts sent to the port                   : 426848085
        Pkts discarded on ingress               : 4225
        Per-priority-pause status               : Rx (Inactive), Tx (Inactive)

    qos-group 1
    q-size: 79360, HW MTU: 2158 (2158 configured)
    drop-type: no-drop, xon: 20480, xoff: 40320
    Statistics:
        Pkts received over the port             : 44066300
        Ucast pkts sent to the cross-bar        : 44066300
        Mcast pkts sent to the cross-bar        : 0
        Ucast pkts received from the cross-bar  : 0
        Pkts sent to the port                   : 0
        Pkts discarded on ingress               : 0
        Per-priority-pause status               : Rx (Inactive), Tx (Inactive)

  Total Multicast crossbar statistics:
    Mcast pkts received from the cross-bar      : 59760439

 

I have an MTU on that FI port of 1500, the upstream switch is set for 9216, and I see:

FI(nxos)# sho int eth2/3
Ethernet2/3 is up
 Dedicated Interface
  Belongs to Po101
  Hardware: 1000/10000 Ethernet, address: 547f.eea8.ec02 (bia 547f.eea8.ec02)
  Description: U: Uplink
  MTU 1500 bytes, BW 10000000 Kbit, DLY 10 usec
  reliability 255/255, txload 7/255, rxload 1/255
  Encapsulation ARPA
  Port mode is trunk
  full-duplex, 10 Gb/s, media type is 10G
  Beacon is turned off
  Input flow-control is off, output flow-control is off
  Rate mode is dedicated
  Switchport monitor is off
  EtherType is 0x8100
  Last link flapped 10week(s) 3day(s)
  Last clearing of "show interface" counters never
  30 seconds input rate 76517144 bits/sec, 19228 packets/sec
  30 seconds output rate 294304176 bits/sec, 31248 packets/sec
  Load-Interval #2: 5 minute (300 seconds)
    input rate 66.57 Mbps, 16.29 Kpps; output rate 291.55 Mbps, 30.08 Kpps
  RX
    179787971985 unicast packets  948674311 multicast packets  164764044 broadcast packets
    180901410340 input packets  158673106928437 bytes
    84244508706 jumbo packets  0 storm suppression bytes
    0 runts  0 giants  0 CRC  0 no buffer
    0 input error  0 short frame  0 overrun   0 underrun  0 ignored
    0 watchdog  0 bad etype drop  0 bad proto drop  0 if down drop
    0 input with dribble  0 input discard
    0 Rx pause
  TX
    266665461733 unicast packets  36693548 multicast packets  29005133 broadcast packets
    266731160414 output packets  322932801177306 bytes
    96297922454 jumbo packets
    0 output errors  0 collision  0 deferred  0 late collision
    0 lost carrier  0 no carrier  0 babble 0 output discard
    0 Tx pause
  1 interface resets

 

I'm both sending (dot1q-tagged) and receiving jumbos, but the ingress discards indicate that the jumbos are being sent through the 'Best Effort' queue and dropped. I always thought that if L2 was configured for 1500 and saw something larger than 1518 come in, it would just drop it automatically.

 

So does this:

[screenshot omitted]

mean that the FI interfaces connected to the servers are running at an MTU of 9000?

 

Again, thanks for the great info

In the QoS System Class you will want to set the MTU to 9216 instead of 9000. If it is set to 9000, it does not allow for the packet overhead on a 9000-byte frame and could be the cause of the discards you are seeing. On the vNIC itself you would set the MTU to 9000, while in the QoS policy it will be 9216. I would change this and see if you are still seeing discards.
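
To put rough numbers on the "packet overhead" point (a generic Ethernet calculation, not taken from your outputs), a 9000-byte MTU at the OS level still needs room for L2 framing once it hits the fabric:

  9000 bytes   IP payload (the MTU the OS / vNIC sees)
 +  14 bytes   Ethernet header
 +   4 bytes   802.1Q VLAN tag (these are dot1q trunks)
 +   4 bytes   FCS
 = 9022 bytes  on the wire - larger than a 9000-byte class MTU,
               but well within 9216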

So the MTU setting on the interfaces themselves (the "MTU" parameter seen in the output of the "show interface" command) can be below the QoS setting, and the interface will still accept frames larger than the MTU seen in that output?

 

thanks

Are you asking if the "QoS System Class" can be set to a value lower than the vNIC QoS Policy?

 

-Kenny
 

Hello,

 

Your vNIC can be configured for 9000 because, from an OS perspective, jumbo frames are no larger than a 9000 MTU. When you get to the switching realm, this changes to 9216 due to potential protocol overhead in the packet header, as jonjorda suggested. Setting the QoS and CoS values is only relevant at the switching level, not the OS, as we will have stripped the headers, etc., off the packet before sending it to the server (OS).
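
For completeness, on the ESX side the 9000 MTU usually has to be set on both the vSwitch and the VMkernel port. A minimal sketch with esxcli, assuming a standard vSwitch and placeholder names vSwitch0 / vmk1:

# Raise the MTU on the standard vSwitch (placeholder name vSwitch0)
esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000

# Raise the MTU on the VMkernel interface carrying the jumbo traffic (placeholder vmk1)
esxcli network ip interface set --interface-name=vmk1 --mtu=9000

# Verify the settings
esxcli network vswitch standard list
esxcli network ip interface list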

Please keep in mind that if you are using jumbo frames, you'll need to make sure you have an MTU of 9216 on any switching interface the packet will touch. Depending on the upstream device, this may be done directly on the switchport interface or through a QoS setting (e.g., on a Nexus 5K).
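
To illustrate the two styles mentioned above (interface and policy names are placeholders, so adjust to your environment): on a Nexus 7000 the MTU is set directly on the interface, while on a Nexus 5000 jumbo frames are enabled through a network-qos policy applied under system qos:

! Nexus 7000 (e.g., the 7018 interfaces and L3 uplinks in the path)
interface Ethernet1/1
  mtu 9216

! Nexus 5000 - jumbo enabled via a network-qos policy under system qos
policy-map type network-qos JUMBO
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos JUMBO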

Also, make sure your CoS values match up between the UCS and upstream switches.

 

Hope this helps.

 

Justin
