Cisco UCS Direct-Attached iSCSI Storage

Joycthomas
Level 1

We have the Dell Compellent storage (SC8000) directly attached to the FIs.

System details:

- Cisco UCS 5108 Chassis

- FI 6248UP

- Ports for the storage configured as appliance ports

- QoS system class Platinum created with MTU 9000 and enabled

- Appliance port interface priority changed to Platinum

- iSCSI VLAN configured and updated on the appliance port interface

- vNIC MTU updated to 9000

- MTU 9000 configured on the iSCSI ports on the Compellent controller

The problem is that the appliance port is not picking up the MTU set by the QoS policy/class.

It still shows an MTU of 1500.

Interface details are below.

ucs-fabric-A(nxos)# show interface ethernet 1/5
Ethernet1/5 is up
 Dedicated Interface
  Hardware: 1000/10000 Ethernet, address: 002a.6ad5.838c (bia 002a.6ad5.838c)
  Description: A: Appliance
  MTU 1500 bytes, BW 10000000 Kbit, DLY 10 usec
  reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA
  Port mode is access
  full-duplex, 10 Gb/s, media type is 10G
  Beacon is turned off
  Input flow-control is on, output flow-control is on
  Rate mode is dedicated
  Switchport monitor is off
  EtherType is 0x8100
  Last link flapped 3week(s) 6day(s)
  Last clearing of "show interface" counters never
  30 seconds input rate 688 bits/sec, 86 bytes/sec, 0 packets/sec
  30 seconds output rate 5784 bits/sec, 723 bytes/sec, 0 packets/sec
  Load-Interval #2: 5 minute (300 seconds)
    input rate 264 bps, 0 pps; output rate 2.55 Kbps, 0 pps
  RX
    530433629 unicast packets  5761694 multicast packets  2 broadcast packets
    536195325 input packets  3162146182272 bytes
    523017160 jumbo packets  0 storm suppression bytes
    0 runts  0 giants  0 CRC  0 no buffer
    0 input error  0 short frame  0 overrun   0 underrun  0 ignored
    0 watchdog  0 bad etype drop  0 bad proto drop  0 if down drop
    0 input with dribble  0 input discard
    5684306 Rx pause
  TX
    307454989 unicast packets  584323 multicast packets  33724 broadcast packets
    308073036 output packets  95755485744 bytes
    9071242 jumbo packets
    0 output errors  0 collision  0 deferred  0 late collision
    0 lost carrier  0 no carrier  0 babble 0 output discard
    19810 Tx pause
  1 interface resets

 

Is there anything else that needs to be changed to set the appliance port MTU to 9000?

I could not find anything in the UCS Manager GUI.

Are any CLI commands available to change the MTU of the appliance ports?

 

12 Replies

Wes Austin
Cisco Employee

Thanks to @Kirk J

 

With NX-OS devices (e.g., N5K, UCSM FIs), it is the QoS settings that determine whether a frame of a given MTU size is allowed to pass.

 

For QoS MTU info, you can run the following from the NX-OS CLI:
# show queuing int eth x/y


FI-A(nxos)# show queuing int eth 1/1
Ethernet1/1 queuing information:
  TX Queuing
    qos-group  sched-type  oper-bandwidth
        0       WRR             50
        1       WRR             50

  RX Queuing
    qos-group 0
    q-size: 360640, HW MTU: 1500 (1500 configured) <<<<<<<<<< It's this line that actually shows the MTU (for the best-effort qos-group 0 class in this case)
    drop-type: drop, xon: 0, xoff: 360640
    Statistics:
        Pkts received over the port             : 7988166
        Ucast pkts sent to the cross-bar        : 7988005
        Mcast pkts sent to the cross-bar        : 161
        Ucast pkts received from the cross-bar  : 599898
        Pkts sent to the port                   : 599898
        Pkts discarded on ingress               : 0
        Per-priority-pause status               : Rx (Inactive), Tx (Inactive)

    qos-group 1
    q-size: 79360, HW MTU: 2158 (2158 configured)
    drop-type: no-drop, xon: 20480, xoff: 40320
    Statistics:
        Pkts received over the port             : 744035
        Ucast pkts sent to the cross-bar        : 744035
        Mcast pkts sent to the cross-bar        : 0
        Ucast pkts received from the cross-bar  : 1228985
        Pkts sent to the port                   : 1228985
        Pkts discarded on ingress               : 0
        Per-priority-pause status               : Rx (Inactive), Tx (Active)

  Total Multicast crossbar statistics:
    Mcast pkts received from the cross-bar      : 0

 

There is a good example that shows the common output and configuration utilizing QoS on the FI at https://www.cisco.com/c/en/us/support/docs/servers-unified-computing/ucs-b-series-blade-servers/117601-configure-UCS-00.html

Thank you Wes.

I ran the command and got the following output.

ucs-fabric-A(nxos)# show queuing int eth 1/5
Ethernet1/5 queuing information:
  TX Queuing
    qos-group  sched-type  oper-bandwidth
        0       WRR             25
        1       WRR             25
        2       WRR             50

  RX Queuing
    qos-group 0
    q-size: 240640, HW MTU: 9000 (9000 configured)
    drop-type: drop, xon: 0, xoff: 240640
    Statistics:
        Pkts received over the port             : 0
        Ucast pkts sent to the cross-bar        : 0
        Mcast pkts sent to the cross-bar        : 0
        Ucast pkts received from the cross-bar  : 0
        Pkts sent to the port                   : 0
        Pkts discarded on ingress               : 0
        Per-priority-pause status               : Rx (Active), Tx (Inactive)

    qos-group 1
    q-size: 79360, HW MTU: 2158 (2158 configured)
    drop-type: no-drop, xon: 20480, xoff: 40320
    Statistics:
        Pkts received over the port             : 0
        Ucast pkts sent to the cross-bar        : 0
        Mcast pkts sent to the cross-bar        : 0
        Ucast pkts received from the cross-bar  : 0
        Pkts sent to the port                   : 0
        Pkts discarded on ingress               : 0
        Per-priority-pause status               : Rx (Active), Tx (Active)

    qos-group 2
    q-size: 90240, HW MTU: 9000 (9000 configured)
    drop-type: no-drop, xon: 17280, xoff: 37120
    Statistics:
        Pkts received over the port             : 530560466
        Ucast pkts sent to the cross-bar        : 530560466
        Mcast pkts sent to the cross-bar        : 0
        Ucast pkts received from the cross-bar  : 307575259
        Pkts sent to the port                   : 307765674
        Pkts discarded on ingress               : 0
        Per-priority-pause status               : Rx (Active), Tx (Active)

  Total Multicast crossbar statistics:
    Mcast pkts received from the cross-bar      : 190415

 

So qos-group 2 is the one configured with MTU 9000. Based on the counters shown there, it should be OK.

To confirm, I tried to ping the storage iSCSI port (from the Windows server on the UCS blade) with a 9000-byte size, but it is not going through.

C:\Windows\system32>ping -f -l 9000 10.0.20.2

Pinging 10.0.20.2 with 9000 bytes of data:
Packet needs to be fragmented but DF set.
Packet needs to be fragmented but DF set.
Packet needs to be fragmented but DF set.
Packet needs to be fragmented but DF set.

Ping statistics for 10.0.20.2:
    Packets: Sent = 4, Received = 0, Lost = 4 (100% loss).

Any suggestions?

If you review the link provided in the previous post, it has instructions to configure and test end-to-end jumbo MTU.

Take a look at https://www.cisco.com/c/en/us/support/docs/servers-unified-computing/ucs-b-series-blade-servers/117601-configure-UCS-00.html and see if you are missing anything.

Do you have a QoS policy set at the vNIC level? (This is what tags the traffic with the QoS class on which the FIs allow the jumbo frames.)
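
For reference, the vNIC's QoS policy and MTU can be checked from the UCSM CLI. A rough sketch only: the scopes below assume the standard UCSM CLI, and <profile-name> is a placeholder for your service profile name.

UCS-A# scope org /
UCS-A /org # scope service-profile <profile-name>
UCS-A /org/service-profile # show vnic eth2 detail

Look for the "QoS Policy" and "Host Interface Ethernet MTU" lines in the output.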

 

Thanks,

Kirk...

Thank you, Kirk, for taking the time to reply to my query.

The configuration seems to be right:

 

- QoS system class configured: Platinum (no drop, MTU 9000)

- Interface priority for the appliance port changed to Platinum

- QoS policy iscsi created with priority Platinum and burst 9000

- QoS policy applied to vNICs eth2 and eth3 with MTU 9000

ucs-fabric-A /org/service-profile # show vnic eth2 detail

vNIC:
    Name: eth2
    Fabric ID: A
    Dynamic MAC Addr: 00:25:B5:00:00:1E
    Desired Order: 5
    Actual Order: 3
    Desired VCon Placement: Any
    Actual VCon Placement: 1
    Desired Host Port: ANY
    Actual Host Port: NONE
    Equipment: sys/chassis-1/blade-1/adaptor-1/host-eth-3
    Host Interface Ethernet MTU: 9000
    CDN Source: vNIC Name
    Ethernet Interface Admin CDN Name:
    Ethernet Interface Oper CDN Name:
    Template Name: iSCSI-eth2-FabA
    Oper Nw Templ Name: org-root/lan-conn-templ-iSCSI-eth2-FabA
    Adapter Policy: Windows
    Oper Adapter Policy: org-root/eth-profile-Windows
    MAC Pool: Mac-Pool1
    Oper MAC Pool: org-root/mac-pool-Mac-Pool1
    Pin Group:
    QoS Policy: iscsi
    Oper QoS Policy: org-root/ep-qos-iscsi
    Network Control Policy: default
    Oper Network Control Policy: org-root/nwctrl-default
    Stats Policy: default
    Oper Stats Policy: org-root/thr-policy-default
    Virtualization Preference: NONE
    Parent vNIC DN:
    Redundancy Type: No Redundancy
    Redundancy Peer:
    Current Task:

- Compellent-side iSCSI configured with MTU 9000

Please see the attached screenshots.

 

 

Is the Compellent marking the frames as Platinum as well?

You are also allowing the best-effort class with 9000 MTU, so even if there is a mismatch, the jumbo frames should be allowed.

From the FIs, connect to NX-OS and run:

nxos# show queuing int eth x/y (where x/y is your appliance port)

 

Also, are the vSwitch and VMkernel port group configured for 9000 MTU as well?

 

Thanks,

Kirk...

 

Thank you Kirk.

The Compellent iSCSI ports are configured for MTU 9000.

The output of the suggested command is below. I am assuming qos-group 2 is the one for the storage, and it seems to be working. Ports 1/5 and 1/6 are the appliance ports connected to the storage.

#show queuing int eth 1/5
Ethernet1/5 queuing information:
  TX Queuing
    qos-group  sched-type  oper-bandwidth
        0       WRR             25
        1       WRR             25
        2       WRR             50

  RX Queuing
    qos-group 0
    q-size: 240640, HW MTU: 9000 (9000 configured)
    drop-type: drop, xon: 0, xoff: 240640
    Statistics:
        Pkts received over the port             : 0
        Ucast pkts sent to the cross-bar        : 0
        Mcast pkts sent to the cross-bar        : 0
        Ucast pkts received from the cross-bar  : 0
        Pkts sent to the port                   : 0
        Pkts discarded on ingress               : 0
        Per-priority-pause status               : Rx (Inactive), Tx (Inactive)

    qos-group 1
    q-size: 79360, HW MTU: 2158 (2158 configured)
    drop-type: no-drop, xon: 20480, xoff: 40320
    Statistics:
        Pkts received over the port             : 0
        Ucast pkts sent to the cross-bar        : 0
        Mcast pkts sent to the cross-bar        : 0
        Ucast pkts received from the cross-bar  : 0
        Pkts sent to the port                   : 0
        Pkts discarded on ingress               : 0
        Per-priority-pause status               : Rx (Inactive), Tx (Inactive)

    qos-group 2
    q-size: 90240, HW MTU: 9000 (9000 configured)
    drop-type: no-drop, xon: 17280, xoff: 37120
    Statistics:
        Pkts received over the port             : 530572339
        Ucast pkts sent to the cross-bar        : 530572339
        Mcast pkts sent to the cross-bar        : 0
        Ucast pkts received from the cross-bar  : 307587326
        Pkts sent to the port                   : 307778425
        Pkts discarded on ingress               : 0
        Per-priority-pause status               : Rx (Inactive), Tx (Inactive)

  Total Multicast crossbar statistics:
    Mcast pkts received from the cross-bar      : 191099

 

The operating system on the client side is Windows 2012 R2; there is no VMware.

Basically, it is a Cisco blade running Windows 2012. The storage, a Dell Compellent SC8000 controller, is directly attached via its iSCSI ports to the FI appliance ports.

The Dell iSCSI ports are configured with 9000 MTU. Details are in the attachment I sent earlier.

I also noticed something while pinging the storage with different packet sizes.

If I ping with packet size 8900, it goes through.


C:\Windows\system32>ping -f -l 8900 10.0.10.2

Pinging 10.0.10.2 with 8900 bytes of data:
Reply from 10.0.10.2: bytes=8900 time<1ms TTL=64
Reply from 10.0.10.2: bytes=8900 time<1ms TTL=64
Reply from 10.0.10.2: bytes=8900 time<1ms TTL=64
Reply from 10.0.10.2: bytes=8900 time<1ms TTL=64

Ping statistics for 10.0.10.2:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 0ms, Average = 0ms

But if I ping with 9000, it does not go through. Do we have to set the MTU in the QoS class / interface to 9216 to make it work?

C:\Windows\system32>ping -f -l 9000 10.0.10.2

Pinging 10.0.10.2 with 9000 bytes of data:
Packet needs to be fragmented but DF set.
Packet needs to be fragmented but DF set.
Packet needs to be fragmented but DF set.
Packet needs to be fragmented but DF set.

Ping statistics for 10.0.10.2:
    Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),

 

The config guides linked in the previous posts recommend setting "9216" on the UCS QoS classes and on upstream switches. That headroom is also why your ping fails: the Windows ping -l option sets the ICMP payload size, so a 9000-byte payload plus 20 bytes of IP header and 8 bytes of ICMP header makes a 9028-byte packet, which cannot pass a 9000-byte MTU unfragmented.
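
As a quick sanity check (assuming the end-to-end path MTU is exactly 9000), the largest payload that should pass unfragmented is 9000 - 28 = 8972 bytes:

C:\Windows\system32>ping -f -l 8972 10.0.10.2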

 

Please change this configuration and report back if your issue is resolved.
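
If you want to make the change from the UCSM CLI rather than the GUI, it should look roughly like this; the scopes are from memory of the UCSM CLI QoS configuration guide, so verify them against your UCSM release before committing:

UCS-A# scope eth-server
UCS-A /eth-server # scope qos
UCS-A /eth-server/qos # scope eth-classified platinum
UCS-A /eth-server/qos/eth-classified # set mtu 9216
UCS-A /eth-server/qos/eth-classified # commit-buffer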

Thank you Wes.

Will there be any network disruption when we change the MTU in the QoS class?

We need to change the MTU on the vNICs also, right?

Thanks,

 

 

There should be no impact from adjusting the QoS class to 9216. The vNICs will remain at 9000. Please refer to the screen captures in the configuration document.
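
On the Windows 2012 R2 side, you can also confirm the adapter itself is configured for jumbo frames with PowerShell. A sketch only: the "Jumbo Packet" display name and its value strings vary by NIC driver.

PS C:\> Get-NetAdapterAdvancedProperty -DisplayName "Jumbo Packet" | Format-Table Name, DisplayName, DisplayValue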

Thank you, Wes, for the clarification.

We are planning to test the QoS MTU change to 9216 next week. Hopefully that will resolve the issue.

Thank you very much for the help.

Regards,

Joy Thomas
