Nexus 5500 iSCSI QoS limitations

There are some limitations regarding QoS on the Nexus Fabric Extenders. One big limitation is that they currently cannot classify packets into qos-groups using ACLs or DSCP. As far as I know, on FEX ports you can only mark ALL packets with a single CoS value, or trust that they are already marked correctly?? We have an iSCSI environment that will have iSCSI hosts on the 5k's as well as the 2248TPs. Below is what I have come up with for QoS to prioritize the iSCSI traffic. Please let me know if you see any problems or areas for improvement.

ip access-list ISCSI-ACL
  10 permit tcp any any eq 860
  20 permit tcp any any eq 3260

class-map type qos match-all COS5-CMAP
  match cos 5
class-map type qos match-all ISCSI-CMAP
  match access-group name ISCSI-ACL

class-map type queuing GRP5-Q-CMAP
  match qos-group 5

policy-map type qos ISCSI-PMAP
  class ISCSI-CMAP
    set qos-group 5
  class class-default
    set qos-group 0

policy-map type qos GLOBAL-01-PMAP
  class COS5-CMAP
    set qos-group 5
  class class-default
    set qos-group 0

policy-map type queuing GLOBAL-01-IN-Q-PMAP
  class type queuing GRP5-Q-CMAP
    bandwidth percent 50
  class type queuing class-default
    bandwidth percent 50

policy-map type queuing GLOBAL-01-OUT-Q-PMAP
  class type queuing GRP5-Q-CMAP
    bandwidth percent 50
  class type queuing class-default
    bandwidth percent 50

class-map type network-qos GRP5-NQ-CMAP
  match qos-group 5

policy-map type network-qos GLOBAL-01-NQ-PMAP
  class type network-qos GRP5-NQ-CMAP
    pause no-drop
    mtu 9216
  class type network-qos class-default

system qos
  service-policy type queuing input GLOBAL-01-IN-Q-PMAP
  service-policy type queuing output GLOBAL-01-OUT-Q-PMAP
  service-policy type qos input GLOBAL-01-PMAP
  service-policy type network-qos GLOBAL-01-NQ-PMAP

interface Ethernet1/5
  service-policy type qos input ISCSI-PMAP

interface Ethernet100/1/1
  untagged cos 5
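Once applied, the classification and queuing can be checked with the standard NX-OS show commands (a sketch; the interface names match the example above):

```
! Per-queue counters and bandwidth allocation by qos-group
show queuing interface ethernet 1/5

! Which qos-group each class maps to, with per-class match statistics
show policy-map interface ethernet 1/5 type qos

! Confirm which policies are active under system qos
show policy-map system
```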

Prashanth Krishnappa
Cisco Employee

From a configuration perspective it looks OK. But is there any reason you are using no-drop for iSCSI, which is TCP traffic?

no-drop for TCP can cause problems when there is resource/buffer contention. TCP flow control (sliding window) should be sufficient, in my opinion. But if you are experiencing problems due to packet drops etc., I would suggest that be investigated first before implementing a no-drop class for TCP.
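If the no-drop behaviour were removed as suggested, the network-qos policy from the original post could be reduced to just the jumbo MTU setting, leaving TCP to handle loss (a sketch, reusing the class-map names from the original post):

```
policy-map type network-qos GLOBAL-01-NQ-PMAP
  class type network-qos GRP5-NQ-CMAP
    mtu 9216
  class type network-qos class-default
```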

Is pause no-drop not something that should be deployed for iSCSI traffic? I've seen a few Nexus 5000 examples that use a no-drop class for iSCSI and thought it probably made sense for this traffic.



Gustavo Novais

Hello, picking up from your config, I have a similar issue, but with FCoE traffic in the mix.

Some of the interfaces attached to the Nexus switch iSCSI, while others switch FCoE, never both. I'd like to apply a service policy per interface depending on the traffic they switch.

What I've found on 5.2.1N1 is that even when specifying a different queuing policy and applying it to the interface, the interface keeps the system QoS queuing policy.

I'd like that on interface A (iSCSI, qos-group 4) I have 50% bandwidth for iSCSI and 50% for class-default, and on interface B (FCoE, qos-group 2) I have 50% for FCoE and 50% for class-default.

I've defined two queuing policies, besides the one applied to system qos, to provide those values, but they just get ignored...

Is this normal behaviour? I was under the impression that interface-specific policies would override the system qos ones...

The temporary workaround was to define a single system QoS queuing policy with 40% iSCSI, 40% FCoE, and 20% for the rest...
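The workaround described above might look something like this (a sketch; the class-map names are made up, and the qos-group numbers follow the post):

```
class-map type queuing ISCSI-Q-CMAP
  match qos-group 4
class-map type queuing FCOE-Q-CMAP
  match qos-group 2

policy-map type queuing SYSTEM-Q-PMAP
  class type queuing ISCSI-Q-CMAP
    bandwidth percent 40
  class type queuing FCOE-Q-CMAP
    bandwidth percent 40
  class type queuing class-default
    bandwidth percent 20

system qos
  service-policy type queuing output SYSTEM-Q-PMAP
```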

Thanks for any insight

