
Splitting 6248-Fabric-Interconnect from Nexus 5k to 4500X - QoS Question

michael.lorincz
Level 4

Hi Community,

I have a customer with a PRIMARY pair of Fabric Interconnects (6248-FI) and a SECONDARY pair of Fabric Interconnects (6248-FI); both pairs currently connect to a pair of Nexus 5Ks (vPC). Each FI pair supports a UCS chassis, and the customer's VMs are spread across the two UCS chassis. The customer would like to physically relocate the SECONDARY pair of 6248-FIs and the associated UCS chassis. The plan is to move the uplinks (L2 trunk port channels) off the Nexus 5Ks and connect those FIs to the 4500X cores (VSS).

Jumbo frames are enabled on the port channels and interfaces. 

The customer's existing Nexus 5Ks are already set up for jumbo frames and QoS via system-level configuration. What I'd like to know is how best to translate the following configuration to the 4500X so that it also queues and prioritizes iSCSI traffic (VLAN 80).

!!!CURRENT GLOBAL CONFIG ON EXISTING NEXUS5Ks, I WOULD LIKE TO DETERMINE 4500X EQUIVALENT!!!!

class-map type qos class-fcoe
class-map type qos match-all ISCSI-CLASS
  match protocol iscsi
  match cos 4
class-map type queuing class-fcoe
  match qos-group 1
class-map type queuing ISCSI-CLASS
  match qos-group 4
class-map type queuing class-all-flood
  match qos-group 2
class-map type queuing class-ip-multicast
  match qos-group 2
policy-map type qos ISCSI-IN-POLICY
  class ISCSI-CLASS
    set qos-group 4
  class class-default
policy-map type queuing ISCSI-IN-POLICY
  class type queuing ISCSI-CLASS
    bandwidth percent 10
  class type queuing class-default
    bandwidth percent 90
policy-map type queuing ISCSI-OUT-POLICY
  class type queuing ISCSI-CLASS
    bandwidth percent 10
  class type queuing class-default
    bandwidth percent 90
class-map type network-qos class-fcoe
  match qos-group 1
class-map type network-qos ISCSI-CLASS
  match qos-group 4
class-map type network-qos class-all-flood
  match qos-group 2
class-map type network-qos class-ip-multicast
  match qos-group 2
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
    multicast-optimize
policy-map type network-qos ISCSI-NQ-POLICY
  class type network-qos ISCSI-CLASS
    set cos 4
    pause no-drop
    mtu 9216
  class type network-qos class-default
    multicast-optimize
system qos
  service-policy type qos input ISCSI-IN-POLICY
  service-policy type queuing input ISCSI-IN-POLICY
  service-policy type queuing output ISCSI-OUT-POLICY
  service-policy type network-qos jumbo

Any help would be appreciated!

4 Replies

Paul Chapman
Level 4
(Accepted Solution)

Hi Michael -

I'm sorry to say that the person who created this configuration put in a lot of work for not much gain.

  1. You can't create a no-drop class on the Catalyst, as it doesn't support PFC (Priority Flow Control).
     - Even on the Nexus you need peer devices that can send PFC signaling, so it's unlikely that the current configuration works as expected.
  2. iSCSI expects the network to be lossy and relies on TCP to provide consistent delivery.
  3. Guaranteed bandwidth only applies when there is congestion.
     - If you're experiencing congestion between a single B-series chassis and the storage, you may need to consider disjoint L2 or static pinning to isolate storage from other traffic.

There's no direct equivalent on the Catalyst for you. Basically, to approximate this, you need to tag the iSCSI traffic with CoS 4 as it ingresses the switch, then create a standard MQC policy that applies a 10% bandwidth guarantee to the interesting traffic (NOT a priority queue, as that may have unintended consequences).

I don't think this is a great use of time, but there you have it.

PSC

Hi Paul, thanks for taking the time to respond; I really appreciate it. I'm rather green with QoS and data center best practices, so please excuse my inexperience. I have a couple of questions for clarification.

RE #3: I don't believe there is congestion at the moment (relatively small load), but I'm curious what you mean by disjoint L2 and static pinning. Any chance you could elaborate?

Lastly, in an effort to provide some level of QoS (albeit not identical), I may opt to pursue tagging iSCSI as CoS 4 with an MQC policy. I'll caveat the reasons for that approach with the customer, but as a matter of understanding, could you let me know why you think it wouldn't be a great use of time? I suppose the only benefit would be bandwidth reservation in the event traffic load increases over time.

Oh, and how would you suggest I identify the iSCSI traffic on the 4500X at ingress? I don't believe it has a 'match protocol iscsi' class-map option like the Nexus does.

Thanks again!

Mike Lorincz

Hi Mike -

In terms of #3, you need to understand how the FIs deal with their uplinks. By default, the FIs expect to connect to a single, unique upstream network. A mechanism in the FIs prevents loops by designating one uplink port to receive broadcast, multicast, and unknown unicast traffic, while all of the other uplinks ignore that traffic. If we introduce a second, discrete upstream network (imagine adding an isolated DMZ switch to the mix), we run into a problem with that loop-prevention mechanism. To get around this, you create a disjoint L2 configuration. In this case you would have the data network and the iSCSI network.

Alternatively, let's assume that you have 2 uplinks from each FI to the upstream network and you are not using port channels. You could add a 3rd uplink, manually pin the iSCSI vNICs to it, and ensure that only that port is used. Needless to say, this is messier and more difficult than disjoint L2 (and that's not a walk in the park either).

In terms of tagging, it is preferable to tag inside UCS before the traffic gets to the upstream network. Basically, you would use the "Gold" QoS System Class (it could be renamed, or use any class that carries CoS 4), which would then be applied to the iSCSI vNICs through a QoS Policy.

In your Catalyst you end up with something along the lines of this:

class-map match-any ISCSI-CLASS
 match cos 4
!
policy-map ISCSI-POLICY
 class ISCSI-CLASS
  bandwidth percent 10
!
interface TenGigabitEthernet1/1
 description UCS Uplink
 switchport mode trunk
 spanning-tree portfast trunk
 qos trust
 service-policy input ISCSI-POLICY
 mtu 9170

As an option, you could insert an additional match based on an access list that identifies the L3 network involved in your iSCSI setup.
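For instance, a rough (untested) sketch of that ACL-based match - the ACL below keys on TCP port 3260, the standard iSCSI port, but you could just as easily match on the source/destination subnets of your iSCSI VLAN 80:

ip access-list extended ISCSI-ACL
 permit tcp any any eq 3260
 permit tcp any eq 3260 any
!
class-map match-any ISCSI-CLASS
 match cos 4
 match access-group name ISCSI-ACL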

I don't have a test environment to validate this on right now, so this is just an estimate of what I think is needed.  Test, Test, Test!
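Once the policy is applied, the standard show commands should tell you whether the class is actually catching traffic:

show class-map ISCSI-CLASS
show policy-map ISCSI-POLICY
show policy-map interface TenGigabitEthernet1/1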

PSC

This is great, thank you for the help.

MRL
