ASK THE EXPERTS - NEXUS 5000 AND 2000 SERIES

ciscomoderator
Community Manager

Welcome to the Cisco Networking Professionals Ask the Expert conversation. This is an opportunity to learn how to troubleshoot the Nexus 5000 and 2000 Series with Lucien Avramov. Lucien Avramov is a Customer Support Engineer at the Cisco Technical Assistance Center. He currently works in the data center switching team supporting customers on the Cisco Nexus 5000 and 2000. He was previously a technical leader within the network management team. Lucien holds a bachelor's degree in general engineering and a master's degree in computer science from Ecole des Mines d'Ales. He also holds the following certifications: CCIE #19945 in Routing and Switching, CCDP, DCNIS, and VCP #66183.

Remember to use the rating system to let Lucien know if you have received an adequate response.

Lucien might not be able to answer each question due to the volume expected during this event. Our moderators will post many of the unanswered questions in other discussion forums shortly after the event. This event lasts through August 30, 2010. Visit this forum often to view responses to your questions and the questions of other community members.

32 Replies

Lucien,

I see you listed the compatibility link for 1Gb transceivers in this blog.

I bought the GLC-T-AX SFP (Axiom flavor) for the Nexus 5020 - this is for a trunk link back to my 3750 (CAT5) switch port.

I cannot get the link to come up...no lights, no indication of any kind that the two switches see each other.

Do you know if anyone else is having trouble using these Axiom SFPs? ...or am I missing something?

Thanks,

  Paul.

The Nexus is very sensitive to which SFPs you use.

You need to use Cisco-branded SFPs.

The compatibility guide for 1Gb (I listed only the 10Gb one earlier) is here:

http://www.cisco.com/en/US/docs/interfaces_modules/transceiver_modules/compatibility/matrix/OL_6981.html#wp139256

You can see that it lists the GLC-T, not the GLC-T-AX.

If you do a show interface or a show interface brief on your Nexus CLI, it will tell you the reason the link is down, for example "SFP validation failed".

Also, I will add here that on the 5010 only the first 8 ports can be downgraded to a speed of 1 Gbps, and on the 5020 the first 16 ports.
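
As a minimal sketch, downgrading one of those ports to 1 Gbps looks like this (the interface number here is just illustrative):

interface ethernet 1/1
  speed 1000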

Thanks for replying so quickly. I do see that the -AX flavor is not listed but the switch does recognize it...see below.

Do you see anything that would indicate that it is not supported?  I've also opened a TAC case to verify either way!

shonexus01# sho int ether1/1 transceiver details
Ethernet1/1
    sfp is present
    name is OEM
    part number is GLC-T-AX
    revision is 1.0
    serial number is MGT801992
    nominal bitrate is 1300 MBits/sec
    cisco id is --
    cisco extended id number is 4

    Invalid calibration

shonexus01#

Thanks - hopefully others looking to purchase 1Gb SFPs will read this note if it turns out that these SFPs from Axiom are not supported!

     Paul.

The problem is that your SFP is not Cisco; the name field reports OEM.

A proper example of output would be:

show int e1/1 transceiver details
Ethernet1/1
    sfp is present
    name is CISCO-MOLEX INC
    type is 10Gbase-(unknown)
    part number is 74752-9025
    revision is A
    serial number is MOC12432433
    nominal bitrate is 12000 MBits/sec
    Link length supported for copper is 1 m(s)
    cisco id is --
    cisco extended id number is 4

I forgot to include an example for 1Gb with the GLC-T (again, the GLC-T-AX is not a Cisco part):

show int e1/1 transceiver details
Ethernet1/1
    sfp is present
    name is CISCO-AVAGO
    type is 10Gbase-(unknown)
    part number is ABCU-5710RZ-CS2
    revision is
    serial number is AGM120527BX
    nominal bitrate is 1300 MBits/sec
    Link length supported for copper is 100 m(s)
    cisco id is --
    cisco extended id number is 4

Pavel Dimow
Level 1

Hello Lucien,

We are trying to migrate some C6500 QoS configuration to the Nexus 5000. It's a typical scenario when migrating from a standalone C6500 to VSS as distribution, with N5K and FEX 2148 as the access layer. However, we have some problems, mostly regarding QoS.

The first problem is that we can't perform packet DSCP marking (yes, we can classify); instead we need to use CoS marking.

Q1: Are there any plans in the near future to support DSCP marking on the N5K?

The other problem is that the Nexus lacks policing capabilities, and the documentation is not clear on this matter.

Q2: For example, is there any way we can achieve something like

police cir 10000000 bc 4000 be 4000 conform-action set-dscp-transmit af42 exceed-action policed-dscp-transmit violate-action policed-dscp-transmit

I see that we can guarantee bandwidth, but I can't find any info on what happens when congestion occurs.

Q3: If we don't use, and don't plan to use, FCoE at all, should we just "reserve" 0% of bandwidth for FCoE traffic? If not, what's the minimum we can reserve for the fcoe class?

class type queuing class-fcoe
  bandwidth percent 0

Q4: Regarding Q2, is it possible to remark traffic that exceeds the CIR? We can use the bandwidth command for the CIR, but I can't see anything else we can use for remarking.

Thank you

Q1: The 2148T hardware can only support CoS-based traffic classification.
You can still configure ACL-based classification on the 2148T interfaces (including DSCP); the actual classification will occur on the Nexus 5000.
I don't know the specifics about roadmaps for future DSCP support; I advise you to contact your Cisco Systems Engineer / account team to find out more. Also, note that the 2148T supports two classes of service and 2 queues. The 2200 series (2224, 2232, 2248) supports 6 hardware queues.
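
A minimal sketch of that ACL-based classification on the N5K (the ACL name, port number, and qos-group value are illustrative, not from a specific deployment):

ip access-list CRITICAL-APPS
  permit tcp any any eq 1521
class-map type qos CriticalAcl
  match access-group name CRITICAL-APPS
policy-map type qos Classify-Acl
  class CriticalAcl
    set qos-group 3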

Q2: You can only do CoS marking, not DSCP, so there is no way to police with DSCP either.

There are 3 policy types:
-qos: defines traffic classification rules. Attach point is system qos or ingress interface.
-queuing: sets the priority queue and deficit weighted round robin (DWRR). Attach point is system qos, egress, or ingress interface.
-network-qos: sets the system class type (drop / no-drop), MTU per class of service, buffer size, and marking. Attach point is system qos.

When the same type of service policy is attached at both system qos and an interface, the policy attached under the interface is preferred.
Qos and network-qos policy-maps are required to create new system classes.

As indicated above, network-qos policy is used for packet marking.

When congestion occurs, depending on the policy configuration:
-PAUSE the upstream transmitter for lossless traffic
-Tail drop for regular traffic when the buffer is exhausted
Priority flow control (PFC) or 802.3x PAUSE can be deployed to ensure lossless delivery for applications that cannot tolerate packet loss.
A buffer management module monitors buffer usage for the no-drop classes of service. It signals the MAC to generate PFC or PAUSE frames when buffer usage crosses a threshold.

The FCoE traffic is assigned to class-fcoe, which is a no-drop system class.
The other classes of service have normal drop behavior (tail drop) by default and can be configured as no-drop, as in the sketch below.
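
A minimal sketch of making a class lossless with pause no-drop in the network-qos policy (the class names and qos-group number are illustrative):

class-map type network-qos NoDrop-Class
  match qos-group 3
policy-map type network-qos NoDrop-Policy
  class type network-qos NoDrop-Class
    pause no-drop
system qos
  service-policy type network-qos NoDrop-Policy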

Q3: Exactly, you can use bandwidth percent 0 for the fcoe class if you don't plan on using it.

Q4: No remarking, due to platform limitations.

I want to use your question here to explain more about the QoS on Nexus 5000/2000 with configuration examples:

The service policy configured under “system qos” will be populated from the N5k to the FEX only when all the matching criteria are “match cos”.
If there are other match clauses, such as match dscp or match ip access-group, in the qos policy-map, the FEX won't accept the service policy and all the traffic will be placed into the default queue.

For the ingress traffic (server-to-network direction), if the traffic is not marked with a CoS value it will be placed in the default queue on the FEX.
Once the traffic is received on the N5k, it will be classified based on the configured rules and placed in the proper queue.

For the egress traffic (N5k to FEX to server), it is recommended to mark the traffic with a CoS value on the N5k so that the FEX can classify and queue the traffic properly.

Here is a complete configuration for the N5k and Nexus 2248 to classify the traffic and configure proper bandwidth for each type of traffic.
The configuration example only applies to the N5k and 2248. The configuration for the 2148 is slightly different, due to the fact that the 2148 has only two queues for user data. The Nexus 2248 has 6 hardware queues for user data, the same as the Nexus 5000. You can apply a similar configuration by reducing the number of queues used.

Class-maps for the global qos policy-map, which will be used to create the CoS-to-queue mapping:
class-map type qos voice-global
  match cos 5
class-map type qos critical-global
  match cos 6
class-map type qos scavenger-global
  match cos 1
class-map type qos video-signal-global
  match cos 4

This qos policy-map will be attached under “system qos”. It will be downloaded to the 2248 to create the CoS-to-queue mapping.
policy-map type qos classify-5020-global
  class voice-global
    set qos-group 5
  class video-signal-global
    set qos-group 4
  class critical-global
    set qos-group 3
  class scavenger-global
    set qos-group 2

class-map type qos Video
  match dscp 34
class-map type qos Voice
  match dscp 40,46
class-map type qos Control
  match dscp 48,56
class-map type qos BulkData
  match dscp 10
class-map type qos Scavenger
  match dscp 8
class-map type qos Signalling
  match dscp 24,26
class-map type qos CriticalData
  match dscp 18

This qos policy-map will be applied under all N5k and 2248 interfaces to classify all incoming traffic based on DSCP marking.
When the policy-map is applied under Nexus 2248 interfaces, the traffic will still be classified on the N5k.

policy-map type qos Classify-5020
  class Voice
    set qos-group 5
  class CriticalData
    set qos-group 3
  class Control
    set qos-group 3
  class Video
    set qos-group 4
  class Signalling
    set qos-group 4
  class Scavenger
    set qos-group 2

class-map type network-qos Voice
  match qos-group 5
class-map type network-qos Critical
  match qos-group 3
class-map type network-qos Scavenger
  match qos-group 2
class-map type network-qos Video-Signalling
  match qos-group 4

This policy-map type network-qos will be applied under “system qos” to define the MTU, marking, and queue-limit (not configured here).

policy-map type network-qos NetworkQoS-5020
  class type network-qos Voice
    set cos 5
  class type network-qos Video-Signalling
    set cos 4
    mtu 9216
  class type network-qos Scavenger
    set cos 1
    mtu 9216
  class type network-qos Critical
    set cos 6
    mtu 9216
  class type network-qos class-default
    mtu 9216

class-map type queuing Voice
  match qos-group 5
class-map type queuing Critical
  match qos-group 3
class-map type queuing Scavenger
  match qos-group 2
class-map type queuing Video-Signalling
  match qos-group 4

The queuing policy will be applied under “system qos” to define the priority queue and how bandwidth is shared among the non-priority queues.
policy-map type queuing Queue-5020
  class type queuing Scavenger
    bandwidth percent 1
  class type queuing Voice
    priority
  class type queuing Critical
    bandwidth percent 6
  class type queuing Video-Signalling
    bandwidth percent 20
  class type queuing class-fcoe
    bandwidth percent 0
  class type queuing class-default
    bandwidth percent 73

The input queuing policy determines how bandwidth is shared on the FEX uplinks in the direction from the FEX to the N5k.
The output queuing policy determines the bandwidth allocation for both N5k interfaces and FEX host interfaces.

system qos
  service-policy type qos input classify-5020-global
  service-policy type network-qos NetworkQoS-5020
  service-policy type queuing input Queue-5020
  service-policy type queuing output Queue-5020

Apply the service-policy type qos under the physical interface in order to classify traffic based on DSCP.
For EtherChannel members, the service-policy needs to be configured under the port-channel interface, as in the sketch after the interface configuration below.

interface eth1/1-40
  service-policy type qos input Classify-5020

interface eth100/1/1-48
  service-policy type qos input Classify-5020
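
For a port-channel, the same policy is attached under the port-channel interface (a sketch; the port-channel number is illustrative):

interface port-channel 10
  service-policy type qos input Classify-5020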

You can check whether the CoS-to-queue mapping is properly configured under the FEX interfaces with the command show queuing interface.
You can also use it to check the bandwidth and MTU configuration. The same command applies on the N5K.

N5k# sh queuing interface ethernet 100/1/1
Ethernet100/1/1 queuing information:
  Input buffer allocation:
  Qos-group: 0  2  3  4  5  (shared)
  frh: 2
  drop-type: drop
  cos: 0 1 2 3 4 5 6
  xon       xoff      buffer-size
  ---------+---------+-----------
  21760     26880     48640   
  Queueing:
  queue    qos-group    cos            priority  bandwidth mtu
  --------+------------+--------------+---------+---------+----
  2        0            0 2 3           WRR        73      9280
  4        2            1               WRR         1      9280
  5        3            6               WRR         6      9280
  6        4            4               WRR        20      9280
  7        5            5               PRI         0      1600
  Queue limit: 64000 bytes
  Queue Statistics:
  queue  rx              tx                  
  ------+---------------+---------------     
  2      113822539041    1             
  4      0               0             
  5      0               0             
  6      417659797       0             
  7      0               0             
  Port Statistics:
  rx drop         rx mcast drop   rx error        tx drop      
  ---------------+---------------+---------------+---------------
  0               0               0               0             
  Priority-flow-control enabled: no
  Flow-control status:
  cos     qos-group   rx pause  tx pause  masked rx pause
  -------+-----------+---------+---------+---------------
  0              0    xon       xon       xon
  1              2    xon       xon       xon
  2              0    xon       xon       xon
  3              0    xon       xon       xon
  4              4    xon       xon       xon
  5              5    xon       xon       xon
  6              3    xon       xon       xon
  7            n/a    xon       xon       xon
N5k#

Latest QoS configuration guide:

Hi,

Thank you for your reply. Regarding DSCP classification, are there any drawbacks when classification occurs on the N5K instead of on the FEX?

Regarding drawbacks, I'm not sure; at the end of the day, the traffic from/to the FEX will go through the N5K, as it is in its data path. Probably this involves more CPU usage on the N5K, but I don't see a major drawback.

About your previous post, you can remark traffic with CoS on the 5k.

For remarking of DSCP: it will be supported in the future. As a workaround, you can mark CoS instead of DSCP and then remap the CoS to the desired DSCP on an upstream switch, as in the sketch below.

As far as policing, more policing features are coming soon; again, I can't provide the timeframes here, but your Systems Engineer can.
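
A minimal sketch of that upstream CoS-to-DSCP remap on a Catalyst (3750/6500 style; the eight DSCP values, one per CoS 0-7, are illustrative):

mls qos
mls qos map cos-dscp 0 8 16 24 32 46 48 56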

Can you please provide us with an example config? As far as I can see, there is no option to remark traffic with CoS that exceeded the CIR.

To remark the CoS of traffic that exceeded the CIR, you would need policing on top of marking.

What I meant is that you can remark CoS from one value to another by matching the CoS on ingress and assigning it to another qos-group that sets a different CoS value.

If you look at my example above, that can be achieved with the match, then set qos-group, then set cos for that qos-group in the network-qos policy-map.
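
A minimal sketch of that CoS remark (the class names and values are illustrative): traffic arriving with CoS 3 is placed in qos-group 4, and the network-qos policy rewrites it to CoS 4.

class-map type qos Cos3-In
  match cos 3
policy-map type qos Remark-Cos3
  class Cos3-In
    set qos-group 4
class-map type network-qos Group4
  match qos-group 4
policy-map type network-qos Remark-NetQos
  class type network-qos Group4
    set cos 4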

ronbuchalski
Level 1

Lucien,

Two Questions:

1) Can you provide an update on allowing the Nexus 5K to support more than twelve N2Ks?  We were told that there was a roadmap plan to allow the 5K to support up to sixteen N2Ks.

2) I have an environment where Data Center VLANs exist in two adjacent locations. The new Data Center has an N7K core, and the old Data Center has a Cat4500 core. They are interconnected via 10Gb. Most of the L3 SVIs for the Data Center VLANs still reside on the Cat4500 core, as do the STP roots.

Within the new Data Center, the N5Ks are connected to the N7Ks via vPC. I would like to support port-channels on N2K-attached hosts, so I have enabled vPC on the N5Ks for this purpose. My concern is with STP root priority. With vPC enabled, does the STP root need to live on the N7K? The N5K? Or can it continue to live on the Cat4500 until the migration is complete?

Thank you,

-rb

1) Yes, it is on the roadmap and will be supported. I'm not able to disclose a date for this feature; the best I can do on this forum is to advise you to contact your Cisco Systems Engineer / Sales / account team, who can provide you the specifics offline.

2) The question here is just spanning-tree related; the vPC is just seen as a regular EtherChannel from the 7k and 4500 side.

When you take your 4500 out of production, spanning tree will reconverge and elect the new root.

If you move the root from the 4500 to the 7k beforehand, the same process will occur. So really, it shouldn't matter at this point.
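
For example, moving the root for a VLAN to the 7k ahead of time is just a bridge-priority change (a sketch; the VLAN and priority values are illustrative):

spanning-tree vlan 100 priority 4096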

This design guide covers the best practices for spanning tree on NX-OS with vPC on page 6:

http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/C07-572829-01_Design_N5K_N2K_vPC_DG.pdf

mario-leitao
Level 4

Posted on wrong discussion

NguyenT11
Level 1

Hi Lucien,

I have servers with CNA adapters for both IP and SAN connectivity and want to connect them to two Nexus 5Ks. Do you recommend using vPC from the 5Ks and aggregating the links on the server, or would using vPC-HM (assuming I use the Nexus 1000V) be a better route to go? Do you have a sample configuration of how I would connect the server using vPC from the 5Ks, assuming that I have 2 SAN fabrics (say VLAN 100 for fabric_A and VLAN 200 for fabric_B)?

Thanks
