
Ask The Expert: Nexus 5000 and 2000

ciscomoderator
Community Manager

With Lucien Avramov

Welcome to the Cisco Support Community Ask the Expert conversation. Get an update on Nexus 5000 and 2000 from Lucien Avramov. Lucien is a technical marketing engineer in the Server Access Virtualization Business Unit at Cisco, where he supports the Cisco Nexus 5000, 3000 and 2000 Series. He was previously a customer support engineer and Technical Leader in the Cisco Technical Assistance Center. He holds a bachelor's degree in general engineering and a master's degree in computer science from Ecole des Mines d'Ales as well as the following certifications: CCIE #19945 in Routing and Switching, CCDP, DCNIS, and VCP #66183. 

Remember to use the rating system to let Lucien know if you have received an adequate response. 

Lucien might not be able to answer each question due to the volume expected during this event. Remember that you can continue the conversation on the Data Center sub-community discussion forum shortly after the event. This event lasts through March 9, 2012. Visit this forum often to view responses to your questions and the questions of other community members.

27 Replies

Hi Lucien, another question regarding FabricPath and vPC+

Do I need separate port channels/links between the two CORE/Distribution switches for the vPC+ peer link and FabricPath? Can I use the same link that I have for FabricPath for the vPC+ peer link?

This vPC+ setup will be used as an L2 link for an existing DC. Is this a supported topology where the vPC+ goes to two different 3750s at the access layer of the existing DC?

The Nexus 7009s at CORE/Distribution have F2 line cards.

Thank you

Lisa

Great question. You don't need a separate link between the core/distribution switches for the vPC+ peer-link and FabricPath; they can be the same link. The vPC+ peer link is configured as a FabricPath core port, so you can use the same link for both FabricPath and the vPC+ peer-link.

If your 3750 switches are in a stack using a cross-stack port channel, this is supported. Otherwise, you will not be able to do this with two independent switches, which appears to be the case as you describe it.
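
As a rough illustration only (a minimal sketch with made-up interface and domain numbers, and with prerequisites such as 'install feature-set fabricpath' and 'feature vpc' omitted), a single port channel acting as both the FabricPath core port and the vPC+ peer-link looks roughly like this in NX-OS:

feature-set fabricpath
vpc domain 10
  fabricpath switch-id 1000    ! emulated switch ID that makes this a vPC+ domain
interface port-channel 1
  switchport mode fabricpath   ! the vPC+ peer-link is a FabricPath core port
  vpc peer-link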

I have another question, Lucien:

How does load balancing with FCoE work?

For classical LAN traffic it is SRC/DST MAC (the default), as we have known it for years. What is it for FCoE?

Thank you

Lisa

On the Nexus 5000, the default load balancing mechanism on the LACP port-channel is source-destination. If we leave it in this state, all the FCoE traffic will take the same link in the port channel when the Nexus 5000 is forwarding frames to the upstream device.

To enable the Nexus 5000 to load balance using exchange IDs, we configure it for 'source-dest-port' load balancing.

Nexus5000(config)# port-channel load-balance ethernet ?

  destination-ip    Destination IP address

  destination-mac   Destination MAC address

  destination-port  Destination TCP/UDP port

  source-dest-ip    Source & Destination IP address   --------> SID/DID

  source-dest-mac   Source & Destination MAC address

  source-dest-port  Source & Destination TCP/UDP port --------> SID/DID/OXID

  source-ip         Source IP address

  source-mac        Source MAC address

  source-port       Source TCP/UDP port
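
To switch to exchange-based hashing (SID/DID/OXID), the corresponding configuration command would be, for example:

Nexus5000(config)# port-channel load-balance ethernet source-dest-port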

jefflingle
Level 4
Lucien,

I'm deploying 5596UPs in a vPC design with my servers each connecting via 4 x 10 Gb links (2 iSCSI SAN / 2 LAN). Some of these servers are ESX hosts that are part of a cluster. I have been reading up on VN-Link, VN-Tag, and Adapter-FEX, and I understand that some of the benefits of using a 1000v and VN-Tag-capable NICs over the ESX vSwitch are easier management and monitoring, but are there any performance improvements with this, or anything else worth noting?

Thanks.

The performance between the dVS and the 1000v should be similar. You get many extra features with the Nexus 1000v that you don't get with the VMware dVS.

If you need higher performance, you can look at the Adapter-FEX solution, as it can give you up to a 30% performance improvement compared to software switches (so, depending on the type of workload your VMs run, you may benefit from Adapter-FEX technology).
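
If you do go down that path, a very rough Adapter-FEX sketch on a Nexus 5500 would look something like the following (assuming a VN-Tag-capable adapter in the server; the port-profile name, VLAN, and interface number are only placeholders):

install feature-set virtualization
feature-set virtualization
vethernet auto-create
port-profile type vethernet VM_DATA
  switchport mode access
  switchport access vlan 100
  state enabled
interface ethernet 1/10
  switchport mode vntag      ! server-facing port toward the Adapter-FEX capable NIC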

Thanks,

Are there any docs you can link me to explaining the Adapter-FEX performance in more detail?

Not yet, it's in the works.

Thanks for the help. What is the best way for me to find out when that doc is released? Is it something you can send to me?

Absolutely! Please send me a private message, and also keep an eye on the white paper section of the Nexus 5000 page on CCO.

nikisal
Level 1

hi

We are connecting 5548s and 5596s to ESXi hosts and cannot get multiple VLANs to trunk. Is this a Nexus issue, or is it on the server/VMware side?

Using trunks is the most common practice when connecting ESXi hosts to switches. Take a look at the port configuration and make sure you have 'switchport mode trunk'. Then look at 'show interface e1/x' (where x is the port going to your ESX host) and make sure the state is 'connected'; if not, the reason will be indicated. Make sure you configure the list of VLANs on the ESXi side as well, and you should be good to go.
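
A minimal trunk-port sketch toward an ESXi host (the interface number and VLAN list are only examples; adjust them to your environment):

interface ethernet 1/5
  description ESXi host uplink
  switchport mode trunk
  switchport trunk allowed vlan 10,20,30
  spanning-tree port type edge trunk
  no shutdown

'show interface ethernet 1/5 trunk' then shows which VLANs are allowed and actually forwarding on that port.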

IZZYJOV76
Level 1

I have a Nexus N5K-C5672UP that is stuck at login.

(c) Copyright 2016, Cisco Systems.
N5K-C5672UP BIOS v.2.1.7, Thu 06/16/2016

Booting kickstart image: bootflash:/n6000-uk9-kickstart.7.0.2.N1.1.bin

...............................................................................
...................................Image verification OK

Booting kernel......

 

I'm trying to get into ROMMON to see if I can load the kickstart image, but none of the keys work to get into ROMMON.

 

Or let me know if you have any other tricks up your sleeve.

 

Thanks

 
