With Lucien Avramov
Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to learn about design, configuration, and troubleshooting of the Cisco Nexus 2000, 3000, and 5000 Series with Cisco expert Lucien Avramov. Lucien Avramov is a technical marketing engineer in the Server Access Virtualization Business Unit at Cisco, where he supports the Cisco Nexus 5000, 3000, and 2000 Series. He has several industry certifications, including CCIE #19945 in Routing and Switching, CCDP, DCNIS, and VCP #66183.
Remember to use the rating system to let Lucien know if you have received an adequate response.
Lucien might not be able to answer each question due to the volume expected during this event. Remember that you can continue the conversation on the Data Center sub-community discussion forum shortly after the event. This event lasts through October 21, 2011. Visit this forum often to view responses to your questions and the questions of other community members.
Can you suggest any drawbacks or potential pitfalls in using a pair of Nexus 3064s for core routing/switching and a pair of 5596 switches for the access layer?
Also, do you know when to expect vPC to be available on the Nexus 3064 switches?
Thanks for your question.
There are limitations for L3 multicast traffic.
What would be the applications / usage of this type of design?
Any reason why you would prefer a higher tier with N3K instead of N5K with L3?
vPC is already shipping in the latest N3K code on CCO.
We have a small server room that requires 10Gig connectivity, plus several 6513s. A Nexus 7000 doesn't seem ideal for this location, so I decided to use a pair of 3064 switches as the aggregation point for approximately 10 6500s via 2 x 1Gig uplinks. The 6500s are for dense user access requirements. I would have used the 5596 switches as an alternative to the 3064s for the fiber aggregation, but I was told they will not support NetFlow in the routing module, nor is it on the roadmap.
The 5596 switches will serve 10Gig access for my virtual servers. If the NetFlow feature is offered down the road, I may use the 5500s instead.
Thank you Lucien!
Hi out there
We have a couple of Nexus 5010s and are low on ports. I was considering the 10Gig 6-port expansion module, but on the other hand I can get many extra ports through a 10Gig FEX instead. Are there any drawbacks to this? How many FEXes can a 5010 handle, and will it add some extra latency compared with the built-in ports?
best regards /ti
You can have up to 12 Fabric Extenders on the 50x0 family. So if you go with 10 2232s, that will give you up to 320 10GE ports, provided you tolerate a 4:1 oversubscription. There are no drawbacks as far as capability: a 10GE Fabric Extender port will provide you the same 10GE Ethernet or FCoE capability as a 10GE port on the Nexus 5000. It will add around 800 ns of latency. Do you have specific latency requirements?
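The port math above can be sketched as a quick calculation (the 32 host ports / 8 uplinks split assumes the Nexus 2232 port layout):

```python
# Port count and oversubscription for a 5010 with ten 2232 FEXes.
host_ports_per_fex = 32   # 10GE host-facing ports on a 2232
uplinks_per_fex = 8       # 10GE fabric uplinks to the parent 5010
fex_count = 10

total_host_ports = fex_count * host_ports_per_fex  # 10 x 32 = 320 ports
oversub = host_ports_per_fex / uplinks_per_fex     # 32:8 = 4:1

print(total_host_ports, oversub)  # → 320 4.0
```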
Well, regarding specific latency requirements: just as fast as possible. We are running iSCSI instead of FCoE (or FC), so "as fast as possible" might be the best answer here...
My concern is just that I have a pretty fast cut-through switch here, and if I add some extra latency with the FEX instead of using the built-in ports, it might have been better to buy the expansion module instead. I don't expect that I am going to use all the ports in the FEX (we are cabling directly from the Nexus to a 10Gig pass-through module of a Dell M1000 blade center, so we have 10Gig all the way through to the servers), but one never knows; better to have too many ports than to be low on ports.
best regards /ti
The latency on the 5010 is around 3.4 usec, and a FEX will add 0.8 usec if you put 10G ports on the FEX instead of the 5010, so that would be a choice for you to make. There is a trade-off: port count and port cost versus port latency.
Do you have a 5500 series? If so, make sure you are running 5.0(3) code on the Nexus 5000, since that is the version that supports 1GE on the 5500 series.
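Once on that code, setting a 5500 series port to 1GE is just a speed statement under the interface (interface number here is illustrative):

```
interface Ethernet1/1
  speed 1000
```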
Another question: I have added a vPC between two Nexus 5010s, with a 20G peer-link between them and a mgmt interface acting as the keepalive link. Now I have added another vPC between a remote, similarly configured pair of Nexus switches and tested whether this can be used as failover between two sites; it works fine. But what happens if my keepalive link fails (it is uplinked to a third switch where we run all the management links)?
best regards /ti
If your peer-keepalive fails and your peer-link is up, you are fine. If your peer-link fails and your keepalive is up, the secondary will suspend its vPC ports.
Here is a useful document, the vPC Operations Guide:
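For reference, a minimal sketch of the vPC domain configuration being discussed, with the peer-keepalive on the mgmt0 interface and a port-channel as the peer-link (domain ID, port-channel number, and addresses are illustrative, not taken from the poster's setup):

```
feature vpc

vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1 vrf management

interface port-channel 20
  switchport mode trunk
  vpc peer-link
```

The keepalive runs over a separate path (here the management network) precisely so the peers can tell a peer-link failure apart from a complete peer failure.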
OK, now I am trying to understand what you are telling me. The secondary will suspend its vPC ports; this means that the primary will continue to forward packets and the other Nexus, the secondary, will suspend its vPC ports, similar to running into a blocked port (STP). So, for example in my case with a vPC between a local and a remote set of Nexus switches, I will only run on one link?