
L1 and L2 ports on Nexus 5K

RonaldNutter
Level 1

I am starting to bring up a Nexus 5K/2K configuration.  Our Cisco SE has told us that we need vPC connections between each of the 5K pairs.  We will also need a vPC keepalive connection.  He is suggesting that we use the management interface for this.  Although the Cisco docs I have say this interface isn't available, I know that it does work.  Are the ports labeled L1 and L2 available to use for a vPC keepalive connection?  I would prefer not to sacrifice the management Ethernet port for this task.

Ron

4 Replies

Oleksandr Nesterov
Cisco Employee

Hi Ronald

The vPC keepalive link is used for exchanging keepalives between the peers - that doesn't mean you need to dedicate a separate link just to keepalives.

The Cisco-recommended design is to connect the management ports to an out-of-band network (a dedicated switch) and configure them as the keepalive link. In that case the links won't serve ONLY as keepalive links - you can still manage your switches through them, gather SNMP statistics, etc.

You are not sacrificing the management links - you are adding functionality to them.
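
As a rough sketch (the addresses and domain number below are only placeholders - adjust them for your environment), the keepalive over mgmt0 looks something like this on each 5K:

feature vpc
!
! example addressing - the peer switch mirrors this with the addresses swapped
interface mgmt0
  ip address 10.0.0.1/24
!
! mgmt0 lives in the built-in "management" VRF
vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1 vrf management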

You can of course use regular Nexus Ethernet ports instead, although that is not recommended. In that case it would be better to put those links in a separate VRF to avoid mixing the keepalive traffic with the rest of your routes.
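
If you do go that route, a minimal sketch of the dedicated-VRF version would be along these lines (interface, VRF name and addressing are only examples, and a routed port on the 5500 assumes the Layer 3 module/license is in place):

vrf context keepalive
!
! example port and addressing - peer switch uses the other /30 address
interface Ethernet1/32
  no switchport
  vrf member keepalive
  ip address 192.168.100.1/30
!
vpc domain 10
  peer-keepalive destination 192.168.100.2 source 192.168.100.1 vrf keepalive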

Hope that helps,

Alex

Prashanth Krishnappa
Cisco Employee

The L1 and L2 ports are used for the Fabric Interconnects (FI) in UCS, which have a similar architecture to the Nexus 5K, but they are not active on the Nexus 5000/5500. As Oleksandr says, your options for the keepalive are

1) To use the mgmt0 interface

2) An in-band SVI
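
For option 2, a rough sketch of an in-band SVI keepalive would be something like the following (VLAN, addresses and domain number are just examples, and ideally the keepalive VLAN should not be carried over the vPC peer-link):

feature interface-vlan
!
vlan 900
  name vpc-keepalive
!
! example addressing - peer switch uses the other /30 address
interface Vlan900
  no shutdown
  ip address 172.16.90.1/30
!
vpc domain 10
  peer-keepalive destination 172.16.90.2 source 172.16.90.1 vrf default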

rravergalileo
Level 4

I've gone back and forth between the two options in the deployments I've done.  I've seen deployment guides that recommend one way and other deployment guides that recommend the other.  My question is: why make the peer keepalive link depend on another device or two?  If I'm going to run my management connections out to another switch, I'm going to put them on A- and B-leg switches in case of failures.  That creates two different places where a failure can take down your vPC peer keepalive and restrict what configuration you can do and activate.

In my 7K deployments I've taken a port off the 48-port gig line card on each switch and directly connected them.  If the line card goes out I'm going to be worrying about other things, and I've seen my vPC peer keepalive stay up longer because I don't have to worry about other switches and their configuration.  On my 5K deployments I had to pull it back to two different switches because I don't have the L3 license on them.  I can see advantages and disadvantages for both, but I really think a case can be made for either.  Just weigh the pros and cons in your environment and make the appropriate decision.

If the vPC keepalive link goes down due to an outage in the "management" network where the 5K management cables normally run, that does not by itself cause an outage.  The vPC keepalive link only needs to be up in two scenarios: 1) when the vPC peer-link is being established, and 2) when the vPC peer-link goes down.  If the keepalive link is down when the peer-link goes down, both switches will believe they are the sole surviving switch and both will take the vPC primary role, which results in a very harmful dual-active vPC scenario.  Outside of those two scenarios, the keepalive link does not affect anything and can even be disconnected (if you dare) without any adverse consequences.

Most deployments use the management interfaces on the 5Ks for the vPC keepalive.  I usually do that if I have a good management network, especially because you don't need a lot of fault tolerance on the keepalive.  However, in one instance I did not have control of the management network, so instead I took two ports on the front of each 5K, inserted some cheaper 1-Gig SFPs, and turned them into a layer-3 port-channel placed in a VRF I named vpckeepalive.  I admit this is over-engineered, but I had ports to spare.  In 7K deployments I always use a layer-3 routed two-port port-channel across diverse modules for the vPC keepalive.
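
For what it's worth, that layer-3 port-channel setup was roughly the following (port numbers, VRF name and addressing are placeholders, and on a 5K it assumes the Layer 3 capability is present):

feature lacp
vrf context vpckeepalive
!
! two front-panel ports bundled into a routed port-channel
interface Ethernet1/31-32
  no switchport
  channel-group 99 mode active
!
! example addressing - peer switch uses the other /30 address
interface port-channel99
  no switchport
  vrf member vpckeepalive
  ip address 10.254.254.1/30
!
vpc domain 10
  peer-keepalive destination 10.254.254.2 source 10.254.254.1 vrf vpckeepalive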
