Is it no longer a requirement that the vPC peer keepalive use mgmt0?

MGStickler
Level 1

The first N7K/N5K environment that I implemented required that the vPC peer keepalive "signal" be sourced from (and possibly destined to) the mgmt0 port of the VDC (on the N7K) that is implementing vPCs.

Did that change with 5.X?

What is best practice?

We are seeing some strange behavior in an environment that I didn't design or implement, where a VRF was created for the vPC peer keepalive.

Thanks!

5 Replies

jdewberr
Level 1

Hi Mark,

I always recommend keeping mgmt0 available to answer SSH and other protocols. If and when the N5K box has problems with the filesystem and images, the mgmt0 interface still has networking at the loader and kickstart prompts.

I'd like to add a question here to help me better understand. On the Nexus 5000, are you using the 55xx platform with the L3 module? If you have the 55xx with the L3 module, you could set up a /30 circuit in a separate VRF (rough sketch below). Could you elaborate on the strange behavior? Are you having a problem with routes/connectivity?
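For reference, this is roughly how that dedicated-VRF keepalive would look on a 5548/5596 with the L3 daughter card. The VRF name, interface, and addresses below are placeholders for illustration, not your actual config:

vrf context vpc-keepalive
!
interface Ethernet1/31
  ! routed port for the keepalive (requires the L3 module)
  no switchport
  vrf member vpc-keepalive
  ip address 10.255.255.1/30
  no shutdown
!
vpc domain 10
  ! the other peer mirrors this with source/destination swapped
  peer-keepalive destination 10.255.255.2 source 10.255.255.1 vrf vpc-keepalive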

--

Joe

The first Nexus data center I built was a Nexus 7010 and 5020 environment. I used mgmt0 for the vPC peer keepalive because Cisco told me I HAD to use this interface for the vPC peer keepalive. This was over a year ago, and I could not get vPC to work until I took this advice. This data center was very stable.

I am now troubleshooting a very unstable environment that has a number of issues. It is strictly a Nexus 7010 environment with two N7Ks that have 4 VDCs each. They are running OTV as well. They have no Nexus 5Ks.

One VDC is for Admin, one is Core, one is aggregation (the only one with VPCs) and one is for OTV.

I'm just trying to eliminate anything that could be causing instability, so I was going to recommend that they move the vPC peer keepalive to mgmt0 rather than using a VRF.

How about CMP? I usually put that on the same out-of-band management network infrastructure that I put mgmt0 on.

You don't have to use mgmt0 as the peer-keepalive link anymore in the 5.x releases.

Generally, however, it is still recommended and better (IMHO) to use mgmt0 (an out-of-band interface) as the peer-keepalive link.
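For example, the usual mgmt0-based setup is something along these lines on each peer (addresses are made up; "management" is also the default VRF for the keepalive if you don't specify one):

interface mgmt 0
  ip address 192.168.100.11/24
!
vpc domain 10
  ! mgmt0 lives in the built-in management VRF
  peer-keepalive destination 192.168.100.12 source 192.168.100.11 vrf management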

However, there are times when it is required to use a physical interface for the peer-keepalive. One such example is relevant to your setup of using multiple VDCs. If you put two VDCs of the same N7K chassis in a vPC domain, your peer-keepalive link won't come up (if using mgmt0), since there is only one active physical mgmt0 interface. The mgmt0 interfaces of the non-system VDCs are virtual, piggy-backing off the system VDC's mgmt0. This can be observed on your out-of-band switch connected to the mgmt0 ports, since all the mgmt0 IPs will be using the same MAC address.
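If you want to check that on your own gear, a couple of commands should show it (the last one assumes a Catalyst-style out-of-band switch and a made-up port):

! in each VDC: compare the MAC reported for mgmt0
show interface mgmt 0
! and see what the keepalive is actually using
show vpc peer-keepalive
!
! on the out-of-band switch (Catalyst syntax), look at the learned MACs
show mac address-table interface GigabitEthernet1/0/10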

I have a similar setup in the lab for testing: 2x N7Ks, without cross linking, using 4 VDCs.

CMP is a different, funky thing. Think of it as a console port available via an Ethernet network. You cannot use it for your peer-keepalive link, since CMP runs completely separately. It runs its own mini OS, with a really limited config.

HTH

Excellent response. This relates to another post I just made. What are the guidelines when running vPCs between VDCs? Do both need a peer domain, peer link, and peer-keepalive link, or just the VDC that has the STP root, etc.?

VDCs should be treated as individual switches, as they have no relation other than being physically on the same chassis.

Ergo, the same configuration requirements must be met as if the two VDCs were two separate physical switches.

Yes, with VDCs this still means running physical cabling between the VDCs, even though they are on the same chassis (rough sketch below).
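So, as a sketch (domain number, VRF, ports, and addresses are just examples), each VDC carries its own full vPC config, cabled to the other VDC through the front panel:

! VDC-A side; VDC-B mirrors this with the keepalive addresses swapped
vrf context keepalive
!
interface Ethernet3/9
  ! routed port cabled across to the peer VDC for the keepalive
  no switchport
  vrf member keepalive
  ip address 10.1.1.1/30
  no shutdown
!
vpc domain 20
  peer-keepalive destination 10.1.1.2 source 10.1.1.1 vrf keepalive
!
interface port-channel 1
  switchport
  switchport mode trunk
  vpc peer-link
!
interface ethernet 3/1-2
  switchport
  switchport mode trunk
  channel-group 1 mode active

In other words, both VDCs need the peer domain, the peer link, and the keepalive; neither side gets to skip it just because the other one holds the STP root.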

hth