
vPC Keepalive link

thekid1970
Level 1

Hello,

I was wondering if anybody can assist me with some vPC design best practices. I'm trying to get my head around the design and put some engineering/configs together before my equipment arrives. I have two Nexus 7010 chassis arriving soon.

I will configure the peer link using two 10G ports between the 7Ks. Now, how should I go about configuring the keepalive link? My questions:

1. Use a single dedicated copper link between the mgmt ports on the supervisors for the keepalive?

2. Use a copper port from a 10/100/1000 line card (N7K-M148GT-11) for the keepalive?

This network will also have a dedicated mgmt switch through which all the devices are managed. If I go with option 1 above, how would I then use that port to manage the device?

I hope this makes sense and someone can assist me.

Thanks,

JR 


darren.g
Level 5


JR.

Either will work, but you've hit the nail on the head - if you use the management port on the Sup for the peer keepalive, then you can't connect it for management purposes.

From memory, if you configure the vPC keepalive without options, it assumes the "management" VRF context for its updates (the config guide specifies you need to use a different routing context from your "main" context for the keepalive traffic).
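As a minimal sketch of that mgmt-port variant (addresses made up here, and assuming feature vpc is already enabled and the two mgmt0 ports can reach each other across your management network):

interface mgmt0
  ip address 192.168.99.1/24

vpc domain 1
  peer-keepalive destination 192.168.99.2 source 192.168.99.1 vrf management

Leaving "vrf management" off gives the same behaviour, since the management VRF is the default for the peer-keepalive.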

I went for option 2 of your choices - took a port from the M148GT card in each N7K and used that for the keepalive - not so much because I wanted the management port free, but because I have redundant Sups, and nobody could really tell me what would happen if I configured the keepalive to use the management port of SUP1 and it failed over to SUP2. :-)

If you choose to use your management ports for the keepalive, then you could simply put one of your 1 gig ports from the M148GT module into your "management" VLAN and connect it to your dedicated switch - but that kinda defeats the purpose of out-of-band management.

You do have a third option for out-of-band management - each Supervisor module has a "CMP-MGMT" port intended for true out-of-band management - you could use your management ports for the keepalive, then use the CMP-MGMT ports for your OOB management switch connection.

So, you could have int mgmt0 as your peer keepalive, and interface cmp-mgmt module (x) connected to your management switch - which would save you using the M148GT card.
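If I remember the Sup1 CMP syntax correctly - the slot and addresses here are made up, and on a 7010 the Sups live in slots 5 and 6 - it's along these lines:

interface cmp-mgmt module 5
  ip address 10.250.0.51/24
  ip default-gateway 10.250.0.1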

Just remember to put whichever port you use into the appropriate VRF - you run into all kinds of trouble getting vPC to work if you don't. :-)

Cheers

Darren,

I really appreciate your feedback. So you went with configuring a port on the M148GT card on both 7Ks, and that ended up being your keepalive?

In your design, did you use mgmt 0 on one of the Supervisors (one on each 7K), connected to a switch, to manage the 7Ks?

If not, what did you do with those ports?

I'm just trying to come up with a design. I know the design consists of installing a dedicated switch to handle all of the managed devices. So I'm just wondering whether I should add the keepalive link using the mgmt 0 ports, or keep that separate using the M148GT card. My only requirement is that I want to be able to manage both 7Ks however I configure this scenario. Just an FYI, I also have 2 Sups per 7K.

Can you maybe give me a sample config of how you configured your keepalive on the M148GT ports?

I'm looking forward to seeing more of your input.

Thanks,

JR

Hi John,


Yes, I have used a port on one of my M148GT cards in each N7K for the keepalive link.

I don't use an out-of-band management solution - instead, I simply created an SVI on VLAN1 with a management IP range and put both N7Ks into it (along with VLAN1 on all my access switches, which are connected using vPCs). The management ports on my Sups are unused, as are the dedicated OOB ports, although I do have IP addresses configured on them so I can plug in directly if necessary.

I do this for the keepalive link (slot 8 is an M148GT card in this switch):

interface Ethernet8/48
description VPC Peer Keep-Alive link
no switchport
vrf member keepalive
ip address 10.255.254.1/24
no shutdown

vpc domain 1
role priority 1
peer-keepalive destination 10.255.254.2 source 10.255.254.1 vrf keepalive interval 400 timeout 3
peer-gateway
reload restore delay 300

And, on the second switch of the pair, the following (slot 4 also an M148GT)

interface Ethernet4/48
description VPC Peer Keep-Alive link
no switchport
vrf member keepalive
ip address 10.255.254.2/24
no shutdown

vpc domain 1
peer-keepalive destination 10.255.254.1 source 10.255.254.2 vrf keepalive interval 400 timeout 3
peer-gateway
reload restore delay 300

You also have to specify the VRF on each switch - it's a single line:

vrf context keepalive

which gets you a separate routing table specifically for the keepalive transactions, thus:

nexus# sh ip route vrf keepalive
IP Route Table for VRF "keepalive"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]

10.255.254.0/24, ubest/mbest: 1/0, attached
*via 10.255.254.1, Eth8/48, [0/0], 28w3d, direct
10.255.254.1/32, ubest/mbest: 1/0, attached
*via 10.255.254.1, Eth8/48, [0/0], 28w3d, local
nexus#
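Once both sides are configured, it's worth sanity-checking the keepalive with the standard show commands (output omitted here). Note that in the vpc domain config above, the interval is in milliseconds and the timeout in seconds, so "interval 400 timeout 3" means a hello every 400 ms with a 3-second dead timer:

nexus# show vpc peer-keepalive
nexus# show vpc brief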

The reason I didn't use the mgmt ports on the Supervisors for the keepalive is simple - in a dual-Sup environment, what happens if the supervisor fails over for some reason? Your peer keepalive link goes away, which is a Bad Thing. :-) It's not really an issue in a single-Sup environment - if your Sup fails, you've got bigger things to worry about than your peer keepalives - but I couldn't really get a straight answer on what would happen if I DID use the mgmt ports, so I decided against it.

I manage my switches through my normal network link. Each switch (including access switches) has something similar to this configured on it:

interface Vlan1
no shutdown
delay 10
description Management IP network
no ip redirects
ip address 10.250.1.1/24

and when I need to telnet into the device I simply telnet to that address.

VLAN1 is not used for *any* other traffic - everything else, even unused ports, is in different access VLANs - so I'm reasonably confident that switch management traffic is isolated from everything else.

Cheers.

Darren

Hi Darren,

The reason I didn't use the mgmt ports on the Supervisors for the keepalive is simple - in a dual-Sup environment, what happens if the supervisor fails over for some reason? Your peer keepalive link goes away, which is a Bad Thing.

I think you misunderstood how the management interface works in a dual-SUP N7K. The management port will not go away; it will fail over to the active SUP.

In reality, the management port on the standby SUP is a standby port; only the management interface on the active SUP accepts traffic. During a switchover, the newly active SUP takes over all the management traffic. Of course, this also assumes that you are connecting all the management interfaces into a separate OOB network.
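A generic way to check which SUP is currently active and confirm the switchover behaviour (standard show commands, nothing specific to this setup):

N7k# show module
N7k# show system redundancy status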

HTH,

jerry


Jerry

No, I understood perfectly what happens when the failover happens - perhaps you don't understand how connecting two supervisors' worth of management ports and using them for the keepalive would work.

Consider - SUP1 on switch 1 connects to SUP1 on switch 2, and SUP2 on switch 1 connects to SUP2 on switch 2:

                    SUP1, SW1          SUP2, SW1

                      active             standby

                        |                   |

                        |                   |

                      active             standby

                    SUP1, SW2          SUP2, SW2

This is fine as long as SUP 1 is primary on both switches - keepalives work happily because they're physically connected to each other.

But what happens if SUP 1 in switch one fails, and fails over to SUP 2?

                    SUP1, SW1          SUP2, SW1

                      failed             active

                        |                   |

                        |                   |

                      active             standby

                    SUP1, SW2          SUP2, SW2

Now we have SUP2 in switch 1 with the IP address of the keepalive link active, but it's *not* physically connected to the active IP address on switch 2 - which is currently running on SUP1.

Now, you *could* run these connections through a standalone switch so that all four ports connect together in one big lump and it doesn't matter which SUP has the "active" IP address of the keepalive link - but that adds complexity, goes against the best practice guide for vPCs from Cisco, and you also have to consider - what happens if the switch being used to amalgamate these four ports fails?

Much, much easier to use a port on another card and remove the risk of losing the vPC keepalive.

Cheers.

Darren

Now, you *could* run these connections through a standalone switch so that all four ports connect together in one big lump and it doesn't matter which SUP has the "active" IP address of the keepalive link - but that adds complexity, goes against the best practice guide for vPCs from Cisco, and you also have to consider - what happens if the switch being used to amalgamate these four ports fails?

It didn't go against the BP for vPC. Connecting all the management interfaces to a separate OOB network is actually the Nexus BP for management. This will separate your management traffic during any data traffic outage.

Connecting the management links to the OOB network makes no difference compared to connecting the keepalive link via an interface off the M148 LC. The same situation applies if the LC fails.

Regards,

jerry

jeye wrote:


It didn't go against the BP for vPC. Connecting all the management interfaces to a separate OOB network is actually the Nexus BP for management. This will separate your management traffic during any data traffic outage.

It did when I put mine together. :-)

Connecting the management links to the OOB network makes no difference compared to connecting the keepalive link via an interface off the M148 LC. The same situation applies if the LC fails.

Perhaps - but I've got a lot more M148 ports, and by using one of them I didn't have to pay for an extra switch for OOB and keepalive traffic.

Cheers

It is your call if you choose to do that.

Regards,

jerry

Good morning Darren and Jerry,

I really appreciate the input from both of you. After reading both of your responses, I have a couple of different ways of going about this.

Since we bought a switch that's going to be dedicated to mgmt traffic for a bunch of devices, it sounds like that's the way to go. I just need to figure out whether I need to account for 2 or 4 ports on that switch in this configuration/design, to serve as the link or links for my keepalive to the 7Ks. Is configuring it this way also best practice?

1: I guess my design should take mgmt 0 from BOTH supervisors on each 7K (a total of 4) to create my keepalive, and plug them into 4 ports on my switch?

2: Or do I just use one mgmt 0 from each 7K?

Regarding question 1, if I went that route, what would an example of that configuration look like? And for question 2, what would that configuration look like?

Thanks,

JR

Option 1 is the way to go if you have a dedicated management switch.

The following config will be applied to both SUPs in the same chassis:

vrf context management
  ip route 0.0.0.0/0 x.x.x.x

interface mgmt0
  vrf member management
  ip address x.x.x.x/x

vpc domain 1
  role priority 1
  peer-keepalive destination y.y.y.y source x.x.x.x vrf management
  peer-gateway

HTH,

jerry

Jerry,

Thanks for your quick response; it's helping me out a lot. OK, after reading your config, I put together my own so it makes sense to me. I would appreciate it if you could look it over and see if I'm on the right track. I also have a couple of questions.

The configuration steps for the first switch, Cisco Nexus 7000 Series Switch 1, are:

Configure the management interface IP address and default route.

N7k-1(config)# int mgmt 0

N7k-1(config-if)# ip address 172.25.182.51/24

N7k-1(config-if)# vrf context management

N7k-1(config-vrf)# ip route 0.0.0.0/0 172.25.182.1

Enable vPC

N7k-1(config)# feature vpc

Create the vPC domain and set the role priority.

N7k-1(config)# vpc domain 1

N7k-1(config-vpc-domain)# role priority 1

Configure the peer keepalive link. The management interface IP address for Cisco Nexus 7000 Series Switch 2 is 172.25.182.52.

N7k-1(config-vpc-domain)# peer-keepalive destination 172.25.182.52 source 172.25.182.51 vrf management

2nd SUP – do I just copy and paste the same config for the mgmt 0 port, or do I need to enter a separate IP address for that mgmt 0 port? This is still a little fuzzy to me. I would assume this is the standby port?

The configuration steps for the second switch, Cisco Nexus 7000 Series Switch 2, are:

N7k-2(config)# int mgmt 0

N7k-2(config-if)# ip address 172.25.182.52/24

N7k-2(config-if)# vrf context management

N7k-2(config-vrf)# ip route 0.0.0.0/0 172.25.182.1

N7k-2(config)# feature vpc

N7k-2(config)# feature lacp

N7k-2(config)#vlan 101

N7k-2(config)# vpc domain 1

N7k-2(config-vpc-domain)# peer-keepalive destination 172.25.182.51 source 172.25.182.52 vrf management

I guess on switch 2 I DO NOT need to enter a role priority; it will go to the default value, which is 32667? Or should I enter role priority 2?

Thanks,

JR

If you like, you can change the role priority to something like 1000. It is your call.

Other than that, your vPC peer keepalive link setup looks okay.
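Once it's up, a quick way to confirm which switch became the vPC primary and what priority it is using (a generic check, nothing specific to your addresses):

N7k-1# show vpc role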

Regards,

jerry

Jerry,

Sorry for the late response. Thanks for all your assistance.

I would always use a dedicated port on the 5K for the peer keepalive. It is too important a link to run through an OOB management switch. There is always the possibility of someone unplugging cables from that switch, and guess what? You would never know that your peer keepalive links are down. The peer keepalive can go down and you would not notice any issues until the peer links fail. If your peer links are down and you don't have that keepalive link active, you will have a split-brain situation, which would be disastrous. Do a direct connection from one port to another port on the 5Ks. Use a twinax cable, which is 1/3 the price of a GLC-T copper module. So what if you have to use one port on each 5K? I would rather do that than take a chance on my OOB switch going down, or on someone who doesn't know better unplugging cables from that OOB switch. If the keepalive link is directly connected to a port on the 5K, a reasonable engineer would never dare unplug anything that is connected to his core switches. :)

Moreover, you should always make that port a routed layer 3 port instead of an SVI, and put the layer 3 interface into a separate VRF. The reason is this: if you make it an SVI, it will work - until you lose your peer link. When you lose your peer link, the secondary switch is instructed to shut down all of its vPC member ports and all of its SVIs, to prevent a loop. The problem is that the secondary switch would also shut down the SVI you created for your keepalive, which would in effect take down your peer keepalive link. Now, not only have you lost your peer links, but your safety net, the peer keepalive, is also down. Now you have a dual-active scenario.
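To sketch that out - the interface number, VRF name, and addresses here are made up, and this assumes a platform that supports routed ports (on a 5K that means an L3 module) - the routed-port-in-its-own-VRF version looks like this:

vrf context vpc-keepalive

interface Ethernet1/32
  description vPC peer keepalive - direct, routed, own VRF
  no switchport
  vrf member vpc-keepalive
  ip address 10.255.255.1/30
  no shutdown

vpc domain 10
  peer-keepalive destination 10.255.255.2 source 10.255.255.1 vrf vpc-keepalive

Because the keepalive rides a routed port in its own VRF, the secondary switch's SVI shutdown logic never touches it.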
