This document describes general recommendations to optimize convergence and ensure stable and deterministic operation of a Virtual Port Channel topology.
There are no specific requirements for this document.
The vPC peer link is a standard 802.1Q layer 2 trunk which potentially carries all VLANs. It also carries regular control plane traffic such as HSRP hellos, STP BPDUs, etc. The vPC Peer Link also carries the critical CFS (Cisco Fabric Services) traffic, which is marked with a CoS value of 6.
The vPC Peer Link is considered critical for vPC operation; therefore steps should be taken to ensure that this link operates in the most reliable and resilient way possible. As such, the general Peer Link recommendations are as follows:
The recommendation to use a minimum of 2 x 10GE links, spread across two different linecards, for the vPC peer link ensures that a single 10GE linecard failure will not result in a vPC failover situation, where the system would have to rely on backup mechanisms such as the Peer-Keepalive link or Spanning Tree.
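For illustration, a minimal Peer Link configuration along these lines might look as sketched below. The port-channel number, member interfaces (one port on each of two linecards) and VLAN handling are examples only, and the vPC feature and domain are assumed to be configured already.

interface Ethernet1/1
  switchport
  switchport mode trunk
  channel-group 1 mode active
interface Ethernet2/1
  switchport
  switchport mode trunk
  channel-group 1 mode active
!
interface port-channel 1
  switchport
  switchport mode trunk
  vpc peer-link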
The recommendation on dedicated rate mode is made to ensure that oversubscription does not become an issue on the vPC Peer Link. The Nexus 7000 32 port 10GE linecard (N7K-M132XP-12) has the ability to disable three out of four 10GE links within a given port group, resulting in full 10Gbps line rate capability on the remaining port in the group. The rate mode on a port can be configured using the following CLI commands:
interface ethernet <slot/port>
  rate-mode dedicated
Full details of rate mode and port groups are available in the Cisco Nexus 7000 Series linecard documentation on Cisco.com.
Bridge Assurance is an enhancement to the Spanning Tree Protocol that enables the sending of BPDUs on all operational network ports, including alternate and backup ports. If a port does not receive a BPDU within a specified time period, the port moves into the blocking state. In order to provide additional protection in the event of unexpected reliance upon Spanning Tree, the Bridge Assurance and UDLD features should be enabled on all vPC Peer Link connections.
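As a sketch of this recommendation (reusing the Peer Link port-channel and member interfaces assumed in the earlier example), Bridge Assurance and UDLD could be enabled as follows. Note that Bridge Assurance takes effect only on interfaces configured as spanning tree network ports.

feature udld
!
! Bridge Assurance runs on spanning tree "network" ports
interface port-channel 1
  spanning-tree port type network
!
! UDLD is enabled per physical member interface
interface Ethernet1/1
  udld enable
interface Ethernet2/1
  udld enable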
The vPC Peer-keepalive Link (PK-link) is used to provide protection against dual active scenarios in the event of the primary Peer Link being lost. If loss of the Peer Link takes place, the PK-link is used to determine the status of the opposite peer (in other words, to determine whether the loss of connectivity is due to link failure or node failure).
The PK-Link uses a simple heartbeat between the vPC peers; these messages are sent every 2 seconds, and a 3 second hold timeout applies upon loss of the primary Peer Link.
In order of preference, the following types of interface should be used for the vPC PK-link:
1. Dedicated Link (1Gbps is sufficient) using dedicated VRF
2. mgmt0 interface (shared link with management traffic)
3. Routed over L3 infrastructure (least preferred)
In the event that the chassis is not equipped with any 1GE linecards (i.e. the chassis supports 10GE only), then option 1 becomes less desirable - it is considered inefficient and expensive to dedicate a 10GE connection solely for the PK-link. In this case, option 2 (mgmt0 interface) should be considered.
NOTE: If the mgmt0 interface is used for vPC peer-keepalive functionality, these interfaces must be connected together using an intermediate switch. This is because in a dual Supervisor system, only one management port is active at any one time (e.g. either slot 5 or slot 6). If the mgmt0 interfaces are connected back-to-back without a switch, peer-keepalive connectivity can be lost following a Supervisor switchover, which will cause issues with vPC.
One topology which should not be considered is where the vPC PK-link is routed across the vPC Peer Link. This defeats the purpose of having a dedicated PK-link - the PK-link is designed to act as a backup for the primary Peer Link, therefore it does not make sense to route the PK-link across the Peer Link.
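As an illustration of option 1, a dedicated link placed in its own VRF could be configured as sketched below; the VRF name, interface and addressing are assumptions for the example. For option 2, the peer-keepalive command would instead reference the mgmt0 addresses and the management VRF.

vrf context vpc-keepalive
!
interface Ethernet8/1
  no switchport
  vrf member vpc-keepalive
  ip address 10.255.255.1/30
  no shutdown
!
vpc domain 10
  peer-keepalive destination 10.255.255.2 source 10.255.255.1 vrf vpc-keepalive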
vPC member links are the interfaces taking part in the port channel itself; in other words, the connections to the downstream switch. The recommendations below apply to the vPC member links and the associated vPC parameters; a basic member link configuration is sketched first.
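As an illustrative sketch only (the vPC number, port-channel number and interface are examples), a member link towards a downstream switch is configured in the same way on both vPC peers:

interface Ethernet1/10
  description connection to downstream access switch
  switchport
  switchport mode trunk
  channel-group 20 mode active
!
interface port-channel 20
  switchport
  switchport mode trunk
  vpc 20

The vPC number must be identical on both peer switches; it does not need to match the vPC domain ID, and the port-channel number itself may differ between the two peers.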
In order for vPC to function, the system must elect a primary and a secondary peer device. By default, these roles are elected automatically by NX-OS; however, it is beneficial to manually control which device assumes each role.
The recommended practice for configuring role priorities is to ensure that the primary vPC peer is the same device used as the Spanning Tree root bridge and HSRP active peer. This can be achieved by lowering the role priority (default value is 32667), using the following commands:
vpc domain <domain-id>
  role priority <priority-value>
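As a sketch of this practice, the domain ID, priority values and VLAN range below are examples only; the lower role priority value wins the primary role.

! On the intended primary peer (also the STP root bridge and HSRP active router)
vpc domain 10
  role priority 4096
spanning-tree vlan 10-20 priority 4096
!
! On the intended secondary peer
vpc domain 10
  role priority 8192
spanning-tree vlan 10-20 priority 8192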
The vPC System Priority parameter corresponds to the LACP System Priority value and is used to ensure that the vPC peer devices, rather than the attached partner switch, control which ports are aggregated. In the case of LACP, it is possible to have up to 16 Ethernet ports in a channel, however only eight of these ports can be active at any one time, with the remaining ports operating in standby mode. If there are more than eight ports in the channel, the switch with the better (lower) LACP system priority decides which ports are placed into the active channel.
It is recommended to manually configure the vPC System Priority to ensure that the vPC peer devices are the 'primary' devices in terms of LACP. It is also important that the vPC System Priority is identical on both switches in the pair. vPC System Priority is configured using the following commands:
vpc domain <domain-id>
  system-priority <priority-value>
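For example, assuming vPC domain 10 and an arbitrary priority value of 2000 (lower is preferred; the value itself is an assumption for the sketch):

! Apply identically on both vPC peer switches
vpc domain 10
  system-priority 2000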
There are a number of configuration parameters which must be identical across the two vPC peer switches; failure to comply with this requirement will result in the vPC being suspended. These parameters can be verified on each switch as shown below.
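The relevant parameters, and whether they currently match between the peers, can be checked with the show vpc consistency-parameters command, for example (port-channel 20 here refers to the member link sketched earlier):

show vpc consistency-parameters global
show vpc consistency-parameters interface port-channel 20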
The default NX-OS behaviour is to not allow any changes to be made to vPCs while the vPC peer link is in a failed state. The intention is to preserve the consistency of the configuration, but in some cases it may be desirable to make changes while the peer link is down, for example provisioning new vPC-attached servers when one of a pair of Nexus 5000s is out of service.
On the Nexus 5000 you can modify the default vPC behaviour when the peer link is down on the primary vPC switch. The default behaviour on the primary vPC switch, after the peer link goes down, is to keep vPCs down once they have flapped and not to bring up newly configured vPCs. The following command allows newly configured vPCs, and existing vPCs that have flapped, to be brought up while the peer link is down:
vpc domain <domain-id>
  peer-config-check-bypass
NOTE: The command comes into effect when the peer link is down and the vPC role has been determined to be primary. This command has no effect on the behaviour of the secondary switch, which disables its vPCs in the event of peer-link failure.
Nice document.
Hope you can help me understand the scenario below:
Suppose I have 2 N7Ks running vPC, and I have configured both N7Ks as the STP root by manually setting the priority for all VLANs to 0.
Now my questions are:
- Is this setup recommended?
- What problems could be caused by this configuration, such as loops?
- What happens in terms of STP to devices in non-vPC VLANs that are connected to both chassis? I know that in vPC VLANs, BPDUs are sent by the operational primary N7K with the vPC system MAC as the bridge ID.
Thanks
Sandeep Rawat
What is the impact of not setting the STP port type of the vPC peer link to 'network' when Bridge Assurance is enabled globally on a Nexus 5K?
One question:
Since vPC has a role election: if the primary switch reloads and the secondary switch becomes the operational primary, does this process trigger a role re-election?
Thanks
Lance
Thank you for this useful document. I have a question about the MTU of the vPC peer link (on a Nexus 7706). What can be the bad effect if I have some vPCs operating at MTU 9216, but the vPC peer-link port-channel is still at the default MTU 1500. It seems to work, but I guess if I had any orphaned ports then jumbo frames could not cross the vPC peer-link.
So what would be the effect of changing the MTU on the vPC peer-link on-the-fly? Assume I have no orphaned ports at the moment, and that the vPC peer keepalive is up and running.
Thanks
Kevin Dorrell
Hello,
My question is about the vPC PK-link.
In order of preference, the following types of interface should be used for the vPC PK-link:
1. Dedicated Link (1Gbps is sufficient) using dedicated VRF
2. mgmt0 interface (shared link with management traffic)
3. Routed over L3 infrastructure (least preferred)
these interfaces must be connected together using an intermediate switch.
But as I understand it, that intermediate switch would be a point of failure. As I understand it, on a Nexus 9372 I can connect both mgmt0 interfaces without an intermediate switch, or use a 10G interface on both switches.
Hi Artem,
You are entirely correct!
This has been the source of much confusion for the past 5 years due to guides like this one that failed to account for more than one line of Nexus switch.
Scattered in design and deployment guides for the 1 and 2 RU Nexus models, you will find the recommendation to use the mgmt0 port for keepalive. This is the best practice for all Nexus switch lines that lack dual supervisor modules.
Note that you will still need the intermediate switch if you wish to actually manage the Nexus via mgmt0. If management is in-band, directly connecting the mgmt0 ports from Nexus 1 to Nexus 2 is recommended.
Hello.
Sorry, but I am confused about configuring vPC with Layer 3. I can find different recommendations for configuring vPC with routing, but only for the N5K and N7K.
But what about the N9K? I have two N9372TX switches running NX-OS. I need to connect the Nexus switches and a 3Com switch using Layer 3. What would be the best practice for this?
1) Using an SVI on the Nexus, or
2) Using vPC and an L3 port-channel interface?
I also need to connect an ASA to the N9Ks with redundancy (vPC, EtherChannel, or L3 port-channel).
Thanks
Sounds interesting, let us know if you got this to work.
Regards,
It's always a good idea to choose interfaces from multiple modules for the peer link; if a particular module fails, the other module will still work. If anyone is interested, you can check the vPC configuration and back-to-back vPC configuration post.
Dears,
When you add member ports to a port channel, does the vpc <number> have to be the same as the vpc domain <number>?
For example:
Nexus1
vpc domain 10
interface port-channel 10
  vpc 10
interface e1/1
  description connection to downstream sw1
  vpc 10   (could this be a different number from vpc domain 10? Why?)
Nexus2
vpc domain 10
interface port-channel 10
  vpc 10
interface e1/1
  description connection to downstream sw1
  vpc 10
1) What happens if I change the vPC number on a member port so that it does not match the vPC domain number?
2) To associate member ports, do only the vPC numbers on the interfaces need to match, without considering the vPC domain number?
Regards!