Nexus 9k - L2 trunk required with vPC peer link?

Palmer Sample
Level 4

Hi all,

I'm pretty new to Nexus switching and am configuring a new deployment using 9k's for our datacenter core, connected to a pair of 6880's for the campus / aggregation.

I can post more details (rudimentary diagram, etc) if needed, but this is more of a design question:

The pair of 9k's are connected via 4x 40G uplinks.  Each 9k is also connected to the upstream Cat 6k via vPC with 4x 10G links to VSS MEC.  I've configured the 4x 40G as a port-channel and the vPC peer link, with vPC peer keepalive via the mgmt0 port (each mgmt0 port connected to a port on the Cat 6k).  Reading the Nexus 7k vPC best practices guide at http://www.cisco.com/c/dam/en/us/td/docs/switches/datacenter/sw/design/vpc_design/vpc_best_practices_design_guide.pdf
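The configuration so far looks roughly like this (a sketch only - interface numbers, domain ID, and keepalive addresses below are placeholders, not my actual values):

```
feature vpc
feature lacp

vpc domain 10
  ! keepalive runs over mgmt0 in the management VRF
  peer-keepalive destination 10.0.0.2 source 10.0.0.1 vrf management

! 4x 40G bundled as the vPC peer link
interface Ethernet1/49-52
  channel-group 1 mode active

interface port-channel1
  switchport mode trunk
  vpc peer-link
```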

I see on page 27 that "We recommend that you create an additional Layer 2 trunk port-channel as an interswitch link to transport
non-vPC VLAN traffic."

Then on page 31, I see "The vPC peer-link is a standard 802.1Q trunk that can perform the following actions:
- Carry vPC and non-vPC VLANs..."

So which is it?

We have 4 FEXs which will be coming into play, and that's probably a topic for another question (best practices for host connectivity - LACP with vPC vs. Active/Standby links) - but realizing that some devices only have a single link (e.g. Integrated Lights-Out on the HP servers), will this non-vPC traffic be forwarded over the vPC peer link as well, or do I need to create a separate L2 trunk between the 9ks to provide connectivity for these hosts?

If I need to create another link, should I then split the 160G pipe into 2x 80G pipes, or leave all four 40G links connected and create a new trunk with a couple of 10G ports? I haven't read the details on how traffic is carried across the vPC peer link or what the utilization will be.

Please forgive my ignorance, I'm trying to get a grip on the Nexus switching details as quickly as possible.  If any additional information is required, please let me know!

Thanks!

-P


3 Replies

Reza Sharifi
Hall of Fame

Hi,

You can configure a separate physical link for the non-vPC VLANs, or you can add them to the existing vPC peer link, but then you would need to deploy the dual-active exclude command. See page 12 in this link:

http://www.cisco.com/c/dam/en/us/products/collateral/switches/nexus-7000-series-switches/C07-572830-00_Agg_Dsgn_Config_DG.pdf
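As a rough example (the domain ID and VLAN range are just placeholders), the command goes under the vPC domain:

```
vpc domain 10
  ! keep the SVIs for these non-vPC VLANs up on the secondary
  ! if the peer link fails, instead of suspending them
  dual-active exclude interface-vlan 100-110
```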

Also, from the 9k documentation:

You can configure the inter-switch link for a backup routing path in the following ways:

  • Create a Layer 3 link between the two vPC peer devices.

  • Use the non-VPC VLAN trunk with a dedicated VLAN interface.

  • Use a vPC peer link with a dedicated VLAN interface.

http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/6-x/interfaces/configuration/guide/b_Cisco_Nexus_9000_Series_NX-OS_Interfaces_Configuration_Guide/b_Cisco_Nexus_9000_Series_NX-OS_Interfaces_Configuration_Guide_chapter_0111.html#task_DE5A46D1A2A24578B5FF0915F03DBF06
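A sketch of the dedicated-VLAN-interface option (VLAN number, addressing, and the assumption that OSPF is your IGP are all placeholders - adjust to your environment):

```
vlan 3900
  name backup-routing

interface Vlan3900
  no shutdown
  ip address 192.168.100.1/30
  ! assumes OSPF is the IGP; this gives the peers a backup
  ! routed path if their uplinks to the 6880s fail
  ip router ospf 1 area 0.0.0.0
```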

HTH

Thanks, Reza.

I was reading a bit more, and it sounds like the vPC peer link is mainly used for control plane synchronization (MAC address table, etc.) and only for the data plane if needed - correct me if I'm wrong.  So would 'best practice' be to consider splitting the 40G uplinks so there are 2x 40G for the vPC peer link and 2x 40G for an L2 trunk for non-vPC traffic?

I believe I also read that the best practice would be to connect single-homed devices e.g. iLO and server management ports to the vPC primary device to avoid potential issues if there is a peer keepalive link loss causing the secondary device to disable the ports.  Is that also correct?  If so, my interpretation would be that, under normal circumstances, normal data traffic would not travel between the nexus switches... right?

... I thought I had this figured out, but the more I read, the more I'm second-guessing my design!

Thanks,

-P

Hi,

I was reading a bit more, and it sounds like the vPC peer link is mainly used for control plane synchronization (MAC address table, etc.) and only for the data plane if needed - correct me if I'm wrong.

You are correct.  There shouldn't be much traffic traversing the vPC peer-link.

So would 'best practice' be to consider splitting the 40G uplinks so there are 2x 40G for the vPC peer link and 2x 40G for an L2 trunk for non-vPC traffic?

2x 40Gig is way overkill, but to keep things consistent, 2 and 2 should be good.  Put two 40Gig links in one port-channel and the other two in a different port-channel.
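In configuration terms, the split might look like this (interface numbers and the allowed VLAN range are placeholders for illustration):

```
! 2x 40G for the vPC peer link
interface Ethernet1/49-50
  channel-group 1 mode active
interface port-channel1
  switchport mode trunk
  vpc peer-link

! 2x 40G as a separate L2 trunk for the non-vPC VLANs
interface Ethernet1/51-52
  channel-group 2 mode active
interface port-channel2
  switchport mode trunk
  switchport trunk allowed vlan 100-110
```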

I believe I also read that the best practice would be to connect single-homed devices e.g. iLO and server management ports to the vPC primary device to avoid potential issues if there is a peer keepalive link loss causing the secondary device to disable the ports.  Is that also correct?  If so, my interpretation would be that, under normal circumstances, normal data traffic would not travel between the nexus switches... right?

That is correct.

Just one more thing: for your vPC keepalive, it is recommended to use the mgmt0 ports and connect them to a 3rd switch or your out-of-band management switch. See figure 6 in the second link I posted.

For someone who is new to Nexus, you have learned it pretty well :) Don't second-guess yourself; you should be good to go.  If you can and have time, after setting it all up, test the redundancy to make sure you know how things behave in case of any failure in production. You don't want to be surprised.

HTH
