Nexus vPC domain layer to legacy IOS core design practices

DAVE GENTON
Level 2

Looking for best practices when a customer purchases new access-layer 5k/2k pairs in a vPC domain while the core is still a pair of non-VSS 4500s or 6500s. The typical core setup is dual 4500s or 6500s trunking all VLANs between the two devices. We are now adding the 5ks via vPC: core1 gets a port channel containing two links, one to each 5k, configured as a vPC on the 5k pair. This is duplicated on core2, which has its own port channel to a second vPC configured on the 5k pair.

Now, typically those core devices are the STP root and secondary root, since the uplinks are L2 and the cores hold the SVIs for each VLAN. Per the design guides the vPC domain should be STP root and secondary, but some customers are hesitant to allow this. What changes, if any, should be made on the 4500/6500 core when it is L2-connected and provides L3 via SVIs? Because this is just a single access-layer pod being added alongside existing STP-attached access switches, the core inter-switch trunk cannot be removed, which means each vPC creates a loop and one vPC must block. When I connect it up, however, I'm not seeing the vPCs block at all unless bridge assurance is enabled on the vPC, which is not good, although it does put a port channel into blocking state.

When customers are requesting Nexus 5k/2k pairs with vPC connectivity, but the core is not being replaced yet with a vPC-aware upgrade, how should STP and VLANs be treated on the core and the Nexus? The design guides only speak to L2 south with vPC to the access layer and L3 north, which is not common for me to see. Mixed STP and vPC is not documented well, and with no lab to test in, documentation is paramount for guidance. Using best practices (STP root primary/secondary on the 5ks, a vPC toward core1's port channel, a vPC toward core2's port channel, the core's typical inter-switch link, STP L2 links to other access devices, and L3 uplinks to firewalls and routers), both vPCs are forwarding on the 5ks, yet reachability southbound to the servers is unavailable when connected this way, the northbound path to the core is unstable, and STP recalculates often.
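
To make the question concrete, the interconnect described above looks roughly like this in configuration terms (a sketch only; channel numbers, the VLAN range, and descriptions are placeholders, not the customer's actual values).

On the 5k pair (vPC domain and peer-link already in place):

interface port-channel11
  description uplink to core1, one member link from each 5k
  switchport mode trunk
  switchport trunk allowed vlan 10-20
  vpc 11

interface port-channel12
  description uplink to core2, one member link from each 5k
  switchport mode trunk
  switchport trunk allowed vlan 10-20
  vpc 12

On each core (non-VSS 4500/6500), just an ordinary two-member port channel with one link to each 5k (the encapsulation line applies to the 6500; newer 4500 supervisors are dot1q-only and will not take it):

interface Port-channel11
  switchport
  switchport trunk encapsulation dot1q
  switchport mode trunk
  switchport trunk allowed vlan 10-20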

Thanks in advance for assistance with these decisions, and for best practices or tested and deployed scenarios for STP priorities and roles between NX-OS and IOS devices acting as core/distribution and access connected via Layer 2.

dave

6 Replies

Surya ARBY
Level 4

I tested this scenario this week with 5.2(1)N1(1) running on Nexus 5k.

No problems seen; here is my vPC domain template:

vpc domain 101
  role priority 200
  system-priority 200
  peer-keepalive destination x.y.z.t
  auto-recovery

A vPC is just seen as a standard port channel by the STP process.
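
A quick way to see that on the 5ks is to compare the vPC view and the spanning-tree view of the same uplink (the VLAN and channel numbers here are placeholders):

show vpc brief
show port-channel summary
show spanning-tree vlan 10
show spanning-tree interface port-channel 11 detail

The vPC shows up on each peer as its port channel, and spanning-tree lists Po11 as one logical port, the same as any other port channel.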

You can still use a loop-based design on the Catalysts, or convert each Catalyst to a loop-free design with "L3-on-a-stick"; the latter is probably the best option, as STP will not block any link.

Usually I leave the STP root on the Catalysts (they are root / backup root) to keep ISSU available on the 5k.
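
If the Catalysts stay as root / backup root, the STP side is just the usual macros on the two cores, with the 5ks left at their default bridge priority (the VLAN range is a placeholder):

On core1:
  spanning-tree vlan 10-20 root primary
On core2:
  spanning-tree vlan 10-20 root secondary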

I totally understand that, but the legacy core should remain the STP root, not the Nexus "pod"; yet it's only documented for the Nexus to be STP root in each pod, assuming Layer 3 uplinks to the core, which is not the case here. They have multiple pods carrying the same VLANs, so the Nexus being STP root is not optimal. Further, the core is not vPC aware, so while I can use vPC on the Nexus to attach to both core nodes, each core node requires its own port channel, and in every attempt to get the Nexus port channels connected to the core during the very short change windows we have, much of the data center goes unreachable until the cables between the Nexus and the core are unplugged. I am looking for much more than the vPC domain config, my friend; that's not even the question asked, nor a concern. I am after overall best practices for STP and vPC when adding a vPC pod to an existing core over Layer 2, to be specific.
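
For reference, a basic set of checks on the 5ks right after the uplinks go live, to see where things fall apart during a window like that, would be something along these lines (standard NX-OS show commands; the VLAN is a placeholder):

show vpc
show vpc consistency-parameters global
show port-channel summary
show spanning-tree vlan 10
show spanning-tree summary

The spanning-tree summary output is also one place where bridge assurance or other BPDU inconsistencies tend to show up if that is what is isolating the pod.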

thx

Hi Dave,

I have the same problem. Did you find any solutions?

Peter

Just got back from testing, responding, check next msg bro

danrya
Level 1

Here's a link with the topology that you're referencing:

http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/C07-572829-01_Design_N5K_N2K_vPC_DG.pdf

Look at the bottom of page 8 "Non-vPC Aggregation Layer"

Dan

Thanks Dan, I had actually read that prior to even posting this, but I just arrived home from the installation and some of the testing in the customer data center. I have deployed this scenario many times before, hence the posting. This time, however, while doing the same scenario, I removed the vPC portion from the 5ks north to the core. Without vPC, with just plain port channels from each 5k to each core 6509, it seems to survive all failover scenarios better and more cleanly.
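
In case it helps anyone reading later, the non-vPC variant is roughly this on each 5k individually, with no vpc statement on the uplink port channels so STP is free to block the redundant paths (channel numbers, descriptions, and the VLAN range are placeholders):

interface port-channel21
  description local links to core1 only, no vPC
  switchport mode trunk
  switchport trunk allowed vlan 10-20

interface port-channel22
  description local links to core2 only, no vPC
  switchport mode trunk
  switchport trunk allowed vlan 10-20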

Has anyone here had a chance to lab this up and step through failures? I'd like to know if STP changes topology when I lose a single 10G physical member in my port channel. Does the cost change on the port channel when its bandwidth is cut in half, or does it continue to report the bandwidth of all members despite one being down? I've never seen or had a chance to test whether NX-OS recalculates the bandwidth, adjusts the cost accordingly while that physical link is down, and sends out a TCN. Anyone?
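
One way to take that variable off the table, however NX-OS handles the bundle bandwidth, is to pin the STP cost on the uplink port channels so a member failure cannot change the advertised cost (a sketch only; the channel numbers and the cost value are placeholders):

interface port-channel11
  spanning-tree cost 200
interface port-channel12
  spanning-tree cost 200

Comparing show spanning-tree interface port-channel 11 detail before and after shutting one member link should also show directly whether the cost moves and whether a topology change gets counted.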

dave