

Can I connect the same Layer 2 switch, under the same Layer 2 VNI, to different sets of VTEP Layer 2 gateways?


    We want to connect one blade server internal switch module to two different sets of VTEP Layer 2 gateway switches:

    One port-channel (two member ports) connects to a vPC peer pair (VXLAN VTEP Layer 2 gateway leaf switches).

    Two more ports connect to another vPC peer pair (Layer 2 VTEP gateways) using traditional spanning tree, so one port is forwarding and one is blocked.

    VLAN 100 on the traditional ports and on the port-channel (toward the vPC pair) of the blade server switch module will be bridged by the same VXLAN Layer 2 VNI across different VTEPs. That creates a Layer 2 loop through the blade internal switch and the VXLAN network.

    Is this possible, and will the loop be blocked automatically?


 thank you



For a vPC VTEP with a port-channel to the servers it will work fine, but for non-vPC VTEPs (active/standby) you may want to use a feature called Layer 2 Gateway Spanning Tree Protocol, which works in combination with EVPN ESI multihoming.
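For reference, the ESI multihoming side of that on Nexus 9000 looks roughly like the following sketch; the ES number, system MAC, and port-channel number here are placeholders, not from the original thread:

```
! Global: enable EVPN ESI multihoming (NX-OS)
evpn esi multihoming

! On the port-channel facing the dual-homed device
interface port-channel10
  switchport mode trunk
  ethernet-segment 100
    system-mac 0000.0000.0001
```

Both leaves sharing the same ES number and system MAC advertise the segment as a single Ethernet segment in EVPN, which is what lets the fabric handle the active/standby attachment without classic STP blocking.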
Micheline Murphy

Hello Andy--I'm not sure what design considerations you are trying to satisfy, but I would not recommend your proposed design.  There are a few reasons why I make that recommendation:


First, the whole point of a vPC ingress into a VXLAN fabric is to provide L2 redundancy without having to deal with STP stealing away half (or more) of the network.  A properly running vPC will provide attached blades L2 connectivity that is loop-free, redundant, and very quick to converge in the event of a failure.  By adding a third link to another L2 device you basically force STP back into the equation.
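The vPC attachment itself is the standard config; a minimal sketch (domain number, keepalive addresses, and port-channel numbers are placeholders):

```
feature vpc

vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1
  peer-switch
  peer-gateway

! Peer-link between the two leaves
interface port-channel1
  switchport mode trunk
  vpc peer-link

! Port-channel down to the blade switch module
interface port-channel20
  switchport mode trunk
  vpc 20
```

With both member links in a single vPC port-channel, the blade sees one logical upstream switch and both links forward, so no STP blocking is needed on that attachment.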


Second, VXLAN provides ample redundancy for default gateway access through distributed anycast gateway.  You can present the same IP for any VLAN's default gateway on every VTEP if you need to. 
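On NX-OS the distributed anycast gateway is just a shared virtual MAC plus a flag on the SVI, applied identically on every VTEP; a sketch (the MAC, VRF name, and addressing are placeholders):

```
feature interface-vlan

! Same virtual MAC on every VTEP in the fabric
fabric forwarding anycast-gateway-mac 0000.2222.3333

interface Vlan100
  no shutdown
  vrf member TENANT-A
  ip address 10.1.100.1/24
  fabric forwarding mode anycast-gateway
```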


Finally, your design introduces a level of complexity and potential for failure that you might not have anticipated. 


 I'd like to hear what your design drivers are, though, and maybe there is a better solution. 


Hope this is helpful.  MM


         Thank you!

        We are facing a migration situation: moving from an old network (Catalyst 6500 switches) to a leaf-spine VXLAN network (the core is a 9508, the leaves are 93180YC-EX).
    We want to connect two temporary L2 trunk links from each border leaf (bleaf) to each old core 6500, and make sure every Layer 2 VLAN on the old network is connected to the corresponding VXLAN Layer 2 VNI. We will also create a temporary L3 link and use OSPF to import the old network's OSPF routes into one VRF.
    Right now the blade servers are connected to two 6500s in a spanning tree active/standby state. We want to connect a port-channel (a new interface on the blade server switch module) to the VXLAN bleaf VTEP vPC at the same time. When we migrate a blade server off the old 6500 network, we just shut down the old interface on the 6500 and traffic will move to the new bleaf vPC interface. But this design creates a temporary spanning tree loop between the blade server, the two old 6500 switches, and the new VXLAN network (the L2 VNI also bridges the VLAN, creating another loop through the VXLAN network).
Because the business on the old network is very important, the customer won't allow a long migration window.
     Last week I created a test network, connecting two 3560 switches to two bleaf switches (not vPC) to simulate our migration method, and found some problems. I will create another discussion thread.

Hey there Andy--OK, just making sure I have my head wrapped around your topology and migration plan right.  You've got two old network switches that are active/standby for a blade server.  They are running STP amongst them.  The plan is to connect each old switch to the VXLAN fabric via an L2 and an L3 link.  Then connect the blade server to the VXLAN fabric via a port-channel, and then cut away the old links to the old switches?  Is this accurate?


What are the blades doing?   Is it possible to migrate their workload to the VXLAN fabric first, which would enable you to move the blades without worrying about a drop in service?  That might be the easier way to do things.


Just thinking out loud, I would think that you would need some way to make the old network believe that the border leaf vPC pair is the new gateway and new STP root.  Are you running HSRP or some FHRP on the old switches?  Can you have the vPC pair join the HSRP group and then adjust it so that the bleaf becomes the new active?  You'd need to do the same thing with STP, unfortunately.  At least you probably won't have to bounce the switches, since both STP and HSRP will run a new election without a failure of the active if a new BPDU shows up.  If the bleaf vPC pair becomes the new STP root and the new active gateway, then you should be able to prune away the old network connections without a problem as STP will shutter them anyway.
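The root/active takeover described above would look something like this on the bleaf vPC pair; the VLAN, group, priorities, and addresses are illustrative only (lower priority wins the STP root election, higher priority wins HSRP):

```
! Make the bleaf pair the STP root for the migrated VLAN
spanning-tree vlan 100 priority 4096

! Join the existing HSRP group and preempt the old active
feature hsrp
interface Vlan100
  ip address 10.1.100.3/24
  hsrp 1
    ip 10.1.100.1
    priority 150
    preempt
```

As noted, neither protocol needs the old boxes bounced: the higher-priority HSRP hello and the superior BPDU each trigger a new election on their own.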


What's your other problem?









A better migration would be to attach the old switches to the VXLAN fabric but leave the SVI up on the old switches using an appropriately sized trunk link. Attach the old network and the VXLAN fabric to the same device providing egress (i.e. core) via L3 and an IGP. 
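The L3 attachment can be as simple as a routed link into the IGP on each side; a sketch in NX-OS syntax, with interface, process ID, and addressing as placeholders:

```
! Routed link from the core toward the VXLAN fabric bleaf
interface Ethernet1/1
  no switchport
  ip address 192.168.1.1/30
  ip router ospf 1 area 0.0.0.0

router ospf 1
```

A matching routed link from the core toward the old network means both paths live in the same IGP, which is what makes the later cutover a routing event rather than an STP event.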


Migrate your workload to the VXLAN fabric but do not turn up anycast default gateways quite yet.  The migrated workload will continue to use the SVI on the old switches until you're ready to cut over.  When you're ready, no shut the anycast gateways and shut the SVIs.  The L3 cutover will occur automatically in your IGP, a much faster convergence than STP and you won't have to dicker with STP at all.
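The cutover itself is then just two small changes, in order (VLAN number is a placeholder):

```
! On the VTEPs: bring up the pre-staged anycast gateway SVI
interface Vlan100
  no shutdown

! On the old switches: retire the legacy gateway
interface Vlan100
  shutdown
```

The IGP withdraws the route to the old SVI and converges onto the fabric's gateway, with no STP reconvergence involved.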





But I have a set of F5 LTMs attached to the old 6506. With your suggestion, we would have to migrate those VLANs at the same time for both the F5 and the old switch.

I have a question: if I attach the internal switch module of the blade server to the old network and to the VXLAN vPC bleaf at the same time, that creates a Layer 2 path from the old 6500 network, through the blade server internal switch, to the vPC bleaf, and spanning tree is also running on these links, so we face the same STP transition problem.

   Thank you


Good morning Andy--with my proposal of a trunk link between the VXLAN fabric and the old switches, you need to keep the SVI up on the network switches.  That makes the VXLAN fabric just another "branch" on the STP tree. The idea is to avoid having to converge STP in favor of a L3 convergence event, which will be much faster.


Since the old network will keep the SVIs until you've migrated your workload, you will need to make sure that the trunk can handle all of the servers' traffic.  For the duration of the migration, workloads that have been migrated to the fabric will take a sub-optimal route out of the network: VM > VXLAN fabric > across the trunk > old network switches > core.  But with an L3 link from your network's core to the old network and another to the new VXLAN fabric, your cutover will happen at IGP convergence speed, not STP convergence speed.  The final cutover is just to bring up the anycast distributed gateways on the VTEPs and shut the SVIs on the old network.  Your IGP will see the route to the SVI fail and cut you over to the VXLAN fabric.


Does this make sense?  MM


