10-23-2017 08:39 AM - edited 03-01-2019 01:40 PM
Hi all,
I'm trying to figure out the least risky way to interconnect 4 DCs, starting from a customer's requirement (debatable) and the limitations of hardware that was already chosen (not by me).
This is the current scenario:
2 sites (DC1 and DC2), L2-interconnected, hosting both servers and clients
DC1 core on the first site, built on 2 x 6500 in VSS mode
DC2 core on the second site, built on 1 x 4500
all L3 VLAN interfaces implemented on the VSS and the 4500 through HSRP
root bridge and HSRP primary on the VSS (see the sketch below)
ASAs active on DC1, ASAs standby on DC2
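For clarity, the first-hop setup on the VSS today looks roughly like this (VLAN 10 and the addresses are just placeholders):

! On the 6500 VSS (DC1): STP root and HSRP primary
spanning-tree mst 0 priority 4096
!
interface Vlan10
 ip address 10.0.10.2 255.255.255.0
 standby 10 ip 10.0.10.1
 standby 10 priority 110
 standby 10 preempt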
This is what I need:
interconnect 2 new DC sites (let's say DC1new and DC2new) for the purpose of DC migration
the same interconnection must remain in place after the migration for client-to-server communication
This is what I can use:
4 Nexus 9372, 2 for each new DC
2 x 10 Gb geographical links (about 100 km): one DC1-DC1new, one DC2-DC2new
The customer's requirement:
some VLANs should span geographically not only during the migration but also in the final scenario, because the L3 separation I'm hoping for would require an ad hoc assessment that is not compatible with the migration schedule. It seems that at the end of the migration, some VLANs will keep hosts both on DC1-DC2 and on DC1new-DC2new.
Now, I'm perfectly aware of all the arguments against L2 extension and that this doesn't match best practice for DCI, but I nevertheless have to find a way to implement it.
My idea is to interconnect the Nexus pairs between the new DCs in back-to-back vPC, but the interconnection with the existing VSS+4500 network creates a square topology and introduces the need for STP.
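The back-to-back vPC part would look roughly like this on the DC1new pair (domain ID, port-channels, addresses, and interfaces are placeholders; DC2new would mirror it):

feature lacp
feature vpc
vpc domain 10
  peer-keepalive destination 192.168.0.2 source 192.168.0.1
!
interface port-channel1
  switchport mode trunk
  vpc peer-link
interface Ethernet1/53-54
  channel-group 1 mode active
!
interface port-channel20
  description back-to-back vPC towards the DC2new pair
  switchport mode trunk
  vpc 20
interface Ethernet1/49
  channel-group 20 mode active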
First question:
does interconnecting a vPC-based domain with a non-vPC one via STP actually work? Can I be sure that the DC1-DC1new link goes forwarding and the DC2-DC2new link goes blocking?
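If it does work, I suppose I could at least make the blocking decision deterministic instead of leaving it to the defaults, e.g. by raising the STP cost on the intended blocking link, something like this on the DC2new Nexus that terminates the dark fiber (the interface is a placeholder):

interface Ethernet1/50
  description DCI towards DC2 4500 - intended STP blocking path
  spanning-tree cost 200000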
Secondly:
the current VSS+4500 (and all other L2 devices involved) are configured for MST. I would configure Rapid PVST+ inside the new DCs, but as I understand it I'm obliged to keep the root bridge inside the MST region.
Does that mean that any communication inside an extended VLAN always has to go back and forth across the geographical links?
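For reference, the MST side is configured along these lines (region name, revision, and instance mapping are placeholders); as I understand it, at the boundary with Rapid PVST+ the whole region appears as a single logical bridge on the CIST:

spanning-tree mode mst
spanning-tree mst configuration
 name LEGACY-DC
 revision 1
 instance 1 vlan 1-1000
spanning-tree mst 0 priority 4096
spanning-tree mst 1 priority 4096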
What if I create a wider HSRP group that also involves the Nexus switches, and after the DC migration I move the HSRP priority to the Nexus? At that point I would have the L2 root bridge on the VSS in DC1 and the L3 gateway (HSRP primary) on DC1new...
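What I have in mind would be something like this on the DC1new Nexus (group number and addresses are placeholders), starting with a low priority and raising it above the VSS after the migration:

feature hsrp
feature interface-vlan
interface Vlan10
  no shutdown
  ip address 10.0.10.4/24
  hsrp 10
    preempt
    priority 90
    ip 10.0.10.1

! after the migration, take over as HSRP primary:
interface Vlan10
  hsrp 10
    priority 150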
Thanks in advance
Chiara
10-24-2017 07:21 AM
I have no way of implementing any L2-over-L3 extension, such as VXLAN or OTV.
The geographical links are dark fiber.
10-24-2017 07:42 PM
You know, I wasn't even thinking of this, but I do have a customer running MP-BGP EVPN with VXLAN to do what your customer is looking for.
Again, whether this is supported depends on the hardware; in this case Nexus 9K, Nexus 5600 series, and Nexus 7K.
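Just to give an idea, a minimal VXLAN EVPN leaf config on a 9K looks something like this (ASN, VNI, and all addresses are placeholders):

feature bgp
feature nv overlay
feature vn-segment-vlan-based
nv overlay evpn
!
vlan 10
  vn-segment 10010
!
interface loopback0
  ip address 10.255.0.11/32
!
interface nve1
  no shutdown
  host-reachability protocol bgp
  source-interface loopback0
  member vni 10010
    ingress-replication protocol bgp
!
router bgp 65001
  neighbor 10.255.0.1 remote-as 65001
    address-family l2vpn evpn
      send-community extended
!
evpn
  vni 10010 l2
    rd auto
    route-target import auto
    route-target export auto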