05-04-2018 09:17 AM - edited 03-01-2019 09:34 AM
Hello Guys,
I have a question about migrating a brownfield network to a greenfield ACI fabric.
I have two data centers that are physically separated, and I'm going to build an ACI Multi-Pod fabric across them.
So the phased approach would be:
- Build the greenfield ACI fabric
- Plan the migration
Now when I build the Multi-Pod I will have leafs and spines in each site (2 spines and 5 leafs per site). For the migration, this means I will have to define an L2Out in DC1 and another in DC2 to perform the Layer 2 migration between the brownfield network and the greenfield ACI fabric.
A scenario like this one:
The brownfield network is connected to ACI with a Layer 2 trunk in the form of a double-sided vPC. I will have to provision two of these, because I have two data centers. The brownfield network also has a data center interconnect, which is a vPC of its own.
In order to do the Layer 2 migration I have to enable ARP flooding (and, as far as I understand, L2 unknown unicast flooding) on the bridge domain inside ACI, because traffic leaving ACI still has to be forwarded/routed to the core switch in the brownfield network (NX-OS).
But how will that work with two data centers? Say I have moved a VLAN into the ACI fabric, with the way out still being the L2Out EPG in a single DC. What if that VLAN has also been trunked across the brownfield DCI and ends up in the other DC? Won't my core switches go haywire when they see the same MAC addresses both across the vPC carrying the DCI and across the vPC going towards ACI?
Or will this only happen if I have an active host on that same VLAN in both data centers? In that case, could I manipulate the way ACI sends the traffic out by adding the L2Out to the bridge domain?
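For reference, the bridge domain settings I mean would look roughly like this when pushed to the APIC as an XML fragment (tenant, BD and VRF names here are made up, this is only a sketch):

```xml
<!-- Sketch only: names are hypothetical. Posted under the tenant,
     e.g. /api/mo/uni/tn-Migration.xml -->
<fvBD name="BD-Vlan100" arpFlood="yes" unkMacUcastAct="flood" unicastRoute="no">
  <!-- unicastRoute="no" while the default gateway is still on the brownfield core -->
  <fvRsCtx tnFvCtxName="VRF-Migration"/>
</fvBD>
```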
06-11-2018 10:58 AM
Hello Yannick,
Can you please elaborate on the Multi-Pod deployment you are considering, and also on how the DCI is used in the current network?
Regards,
Harshal
06-12-2018 05:48 AM
Hello,
So it will be a regular Multi-Pod, with Pod 1 in one data center and Pod 2 in the other.
We are using two DWDM circuits connected via four IPN switches: two in each data center, one per circuit. The whole Multi-Pod has been set up; both ends are configured and see each other as BGP neighbours.
The next question is: how do you plan a migration from the brownfield network, which is pretty similar? It has two 7K cores connecting to a lot of 5K switches, and everything is set up in back-to-back vPC redundancy. The brownfield network is also connected with two DWDM circuits, and the logical connection is a vPC between the two 7K clusters, so the DWDM links are vPC members.
Since it could very well be that VLANs in DC1 are trunked to DC2 across the brownfield data center interconnect, there is a problem: I would like to have an exit from the ACI fabric in both DC1 and DC2 to smooth the migration. I guess I would have to stop those VLANs from crossing the brownfield DCI to avoid creating a loop.
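If pruning turns out to be the way to go, I assume it would just be standard trunk pruning on the 7Ks, something like this (port-channel and VLAN numbers are hypothetical):

```
! On both 7K sides of the DCI vPC, remove the migrated VLAN from the trunk
interface port-channel10
  switchport trunk allowed vlan remove 100
```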
06-12-2018 06:27 AM
Hello Yannick,
Based on the overview you have given, I understand that you do not intend to have any form of L2 communication between the two DCs (through ACI or the traditional network).
For such a scenario, you can create different fabric access policies, i.e. different VLAN pools, physical domains, etc., for the two DCs, as well as different EPGs. And if you do not have a use case for inter-DC communication through the IPN, you can very well go with a different tenant altogether.
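As a rough sketch (all names are hypothetical), the per-DC access policies could look like this in the APIC XML model:

```xml
<!-- One static VLAN pool and physical domain per DC; repeat with DC2 names -->
<fvnsVlanInstP name="DC1-vlan-pool" allocMode="static">
  <fvnsEncapBlk from="vlan-100" to="vlan-199"/>
</fvnsVlanInstP>
<physDomP name="DC1-phys">
  <infraRsVlanNs tDn="uni/infra/vlanns-[DC1-vlan-pool]-static"/>
</physDomP>
```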
Regards,
Harshal
06-12-2018 06:51 AM
Hi,
In the current brownfield design, I'm pretty sure that VLANs are stretched between the two data centers. The issue I'm having is that when I migrate a VLAN to a BD in the ACI fabric, its MAC addresses will be learned in the ACI fabric; but if that VLAN is still known in both brownfield data centers, this could cause asymmetric forwarding.
It should be brief, since the VLAN will be removed from the brownfield once it has been migrated into the ACI fabric.
I also intend to create an L2Out per data center, because some of the migrated VLANs have their L3 gateway on a firewall in each data center.
So VLAN 100 is active in DC1 and has its L3 gateway in DC1.
And VLAN 200 is active in DC2 and has its L3 gateway in DC2.
It looks prudent to create an L2Out per data center, so that traffic exiting the ACI fabric does not have to be forwarded across the brownfield DCI.
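As a sketch of what I have in mind (node IDs, the vPC path and all names are made up), the DC1 L2Out extending a bridge domain on VLAN 100 would be something like:

```xml
<!-- Hypothetical sketch: L2Out for DC1, bound to the double-sided vPC towards the 7Ks -->
<l2extOut name="L2Out-DC1">
  <l2extRsEBd tnFvBDName="BD-Vlan100" encap="vlan-100"/>
  <l2extLNodeP name="nodes">
    <l2extLIfP name="ports">
      <l2extRsPathL2OutAtt tDn="topology/pod-1/protpaths-101-102/pathep-[vPC_to_7K_DC1]"/>
    </l2extLIfP>
  </l2extLNodeP>
  <l2extInstP name="L2Out-DC1-EPG"/>
</l2extOut>
```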