Legacy Switching Private VLAN configuration Mapping in ACI

sachin.gawli
Level 1

I am currently executing a network-centric migration of a DC network to Cisco ACI. I would like to know how to map a legacy private VLAN configuration from the switches to ACI constructs.

Some of the private VLANs are configured with isolated VLANs. There are a couple of servers that are part of the isolated private VLANs, and each server has its default gateway on a firewall interface that is part of the primary VLAN. Each server communicates with the outside world only through its default gateway, i.e. the firewall. There is no communication between the servers.
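For reference, a legacy setup like the one described above is usually configured along these lines on an NX-OS switch. This is only an illustrative sketch: VLAN IDs 100 (primary) / 101 (isolated) and the interface names are placeholders, not values from the actual environment.

feature private-vlan
!
vlan 100
  private-vlan primary
  private-vlan association 101
vlan 101
  private-vlan isolated
!
! Server-facing ports: isolated hosts, can only reach promiscuous ports
interface Ethernet1/10
  switchport mode private-vlan host
  switchport private-vlan host-association 100 101
!
! Firewall-facing port: promiscuous, carries the servers' default gateway
interface Ethernet1/1
  switchport mode private-vlan promiscuous
  switchport private-vlan mapping 100 101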

I need to map this configuration to ACI, as I am currently following a one-VLAN-to-one-BD, one-EPG migration approach.

If anybody has done this before, please share your expert comments.

9 Replies

ziyaayan15
Level 1
Hi Gawli,

Did you find the solution?

PatrickH1
Level 1

Dear All,

 

Any news on this topic? I am facing the same challenge.

 

Kind Regards

 

Patrick

Hello everyone!

The year is now 2021 and the question still remains. Has anyone come up with a solution for PVLAN migration? The main question is how to configure ACI for the prolonged migration phase where some of the isolated endpoints are still in the legacy switching environment while others have already been migrated. Also, at which point should one migrate the promiscuous ports?

I am looking at intra-EPG isolation vs. EPG micro-segmentation. It seems that micro-segmentation is mostly intended for VMM integration, and it is hard to believe that bare-metal servers from a legacy PVLAN environment should be migrated into uEPGs with MAC address mapping.

 

TIA for your input.

Alexei.

Hello folks.

I have just tested a solution and it works fine with PVLAN on the legacy side and two EPGs with a contract on the ACI side.

Just a quick background on the topology. The PVLAN environment we have is a fully isolated backup L2 switching topology with several hundred backup clients and a handful of backup/DHCP servers. The clients have dedicated physical interfaces that sit in the isolated VLAN (ID abc) of the PVLAN implementation and can only talk to the servers and get DHCP addresses. The servers are in the primary VLAN (ID xyz) of the PVLAN implementation.

The legacy backup environment is connected to ACI via a dedicated L2 migration trunk. On the ACI side we have split the backup PVLAN environment into two logical parts (EPGs): backup/DHCP servers and backup clients. Then we defined a contract between those two EPGs. The backup client EPG keeps encap abc (the isolated VLAN) and runs in intra-EPG isolation mode, so clients cannot talk to each other. The backup/DHCP server EPG keeps encap xyz (the primary VLAN). To begin with, we created an IP-Any contract between the client and server EPGs. This gives us exactly the PVLAN functionality. The L2 migration trunk should be in the server EPG with the primary VLAN ID xyz.
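A rough sketch of how this could be modelled in the APIC NX-OS-style CLI is shown below. The tenant, application profile, EPG and contract names, the VLAN IDs (100 standing in for the primary VLAN xyz, 101 for the isolated VLAN abc) and the leaf/port numbers are all placeholders; the IP-Any contract itself is assumed to be defined separately, and the exact syntax can vary between ACI releases.

apic1(config)# tenant Backup
apic1(config-tenant)# bridge-domain BackupBD
apic1(config-tenant-bd)# exit
apic1(config-tenant)# application BackupAP
! Server EPG: keeps the primary VLAN encap and provides the contract
apic1(config-tenant-app)# epg BackupServers
apic1(config-tenant-app-epg)# bridge-domain member BackupBD
apic1(config-tenant-app-epg)# contract provider AllowBackup
apic1(config-tenant-app-epg)# exit
! Client EPG: keeps the isolated VLAN encap, intra-EPG isolation enforced
apic1(config-tenant-app)# epg BackupClients
apic1(config-tenant-app-epg)# bridge-domain member BackupBD
apic1(config-tenant-app-epg)# isolation enforce
apic1(config-tenant-app-epg)# contract consumer AllowBackup
apic1(config-tenant-app-epg)# exit

! Static bindings: the L2 migration trunk goes into the server EPG with the
! primary VLAN; a migrated client port goes into the client EPG with the isolated VLAN
apic1(config)# leaf 101
apic1(config-leaf)# interface ethernet 1/48
apic1(config-leaf-if)# switchport trunk allowed vlan 100 tenant Backup application BackupAP epg BackupServers
apic1(config-leaf-if)# exit
apic1(config-leaf)# interface ethernet 1/10
apic1(config-leaf-if)# switchport trunk allowed vlan 101 tenant Backup application BackupAP epg BackupClients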

I have just tested it and it seems to work fine. It is not exactly a network-centric approach, but for our use case it is pretty straightforward. We will be migrating some endpoints during the week to see how it holds up, and I will keep you updated on how it goes.

Cheers

Alexei.

Hi Alexei, we actually implemented a similar scenario a while ago during our migration, and it worked fine. How was the outcome for you?

Regards.

Hello Nuno.

It worked fine, the customer is happy with the solution.

Cheers

Alexei.

One question: did you have virtualized servers? Were they already using PVLAN on the DVS port groups?

Cheers,

Nuno

Hello!

It was a mixed bare-metal and virtualized environment.

Cheers

Alexei.

I just found this topic. We did an ACI migration yesterday (step 1: L2 extension) and ran into the following issue:

EPG#1 (Backup Servers)

NX-OS primary VLAN: 100

 

EPG#2 (Backup Clients)

NX-OS secondary VLAN: 101 (isolated)

 

Migration link to ACI: Trunk allowing 100, 101

Migration link on ACI:

- EPG#1 is mapped to VLAN 100

- EPG#2 is mapped to VLAN 101

 

Problem

If both EPGs are assigned to the same bridge domain (and all servers are still connected to the legacy data center), then broadcast, multicast, and unknown unicast traffic from the backup server (in the legacy DC) is flooded into ACI in EPG#1 (so far so good). However, the traffic is also flooded back from ACI to the legacy DC in EPG#2 (because it is the same BD). As a result, the NX-OS side learns the MAC of the backup server towards ACI, causing packet loss until the backup server sends packets again and the NX-OS switches re-learn the MAC on the correct ports.
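For clarity, the mapping described above corresponds to something like the following static bindings on the migration trunk (leaf/port numbers and object names are placeholders, reusing those from the earlier sketch). Because both EPGs sit in the same BD, a frame flooded into the BD via the VLAN 100 binding is also flooded back out of the same trunk with the VLAN 101 encap, which is what makes the NX-OS switches re-learn the server MAC towards ACI.

apic1(config)# leaf 101
apic1(config-leaf)# interface ethernet 1/48
! Migration trunk carries both the primary and the isolated VLAN
apic1(config-leaf-if)# switchport trunk allowed vlan 100 tenant Backup application BackupAP epg BackupServers
apic1(config-leaf-if)# switchport trunk allowed vlan 101 tenant Backup application BackupAP epg BackupClients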
