
Datacenter interconnect

We have a single 10Gb fiber connection between 2 datacenters. All production servers are in our main datacenter. Recently we moved production application server workloads to the remote datacenter. We perform failover testing from time to time in the remote datacenter, which requires us to shut down the 10Gb interface between the sites on our Nexus. Now that production workloads are running in the remote datacenter we cannot shut down the 10Gb interface. How can we perform testing without affecting the production workloads running in the remote datacenter? Currently all servers in the remote datacenter are running on a single VLAN with EIGRP routing between sites. I am looking for input on how to allow the production application traffic from the remote datacenter to our main site while still being able to perform our failover testing in the remote datacenter. Any suggestions would be appreciated.



Where is int e4/41 in relation to the DCs, the server vlans etc ?

How many test server vlans are there going to be ?

If there are multiple test server vlans do you want these to be able to communicate with each other ?

You talk about web access to/from the test servers. Is this web access internet or specific production web servers ?

The address ranges used in your acls, are these production ranges ?

It's not possible to say whether what you have would work without knowing the answers to the above questions.

Could you provide answers and then we can say whether we think it will work or not.



In response to your questions.

1. The e4/41 interface is the 10GbE on the N7K in the remote datacenter.

2. I just need communication between a single subnet DevQA/Mgmt 11.x.x.x(remote datacenter) and our production subnet ATL 10.x.x.x(ATL datacenter) during our testing.

3. Web access internet.

4. These ranges are used just as an example.


Are these vlans in the remote DC (both test and production) routed on SVIs ?

If they are then applying the acl to the 10Gbps interface will control which traffic can come and go between DCs but it wouldn't control test servers talking to production servers within your remote DC.

Is this an issue ?

The acl doesn't include anything for web access. Is internet access via the main DC? Because if it is, your acls are going to get more complicated, ie. you need to deny the production ranges and then end with

"permit ip any any" in your inbound acl

and

"permit ip any any" in your outbound acl

unless you know the specific IPs of the web servers on the internet.
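As a rough sketch of that pattern (the acl name and the 10.x.x.x production range are just placeholders based on your examples):

ip access-list example_inbound
 deny ip any 10.0.0.0 0.255.255.255  <-- block anything destined for production
 permit ip any any  <-- everything else, ie. internet, is allowed

The deny lines have to come first because the acl is processed top down and the permit at the end matches everything.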

I'm not trying to complicate it, I just want to be sure that you don't have any unforeseen issues.


The VLANs are both production and I just need the one subnet in the remote datacenter communicating with the main datacenter during our monthly tests. I know it won't control the inter-VLAN communication in the remote datacenter; that will be a work in progress to figure out. However, if you have any ideas on how to block the inter-VLAN communication while still allowing web access out to the internet from the remote datacenter, that would be helpful.


I deleted my last reply because I read you only had one vlan, but I was assuming you were going to create new vlans for the test servers.

Are you going to do that or not?

If everything is in the same vlan is this vlan extended across the interconnect or is it only in the remote DC ?

Apologies, but if you only have one vlan then yes, the 10Gbps link is the place to apply the acl. However, if you can, I would strongly recommend having a separate vlan for the test servers.

I assume the one vlan is routed in the remote DC ?


Jon Marshall
VIP Community Legend


Can you give the test servers new IPs ?

If so the easiest thing to do is put them in their own vlan, create an SVI on the remote DC L3 switch and apply the acl there.
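As a rough sketch on the remote DC L3 switch (the vlan number and 10.10.10.0/24 range are purely illustrative; NX-OS syntax assumed since you mentioned an N7K):

vlan 10
 name TEST_SERVERS

interface vlan 10
 ip address 10.10.10.1/24  <-- gateway for the test server subnet
 no shutdown

The test servers then use 10.10.10.1 as their default gateway and the acl can be applied on this SVI.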


Yes, I can put the test servers in a separate VLAN with different IP's and apply the ACL's there. If that removes some of the complexity then I'm good with that.


I think that would be a good way to do it, as it removes the danger of a misconfiguration of the acl on the 10Gbps link affecting production servers.

So if you create a new vlan and an SVI for it your acl becomes relatively easy. I assume you are using private addressing internally. So you don't need to specify each subnet you can simply summarise even if you are not using all the addresses within that range eg.

production addressing = 10.x.x.x

test servers = 10.10.10.x/24

ip access-list from_test_servers

 deny ip 10.10.10.0 0.0.0.255 10.0.0.0 0.255.255.255

 permit ip any any

int vlan 10  <-- new test server vlan

 ip access-group from_test_servers in

what the above does is -

1) deny any traffic from the test servers to any device in production

2) allow internet access

if there were some reason you needed to access any production servers, just include host entries before the first deny line. The above does assume, as I say, that all production is using 10.x.x.x addressing.
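For example, if one test server did need to reach one specific production host, an entry like this (both addresses purely illustrative) would go before the deny:

ip access-list from_test_servers
 permit ip host 10.10.10.5 host 10.1.1.100  <-- specific exception first
 deny ip 10.10.10.0 0.0.0.255 10.0.0.0 0.255.255.255
 permit ip any any

Order matters because the acl is evaluated top down and the first matching line wins.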

This only blocks traffic going out from the test servers. It would also block return traffic, so if a production device tried to connect to a test server the packet would get to the test server but the return packet would be dropped. However, if you want to be doubly sure, you can also apply an acl outbound, ie. on traffic to the test servers -

ip access-list to_test_servers

 deny ip 10.0.0.0 0.255.255.255 10.10.10.0 0.0.0.255

 permit ip any any

int vlan 10

 ip access-group to_test_servers out

with both acls the test servers can't send anything to production and no production devices can send anything to the test servers.

Obviously if you use other private addressing in production you need to add that as deny lines as well otherwise they will be allowed by the permit lines for internet access.

Finally if you are using HSRP and the switches are not running VSS don't forget to apply the acl(s) to both SVIs, one on each switch.
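For example, with HSRP running between two non-VSS switches, something like this (group number and addresses illustrative) on each switch:

! switch A
interface vlan 10
 ip address 10.10.10.2/24
 ip access-group from_test_servers in
 hsrp 10
  ip 10.10.10.1

! switch B
interface vlan 10
 ip address 10.10.10.3/24
 ip access-group from_test_servers in
 hsrp 10
  ip 10.10.10.1

If the acl were only on one switch, traffic would go unfiltered whenever the other switch was the active gateway.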

Any questions please come back.