
Subnet vMotion? - Mixing multilayer networks with vMotion

sossie
Level 1

Hi all,

Just hoping to open a general discussion and get a few ideas on how network designers are dealing with the apparent conflict between multilayer network design and the desire for flat VLANs to facilitate vMotion, and how people manage DR for layer 3 connected data centres.

With the increased use of virtualisation these days, many network designers are looking to extend their layer 2 domains across multiple data centres to allow transparent vMotion. This seems to conflict with best-practice multilayer network design, which calls for small layer 2 domains (to reduce STP risk).

I am working on a couple of network designs for DR sites. We are using Veeam to replicate servers from the primary datacentre to the DR site. For one project we have only a layer 3 (routed) link between data centres, and for the other we do have a layer 2 link between data centres.

Between the two data centres with the layer 2 link we can vMotion a server from one DC to the other transparently, so our DR is fairly seamless.

For DR between the two data centres where we only have a layer 3 link, we have a nice multilayer design running OSPF. This means that in a DR scenario our plan is to:

Shut down the Switch Virtual Interface (SVI) for the network the VM was on at the failed site (if it's still operating).

Turn on an SVI with the same IP and network at the DR site.

Turn on the replicated VM at the DR site

Cross our fingers and hope OSPF will tell all the clients that rely on the failed VM where the replicated one was turned on (a rough config sketch of these steps follows).
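
To make those steps concrete, here's roughly what the CLI would look like. The VLAN number, addressing and OSPF process below are made up purely for illustration - substitute whatever your sites actually use.

! At the failed site (if the switch is still reachable) - take the SVI down
interface Vlan100
 shutdown
!
! At the DR site - bring up the pre-provisioned SVI with the same subnet
interface Vlan100
 ip address 10.1.100.1 255.255.255.0
 no shutdown
!
! OSPF advertises 10.1.100.0/24 from the DR site once the SVI comes up
router ospf 1
 network 10.1.100.0 0.0.0.255 area 0

Once OSPF converges, clients follow the new route to the replicated VM.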

The problem with the above is that it's a manual process - I call it "subnet vMotion". Does anyone else have any ideas on how they handle this sort of DR setup neatly?

One idea I had was that Cisco and VMware (and the other virtualisation crowds) could develop something so that within vSphere you could choose to "vMotion" an entire VLAN (subnet). All single-homed hosts with an interface in that VLAN would move, and at the same time it would talk to the routers/switches to disable the VLAN's SVI on the source router and enable it on the destination. There would be a bit of an outage while routing protocols converged, but it would save the server guy from having to do a lot of manual networking. Or perhaps it's already possible to do something like this?
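
In the meantime, one partial step in that direction might be pre-staging the network side as an EEM applet at each site, so the "subnet vMotion" becomes a single command rather than interactive configuration. A rough sketch, with the applet name and VLAN invented for illustration:

! On the DR switch - pre-staged applet that activates the standby SVI on demand
event manager applet SUBNET-VMOTION-VLAN100
 event none
 action 1.0 cli command "enable"
 action 2.0 cli command "configure terminal"
 action 3.0 cli command "interface Vlan100"
 action 4.0 cli command "no shutdown"
 action 5.0 syslog msg "VLAN 100 SVI activated for subnet vMotion"
!
! Kicked off manually during a failover:
! event manager run SUBNET-VMOTION-VLAN100

A mirror-image applet at the primary site could shut the SVI down, but it's still a network-side trigger rather than something driven from vSphere.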

Cheers, Simon.

2 Replies

Martin Parry
Level 3

Hi

Until recently this has been a bit of a design issue, and required the extension of layer 2 between the separate datacentres. However, this has now been addressed with the use of the Nexus 7K and OTV (Overlay Transport Virtualisation). This is essentially MAC-in-IP, and allows layer 2 connectivity to exist across two datacentres that are segregated via a layer 3 WAN.

Information on OTV can be found here:

http://www.cisco.com/en/US/partner/prod/switches/ps9441/nexus7000_promo.html
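
To give a flavour of it, the configuration on each Nexus 7K ends up looking something like the sketch below. The interface, site VLAN, multicast groups and extended VLAN range are purely illustrative, and this assumes a multicast-enabled core between the sites.

feature otv
otv site-vlan 99
!
interface Overlay1
 otv join-interface Ethernet1/1
 otv control-group 239.1.1.1
 otv data-group 232.1.1.0/28
 otv extend-vlan 100-110
 no shutdown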

Hope this helps a little

Martin

Darren Ramsey
Level 4

Simon,

We have pretty much the same setup: L3 between the DCs, OSPF as the IGP, and SRDF/S between the DCs.

We utilize an active SVI at one DC and an inactive SVI at the other, per VM cluster. We keep vMotion/DRS contained to L2 within the DC, and for disaster recovery we use SRM to move the entire VM cluster to the other DC and activate the standby SVI. Now, as you pointed out, this is a "move all or nothing" type of setup. We are fine with this configuration as we never envision both SVIs being active at the same time in a DR/BC scenario.
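
In other words, something along these lines, with the same SVI pre-provisioned at both DCs but only ever enabled at one (VLAN and addressing invented for illustration):

! DC-A - active SVI for the cluster's subnet
interface Vlan200
 ip address 10.2.200.1 255.255.255.0
 no shutdown
!
! DC-B - identical SVI, pre-provisioned but kept shut down
interface Vlan200
 ip address 10.2.200.1 255.255.255.0
 shutdown

The DR runbook just flips which side is shut down.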

However, from what I am hearing, LISP should be able to address L3 mobility. Maybe an option if you don't have OTV or VPLS to extend L2 over L3 between your DCs.