11-06-2009 09:27 AM - edited 03-04-2019 06:38 AM
After reading through Cisco's data center interconnect (DCI) material, I find it interesting. But how do you use it in a real-world scenario, and why would you do it? The idea is to extend a Layer 2 VLAN from one data center to another, essentially creating one giant virtual data center. If one data center goes down, the other data center picks up the transactions.
Can't you do the same with Global Server Load Balancing (GSLB)? I would think two MPLS connections to each data center would be more than enough. Has anyone implemented this data center interconnect using VPLS, EoMPLS, etc.?
Thanks
11-06-2009 01:53 PM
Can't you do the same with Global Server Load Balancing?
The answer depends on the application. If you are running an application that needs its servers to be on the same LAN, and you want those servers in different locations, you are forced to extend your Layer 2 domain. DCI is the perfect solution for that.
If the servers running the application can interact with each other over routed connectivity, then GSLB would be the right solution.
Has anyone implemented this data center interconnect using VPLS, EoMPLS, etc.?
EoMPLS.
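In case it helps, a minimal port-based EoMPLS sketch on IOS is below. The addresses, VC ID, and interface numbers are placeholders, loopback reachability via your IGP is assumed, and the PE at the other data center needs a mirror-image xconnect with the same VC ID:

mpls ip
mpls ldp router-id Loopback0
!
interface Loopback0
 ip address 10.255.0.1 255.255.255.255
!
! Core-facing link toward the remote data center runs MPLS/LDP
interface TenGigabitEthernet1/1
 ip address 10.0.12.1 255.255.255.252
 mpls ip
!
! Attachment circuit: the port whose traffic is carried across the pseudowire
interface GigabitEthernet2/1
 no switchport
 no ip address
 xconnect 10.255.0.2 100 encapsulation mpls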
Regards
Edison.
11-09-2009 09:32 AM
Thanks Ed.
Just wondering, from your experience, which specific apps require servers at different locations to be on the same LAN?
11-09-2009 10:42 AM
I have seen various vendors' (Sun, Oracle) clustering solutions require layer 2 connectivity between hosts in a given cluster.
Some have moved away from that with current product releases, but we often have to work with an installed base that still carries the old host-connectivity requirements.
11-09-2009 11:22 AM
Similar to the ones mentioned by Marvin - you can add Microsoft's Network Load Balancing (NLB) to the list...
Regards
Edison.
11-09-2009 11:31 AM
VMware vMotion, Storage vMotion, FCoE, etc.
Regards,
jerry
11-17-2009 04:44 AM
Looking at the DCI podcast (by Amit Singh) on TechWiseTV, there is a pair of 6500-VSS switches at each data center dedicated specifically to extending Layer 2.
Can't this be handled by the DC aggregation switches (6500-VSS) already in both data centers, with dark fiber between them?
I think it would be hard to justify four 6500 chassis (a VSS pair at each site) for Layer 2 extension alone.
Another concern is the split-subnet scenario: does the DCI design completely avoid split subnets? And how is spanning-tree root bridge placement handled when both DCs are running in a highly available mode?
I've narrowed my choice down to DCI using 6500-VSS over dark fiber for interconnecting our data centers, but I need to study the implications at both Layer 2 and Layer 3 further (along with the server teams); a rough sketch of what I have in mind is below.
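For context, the interconnect would be roughly a multichassis EtherChannel trunk over the dark fiber between the two VSS pairs. A minimal sketch on the local VSS, assuming placeholder interface numbers and VLAN IDs (and leaving aside BPDU filtering and storm control across the DCI, which are part of what I still need to study):

! Member links - one per chassis of the local VSS, toward the DC2 VSS
interface TenGigabitEthernet1/4/1
 description Dark fiber to DC2 (chassis 1)
 switchport
 channel-group 10 mode active
!
interface TenGigabitEthernet2/4/1
 description Dark fiber to DC2 (chassis 2)
 switchport
 channel-group 10 mode active
!
! Multichassis EtherChannel carrying only the VLANs to be extended
interface Port-channel10
 description DCI trunk to DC2 VSS
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 100,200

The DC2 VSS would mirror this configuration on its side of the dark fiber.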
Can anyone point me to the right resources (technical documents, design guides, case studies) for this requirement?
Thanks.
Rgds, VJ