vPC DCI vs. OTV DCI across two data centres connected by two 10 Gb dark fibre links.
We have a DC at our corporate site and are looking to set up a new DR DC connected via two 10 Gb dark fibre lines. Both sites have a pair of Nexus 7Ks in a vPC pairing as the core. At present I have narrowed it down to two possible solutions, and would like your opinion on which would best suit our purposes. I want to check that I have covered all angles in picking the correct solution for us. So here goes:
Requirement: provide a DCI solution between the Corporate and DR sites. There are no plans for DCI to any other sites in the near or distant future. The solution should stop spanning-tree issues from propagating across the DCI.
Option 1 - OTV to extend L2 LANs from the Corporate DC (Site 1) to the DR DC (Site 2):
If I go down this route, I am thinking of running EIGRP between the join interfaces on each OTV edge device at each site to carry the IP traffic. Each join interface will be connected to one of the 10 Gb WAN dark fibre connections. HSRP hellos will be suppressed with VLAN ACLs to give an active/standby state in each site for egress routing optimization.
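To make Option 1 concrete, here is a minimal sketch of what one OTV edge device could look like; the interface names, VLAN ranges, site identifier and multicast control/data groups are purely illustrative, not taken from our actual design:

```
feature otv
otv site-identifier 0x1        ! must be unique per site
otv site-vlan 99               ! internal VLAN used for AED election between local edge devices

! Join interface towards the dark fibre, with EIGRP providing IP reachability for the transport
interface Ethernet1/1
  description Join interface - dark fibre link 1
  ip address 10.255.1.1/30
  ip router eigrp 100
  ip igmp version 3
  no shutdown

interface Overlay1
  otv join-interface Ethernet1/1
  otv control-group 239.1.1.1  ! multicast transport; adjacency-server mode is the unicast alternative
  otv data-group 232.1.1.0/28
  otv extend-vlan 10-20        ! the VLANs being stretched between sites
  no shutdown
```

The HSRP-localisation VACLs would then be applied per extended VLAN on the 7Ks in the usual way.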
Option 2 - vPC DCI to extend L2 LANs from the Corporate DC (Site 1) to the DR DC (Site 2):
Configure a trunk link across the two WAN dark fibre lines in a vPC, allowing only the VLANs that I want to extend. The vPC ports will be configured as spanning-tree edge ports to stop BPDUs traversing the DCI link, which prevents spanning-tree issues from propagating between DCs. HSRP hellos will be suppressed between sites by port ACLs on the interconnect vPC ports, to give an active/standby HSRP state in each site for optimal egress routing.
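A rough sketch of what Option 2 could look like on one side; the port-channel/vPC numbers, interface names and VLAN ranges are illustrative only:

```
! DCI port-channel bundling the two dark fibre links
interface port-channel20
  description vPC DCI trunk to DR site
  switchport mode trunk
  switchport trunk allowed vlan 10-20
  spanning-tree port type edge trunk   ! keep the two STP domains independent
  spanning-tree bpdufilter enable      ! stop BPDUs crossing the DCI
  vpc 20

interface Ethernet1/1
  description Dark fibre link 1
  channel-group 20 mode active

interface Ethernet1/2
  description Dark fibre link 2
  channel-group 20 mode active

! Port ACL keeping HSRP hellos local to each site (HSRPv1 and HSRPv2 groups shown)
ip access-list HSRP_FILTER
  10 deny udp any 224.0.0.2/32 eq 1985
  20 deny udp any 224.0.0.102/32 eq 1985
  30 permit ip any any
interface port-channel20
  ip port access-group HSRP_FILTER in
```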
Option 2 (vPC DCI) seems to give all the benefits of L2 LAN extension, i.e. it stops spanning-tree issues spreading between DCs and makes maximum use of the available WAN bandwidth by bundling the two 10 Gb dark fibre lines into a single port channel, giving 20 Gb/s of connectivity between the sites. I cannot see much drawback to this solution: it is simpler to implement, introduces fewer network protocols and achieves maximum throughput between the sites.
Option 1 (OTV DCI) gives the same spanning-tree segregation between the sites and does most of what Option 2 does, but with the added overhead of introducing two additional protocols, OTV and EIGRP. Further, as there is only one AED per VLAN per site, it will forward packets out of only one join interface, hence utilising only one 10 Gb fibre link. You could achieve some load balancing by ensuring each edge device at a given site is AED for alternate VLANs, but this would not be as effective as the port-channel load balancing in Option 2.
However, with Option 2 I am a little wary of broadcast-storm issues propagating across the single vPC link to the other DC. Is there any way of stopping this?
So, to conclude, I am leaning towards Option 2 (vPC DCI). Have I made the right choice, or am I missing anything?
Your input/comments would be greatly appreciated.
Also I have attached a brief sketch of both options.
Re: vPC DCI vs. OTV DCI across two data centres connected by two 10 Gb dark fibre links.
Yes, your final concern (broadcast-storm issues) is a valid one. OTV does NOT flood unknown unicast and BUM traffic across the DCI links. It does send ARP across, but in an optimised way, using its IS-IS control plane. In a test where I had a DCI without OTV, I calculated that nearly 30% of the available link bandwidth was consumed by BUM/ARP flooding!
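If you do stay with the vPC option, the usual mitigation is storm control on the DCI port-channel to rate-limit broadcast, multicast and unknown-unicast traffic. A sketch, assuming the DCI bundle is port-channel20; the thresholds are examples only and would need tuning to your traffic profile:

```
interface port-channel20
  storm-control broadcast level 1.00   ! percentage of interface bandwidth
  storm-control multicast level 2.00
  storm-control unicast level 5.00     ! limits unknown-unicast flooding
```

This caps the damage of a storm crossing the DCI, but unlike OTV it does not remove the flooding behaviour itself.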