08-11-2011 10:47 PM - edited 03-07-2019 01:40 AM
Hi,
we have two datacenters in the same building, with servers in both datacenters on the same VLAN.
We have decided on a design with four Nexus switches, but we are not sure whether this design is the one suggested by Cisco. I would like to hear your comments on this design.
Thank you.
08-12-2011 07:08 PM
Hi Muhamed
With double-sided vPC there is no issue; one of the advantages of using Nexus is having all paths in forwarding state and redundant.
However, you need to be aware of the design requirements for this type of topology. As stated above, you need to make sure each vPC side has its own vPC domain; see the diagram below.
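As a rough sketch of that domain separation (the domain IDs and keepalive addresses here are illustrative, not taken from the original diagram):

```
! DC1 N7K pair -- its own vPC domain
vpc domain 10
  peer-keepalive destination 10.1.1.2 source 10.1.1.1
!
! DC2 N7K pair -- must use a different vPC domain ID
vpc domain 20
  peer-keepalive destination 10.2.1.2 source 10.2.1.1
```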
Also, be careful with the L3 part:
- Are you planning to make the N7Ks in each data center the HSRP active/standby pair for the local servers? If yes, you need to plan it well, as you have a flat L2 domain between the two data centers.
- What about L3 from the N7Ks to the core? It is better to have L3 links and use ECMP from the upstream routers down to the N7Ks.
I would recommend having a look at this discussion, where we covered this (L3) part of the topology:
https://supportforums.cisco.com/thread/2098897
One more thing to consider with your N7K and HSRP design: HSRP interface tracking is not recommended when you are using vPC. See the diagram below: if the core links go down and the N7K shuts down the SVI, traffic will be sent over the vPC peer-link to be routed back out the other vPC member port, and this is blocked by the vPC loop-prevention behaviour. It is better to have a non-vPC L3 VLAN or a dedicated L3 link between the N7K boxes for this case.
I think these are the most important points to take into consideration from a high-level point of view.
HTH
Please rate the helpful posts.
08-12-2011 04:02 AM
Hi Muhammed
This design looks OK at a high level.
However, you mentioned that your datacenters are in the same VLAN. Does that mean you have an L2 service provider in between, or your own interconnection link (fiber, for example)?
I ask because if you have an L3 service provider, Nexus can support L2 end to end over an L3 SP using a technology called OTV.
On other aspects of your diagram: you are using vPCs from the access switches (are they Nexus?).
From Nexus DC1 to DC2, I am assuming L2 with vPC as well.
Not sure if you have any concerns or any specific questions about it, as I do not see any issue with it at a high level.
In any case, you can refer to the following links, which will give you all you need to know and understand from a design point of view:
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/DC_3_0/DC-3_0_IPInfra.html
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/nx_7000_dc.html
HTH
if helpful Rate
08-12-2011 09:42 AM
Agree with marwanshawi; the design will depend on your transport between DC1 and DC2.
The only concern is northbound L3 traffic flow. Do you want active/active for both DCs, or active/standby, where all traffic enters through DC1, say?
Regards,
jerry
08-12-2011 11:32 AM
Thanks for the answers.
The cables between the datacenters are dark fiber; there is no ISP between them. We are not planning to use OTV.
The connections between the Nexus 7000s are all trunk ports, and there are two vPC domains, each belonging to one datacenter.
In this design, the L3 SVIs (the servers' gateways) are on the Nexus switches, so there will be 4 HSRP gateways.
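For reference, a gateway SVI on one of the N7Ks might look like the following (the VLAN, addresses, and HSRP group number are illustrative; with vPC, the HSRP standby peer also forwards locally):

```
! Server gateway SVI (requires feature hsrp and feature interface-vlan)
interface Vlan10
  ip address 10.10.10.2/24
  hsrp 10
    ip 10.10.10.1
    priority 110
```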
My concern is all about double-layer (back-to-back) vPC,
and the northbound connections to the core backbones, which are L3 ports from each Nexus to the core.
My specific questions are:
With back-to-back vPC, will there be any problem for traffic from a server (DC1) to a server (DC2) on the same/different VLANs?
08-12-2011 04:56 PM
Shouldn't be any problems. The only major gotcha is to make sure that you have different vPC domains in the two DCs.
-Matt
08-12-2011 05:26 PM
Your data centers don't seem that big - do you really need to spend colossal amounts of money on 4 N7Ks?
08-13-2011 05:39 AM
Thank you for all the info that you have provided.