
Datacenter design with nexus switches

Muhammed AKYUZ
Level 1

Hi,

We have two datacenters in the same building, with servers in both datacenters on the same VLAN.

We have decided on a design with four Nexus switches, but we are not sure whether this design is recommended by Cisco. I would like to hear your comments on it.

Thank you. N7K-design-1.JPG


7 Replies

Marwan ALshawi
VIP Alumni

Hi Muhammed

This design looks OK at a high level.

However, you mentioned that your datacenters are on the same VLAN. Does that mean you have an L2 service provider in between, or do you have your own interconnect link, fiber for example?

I ask because, if you have an L3 service provider, Nexus can still extend L2 end to end over the L3 SP using a technology called OTV.
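For reference, an OTV overlay on the N7K looks roughly like the sketch below. This assumes a multicast-capable L3 transport between sites; the interface, group addresses, VLAN range, and site identifier are all hypothetical placeholders:

```
feature otv
otv site-vlan 99            ! hypothetical VLAN used for local site adjacency
otv site-identifier 0x1     ! must differ per site

interface Overlay1
  otv join-interface Ethernet1/1   ! routed uplink toward the L3 transport
  otv control-group 239.1.1.1      ! multicast group for OTV control plane
  otv data-group 232.1.1.0/28      ! multicast range for extended L2 multicast
  otv extend-vlan 100-110          ! VLANs stretched between datacenters
  no shutdown
```

This is only relevant if the inter-DC transport is L3; with dark fiber, a plain vPC-based L2 extension (as in your diagram) does not need OTV.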

About the other aspects of your diagram: you are using vPCs from the access switches (are they Nexus?).

From the Nexus switches in DC 1 to DC 2, I am assuming L2 with vPC as well.

I am not sure whether you have any concerns or specific questions about it, as I do not see any issue with it at a high level.

In any case, you can refer to the following links, which cover everything you need to know from a design point of view:

http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/DC_3_0/DC-3_0_IPInfra.html

http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/nx_7000_dc.html

HTH

Please rate if helpful.

I agree with marwanshawi: the design will depend on your transport between DC1 and DC2.

My only concern is the northbound L3 traffic flow. I am not sure whether you want active/active for both DCs, or active/standby, where all traffic would come into DC1, say.

Regards,

jerry

Thanks for the answers.

The cables between the datacenters are dark fiber; there is no ISP between them. We are not planning to use OTV.

The connections between the Nexus 7000s are all trunk ports, and there are two vPC domains, each belonging to one datacenter.

In this design, the L3 SVIs (the servers' gateways) are on the Nexus switches, so there will be four HSRP gateways.
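For anyone following along, a per-switch HSRP gateway SVI on NX-OS looks roughly like this. The VLAN number, addresses, and priority below are hypothetical; each of the four N7Ks would carry a similar SVI with its own real IP and an appropriate priority:

```
feature hsrp
feature interface-vlan

interface Vlan100
  ip address 10.0.100.2/24   ! this switch's real IP (unique per N7K)
  hsrp 100
    priority 110             ! higher priority wins the active role
    preempt
    ip 10.0.100.1            ! virtual gateway IP the servers point to
  no shutdown
```

Note that with vPC, both peers in a domain forward traffic addressed to the HSRP virtual MAC, regardless of which one is nominally active.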

My concern is all about the double-layer (back-to-back) vPC,

and the northbound connections to the core backbones, which are L3 ports from each Nexus to the core.

My specific question is:

With back-to-back vPC, will there be any problem with traffic from a server in DC1 to a server in DC2, on the same or different VLANs?

You shouldn't have any problems. The only major gotcha is to make sure that you use different vPC domains in the two DCs.

-Matt
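To illustrate the distinct-domains point, a back-to-back vPC between the two pairs can be sketched as below. The domain IDs, keepalive addresses, and port-channel numbers are hypothetical; what matters is that the DC1 pair and the DC2 pair use different `vpc domain` IDs:

```
feature vpc

! On the DC1 pair of N7Ks
vpc domain 10
  peer-keepalive destination 192.168.1.2 source 192.168.1.1
interface port-channel1
  switchport mode trunk
  vpc peer-link

! On the DC2 pair of N7Ks
vpc domain 20
  peer-keepalive destination 192.168.2.2 source 192.168.2.1
interface port-channel1
  switchport mode trunk
  vpc peer-link

! The inter-DC interconnect is a vPC on each side (back-to-back vPC)
interface port-channel100
  switchport mode trunk
  vpc 100
```

Each side sees the other pair as a single logical switch, so all interconnect links stay in forwarding state.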

Your data centers don't seem that big - do you really need to spend the colossal amounts of money on 4 N7Ks?

Hi Muhammed

With double-sided vPC there are no issues; one of the advantages of using Nexus is having all paths in forwarding state and redundant.

However, you need to be aware of the design requirements for this type of topology. As stated above, you need to make sure each vPC side has its own vPC domain; see the diagram below.

Also, you need to be careful with the L3 part:

- Are you planning to make the N7Ks in each datacenter the HSRP active/standby pair for the local servers? If so, you need to plan it well, as you have flat L2 between the two datacenters.

- What about L3 from the N7Ks to the core? It is better to have L3 links and use ECMP from the upstream routers down to the N7Ks.
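The routed-uplinks-with-ECMP idea can be sketched as follows. OSPF is used here only as an example IGP, and the interfaces and addressing are hypothetical; two equal-cost routes to the core give ECMP in both directions:

```
feature ospf
router ospf 1
  router-id 10.255.0.1       ! hypothetical router ID for this N7K

interface Ethernet1/1
  no switchport              ! routed port toward core router 1
  ip address 10.1.1.1/30
  ip router ospf 1 area 0

interface Ethernet1/2
  no switchport              ! routed port toward core router 2
  ip address 10.1.2.1/30
  ip router ospf 1 area 0
```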

I would recommend you have a look at this discussion, where we covered this part of the topology (L3):

https://supportforums.cisco.com/thread/2098897

One more thing to consider with your N7K and HSRP design: HSRP interface tracking is not recommended when you are using vPC. See the diagram below: if the core links go down, the N7K will shut down the SVI, and traffic will be sent over the vPC peer link to be routed back out the other vPC member port, which will be blocked by vPC loop-prevention behavior. It is better to have a non-vPC L3 VLAN or an L3 link between the N7K boxes for this case.
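Such a dedicated routed link between the two N7Ks of a vPC pair could look like the sketch below, kept outside the vPC entirely so routed traffic never has to cross the vPC peer link (interface, addressing, and OSPF process are hypothetical):

```
feature ospf

interface Ethernet1/10
  no switchport              ! routed backup path between the vPC peers
  ip address 10.254.0.1/30   ! the peer N7K would use 10.254.0.2/30
  ip router ospf 1 area 0
```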

I think these are the most important points to take into consideration from a high-level point of view.

HTH

Please rate helpful posts.

Thank you for all the info you have provided.
