
Nexus vPC vs L3 to the closet in Campus networks

Borman Bravo
Level 1

I wanted to see what other network architects think of Cisco's vPC technology, which is available on the Nexus line of switches, and of its application. Personally, I believe that under most circumstances vPC is strictly a data center technology; it should not be implemented in a user access or closet access scenario where the now more traditional "layer 3 to the closet" design can be applied. With vPC to the closet we do take advantage of all available bandwidth and of fast convergence, but we are still creating an environment susceptible to layer 2 loops, we still depend on STP as a backup, and we arguably add some complexity from a support/troubleshooting standpoint. On the other hand, creating P2P /30 routed links between the distribution and access layers can provide the same "use all available uplinks with fast convergence" result when used in conjunction with a routing protocol that load-balances traffic, while also providing L2 fault isolation. This assumes that user subnets do not need to span across closets, which is considered a best-practice design anyway.
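
For illustration only, a routed access uplink along those lines might look roughly like the following (IOS syntax; the interface, the addressing, and the choice of OSPF are assumptions made for the sketch, not anything stated in this thread):

interface TenGigabitEthernet1/0/49
 description P2P /30 routed uplink to distribution
 no switchport
 ip address 10.0.0.2 255.255.255.252
 ip ospf network point-to-point
!
router ospf 1
 passive-interface default
 no passive-interface TenGigabitEthernet1/0/49
 network 10.0.0.0 0.0.0.3 area 0

With two such uplinks advertising equal-cost routes, the routing protocol load-balances across both and a link failure reconverges without STP ever being involved.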

Any thoughts are appreciated.

6 Replies

Leo Laohoo
Hall of Fame
Personally, I believe that under most circumstances vPC is strictly a data center technology,

This is not the answer you are looking for but all I can say is "watch this space".

The words "Nexus 2K" and "PoE" have been going around Cisco for some time now.

Yes, it is true that Nexus is basically designed for data centers, but proper network design does not say you must have L3 between the access layer and the distribution layer. vPC is not complicated; it is actually quite simple, considering that you no longer need to take care of STP, and it gives you more bandwidth, as you said.

The reason it is designed for data centers is, of course, bandwidth; at the user level we do not need 10-gig links. vPC does not change the rules of STP; it simply means that if you have a dual-attached device, STP can effectively be ignored on those links, and if not, STP is still used.
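
For anyone picturing it, a bare-bones vPC pair on NX-OS looks roughly like this; the domain ID, keepalive addresses, and port-channel numbers are made up for the sketch:

feature vpc
feature lacp
!
vpc domain 10
  peer-keepalive destination 192.168.1.2 source 192.168.1.1
!
interface port-channel1
  switchport mode trunk
  vpc peer-link
!
interface port-channel20
  switchport mode trunk
  vpc 20
!
interface Ethernet1/20
  switchport mode trunk
  channel-group 20 mode active

A dual-attached access switch sees port-channel 20 as one logical uplink, so both links forward traffic and STP stays in place only as a safeguard.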

More benefits can be seen from VSS/Nexus when we go for OTV/VPLS Layer 2 extension.
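
As a rough idea of what the OTV piece involves on NX-OS (the site VLAN, groups, VLAN range, and join interface below are placeholders, not values from any design in this thread):

feature otv
otv site-vlan 99
otv site-identifier 0x1
!
interface Overlay1
  otv join-interface Ethernet1/1
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/28
  otv extend-vlan 10, 20
  no shutdown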

Thanks

Ajay

I'd really like to know what Cisco has to say about my question.

Are you implying that Nexus2K is being geared towards the user closet?


Heck YES!   

There are no plans for the N2K in user access, and it will never be available on the campus side.

On the traditional campus side, I would look more towards deploying VSS and an L3 routed access design. I have designed many DC/campus consolidation projects for my customers and have always used an L3 routed access design or VSS on the campus side. If spanning VLANs across buildings is a requirement, I have always used a dedicated aggregation box running either VSS or STP towards the access layer and L3 back to the campus/DC core.
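
For context, converting a pair of 6500s into a VSS domain is along these lines (IOS; the domain number, VSL port-channel, and interfaces are arbitrary for the sketch, and the second chassis mirrors it with switch 2 and virtual link 2):

switch virtual domain 100
 switch 1
!
interface Port-channel1
 switch virtual link 1
 no shutdown
!
interface TenGigabitEthernet1/5/4
 channel-group 1 mode on
 no shutdown
!
! then, from exec mode on both chassis:
switch convert mode virtual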

If vPC were available on the Catalyst switches, I would use it to get all of the available bandwidth active/active. In a few of the designs, we have used Nexus 7K VDCs to act as the dedicated campus aggregation box running vPC to the user access layer, which has worked very well for us.
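
For reference, carving a dedicated campus-aggregation VDC out of a 7K is roughly the following (the VDC name and interface range are placeholders, and real port allocation has to respect the line card's port-group boundaries):

vdc CAMPUS-AGG id 2
  allocate interface Ethernet1/1-8
!
switchto vdc CAMPUS-AGG
! then configure the vpc domain, peer-link, and member port-channels inside that VDC as usual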

Hope this helps.

Cheers,

-amit singh

Your traffic patterns may vary from ours, but we had a 99% north/south desktop-to-server-farm communications path, and we decided to completely consolidate our access/distribution/core into clustered 7Ks and perform routing only on the 7Ks. We validated our design at Networks 2010 and validated subsequent design modifications as new buildings were added.

We were PC->L2->Access_Closet[.1]->L3->dist_rtrs->L3->core_6500s->L3->server_farm_routers->L2->Servers

we converted to

PC->L2->Access_Closet->L2_vpc->7Ks->L2->UCS_server 

where the 7Ks are THE router for both the UCS server farms and the desktop users. What was a 4-hop path between users and resources is now 1 hop: the 7Ks.

Works great. We even put the 7Ks in different areas of the core building to limit our environmental issues and provide more flexible WAN/OTV connectivity options; they are positioned similarly to how the distribution layer used to be. Connectivity to data center resources is via multiple divergent 10G links.

Closets in buildings on the same campus connect as follows

PC->L2->Access_Closet->L2_vpc->5K_pair->L2->7Ks->L2->UCS_server 

Of course we had duplicated closet VLANs [10,20,30], and we swapped those when we migrated, but other than management IPs there were no re-addressing requirements beyond moving the closet .1 gateways to an HSRP address on the 7Ks. Spanning-tree optimizations needed to happen as well.
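
That gateway move amounts to something like this on each 7K (NX-OS; the VLAN, addresses, and group number are invented for the sketch, with the vPC peer configured the same way at a lower priority):

feature interface-vlan
feature hsrp
!
interface Vlan10
  no shutdown
  ip address 10.1.10.2/24
  hsrp 10
    priority 110
    preempt
    ip 10.1.10.1

so the closets keep their familiar .1 default gateway while the actual routing lands on the 7Ks.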

A PLANNED side benefit was that the cost of the 7Ks was only slightly greater than that of the deactivated 6500s, which could then be re-purposed rather than purchased new when expansion occurs.

Probably one of the most satisfying design changes of my career.
