
vPC on Nexus 5000 with Catalyst 6500 (no VSS)

Antonio Morales
Level 1

Hi, I'm pretty new to the Nexus and UCS world, so I have many questions and I hope you can help me get some answers.

The diagram below shows the configuration we are looking to deploy. It is designed this way because we do not have VSS on the 6500 switches, so we cannot create a single EtherChannel to both 6500s.

[Diagram: UCS Diagram.png]

The blades in our UCS chassis have Intel dual-port cards, so they do not support full failover.

The questions I have are:

- Is this my best deployment choice?

- vPC relies heavily on the management interface of the Nexus 5000 for the peer-keepalive monitoring, so what is going to happen if the vPC breaks because:

     - one of the 6500s goes down

          - What happens with STP?

          - What is going to happen to the EtherChannels on the remaining 6500?

     - the management interface goes down for some other reason

          - Which one is going to be the primary Nexus?

Below is the list of devices involved and the configuration for the Nexus 5000s and 6500s.

Any help is appreciated.

Devices

- 2 Cisco Catalyst 6506 switches, each with two WS-SUP720-3B supervisors (no VSS)
- 2 Cisco Nexus 5010
- 2 Cisco UCS 6120XP
- 2 UCS chassis
     - 4 Cisco B200-M1 blades (2 per chassis)
          - Dual-port 10Gb Intel card (1 per blade)

vPC Configuration on Nexus 5000

TACSWN01

feature vpc
vpc domain 5
  reload restore
  reload restore delay 300
  peer-keepalive destination 10.11.3.10
  role priority 10
!
!--- Enables vPC, defines the vPC domain and the keepalive peer
!
int ethernet 1/9-10
  channel-group 50 mode active
!
!--- Puts the interfaces into Po50
!
int port-channel 50
  switchport mode trunk
  spanning-tree port type network
  vpc peer-link
!
!--- Po50 configured as the vPC peer-link
!
int ethernet 1/17-18
  description UCS6120-A
  switchport mode trunk
  channel-group 51 mode active
!
!--- Associates the interfaces with Po51, connected to UCS6120xp-A
!
int port-channel 51
  switchport mode trunk
  vpc 51
  spanning-tree port type edge trunk
!
!--- Associates vPC 51 with Po51
!
int ethernet 1/19-20
  description UCS6120-B
  switchport mode trunk
  channel-group 52 mode active
!
!--- Associates the interfaces with Po52, connected to UCS6120xp-B
!
int port-channel 52
  switchport mode trunk
  vpc 52
  spanning-tree port type edge trunk
!
!--- Associates vPC 52 with Po52
!
!----- CONFIGURATION for the connection to the Catalyst 6506s
!
int ethernet 1/1-3
  description Cat6506-01
  switchport mode trunk
  channel-group 61 mode active
!
!--- Associates the interfaces with Po61, connected to Cat6506-01
!
int port-channel 61
  switchport mode trunk
  vpc 61
!
!--- Associates vPC 61 with Po61
!
int ethernet 1/4-6
  description Cat6506-02
  switchport mode trunk
  channel-group 62 mode active
!
!--- Associates the interfaces with Po62, connected to Cat6506-02
!
int port-channel 62
  switchport mode trunk
  vpc 62
!
!--- Associates vPC 62 with Po62

TACSWN02

feature vpc
vpc domain 5
  reload restore
  reload restore delay 300
  peer-keepalive destination 10.11.3.9
  role priority 20
!
!--- Enables vPC, defines the vPC domain and the keepalive peer
!
int ethernet 1/9-10
  channel-group 50 mode active
!
!--- Puts the interfaces into Po50
!
int port-channel 50
  switchport mode trunk
  spanning-tree port type network
  vpc peer-link
!
!--- Po50 configured as the vPC peer-link
!
int ethernet 1/17-18
  description UCS6120-A
  switchport mode trunk
  channel-group 51 mode active
!
!--- Associates the interfaces with Po51, connected to UCS6120xp-A
!
int port-channel 51
  switchport mode trunk
  vpc 51
  spanning-tree port type edge trunk
!
!--- Associates vPC 51 with Po51
!
int ethernet 1/19-20
  description UCS6120-B
  switchport mode trunk
  channel-group 52 mode active
!
!--- Associates the interfaces with Po52, connected to UCS6120xp-B
!
int port-channel 52
  switchport mode trunk
  vpc 52
  spanning-tree port type edge trunk
!
!--- Associates vPC 52 with Po52
!
!----- CONFIGURATION for the connection to the Catalyst 6506s
!
int ethernet 1/1-3
  description Cat6506-01
  switchport mode trunk
  channel-group 61 mode active
!
!--- Associates the interfaces with Po61, connected to Cat6506-01
!
int port-channel 61
  switchport mode trunk
  vpc 61
!
!--- Associates vPC 61 with Po61
!
int ethernet 1/4-6
  description Cat6506-02
  switchport mode trunk
  channel-group 62 mode active
!
!--- Associates the interfaces with Po62, connected to Cat6506-02
!
int port-channel 62
  switchport mode trunk
  vpc 62
!
!--- Associates vPC 62 with Po62

 

vPC Verification

show vpc consistency-parameters
!--- Shows the vPC compatibility/consistency parameters
show feature
!--- Use it to verify that the vpc and lacp features are enabled
show vpc brief
!--- Displays information about the vPC domain
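A couple of other standard NX-OS checks that can be handy here (just suggestions, not part of the configuration above):

show vpc peer-keepalive
!--- Shows the status of the peer-keepalive link
show port-channel summary
!--- Shows the state of the port-channels and their member ports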

EtherChannel configuration on the TAC 6500s

TACSWC01

interface range GigabitEthernet2/38 - 43
  description TACSWN01 (Po61 vPC61)
  switchport
  switchport trunk encapsulation dot1q
  switchport mode trunk
  no ip address
  channel-group 61 mode active

TACSWC02

interface range GigabitEthernet2/38 - 43
  description TACSWN02 (Po62 vPC62)
  switchport
  switchport trunk encapsulation dot1q
  switchport mode trunk
  no ip address
  channel-group 62 mode active
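On the Catalyst side the bundles can be checked with the usual IOS commands; a quick sketch using the port-channel numbers above (not part of the original configuration):

show etherchannel 61 summary
!--- Member Gig ports should show the (P) flag, i.e. bundled in Po61
show interfaces port-channel 61 trunk
!--- Confirms the bundle is trunking and which VLANs it carries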

29 Replies

Well, no one else answered, so I'll toss out my thoughts.

Run GLBP on your 6500s to distribute the default gateway among both 6500s.
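If it helps, a bare-bones GLBP sketch for one SVI on the two 6500s - VLAN, group number and addresses below are made-up examples, not from your setup:

! Cat6506-01 (higher priority, becomes the AVG)
interface Vlan10
  ip address 10.10.10.2 255.255.255.0
  glbp 10 ip 10.10.10.1
  glbp 10 priority 110
  glbp 10 preempt
!
! Cat6506-02
interface Vlan10
  ip address 10.10.10.3 255.255.255.0
  glbp 10 ip 10.10.10.1
  glbp 10 preempt
!--- The AVG answers ARP for 10.10.10.1 with different virtual MACs, so hosts get spread across both 6500s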

Not sure what you wanted to run on the port channel between the 6500s - L2 communication can occur via the 5Ks, and if Po1 is a trunk you will have a loop.

If the mgmt link goes down, whichever 5K was primary will stay primary.

Antonio,

It would seem you put some work into your config and it has been helpful for me - our design is very much like yours.  Thank you!

How did you deal with the 6500 to 6500 trunk to prevent a loop?  Did you implement GLBP?

All the best,

Don

Thanks guys for the replies.

I didn't have the chance to reply earlier because I was off work and had no chance to check this forum at all.


About GLBP: I didn't implement it. Our 6506 switches are running HSRP for L3 and per-VLAN RSTP for L2.

About the configuration posted here, I ran it by Cisco TAC and they said it is good. We are actually going to reconfigure our infrastructure and implement what you see here this weekend, and we are also turning on jumbo frames on all the devices involved.
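For anyone curious, the jumbo-frame change is along these lines - take it as a rough sketch rather than our exact config (9216 is the usual maximum, and the policy name is arbitrary):

! Nexus 5000 - jumbo frames are enabled through a network-qos policy
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo
!
! Catalyst 6500 - global jumbo MTU plus per-port MTU on the uplinks
system jumbomtu 9216
interface range GigabitEthernet2/38 - 43
  mtu 9216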

I will let you know how it goes.

Best regards.

I'm sorry I didn't reply earlier, but I was quite busy.

I'm glad to tell you guys that the configuration works as expected. We now feel so confident that we have since started virtualizing our production servers.

We did run all possible tests:

- Shutting down the 6500s, one at a time
- Shutting down the Nexus 5Ks, one at a time
- Shutting down the UCS 6120XPs, one at a time
- Reducing the port-channels

And everything worked like a charm.

Just design your per-VLAN spanning tree (use RSTP) and your HSRP well to reduce network convergence time.
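To give an idea of what I mean, a rough sketch of the convergence-related pieces on the 6506s (VLAN, addresses and timers are examples only, not our production values):

spanning-tree mode rapid-pvst
!
interface Vlan30
  ip address 10.10.30.2 255.255.255.0
  standby 30 ip 10.10.30.1
  standby 30 priority 110
  standby 30 preempt
  standby 30 timers 1 3
!--- Rapid PVST+ plus HSRP preempt and 1s/3s hello/hold timers keep failover times in the low seconds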

If you decide to implement this, please let us know how well it works for you.

Thanks

Antonio,

We're deploying a very similar setup: also 6513s with no VSS, and two 5596s with (6) 2232s and (6) 2248s, all dual-homed to the 55Ks. We will be taking advantage of double-sided vPC, so servers will also be dual-homed to the 2232s. My question is regarding the uplinks from the 5596s to the 6513 cores; we also run HSRP and RPVST in our 6513 backbone.

Our current design is to have each of the 55Ks connect to a single 6513 with a port-channel. We thought this made more sense because our primary 6513 is doing all the HSRP routing for our data center VLANs, and also because, even though the 55Ks are peer-linked, there is still a primary and a secondary - the primary is going to do all the switching, correct?

OR

Should we dual-home the 55Ks to the 6513s? I guess this would be a good idea only if the 55Ks are both active and able to reach the primary 6513, right?

My attached drawing might better explain this.

Also, when I configured a server dual-homed to two 2232s I couldn't create a vPC. When I added the channel-group to the interfaces the port-channel was created, but when I then tried to add the vPC it said it already exists... I did a show vpc and yeah, it was there... does it automatically create the vPC if the configs match on both 55Ks?

Hi LHernandez,

We actually tried that configuration and found that a single Po to each 6513 will work, but dual-homing (as shown in the picture) works much better; convergence times are faster as well.

Once you have vPC up and running you can see the two N5Ks as one switch. Check my configuration for Po61 and Po62: it associates vPC 61 with Po61 connecting to SWC01, and vPC 62 with Po62 connecting to SWC02.

For vPC to come up you need "feature vpc" and "feature lacp", along with the same configuration on both switches.
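For completeness, enabling the features is just this on both N5Ks:

conf t
  feature vpc
  feature lacp
!--- then confirm both show as enabled with: show feature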

Check vPC by issuing the following commands:

show vpc brief

show vpc consistency-parameters

The last one will tell you whether the parameters/configuration on the two switches are consistent.

I hope I answered your questions.

Best Regards.

Antonio,

Glad to hear from you. A few questions then, before I change our physical and vPC topology.

1. What do the commands reload restore and reload restore delay 300 do?

2. Do I understand correctly that if we dual-home the 55Ks to each of the 6513s as shown in my picture, and each side gets its own vPC, then the second vPC is in a blocking state due to spanning tree, correct? We run Rapid Spanning Tree and HSRP.

3. The only bad thing I can see is that if one of the 55Ks goes down, bandwidth in our setup will decrease to 20 Gig, because vPC 10 will stay up and traffic will not move over to vPC 11.

4. Did you find that with dual-homed 55Ks you had less traffic going over the peer-link? Since both 55Ks have a path to 6513-01 - or, since 55K-01 is primary, does 55K-02 still send all traffic over the peer-link?

Hi and sorry for the delay.

Here are some answers.

1. "reload restore" enable the 5K to try restore the vPC after a reload of one or both of the members. the default delay before try is to wait 240 seconds, using " reload reastore delay 300" change the default wait.

2.  You are correct here, We have exact same configuration regarding HSRP and RSTP.

3. If one of the 5K goes down both vPC10 and vPC11 will stay up, remember you have exact same configuration on both 5Ks. The Bandwidth will be reduced becuase some of the interfaces are not available any more.

4. On my understanding peer-link passes no data traffic, it is used to syncroinize state of the vPC membersit also passes Multicast and broadcast traffic. Unicast traffic will pass only in link failures. Do not see the peer-link as another path for traffic, remeber both switches are acting as one. If tracffic is destinated to a specific switch lets say 6513-01 connected to vPC10,  traffic can come from both N5K because both have interfaces members of the vPC10.

Best Regards.

Hi,

Digging up an old thread.

How would a loop form if Po1 is a trunk?

Regards

S

jain.nitin
Level 3

We have the same setup described by Antonio, but when we connect N5K-1 and N5K-2 to both of our core switches it forms a loop and HSRP on the core switches starts flapping.

Whereas when we connect N5K-1 and N5K-2 to only one core switch, there is no problem.

We have the same setup and configuration as described, except that the port-channel mode between the core and the N5Ks is ON, not ACTIVE (LACP). Does that make any difference?

Please help us to resolve it.

Do you trunk the same VLANs between the 6500s as you do to the 5Ks?

You don't need to anymore... the 5Ks will carry the HSRP communication between the 6500s.

Are you connecting the 5Ks to the 6500s with a vPC-based link?

Do a "show int poXX" and a "show vpc XX" on the 5K links towards the 6500s to make sure there are no "downs" or other errors.


Jain.Nitin,

I'm going to plan on switching my configuration to dual-homed 5Ks to both of the 6513s... you mention that you had a loop in the network. Did you create one (1) vPC interface from the 5Ks to the 6500 core? I think that would cause a loop, but if there are two separate vPCs going to the core, then spanning tree on the 5Ks should block the second vPC towards the secondary (non-root) core switch. You bring up a good point though: how is your HSRP configured, and is your primary core switch the root for the VLANs that your 5Ks carry on the trunk?

We expect that by making the primary 6500 core switch both the HSRP active router and the root bridge, spanning tree should block correctly.
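Roughly what we have in mind on the two cores, as a sketch only (VLAN, addresses and priorities are examples, not our actual values):

! Primary 6513 - STP root and HSRP active for the data center VLANs
spanning-tree vlan 20 root primary
interface Vlan20
  ip address 10.10.20.2 255.255.255.0
  standby 20 ip 10.10.20.1
  standby 20 priority 110
  standby 20 preempt
!
! Secondary 6513 - STP secondary root and HSRP standby
spanning-tree vlan 20 root secondary
interface Vlan20
  ip address 10.10.20.3 255.255.255.0
  standby 20 ip 10.10.20.1
  standby 20 priority 90
  standby 20 preempt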

I'm not sure what Robert R III means by saying we don't need the VLANs on the trunk between the two 6500s and that the 5Ks will allow HSRP communication between the 6500s... Not clear?

ihernandez81,

Between c1-r1 & c1-r2 there are no L2 links; ditto with d6-s1 & d6-s2. We did have a routed link just to allow orphan traffic.

All the c1-r1 & c1-r2 HSRP communications (we use GLBP as well) go from c1-r1 to c1-r2 via hosp-n5k-s1 & hosp-n5k-s2. Port-channels 203 & 204 carry the exact same VLANs.

The same is the case on the d6-s1 & d6-s2 side, except we converted them to a VSS cluster, so we only have Po203 with 4 x 10Gb links going to the 5Ks (2 from each VSS member to each 5K).

As you can tell, what we were doing was extending VM VLANs between two data centers prior to the arrival of the 7010s and UCS chassis - which worked quite well.

If you got on any 5K you would see two port-channels - 203 & 204 - one going to each 6500; again, when one pair went to VSS, Po204 went away.

I know, I know, they are not the same thing... but if you view the 5Ks like a 3750 stack: how would you hook up a 3750 stack to two 6500s, and if you did, why would you run an L2 link between the 6500s?

For us, using 4 x 10G ports between the 6509s was too expensive - we had 6704s - so we used the 5Ks.

Our blocking link was on one of the links between site1 & site2. If we did not have WAN connectivity there would have been no blocking or loops.

Caution... if you go with 7Ks, beware of the inability to do L2/L3 via vPCs.

Better?

One of the nice things about working with some of this stuff is that, as long as you maintain L2 connectivity, things tend to keep working while you are migrating them - unless they really break.

Robert,

I'm reaching out for any pointers or design ideas you may have - you mentioned the arrival of 7010s! I'm reviewing our BOM and putting some drawings together for a new data center to which we'll be extending VLANs using OTV over two Layer 2 10Gig links. The new data center will also be our disaster recovery site, so I'm working with the data center guys to look into VMware SRM... Right now I'm just trying to make sure we have all the right pieces.

We're looking at the 7010 with an M1 card, 3 F2e cards, 5 Fabric-2 modules and a Sup2E - so 4 7Ks in total, two at each site, each running admin, core, OTV and DC VDCs...

We'll have clustered 5585 firewalls between the sites to do north-south routing. I'm trying to piece this all together; we have 20 FEXes at each DC, so I'm quickly running out of ports. I also have to interconnect the different VDCs as well as peer-link my VDCs at each site between the two 7Ks. We've dropped the 5Ks from our design and single-homed the 2Ks at top of rack, then dual-homed the servers.

Are you using OTV? Did you have to use LISP for the rest of your network?

I'll post a drawing once I'm done...maybe I should move this to another discussion.
