Antonio Morales
Beginner

vPC on Nexus 5000 with Catalyst 6500 (no VSS)

Hi, I'm pretty new to the Nexus and UCS world, so I have many questions and hope you can help me find some answers.

The diagram below shows the configuration we are looking to deploy. We designed it this way because we do not have VSS on the 6500 switches, so we cannot create a single EtherChannel to the 6500s.

UCS Diagram.png

Our blades in the UCS chassis have Intel dual-port cards, so they do not support full failover.

The questions I have are:

- Is this my best deployment choice?

- vPC depends heavily on the management interface of the Nexus 5000 for peer-keepalive monitoring, so what is going to happen if the vPC breaks because:

     - one of the 6500s goes down

          - STP?

          - What is going to happen with the EtherChannels on the remaining 6500?

     - the management interface goes down for any other reason

          - Which one is going to be the primary Nexus?

Below are the list of devices involved and the configurations for the Nexus 5000s and the 6500s.

Any help is appreciated.

Devices

·         2 Cisco Catalyst 6506 with two WS-SUP720-3B supervisors each (no VSS)

·         2 Cisco Nexus 5010

·         2 Cisco UCS 6120XP fabric interconnects

·         2 UCS chassis

     -    4 Cisco B200-M1 blades (2 per chassis)

          - Dual-port 10Gb Intel card (1 per blade)

vPC Configuration on Nexus 5000

TACSWN01

feature vpc
!
vpc domain 5
  reload restore delay 300
  peer-keepalive destination 10.11.3.10
  role priority 10
!
!--- Enables vPC, defines the vPC domain, and sets the keepalive peer
!
interface ethernet 1/9-10
  channel-group 50 mode active
!
!--- Puts the interfaces in Po50
!
interface port-channel 50
  switchport mode trunk
  spanning-tree port type network
  vpc peer-link
!
!--- Po50 configured as the vPC peer-link
!
interface ethernet 1/17-18
  description UCS6120-A
  switchport mode trunk
  channel-group 51 mode active
!
!--- Puts the interfaces in Po51, connected to UCS6120xp-A
!
interface port-channel 51
  switchport mode trunk
  vpc 51
  spanning-tree port type edge trunk
!
!--- Associates vPC 51 with Po51
!
interface ethernet 1/19-20
  description UCS6120-B
  switchport mode trunk
  channel-group 52 mode active
!
!--- Puts the interfaces in Po52, connected to UCS6120xp-B
!
interface port-channel 52
  switchport mode trunk
  vpc 52
  spanning-tree port type edge trunk
!
!--- Associates vPC 52 with Po52
!
!--- CONFIGURATION for the connection to the Catalyst 6506s
!
interface ethernet 1/1-3
  description Cat6506-01
  switchport mode trunk
  channel-group 61 mode active
!
!--- Puts the interfaces in Po61, connected to Cat6506-01
!
interface port-channel 61
  switchport mode trunk
  vpc 61
!
!--- Associates vPC 61 with Po61
!
interface ethernet 1/4-6
  description Cat6506-02
  switchport mode trunk
  channel-group 62 mode active
!
!--- Puts the interfaces in Po62, connected to Cat6506-02
!
interface port-channel 62
  switchport mode trunk
  vpc 62
!
!--- Associates vPC 62 with Po62

TACSWN02

feature vpc
!
vpc domain 5
  reload restore delay 300
  peer-keepalive destination 10.11.3.9
  role priority 20
!
!--- Enables vPC, defines the vPC domain, and sets the keepalive peer
!
interface ethernet 1/9-10
  channel-group 50 mode active
!
!--- Puts the interfaces in Po50
!
interface port-channel 50
  switchport mode trunk
  spanning-tree port type network
  vpc peer-link
!
!--- Po50 configured as the vPC peer-link
!
interface ethernet 1/17-18
  description UCS6120-A
  switchport mode trunk
  channel-group 51 mode active
!
!--- Puts the interfaces in Po51, connected to UCS6120xp-A
!
interface port-channel 51
  switchport mode trunk
  vpc 51
  spanning-tree port type edge trunk
!
!--- Associates vPC 51 with Po51
!
interface ethernet 1/19-20
  description UCS6120-B
  switchport mode trunk
  channel-group 52 mode active
!
!--- Puts the interfaces in Po52, connected to UCS6120xp-B
!
interface port-channel 52
  switchport mode trunk
  vpc 52
  spanning-tree port type edge trunk
!
!--- Associates vPC 52 with Po52
!
!--- CONFIGURATION for the connection to the Catalyst 6506s
!
interface ethernet 1/1-3
  description Cat6506-01
  switchport mode trunk
  channel-group 61 mode active
!
!--- Puts the interfaces in Po61, connected to Cat6506-01
!
interface port-channel 61
  switchport mode trunk
  vpc 61
!
!--- Associates vPC 61 with Po61
!
interface ethernet 1/4-6
  description Cat6506-02
  switchport mode trunk
  channel-group 62 mode active
!
!--- Puts the interfaces in Po62, connected to Cat6506-02
!
interface port-channel 62
  switchport mode trunk
  vpc 62
!
!--- Associates vPC 62 with Po62
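A note on the peer-keepalive lines above: on the Nexus 5000 the keepalive normally rides the mgmt0 interface in the management VRF, so it can help to be explicit about the source address and VRF. A sketch, reusing the addresses from these configs (the source and vrf keywords are optional, but they make the intent clear and avoid surprises if routing changes):

vpc domain 5
  peer-keepalive destination 10.11.3.10 source 10.11.3.9 vrf management
!--- On the peer, swap the destination and source addresses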

 

vPC Verification

show vpc consistency-parameters
!--- Shows the vPC compatibility parameters
show feature
!--- Verifies that the vpc and lacp features are enabled
show vpc brief
!--- Displays information about the vPC domain
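For the failure scenarios asked about above, a few additional standard NX-OS show commands are worth adding to the checklist:

show vpc peer-keepalive
!--- Shows the status of the keepalive link itself
show vpc role
!--- Shows which switch is currently the vPC primary
show port-channel summary
!--- Confirms that the member ports have bundled via LACP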

EtherChannel configuration on the TAC 6500s

TACSWC01

interface range GigabitEthernet2/38 - 43
 description TACSWN01 (Po61 vPC61)
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
 no ip address
 channel-group 61 mode active

TACSWC02

interface range GigabitEthernet2/38 - 43
 description TACSWN02 (Po62 vPC62)
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
 no ip address
 channel-group 62 mode active
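On the Catalyst side, the bundles can be verified with standard IOS commands (the channel-group number is taken from the config above):

show etherchannel 61 summary
!--- Confirms Po61 is bundled (SU on the port channel, P on the members)
show lacp neighbor
!--- Shows the LACP partner; for a vPC it should list both Nexus 5010s
show spanning-tree summary
!--- Confirms the forwarding state toward the Nexus pair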


Hi Jain,

Sorry for the delay in answering, but things here are really crazy with lots of projects.

About the port-channel mode for vPC: I'm a big fan of mode "on" and highly recommend it when you do it on stacked switches, but for vPC I would say switch to LACP.

So, if you have the exact same configuration we have, check the following things:

- Make sure you configure two vPCs (61 and 62 in my case), one for each 65K switch. A single vPC will make STP and HSRP flap.

- Check the vPC status and the consistency and compatibility of the parameters, because if the vPC is not up, the two 5Ks will act as separate switches, causing STP and HSRP to flap.

- Check the STP status and also the logs on the 65Ks and on the N5Ks; look for indications that STP is flapping.

- For STP, make sure the Catalyst 65K (core) is always the root. STP flapping makes HSRP believe the other switch is unreachable, and HSRP will flap.

Maybe this is the first thing to check, but make sure you are using Rapid STP or Rapid Per-VLAN STP.

It is true that having L2 connectivity between the two 65K core switches and connecting the Nexus 5Ks to them will create a loop, and that is expected to happen (we do not have VSS), but if RSTP is configured properly, flapping should not happen.
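A sketch of pinning the STP root to the 6500s while running Rapid PVST, as suggested above; these are standard IOS commands, and the VLAN range is hypothetical:

spanning-tree mode rapid-pvst
spanning-tree vlan 1-100 root primary
!--- On the second 6500, use "root secondary" instead so failover is deterministic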

Hope this helps.

Best Regards.

Jain/Antonio,

If you do not connect your 6500s via L2 links carrying the VLANs presented to the 5Ks, you CANNOT have a loop. Why build something that allows it?

Skip it from day 1 and you will never look back. 

Configure the ports closest to the "core" as "mode active" and their peers as "mode passive" to set a standard, similar to deciding who gets the .1 and who gets the .2 on a point-to-point link.
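The convention suggested here might look like this on the two ends of one bundle (interface and channel-group numbers borrowed from the configs earlier in the thread, which use "mode active" on both ends; either combination forms a valid LACP bundle):

!--- Catalyst 6500 (core) side
interface range GigabitEthernet2/38 - 43
 channel-group 61 mode active
!
!--- Nexus 5000 side
interface ethernet 1/1-3
  channel-group 61 mode passive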

Antonio, or anyone else who wants to comment: we've now moved to a dual-homed 55K design. Our 55Ks are dual-homed to our non-VSS 6513s running HSRP with RPVST.

We've started failover testing, and I'm wondering if we're getting expected results. We have a small number of servers configured right now, so there is not much to sample from. There is a mix of Linux and ESXi hosts; the ones I'm working with are in a port channel with LACP, and it's negotiating correctly. Connections are stable at this point. When I bounce a 55K, though, some of the servers become unreachable for 2-13 pings when the bounce begins; the servers come back and then become unreachable again for 2-13 pings when the 55K is coming back up and the FEXes are reconnecting. Is this about the same time you have seen in your tests? Some devices miss one ping or none. If this were a Nexus issue, wouldn't all devices be affected the same way?

If you are not on ESXi 5.1, it does not really do LACP ( http://www.vmware.com/files/pdf/techpaper/Whats-New-VMware-vSphere-51-Network-Technical-Whitepaper.pdf ).

How do the Windows-only servers behave?

Are your VLANs blocking on the links to the 5Ks or on the links between the 6513s?

Drop the "common" VLANs from the links between the 6513s if they are also on the links to the 5Ks. If you have them between the 6513s, you have spanning-tree loops. Take Antonio's diagram at the top and kill off Po1 and see if life still works for your network (really, it will). You can leave routed links between the 6513s, but not L2 ones.
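Pruning the shared VLANs from the inter-6513 trunk can be done with standard IOS trunk commands; the interface and VLAN numbers here are hypothetical stand-ins:

interface TenGigabitEthernet1/1
 description L2 link between the 6513s
 switchport trunk allowed vlan remove 10-20
!--- 10-20 stands in for the VLANs that are also trunked to the 5Ks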

And it is possible that when one 5K shuts down, the existing spanning-tree looped topology just moves to another loop topology, and the extra recalculation time hurts recovery, as perhaps you are seeing.

HSRP may be using the links that get blocked with a failure, and if they block, your HSRP recovery time goes up too.

Have the Windows server ping the HSRP physical IPs at the same time as you are pinging the virtual IP to see if they respond more quickly.

But really, drop the L2 links between the 6513s and see what happens with no other changes. And if you are not using HSRP version 2, possibly switch to that for accelerated recovery as well; you can run v1 and v2 on the same router. ( http://www.cisco.com/en/US/docs/ios/12_3t/12_3t4/feature/guide/gthsrpv2.html )
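Switching an SVI to HSRP version 2 is a one-line change per interface; a sketch with a hypothetical VLAN and addresses (both routers in a group must run the same version, so change them in the same window):

interface Vlan10
 standby version 2
 standby 10 ip 10.10.10.1
 standby 10 priority 110
 standby 10 preempt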

Guys, what Robert is trying to say is that you should design all STP out of your network and data center if possible. There is a reason that TRILL, FabricPath, and SPB (802.1aq) are making their way into data center designs: STP is evil, and given the right conditions it will black-hole your data center.

We run the same config as Robert: 6Ks as the L2/L3 boundary with an L3 link between them for orphaned traffic. GLBP allows active/active for northbound traffic and will load-balance, whereas HSRP is active/passive on the 6K, unless you are going to play the odd/even VLAN and HSRP primary matching game. Build a V and not a square; in a square, your FHRP hellos will flow down to the 5K and back up to the other 6K.
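A minimal GLBP sketch for the active/active behavior described here; the VLAN and addresses are hypothetical:

interface Vlan10
 ip address 10.10.10.2 255.255.255.0
 glbp 10 ip 10.10.10.1
 glbp 10 priority 110
 glbp 10 preempt
 glbp 10 load-balancing round-robin
!--- Each gateway answers ARP for the virtual IP with its own virtual MAC,
!--- so hosts spread their northbound traffic across both 6Ks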

Here is a case study we did with Cisco, the Visio is at the end.

http://www.cisco.com/en/US/solutions/collateral/ns340/ns517/ns224/case_study_Wellmont_Health.pdf

This is the configuration we have used for 3 years, and it's rock solid. It is a similar design to the other posts but with L3 between the 6Ks: each 6K has a single port channel to both 5Ks, vPC from the 5Ks up to the 6Ks, an L2 vPC peer-link between the 5Ks, single-homed 2232s with port channels, and FCoE.

Darren,

I came back to your post for more information and to revisit your drawing.

I'm in the process of drawing up some configuration ideas to connect our existing data center to a new one. Nothing has been bought yet, so I want to make sure we get all the right pieces.

We're looking at 2 Nexus 7Ks at each site, with the VLANs extended using Cisco OTV; it sounds like that's necessary to bring up a VM at our new DC with its existing IP. You mentioned VMware Site Recovery Manager. As this new data center will also be used as our disaster recovery site, we were thinking we would need OTV up and running to extend the same VLANs (subnets) to each site; otherwise, in case of a disaster, a VM would come up on a VLAN that doesn't exist at site 2.

I'm also still looking into HSRP v2, as you mentioned; I have not had any maintenance windows available to me for testing.

Data center details: (2) 10-gig links to connect the two DCs with OTV. We're looking at the Nexus 7010 with an M1 card and 3 f2e cards, 5 fa2 cards, and a Sup2E, so a total of 4 7Ks; there will be two at each site, each running admin, core, OTV, and DC VDCs.

Your design looks good; however, I don't see any mention of block- or file-based storage replication. Your VMs will need access to your storage, either by some type of IP or SRDF replication, or perhaps something like a VPLEX. Keep in mind that if you are going to vMotion to the other data center and leverage the OTV tunnel, you'll need similar hardware on the other side or will need to leverage EVC.

The replication will be done with EMC over IP for the VMs. We've actually decided to change the M1 for 2 M2 cards per chassis. There will also be a 3560 1-gig switch for any 1G connections that we may need; I'm trying to get a full count from our server guys of any other items they may need connectivity for.

I'm having some trouble wrapping my head around the links we'll now need to add between our 6513s and the new 7010s at our corporate building. Today the non-VSS 6513s are connected to two different 55Ks via 2 links to each 55K, cross-connected. We're going to reduce that to one 10-gig link between each 6513 and 55K so we can use the remaining links for the new 7010s, but how do I keep spanning tree from becoming an issue again? Am I going to configure those ports as switchports in a port channel, assign different VLANs to each port channel, use SVI HSRP IP addresses on each chassis for the different VLANs, and let EIGRP do ECMP over them?

Darren

Your link is no longer active. Could you share the case study with me? Thanks.

You are correct, it is no longer on Cisco's site.  Send me your email and I will send it directly to you.

Please send it to jonathan.woloshyn@bell.ca

Thanks

Hello Darren,

I am going to deploy this scenario nowadays, please could you send me your case study?

Best regards.

Sorry, my email address is roberto.sanchez.martin@hotmail.com

 

thanks!

kjayachandran
Beginner

Antonio Morales

Did Cisco recommend any particular version of code for the 6500s? We are running 12.2(17r)SX7, RELEASE SOFTWARE (fc1), with PVST. Did you have to change to RPVST on the Cisco 6500s?

Francisco Caccam
Beginner

Hi Darren,

 

Kindly send me the PDF of the case study for reference for my upcoming implementation, as the link is no longer active. My email is francisco.caccam@hotmail.com.

 

Thanks Much,

Francis