1353 Views · 4 Helpful · 19 Replies

6500 to Nexus Design Question

shamg1974
Level 1

Greetings. Basic question here: I have two 6500 core switches set up with HSRP; however, they are not VSS. They are two separate switches. Downstream I need to connect two Nexus 9Ks.

My plan:

On the Core side, two ports in chassis 1 set up as a LACP port-channel, one connecting to each Nexus 9K. Two ports in chassis 2, same setup.

On the Nexus side, two ports (one going to each chassis) set up as a vPC.

My question is: can I do that on the 6500? A port-channel to separate 9Ks? Or do both connections from Core1 need to go to Nexus 9K-1, and both from Core2 to 9K-2?

If I have to have both links from Core1 to Nexus1 and Core2 to Nexus2, vPC is not possible because the Cores are separate switches and not VSS.

Correct?

 

19 Replies

M02@rt37
VIP

Hello @shamg1974 

Right! Since your 6500s are not configured as a VSS pair, they are operating as independent switches. This means you cannot create a single port-channel on a 6500 that spans both Nexus 9Ks. Each port-channel on a 6500 must terminate on a single device. If you attempt to configure a port-channel from a 6500 to both Nexus switches, the links will not form correctly, because the 6500 does not support multi-chassis EtherChannel without VSS.

From my point of view, the correct approach in this case is to configure separate port-channels from each 6500. Core1 should have a port-channel going to Nexus9K-01, and Core2 should have a port-channel going to Nexus9K-02. On the Nexus side, you would simply treat these as separate uplinks without vPC. HSRP should be used between the 6500s to provide gateway redundancy, ensuring that if one core switch fails, the other takes over.

If you want to use vPC on the Nexus 9Ks, you would need to convert your 6500s to VSS first. VSS would make them act as a single logical switch, allowing you to create a single port-channel spanning both cores and connect it to a vPC on the Nexus side. 
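A rough configuration sketch of the separate-uplink design described above; all interface and channel-group numbers are hypothetical, and note that a 6500 typically needs the dot1q trunk encapsulation set explicitly:

```
! Core1 (6500, IOS): plain LACP port-channel toward Nexus9K-01 only
interface range TenGigabitEthernet1/1 - 2
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 10 mode active

! Nexus9K-01 (NX-OS): matching port-channel, no vPC involved
feature lacp
interface Ethernet1/1-2
  switchport mode trunk
  channel-group 10 mode active
```

Core2 and Nexus9K-02 would mirror this with their own port-channel, while HSRP between the two cores provides the gateway redundancy.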

---

Interesting post here:

https://community.cisco.com/t5/switching/vpc-on-nexus-5000-with-catalyst-6500-no-vss/td-p/1618853

 

Best regards
.ı|ı.ı|ı. If This Helps, Please Rate .ı|ı.ı|ı.

shamg1974
Level 1

Thanks very much, M02@rt37! You explained it perfectly!

 

shamg1974
Level 1

One more thing for clarity: on the Nexus side I would have two links from Nexus 1 going to Core 1 and two from Nexus 2 going to Core 2, just basic LACP port-channels, correct?

@shamg1974 

On the Nexus side, you cannot bundle links going to different 6500s into a single LACP port-channel. Instead, you must treat them as separate L2 uplinks, each forming its own individual port-channel.

Each Nexus switch will have a separate, independent port-channel to a single 6500. There will be no vPC on the Nexus side, because vPC requires a multi-chassis LACP peer on the upstream side, which your 6500s do not provide...

Best regards
.ı|ı.ı|ı. If This Helps, Please Rate .ı|ı.ı|ı.

Unfortunately, I no longer have access to a lab where I could easily test this; however, the thread "vPC aka Virtual PortChannel" appears to believe it can be done, at least back in 2016.

Of course, it's better to rely on Cisco documentation, such as the 12/2024 Understand Virtual Port Channel (vPC) Enhancements, which appears to have different examples of ordinary (?) switches, hosts, and routers, with port-channel configs on them, connected to a pair of Nexus switches using vPC.

There's also the 7/2011 Cisco document Configuring Virtual Port Channels, whose first example, in the vPC overview, also appears to use ordinary (?) L2 switches. That document, under vPC Guidelines and Limitations, states: "You can connect a pair of Cisco Nexus 5000 Series switches or a pair of Cisco Nexus 5500 Series switches in a vPC directly to another switch or to a server."

Also, there's this 11/2022 Cisco Press excerpt, Port Channels and vPCs, which has:

Virtual Port Channels

A virtual port channel (vPC) allows links that are physically connected to two different Cisco Nexus 7000 or 9000 Series devices to appear as a single port channel by a third device. The third device can be a switch, server, or any other networking device that supports port channels. A vPC can provide Layer 2 multipathing, which allows you to create redundancy and increase the bisectional bandwidth by enabling multiple parallel paths between nodes and allowing load-balancing traffic. You can use only Layer 2 port channels in the vPC. You configure the port channels by using LACP or static no protocol configuration.

Figure 4-3 shows the vPC physical and logical topology.

 
JosephWDoherty_1-1741128738558.png
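As background to the excerpt above, vPC rests on a vPC domain, a peer-keepalive path, and a peer-link between the two Nexus switches; a minimal NX-OS skeleton (the domain number and addresses here are hypothetical) looks roughly like:

```
feature vpc
feature lacp

vpc domain 10
  peer-keepalive destination 10.1.1.2 source 10.1.1.1

! Inter-Nexus peer-link carrying the vPC VLANs
interface port-channel 100
  switchport mode trunk
  vpc peer-link
```

Individual vPCs toward downstream devices are then defined on top of this domain.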

Possibly this feature, of which I was aware years ago, is no longer supported on the Nexus 9K with current software.

Two questions come to mind, as you mention a requirement for multi-chassis LACP. As at least one of the documents notes, EtherChannels (at least years ago) did not require LACP, so perhaps this is another new requirement? Second, wouldn't the multi-chassis LACP be on the Nexus side, as the 6500s are not doing multi-chassis LACP, as they might with VSS?

I also realize that, typically, Nexus seldom used vPC without a FEX, but as the above Cisco documents show, at least up until 12/2024, individual switches, routers, and other hosts could form an EtherChannel/port-channel with a pair of Nexus switches.

Hopefully, M02@rt37, you'll be able to clarify the situation.

With a single switch/router/host, your links make sense. But I have two core switches that are separate, not VSS, i.e., not one logical switch. It will have to be two links bundled from one Nexus to one 6500, and another two from the second Nexus to the second 6500. No vPC available.


@shamg1974 wrote:

With a single switch/router/host, your links make sense. But I have two core switches that are separate, not VSS, i.e., not one logical switch. It will have to be two links bundled from one Nexus to one 6500, and another two from the second Nexus to the second 6500. No vPC available.


No, that's not what I have in mind, nor as I understood your OP, which has . . .

"On the Core side, two ports in chassis 1 setup as a LACP port-channel connecting to each Nexus 9K. Two ports in chassis 2 same setup. "

Right.  So each 6500 chassis has a single PC, comprising two links, a link to each Nexus.

"On the Nexus side, two ports (1 going to each chassis) setup as a vPC."

Right. On the Nexus pair, you would have two vPCs (shared by the Nexus pair), one connecting to chassis 1 and a second connecting to chassis 2.

Recap . . .

From 6500 #1, two links defined as a PC, one of those links to each Nexus; those links, on the Nexus pair, defined as a vPC, say vPC 1.

From 6500 #2, two links defined as a PC, one of those links to each Nexus; those links, on the Nexus pair, defined as a vPC, say vPC 2.

Each 6500 remains standalone, but each with a single PC to the Nexus pair.  On the Nexus pair, two vPCs, one to each 6500.

(If it helps any, for Etherchannel purposes, think of the Nexus pair as a single chassis, and a line card taking the place of a separate Nexus.  How would you Etherchannel between such a device and your two 6500 chassis?)
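Sketched as configuration, and assuming the Nexus pair already has its vPC domain and peer-link up, the recap above might look like this (all interface, channel, and vPC numbers are hypothetical):

```
! 6500 #1 (IOS): one PC whose two member links go to different Nexus switches
interface range TenGigabitEthernet1/1 - 2
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode active

! Nexus-1 (NX-OS); Nexus-2 carries the same config for its member link
interface Ethernet1/1
  switchport mode trunk
  channel-group 1 mode active

interface port-channel 1
  vpc 1    ! same vPC number on both Nexus peers

! The links from 6500 #2 get their own bundle, e.g. port-channel 2 / vpc 2
```

To the 6500, its port-channel is an ordinary LACP bundle; only the Nexus pair knows the two member links land on different chassis.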

The documentation I posted shows individual third devices configured as above. Of course, they only show one such device, with a PC on the non-Nexus side and a vPC on the Nexus pair, but you can have multiple devices, each needing its own vPC.

"My question is can I do that on the 6500? port-channel to separate 9k's?"

I believe the answer is yes, but the Nexus link pair would need a different vPC for each core 6500 chassis.

"Or do both connections from Core1 need to go to Nexus 9k-1? and two from Core2 to 9k-2?"

You could do that, not using vPC, but each Nexus (chassis as a whole) becomes a single point of failure (as is each 6500).

"If I have to have both links from Core1 to Nexus1 and Core2 to Nexus2, vPC is not possible because the Cores are separate switches and not VSS."

In that configuration, yes, because you need to span vPC across both Nexus.

If you had VSS you could have a single vPC on the Nexus pair and a single PC on the 6500 pair.  But, without VSS, you need a logical interface between the Nexus pair (i.e. vPC) and a logical interface on EACH 6500 (i.e. PC [known only to that 6500]).

If you look at my prior reply, where I posted the diagram from the Cisco Press, its Switch 3 would be one of your 6500s.  You would do likewise for the other 6500.

Of course, the replies M02@rt37 posted, if I understand them correctly, say all the information I posted isn't possible. Possibly that's true with the latest Nexus 9Ks, but it does appear contrary to what's been available for years.

It's also possible we're "not on the same page". Again, what your OP proposed doing seems doable based on the information I posted.

Hello @Joseph W. Doherty 

Thanks for that clarification:

NX1 <vPC1> 6500 A <vPC1> NX2

NX1 <vPC2> 6500 B <vPC2> NX2

In this case, two rules:

6500-A and 6500-B are separate devices! They cannot terminate the same vPC.

Nexus requires a single peer on each vPC! The two 6500s must act as one logical switch.

--

Right. Also, FEX is no longer available or supported by Cisco. From a design perspective, we use "vPC back-to-back" or "double-sided vPC" to connect two Nexus switches in one vPC domain to another vPC domain.

 

 

 

Best regards
.ı|ı.ı|ı. If This Helps, Please Rate .ı|ı.ı|ı.

M02@rt37 

"6500-A and 6500-B are separate devices! They cannot terminate the same vPC."

Agreed!  Don't believe I "said" otherwise.

"Nexus requires a single peer on each vPC!"

Agreed, again!  Also, again, don't believe I "said" otherwise.

"The two 6500s must act as one logical switch."

Ah, now that's where we disagree.  They are not one logical switch now.  Why must they be one logical switch?

"Right. Also, FEX is no longer available or supported by Cisco. From a design perspective, we use 'vPC back-to-back' or 'double-sided vPC' to connect two Nexus switches in one vPC domain to another vPC domain."

Interesting, I didn't know FEX was no longer supported, but I did notice the latest Nexus designs seem to use only Nexus switches in a leaf/spine architecture. However, I thought I saw the capability for hosts to be connected to two different leaf Nexus switches.

So, possibly, if such an architecture is now a requirement for non-Nexus devices to obtain redundancy, an ACI fabric with one or two spine Nexus switches would be required, although that implies you would be unable to connect anything other than a leaf Nexus to a spine Nexus, which, in the past, I don't recall being a requirement of the ACI architecture (which I supported in the past). But, other than possibly requiring more than two Nexus switches, I thought I saw vPC shown being used from leaf Nexus switches to a non-Nexus device using a port-channel.

At the moment I'm responding from my phone, but hopefully later today I will further review the two vPC design technologies you mention. I did notice them, but as they seemed to pertain only to Nexus-to-Nexus connections, I didn't study them. Without FEX, I can see them being critically important, but even when FEX were supported, FEX were not needed to connect non-Nexus hosts, including switches and routers, all using vPC.

 

 

mmm I see now.

6500-01 - int a and b on Po/LACP towards Nexus-01 and Nexus-02 _ vPC x on Nexus

6500-02 - int a and b on Po/LACP towards Nexus-01 and Nexus-02 _ vPC y on Nexus

Is that what you're saying?

M02rt37_0-1741172087934.png

 

---

Modern DC architectures tend toward spine/leaf, so FEX are not necessary...

 

Best regards
.ı|ı.ı|ı. If This Helps, Please Rate .ı|ı.ı|ı.

M02@rt37 

"Is that what you're saying?"

Yes, exactly!!!

So, are you saying such a topology cannot work?

"Modern DC architectures tend toward spine/leaf, so FEX are not necessary."

I agree.

"Right, also, FEX is no longer available or supported by Cisco."

But I was going to question that prior statement, as a Cisco FEX support matrix appears to indicate otherwise.

Great !!!

---

Since 09/2022, the Nexus 2000 FEX has been EOL.

Partners give me an EOS date of 09/2027:

https://www.cisco.com/c/en/us/support/switches/nexus-2000-series-fabric-extenders/series.html

Best regards
.ı|ı.ı|ı. If This Helps, Please Rate .ı|ı.ı|ı.

M02@rt37 

"Great !!!"

Unsure how to understand that. Great that you understand what I had in mind all along, or great that you agree it should work?

"Since 09/2022 FEX_Nexus 2000 is EOL"

Ah, you had one specific FEX in mind, not all FEX, correct?

As I wrote earlier, I suspected we may not have been "on the same page".

@shamg1974 

Continuing, if we are "on the same page": is the (nice) diagram M02@rt37 recently posted what your OP was also describing and asking whether it's possible?

As far as I know, FEX technology (all N2K series) is EOL/EOS, as I mentioned. Otherwise, what do you mean by "all FEX"? Cisco Fabric Extender => N2K series only, agree?

Some Nexus 9Ks can be configured to behave like a "pseudo" FEX, but this is not a native and/or standard way...

Best regards
.ı|ı.ı|ı. If This Helps, Please Rate .ı|ı.ı|ı.