
9600 Supervisor 1 vs 2

Chris McCann
Level 1

Hi all,

We are looking at replacing our 6509E.

Currently the 9606 looks like the right fit; however, we use a lot of 1G SFP ports on the current 6509E.

With this in mind, should we be using the Supervisor 1 cards? I am getting the impression that the Supervisor 2 doesn't support 1G SFP.

I am looking at this document:

Migrating from Cisco Catalyst 6500/6800 to 9600 Series Switches

Under port density it doesn't have a reference for 1G using the Supervisor 2.

I am assuming that the Supervisor 2 supersedes the Supervisor 1, but looking at the above it seems we may have to go with the older supervisor.

 

Any advice welcome,

Chris.


18 Replies

marce1000
Hall of Fame

 

 - Ref : https://www.cisco.com/c/dam/en/us/products/se/2024/9/Collateral/nb-06-cat9600-series-data-sheet-cte-en.pdf
  > 3 C9600X-SUP-2 requires SFP-1G-SX or SFP-1G-LR with compatible line cards (see Cisco TMG Matrix for details)

 M.



-- Each morning when I wake up and look into the mirror I always say, 'Why am I so brilliant?'
    The mirror will then always respond to me with, 'The only thing that exceeds your brilliance is your beauty!'

Hi Marce,

Seems to be 8 ports maximum per system, am I reading that wrong?

Cisco Catalyst 9600 Series Switches Data Sheet - Cisco

Chris.

 

 - @Chris McCann  Not sure on that one. As far as your original question is concerned, have a look at
                              https://tmgmatrix.cisco.com/?npid=5162 ; it seems you can't have 1G (only 10G-based uplinks
                              and/or SFPs).

  M.



-- Each morning when I wake up and look into the mirror I always say, 'Why am I so brilliant?'
    The mirror will then always respond to me with, 'The only thing that exceeds your brilliance is your beauty!'


@Chris McCann wrote:

Hi Marce,

Seems to be 8 ports maximum per system, am I reading that wrong?

Cisco Catalyst 9600 Series Switches Data Sheet - Cisco

Chris.


No, but assuming the datasheet isn't in error (I suspect it's not), it appears there's a logical restriction on using gig ports with the Sup2, both in the total allowed per chassis and in type (optical only, and only a couple of specific transceivers supported).

Why such a restriction?  Possibly more to do with marketing than technology.  I.e., the Sup2 is intended for much higher bandwidth (more than double the possible 100g ports, plus 200g and 400g support), so in Cisco's opinion it's possibly overkill for anything less than 5g ports, apart from the very limited support of some 1g.  After all, the Sup1 covers the "low end" bandwidth ranges, 100g and below (except for 50g).

At the end of your datasheet reference, under "Document History", take notice of this entry:

Added 1G support for C9600X using SFP-1G-SX/LH | All applicable areas | April 15, 2024

So, possibly someone had enough clout to get this added to the sup2.

Could there be some technical reason for this limitation, too?  Maybe; that might also explain why the Sup1 doesn't support 50g.

Someone who might know would be @Ramblin Tech .

Lastly, although Cisco recommends migration from a 6500/6800 to a 9600, well of course they do.  ; )

However, you mention you're replacing a 6509, but you didn't mention the sup.  You mentioned lots of gig ports, but didn't mention the actual line cards being used (and if DFC capable, whether that's being used).  You also didn't mention the "role" of the 6509, nor how well it appears to be supporting that role.  You didn't mention projected bandwidth upgrades.

So, although I suspect a 9600 will likely replace a 6509 quite well, it actually might not be the "optimal" replacement for you, considering needs and cost.  Don't overlook the 9400s and 9500s too.

Regarding the 9400s, Cisco positions them, I believe, as a 4500 series replacement, but remember, the 9400s have more capacity than a 4500, and potentially much more than your 6509.  For example, I recall the maximum card slot bandwidth of a 6509E is 80Gbps, while the 9400s have up to 480g.  Also, the 9400 series supports more line card slots than the 9600.


@Joseph W. Doherty wrote:
So, although I suspect a 9600 will likely replace a 6509 quite well, it actually might not be the "optimal" replacement for you, considering needs and cost.  Don't overlook the 9400s and 9500s too.

9600 is still cheaper to run than 9400, especially with a long running promo:  Save up to 16% on Catalyst 9600X next-generation line cards and supervisor


@Leo Laohoo wrote:

@Joseph W. Doherty wrote:
So, although I suspect a 9600 will likely replace a 6509 quite well, it actually might not be the "optimal" replacement for you, considering needs and cost.  Don't overlook the 9400s and 9500s too.

9600 is still cheaper to run than 9400, especially with a long running promo:  Save up to 16% on Catalyst 9600X next-generation line cards and supervisor


Possibly not the case if you also need to purchase a Sup1 to support more than 8 gig ports.  Also, the promo appears to cover just one (?) Sup2 with one or two line cards, at different discount percentages; if you want a 2nd sup and/or additional line cards, then depending on exactly what the whole bill-of-materials cost works out to, maybe not then either.

In any case, certainly worthwhile to work out what possible replacements might be, especially if you can take advantage of promo pricing.

I can say, sometimes final pricing can be somewhat different from what's expected until you work up complete bill-of-materials costs.  Also, don't forget to take into account feature licensing and/or support contract costs too, which might vary based on what components are used.

BTW, an example of how a total bill-of-materials cost can lead into some unexpected areas: decades ago, we were upgrading the user edge, which at the time meant FE user ports with gig uplinks to distro.  From a networking usage perspective, there was a good argument to be made that the FE/gig combo was still quite adequate for the user edge.  However, as the company was a leading international tech company, they wanted to move to gig to users.  That wasn't too much of a premium, and our copper cabling plant would handle it, but then the issue arose: is a gig uplink still okay, or do we now need to upgrade it to 10g, which of course meant distro upgrades too.  Also, at that time, 10g ports were very expensive, and we had the capability to easily expand distro gig to access.

So, it looked like we were going to EtherChannel access<>distro using gig.  To get near 10g, we were even going to use an 8-link EtherChannel.  From a port cost standpoint, this was still less expensive than moving to 10g.

However, when we worked out the complete bill of materials for both the gig and the 10g access<>distro upgrades, we found the cost of so many gig optical transceivers, 8x as many, exceeded the cost of 10g.

So, again, it can be very worthwhile to work out complete bill-of-materials costs for different, but like capacity, hardware.
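To put rough numbers on that transceiver arithmetic, here's a toy Python sketch; the prices and link count below are illustrative placeholders I've assumed, not the actual figures from that project.

# Rough sketch of the "8x the optics" arithmetic -- prices are illustrative
# placeholder assumptions, not actual figures from the project described above.
GIG_OPTIC_PRICE = 300       # hypothetical cost per 1G optical transceiver
TEN_GIG_OPTIC_PRICE = 1500  # hypothetical cost per 10G optical transceiver
LINKS_IN_BUNDLE = 8         # 8-link gig EtherChannel to approximate a 10G uplink

# Each link needs an optic at both the access end and the distro end.
gig_bundle_optics = LINKS_IN_BUNDLE * 2 * GIG_OPTIC_PRICE    # 16 optics
ten_gig_link_optics = 1 * 2 * TEN_GIG_OPTIC_PRICE            # 2 optics

print(f"8-link gig EtherChannel optics: ${gig_bundle_optics}")    # $4800
print(f"Single 10G link optics:         ${ten_gig_link_optics}")  # $3000

# Even where 10G ports themselves cost more, the sheer count of gig optics
# per uplink can tip the complete bill of materials in favor of 10G.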

Hey Joe; the Cat9K is not a product I worked with, so can’t say with any certainty as to why there is such a low number of 1G interfaces supported. What I saw with other high-performance products that supported a small number of “low speed” interfaces was that supporting them could be very inefficient. That is, a high-performance NPU has high-speed SERDES to interface to front-panel ports, with the SERDES often being much higher speed than the ports. The high-speed SERDES are muxed/demuxed by a gearbox-like IC to match the speeds of the front-panel ports.  In architecting the product, the designers make tradeoffs on how to allocate the fixed SERDES resources to the front panel, with one possible tradeoff being the optimization for numbers of lower speed vs higher speed interfaces. For example, a SERDES interface into the NPU might yield a very small number of low-speed interfaces if the system has been optimized for high-speed ports.
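Purely as an illustration of that lane-allocation tradeoff, here's a toy Python sketch; the lane counts and speeds are made-up assumptions, not actual Cat9K figures.

# Toy illustration of SERDES lane allocation -- all numbers are assumptions,
# not actual Cat9K Sup2 figures.
SERDES_LANES = 32        # hypothetical number of NPU-facing SERDES lanes
LANE_SPEED_GBPS = 50     # hypothetical speed per lane

def ports_from_lanes(port_speed_gbps, lanes_per_port):
    """Front-panel ports the lanes can yield, and the fraction of raw
    SERDES bandwidth those ports actually carry."""
    ports = SERDES_LANES // lanes_per_port
    used = ports * port_speed_gbps
    raw = SERDES_LANES * LANE_SPEED_GBPS
    return ports, used / raw

# Bundling lanes into high-speed ports uses the SERDES efficiently:
print(ports_from_lanes(100, lanes_per_port=2))  # -> (16, 1.0): 16 x 100G, fully used
# A 1G port still ties up a whole lane (via the gearbox path), so the same
# lanes yield few usable ports and strand most of the bandwidth:
print(ports_from_lanes(1, lanes_per_port=1))    # -> (32, 0.02): 32 x 1G, ~2% used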

Scanning the Cat9K data sheet, I see that the Sup1 uses the UADP 3.0 NPU, while the Sup2 uses the Q200 SiliconOne NPU with nearly 3x the throughput of the Sup1. It is plausible to me that the Sup2 has higher speed SERDES connectivity to its NPU and has been optimized for high speed interfaces (specifically 100G) at the expense of relatively poor support for 1G. If any Cat9K TMEs are lurking here, they can pile on with the actual answer.

Disclaimers: I am long in CSCO. Bad answers are my own fault as they are not AI generated.

Thanks,

That makes a lot of sense; you don't buy an F1 car and expect it to still be efficient at 50 MPH.

I have recommended we go with the Sup-1 card. Also, as a second option, stacking 9300s.

thanks,

Chris.

"I have recommended we go with the Sup-1 card. Also as a second option stacking 9300."

For a need of lots of gig ports, the Sup1 appears to be the only option for a 9600.  However, regarding stacking 9300s, although they too can provide lots of gig ports, a stack isn't an equivalent of a chassis.  A stack might be fine for your use case, but it may not be.  Again, you didn't go into details about your 6509 platform or how it's being used.

Also again, cannot say for a 9300 stack vs. a 9600 chassis, but historically, sometimes a chassis was less expensive than a stack.

Oh, cannot say whether it still applies to something like the 9k series, but, in my experience, historically, chassis devices seemed to have fewer software issues.  That's a point @Leo Laohoo might comment on, possibly both past and present.


@Joseph W. Doherty wrote:
chassis devices seemed to have fewer software issues

As one Cisco technical staff said to us back in 2022, "there is nothing wrong with the platform.  The disappointment is in the software." 

It does not matter if a 9600 has a single or quad supervisor card because all it takes is a bug to hit one card and take down the entire chassis.  And this has happened.  

"It does not matter if a 9600 has a single or quad supervisor card because all it takes is a bug to hit one card and take down the entire chassis. And this has happened."

@Leo Laohoo sure, no disagreement, but my observation was, at least historically, chassis software seemed less buggy than stackable switch software.  Just wondering if your historical experience agrees or not.

Also would be curious, if for current platforms, in regards to software stability, whether there appears to be any difference between stackable vs. single chassis.  (As I've been retired for some years now, I don't have any experience with later/current gen platforms.)

There are debates about stacking, for instance 9200/9300, versus VSS and the difference between them.

First off, the processes behind 9200/9300 stacking and VSS are totally different.

Next, I have witnessed, several times, an entire stack go down because the 9300 stacking process/daemon decides "not to play".  9200/9300 stacking brings a different level of "stability" vs a stand-alone switch.

VSS is no different.  For instance, we used to have two pairs of 9800-80 in a VSS.  Due to the stability, we had to rip them apart.  

Ah, so I think you're saying you've found stacking, whether a classical 3k stack or a VSS kind of variant, not as stable as a single device, including a chassis with multiple sups.