
9606 throughput

bluesea2010
Level 5

Hi,

 

As per the datasheet, each line card slot supports 2.4 Tbps. So how do we calculate the total throughput for the switch?

 

C9600-LC-48TX 960 Gbps
C9600-LC-48S 96 Gbps
C9600-LC-24C 2.4 Tbps

 

the datasheet says  "support a wired switching capacity of up to 25.6 Tbps" 

What does 'wired switching capacity' mean here, and what is behind the 25.6 Tbps figure?

Thanks

27 Replies

marce1000
VIP

 

 - While not a direct response, you may find this presentation useful:

                  https://www.ciscolive.com/c/dam/r/ciscolive/us/docs/2019/pdf/BRKARC-3010.pdf

 M.



-- Each morning when I wake up and look into the mirror I always say, 'Why am I so brilliant?'
    The mirror then always responds with, 'The only thing that exceeds your brilliance is your beauty!'

Hi @marce1000 

[Attached image: access layer.PNG]

What does the following mean?

 

Aggregated downlink BW:
• 384G with 384x 1G
• 960G with 384x 2.5G
Uplinks BW needed for 20:1 oversubscription from access to distribution
• 2x 10G for 384x 1G
• 2x 25G for 384x 2.5G

 

Thanks

 

 

 

Joseph W. Doherty
Hall of Fame

"What does it mean by wired switching capacity here, And what is the behine 25.6 Tbps"

I haven't looked at the 9606 specs, nor the link @marce1000 provided, but the 25.6 Tbps is likely the fabric's bandwidth capacity. If the series supports chassis with more than 6 slots (hence the "06" in 9606?), the additional fabric capacity would be there to support a larger chassis (i.e. more line cards).

Hi, @Joseph W. Doherty 

With the following number of devices, is it a good idea to go with a collapsed core, or should I choose a three-tier architecture?

PCs: 800
Phones: 600
Cameras: 450 (5 megapixel)
APs: 400
Wireless clients: 2000
IoT devices: 100

 

Thanks

@bluesea2010 ,

How many wiring closets / access switches do you have ? 

As for your original question: while the data sheet says 2.4Tbps per slot, this is counted in both directions (transmit and receive), which is common practice in the industry. That works out to 12 100Gbps ports at full line rate in both directions, or 48 25Gbps ports at full line rate in both directions.
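
To make the duplex arithmetic explicit, here is a quick sketch (plain Python, just multiplying out the port counts mentioned above and in the original post):

# Per-slot throughput when both directions (Tx + Rx) are counted,
# as is common practice in vendor data sheets.
def duplex_throughput_gbps(ports, speed_gbps):
    return ports * speed_gbps * 2  # x2 for transmit + receive

print(duplex_throughput_gbps(12, 100))  # 12x 100G ports -> 2400 Gbps = 2.4 Tbps per slot
print(duplex_throughput_gbps(48, 25))   # 48x 25G ports  -> 2400 Gbps = 2.4 Tbps per slot
print(duplex_throughput_gbps(48, 10))   # C9600-LC-48TX  -> 960 Gbps
print(duplex_throughput_gbps(48, 1))    # C9600-LC-48S   -> 96 Gbps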

To answer your question about the slide you posted, what it is saying is that the 9400 chassis can support up to 384 1Gbps / 2.5Gbps ports (8 x 48-port modules in a 10-slot chassis). Based on historical 20:1 oversubscription recommendations (worst-case) from access layer to distribution layer, that would mean 2 x 10Gbps uplinks from the 9410 to the 9606 (for 1Gbps ports) or 2 x 25Gbps uplinks (for 2.5Gbps ports). As soon as the aggregate downlink bandwidth crosses the 400Gbps mark, then you would want to have the 25Gbps uplinks to stay under the 20:1 oversubscription ratio.
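
The 20:1 figure from the slide can be checked the same way (a small sketch; the port counts and uplink options come straight from the slide):

# Oversubscription ratio = aggregate downlink bandwidth / aggregate uplink bandwidth.
def oversub_ratio(ports, port_gbps, uplinks, uplink_gbps):
    return (ports * port_gbps) / (uplinks * uplink_gbps)

print(oversub_ratio(384, 1.0, 2, 10))   # 384x 1G   with 2x 10G uplinks -> 19.2:1
print(oversub_ratio(384, 2.5, 2, 25))   # 384x 2.5G with 2x 25G uplinks -> 19.2:1
# 2x 10G at 20:1 covers up to 400 Gbps of downlink bandwidth, hence the 400Gbps mark above.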

Cheers,
Scott Hodgdon

Senior Technical Marketing Engineer

Enterprise Networking and Cloud Group

 

 

". . . historical 20:1 oversubscription recommendations . . ."

Historical, indeed, going back even to hub-based 10 Mbps Ethernet. This was also sometimes described as 24:1, which was effectively 24 user host ports to 1 uplink of the same bandwidth.

Also historical: server hosts might be oversubscribed at 8:1; again, 8 server host ports to 1 uplink.

For both user hosts and server hosts, your "mileage" might vary, i.e. it depends on how the hosts are using the network. (For example, years ago I supported a departmental LAN in a Fortune 100 company where our LAN carried more traffic than the corporate backbone. This was due to a 30-person CADD drafting section whose CAD package updated an image of an engineering design on the server with every change.)

In modern networks, although traffic has grown by leaps and bounds, user hosts with 100x faster gig connections generally don't have 100x the traffic.

Most server hosts with 10G connections also don't often have 1,000x the traffic.

So, if possible, you might want to gather some stats on your actual link loading for various hosts, which will allow you to better design for actual oversubscription ratios.
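
As a minimal sketch of how such measurements could feed an uplink estimate (the per-host average and headroom factor in the example are placeholders, not recommendations):

# Uplink bandwidth needed so measured busy-hour load fits within a chosen headroom factor.
def required_uplink_gbps(hosts, avg_mbps_per_host, headroom=2.0):
    return hosts * avg_mbps_per_host * headroom / 1000

# Example: 192 access ports averaging 5 Mbps each at the busy hour, with 2x headroom.
print(required_uplink_gbps(192, 5))  # -> 1.92 Gbps, comfortably inside a single 10G uplink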

 

Hi @Joseph W. Doherty and @Scott Hodgdon 

 

"So, if possible, you might want to gather some stats on your actual link loading, for various hosts, which will allow you to better design/allow for actual over-subscription ratios."

You mean the stats from the existing network? In the existing access layer, each IDF has at most 5 Catalyst 3850 switches in a stack, or a modular Catalyst 4507 switch. Per stack or modular chassis there are at most 20 cameras.

Endpoints include Cisco IP phones, computers, cameras, IoT devices, and access points (802.11ac). The plan now is to move to Wi-Fi 6 and to use multicast streaming on both wired and wireless; each camera will take at most 20Mbps.

The PCs' usual traffic is web browsing plus collaboration applications like Teams, Webex and Zoom, and we are planning to use VDI for end users.

Please help me size this for the next 5 years.
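
For reference, a rough baseline from the device counts above (only the 20Mbps per camera figure is a real requirement; the other per-device averages are placeholder guesses):

# Rough aggregate busy-hour estimate; all per-device averages except the
# 20 Mbps camera figure are placeholder assumptions, not measurements.
device_mbps = {
    "pc": (800, 5),          # placeholder average per PC (web, Teams/Webex/Zoom, VDI)
    "phone": (600, 0.1),     # placeholder average per IP phone
    "camera": (450, 20),     # 20 Mbps per camera, as stated above
    "wireless": (2000, 2),   # placeholder average per wireless client
    "iot": (100, 0.5),       # placeholder average per IoT device
}
total_gbps = sum(count * mbps for count, mbps in device_mbps.values()) / 1000
print(round(total_gbps, 1))  # -> roughly 17.1 Gbps aggregate under these assumptions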

Thanks 

 

Hi @Scott Hodgdon 

"As soon as the aggregate downlink bandwidth crosses the 400Gbps mark, then you would want to have the 25Gbps uplinks to stay under the 20:1 oversubscription ratio."

 

I have around 28 IDFs (each IDF has a stack of up to 4 Catalyst 3850s, or a modular switch (4507R+E or 4510R+E)).

 

Let's say I have two physical cores and all access switches connected to each core (to a single 10G module on the core).


Do you mean the downlink bandwidth (400 Gbps) from a single module, or the aggregate bandwidth across all modules?

 

Thanks

 

 

@bluesea2010 ,

The slide you referenced was about downlinks (1Gbps or 2.5Gbps) from the access layer to the clients, and what the uplinks from the access layer would need to be to meet the 20:1 oversubscription ratio from access to distribution. The 400Gbps I mentioned would be the aggregate maximum bandwidth from the access layer to the clients.

As was previously mentioned by @Joseph W. Doherty , it would be best to understand the actual bandwidth needs of your clients to properly plan for uplink sizing.

For 28 IDFs and about 5000 clients / devices, I think a collapsed core of 9606 is fine. If you want to add some additional resilience, you could run dual supervisors in each 9606. If that level of resilience is not needed, then you could even consider the 48-port 9500 that has 48x1/10/25Gbps ports and 4x40/100Gbps uplinks (9500-48Y4C) and is fully non-blocking. If you connect each IDF at 10/25Gbps to each 9500-48Y4C, you still have 20 available 1/10/25Gbps ports plus the 4x40/100Gbps uplinks.
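
A quick port-budget check on that suggestion (a sketch; the counts are the ones given above):

# 9500-48Y4C as described above: 48x 1/10/25G downlink ports plus 4x 40/100G uplinks.
downlink_ports = 48
idfs = 28
print(downlink_ports - idfs)   # -> 20 ports per core switch still free for growth

# Aggregate access-facing bandwidth if every IDF connects at 25G (duplex counted, as earlier):
print(idfs * 25 * 2)           # -> 1400 Gbps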

Cheers,
Scott Hodgdon

Senior Technical Marketing Engineer

Enterprise Networking and Cloud Group

Hi @Scott Hodgdon 

"For 28 IDFs and about 5000 clients / devices, I think a collapsed core of 9606 is fine."

I have only 2500 clients

In that case, do I need the 9606? Can you suggest a lower-end switch?

If I go for the 9500-48Y4C, can I do VSS?

Thanks

 

 


@bluesea2010 ,

If this were my network, I would look at 9500-48Y4C instead of 9606, and the 9500-48Y4C does support StackWise Virtual (same concept as VSS).

Cheers,
Scott Hodgdon

Senior Technical Marketing Engineer

Enterprise Networking and Cloud Group

Hi @Scott Hodgdon 

If I go for the 9500-48Y4C and later need to increase the number of IDFs, the ports won't be enough. In that case, what is the solution?

Thanks

@bluesea2010 ,

You would need to add more than 20 IDFs over the life of the network for that to happen. Do you realistically foresee that?

If there is a strong possibility that the IDF count might eventually exceed the number of ports on the 9500-48Y4C, then you have a few options:

  • Go with the 9606 and give yourself easier room for growth through the addition of modules. If you do this, see if there is still a bundle for the 9606 available. There used to be one that essentially gave a free module in the system.
  • If you have the 9500-48Y4C in SWV mode, then you could add another pair in SWV mode as a second core and cross-connect the new pair with the existing pair. You could put half your IDFs on one pair and half on the other pair. As far as uplinks, just make sure the next-hop device after the core has enough interfaces to take connections from both SWV pairs.
  • If your next-hop device does not have enough links to handle two SWV pairs as a core, then you could still get another pair of 9500-48Y4C as a distribution layer for the IDFs beyond the first 48 and connect that new pair to the existing 9500-48Y4C SWV core, with the SWV core then connecting to whatever its peer device is outside the campus.

What is your planned uplink from your core to its peer device outside the campus?

Cheers,
Scott Hodgdon

Senior Technical Marketing Engineer

Enterprise Networking and Cloud Group

Hi @Scott Hodgdon 

 

"What is your planned uplink from your core to its peer device outside the campus?"

Sorry, I did not understand "outside the campus". Do you mean the WAN and internet connection?

If it is the internet, the current traffic is 450 Mbps; it could go up to maybe 800 Mbps.

Thanks 
