12-19-2021 04:24 AM
Hi,
As per the datasheet, each line card slot supports 2.4 Tbps. So how do we calculate the total throughput for the switch?
C9600-LC-48TX 960 Gbps
C9600-LC-48S 96 Gbps
C9600-LC-24C 2.4 Tbps
The datasheet says it can "support a wired switching capacity of up to 25.6 Tbps".
What does wired switching capacity mean here, and what is behind the 25.6 Tbps figure?
Thanks
12-19-2021 05:04 AM
Whilst not a direct response, you may find this presentation useful:
https://www.ciscolive.com/c/dam/r/ciscolive/us/docs/2019/pdf/BRKARC-3010.pdf
M.
12-19-2021 10:39 AM
Hi @marce1000
What does the below mean?
Aggregated downlink BW:
• 384G with 384x 1G
• 960G with 384x 2.5G
Uplinks BW needed for 20:1 oversubscription from access
to distribution
• 2x 10G for 384x 1G
• 2x 25G for 384x 2.5G
Thanks
12-19-2021 07:35 AM
"What does it mean by wired switching capacity here, And what is the behine 25.6 Tbps"
I haven't looked at the 9606's specs, nor the link @marce1000 provided, but the 25.6 Tbps is likely the fabric's bandwidth capacity. If the series supports a chassis with more than 6 slots (the "06" in 9606?), the additional fabric capacity would be there for a larger chassis (i.e. more line cards).
12-19-2021 10:44 AM
With the following number of devices, is it a good idea to go with a collapsed core, or should I choose a three-tier architecture?
Number of PCs: 800
Number of phones: 600
Number of cameras: 450 (5 megapixel)
Number of APs: 400
Wireless clients: 2,000
IoT devices: 100
Thanks
12-19-2021 11:21 AM
How many wiring closets / access switches do you have?
As for your original question, while the data sheet says 2.4 Tbps per slot, this is in both directions (transmit and receive); it is common practice in the industry to quote both. That would be 12x 100Gbps ports at full line rate in both directions, and 48x 25Gbps ports at full line rate in both directions.
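As a quick sanity check on that per-slot math, here is a minimal sketch (plain Python, just arithmetic; the port counts and speeds are the ones quoted above) showing how counting both directions gets you to 2.4 Tbps:

```python
# Per-slot capacity check, counting both transmit and receive.
# 1 Tbps = 1000 Gbps for this back-of-the-envelope math.

def slot_capacity_tbps(ports: int, speed_gbps: float, directions: int = 2) -> float:
    """Aggregate slot bandwidth in Tbps for a given port count and speed."""
    return ports * speed_gbps * directions / 1000

print(slot_capacity_tbps(12, 100))  # 12x 100G, Tx + Rx -> 2.4
print(slot_capacity_tbps(48, 25))   # 48x 25G,  Tx + Rx -> 2.4
```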
To answer your question about the slide you posted, what it is saying is that the 9400 chassis can support up to 384 ports of 1Gbps / 2.5Gbps (8x 48-port modules in a 10-slot chassis). Based on historical 20:1 oversubscription recommendations (worst case) from the access layer to the distribution layer, that would mean 2x 10Gbps uplinks from the 9410 to the 9606 (for 1Gbps ports) or 2x 25Gbps uplinks (for 2.5Gbps ports). As soon as the aggregate downlink bandwidth crosses the 400Gbps mark, you would want the 25Gbps uplinks to stay under the 20:1 oversubscription ratio.
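To make the 20:1 sizing concrete, here is a minimal sketch of the same arithmetic (plain Python; the 384-port counts and the 20:1 ratio come from the slide, everything else is illustrative):

```python
# Uplink sizing for a target access-to-distribution oversubscription ratio.

def required_uplink_gbps(ports: int, port_speed_gbps: float, ratio: float = 20.0) -> float:
    """Minimum aggregate uplink bandwidth (Gbps) to stay at or under ratio:1."""
    downlink_gbps = ports * port_speed_gbps
    return downlink_gbps / ratio

print(required_uplink_gbps(384, 1.0))  # 384G downlink -> 19.2G needed, so 2x 10G uplinks suffice
print(required_uplink_gbps(384, 2.5))  # 960G downlink -> 48G needed, so 2x 25G uplinks are required
```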
Cheers,
Scott Hodgdon
Senior Technical Marketing Engineer
Enterprise Networking and Cloud Group
12-19-2021 01:03 PM
". . . historical 20:1 oversubscription recommendations . . ."
Historical, indeed, going back to even 10 Mbps Ethernet, hub based. This was also sometimes described as 24:1, which effectively was 24 user host ports to 1 uplink of the same bandwidth.
Also historically, server hosts might be oversubscribed at 8:1; again, 8 server host ports to 1 uplink.
For both user hosts and server hosts, your "mileage" may vary, i.e. it depends on how the hosts are using the network. (For example, years ago I supported a departmental LAN, in a Fortune 100 company, where our LAN carried more traffic than the corporate backbone. This was due to a 30-person CAD drafting section whose CAD package constantly updated an image of an engineering design on the server with every change.)
In modern networks, although network traffic has grown by leaps and bounds, user hosts, with 100x faster gig connections, generally don't have 100x the traffic.
Most server hosts, with 10G connections, also often don't have 1,000x the traffic.
So, if possible, you might want to gather some stats on your actual link loading for various hosts, which will allow you to better design for, and allow for, actual oversubscription ratios.
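If it helps, a minimal sketch of that idea (plain Python; the per-port peak figures are made-up placeholders, not measurements): sum the measured busy-hour peaks on the access ports and compare them against the uplink capacity to see the oversubscription you would actually experience.

```python
# Effective oversubscription from measured per-port peak loads (all values in Mbps).
# The sample peaks below are purely illustrative; substitute your SNMP/NetFlow data.

def effective_oversubscription(peak_loads_mbps: list[float], uplink_mbps: float) -> float:
    """Ratio of summed measured peaks to uplink capacity (busy-hour worst case)."""
    return sum(peak_loads_mbps) / uplink_mbps

measured_peaks = [35, 120, 8, 60, 250, 40]  # hypothetical busy-hour peaks per host port
ratio = effective_oversubscription(measured_peaks, uplink_mbps=10_000)  # 10G uplink
print(f"{ratio:.2f}:1")  # ~0.05:1 for these placeholder numbers
```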
12-22-2021 06:49 PM
Hi @Joseph W. Doherty and @Scott Hodgdon
"So, if possible, you might want to gather some stats on your actual link loading, for various hosts, which will allow you to better design/allow for actual over-subscription ratios."
You mean the stats from the existing network? In the existing access layer, each IDF has at most 5 Catalyst 3850 switches (stacked), and there are also Catalyst 4507 switches. Any one stack or modular chassis has at most 20 cameras,
along with Cisco IP phones, computers, IoT devices, and access points (802.11ac). The plan now is to use Wi-Fi 6 and multicast streaming on both wired and wireless; each camera will take a maximum of 20 Mbps.
The PCs' usual traffic is to the web and to collaboration applications like Teams, Webex, and Zoom, and we are planning to use VDI for the end users.
Please help me size this for the next 5 years.
Thanks
12-19-2021 06:25 PM
"As soon as the aggregate downlink bandwidth crosses the 400Gbps mark, then you would want to have the 25Gbps uplinks to stay under the 20:1 oversubscription ratio."
I have around 28 IDFs (each IDF has a stack of up to 4 Catalyst 3850s or a modular switch, a 4507R+E or 4510R+E).
Let's say I have two physical cores and all access switches are connected to each core (to a single 10G module on the core).
Do you mean the downlink bandwidth (400 Gbps) from a single module, or the aggregate bandwidth across all modules?
Thanks
12-19-2021 07:32 PM
The slide you referenced was about downlinks (1Gbps or 2.5Gbps) from the access layer and what the uplinks from the access layer would need to be to meet the 20:1 oversubscription ratio from access to distribution. The 400Gbps I mentioned would be the aggregate maximum bandwidth from the access layer to the clients.
As was previously mentioned by @Joseph W. Doherty, it would be best to understand the actual bandwidth needs of your clients to properly plan uplink sizing.
For 28 IDFs and about 5000 clients / devices, I think a collapsed core of 9606 is fine. If you want to add some additional resilience, you could run dual supervisors in each 9606. If that level of resilience is not needed, then you could even consider the 48-port 9500 that has 48x 1/10/25Gbps ports and 4x 40/100Gbps uplinks (9500-48Y4C) and is fully non-blocking. If you connect each IDF at 10/25Gbps to each 9500-48Y4C, you still have 20 available 1/10/25Gbps ports plus the 4x 40/100Gbps uplinks.
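For what it's worth, a minimal sketch of that port budget (plain Python; the 28 IDFs and the 48-port count come from this thread, and the assumption that each IDF homes in at 10G is only for illustration):

```python
# Port and bandwidth budget for one 48-port collapsed-core switch.

TOTAL_ACCESS_PORTS = 48   # 1/10/25G ports on the 9500-48Y4C
IDF_COUNT = 28            # IDFs mentioned in this thread
IDF_UPLINK_GBPS = 10      # assumed speed at which each IDF connects to this core

spare_ports = TOTAL_ACCESS_PORTS - IDF_COUNT
aggregate_idf_gbps = IDF_COUNT * IDF_UPLINK_GBPS

print(f"Spare access-facing ports: {spare_ports}")               # 20
print(f"Aggregate IDF-facing bandwidth: {aggregate_idf_gbps}G")  # 280G on a non-blocking switch
```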
Cheers,
Scott Hodgdon
Senior Technical Marketing Engineer
Enterprise Networking and Cloud Group
12-20-2021 06:47 PM
For 28 IDFs and about 5000 clients / devices, I think a collapsed core of 9606 is fine.
I have only 2500 clients
In that case, do I need the 9606? Can you suggest a lower-end switch?
If I go for the 9500-48Y4C, can I do VSS?
Thanks
12-21-2021 07:35 AM
If this were my network, I would look at the 9500-48Y4C instead of the 9606, and the 9500-48Y4C does support StackWise Virtual (same concept as VSS).
Cheers,
Scott Hodgdon
Senior Technical Marketing Engineer
Enterprise Networking and Cloud Group
12-21-2021 06:46 PM
If I go for the 9500-48Y4C and later want to increase the number of IDFs, then the ports won't be enough. In that case, what is the solution?
Thanks
12-22-2021 04:56 AM
You would need to add more than 20 IDFs over the life of the network before the ports run out. Do you realistically foresee that happening?
If there is a strong possibility that the IDF count might eventually exceed the number of ports on the 9500-48Y4C, then you have a few options.
What is your planned uplink from your core to its peer device outside the campus?
Cheers,
Scott Hodgdon
Senior Technical Marketing Engineer
Enterprise Networking and Cloud Group
12-22-2021 06:52 PM - edited 12-22-2021 06:53 PM
What is your planned uplink from your core to its peer device outside the campus?
Sorry, I did not understand "outside the campus". Do you mean the WAN and internet connection?
If it is the internet, the current traffic is 450 Mbps, and it can go up to maybe 800 Mbps.
Thanks