379 Views · 0 Helpful · 3 Replies

Problems with switching capacity

Kevin Wang
Level 1

Hi, I'm quite confused about switching capacity right now.

Question 1: If there were 100 1Gbps links, could I use a Catalyst 3850 stack as an aggregation device? Its spec says its stacking bandwidth is 480Gbps.

Question 2: 16 10Gbps links connect to a C6500. Could I use four 4-port 10GE line cards to achieve 1:1 oversubscription?

-------------------------------------------

Sorry for my bad English.... 

 

Thanks

3 Replies

Reza Sharifi
Hall of Fame

Hi,

Question 1: If there were 100 1Gbps links, could I use a Catalyst 3850 stack as an aggregation device? Its spec says its stacking bandwidth is 480Gbps.

480 Gbps is the total stacking bandwidth for 9 switches stacked. For 100 1Gig links, you don't need 9 switches; you can use 3 or 4 switches and still get line rate per port. Now, if the uplinks from your aggregation switches are only 1 or 10Gig, then that is your limitation.
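As a quick sanity check on the arithmetic above (a sketch; the 48-port switch model and the assumption that all access traffic crosses the stack ring are mine, not stated in the thread):

```python
import math

# Back-of-envelope check: how many stacked 3850s for 100 access ports,
# and does the aggregate traffic fit within StackWise-480?
ACCESS_LINKS = 100        # 1 Gbps links to aggregate
LINK_GBPS = 1
PORTS_PER_SWITCH = 48     # assuming 48-port 3850 models
STACK_RING_GBPS = 480     # StackWise-480 rated stack bandwidth

switches_needed = math.ceil(ACCESS_LINKS / PORTS_PER_SWITCH)
aggregate_gbps = ACCESS_LINKS * LINK_GBPS

print(switches_needed)                      # 3
print(aggregate_gbps <= STACK_RING_GBPS)    # True: 100 < 480
```

So from a port-count and raw-bandwidth view alone, a 3-member stack is enough; whether the stack ring actually delivers that in practice is the point debated later in the thread.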

Question 2: 16 10Gbps links connect to a C6500. Could I use four 4-port 10GE line cards to achieve 1:1 oversubscription?

Yes, you would need to put the ports in performance mode.

HTH
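A rough check of the 1:1 figure, assuming a Sup720-class chassis where each fabric-enabled slot gets dual 20 Gbps fabric channels (that per-slot figure is my assumption, not stated in the thread):

```python
# Sanity check: is a 4-port 10GE card fabric-oversubscribed?
# Assumption: dual 20 Gbps fabric channels per slot (Sup720-class 6500).
PORTS_PER_CARD = 4
PORT_GBPS = 10
FABRIC_GBPS_PER_SLOT = 2 * 20

front_panel_gbps = PORTS_PER_CARD * PORT_GBPS        # 40 Gbps
oversub_ratio = front_panel_gbps / FABRIC_GBPS_PER_SLOT
cards_needed = 16 // PORTS_PER_CARD                  # 16 links / 4 ports

print(oversub_ratio)    # 1.0 -> 1:1, no fabric oversubscription
print(cards_needed)     # 4 line cards
```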

 

 

480 Gbps is the total stacking bandwidth for 9 switches stacked. For 100 1Gig links, you don't need 9 switches; you can use 3 or 4 switches and still get line rate per port.

So I couldn't get line rate among these 100 1G links if I stack only 3 switches, because a 3-switch stack's bandwidth is not the full 480 Gbps?

This is what I want:

 


Question 1: If there were 100 1Gbps links, could I use a Catalyst 3850 stack as an aggregation device? Its spec says its stacking bandwidth is 480Gbps.

480 Gbps is the total stacking bandwidth for 9 switches stacked. For 100 1Gig links, you don't need 9 switches; you can use 3 or 4 switches and still get line rate per port. Now, if the uplinks from your aggregation switches are only 1 or 10Gig, then that is your limitation.

My understanding of StackWise-480 is that the 480 Gbps figure represents duplex bandwidth across a pair of ring ports. Or, counted the way an "ordinary" Ethernet port is, each ring port would be 120 Gbps.
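The per-ring-port figure works out as follows (a minimal sketch, assuming the 480 Gbps rating is duplex bandwidth across a pair of ring ports, as described above):

```python
# Derive the per-ring-port figure from the StackWise-480 rating,
# assuming 480 Gbps is duplex bandwidth across a pair of ring ports.
rated_duplex_gbps = 480
per_direction_gbps = rated_duplex_gbps / 2    # 240 Gbps one way
per_ring_port_gbps = per_direction_gbps / 2   # 2 ring ports per member

print(per_ring_port_gbps)   # 120.0, counted like an ordinary Ethernet port
```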

StackWise-480, like StackWise Plus before it, supports spatial reuse, so as you add stack members, you increase aggregate ring bandwidth. However, since a stack member's traffic may need to transit multiple ring segments, you shouldn't assume all the aggregate bandwidth is available. Cisco says "The spatial-reuse technology enables multipath parallel switching across each stack ring to double the throughput." Cisco appears to assume traffic will only need (on average) to transit half the stack members.

From a bandwidth view alone, two such stack ring ports should handle 100 gig links, but other factors might/should also be considered when deciding whether these switches are suitable for the distribution role.

Question 2: 16 10Gbps links connect to a C6500. Could I use four 4-port 10GE line cards to achieve 1:1 oversubscription?

Yes, you would need to put the ports in performance mode.

BTW, I don't believe performance mode applies to 6704s (i.e., I don't believe it can be configured, nor should it be needed).

I recall only the (discontinued) 6702 offered true wire rate (with Sup720s), but I also (vaguely) recall the 6704 doesn't offer 100% of the bandwidth needed, though it comes very close. Also, when using the 6704, and especially all four 10G ports, CFC and/or DFC3 PPS capacity might be a bottleneck; the DFC4 has the additional PPS capacity needed, I believe. (Also, with 6704s, you'll want the whole chassis in fabric mode.)
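To see why PPS capacity matters here, the worst-case packet rate for four 10GE ports can be estimated with minimum-size 64-byte frames (each frame on the wire also carries the standard 8-byte preamble and 12-byte inter-frame gap):

```python
# Worst-case packets-per-second for four 10GE ports at 64-byte frames.
FRAME_B = 64
OVERHEAD_B = 8 + 12                           # preamble + inter-frame gap
BITS_PER_FRAME = (FRAME_B + OVERHEAD_B) * 8   # 672 bits on the wire
PORT_BPS = 10e9

pps_per_port = PORT_BPS / BITS_PER_FRAME      # ~14.88 Mpps
pps_four_ports = 4 * pps_per_port             # ~59.52 Mpps

print(round(pps_per_port / 1e6, 2))           # 14.88
print(round(pps_four_ports / 1e6, 2))         # 59.52
```

Roughly 60 Mpps for a fully loaded card is what a forwarding engine would need to sustain wire rate with small packets, which is why the forwarding-engine (CFC/DFC) PPS rating, not just raw bandwidth, can be the bottleneck.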
