Network design

lynneshri
Level 1

Dear All,

I have a question about selecting Cisco switches for my network. I recently joined this company. We are moving to a new location with 5 floors. We have two IDF rooms and one MDF room, and all cables are terminated in the IDF and MDF rooms. In each IDF room, I am going to install 4 switches (192 ports) in a stack. In the MDF room, I am also planning to install 4 switches in a stack. These switches are access switches. From these switches, trunk links will run to my core switch, and the core switch will connect to the firewall and then to the ISP.

Each floor will have 100 to 150 clients (laptops, printers, and phones) connected at any given time, including Wi-Fi.

For the access switches, I am thinking of going with the Cisco Catalyst C9200 with 40G uplink ports. I am not sure about the core switch, but I think it should have three 40G ports to take the uplinks from my access switch stacks.

Please, I need help selecting the Cisco access and core switches. Let me know if you need any more clarification.

Thanks in advance.

LynneShri.

23 Replies

Hi @lynneshri 

The 9200 is an excellent choice for an access switch, and for the core I would suggest the 9500.

What is your concern?

Joseph W. Doherty
Hall of Fame

". . . access switches . . . with uplink port 40G . . ."

BTW, traditionally, user client access switches usually work well with oversubscription ratios of about 25:1 to 50:1, i.e. if user ports are gig, one gig uplink may well handle about 50 gig access ports.

Thanks for your reply. Could you explain what you mean by "one gig uplink may well handle about 50 gig access ports"?

 

"One gig uplink may well handle about 50 gig access port."

Typically, user ports have low overall average utilization. If they only average about 2% utilization, it would take 50 of them to push a same-bandwidth uplink port to 100% utilization. However, "your mileage might vary".
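
To make that arithmetic concrete, here's a rough back-of-the-envelope sketch in Python (the 2% average utilization is only the illustrative figure from above, not a recommendation; plug in your own measured numbers):

def ports_per_uplink(uplink_gbps=1.0, port_gbps=1.0, avg_utilization=0.02):
    # Estimate how many gig edge ports one uplink can absorb before the
    # uplink itself would average 100% utilization.
    avg_per_port_gbps = port_gbps * avg_utilization
    return uplink_gbps / avg_per_port_gbps

# 1 Gbps uplink, gig edge ports averaging ~2% utilization -> ~50 ports
print(ports_per_uplink(1.0, 1.0, 0.02))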

In your OP you note you're moving to a new location. Ideally, you would look at user usage stats from your existing (i.e. prior) location. Assuming how they use the network doesn't change much, you can pretty accurately predict what you'll need for uplink bandwidth relative to the number of user ports.

Or, to put it another way: suppose you're thinking you need a 40g uplink port. If your user ports are gig, that means you could support 40 gig user ports running concurrently at 100%, or 400 gig ports running concurrently at 10%; either is probably very unlikely.

Also keep in mind, it would probably be a good thing to have two uplink ports for redundancy. There are lots of ways to take advantage of those to effectively give you twice your single-uplink bandwidth. So, if one gig uplink port might support 50 gig user edge ports, a second gig uplink port could possibly support 100 gig user edge ports.
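
Turning the same reasoning around, here's a sketch of how many edge ports a given uplink size (and optionally a second uplink) implies at an assumed concurrency. The 100%, 10%, and 2% figures are just the illustrative values used in this thread:

def ports_supported(uplink_gbps, port_gbps=1.0, concurrency=1.0, uplinks=1):
    # How many gig edge ports a group of uplinks could carry if every
    # active port ran at the given fraction of its line rate.
    return (uplink_gbps * uplinks) / (port_gbps * concurrency)

print(ports_supported(40, concurrency=1.0))             # 40 ports flat out at 100%
print(ports_supported(40, concurrency=0.1))             # 400 ports at 10%
print(ports_supported(1, concurrency=0.02, uplinks=2))  # ~100 ports on dual gig uplinks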

Years ago, I was at a company that was planning to deploy 48 port 3750Gs, stacked, for user edge devices. Some stacks would go to the full stack limit of nine members, supporting 432 gig edge ports. Originally we proposed to use only two Etherchannel gig uplinks, with the option to increase the Etherchannel to 4 or 8 ports if needed. Well, someone, I forget who, thought that with that number of edge ports we had to deploy with an 8 gig port Etherchannel. So we were going to do that, until, when we worked the numbers, it turned out to be actually less expensive to swap out two of the 3750Gs for 3750Es so we could use two 10g uplinks. (What made the 8 gig port Etherchannel option more expensive was the 16 [Cisco] optical gig transceivers.)

So, we went ahead with the dual 10g uplinks, and even for 432 gig user edge ports, the 10g uplinks had pretty low utilization.

Again, your mileage might vary. Most user apps (at that time) had the app software on the host, so you were only opening and then saving data files, or you were using some HTTP web app, which often doesn't move a lot of data either.
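
For reference, the oversubscription ratios behind that decision work out roughly like this (a quick Python sketch using the port counts from the story above):

def oversubscription(edge_ports, port_gbps, uplink_gbps_total):
    # Ratio of total edge bandwidth to total uplink bandwidth.
    return (edge_ports * port_gbps) / uplink_gbps_total

# 432 x 1 Gbps edge ports in the nine-member stack
print(oversubscription(432, 1, 8))   # 8 x 1G Etherchannel uplink -> 54:1
print(oversubscription(432, 1, 20))  # 2 x 10G uplinks            -> 21.6:1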

Thank you for your explanation. It helps a lot. I will give you a little bit of background on the usage of our network. In our company, we use video and audio 100%. We use Microsoft Teams and Zoom 100%. We use Zoom for calls and use Zoom software to make and receive calls from laptops. We also have a few desk phones for Zoom. Our users always talk to clients using Microsoft Teams and they love to use video. We have 75-100 visitors every day, and they come with their iPads and phones. We have a live conference every three months and we broadcast it. Our users use Microsoft Teams to participate, and at that time we have 300 users on one floor. On top of that, we use Azure, SharePoint, AWS, and Salesforce software in the cloud.

Taking all of this into consideration, I am thinking of the Cisco C9200 for the access switches, with two 40 Gig uplink ports; if one goes down, I have a backup. For the core switch, I am thinking of the C9300 with two 40 Gig uplink ports. So the connectivity will be fiber running from the C9200 switches to the uplink ports on the C9300.

Let me know your thoughts.

Just checked both Zoom's and MS Teams' highest bandwidth requirements. Both top out at about 4 Mbps. So, 100 users would need up to 400 Mbps, less than half of a gig.
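
As a rough worked example using that ~4 Mbps per-call figure and the client counts mentioned earlier in the thread (worst case, assuming every user is on a video call at once, which is deliberately pessimistic):

def realtime_mbps(users, mbps_per_call=4):
    # Worst-case real-time load if every user is on a video call at once.
    return users * mbps_per_call

print(realtime_mbps(100))  # 400 Mbps for 100 users
print(realtime_mbps(150))  # 600 Mbps for a fully loaded 150-client floor
print(realtime_mbps(300))  # 1200 Mbps if all 300 conference viewers ran their own call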

BTW, if your concern is that you will have sufficient bandwidth to support all the possible real-time traffic, that's understandable, but so far 40g appears to be much, much more than needed.

If you believe you need 40g to guarantee your real-time traffic always obtains the bandwidth it needs, although 40g improves your odds, for a true service guarantee you need QoS.


@lynneshri wrote:
We use Microsoft Teams and Zoom 100%. We use Zoom for calls and use Zoom software to make and receive calls from laptops. We also have a few desk phones for Zoom. Our users always talk to clients using Microsoft Teams and they love to use video. 

Are all users on video calls 100% of the time, Mondays to Fridays?  

Yes, all users use Zoom and Microsoft Teams for video and audio. We conduct all meetings through Microsoft Teams. We are trying to remove all physical phones from desktops.

I agree with @Joseph W. Doherty

It is impossible for all users to be using Zoom/MS Teams 100% of the time.  

"It is impossible for all users to be using Zoom/MS Teams 100% of the time."

Laugh - if that was all that actually happens on the network, designing for bandwidth needs would be so, so much simpler.

The "killer" of real-time traffic is mixing it with an unknown quantity of variable data traffic without QoS.

If you have absolutely no oversubscribed links (very rare), you shouldn't need QoS when mixing such traffic kinds.

The less oversubscription you have, the less need there often is for QoS.

Again, if the OP believes using 40g links will ensure all his real-time traffic works just fine, he may find it does not.

If the OP wants to use 40g links, the only negative, so far, appears to be that he would be funding something that provides no real immediate benefit. (BTW, there's always the argument to "future proof", i.e. we'll eventually need 40g. Likely true, but by the time you really need 40g, it will likely be a lot less expensive.)

Again, using 40g, from what's been described, is likely overkill, but if you want to do that, there's nothing "bad" about it (beyond the possible delta increase in cost).

Where I work, our team manages >500 sites (and the number grows every year). 98% of those sites are on dedicated dark fibre. Each of those sites has dual 10 or 25 Gbps uplinks to the distro. Each access switch has dual 10 Gbps uplinks to the site distribution/core switch. We have everything (except manufacturing & retail) in the mix: over 100 sites are education (various levels, from pre-schools all the way up to colleges and universities), plus office buildings, hospitals, and clinics.

All sites have MS Teams and WebEx (but not 100% of the time) and I have never seen a site exceed 20% of a single uplink.  Even with tele-Health involved, MS Teams &/or WebEx do not make a dent.  

Even some of the schools with dedicated "CAD labs" and their "beefed up" PCs do not exceed 14%.

Have a look at the picture below.  This, according to our NMS, is one of our busiest facilities.  

This site has two 25 Gbps uplinks to two cores, which are a pair of 9500s in a VSS. The graph covers the last 48 hours (2-minute interval averages). There are no servers at the site; all servers are accessed at a central location, and that traffic goes out through one of the two 25 Gbps uplinks.

[Image: 48-hour uplink utilization graph from the site's NMS]

 

Leo,

No surprise to me - much as I would expect - your stats just show that 40g is, again, likely much more than needed.

Yes, 40 Gbps is way over the top.
