03-21-2011 09:00 AM - edited 03-06-2019 04:11 PM
I am looking for some advice on the best way to implement replacing our existing un-managed switches with Cisco switches.
We have a relatively small internal network using a single subnet with no VLANs currently configured. Ultimately I am just looking for the best way to put Cisco hardware in and achieve good performance and redundancy throughout.
We currently have around 7 (10/100) client switches and 4 (10/100/1000) server switches.
My initial thoughts have been to replace these switches and introduce another 2 main switches.
I would then use the 2 x Gigabit uplink ports on each client and server switch to connect to each of the main switches. Each of the two main switches would also have a path out to our gateway, giving every client and server switch a redundant link to each other and to the gateway.
Does this sound like a suitable solution or could it be implemented in a different way?
I am hoping the redundancy can be achieved by implementing STP (correct me if I am wrong), with each switch using one of its uplinks to forward traffic and blocking the other. If a problem occurred on one of the forwarding paths, STP would then put the corresponding blocking port into a forwarding state.
Should this just be configured on the uplink ports ?
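For what it is worth, my understanding is that spanning tree runs per switch and per VLAN rather than only on specific ports, so I imagine the config would be something like the sketch below (this assumes Rapid PVST+ and that everything stays in VLAN 1; the priority values are just my guess, so please correct me if this is off):

```
! On main switch 1 (intended root bridge)
spanning-tree mode rapid-pvst
spanning-tree vlan 1 priority 4096

! On main switch 2 (backup root - becomes root if switch 1 fails)
spanning-tree mode rapid-pvst
spanning-tree vlan 1 priority 8192

! On each client and server switch - just enable the same mode
spanning-tree mode rapid-pvst
```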
In terms of hardware I have been looking at the following:
Client switches - 2960FE 10/100 x 48 w LAN Base
Server Switches - 2960S Gigabit x 48 w LAN Base
Main Switches - Unsure yet, possibly 24-port models from the 3560-X / 3750-X series to incorporate redundant power modules
All of this may, however, be dependent on price, so the more accurate I can be in nailing down the right hardware for the job the better.
Again I am hoping someone may be able to point me in the right direction and suggest more suitable hardware where appropriate
I appreciate this will be a somewhat basic setup for many people within the support forums but it seemed the best place to look to get the advice I need.
Whilst I have my CCNA, it's just one of the areas I manage at work and I don't have previous experience in designing a switch network and specifying the best hardware (yet).
I have attached a Visio diagram of how I envisage the switch topology should look.
Hoping someone can help
03-21-2011 09:35 AM
I suggest going with the 3750s as your main switches, as you can stack them. Then you can EtherChannel from the 2960s to both main switches in the stack, thus avoiding the STP blocking-port issue. Instead of a 1Gbps link to the main switch, you will have a 2Gbps connection and redundancy.
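On the 2960 side that is only a few lines. A rough sketch, with the interface numbers as placeholders for your actual uplink ports:

```
! Bundle both uplinks into a single logical link
interface range GigabitEthernet0/1 - 2
 channel-group 1 mode active   ! LACP; use "mode on" for a static bundle
!
interface Port-channel1
 switchport mode trunk
```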
03-22-2011 02:20 AM
Thanks for your response Edison
Would the benefit from stacking the switches just be easier management or does this provide some kind of redundancy as well?
If I were to use EtherChannel (which sounds ideal), is it fairly straightforward to configure? And how does that fit in with using STP or RSTP, if at all?
03-22-2011 07:48 AM
Matthew
If you make sure your EtherChannel is spread across multiple switches in the 3750 stack then you do get redundancy, because if a particular switch in the stack fails the EtherChannel will simply use the remaining links. This setup is called MEC (multi-chassis EtherChannel) on the 3750.
As far as STP/RSTP is concerned they see the logical etherchannel link as one link so none of the physical links within the etherchannel are blocked. So you get all the physical links within the etherchannel forwarding traffic.
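As a rough illustration, the stack side of a cross-stack EtherChannel just puts the member ports on different stack members (the port numbers here are arbitrary):

```
! Port on stack member 1
interface GigabitEthernet1/0/1
 channel-group 10 mode active
!
! Port on stack member 2 - same channel-group
interface GigabitEthernet2/0/1
 channel-group 10 mode active
!
interface Port-channel10
 switchport mode trunk
```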
Jon
03-21-2011 02:53 PM
Client switches - 2960FE 10/100 x 48 w LAN Base
Server Switches - 2960S Gigabit x 48 w LAN Base
I agree with Edison. 3750X for the main switch is a sound call (+5). You have an option of using 4 x 1Gb SFP ports or 2 x 10Gb SFP+ ports.
Hmmm ... Plain 2960 for client switches doesn't sit well with me (normally). When dealing with access switches we now use switches which support PoE and 1Gb to the desktop. We then choose whether we want 1Gb or 10Gb uplink ports.
In regards to the server switches, I agree with your choice of the 2960S, but did you choose the plain 2960S (1Gb uplinks) or the "D" model with 10Gb uplinks?
Take note that the 2960S will support up to 4 switches stacked together.
03-22-2011 02:27 AM
Thanks for your response
Ideally I would like to present 1Gb to the user desktop, but it all just depends on the $$ available for this project.
I suppose I can put them down initially, I can only get knocked back !!
I think the server switch I looked at was the one with the 1Gb uplinks, but I may well consider the 10Gb.
03-22-2011 07:54 AM
Hi
Will the switches be placed in the same room ?
Does the room have redundant power ?
Approximately how many Ethernet ports are we talking about?
Is everything copper, or is there fiber installed as well?
This is the information needed to proceed with the design.
HTH
03-24-2011 08:44 AM
90% of the switches will all be located in the same room, we have a couple of switches located on other floors which may require a fiber connection.
We have a couple of UPSs located in the same room, so they may provide power and some redundancy for the core switches.
In terms of Ethernet ports, probably around 150 connections at the access layer for PCs and printers, and most probably 40-50 server switch connections.
As I mentioned mostly everything is copper with potentially a couple of fiber uplinks.
03-24-2011 12:48 PM
Hi
OK, now we have something to work with.
I would go with 3-4 3750X-48T-S switches (if you do not care about the power stack I would go with the L instead of the S and save about 15%, but the power stack is a nice feature, so it is worth some fighting for!), plus one spare/lab/test 3750X of the same model as the above.
Then, depending on the number of ports needed, I would use something like the WS-C2960G-24TC-L for the different floors where you need to put a switch.
Can you reuse the switches you are using today out on the floors (to save money)? That is a good card to have in a negotiation.
This will give you a good core that will be able to give you a stable network for at least 6-8 years ahead.
You can have up to eight 10Gig ports in the core, with four C3KX-NM-10G modules installed, without adding switches.
But since you do not seem to need them right now, I would skip them, treat them as expansion modules for later use, and use them as leverage in the coming negotiation with the person holding the money.
Do not forget to buy a couple of extra power supplies for the stack.
In total this will give you gig access everywhere (it will be needed when, e.g., installing computers). In the core you will have local switching and 64Gbps connecting the 3750 switches, you can run EtherChannels out to the floor switches, and if need be you can have 10Gig to servers and SAN.
You can also trunk the servers if you need to.
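If you do trunk a server, the NIC must support 802.1Q tagging; the switch side is just a trunk port. A sketch with made-up interface and VLAN numbers:

```
interface GigabitEthernet1/0/10
 description Server with 802.1Q-capable NIC
 switchport trunk encapsulation dot1q   ! required on the 3750-X
 switchport mode trunk
 switchport trunk allowed vlan 10,20
```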
I would recommend that half the power supplies go on the UPS and half of them stay off the UPS.
In case the UPS malfunctions this will save the stack from losing power, and if mains power is lost the UPS will keep the stack alive.
This is what I think I would do.
Good luck
HTH
03-25-2011 07:18 AM
Thanks for the detailed view on how you would do things; it certainly helps my thinking about a suitable solution.
I have had some brief dialogue with a Cisco Gold partner regarding the solution and they have identified the following
(Obviously they have had more detailed summary of our requirements than those I have been able to provide here)
The core of the network is a pair of Catalyst 4948Es with 4 x 10Gb ports, server access to come from Catalyst 3560-X with 48 10/100/1000 ports, and user access via Catalyst 2960 48-port FE or triple-speed.
They have then proposed that each of the access switches is dual-homed back to the core devices, with RSTP and HSRP deployed to provide rapid network failover for a switch or link failure. I assume this will provide 2Gb redundant links to each access switch?
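If I have understood HSRP correctly, the two core devices would share a virtual gateway address along these lines (all the addresses and VLAN numbers below are just made up by me):

```
! Core switch 1 - active gateway
interface Vlan10
 ip address 192.168.10.2 255.255.255.0
 standby 10 ip 192.168.10.1
 standby 10 priority 110
 standby 10 preempt
!
! Core switch 2 - standby gateway, same virtual IP
interface Vlan10
 ip address 192.168.10.3 255.255.255.0
 standby 10 ip 192.168.10.1
```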
As far as I am aware none of the devices will be stacked. Am I right in saying that if the core devices were stacked I would need four of them to provide the redundant EtherChannels, as each stacked pair would provide one link?
If my assumption is correct then I certainly think we just need to implement the devices as standalone as the cost of stacking those devices would undoubtedly be overkill for our environment.
In terms of configuring the EtherChannels, I assume this is available between the core switch models and both the server and user access switches I have mentioned above?
Thanks in advance
03-25-2011 10:43 AM
That sounds like a textbook design from your partner.
I personally would not use that design, but then again they have more information so they might know something I do not.
None of the switches they have recommended to you are stackable the way the 3750 is, so it is not the same type of stack we are talking about.
The 3750X stacks are of two types. One is StackWise, for data exchange, with a combined speed of 64Gbps (ring); this works in all 3750X models.
The other is PowerStack, which lets the 3750s share power between them. This only works with IP Base or IP Services.
Good luck
HTH