Hardware question

shaun.nilsen1
Level 1

I am doing a project for school that requires me to build a parts list. For now I am concentrating on Building 1, which has 140 PCs, 137 VoIP phones, and 27 APs. For the access switches I've chosen:

1x - Cisco Catalyst 4507R+E - switch – Chassis            

4x - Cisco Catalyst 4500E Series 48-Port Gigabit Ethernet Switch                             

2x - Cisco Supervisor Engine 7L-E - control processor                                                      

2x - Cisco 4200 WACV - power supply - 4200 Watt    

Questions

1. Would having 2 of these setups for redundancy be worthwhile, or is that overkill?

2. I planned on connecting the VoIP phones through the computers so each desk only needs 1 connection to the switch. Good or bad idea?

3. For the distribution switches, should I go 6500/6800 or go with the Nexus series?

Thanks in advance for any insight.              

Shaun

8 Replies

OK, since nobody has answered yet, I'll have a go.

I take it that by "4x - Cisco Catalyst 4500E Series 48-Port Gigabit Ethernet Switch" you mean the line cards in the chassis, which I would guess would be the WS-X4648-RJ45V-E if you want just standard 802.3af power, or the WS-X4748-RJ45V-E if you require 802.3at power.

As for your questions:

1) You are already redundant everywhere except the chassis. Do chassis ever fail? That's the risk you're weighing against how much you have to spend.

2) People have been running PCs behind their IP phones since day one, so a good idea! (There's a quick config sketch below.)

3) Most probably a Nexus.
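
Here's a minimal sketch of what answer 2 looks like on one of the 4500E copper ports; the interface and VLAN numbers are just example values, so adjust them to your own scheme:

interface GigabitEthernet2/1
 description Desk port - PC plugged into the back of the IP phone
 switchport mode access
 ! example data VLAN for the PC and voice VLAN for the phone
 switchport access vlan 10
 switchport voice vlan 20
 spanning-tree portfast
 power inline auto

The phone tags its traffic into the voice VLAN while the PC's untagged traffic lands in the data VLAN, so one cable run and one switch port cover both devices.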

HTH

Richard

Thanks for the response, Richard.

My apologies, I just copied the CDW header text. I was looking at the WS-X4748-RJ45V-E. The project is for an electric company, so budget is not an issue. I think I may have misspoken. With the dual power supplies and dual supervisors I get redundancy there, but a better question on my part would have been, "How many extra line cards should I keep on hand?" I only need 167 ports, which would fill 3.5 of the 48-port line cards in this building. This particular chassis would hold only 1 more line card, and I think that is why I was leaning towards a duplicate setup with another chassis.

I am not very familiar with the Nexus line; which model would be a good fit for complementing these switches? I just started looking at the line briefly and found the N3K-C3064PQ-10GX.

Thank you again,

Shaun

Hey - you should only connect 2K Nexus FEXes to a 6800IA-style setup in the enterprise; other than that, do not connect any low-end FEXes to enterprise switches. It's not recommended, and STP is shut off on them by default. Nexus switches are designed for DC deployments (7K - 5K - 2K etc. connected together), while the enterprise design is 6500 core - 4500 distribution - 3750s/3850s (or even 2960s) at the access layer, depending on what's required.

For what you have above, 3x 3850s stacked together and a couple of daisy-chained 2960s would do, as it's just PCs/phones and maybe a server or two. That would cover all your ports and leave you some free, and the 3850s can also be licensed as a wireless controller for up to a maximum of 50 APs. If you buy a 4500 you will need to get a wireless line card, if there's one available for that platform, or else a standalone WLC.
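
If you do go the stacked-3850 route, the stack itself is mostly a cabling exercise, but you can pre-provision the members so the config is ready before they join; a small sketch, where the member count and model string are assumptions on my part:

! pre-provision the expected stack members
switch 1 provision ws-c3850-48p
switch 2 provision ws-c3850-48p
switch 3 provision ws-c3850-48p

Once the StackWise cables are in, "show switch" should list all the members and which one took the active role.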

If you're linking 2 buildings and you really want 4Ks, get them in VSS mode acting as one unit for redundancy, split between the two buildings, and then just use standard switches for your users. You really only need a collapsed-core design with that small a network.

Chassis do fail, but very rarely. If it's 2 buildings, VSS combines the power of both platforms together and gives you redundancy.
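
Just so you can picture it, the VSS conversion itself is quite small; a minimal sketch for chassis 1, where the domain number, port-channel number and 10G VSL interface are example values (chassis 2 mirrors it with "switch 2" and its own VSL port-channel):

switch virtual domain 100
 switch 1
!
! VSL: dedicated port-channel carrying the link between the two chassis
interface Port-channel1
 switch virtual link 1
 no shutdown
!
interface TenGigabitEthernet1/1
 channel-group 1 mode on
 no shutdown

After that, "switch convert mode virtual" on each chassis reloads them and they come back up as one logical switch.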

Thanks for your response, Mark.

For this assignment it is a 3-building network (main office w/ DC, power grid management building, and a backup site). The building discussed is the main building. My professor is against daisy chaining, so is there another solution you could recommend? I know this is broad, as I have left a lot of variables out. It is not a large network, but it is a critical network.

Thanks,

Shaun

You could still use a VSS in the two buildings to act as one switch, routed at layer 3, combining both buildings, and then route from the 3rd building to the VSS switch. That way there's no L2 daisy chain; it's an L3 setup the whole way, breaking up the broadcast domains, if that's what he's concerned about.
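
For the routed side, a rough sketch of the building-3 uplink; the interface, subnets and OSPF process number are purely example values, and the VSS end would mirror it with the other /30 address:

! routed point-to-point uplink from the building-3 switch towards the VSS pair
interface GigabitEthernet1/0/48
 no switchport
 ip address 10.0.3.1 255.255.255.252
 no shutdown
!
router ospf 1
 network 10.0.3.0 0.0.0.3 area 0
 network 10.30.0.0 0.0.255.255 area 0

Because the uplink is a routed /30 rather than a trunk, the broadcast domain stops at each building.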

The VSS really interests me. How would it look on my network diagram? Would both switches look just like 1, or would both be represented in the diagram? I am not a Visio expert, so please forgive me. Also, my professor's response to my initial parts list was "looks reasonable", so I have included it for you to look at as well. From the responses I've gotten, it might be too much.

Have a look at this and the attached screenshot for VSS: 2 physical switches that act as one unit, with the combined power of 2x 6500s or 4500s. It's used in a lot of designs as the core, with either a straight access layer connected by port-channel to each physical switch, or a distribution layer and an access layer below it. For your size, though, you would only need a collapsed core; no real distribution layer.

http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/Campus/VSS30dg/campusVSS_DG/VSS-dg_ch1.html
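
From the access-switch side that port-channel is just an ordinary LACP bundle; the point is that its two members physically land on the two different VSS chassis (a multichassis EtherChannel), so either chassis can fail without the access switch losing its uplink. A sketch, with the interface and port-channel numbers as assumptions, on something like a 3650/3850 where dot1q is the only trunk encapsulation:

! one uplink to each physical chassis of the VSS pair, bundled together
interface range GigabitEthernet1/0/49 - 50
 switchport mode trunk
 channel-group 10 mode active
!
interface Port-channel10
 description Uplink to collapsed-core VSS
 switchport mode trunk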

Had a quick look at your network device selection; that's pretty expensive kit. In the real world, a network with kit that big would have a lot more users than a couple of hundred.

A network that big and expensive, with that much power behind it, would have a lot more users and servers and be high-end all round. You have chosen large core/distribution switches, some with 10-slot and 8-slot chassis; the 4500s alone could run to 100 grand to kit out with blades. Remember these are core boxes, so if they are standalone they need dual sups, dual power supplies, etc., and if VSS you need identical matching switches, sups, blades, etc. Then there's support and LCM renewal to keep everything in date; the 6800, 6500 and 4500 are all big money to support and license for features and services.

Honestly, to cover 3 buildings and the number of users you have, 2x 4500s would be overkill, but they would future-proof it with that many slots. Then put a couple of layer 2/3 3650 switches, or something similar, spread out for user access, and connect your servers directly to the 4500s.

I have seen networks with maybe 5x 3750-Xs stacked together cover what you require above without any issue, as long as there are no large servers connected to them.