05-21-2010 09:30 PM - edited 03-06-2019 11:12 AM
Hi,
My company is currently relocating its primary site, including its DataCentre, to a new location. As part of this we wish to upgrade our core switching environment from our 2 x 4507Rs and 6 x WS-4948s (Server/Core) and 11 x 3560s (User Block), as we are severely limited by the 3 Gbps backplane on the 4507s, which are configured as a collapsed core. I am looking for some high-level suggestions.
Requirements:
DataCentre:
227 x 1 Gb ports for servers
100 x 100 Mb ports for DRAC/MGMT
User Access:
288-384 x 1 Gb PoE ports.
288 of the data ports terminate in the DataCentre.
96 of the data ports will terminate on 2 remote switches (connected by OM3).
My initial thinking was to get 2 x Nexus 7Ks and 2 x 3750s and have everything patched straight into the 2 collapsed cores, given the huge fabric these boxes have, but gotcha #1 was our PoE requirement for the 288 user ports.
So now I am thinking something like:
2 x N7Ks (configured with 3 x 1 Gb line cards)
2 x 6500s (configured with a Sup32 and 3 x 48-port 1 Gb PoE line cards)
2 x 3560s for DRAC/MGMT (from spares)
This would be great for separating the campus environment from the DataCentre environment, but the cost is obviously much higher...
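For context, this is roughly how I pictured each 6500 user block dual-homing into the two N7K cores at layer 2. The interface numbers and VLAN IDs here are just placeholders for illustration, not our real scheme:

! On each 6500 user-access switch: one trunk uplink to each N7K core
interface TenGigabitEthernet1/1
 description Uplink to N7K-CORE-1
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 10,20
!
interface TenGigabitEthernet1/2
 description Uplink to N7K-CORE-2
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 10,20

Spanning tree would block one of the two uplinks per VLAN, so I'd effectively be sizing around a single active 10 Gb path per user-block switch.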
I have read through the DC3 IP Datacenter Network design guide, but I have to accommodate campus user access without blowing the project budget out of the water.
Any suggestions would be great.
Thanks
05-22-2010 12:14 AM
Hi,
To design a network you should be well aware of the application capacity, the number of users hitting those applications, and the types of traffic in the network (data, voice, video). With these in mind, design the backbone for future growth: if the network will grow, you need to deploy high-end switches at the core with enough backplane capacity to handle the load.
The 6500 chassis offers each card slot a connection to a 32 Gbps shared bus, plus one or possibly two fabric channels per slot, where the fabric channels run at either 8 Gbps or 20 Gbps. "Fabric enabled" cards have one or two fabric channel connections, of either the 8 Gbps or 20 Gbps type.
The fabric is supplied either on its own card (the older 8 Gbps channels - 256 Gbps total) or on the Sup720 (the newer 20 Gbps channels - 720 Gbps total).
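If you want to see what a given chassis actually has, these commands (available on fabric-equipped 6500s, to the best of my knowledge) will show it:

show module              ! which cards are installed and fabric enabled
show fabric status       ! per-slot fabric channel status and speed
show fabric utilization  ! current load on each fabric channel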
Check out the link below for the 6500 series switches.
The Cisco Nexus 7000 is built to support future 40 Gbps and 100 Gbps Ethernet.
Check out the link below for more information on the Cisco Nexus 7000 series switches:
http://www.ciscounity.info/en/US/solutions/collateral/ns340/ns517/ns224/nexus_deployment_report.pdf
I would also suggest taking suggestions here for the design, but have a network consultant do the actual design to your requirements in order to avoid blunders.
Hope to Help !!
Ganesh.H
Remember to rate the helpful post
05-22-2010 05:47 AM
I'm not familiar with the Nexus line yet, so I'll stick to the 6500.
If you get, say, a pair of 6500E chassis and use the VS-Sup720, the two chassis will form a single logical switch, much like a 3750 or 2975 stack. I'd say you want to take advantage of multiple 10 Gb links, so each chassis would have one 6708-10G (8 x 10 Gb ports per blade). For your line cards, you'll probably want to think about the 6748 (48 x 1 Gb ports per blade). For the DRAC/iLO you can use the low-end 6148.
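A minimal sketch of the VSS piece of that, assuming domain ID 100 and the VSL running over the sup's Te5/4-5 ports (both of those are assumptions, adjust to your layout):

! On chassis 1
switch virtual domain 100
 switch 1
!
interface Port-channel1
 description VSL to chassis 2
 switch virtual link 1
!
interface range TenGigabitEthernet5/4 - 5
 channel-group 1 mode on
!
! Chassis 2 mirrors this with "switch 2", "switch virtual link 2" and
! its own VSL port-channel; then run "switch convert mode virtual" on both.

Once converted, the pair presents one control plane, so your access switches can run multi-chassis EtherChannel uplinks instead of relying on spanning tree to block a path.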
For access, I'd recommend the 6500E with dual Sup32s and then use the 6148 with the PoE daughtercard.
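The user-facing ports themselves are then just standard PoE access config, something like this (VLAN 10 is a placeholder):

interface range GigabitEthernet2/1 - 48
 switchport
 switchport mode access
 switchport access vlan 10
 power inline auto
 spanning-tree portfast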