05-19-2018 08:34 PM - edited 03-01-2019 01:44 PM
Hi Everyone,
I'm new to this, but I've been tasked with designing the network architecture for a small DC/server room: typically 6 to 10 servers, 10G uplinks from access to distribution, and about 50TB of SAN storage. This is the first time I'll be handling something like this, though I have some experience managing campus and branch office networks. My question: what type of switching infrastructure would be most suitable?
1. Nexus aggregation with fabric extension (feels like overkill, but perhaps worth it for ease of deployment and scalability)?
2. Catalyst switches (VSS at the aggregation layer, stacking at the access layer, LACP from server to ToR and from access switch to aggregation)?
3. Regular modular switching, with Catalyst 6800 at distribution and 3650 at access, plus whatever redundancy works aside from STP?
As a newbie to DC networking I'm really confused, so any experienced advice would help. Cost is not a major concern, but it's always good to keep it down. Minimum downtime is expected. In addition, I'd rather not rely on STP (to avoid blocked ports and maximise port usage).
Thanks!
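For reference, the "no blocked ports" goal usually comes down to multi-chassis link aggregation, so a server or access switch can run one LACP bundle across two upstream switches with every link forwarding. A minimal sketch of Nexus vPC, one common way to do this; interface numbers, VLAN, and keepalive addresses are hypothetical, not a tuned design:

```
! Sketch only: vPC on a pair of Nexus switches lets a downstream device
! run an LACP port-channel split across both chassis, so STP never has
! to block a link. Repeat (mirrored) on the vPC peer switch.
feature vpc
feature lacp

vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1

! Peer link between the two Nexus switches
interface port-channel1
  switchport
  vpc peer-link

! Port-channel toward a server or access switch
interface port-channel20
  switchport
  switchport access vlan 100
  vpc 20

interface Ethernet1/20
  switchport
  switchport access vlan 100
  channel-group 20 mode active
```

On the Catalyst side, VSS (option 2) achieves the same effect: the two aggregation chassis present themselves as one logical switch, so a standard multi-chassis EtherChannel replaces STP-blocked uplinks.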
05-23-2018 01:03 AM
@Revenue_admin wrote:
Why did you specifically say to enable QoS?
Because Catalyst switches have shallow port buffers. When (not "if") the servers start hammering the ports with bursts of high-speed traffic, the port buffers fill up very quickly and you'll start dropping packets by the truckload.
QoS is the only way to keep those buffers from overflowing and to control which traffic gets dropped first.
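A minimal sketch of what "enable QoS" can mean in practice on the Catalyst side; the buffer allocations and thresholds below are illustrative values, not tuned recommendations:

```
! Older "mls qos" Catalysts (e.g. 3750/3560): QoS is off until enabled.
mls qos
! Rebalance egress buffers across the four queues of queue-set 1
! (allocations must total 100) and raise the drop thresholds so
! bursty queues can borrow from the common pool before dropping.
mls qos queue-set output 1 buffers 15 30 30 25
mls qos queue-set output 1 threshold 2 400 400 100 400
interface GigabitEthernet1/0/1
 queue-set 1

! Newer 3650/3850 platforms: QoS is on by default; soft buffer
! limits can be stretched globally instead.
qos queue-softmax-multiplier 1200
```

Note that once `mls qos` is enabled on the older platforms, default policing/trust behavior changes on every port, so this should be planned rather than toggled casually.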