I'm new to this, so bear with me. I've been tasked with designing the networking architecture for a small DC/server room: roughly 6 to 10 servers, 10G uplinks from access to distribution, 50 TB of SAN storage, that sort of scale. This is the first time I'll be handling something like this, though I have some experience managing campus and branch office networks. My confusion is this: what type of switching infrastructure would be most suitable?
1. Nexus aggregation with Fabric Extenders (feels like overkill, but there's the ease of deployment and scalability)?
2. Catalyst switches (VSS at the aggregation layer, stacking at the access layer, LACP from server to ToR and from access to aggregation)?
3. Regular modular switching, with Catalyst 6800 for distribution and 3650 for access, plus whatever redundancy works aside from STP?
As a newbie to DC networking, I'm really confused, so any experienced advice would help. Cost is not a major concern, but it's always good to keep it down. Minimal downtime is expected. Also, I'd rather not rely on STP for redundancy (to avoid blocked ports and maximise port usage).
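For option 2, the server-to-ToR and ToR-to-aggregation links would both be LACP bundles, which is what removes the need for STP-blocked uplinks. A rough sketch in Cisco IOS syntax (interface names, VLAN numbers, and channel-group numbers are placeholders, not from the original post; verify syntax against your platform):

```
! --- Access (ToR) stack: bundle two ports to a server NIC team with LACP ---
interface range TenGigabitEthernet1/0/1 - 2   ! placeholder server-facing ports
 description server01 NIC team
 switchport mode access
 switchport access vlan 10                    ! placeholder server VLAN
 channel-group 1 mode active                  ! LACP active mode
!
interface Port-channel1
 switchport mode access
 switchport access vlan 10
!
! --- Uplink from the access stack to the VSS pair ---
! One cross-stack/multichassis EtherChannel, so both uplinks forward
! and STP never blocks a port
interface range TenGigabitEthernet1/1/1 , TenGigabitEthernet2/1/1
 description uplink to VSS aggregation
 channel-group 10 mode active
!
interface Port-channel10
 switchport mode trunk
```

Because the aggregation side is a single logical switch (VSS) and the access side is a single logical stack, each port-channel terminates on "one" device at each end, so there is no loop for STP to break, though you'd still leave STP enabled as a safety net.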
Why did you specifically say to enable QoS?
Because Catalyst switches have shallow port buffers. When (not "if") the servers start hammering the ports with high-speed traffic, the port buffers fill up very quickly and you'll start dropping packets by the truckload.
QoS is the only way to keep those buffers from overflowing.
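As a rough sketch of what "enable QoS" can mean in practice on a Catalyst 3650/3850: you can let egress queues borrow more deeply from the shared buffer pool to absorb bursts. The commands below are from memory and the values are assumptions, so check them against the QoS guide for your exact platform and software release:

```
! Catalyst 3650/3850: raise the soft max threshold so egress queues can
! borrow more from the shared buffer pool during bursts (default is 100)
qos queue-softmax-multiplier 1200
!
! Optionally dedicate more buffer to a server-facing port via a policy-map
policy-map SERVER-EGRESS
 class class-default
  queue-buffers ratio 100
!
interface TenGigabitEthernet1/0/1    ! placeholder server-facing port
 service-policy output SERVER-EGRESS
```

This doesn't make the buffers any bigger in absolute terms; it just controls who gets them and when, which is exactly why classifying and queueing your SAN/replication traffic properly matters on shallow-buffer platforms.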