In SAN environments there is a growing focus on the underlying infrastructure as a key component of the overall storage solution. Storage architectures have gradually shifted from the server-centric paradigm of the past to a storage-centric model and, more recently, to a network-centric model. Data availability, data integrity, and data consistency requirements are critical to users and further increase the focus on the SAN infrastructure within an IT organization.
Virtual SAN (VSAN)
Cisco MDS switches offer VSAN technology, a simple and secure way to consolidate many SAN islands into a single physical fabric. Each VSAN receives its own fabric services and its own role-based management, with separation of both the control plane and the data plane.
It is common practice in SAN environments to build two separate, redundant physical fabrics (Fabric A and Fabric B) so that the failure of one physical fabric does not take down the whole SAN. When designing for large networks, most environments fall into one of two topologies within a physical fabric:
Core-Edge Topology
The core-edge fabric topology provides all the necessary features of a SAN: resiliency with predictable recovery, scalable performance, and scalable port density in a well-structured design. The core-edge model places two or more core switches at the center of the fabric to interconnect two or more edge switches at the periphery of the fabric.
When designing a core-edge topology, a trade-off is made among three key characteristics of the design. The first is the overall effective port density available to connect hosts and storage devices. For a given number of switches in a core-edge design, the higher the effective port density, the higher the ISL oversubscription from the edge layer to the core layer typically becomes. The second characteristic is the oversubscription of the design. Oversubscription is a natural part of any network topology, as the nature of a SAN is to 'fan out' the connectivity and I/O resources of storage devices. However, the higher the oversubscription of the design, the more likely congestion becomes, potentially impacting a wide range of applications and their I/O patterns. The third characteristic, which ties the other two together, is cost. The rule of thumb is that the higher the oversubscription for a given effective port density, the lower the overall cost of the solution.
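The relationship between edge port count and edge-to-core ISL oversubscription can be sketched with a small calculation. The switch sizes and link speeds below are illustrative assumptions, not recommendations:

```python
# Sketch: ISL oversubscription from the edge to the core of a core-edge fabric.
# All port counts and speeds below are illustrative assumptions.

def isl_oversubscription(host_ports: int, host_speed_gbps: float,
                         isl_count: int, isl_speed_gbps: float) -> float:
    """Ratio of aggregate host-facing bandwidth to aggregate ISL bandwidth."""
    return (host_ports * host_speed_gbps) / (isl_count * isl_speed_gbps)

# A hypothetical 48-port edge switch: 44 ports connect hosts at 4 Gbps,
# and 4 ports are used as ISLs to the core, also at 4 Gbps.
ratio = isl_oversubscription(host_ports=44, host_speed_gbps=4,
                             isl_count=4, isl_speed_gbps=4)
print(f"Edge-to-core oversubscription: {ratio:.0f}:1")  # 11:1
```

Reassigning two of the host ports to ISLs would lower the ratio (and the usable port density), which is exactly the density-versus-oversubscription trade-off described above.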
All SAN infrastructures incorporate at least one level of oversubscription. The traffic flow model within a SAN is always between SCSI initiators and SCSI targets, in other words between servers and storage. Server connections to the SAN will always outnumber storage connections simply because of the high capacity of storage subsystems. Disk subsystem vendors commonly make recommendations on a fan-out ratio of storage to server ports that can be upward of 12:1. It is therefore appropriate for core-edge designs to use oversubscription ratios that approach the recommended 12:1 storage-to-server fan-out ratio.
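A fabric's actual fan-out can be checked against a vendor guideline with a one-line calculation. The 12:1 limit matches the text; the port counts below are illustrative assumptions:

```python
# Sketch: check a fabric's storage-to-server fan-out against a vendor guideline.
# Port counts are illustrative assumptions; 12:1 is the guideline from the text.

def fan_out_ratio(server_ports: int, storage_ports: int) -> float:
    """Number of server ports served per storage port."""
    return server_ports / storage_ports

VENDOR_MAX_FAN_OUT = 12  # a typical storage-to-server recommendation

servers, storage = 264, 24  # a hypothetical fabric
ratio = fan_out_ratio(servers, storage)
print(f"Fan-out: {ratio:.0f}:1 (guideline {VENDOR_MAX_FAN_OUT}:1)")
assert ratio <= VENDOR_MAX_FAN_OUT, "fabric exceeds the recommended fan-out"
```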
Collapsed-Core Topology
The collapsed-core fabric topology offers the features of the core-edge topology but delivers the required port densities more efficiently. The collapsed-core topology is enabled by the high port density offered by the Cisco MDS.
The salient features of the collapsed-core topology are:
The resiliency of this topology is high due to its redundant structure, but without the excessive ISLs of a mesh topology.
The performance of this topology is predictable: paths between two communicating devices do not vary in length or hop count, as the direct path between the two switches is always chosen.
Using ISL Port Channels between the switches provides faster recovery with no fabric-wide disruption. A Port Channel aggregates multiple ISLs into a single logical link and load-shares traffic across its members.
The port density of this topology can scale quite high, although not as high as the core-edge topology. With the high port density offered by the MDS 9513 director, however, a collapsed-core fabric can still scale quite large.
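The usable port count of a two-director collapsed-core fabric can be estimated by subtracting the ports reserved for the inter-switch Port Channel. The per-chassis port count and ISL count below are illustrative assumptions:

```python
# Sketch: effective device-facing ports in a two-switch collapsed-core fabric.
# Chassis size and ISL count are illustrative assumptions.

def effective_ports(switches: int, ports_per_switch: int,
                    isl_ports_per_switch: int) -> int:
    """Ports left for hosts and storage after reserving ISL ports."""
    return switches * (ports_per_switch - isl_ports_per_switch)

# Two hypothetical 528-port directors, each reserving 8 ports for the
# inter-switch Port Channel.
usable = effective_ports(switches=2, ports_per_switch=528,
                         isl_ports_per_switch=8)
print(f"Usable device ports: {usable}")  # 2 * (528 - 8) = 1040
```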
This topology is simple to architect and implement, and simple to maintain and troubleshoot.
The main advantage of this topology is the degree of scalability offered with very efficient effective port usage. The collapsed-core design aims to offer very high port density while eliminating a separate physical layer of switches and their associated ISLs.
The only disadvantage of the collapsed-core topology is its scale limit relative to the core-edge topology. While the collapsed-core topology can scale quite large, the core-edge topology should be used for the largest of fabrics. To continue scaling a collapsed-core design, however, the existing switches can become the core of a core-edge design by adding a layer of edge switches.
It is recommended to use director-class MDS 9500 Series switches for the collapsed-core design.
Note: SAN designs should always use two isolated fabrics for high availability, with both hosts and storage connecting to both fabrics. Multipathing software should be deployed on the hosts to manage connectivity between host and storage, so that I/O uses both paths and failover between fabrics is non-disruptive in the event of a problem in one fabric. Fabric isolation can be achieved using either VSANs or dual physical switches. Both provide separation of fabric services, although it could be argued that multiple physical fabrics provide greater physical protection and protection against equipment failure.