In SAN environments there is a growing focus on the resulting infrastructure as a key component of the overall storage solution. There has been a gradual shift from the server-centric storage solutions of the past to a storage-centric model, and more recently to a network-centric model. Data availability, data integrity, and data consistency requirements are critical to users and further increase the focus on the SAN infrastructure within an IT organization.
Virtual SAN (VSAN)
Cisco MDS switches offer VSAN technology, a simple and secure way to consolidate many SAN islands into a single physical fabric. Each VSAN receives its own fabric services and its own role-based management, with separation of both the control plane and the data plane.
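As an illustration, the following NX-OS configuration sketch shows how two formerly separate SAN islands might be carried as separate VSANs on one physical MDS switch. The VSAN IDs, names, and interface numbers are hypothetical, chosen only for this example:

```
! Create two VSANs and assign one Fibre Channel interface to each.
! Each VSAN runs its own independent set of fabric services.
vsan database
  vsan 10 name ENGINEERING
  vsan 20 name FINANCE
  vsan 10 interface fc1/1
  vsan 20 interface fc1/2
```

Devices on fc1/1 and fc1/2 are isolated from each other exactly as if they were attached to two physically separate fabrics.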
It is common practice in SAN environments to build two separate, redundant physical fabrics (Fabric A and Fabric B) in case a single physical fabric fails. When designing for large networks, most environments will fall into two types of topologies within a physical fabric:
The Core-Edge fabric topology has all the necessary features of a SAN in terms of resiliency and performance. It provides resiliency and predictable recovery, scalable performance, and scalable port density in a well-structured design. The core-edge model is architected with two or more core switches in the center of the fabric to interconnect two or more edge switches in the periphery of the fabric.
When designing a core-edge topology, a trade-off is made among three key characteristics of the design. The first is the overall effective port density available to connect hosts and storage devices: for a given number of switches in a core-edge design, the higher the effective port density, the higher the ISL oversubscription from the edge layer to the core layer typically becomes. The second is the oversubscription of the design. Oversubscription is a natural part of any network topology, since the purpose of a SAN is to 'fan out' the connectivity and I/O resources of storage devices. However, the higher the oversubscription of the design, the more likely congestion is to occur, impacting a wide range of applications and their I/O patterns. The third characteristic, which ties the other two together, is cost. The rule of thumb is that the higher the oversubscription for a given effective port density, the lower the overall cost of the solution.
All SAN infrastructures incorporate at least one level of oversubscription. Traffic within a SAN always flows between SCSI initiators and SCSI targets, in other words between servers and storage. The number of server connections to the SAN will always outnumber the storage connections, simply because of the high storage capacity of the subsystems. Disk subsystem vendors commonly recommend a storage-to-server fan-out ratio that can be upward of 12:1. It is therefore appropriate for core-edge designs to use oversubscription ratios that approach the recommended 12:1 storage-to-server fan-out ratio.
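The arithmetic behind these ratios can be sketched as follows. The port counts and link speeds below are hypothetical, chosen only to illustrate the calculation against the 12:1 guideline:

```python
# Illustrative sketch: compute edge-to-core ISL oversubscription and the
# storage-to-server fan-out ratio for a hypothetical core-edge design.
# All port counts and speeds are assumptions, not figures from the text.

def isl_oversubscription(host_ports: int, host_speed_gbps: float,
                         isl_ports: int, isl_speed_gbps: float) -> float:
    """Aggregate host-facing bandwidth divided by aggregate ISL bandwidth."""
    return (host_ports * host_speed_gbps) / (isl_ports * isl_speed_gbps)

def fan_out_ratio(server_ports: int, storage_ports: int) -> float:
    """Number of server ports sharing each storage port."""
    return server_ports / storage_ports

# A 48-port edge switch: 40 ports for 4-Gbps hosts, 8 ports used as 4-Gbps ISLs.
print(isl_oversubscription(40, 4, 8, 4))   # 5.0 -> 5:1 edge-to-core
# 96 server ports fanned out across 8 storage ports hits the 12:1 guideline.
print(fan_out_ratio(96, 8))                # 12.0 -> 12:1
```

Because the edge-to-core oversubscription (5:1 here) is well under the storage fan-out (12:1), the ISLs are unlikely to be the bottleneck before the storage ports are.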
The collapsed-core fabric topology offers the features of the core-edge topology but delivers the required port densities in a more efficient manner. It is enabled by the high port density of the Cisco MDS directors.
The salient features of the collapsed-core topology are:
The resiliency of this topology is high due to its redundant structure, yet it avoids the excessive ISL count of a mesh topology.
The performance of this topology is predictable: paths between two communicating devices do not vary in length or hop count, and the direct path between two switches is always chosen.
Using ISL Port Channeling between the switches provides faster recovery with no fabric-wide disruption. The Port Channels between the switches load-share traffic over a single logical ISL.
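A Port Channel of this kind might be configured as in the following NX-OS sketch; the interface and channel-group numbers are hypothetical and would be mirrored on the peer switch:

```
! Bundle two Fibre Channel ISLs into one logical Port Channel.
! Traffic is load-shared across members; losing one member link
! does not disrupt the logical ISL or trigger a fabric rebuild.
interface fc1/13
  channel-group 1 force
  no shutdown
interface fc1/14
  channel-group 1 force
  no shutdown
interface port-channel 1
  channel mode active
  no shutdown
```

With `channel mode active`, the two switches negotiate membership automatically, so a misconnected member is excluded rather than isolating the link.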
The port density of this topology can scale quite high, but not as high as the core-edge topology. However, with the high port density offered by the MDS 9513 director, the collapsed-core topology can scale quite large.
The topology is simple to architect, implement, maintain, and troubleshoot.
The main advantage of this topology is the degree of scalability offered at a very efficient effective port usage. The collapsed-core design aims to offer very high port density while eliminating a separate physical layer of switches and their associated ISLs.
The only disadvantage of the collapsed-core topology is its scale limit relative to the core-edge topology. While the collapsed-core topology can scale quite large, the core-edge topology should be used for the largest fabrics. To continue scaling a collapsed-core design, the core can be converted to a core-edge design by adding another layer of switches.
Director-class switches, such as the Cisco MDS 9500 Series, are recommended for the collapsed-core design.
Note: SAN designs should always use two isolated fabrics for high availability, with both hosts and storage connecting to both fabrics. Multipathing software should be deployed on the hosts to manage connectivity between host and storage, so that I/O uses both paths and failover between fabrics is non-disruptive in the event of a problem in one fabric. Fabric isolation can be achieved using either VSANs or dual physical switches. Both provide separation of fabric services, although it could be argued that multiple physical fabrics provide greater physical protection and protection against equipment failure.