Hi, I have a customer requirement to run iSCSI in their environment. They are using 4 x Catalyst 9300s in stacking mode as core switches. My question is: how would you do the SAN-A and SAN-B separation on the Cat9ks, given that logically the stack is one big switch? I can set a higher MTU on them, that is no problem, but how will the physical and logical SAN separation work in this case? The server/host will have 2 x physical NICs (NOT bundled), one for SAN-A and the other for SAN-B.
Is this a supported design?
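Within a single stack, the usual approach is to separate the two SAN fabrics logically with VLANs and to land each host NIC on a different stack member. A minimal sketch, assuming example VLAN IDs (10/20), example interface names, and jumbo frames for iSCSI; adjust all of these for your environment:

```
! Assumed VLANs and interfaces -- examples only
system mtu 9198                  ! jumbo frames for iSCSI (9198 is the Cat9k maximum)
!
vlan 10
 name SAN-A
vlan 20
 name SAN-B
!
interface TenGigabitEthernet1/0/1   ! host NIC 1 -> SAN-A, on stack member 1
 switchport mode access
 switchport access vlan 10
 spanning-tree portfast
!
interface TenGigabitEthernet2/0/1   ! host NIC 2 -> SAN-B, on stack member 2
 switchport mode access
 switchport access vlan 20
 spanning-tree portfast
```

Note this only gives logical separation: both VLANs still share the stack's control plane, which is the crux of the question below.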
As far as the switch is concerned, SAN-A and SAN-B are not going to be separate. The design should work, but the major issue I see with it is the lack of switch redundancy. For example, if for whatever reason IOS crashes on the master switch, the entire stack goes out of business. I think a more resilient design would be two stacks with two switches per stack; that way, if one stack fails, you still have the other stack to work with. Also, just for future reference: for storage connectivity, the Nexus series switches should be used, as they are faster, with lower latency and deeper buffers. The Catalyst is usually used in campus networks.
Hi Reza, if the active switch in the stack goes down, the whole stack does NOT go down. The next member, whichever has the highest priority or the lowest MAC address, takes over the stack. Also, as long as MAC persistency is enabled (which it is by default), the stack's MAC address stays the same even if the active switch fails. So there is no downtime for anything that is dual-connected to 2 x stack members.
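For reference, the failover behavior above can be tuned with switch priorities and the persistent stack MAC. A sketch with assumed values (priorities are examples; on current Cat9k software the persistent MAC is already the default, so the last line may be redundant):

```
! Assumed priorities -- examples only; higher priority wins the active election
switch 1 priority 15             ! preferred active
switch 2 priority 14             ! preferred standby
!
stack-mac persistent timer 0     ! keep the stack MAC indefinitely after a failover
```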
The only thing worrying me is that in this case the stack will not physically separate SAN-A and SAN-B. I totally agree the DC switches should have been Nexus!
I am not talking about a switch crash. I am referring to an IOS crash or hang that affects the whole stack. I have seen cases where the master switch's IOS has a seizure and the whole stack is out until you reboot it.