07-18-2014 01:23 AM - edited 03-01-2019 11:45 AM
Hello,
In a Cisco UCS blade solution, in a hybrid storage topology (storage array directly attached to the Fabric Interconnects, plus an FC uplink to a Cisco MDS SAN with 3rd-party servers), will the 3rd-party servers be able to access the storage array that is directly attached to the Fabric Interconnects?
Thanks,
07-18-2014 01:38 AM
Let me respond with a question: if you have both 3rd-party and UCS servers needing access to the storage, why not move the storage from the FIs to the MDS?
With that:
1) you're closer to the reference designs (FlexPod, Vblock, etc.)
2) you avoid FC switching on the FIs (end-host mode is recommended for 95% of environments)
3) (possibly) a simplified topology
07-18-2014 02:56 AM
I agree with that, but just to confirm whether it is possible or not: let's say all ports on the MDS are already occupied.
Will the scenario work?
07-18-2014 03:24 AM
Yes, in FC switching mode the FI behaves like a ... switch. Just make sure zoning allows it.
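For illustration only, here is a minimal sketch of what the zoning might look like on the MDS side, assuming the VSAN carrying the direct-attached array is extended across the FC link between the MDS and the FI (all names, VSAN numbers and WWPNs below are invented):

    ! Fabric A - hypothetical values only
    zone name 3rdparty_host1_to_fi_array vsan 10
      member pwwn 21:00:00:24:ff:aa:bb:01   ! HBA of the 3rd-party server (initiator)
      member pwwn 50:0a:09:81:00:00:00:01   ! array port directly attached to the FI (target)
    zoneset name FABRIC_A vsan 10
      member 3rdparty_host1_to_fi_array
    zoneset activate name FABRIC_A vsan 10

With the FI in FC switching mode and a zone like that active in the fabric, the 3rd-party initiator and the direct-attached target can reach each other; fabric B would get an equivalent zone in its own VSAN.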
Edit: Below is _not_ a Cisco-recommended design. It's a peculiar use case.
I've seen a deployment where UCS blades were reaching storage via:
Blade -> local FI -> Nexus -> remote FI -> storage
The local FI had the same HA cluster attached, so the blade was seeing paths via both the local and the remote FI.
07-18-2014 03:42 AM
"Edit: Below is _not_ a Cisco-recommended design. It's a peculiar use case."
Show me the document / reference! It's TAC supported!
"The blade was seeing path via local and remote FIs."
Local / remote FI??? This is new terminology, please explain!
07-18-2014 03:49 AM
Recommended and TAC-supported might be two different things :-)
For the topology:
Two sets of UCS domains (one in each DC), interconnected via Nexus 5ks (one for each fabric).
A blade in UCS#1 sees a path via the locally connected storage, but will also see a path over UCS#2, where the other storage controller is connected.
View from the NetApp DSM attached.
07-18-2014 04:42 AM
I am a 16-year Cisco veteran (alumnus), DSI II, so I know the difference :-)
In the original posting there was never a question of two DCs, so your argument, although valid, doesn't apply.
I agree that the solution I mention is not ideal (some call it ugly), mainly from the operational point of view. However, if the dear customer wants to do it, OK, as long as it is officially TAC supported.
The reason why my big UCS reference went for direct-attached storage was that the server guys can do the zoning themselves (it is essentially part of the service profile design); they don't depend on the storage guys, as they would in the case of a SAN.
However, later in the project they realized that they needed additional functionality and had to attach to SAN storage as well. Bingo...
07-18-2014 06:10 AM
Walter,
Agreed on all points. (Obviously I can't comment on your tenure of working with Cisco :D)
The design I mentioned was just to show that even crazier things happen to work, even if they are not recommended (even if supported).
The message I wanted to get across is that FC switching is something we left for a few specific use cases. Yes, it will work; yes, it will be supported.
Yes, you can do pretty flexible (and even borderline crazy) things, but you risk:
- Hitting certain problems as the first/only deployment in the world.
- TAC taking a longer time to understand/diagnose your setup.
We support quite a wide variety of designs; with rare exceptions, nobody implements CVDs fully.
M.
07-18-2014 03:30 AM
Hi Dani,
I have a huge UCS installation with directly attached NetApp storage as well as SAN-connected HDS disk arrays.
The implementation is tricky, and for a long time it was not TAC supported; today it is.
1) You need the FIs in FC switch mode.
2) You can only attach MDS or N5K as FC switches to the FI; there is no interop mode, therefore no Brocade SAN support.
3) You need different VSANs for the direct vs. the SAN attachment, e.g. VSANs 10/20 on fabric A and 11/21 on fabric B.
4) Zoning for the direct attachment is very simple and is done with UCSM and service profiles.
5) Zoning for the SAN attachment is done on the MDS and is therefore quite different, which in my opinion is an operational nightmare (a rough sketch follows below).
From the above you can see that a host anywhere can access the direct-attached storage if its HBA is in the proper VSAN, e.g. 10/20.
If your server must access direct AND SAN-attached storage, you have a problem: Inter-VSAN Routing???
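To make points 3) and 5) more concrete, here is a rough, illustrative sketch of the MDS side for fabric A only (the VSAN numbers follow the example above, but interface numbers and WWPNs are invented; the direct-attach VSAN 10 is zoned from UCSM via the service profile / storage connection policy and is not shown):

    ! MDS, fabric A - hypothetical values only
    vsan database
      vsan 20 name SAN_ATTACH_A
      vsan 20 interface fc1/1               ! link towards Fabric Interconnect A
    zone name ucs_blade1_to_hds vsan 20
      member pwwn 20:00:00:25:b5:aa:00:01   ! vHBA of a UCS blade (initiator)
      member pwwn 50:06:0e:80:12:34:56:01   ! HDS array port (target)
    zoneset name FAB_A vsan 20
      member ucs_blade1_to_hds
    zoneset activate name FAB_A vsan 20

Fabric B would mirror this with VSAN 21. Since a vHBA sits in exactly one VSAN, a blade that needs both the direct-attached storage (VSAN 10/11) and the SAN-attached storage (VSAN 20/21) ends up needing either additional vHBAs or IVR, which is exactly the problem mentioned above.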
If you need more information, please contact me directly.
Cheers Walter.