10-01-2012 08:33 AM - edited 03-01-2019 10:39 AM
Hello,
When booting from SAN in UCS, what's the best practice when creating the Storage Groups in the disk array?
For instance, with VMware: is it best practice to have one storage group per ESXi host, containing its own ESXi boot LUN (ID 0) plus the VM datastore LUNs it needs?
Do other environments (Linux, Windows, Hyper-V) have any special considerations in this regard?
Thanks,
10-01-2012 08:46 AM
Dani,
It depends on the array, but normally you have only a single host and multiple LUNs within a storage group. This includes any boot LUNs and optionally any shared LUNs (for clustering, such as SQL, Hyper-V, or ESX).
If you're not doing any shared/clustered LUNs, then it's pretty simple. Each storage group would contain only one host and its assigned LUNs (boot and/or data).
btw. Boot LUNs don't have to be LUN 0; that's just standard practice. The boot LUN ID can be anything you want, as long as it matches the initiator boot configuration (i.e. the UCS boot policy).
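As a rough sketch of that matching, here is roughly what a SAN boot policy looks like in the UCS Manager CLI. This is from memory and heavily hedged: the policy name, target WWPN, and LUN ID are made up, and the exact command syntax varies by UCSM release, so treat it as an illustration rather than a copy-paste recipe. The `set lun 5` here must match the LUN ID the array presents to that host's storage group:

```text
! Hypothetical UCSM CLI sketch -- names, WWPN, and LUN ID are invented
UCS-A# scope org /
UCS-A /org # create boot-policy SAN-Boot-A
UCS-A /org/boot-policy* # create storage
UCS-A /org/boot-policy/storage* # create san-image primary
UCS-A /org/boot-policy/storage/san-image* # create path primary
UCS-A /org/boot-policy/storage/san-image/path* # set wwn 50:06:01:60:3e:a0:12:34
UCS-A /org/boot-policy/storage/san-image/path* # set lun 5
UCS-A /org/boot-policy/storage/san-image/path* # commit-buffer
```

If the array presents the boot LUN as ID 5 but the boot policy says 0 (or vice versa), the blade simply won't find its boot device.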
Regards,
Robert
10-01-2012 08:50 AM
Is it also a good option to have everything in one big storage group, differentiating the LUN ID for every single ESXi boot LUN in each ESXi host's boot policy? Or is that not recommended for some reason?
Thanks,
10-01-2012 09:24 AM
No, it's a single host per storage group. LUN IDs are relative only to the host they're presented to in that SG, so you can't have multiple hosts in an SG and try to separate them using LUN IDs.
Robert
10-02-2012 01:10 AM
Thanks Robert.
Just out of curiosity: what's the reason not to put many hosts in a single storage group, provided that we separate them by LUN IDs? Is it just a best practice? Are there security implications? Could this configuration be unstable?
Thanks a lot,
10-02-2012 05:26 AM
It's a security issue. Because the LUN ID can easily be changed on the host, a server admin who mistakenly changed it could clobber the wrong LUN. Also, once a host is booted, it can see and access every LUN within its storage group regardless of LUN ID. The LUN ID only matters to a host that is trying to SAN boot.
The two main forms of security enforced in storage are zoning and masking.
Zoning - done on the storage switch; acts like an ACL limiting the scope of what a zone's members can see. A zone will normally contain only one host and the target WWNs. **Who can I see?**
Masking - done on the storage array; limits "what" LUNs a host has access to. This is done in the form of storage groups. **What can I access?**
Circumventing either poses a great risk of data corruption/destruction, since operating systems can only read their native file systems. For example: if you had all your hosts (ESX, Windows, etc.) in one storage group and tried to separate them only by LUN ID, a one-digit change of the boot target LUN ID on the initiator could cause a host to fail to read the filesystem and potentially write a new signature to the disk, overwriting your existing data. Windows can't read a Linux partition and vice versa.
Follow these best practices and your data will be much safer and more secure.
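To make the zoning half concrete, here is a hedged sketch of a single-initiator zone on a Cisco MDS fabric switch (NX-OS). The zone, zoneset, VSAN number, and both WWPNs are invented for illustration; only the host's vHBA initiator and the array's target port share a zone, so that host can "see" nothing else on the fabric:

```text
! Hypothetical example; VSAN and WWPNs are made up
zone name ESX01_fc0 vsan 100
  member pwwn 20:00:00:25:b5:01:00:0a   ! host vHBA (initiator)
  member pwwn 50:06:01:60:3e:a0:12:34   ! array front-end port (target)
zoneset name Fabric-A vsan 100
  member ESX01_fc0
zoneset activate name Fabric-A vsan 100
```

Masking (the storage-group side) is then configured on the array itself, mapping that same initiator WWPN to its boot and data LUNs.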
Regards,
Robert
01-29-2013 03:44 AM
Hello,
I came across the following link regarding ESXi 5.
It contains this sentence:
"For environments that boot from a SAN or use Auto Deploy, it is not necessary to allocate a separate LUN for each ESXi host. You can co-locate the scratch regions for many ESXi hosts onto a single LUN. The number of hosts assigned to any single LUN should be weighed against the LUN size and the I/O behavior of the virtual machines."
I'm wondering how this affects the discussion we had in this thread some time ago...
Any inputs please?
Thanks,
01-29-2013 01:52 PM
I don't really understand the statement "For environments that boot from a SAN or use Auto Deploy, it is not necessary to allocate a separate LUN for each ESXi host."
Does this mean you can have a single LUN that all hosts boot from, or are they only talking about the scratch space?
01-29-2013 01:55 PM
It's just referring to a shared scratch.
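For reference, a host's scratch location can be pointed at a per-host directory on a shared VMFS datastore. This is a hedged sketch: the datastore path and directory name below are hypothetical, and each host must get its own unique directory on the shared LUN:

```text
# Hypothetical path; create one .locker directory per host on the shared datastore
esxcli system settings advanced set \
  -o /ScratchConfig/ConfiguredScratchLocation \
  -s /vmfs/volumes/shared-datastore/.locker-esx01
# The host must be rebooted for the new scratch location to take effect
```

Note this shares only the scratch region; each host still SAN-boots from its own dedicated boot LUN.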
Robert
01-30-2013 05:38 AM
I wonder why you would want to do that? It seems like it would add unneeded complexity to something that is very simple.
01-30-2013 06:39 AM
@Jeremy
Agreed. The minimal savings in shared LUN storage for the scratch space doesn't justify the added complexity. It's a ticking time bomb waiting to explode, if you ask me.
Robert
01-29-2013 04:38 PM
In addition to Robert's point, allowing an ESXi install to see multiple boot LUNs during boot may cause a PSOD:
"An ESXi host fails to boot with the purple diagnostic screen error: Two filesystems with the same UUID have been detected"
Thank You,
Dan Laden
Cisco PDI Data Center
02-01-2013 02:18 AM
Alright, thanks a lot