
Storage Groups

dani_bosch
Level 1

Hello,

When booting from SAN in UCS, what's the best practice when creating the Storage Groups in the disk array?

For instance, VMware: is it best-practice to have one storage group for each ESXi and add its own ESXi Boot LUN (id=0) plus the VM datastore LUNs needed?

Do other environments (Linux, Windows, Hyper-V) have anything special to take care of in this regard?

Thanks,


Robert Burns
Cisco Employee

Dani,

It depends on the array, but normally you have only a single host and multiple LUNs within a Storage Group.  This will include any Boot LUNs and optionally any shared LUNs (for clustering such as SQL, Hyper-V or ESX).

If you're not doing any shared/clustered LUNs, then it's pretty simple.  Each Storage Group would contain only one host and any assigned LUNs (boot and/or data).
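On a CLARiiON/VNX-style array, for example, that one-group-per-host layout might look roughly like the sketch below (the SP address, host name, and ALU/HLU numbers are made-up placeholders; adapt to your array):

```
# Illustrative naviseccli sketch - one storage group per ESXi host.
# -hlu is the LUN ID the host sees; -alu is the array-side LUN number.
naviseccli -h 10.0.0.10 storagegroup -create -gname esxi01_sg
naviseccli -h 10.0.0.10 storagegroup -connecthost -host esxi01 -gname esxi01_sg
naviseccli -h 10.0.0.10 storagegroup -addhlu -gname esxi01_sg -hlu 0 -alu 101   # boot LUN
naviseccli -h 10.0.0.10 storagegroup -addhlu -gname esxi01_sg -hlu 1 -alu 201   # shared datastore LUN
```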

btw, Boot LUNs don't have to be LUN 0 - that's just standard practice.  The boot LUN ID can be anything you want, as long as it matches the initiator boot configuration (i.e. the UCS Boot Policy).
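For context, the host side of that match lives in the UCS Boot Policy.  The UCSM CLI sketch below is illustrative only - the exact scope hierarchy varies by UCSM release, and the WWN and LUN values are placeholders:

```
# Illustrative UCSM CLI sketch - verify syntax against your UCSM release.
scope org /
  create boot-policy SAN-Boot
    create san
      create san-image primary
        set vhba vhba0
        create san-boot-target primary
          set wwn 50:06:01:60:3c:e0:12:34   # array target port (placeholder)
          set lun 5                          # must match the LUN ID masked to this host
```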

Regards,

Robert

Is it also a good option to have everything in one big single storage group, obviously differentiating the LUN ID for every single ESXi Boot LUN in each ESXi's boot policy? Or is that not recommended for some reason?

Thanks,

No, it's single host per storage group.  LUN IDs are relative to only the host they're presented to in that SG.  So you can't have multiple hosts in a SG and try to separate them using a LUN ID.

Robert   

Thanks Robert.

Just out of curiosity: what's the reason to not put many hosts in a single Storage Group, provided that we separate them by means of LUN IDs? Is it just a best practice? Does this imply security issues? Could this config be unstable?

Thanks a lot,

It's a security issue.  Because the LUN ID can be changed easily on the host, you could essentially clobber the wrong LUN if your server admin mistakenly changed the LUN ID.  Also, once a host is booted up, it will be able to see & access every LUN within the storage group regardless of LUN ID.  The LUN ID matching only matters to a host trying to SAN boot.

The two main forms of security enforced in storage are Zoning and Masking.

Zoning - done on the storage switch, acts like an ACL limiting the scope of what a zone's members can see.  A zone will normally contain only one host initiator and the target WWNs.  **Who can I see**

Masking - done on the storage array, limits *which* LUNs a host has access to.  This is done in the form of Storage Groups.  **What can I access**
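To make the zoning half concrete, on an MDS/NX-OS fabric a single-initiator zone looks roughly like this (the VSAN number and WWPNs below are placeholders, not real values):

```
! Single-initiator zoning sketch for MDS/NX-OS - values are examples only.
zone name esxi01_vhba0 vsan 100
  member pwwn 20:00:00:25:b5:01:0a:01   ! host initiator vHBA
  member pwwn 50:06:01:60:3c:e0:12:34   ! array target port
zoneset name FabricA vsan 100
  member esxi01_vhba0
zoneset activate name FabricA vsan 100
```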

Circumventing either poses a great risk of data corruption/destruction, since operating systems can only read their native file systems.  Ex.: if you had all your hosts in one storage group (ESX, Windows, etc.) and tried to separate them only by LUN ID, a simple one-digit change of the boot target LUN ID on the initiator could cause a host to fail to read the filesystem and potentially write a new signature to the disk - overwriting your existing data.  Windows can't read a Linux partition and vice-versa.

Follow these best practices and your data will be much safer & more secure.

Regards,

Robert

Hello,

I came across the following link regarding ESXi 5:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2032756

One can read the following sentence on it:

"For environments that boot from a SAN or use Auto Deploy, it is not necessary to allocate a separate LUN for each ESXi host. You can co-locate the scratch regions for many ESXi hosts onto a single LUN. The number of hosts assigned to any single LUN should be weighed against the LUN size and the I/O behavior of the virtual machines."

I'm wondering how this affects the whole discussion we had in this thread some time ago...

Any inputs please?

Thanks,

I don't really understand the statement "For environments that boot from a SAN or use Auto Deploy, it is not necessary to allocate a separate LUN for each ESXi host".

Does this mean you can have a single LUN that all hosts boot from, or are they only talking about the scratch space?

It's just referring to a shared scratch.
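For anyone landing here later: the per-host scratch location the KB talks about is just an advanced setting, so shared scratch means pointing each host at its own unique directory on the shared LUN.  A rough sketch (the datastore name and directory are placeholders):

```
# Hedged sketch - each host gets its own directory on the shared scratch LUN.
mkdir /vmfs/volumes/SharedScratch/.locker-esxi01
esxcli system settings advanced set -o /ScratchConfig/ConfiguredScratchLocation \
    -s /vmfs/volumes/SharedScratch/.locker-esxi01
# The new scratch location takes effect after a reboot.
```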

Robert

I wonder why you would want to do that.  Seems like it would add unneeded complexity to something that is very simple.

@Jeremy

Agreed.  The minimal savings in shared LUN storage for the scratch space don't justify the added complexity.  It's a ticking time bomb waiting to explode, if you ask me.

Robert

Daniel Laden
Level 4

In addition to Robert's findings: allowing an ESXi install to see multiple boot LUNs during boot may cause a PSOD.

An ESXi host fails to boot with the purple diagnostic screen error: Two filesystems with the same UUID have been detected

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2000476

Thank You,

Dan Laden

Cisco PDI Data Center

Want to know more about how PDI can assist you?

http://www.youtube.com/watch?v=3OAJrkMfN3c

Alright, thanks a lot
