04-26-2013 01:13 AM - edited 03-01-2019 11:00 AM
Is an iSCSI vNIC created only to allow boot from SAN? So when the OS is installed these vNICs will not show up, only the overlay NIC will show?
I am trying to understand what vNICs are required to allow boot from SAN and iSCSI SAN connectivity once the OS is loaded.
04-26-2013 04:43 AM
Yes. iSCSI vNICs are purely for booting from SAN. Depending on the OS, it may or may not show up as a NIC; VMware, for example, will sometimes show a dedicated vSwitch with an iscsi_vnic uplink.
If you just want to access iSCSI storage (without SAN boot), then you just need to load the appropriate OS software to leverage the existing NICs for iSCSI.
For ESX you need to enable the Software iSCSI Initiator
For Windows you'd install/load the MS iSCSI Software client
etc.
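As a hedged sketch of the ESX step above (esxcli namespace from ESXi 5.x onward; adapter names vary per host):

```shell
# Enable the ESXi software iSCSI initiator
esxcli iscsi software set --enabled=true

# Confirm it is enabled (should report "true")
esxcli iscsi software get

# The software iSCSI adapter should now appear in the adapter list
esxcli iscsi adapter list
```

On Windows, the rough equivalent is starting the MSiSCSI service and configuring targets through the iSCSI Initiator control panel.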
If you're not booting from iSCSI it becomes the responsibility of the OS level software to manage the software iSCSI connections. UCS would only manage the iSCSI boot configuration.
Robert
04-26-2013 06:38 AM
So I can use the iSCSI vNIC to boot from SAN, then use the overlay vNIC within the host OS for iSCSI SAN access (bound to the iSCSI service in ESX)?
Is this supported (using the overlay vNIC for SAN access within the OS)? Or do I need to create separate vNICs used purely for SAN access from within the OS?
04-26-2013 06:48 AM
Correct.
Yes, this is supported. The iBFT functionality of the VIC (the adapter I assume you're using) allows the iSCSI boot of the host; once booted, access to an iSCSI SAN data LUN (such as a VMFS datastore) leverages the software initiator built into ESX.
Robert
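To use that overlay vNIC for data access in ESX, the vmkernel port carrying iSCSI traffic is typically bound to the software initiator. A minimal sketch, assuming hypothetical names vmhba33 (software iSCSI adapter) and vmk1 (iSCSI vmkernel port):

```shell
# Bind the iSCSI vmkernel port to the software iSCSI adapter
# (vmhba33 and vmk1 are placeholders; check your host's actual names)
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1

# Verify the binding
esxcli iscsi networkportal list --adapter=vmhba33
```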
04-26-2013 07:10 AM
Thanks Robert
02-20-2014 08:34 AM
So what I'm getting from this discussion is that UCS ONLY handles the iSCSI boot, and the ESXi software ONLY handles the storage/SAN access? So there is no need to configure two vNICs on the UCS, one for iSCSI boot and one for iSCSI storage/SAN access?
02-20-2014 08:42 AM
Correct. You create your vNICs which will be used for block access (within the OS/hypervisor) and then overlay logical iSCSI vNICs on top of these, which are then used by the service profile to boot. The logical vNICs do not show within the OS/hypervisor.
02-20-2014 09:02 AM
Great! thanks NaelShahid.
Another question: I have some confusion about the ESXi management IPs. I have two pools, one for the Cisco side that the blades use to KVM into the UCS, and one for ESXi blade management. I was questioning the validity of keeping them on the same subnet or not; our VNX doesn't like the fact that they are on the same subnet.
What I'm really asking is: what is best practice?
Thanks.
02-21-2014 01:16 AM
I personally would have separate IP subnets for blade KVM access and ESXi management. I would think that having a management network within the UCS domain which also passes through the UCS data plane would be unsupported; under FlexPod, this would be unsupported.
For your VNX, I would think you could have this on the same management network as KVM. You would want to split this from the ESXi management network because, without filtering iSCSI targets, your ESXi hosts could potentially access iSCSI storage over the management network. I am speaking from a NetApp viewpoint here, but I would suggest it may be the same for VNX.
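One hedged way to sanity-check that separation from the ESXi shell is to confirm which vmkernel interface can actually reach the storage target (vmk1, vmk0, and the target IP below are placeholders for your environment):

```shell
# Ping the iSCSI target portal out of the iSCSI vmkernel port
vmkping -I vmk1 192.168.50.10

# If the networks are properly separated, the same target should NOT be
# reachable via the management interface (vmk0)
vmkping -I vmk0 192.168.50.10
```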