04-24-2014 05:57 PM - edited 03-01-2019 11:38 AM
We are trying to set up Auto Deploy for UCS B200 M3 blade servers. Our setup has the chassis connected to dual 6248 Fabric Interconnects. We were successful in getting this to work when the blades were identified by MAC address configured on the DHCP server (Infoblox). However, a server may attempt to PXE boot via either NIC and thus present two different MAC addresses, and the DHCP server cannot support that scenario (mapping two MAC addresses to one IP address). That led us to the idea of using the GUID/UUID of the blade as a unique client identifier, since it is the same no matter which NIC is used.
We have tried to set this up, but have been unsuccessful. The blade sends out its GUID using DHCP Option 97, but the DHCP server only looks for the client ID in DHCP Option 61. We have not been able to determine how, or whether, the blade server can send its GUID via DHCP Option 61, and Infoblox tells us that their server cannot be configured to accept DHCP Option 97 as a client identifier.
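For comparison, here is a rough sketch of what we are hoping the DHCP server could do, written in ISC dhcpd syntax (which can evaluate Option 97 via a class match); the UUID bytes, class name, and addresses are made up. Per RFC 4578, the Option 97 value is a one-byte type field (0) followed by the 16-byte UUID, so the match skips the first byte:

    # Hypothetical ISC dhcpd sketch; Infoblox reportedly cannot do this.
    # Option 97 is not predefined, so declare it first.
    option pxe-client-id code 97 = string;

    # Match the 16 UUID bytes that follow the one-byte type field.
    class "blade-esx01" {
        match if substring(option pxe-client-id, 1, 16)
            = 12:34:56:78:9a:bc:de:f0:12:34:56:78:9a:bc:de:f0;
    }

    subnet 10.0.0.0 netmask 255.255.255.0 {
        pool {
            allow members of "blade-esx01";
            # fixed-address cannot be tied to a class, so a
            # one-address range serves as the reservation.
            range 10.0.0.51 10.0.0.51;
        }
    }

Because the UUID is the same regardless of which NIC the blade boots from, this would give one stable IP per blade without needing two MAC-to-IP mappings.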
Has anyone encountered and resolved this situation? Surely it isn't unique; blade servers with two NICs are common.
Thanks in advance for your response.
Ron Buchalski
04-25-2014 01:11 AM
Hi Ron,
I am currently setting up an Auto Deploy environment, but I only use one NIC per host for boot from SAN. Because of the automatic failover capability in UCS, I don't think I need a second NIC.
Why do you use a second NIC? Am I missing something, or is it just to have redundancy for management in vCenter?
Frank
05-07-2014 01:38 PM
Our current UCS deployment maps each NIC to a specific fabric interconnect, so choosing a single NIC per host, bound to a single fabric interconnect, could be a problem if that fabric has connectivity issues.
05-08-2014 03:29 AM
This is why you should set the "hardware failover" flag in the vNIC definition. If your vNIC is attached to fabric A and A fails, you are automatically switched to fabric B.
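As a rough illustration, fabric failover can also be enabled from the UCS Manager CLI; the service profile and vNIC names below are made up. "a-b" means fabric A is primary with failover to fabric B:

    # UCSM CLI sketch (hypothetical names)
    scope org /
    scope service-profile ESX-Host-1
    scope vnic eth0
    set fabric a-b
    commit-buffer

If the vNIC comes from an updating vNIC template, the same change would be made on the template instead, and it would propagate to all bound service profiles.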
05-08-2014 07:18 AM
Yes, that's an option we're going to look at. It will require a change to our template to accommodate it. But this should solve the issue for us.
Thank you,
-rb
05-08-2014 10:16 AM
I think the workaround is known, but the main question remains: why is the UUID/GUID approach not working, and how can it be made to work?
05-08-2014 08:17 AM
If one uses the hardware failover flag, then we may end up with only one NIC for management. In that case, vSphere will show a warning (a yellow bang) stating that the management network has no redundancy.
05-08-2014 10:12 AM
Why not use multiple vNICs?
One vNIC for the Auto Deploy PXE boot, connected e.g. to fabric A, with the failover flag set
Two vNICs for management, one on each fabric, without the failover flag set (see the sketch below)
I also see many customers using the classic vSwitch for management / vMotion / storage, and a DVS or the Nexus 1000V (N1k) for general VM data traffic.
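To illustrate the management piece, here is a minimal sketch of attaching both management vNICs to the standard vSwitch from the ESXi shell, which clears the no-redundancy warning mentioned above. The vmnic and vSwitch names are assumptions:

    # ESXi shell sketch; assumes vmnic1/vmnic2 are the two management
    # vNICs (one pinned to each fabric) and vSwitch0 carries management.
    esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0
    esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch0

With one uplink per fabric and no failover flag on either vNIC, redundancy is handled by vSphere NIC teaming instead of by the fabric interconnects.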