Hi, I hope someone can help.
We have two separate UCS installations with identical hardware configs: two FIs, NetApp direct-attached storage, and upstream HP switches.
I have one installation up and running, iSCSI boot working fine.
The other installation was configured by another engineer, who got servers up and running, booting over iSCSI. However, I have now gone back to provision some more blades with the template used on the existing blades, and iSCSI boot won't work. I have cloned a template from a working blade, and even created a new template and compared it against my own working installation, but still no joy.
The server boots, and I get 'Initialize Error 1' from both of the NICs. The vNICs have the iSCSI VLAN set as native. The iSCSI VLAN is configured under both the Appliance and LAN clouds. I am using a different IQN on each vNIC. I don't see any logins on the NetApp from these new blades (I do see them from our existing blades).
There could be a network problem. Are there multiple uplinks for the LAN? Try pinning the vNIC to a known-working uplink carrying the same VLAN and check again. I can take a look if needed.
Thanks Madhu. We have two uplinks from each FI across a pair of HP switches using distributed trunking. I believe the links are active/active.
I came back to site this morning, booted a blade, and got the 'Option ROM successfully installed' message, having made no config changes. I then booted the VMware installer, but it still could not see the LUN. I could see the session on the NetApp, though. I pinned the traffic to one port channel; same result.
After about an hour, the server started showing the Initialize Error 1 message again.
My suspicion is that some upstream network issue is causing this behaviour. What dependency do the uplinks from the FI have on iSCSI boot?
Your iSCSI vNICs are overlaid on top of the regular vNICs, and their traffic goes out through the Ethernet uplinks. To me it clearly sounds like an uplink network issue. To simplify things, use a single vNIC and a single uplink if you can and give it a try.
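To confirm which uplink a given vNIC is actually pinned to, you can check from the FI's NX-OS shell. A quick sketch (run on each fabric; your vEth and port-channel numbers will differ):

```
UCS-A# connect nxos                          ! drop into the NX-OS shell on fabric A
UCS-A(nxos)# show pinning server-interfaces  ! lists each vEth and the border port/port-channel it is pinned to
UCS-A(nxos)# show pinning border-interfaces  ! the reverse view: uplinks and the vEths pinned to them
```

If the vEth carrying the iSCSI overlay is pinned to a suspect uplink, a static LAN pin group (LAN > LAN Cloud > Pin Groups in UCSM) will force it onto the known-good one for testing.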
Please email me at firstname.lastname@example.org if you'd like me to poke around over a WebEx session.
Strange behavior indeed. iSCSI has no different dependencies than any other Ethernet traffic. The traffic will be pinned to one of your border interfaces (or port channels). When you get Initialize Error 1, it means an invalid value was returned (MCPERR_EINVAL), i.e. the initiator can't reach the target.
What I suggest you try is the following.
1. Boot the blade but enter the BIOS (F2) when prompted. This keeps the iSCSI option ROM connected (or attempting to connect).
2. From the UCSM CLI connect to the adapter.
connect adapter x/y/z (chassis, blade, adapter slot)
3. Issue the "connect" command to open the adapter shell.
4. Connect to the Master Control Program with "attach-mcp"
5. Check the iscsi configuration of the adapter with "iscsi_get_config"
For a successful connection, you'll want to see something like:
CA-Dev-A# connect adapter 1/4/1
adapter 1/4/1 # connect
adapter 1/4/1 (top):1# attach-mcp
adapter 1/4/1 (mcp):1# iscsi_get_config
vnic iSCSI Configuration:
dhcp status: false
IP Addr: 172.x.x.142
Subnet Mask: 255.255.255.0
Target Idx: 0
Prev State: ISCSI_TARGET_DISABLED
Target Error: ISCSI_TARGET_NO_ERROR
IP Addr: 172.x.x.49 <<< Should be your target IP
Boot Lun: 0
Ping Stats: Success (9.698ms) <<
session_id: 0 <<
With the blade paused in the BIOS utility like this, you should be able to test ping your Target from the Initiator, or vice-versa.
adapter 1/4/1 (mcp):4# iscsi_ping 172.x.x.49
id name tgt address port tcp ping status
--- -------------- --- --------------- ----- ---------------------------------
13 vnic_1 0 10.29.177.52 3260 Success (11.900ms)
Once the host and target can reach each other, it becomes an array-side access issue. Ensure the LUN is correctly masked to the initiator.
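On the NetApp side, the masking check looks roughly like this (sketch assuming a 7-Mode Data ONTAP filer; clustered ONTAP syntax differs, and the igroup name here is just an example):

```
filer> iscsi session show    # confirm the blade's initiator IQN has actually logged in
filer> igroup show           # verify the blade's IQN is a member of the boot igroup
filer> lun show -m           # verify the boot LUN is mapped to that igroup, as LUN 0 for booting
```

Since each vNIC uses its own IQN, both IQNs need to be in the igroup, and the LUN ID in the mapping must match the Boot LUN configured in the service profile (0 in the output above).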