F0283 VIF 1 / 2 B-42/44 down, reason: Bound Physical Interface Down

Sandemann
Level 1

Hi

I have some issues after migrating from a CX4 SAN to a VNX SAN. Everything has been moved from the CX4 to the VNX, and I have disabled the FC uplink port that was connected to the CX4. Both of my FIs are now connected only to the VNX.

We want SAN boot on our ESXi hosts; some of them are installed on a boot LUN on the VNX, and some others are installed on local disk in the blade.

My issue is that all my hosts should boot from SAN, but suddenly this fails.

I zone the host on our MDS switches and do the necessary configuration on the VNX so the host can see its boot LUN. The SAN boot target in the boot policy has been changed so it points to the VNX.
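(For reference, the zoning on each fabric looks roughly like this; the zone/zoneset names and WWPNs below are made-up examples, with VSAN 10 as fabric A:)

MDS-A# configure terminal
MDS-A(config)# zone name ESX01_VNX vsan 10
MDS-A(config-zone)# member pwwn 20:00:00:25:b5:aa:00:01    <-- host vHBA initiator (example)
MDS-A(config-zone)# member pwwn 50:06:01:60:3e:a0:01:23    <-- VNX SP front-end port (example)
MDS-A(config-zone)# exit
MDS-A(config)# zoneset name FABRIC_A vsan 10
MDS-A(config-zoneset)# member ESX01_VNX
MDS-A(config-zoneset)# exit
MDS-A(config)# zoneset activate name FABRIC_A vsan 10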

During the install I can see my boot LUN and install ESXi without any issues. I then remove the installation media and reboot the server.

When it comes back up, it says:

Reboot and Select proper Boot device

or Insert Boot Media in selected Boot device and press a key.

And I get 8 major error messages like this:

VIF 1 / 2 B-42/44 down, reason: Bound Physical Interface Down

What is wrong? I am doing the same thing now as I did when I installed the hosts that are working.

6 Replies

Robert Burns
Cisco Employee

Can you try to re-acknowledge the server the profile is associated with? The error you're seeing usually means the host initiator interface (VIF) is down because it has no uplink to correctly pin to. You said you shut down the ports the CX4 was previously connected to by disabling them. Can you re-ack the blade, which should re-pin it to the new/remaining uplink?
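(For reference, a re-ack can also be done from the UCSM CLI; the chassis/slot below is just an example:)

UCS-A# acknowledge server 1/2    <-- chassis 1, slot 2 (example)
UCS-A# commit-buffer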

If that doesn't correct the problem, I'd look into your boot policy. Double- and triple-check the boot targets entered in the profile, make sure they match the targets on the VNX, and confirm that each one is online and logged into the MDS (check the zoning and "show flogi database"). The symptoms you're seeing used to be common with Clariion arrays, as they are Active/Passive: during the installation any available path has access to the LUN, but rebooting required the first listed target in the boot policy to be the "owner" of the LUN. This was easily fixed by trespassing the LUN to the other Storage Processor on the array. With VNX I'm pretty sure they are Active/Active arrays, so this shouldn't matter.
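On the MDS side, a quick sanity check would be something like this (VSAN 10 as an example):

MDS-A# show flogi database vsan 10      <-- confirms the VNX ports and the blade's vHBA are logged in
MDS-A# show fcns database vsan 10       <-- name server view of the same logins
MDS-A# show zoneset active vsan 10      <-- confirms initiator and targets are in the same active zone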

Regards,

Robert

Thanks for your answer.

It didn't help to re-ack the blade.

All the error messages were gone after the re-ack, and I installed ESXi to the boot LUN and restarted the server. The same issue occurred again: the error messages are back, and it can't find the boot LUN.

I have other service profiles that use the same boot policy and are working just fine. I don't dare to reboot one of these, because they may not come back up.

I have tried to delete the service profile and clone a new one from another that is working, and I have tried to deploy a new one from the service profile template, but with no luck.

One thing that was done on most of the SPs is that I set a pin group toward the VNX uplinks so I could boot while the CX4 was still available on the UCS. I have tried both with and without it on my problem server, but with no luck.

Hi Stig,

A few more things to check:

1) Is your storage directly connected to the FI, and do you have an MDS as well? If so, make sure you do not allow the same VSAN on both uplinks (this assumes you are running version 2.1 and above). A quick way to check what the uplinks are carrying is sketched after this list.

2) If you are running 2.1 with direct attach, you need to create your zones at the FI/UCSM level.
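(To see what each vHBA is pinned to and which VSANs the FC uplinks carry, you can check from the FI's NX-OS shell; this assumes the FIs are in the default FC end-host (NPV) mode:)

UCS-A# connect nxos a
UCS-A(nxos)# show npv flogi-table       <-- each server vHBA login and the uplink it is pinned to
UCS-A(nxos)# show vsan membership       <-- which interfaces carry which VSAN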

./abhinav


Thanks for the new answers :)

There are MDS switches between the FIs and our VNX. I'm running 2.0.3c across the entire environment.

I have VSAN 10 on the uplinks on FI A and VSAN 20 on the uplinks on FI B.

What is strange is that I have 8 B200 M2 blades, and 2 of them are working just fine on the same UCS, using the same boot policy and local disk policy.

I also have 2 B200 M3 blades in the same UCS that also work fine with the same settings!

I'm sure it is something I have overlooked or missed somewhere, but I can't figure out what it is.

Hi Stig,

Did you ever find a solution to this problem?

I've got the exact same problem now; the difference is that I'm moving from an HP EVA to a Dell Compellent, and I've already done the operation on three servers. Now, on the last server, I'm getting this F0283 error (8 of them) and the server can't find the boot disk. And this is after I've installed VMware on the disk! Booting to the install image and installing to the SAN disk works just fine, but when all is done and wrapped up and I do the last boot, no boot disk is found.

The profile is cloned and identical to the other three servers. Very weird.

So... did you solve it?

Hi

After we ran into these issues, we decided to go back to local disk.

We have local disks in all hosts now, so I guess that's how we solved it.
