UCS failure test

Amr Hafez

Hi All

I deployed a new DC using a UCS 5108 with 4 blades (only 2 are in use for now), plus 2 FIs connected to the upstream core switch.

2 links between each IOM and its FI (chassis discovery policy set to 1-link)

1 uplink per FI to the core switch, as in image1

I also deployed ESXi 6 and Server 2012 R2 on blades 5 and 6 for testing.

When I tried to test connectivity and HA with the following methodology:

1- Disable one IOM1-FIA link: everything is good, no packets dropped for ESXi, VMs, or mgmt.

2- Disable both IOM1-FIA links: everything is still good.

3- Disable both IOM1-FIA links and one IOM2-FIB link: the ESXi host on blade 5 lost connectivity, but the ESXi host on blade 6 did not! I found that the IOM2 backplane port for blade 5 was down, while the one for blade 6 stayed up, as shown in the test1, test2, and test3 images.

I do not know why the behavior differs between the two blades.

Could you help me?

thanks in advance



Accepted Solutions

For the port channel, go to the Equipment tab > Policies > Global Policies and change the Link Grouping Preference from None to Port Channel.

If you don't have a port channel configured, it is totally expected that the blade will go down when it loses both paths. Remember that in UCSM, connectivity is based on static pinning: your blade's vNIC is pinned to a backplane port on the IOM, that backplane port is pinned to a fabric port, and that fabric port is eventually pinned to an uplink port.
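If you prefer the CLI, the same global policy can be changed from the UCSM command line. The session below is a sketch based on the standard `chassis-disc-policy` scope; verify the exact option names against your UCSM version before committing:

```
UCS-A# scope org /
UCS-A /org # scope chassis-disc-policy
UCS-A /org/chassis-disc-policy # set link-aggregation-pref port-channel
UCS-A /org/chassis-disc-policy* # commit-buffer
```

After the commit, re-acknowledge the chassis so the FI rebuilds the IOM links as a port channel.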


Let me know if you have any doubts.




Yes, if you change chassis connectivity from None to Port Channel, you have to re-acknowledge the chassis. That tells the FI to dump the existing config and rescan the IOMs; it's at chassis acknowledgement that the port channel is actually created. If you enable the policy after the chassis is already discovered, it only affects newly discovered chassis. UCS won't reset the IOM-FI links without being told to.




First, I suggest a minimum connectivity policy of two links. If you can't for some business reason, so be it, but you really should require two links from each IOM to the FI.


Second, do you have Port Channel specified as the Link Grouping policy? Assuming not, what's happening is that your vNICs are being pinned to a physical IOM uplink interface. Without a port channel, each blade is pinned statically to one cable based on the number of links. In your case, uplink 1 from each IOM carries traffic for blades 1, 3, 5, and 7, and uplink 2 carries blades 2, 4, 6, and 8. When you unplug certain cables, the vNICs riding those cables (determined by the blade's slot by default) will show down in ESXi. In ESXi you likely have one NIC from fabric A and one from fabric B, so when you unplug the IOM cables in the same position on both fabrics, that blade loses all paths upstream. I hope that makes sense. Again, this applies only if you're NOT using port channels.

I strongly recommend the port channel. When you port-channel from IOM to FI, your blade's vNICs get pinned to the port channel rather than to a specific uplink, which lets you take individual links down without worrying about a vNIC losing its pinned uplink.
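The pinning logic above can be sketched in a few lines. This is a hypothetical model, not UCS code: it assumes the default slot-based round-robin pinning with two IOM uplinks (odd slots to link 1, even slots to link 2) and reproduces the asymmetry between blades 5 and 6 in test 3:

```python
def pinned_link(slot: int, num_links: int) -> int:
    """IOM uplink a blade slot is statically pinned to (1-based, round-robin by slot)."""
    return (slot - 1) % num_links + 1


def blade_has_path(slot: int, up_links_per_iom: dict, num_links: int = 2) -> bool:
    """Without a port channel, a blade keeps connectivity only if its pinned
    link is still up on at least one fabric (IOM)."""
    link = pinned_link(slot, num_links)
    return any(link in up for up in up_links_per_iom.values())


# Test 3 from the thread: both IOM1 (fabric A) links down, IOM2 (fabric B) link 1 down.
links_up = {"A": set(), "B": {2}}

print(blade_has_path(5, links_up))  # blade 5 is pinned to link 1 on both fabrics -> False
print(blade_has_path(6, links_up))  # blade 6 is pinned to link 2, still up on B -> True
```

With a port channel, the check would instead be "is any link up on either fabric", which is why both blades would have stayed reachable in test 3.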


Here's a good article on this:



Do you have any experience with issues when the Chassis Discovery Policy (CDP) is set to 1-Link? In my case, I always recommend 1 link in the CDP; that way your chassis only needs at least one functional link per IOM to come up. If more links are connected, you just need to re-ack the chassis to re-pin your servers' interfaces across the additional IOM interfaces.

I agree with you on everything else.


Note: For the port channel to work between FI and IOMs, you need at least second-generation hardware; that means 62xx FIs and 220x IOMs.