We have the following system/environment: two redundantly connected Cisco 6120XP Fabric Interconnects linked to three Cisco UCS 5108 chassis, with 13 Cisco blade servers configured.
For background, we have NetApp FAS3240 storage and no Nexus switches. Our virtualization platform is VMware.
This is what happened.
We upgraded UCS Manager from version 1.4 directly to 2.0(3c).
Before the upgrade we had done a failover from the primary to the secondary Fabric Interconnect, so at that point all network traffic was running through one I/O module on each chassis. When we started the upgrade on the primary Fabric Interconnect, chassis number two suddenly rebooted and we lost about 40 servers. VMware brought them down cleanly, but we unfortunately lost them. Now the question is: why did this happen?
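For reference, the cluster's HA state can be checked from the Fabric Interconnect's local management shell before and after a failover like this. This is a generic sketch of the standard UCS Manager commands, not output captured from the affected system:

    connect local-mgmt        (enter the FI's local management shell)
    show cluster state        (shows which FI is PRIMARY/SUBORDINATE and whether HA is READY)

If `show cluster state` does not report HA READY before the primary is upgraded, the subordinate FI and its IOMs cannot carry traffic alone, which is one common way a chassis can lose connectivity during this procedure.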
So, did you follow the upgrade steps in the release notes for going from 1.4 to 2.0(3)? These steps are very important. It seems you did, but I just want to ask.
Please confirm the sequence of events:
1) You initially upgraded the secondary FI, and when you activated it, the secondary FI, along with the IOMs attached to it, came up successfully on the upgraded version. Is this correct?
2) Then you made the upgraded (secondary) FI the primary by issuing the "cluster lead" command from the operating primary.
3) You initiated the upgrade on the old primary and then activated the new version.
At that point the servers lost connectivity.
Is my understanding correct?
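For step 2, the failover is normally driven from the local management shell of the current primary. A minimal sketch, assuming FI B is the subordinate being promoted (the identifier is an assumption; substitute your own):

    connect local-mgmt        (enter the local management shell on the current primary)
    cluster lead b            (promote fabric interconnect B to primary)
    show cluster state        (verify the roles switched and HA is READY before upgrading the old primary)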
1) Did all servers lose connectivity (since you have multiple chassis)?
2) If not, do you know which servers lost connectivity?
3) Was it both LAN and SAN connectivity, or only LAN, or only SAN?
4) Did you make any changes to resolve the issue? If not, how was connectivity restored?
I would suggest you open a TAC SR with the above information and the UCSM and corresponding chassis tech-support bundles.
Please include the approximate date/time of the event.
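The tech-support bundles can be generated from the GUI (Admin > Techsupport Files) or from the local management shell. A sketch of the CLI form, assuming chassis 2 is the one that rebooted; exact options can vary by UCSM version:

    connect local-mgmt
    show tech-support ucsm detail              (collects the UCSM bundle)
    show tech-support chassis 2 all detail     (collects the bundle for chassis 2)

The resulting files land in the FI's workspace and can then be downloaded and attached to the SR.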