Hosts go into a not responding/frozen state after upgrading from 5.5 U3 to 6.5 U2 on C240-M4S

02-25-2019 04:46 PM
We recently upgraded ESXi 5.5 U3 to ESXi 6.5 U2 with the Cisco customized image on C240-M4S servers. We first upgraded the Cisco firmware from 2.0(6) to 4.0(1c), then upgraded the ESXi hosts from 5.5 U3 to 6.5 U2. (Please see the attached text file for driver and firmware details before and after the upgrade.)
After the upgrade, hosts go into a not responding/frozen state: the ESXi hosts remain reachable via ping over the network, but we are unable to reconnect them to vCenter.
While a host is in the not-responding state, we can log in over PuTTY with multiple sessions, but we can't see or run any commands (for example, df -h, or cat on logs under /var/log). When we run df -h, the host displays nothing and the session hangs until we close PuTTY and reconnect.
While a host is not responding, its VMs continue to run, but we can't migrate them to another host, and we can't manage them through the vCloud panel.
We have to reboot the host to bring it back, after which it reconnects to vCenter.
We have been working with VMware and Cisco for three weeks now, with no resolution yet.
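For what it's worth, df -h most likely hangs here because it stats the mounted VMFS volumes, which blocks if I/O to the backing device is stuck. A few commands that usually still respond in that state (a generic troubleshooting sketch, assumed rather than taken from this thread):
localcli storage core device list   # localcli talks to the kernel directly, bypassing hostd
esxtop                              # press 'u' for the per-device disk view to spot stuck I/O
/etc/init.d/hostd status            # check whether hostd itself is still alive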
We can see a lot of "Valid sense data: 0x5 0x24 0x0" entries in vmkernel.log, and VMware suspects something with the LSI MegaRAID (MRAID12G) driver. VMware asked us to contact the hardware vendor to check for hardware/firmware issues and LSI issues as well:
cpu20:66473)ScsiDeviceIO: 3001: Cmd(0x439d48ebd740) 0x1a, CmdSN 0xea46b from world 0 to dev "naa.678da6e715bb0c801e8e3fab80a35506" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0
This command failed 4234 times on "naa.678da6e715bb0c801e8e3fab80a35506"
Display Name: Local Cisco Disk (naa.678da6e715bb0c801e8e3fab80a35506)
Vendor: Cisco | Model: UCSC-MRAID12G | Is Local: true | Is SSD: false
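For reference, that log line decodes as follows: H:0x0 is host (HBA) status OK, D:0x2 is the device returning CHECK CONDITION, and the sense triplet 0x5 0x24 0x0 is sense key ILLEGAL REQUEST with ASC/ASCQ INVALID FIELD IN CDB. Cmd 0x1a is MODE SENSE(6), i.e. the controller is rejecting a mode page it doesn't support, which by itself is often log noise rather than a data-path failure. The device and the driver claiming it can be cross-checked with standard esxcli commands, e.g.:
esxcli storage core device list -d naa.678da6e715bb0c801e8e3fab80a35506
esxcli storage core adapter list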
Cisco did not see any issues with the server/hardware after analyzing the tech-support logs, and we also ran Cisco diagnostics tests on a few servers; all component tests/checks look good. The only recommendation Cisco gave was to change the power management policy from Balanced to High Performance under ESXi host > Configure > Hardware > Power Management > Active policy > High Performance.
Can someone help me find the cause/fix?
Labels: Unified Computing System (UCS)
10-16-2019 06:03 AM
Hello,
My cluster is affected by this bug too; however, removing the LSI driver did not solve my problem:
esxcfg-scsidevs -a
vmhba1 megaraid_sas link-n/a unknown.vmhba1 (0000:18:00.0) Avago (LSI / Symbios Logic) MegaRAID SAS Invader Controller
/var/log/vmkernel.log:
2019-10-16T11:28:57.727Z cpu6:66150)ScsiDeviceIO: 2954: Cmd(0x43960157ef00) 0x1a, CmdSN 0x6a1d from world 0 to dev "naa.618e728372e70510233f4ee410ec09e3" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.
2019-10-16T11:59:06.865Z cpu15:66215)ScsiDeviceIO: 2954: Cmd(0x439e01145580) 0x1a, CmdSN 0x6b2d from world 0 to dev "naa.618e728372e70510233f4ee410ec09e3" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.
2019-10-16T12:28:58.597Z cpu0:67482)ScsiDeviceIO: 2954: Cmd(0x43960c344f00) 0x1a, CmdSN 0x6c31 from world 0 to dev "naa.618e728372e70510233f4ee410ec09e3" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.
2019-10-16T12:55:45.046Z cpu7:66226)ScsiDeviceIO: 2954: Cmd(0x4396015b1e80) 0x4d, CmdSN 0x159 from world 67457 to dev "naa.618e728372e70510233f4ee410ec09e3" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0.
esxcfg-scsidevs -A
vmhba1 naa.618e728372e70510233f4ee410ec09e3
vmware -lv
VMware ESXi 6.5.0 build-9298722
VMware ESXi 6.5.0 Update 2
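For what it's worth, Cmd 0x4d in the last log line is LOG SENSE, and sense data 0x5 0x20 0x0 decodes to ILLEGAL REQUEST / INVALID COMMAND OPERATION CODE, so both entries are the controller rejecting optional commands. One way to check which MegaRAID-related driver modules are present and enabled (assuming the standard esxcli module namespace; module names can differ per image):
esxcli system module list | grep -iE 'lsi_mr3|megaraid'
esxcli system module get -m megaraid_sas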
Any idea why I keep getting these failed messages in the log file?
10-23-2019 05:51 AM
I haven't had a chance to test this yet. We are going to upgrade VMware to the latest 6.7 and see if that works.

10-29-2019 10:07 AM
The issue was finally fixed by disabling the LSI driver and enabling the MegaRAID driver on the ESXi hosts.
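For anyone landing here later, the driver swap described above would typically be done like this (a sketch assuming the modules involved are lsi_mr3 and megaraid_sas, which this thread doesn't name explicitly; a reboot is required for the change to take effect):
esxcli system module set --enabled=false --module=lsi_mr3
esxcli system module set --enabled=true --module=megaraid_sas
reboot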
