Need Help with Cisco C240 M4SX RAID Controller and Fan Issues!

ServerNewbie
Level 1

I recently got my hands on a Cisco C240 M4SX with 24 drive bays. It’s equipped with a Cisco 12G SAS Modular RAID Controller, but I’ve run into an issue I can’t seem to solve.

From what I’ve read, using a non-Cisco PCIe adapter causes the system fans to stay in high-power mode. However, in my case, I am using a Cisco controller, so I’m unsure why the fans are still running at full speed.

I’ve searched everywhere, but it seems like no one has a clear solution. If you’ve dealt with this or have any advice, I’d love to hear:

  • Debugging steps I can take?
  • Any potential fixes or workarounds?
  • Is reflashing the controller firmware worth trying?

I’m open to all suggestions—please share your expertise!

TL;DR: Cisco C240 M4SX fans stuck in high-power mode even with a Cisco 12G SAS Modular RAID Controller. Any advice on fixing this?
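
(For anyone else debugging this: a minimal first step is just dumping what the CIMC reports for fans and temperatures. Below is a rough Python sketch, assuming your CIMC firmware exposes the standard Redfish thermal endpoint; the host, credentials, and chassis ID "1" are placeholders for your setup.)

```python
# Sketch only: dump fan speeds and temps from the CIMC, assuming the
# firmware exposes the standard Redfish thermal endpoint. The host,
# credentials, and chassis ID "1" are placeholders.
import requests

CIMC = "https://192.0.2.10"   # your CIMC IP (placeholder)
AUTH = ("admin", "password")  # CIMC credentials (placeholder)

resp = requests.get(f"{CIMC}/redfish/v1/Chassis/1/Thermal",
                    auth=AUTH, verify=False)  # CIMC usually has a self-signed cert
resp.raise_for_status()
data = resp.json()

for fan in data.get("Fans", []):
    print(f"{fan.get('Name')}: {fan.get('Reading')} {fan.get('ReadingUnits', 'RPM')}")
for t in data.get("Temperatures", []):
    print(f"{t.get('Name')}: {t.get('ReadingCelsius')} °C")
```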

43 Replies

For whatever reason, both the M4 and M5 servers seem to think the HBAs run hotter than the RAID controllers. I don't know whether that's true, but it does seem peculiar to me. Whether it matters depends on what you are going to do with the server. You can do JBOD mode with the RAID controllers, but FreeNAS/TrueNAS doesn't have good driver support for that. My ESXi hosts are using the RAID controllers, and those do not force the fan speed higher. Since all my servers are in my home office, I set the fan policy to Acoustic to keep the noise down.

It's wild that the servers just assume they run hot—maybe those cards lack sensors? Then again, the card literally comes with the server 🤷.

I plan to use the server for both frontend and backend hosting (React and Node.js). Since we're storing data, we intend to use ZFS for redundancy.

If we can get this server running without significant costs, we might buy another server and place it in a different location for redundancy. At that point, my understanding is that Ceph would be the better choice since it's designed for cluster-level redundancy.

That said, I'm new to this space, so my understanding might be flawed.

ZFS wants direct access to the drives (no RAID controller), so the HBA is a much better choice for that. I can't speak to the rest of it.

nds-frank
Level 1

In case anyone else finds this thread from the bug report, here are a couple of notes after much troubleshooting on the C220/C240 M5 series.

TL;DR - Replace the thermal compound everywhere to reduce temps by 8-10 °C.

UCSC-RAID-M5 - I attempted to use this card first, and the chip temps would quickly rise to 90 °C and cause the fans to accelerate. As I wanted to use JBOD mode, I did not troubleshoot this card and instead moved on to the UCSC-SAS-M5.

UCSC-SAS-M5 - Shows up in CIMC as "Cisco 12G Modular SAS HBA (max 16 drives) (MRAID)". It is a non-RAID HBA; however, the chip temp can also rise to 90 °C, causing the fans to spin up. Once the fans kick on, the chip temp will drop to 71 °C and then creep back up to 90 °C, cycling the fans. Removing the HBA heatsink and replacing the thermal compound (it's an OEM Avago card) kept the temp under 86 °C, and the fans do not exceed 3300 RPM (Acoustic profile).
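
To see that 71 → 90 cycle without staring at the CIMC page, here's a quick polling sketch (same standard-Redfish assumption as the snippet earlier in the thread; the 30-second interval and CSV output are arbitrary choices):

```python
# Sketch: log chip temps over time to catch the fan cycling, assuming
# the CIMC exposes the standard Redfish thermal endpoint. The host,
# credentials, interval, and output file are placeholders.
import csv
import time
import requests

CIMC = "https://192.0.2.10"   # placeholder CIMC IP
AUTH = ("admin", "password")  # placeholder credentials

with open("temps.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["time", "sensor", "celsius"])
    while True:
        data = requests.get(f"{CIMC}/redfish/v1/Chassis/1/Thermal",
                            auth=AUTH, verify=False).json()
        now = time.strftime("%H:%M:%S")
        for t in data.get("Temperatures", []):
            writer.writerow([now, t.get("Name"), t.get("ReadingCelsius")])
        f.flush()
        time.sleep(30)  # 30 s is arbitrary; the cycle described above takes minutes
```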

UCSC-MLOM-C25Q-04 - My nemesis: the VIC 1457. Two things. First, without the mLOM in place, there is no fan under the HBA card to pull heat from the HBA heatsink, and the front chassis fans' path to the HBA heatsink is blocked by one of the CPUs and other plastic components. On the C220 M4 this is not the case! The C220 M4 has nothing obstructing that airflow path, and the RAID card can be cooled at low fan settings without issue. They really messed up the M5 design there; I haven't looked yet at what they did on the M6/M7. Second, when you install the VIC 1457 mLOM card, it has an excellent fan on it that would do a great job pulling hot air from the HBA heatsink above it and pushing it out of the server. Alas, that fan does not kick on when the mLOM chip temp rises; my assumption is that it must be tied directly to the SFP+ temps, as that's where it directs most of its airflow when engaged (though it would take all the HBA air and exhaust it in the process). I tested the fan with an external power source and it works well and moves a lot of air, but it doesn't do much good when it isn't enabled even at a low RPM.

CPUs - Replaced the thermal compound on one socket, and the CPU temp dropped by 10 °C. The stock thermal compound throughout (CPUs, HBA, etc.) is terrible; replace it as soon as possible for lower die temps.

Wanted to add one more piece of information for those looking for a quiet setup, specifically on the C220 M5. With the server fans at the lowest possible RPM and the 'Acoustic' profile running, I still noticed a louder-than-usual whine from the server. I narrowed it down to the power supply fans.

UCSC-PSU1-770W V04 - I've tested 4 of these power supplies, all of which are loud at idle RPM. One of them was extremely loud; the others were just louder than the V01 unit below.

UCSC-PSU1-770W V01 - I had 5 of these from older M4 servers, same specs and form factor, they are all dead silent.

If you followed the advice in my first post and still consider the server too loud, check to make sure you have the V01 PSUs. The V04s use AVC fans, whereas the V01s use a different brand that is pretty much silent. The fans are not directly compatible (different connectors), so due to the internal board redesign you cannot swap a fan straight from a V01 into a V04 power supply.

That is very interesting. I used the PSUs from my M3s when I was running M4s. The lower-voltage PSUs for the M4s were hard to find, at least on eBay. I thought the PSUs were different on the M5s versus the M4s. Maybe they are the same in the C240s, but they looked like different sizes on the C220s. I opted for the C240s not because of the need for a ton of PCIe cards, but because the larger chassis has bigger fans, which should (hopefully) not need to spin as fast. My lab is in my home office, so noise is a big deal for me.

You make two excellent points @nds-frank that I missed on first read. I was using Chelsio PCIe cards in my older FreeNAS systems because that is a strongly recommended card for that platform. It did marginally annoy the CIMC because it was an unknown card, but not badly. FreeNAS doesn't support any of the higher-speed MLOM modules, so you have to do something else. I hadn't thought about those two heat sources being right near one another, but that totally makes sense. How you distribute the cards between PCIe risers and how you distribute the drives in the front bays can impact this as well.

Which power supplies you use makes a HUGE difference! I think I recycled some M3 power supplies in my M4s because they are quieter and the installed components don't have a high power draw. IDK if you can recycle older PSUs in an M5, but it could be worth investigating. Obviously you need to check the load to make sure you have enough power with a single PSU. That said, you are likely fine as long as you aren't loaded up with GPUs.

I hate it when I do that, but it appears that I lied to you. Here is the inventory from one of my old M4 FreeNAS units.

M4-inv.jpg

I also get the high-power fan override, but at least in my office it isn't as obnoxious as it sounds to me.

M4-freenas2.jpg

The M5 isn't so bad. The inventory in this one is all over the place because I had to use the LSI HBA cards, thanks to the head-butt between the SAS HBA and the WD Red SATA SSDs.

M5-inv.jpg
M5-freenas2.jpg

Oh, yup, that's it, same thing. I have a soundproof server room, so the noise isn't the issue; it's the power consumption. You also bring up a good point that slipped my mind -- the MLOM in that machine is a dual 10GbE RJ-45. When I bought it, I thought for sure something was wrong because of the temps, but everyone has assured me that MLOMs run hot.

I'm curious, what is your average power consumption?

As for the M5, I also bought a SAS HBA for that machine, but I haven't had the time to finish the rest of the configuration. It was very confusing with the different variations of the server, the rear NVMe vs. two front NVMe slots, and some parts of the server not functioning with one PCIe riser vs. another. I must love these Cisco machines to put up with the grief they give me sometimes.

 

The M4 hasn't been on for 2+ years, so I can't speak to that. This is the M5 FreeNAS, although it isn't under heavy use right now. My power bill is dreadful anyway, since I have a whole room full of stuff that is running most of the time. My office has some outlets on a shared circuit, (2) dedicated 120V circuits, and a dedicated 240V circuit for the 5 kVA UPS.

ElliotDierksen_0-1751922138062.png
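
(If anyone wants a real average rather than eyeballing the CIMC graph: a rough sketch that samples the Redfish power reading, again assuming the CIMC exposes the standard power endpoint; host, credentials, sample count, and interval are all placeholders.)

```python
# Sketch: sample PowerConsumedWatts and average it, assuming the CIMC
# exposes the standard Redfish power endpoint. All values below are
# placeholders for your setup.
import time
import requests

CIMC = "https://192.0.2.10"   # placeholder CIMC IP
AUTH = ("admin", "password")  # placeholder credentials

samples = []
for _ in range(60):  # 60 samples, one per minute (arbitrary)
    data = requests.get(f"{CIMC}/redfish/v1/Chassis/1/Power",
                        auth=AUTH, verify=False).json()
    samples.append(data["PowerControl"][0]["PowerConsumedWatts"])
    time.sleep(60)

print(f"average draw: {sum(samples) / len(samples):.0f} W over {len(samples)} samples")
```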

 

Ahh, ok. I'm going to have to get back to you on the M5. I'm worried now, lol.

I attached screenshots that I just took. The M4 has been on for about 15 minutes, and it's not running any workloads, just idling in TrueNAS with some containers open. If I put the RAID card back in and set it to JBOD mode, the power would go down to between 90 and 100 W.

Take a look at how hot the MLOM is already. I don't even have the links aggregated because I was afraid it would overheat. Something wrong with the part, potentially?

Do you think it's feasible to set up Wake-on-LAN so I can time my backups throughout the day? I have a couple of containers on here, but I have plenty of other servers they could live on.
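
(WoL itself is simple to script; a minimal sketch of the magic packet is below. The MAC address is a placeholder, and WoL has to be enabled on the NIC/BIOS for it to do anything.)

```python
# Sketch: send a Wake-on-LAN magic packet. The MAC address below is a
# placeholder; WoL must also be enabled on the target NIC/BIOS.
import socket

def wake(mac: str) -> None:
    # Magic packet = 6 bytes of 0xFF followed by the MAC repeated 16 times.
    payload = bytes.fromhex("FF" * 6 + mac.replace(":", "") * 16)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, ("255.255.255.255", 9))  # UDP port 9 = discard

wake("00:11:22:33:44:55")  # placeholder MAC of the server's NIC
```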

If you aren't using the MLOM, take it out. The speeds on fans 1 & 2 are very high. My admittedly fuzzy memory is that my M4 FreeNAS servers ran the fans at 6000-7000 RPM and the noise was tolerable. The power usage on yours is super high as well. What kind of drives are installed in this unit, and how many? Until I got SSDs, I stayed with 5400 RPM drives as a balance between speed and power usage/noise.

As an additional frame of reference, note the fan speeds on my M5. You can see that I have 4 PCIe cards and the SAS HBA in the RAID slot. Way less than your M4. My old M4 isn't an exact parallel because I removed the 40G card and the SAS HBA PCIe card for the external drive shelf.

ElliotDierksen_0-1751938192979.png

 

@Elliot Dierksen sorry for the delay in response.

I'm running 4 SAS SSDs & 20 SAS HDDs.
2 SSD=RAID1 (Boot Drive for TrueNAS)
2 SSD=RAID1 (ZFS Pool for containers in TrueNAS) - basically a boot drive for the containers.

I'm not sure what RPM these HDDs are; they are all the original drives from my PowerEdge R740s.

Since my last post, I have removed the heatsinks from the HBA & the MLOM and reapplied thermal paste.

MLOM temperature hasn't changed, but my fan speeds have come back down to earth. Do you know how I can find out the temp of the HBA controller?
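
(In case it helps the next person: since these HBAs are OEM Avago/LSI cards, the storcli utility can usually report the controller's ROC temperature. A sketch is below; the storcli binary path and the /c0 controller index are assumptions about your install.)

```python
# Sketch: read the RAID-on-Chip (ROC) temperature via storcli, assuming
# an Avago/LSI-based HBA with storcli64 installed. The binary path and
# controller index (/c0) are assumptions.
import re
import subprocess

out = subprocess.run(
    ["/opt/MegaRAID/storcli/storcli64", "/c0", "show", "all"],
    capture_output=True, text=True, check=True,
).stdout

# storcli's "show all" output typically includes a "ROC temperature" line.
match = re.search(r"ROC temperature.*?=\s*(\d+)", out)
print(f"ROC temperature: {match.group(1)} °C" if match else "no temperature line found")
```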

Screenshot 2025-07-11 at 9.41.19 PM.png

Screenshot 2025-07-11 at 9.40.59 PM.png

Screenshot 2025-07-11 at 9.39.59 PM.png
