01-21-2025 03:17 PM
I recently got my hands on a Cisco C240 M4SX with 24 drive bays. It’s equipped with a Cisco 12G SAS Modular RAID Controller, but I’ve run into an issue I can’t seem to solve.
From what I’ve read, using a non-Cisco PCIe adapter causes the system fans to stay in high-power mode. However, in my case, I am using a Cisco controller, so I’m unsure why the fans are still running at full speed.
I’ve searched everywhere, but it seems like no one has a clear solution. If you’ve dealt with this or have any advice, I’d love to hear it.
I’m open to all suggestions—please share your expertise!
TL;DR: Cisco C240 M4SX fans stuck in high-power mode even with a Cisco 12G SAS Modular RAID Controller. Any advice on fixing this?
02-05-2025 03:46 AM
It looks like you have correctly identified the controller and cables that need to be replaced. Longer mini-SAS cables are what you will need. I did not see a riser card for the PCIe cards you are talking about buying; you will need at least one of those if you don't already have one. I would suggest getting the riser card that is closest to the MRAID slots, which I believe is riser 1. If you do need the longer mini-SAS cables, try to get flat ones. There are several cable guides along the inside of the case where the cables run up to the ports on the SAS expander. You will have to fully remove the fan tray to route the cables properly, and you may need different lengths since the ports on the SAS expander may be in different places. Take a LOT of detailed pictures before you start to take everything apart; that will be very helpful when you start putting it back together.
02-05-2025 04:07 AM - edited 02-06-2025 04:48 AM
Also, download the maintenance and service manual if you haven't already. I forgot that there are only 2 SAS data cables going to the controller in the M4. That means you could use an LSI 9300-8i instead of the 16i. The 8i might be cheaper. The 16i will work, but you would have 2 unused ports.
02-06-2025 04:50 AM - edited 02-06-2025 04:52 AM
I don't have a brand preference on cables. The Frankenstein thing with the NAS box is the only time I have needed that. I wouldn't touch anything but an LSI (now Broadcom, sigh) card. The 9300 series is the best bet for a SAS HBA that supports SAS 12G speeds. The LSI 9300 cards are what Cisco sells as a SAS HBA to support external drive shelves.
02-06-2025 06:25 PM
I spoke to some other people on Reddit with the same server who were NOT having the same issue.
We checked, and it looks like I have the UCSC-SAS12GHBA card while they have the UCSC-MRAID12G-1GB.
I might try to purchase that card and replace mine first.
That's a much cheaper purchase than the PCIe controllers.
Thanks so much for the help so far, I'll keep you updated!
02-07-2025 05:35 AM
For whatever reason, both the M4 and M5 servers seem to think the HBAs run hotter than the RAID controllers. I don't know that it is true, but it does seem peculiar to me. Whether it's actually a problem depends on what you are going to do with the server. You can do JBOD mode with the RAID controllers, but FreeNAS/TrueNAS doesn't have good driver support for that. My ESXi hosts are using the RAID controllers, and those do not force the fan speed higher. Since all my servers are in my home office, I set the fan policy to Acoustic to keep the noise down.
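If you want to confirm whether the fans really are pegged before swapping hardware, you can pull the sensor table from the CIMC (for example with `ipmitool sensor` over the network) and look at the fan tach readings. Here is a minimal sketch that parses that kind of output and flags fans above a threshold. The sample text and the 8000 RPM cutoff are made-up illustrations; real sensor names and columns on a C240 will differ.

```python
# Hypothetical sample of `ipmitool sensor` output; real output from a
# C240's CIMC will have different sensor names and more columns.
SAMPLE = """\
FAN1_TACH1       | 11900.000  | RPM  | ok
FAN2_TACH1       | 11800.000  | RPM  | ok
FAN3_TACH1       | 3300.000   | RPM  | ok
"""

HIGH_RPM = 8000  # assumed threshold for "high-power mode"

def high_fans(sensor_text, threshold=HIGH_RPM):
    """Return the names of fan sensors reporting RPM above threshold."""
    hot = []
    for line in sensor_text.splitlines():
        parts = [p.strip() for p in line.split("|")]
        # keep only rows whose unit column reads RPM
        if len(parts) >= 3 and parts[2] == "RPM":
            try:
                rpm = float(parts[1])
            except ValueError:
                continue
            if rpm > threshold:
                hot.append(parts[0])
    return hot

print(high_fans(SAMPLE))  # → ['FAN1_TACH1', 'FAN2_TACH1']
```

Watching these numbers before and after a controller swap or a fan-policy change gives you a concrete baseline instead of judging by ear.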
02-07-2025 12:49 PM
It's wild that the servers just assume they run hot—maybe those cards lack sensors? Then again, the card literally comes with the server 🤷.
I plan to use the server for both frontend and backend hosting (React and Node.js). Since we're storing data, we intend to use ZFS for redundancy.
If we can get this server running without significant costs, we might buy another server and place it in a different location for redundancy. At that point, my understanding is that Ceph would be the better choice since it's designed for cluster-level redundancy.
That said, I'm new to this space, so my understanding might be flawed.
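Since the ZFS layout you pick determines how much of those 24 bays you actually get to use, a rough usable-capacity sketch may help when planning. This ignores ZFS metadata and slop overhead (which costs a bit more in practice), and the drive size is just an example.

```python
def usable_tb(drives, drive_tb, layout):
    """Rough usable capacity for common ZFS layouts.
    Ignores ZFS metadata/slop overhead, which reduces this further."""
    if layout == "mirror":
        # drives are paired; half the raw space is usable
        return (drives // 2) * drive_tb
    if layout.startswith("raidz"):
        # raidz1/2/3 lose one parity drive per level, per vdev
        parity = int(layout[-1])
        return (drives - parity) * drive_tb
    raise ValueError(f"unknown layout: {layout}")

# e.g. 24 bays of 1.2 TB drives as a single vdev (illustrative only;
# in practice you'd split 24 drives into several smaller vdevs)
for layout in ("mirror", "raidz1", "raidz2", "raidz3"):
    print(layout, round(usable_tb(24, 1.2, layout), 1))
```

Mirrors give the best resilver times and random I/O, while raidz2 is a common middle ground for bulk storage; which one fits depends on the workload you described.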
02-08-2025 06:41 AM
ZFS wants direct access to the drives (no RAID controller), so the HBA is a much better choice for that. I can't speak to the rest of it.
02-13-2025 04:02 PM
In case anyone else finds this thread from the bug report. A couple of notes after much troubleshooting on the C220/240 M5 series.
TL;DR - Replace the thermal compound everywhere to reduce temps by 8 to 10c.
UCSC-RAID-M5 - I attempted to use this card first and the chip temps would quickly rise to 90c and cause fans to accelerate. As I wanted to use JBOD mode, I did not troubleshoot this card and instead moved on to UCSC-SAS-M5.
UCSC-SAS-M5 - Shows up in CIMC as "Cisco 12G Modular SAS HBA (max 16 drives) (MRAID)". It is a non-RAID HBA, however the chip temp can also rise to 90c, causing the fans to spin up. Once the fans kick on, the chip temp will decrease to 71, then creep back up to 90 again, cycling the fans. Removing the HBA heatsink and replacing the thermal compound (it's an OEM Avago card) kept the temp under 86c, and the fans do not exceed 3300 RPM (Acoustic profile).
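That 71-to-90 cycling is classic threshold hysteresis: the fans ramp at a trip temperature and drop back once the chip cools past a release temperature, so the system oscillates instead of settling. A toy model of that control loop, using the thresholds from the numbers above (the heating/cooling rates are invented for illustration):

```python
def step(temp, fans_high, trip=90, release=71, heat=2, cool=4):
    """One time step of a toy fan controller with hysteresis.
    Fans ramp up at `trip` and drop back once temp reaches `release`."""
    if fans_high:
        temp -= cool          # high fan speed cools the chip
        if temp <= release:
            fans_high = False
    else:
        temp += heat          # idle fans let the chip heat up
        if temp >= trip:
            fans_high = True
    return temp, fans_high

temp, fans_high = 75, False
history = []
for _ in range(30):
    temp, fans_high = step(temp, fans_high)
    history.append((temp, fans_high))
# the temperature oscillates between ~71 and ~90 instead of settling
```

Better thermal compound effectively lowers the steady-state temperature so the chip never reaches the trip point, which is why the repaste stops the fan cycling.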
UCSC-MLOM-C25Q-04 - My nemesis.. VIC 1457.. Two things. Without the mLOM in place, there is no fan under the HBA card to pull heat from the HBA heatsink, and the front chassis fans' airflow path to the HBA heatsink is impeded by one of the CPUs and other plastic components. On the C220 M4 this is not the case! The C220 M4 has no obstruction in the path, and the RAID card can be cooled at low fan settings without issue. They really messed up the M5 design on that one; I haven't looked at the M6/M7 to see what they did there yet. When you install the VIC 1457 mLOM card, it has an amazing fan that would do well to pull the hot air from the HBA heatsink above it and push it out of the server. Alas, that fan does not kick on when the MLOM chip temp rises; my assumption is it must be tied directly to the SFP+ temps, as that's where it directs most airflow when engaged (though it would take all the HBA air and exhaust it in the process). I tested the fan with an external power source, and it works well and moves a lot of air, but it doesn't do much good when it isn't enabled even at a low RPM.
CPUs - Replaced the thermal compound on one socket and the CPU temp dropped by 10c. The stock thermal compound throughout (CPUs, HBA, etc.) is terrible; replace it as soon as possible for lower die temps.
03-03-2025 06:50 PM - edited 03-03-2025 06:50 PM
Wanted to add one more piece of information for those who are looking for a quiet setup, specifically for the C220M5. With the server fans at the lowest possible RPM and running the 'Acoustic' profile, I still noticed a louder than usual whine from the server. I narrowed it down to the power supply fans.
UCSC-PSU1-770W V04 - I've tested 4 of these power supplies, all of which are loud at idle RPM. One of them was extremely loud; the others were just louder than the unit below.
UCSC-PSU1-770W V01 - I had 5 of these from older M4 servers, same specs and form factor, they are all dead silent.
If you followed the advice in my first post and still consider the server too loud, please check to make sure you have the V01 PSUs. The V04s use AVC fans, where the V01s have a different brand that is pretty much silent. The fans are not directly compatible (different connectors), so you cannot swap a fan straight from a V01 into a V04 power supply due to the internal board redesign.
03-04-2025 12:18 AM
That is very interesting. I used the PSUs from my M3s when I was running M4s. The lower voltage PSUs for the M4s were hard to find, at least on eBay. I thought the PSUs were different on the M5s versus the M4s. Maybe they are the same in the C240s, but they looked like different sizes on the C220s. I opted for the C240s not because of the need for a ton of PCIe cards, but because the larger chassis has bigger fans, which should (hopefully) not need to spin as fast. My lab is in my home office, so noise is a big deal for me.