The Portuguese version of this Article can be found at: Resfriando a IA com Água Morna.
Introduction
The pursuit of greater Electrical and Thermal efficiency long predates the Data Center (liquids were already used to cool and insulate high-voltage transformers before 1887), and the advent of AI (Artificial Intelligence) has acted as a catalyst, accelerating the need for a solution.
Every time you use an AI-powered tool, immense Computing Power is drawn from a Data Center somewhere in the World, and the Energy required to run that Hardware is equally immense. However, powering the Hardware is only part of the problem; cooling it is the other part.
This immense Energy demand from AI is forcing a complete reinvention of the Data Center, and therefore, optimizing the Electrical and Thermal Efficiency of each component is becoming increasingly necessary.
Importance & Challenge
Data Centers are already being classified as Critical National Infrastructure, along with Emergency Services, Energy and Water Services, and Financial and Health Systems.
Quoting a BBC article - Feb/2025:
" ... The giant Data Centers needed to power AI can require large quantities of water to prevent them from overheating ... "
" ... Data Centers use Fresh, Mains Water, rather than Surface Water, so that the pipes, pumps and heat exchangers used to cool racks of servers do not get clogged up with contaminants ... "
" ... Microsoft's global water use soared by 34% while it was developing its initial AI tools, and a Data Center Cluster in Iowa used 6% of the District's Water supply in one month during the training of OpenAI's GPT-4 ... "
Quoting the Ethernet Alliance article - 2025 Ethernet Roadmap:
" ... By 2026, the AI Industry is expected to have grown exponentially to consume at least ten times its demand in 2023 (Electricity 2024 - Analysis and Forecast to 2026 Report - May 2024) ... "
" ... Gartner estimates the power required for Data Centers to run incremental AI-optimized Servers will reach 500 TWh per year in 2027, which is 2.6 times the level in 2023 (Gartner Predicts Power Shortages Will Restrict 40% of AI Data Centers by 2027 - Nov 2024) ... "
Liquid Cooling
Traditional Data Centers use Air Cooling and Chillers.
New Liquid Cooling methods have been developed with the goal of being significantly more efficient at removing heat; these include:
- Immersion Cooling
- Direct-to-Chip Cooling
Immersion Cooling
Servers or Components are submerged in a non-conductive liquid that dissipates heat.
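To see why submersion removes heat so much better than air, compare how much heat a given volume of each fluid can carry. The minimal Python sketch below makes that comparison; the dielectric fluid properties are assumed typical values for a hydrocarbon immersion coolant, not official Shell S3 X figures.

```python
# Rough comparison of heat carried per unit volume: air vs. a dielectric
# immersion fluid. Fluid properties are assumed typical values, NOT
# official figures for Shell S3 X.

AIR = {"density_kg_m3": 1.2, "cp_j_kg_k": 1005}            # air at ~20 C
DIELECTRIC = {"density_kg_m3": 800.0, "cp_j_kg_k": 2100}   # assumed hydrocarbon coolant

def volumetric_heat_capacity(fluid: dict) -> float:
    """Joules absorbed per cubic meter of fluid per kelvin of temperature rise."""
    return fluid["density_kg_m3"] * fluid["cp_j_kg_k"]

air_vhc = volumetric_heat_capacity(AIR)
liquid_vhc = volumetric_heat_capacity(DIELECTRIC)
print(f"Air:        {air_vhc:,.0f} J/(m^3*K)")
print(f"Dielectric: {liquid_vhc:,.0f} J/(m^3*K)")
print(f"Liquid carries ~{liquid_vhc / air_vhc:,.0f}x more heat per unit volume.")
```

Under these assumptions, the liquid moves on the order of a thousand times more heat per unit volume than air, which is why the Fans (and their Noise) can go.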

Is this the future of Data Centers?
During Cisco Live Amsterdam - Feb/2025, a Server was demonstrated running immersed in Liquid (Shell S3 X) and without Fans, eliminating Dust and Airflow concerns and generating much less Noise:
The video above also discusses:
Edge Computing: a Distributed IT Architecture that processes Data close to its origin, instead of sending it to a centralized Cloud or Data Center.
Class 4 FMP (Fault Managed Power): a technology added to the National Electrical Code in 2023; it was designed to replace not PoE (Power over Ethernet), which delivers up to 100 W, but traditional AC Power Cables, at 300 W+ (see the sketch after this list).
Top of Rack (ToR) : best suited for Modern Data Centers where Network Switches are placed in each Rack, usually at the Top, for fault isolation, scalability, and flexibility - unlike End of Row (EoR).
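As a toy illustration of the power ranges mentioned in the FMP item above, the sketch below picks a delivery method from a device's load. The thresholds are the article's round numbers (100 W for PoE, 300 W+ for Class 4 FMP), not exact NEC limits.

```python
# Toy decision rule using the power ranges quoted above. The thresholds
# are the article's round numbers, not exact NEC (National Electrical
# Code) limits.

POE_MAX_W = 100   # PoE (Power over Ethernet) tops out around 100 W
FMP_MIN_W = 300   # Class 4 FMP targets loads of 300 W and above

def power_delivery(load_w: float) -> str:
    """Suggest a power delivery approach for a device load in watts."""
    if load_w <= POE_MAX_W:
        return "PoE (Power over Ethernet)"
    if load_w >= FMP_MIN_W:
        return "Class 4 FMP (Fault Managed Power)"
    return "traditional AC Power Cables"

for load_w in (60, 250, 800):
    print(f"{load_w} W -> {power_delivery(load_w)}")
```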
Direct-to-Chip Cooling
A Cold Plate is attached directly to the Processor, and a Liquid (OCP PG25) circulates through it, absorbing the heat.


OCP (Open Compute Project) PG25 is a mixture of 75% Water and 25% Propylene Glycol.
The OCP PG25 specification for DLC (Direct Liquid Cooling) fluids requires a fluorescent green or yellow-green dye, specifically to make Leak Detection easy.
OCP PG25 allows an inlet temperature of up to 113 °F (45 °C).
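As a quick illustration of the three requirements above, here is a minimal Python sketch that checks a coolant batch against them. The field names and the mixture tolerance are illustrative assumptions, not values taken from the OCP specification document.

```python
# Minimal check of a coolant batch against the PG25 points quoted above.
# Field names and the mixture tolerance are illustrative assumptions,
# not values taken from the OCP specification document.

from dataclasses import dataclass

MAX_INLET_C = 45.0  # up to 113 °F (45 °C) inlet
ALLOWED_DYES = {"fluorescent green", "fluorescent yellow-green"}

@dataclass
class CoolantBatch:
    water_pct: float
    glycol_pct: float
    dye: str
    inlet_temp_c: float

def check_pg25(batch: CoolantBatch, tolerance_pct: float = 1.0) -> list:
    """Return a list of problems; an empty list means the batch passes."""
    problems = []
    if abs(batch.water_pct - 75.0) > tolerance_pct:
        problems.append(f"{batch.water_pct}% Water is not ~75%")
    if abs(batch.glycol_pct - 25.0) > tolerance_pct:
        problems.append(f"{batch.glycol_pct}% Glycol is not ~25%")
    if batch.dye not in ALLOWED_DYES:
        problems.append(f"dye '{batch.dye}' will not aid Leak Detection")
    if batch.inlet_temp_c > MAX_INLET_C:
        problems.append(f"inlet {batch.inlet_temp_c} °C exceeds {MAX_INLET_C} °C")
    return problems

print(check_pg25(CoolantBatch(75.0, 25.0, "fluorescent green", 44.0)))  # passes: []
print(check_pg25(CoolantBatch(70.0, 30.0, "blue", 50.0)))               # 4 problems
```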
Cisco’s INSANE Liquid-Cooled Switch.
During the Cisco Partner Summit - Sept/2025, a Hybrid Cooling System combining Liquid (OCP PG25) and Air, using Warm Water as the coolant, was presented. In the Cisco 51.2T Switch, 80% of the heat is efficiently removed by Liquid, with system Energy savings of up to 13%.
The Direct-to-Chip Cooling described in the video targets the three main heat-generating components: the CPU, the NPU (Network Processing Unit), and the front-end Optical Components, dissipating approximately 2,000 W of heat before it can radiate through the Chassis.
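These figures are enough for a back-of-the-envelope check. The hedged sketch below splits the ~2,000 W between Liquid and Air using the 80% figure, then estimates the required coolant flow from the steady-state energy balance Q = ṁ · cp · ΔT; the specific heat, density, and loop temperature rise are assumed values, not Cisco figures.

```python
# Back-of-the-envelope estimate using the numbers above: ~2,000 W per
# system, 80% removed by liquid. The specific heat, density, and loop
# temperature rise are assumed values, NOT Cisco figures.

TOTAL_HEAT_W = 2_000.0    # approximate heat targeted by the cold plates
LIQUID_FRACTION = 0.80    # share of heat removed by liquid (from the talk)
CP_J_KG_K = 3_900.0       # assumed specific heat of a 25% glycol mixture
DELTA_T_K = 10.0          # assumed coolant temperature rise across the loop
DENSITY_KG_L = 1.02       # assumed coolant density

liquid_heat_w = TOTAL_HEAT_W * LIQUID_FRACTION   # ~1,600 W via liquid
air_heat_w = TOTAL_HEAT_W - liquid_heat_w        # ~400 W left for air

# Steady-state energy balance: Q = m_dot * cp * dT  =>  m_dot = Q / (cp * dT)
m_dot_kg_s = liquid_heat_w / (CP_J_KG_K * DELTA_T_K)
flow_l_min = m_dot_kg_s / DENSITY_KG_L * 60.0

print(f"Liquid removes {liquid_heat_w:.0f} W; Air handles {air_heat_w:.0f} W")
print(f"Coolant flow needed: ~{m_dot_kg_s * 1000:.0f} g/s (~{flow_l_min:.1f} L/min)")
```

Under these assumptions, a modest flow on the order of 2-3 L/min of warm water-glycol is enough to carry away the liquid-cooled share of the heat.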
Looking to the Future
Especially in Data Centers with AI workloads, Liquid Cooling is no longer an option but a necessity.
Liquid Cooling presents significant challenges, and Innovation is key to overcoming them.
Comment below which Innovations you believe would have the greatest impact on Liquid Cooling!