Why Are Data Centres Switching to Liquid Cooling?
Liquid cooling in a data centre is exactly what it sounds like: instead of using air cooling systems to carry heat away from servers and other hardware, liquid cooling technologies do the job. A coolant circulates through the system, absorbs heat from the components, carries it away, and releases it somewhere else, usually via a heat exchanger or cooling tower, before coming back around to do it again.
Liquid cooling isn’t a new idea; it has been used in specialist high-performance computing environments for years. However, it’s no longer niche. The rise of artificial intelligence (AI) workloads and the GPU hardware that powers them has made GPU liquid cooling a practical necessity in data centres that would have had no need for it five years ago.
Why Air Cooling Has Hit Its Limits
Air cooling worked well for a long time because data centre hardware, despite rising power densities, stayed within bounds that air could manage. Hot and cold aisle containment was built, CRAC units were correctly sized, and the heat was removed. It was costly, but it worked.
Modern AI data centres and the GPU hardware inside them have broken that model. A single high-density GPU rack can draw over 100 kW. The geometry of the hardware, with GPUs packed tightly together and very little space between them, means there’s little room for air to flow in the volumes required to carry that heat away.
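To put that in perspective, a quick back-of-the-envelope energy balance shows how much air would have to move through a single rack to carry 100 kW away. The figures below are illustrative assumptions (sea-level air, a 15 K temperature rise across the rack), not measurements from any particular installation:

```python
# Rough airflow estimate for removing 100 kW from a rack with air alone.
# Illustrative assumptions: air density ~1.2 kg/m^3, specific heat ~1005 J/(kg*K),
# and a 15 K rise from rack inlet to exhaust.

heat_load_w = 100_000        # rack heat load in watts (100 kW)
air_density = 1.2            # kg/m^3, roughly sea-level conditions
air_cp = 1005                # J/(kg*K), specific heat of air
delta_t = 15                 # K, allowed air temperature rise across the rack

# Energy balance: Q = m_dot * cp * dT, with m_dot = density * volumetric flow
volumetric_flow = heat_load_w / (air_density * air_cp * delta_t)  # m^3/s

print(f"Required airflow: {volumetric_flow:.1f} m^3/s "
      f"(~{volumetric_flow * 2118.88:,.0f} CFM)")   # 1 m^3/s is about 2,118.88 CFM
# Roughly 5.5 m^3/s, or more than 11,000 CFM, through a single rack.
```

That is an enormous volume of air to force through one rack footprint, which is exactly where air cooling runs out of road.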
And if the heat doesn’t get removed effectively, the GPUs detect they’re running too hot and throttle themselves down. That means you’ll have spent a significant amount of money on AI hardware only for it to run at a fraction of its rated performance because the cooling can’t keep up. This is an avoidable problem.
Direct-to-Chip is the Right Solution for GPU Infrastructure
The most effective liquid cooling approach for AI and GPU deployments is direct-to-chip cooling. A metal cold plate sits directly on top of the GPU. Coolant flows through channels inside the cold plate, absorbs heat from the chip by conduction, and is pumped away to a coolant distribution unit (CDU) where the heat is transferred out of the loop. Cool coolant then returns to the cold plate and the cycle repeats.
This works so much better because liquid removes heat far more efficiently than air. You can remove the same amount of heat with a much smaller volume of coolant than of airflow, and you can do it far more precisely, delivering cooling directly to the components that generate the most heat rather than trying to condition the air in an entire room.
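The same energy balance makes that concrete. Repeating the rough calculation above for a single-phase water loop (again with illustrative assumptions: water's specific heat of roughly 4186 J/(kg·K) and a 10 K coolant temperature rise) gives a flow measured in litres per minute rather than cubic metres per second:

```python
# Coolant flow needed to remove the same 100 kW with a single-phase water loop.
# Illustrative assumptions: water cp ~4186 J/(kg*K), density ~1000 kg/m^3,
# and a 10 K coolant temperature rise across the cold plates.

heat_load_w = 100_000       # watts
water_cp = 4186             # J/(kg*K)
water_density = 1000        # kg/m^3
delta_t = 10                # K

mass_flow = heat_load_w / (water_cp * delta_t)             # kg/s
volumetric_flow_lpm = mass_flow / water_density * 60_000   # litres per minute

print(f"Required coolant flow: {mass_flow:.2f} kg/s (~{volumetric_flow_lpm:.0f} L/min)")
# About 2.4 kg/s, or roughly 140 L/min: thousands of times less volume
# than the airflow needed to carry the same heat load.
```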
The energy efficiency benefits are real too. A well-designed direct-to-chip system running a GPU cluster can operate at a PUE below 1.2, whereas an equivalent traditional air-cooled setup would be considerably higher. For organisations under energy cost pressure or with sustainability commitments, that difference adds up to a meaningful long-term reduction in energy consumption.
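PUE is total facility energy divided by IT equipment energy, so the difference shows up directly as overhead energy. The sketch below compares a PUE of 1.2 against an assumed air-cooled baseline of 1.5 for a hypothetical 1 MW IT load; both the baseline figure and the load are illustrative, not measurements from a real site:

```python
# Illustrative PUE comparison for a hypothetical 1 MW IT load.
# PUE = total facility energy / IT equipment energy, so overhead = (PUE - 1) * IT load.
# The 1.5 air-cooled baseline is an assumption for illustration only.

it_load_mw = 1.0
hours_per_year = 8760

pue_liquid = 1.2   # well-designed direct-to-chip system
pue_air = 1.5      # assumed traditional air-cooled baseline

overhead_liquid_mwh = (pue_liquid - 1) * it_load_mw * hours_per_year
overhead_air_mwh = (pue_air - 1) * it_load_mw * hours_per_year

print(f"Overhead energy, liquid cooled: {overhead_liquid_mwh:,.0f} MWh/year")
print(f"Overhead energy, air cooled:    {overhead_air_mwh:,.0f} MWh/year")
print(f"Difference: {overhead_air_mwh - overhead_liquid_mwh:,.0f} MWh/year")
# Roughly 1,750 vs 4,380 MWh/year of overhead: a saving of about 2,600 MWh annually.
```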
What’s the Difference Between Single-Phase and Dual-Phase?
There are two main approaches to direct-to-chip liquid cooling: single-phase and dual-phase.
Single-phase means the coolant stays as a liquid throughout the entire loop. It goes in cool, absorbs heat, comes out warm, transfers that heat at the CDU, and goes back in cool again. So the coolant never changes state.
Single-phase systems use treated water circuits or dielectric fluids, and they’re the right choice for the vast majority of enterprise AI deployments. They’re proven, relatively straightforward to install and maintain, and are more than capable of handling today’s GPU heat loads.
Dual-phase means the coolant is designed to change state: it arrives at the cold plate as liquid, absorbs so much heat that it partially vaporises, and then condenses back to liquid at the heat exchanger. The phase change lets each kilogram of coolant absorb far more heat, which is why dual-phase is aimed at the very highest heat densities.
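The point of the phase change is latent heat: vaporising a coolant absorbs far more energy per kilogram than warming it by a few degrees. The figures below are rough, generic values for an engineered dielectric fluid, assumed purely for illustration rather than taken from any product datasheet:

```python
# Why dual-phase exists: heat absorbed per kilogram of coolant.
# Rough, illustrative dielectric fluid properties (assumptions, not a datasheet):
#   specific heat ~1.2 kJ/(kg*K), latent heat of vaporisation ~90 kJ/kg.

cp_kj_per_kg_k = 1.2       # sensible heat capacity, kJ/(kg*K)
latent_kj_per_kg = 90.0    # latent heat of vaporisation, kJ/kg
delta_t = 10               # K, typical single-phase coolant temperature rise

sensible = cp_kj_per_kg_k * delta_t   # heat absorbed without a phase change
latent = latent_kj_per_kg             # heat absorbed by vaporising the same kilogram

print(f"Single-phase (10 K rise):  {sensible:.0f} kJ/kg")
print(f"Dual-phase (vaporisation): {latent:.0f} kJ/kg")
# Several times more heat per kilogram when the fluid is allowed to boil.
```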
What a Proper Liquid Cooling Installation Involves
This is where the difference between a cooling system that works well and one that causes headaches is really made.
Before anything is installed, we look at the actual heat loads for the hardware going in. Not just headline TDP numbers, but how the equipment really operates day to day. We size the CDU with proper headroom, not simply enough to scrape by under perfect conditions. We also plan the manifold routing carefully so pressure drops are kept low, flow rates stay consistent, and everything remains accessible for future maintenance. None of this is particularly exciting, but it’s what separates a setup that runs smoothly for years from one that starts causing problems early on.
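As a simple illustration of what sizing with headroom looks like, the sketch below works out CDU capacity and coolant flow for a small hypothetical row of GPU racks. Every input (rack count, per-rack load, headroom factor, temperature rise) is an assumption for the example, not guidance for a specific deployment:

```python
# Hypothetical CDU sizing sketch: capacity and flow with headroom.
# All inputs are illustrative assumptions, not sizing guidance for a real site.

racks = 4                   # number of liquid-cooled GPU racks on this CDU
load_per_rack_kw = 100      # design heat load per rack, kW
headroom = 1.25             # 25% margin over the design load
water_cp = 4186             # J/(kg*K), treated-water circuit
delta_t = 10                # K, coolant temperature rise across the cold plates

design_load_kw = racks * load_per_rack_kw
required_capacity_kw = design_load_kw * headroom

# Flow needed to carry the design load at the chosen temperature rise
mass_flow_kg_s = design_load_kw * 1000 / (water_cp * delta_t)
flow_lpm = mass_flow_kg_s * 60   # roughly 1 kg of water per litre

print(f"Design heat load:     {design_load_kw} kW")
print(f"CDU capacity to spec: {required_capacity_kw:.0f} kW ({headroom - 1:.0%} headroom)")
print(f"Coolant flow at design load: {flow_lpm:.0f} L/min")
# Around 500 kW of CDU capacity and roughly 570 L/min for this hypothetical row.
```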
The installation process matters as well. Cold plate connections are always made dry, with no coolant in the system. Once everything is connected, the loop is pressure tested with nitrogen at one and a half times the operating pressure and held long enough to be a meaningful check. We won’t sign off a loop until we’re confident it’s completely leak-free. Only then is coolant introduced.
After commissioning, we test thermal performance under real workloads and hand over full documentation. That includes the system design, pressure test results, flow rate data and CDU configuration. Good documentation at handover might not feel critical at the time, but it makes a big difference when something needs to be diagnosed or changed a couple of years down the line.
What It All Comes Down To
AI data centre cooling isn’t a premium option anymore. It’s what the hardware requires to operate correctly. The question isn’t really whether you need it; it’s whether it gets installed properly.
Technimove has specialist engineering teams dedicated to data centre liquid cooling installation. We scope, design, install, pressure test and commission, and we stay involved through our support services to make sure systems keep performing the way they should.