CraftRigs
Hardware Review

Liquid Cooling for AI Workstations: Worth the Hassle?

By Ellie Garcia · 4 min read

Some links on this page may be affiliate links. We disclose it because you deserve to know, not because it changes anything. Every recommendation here comes from benchmarks, not budgets.

Most people building local AI rigs overthink cooling. Custom water loops look impressive in build logs, but for the typical single-GPU inference box, a quality air cooler does the same job at half the cost and zero maintenance risk.

That said, there are scenarios where liquid cooling genuinely earns its place: dual-GPU builds, constrained cases, and 24/7 sustained inference loads on high-TDP cards.

Here's where each solution makes sense.

Why LLM Inference Is Harder on Cooling Than Gaming

Gaming is a burst workload. A typical gaming session runs your GPU at 85–95% load for seconds at a time, punctuated by load screens, menu navigation, and in-engine breathing room. Average GPU temperature in a game sits well below the peak because the heatsink gets recovery time between bursts.

LLM inference is a sustained workload. When you're generating tokens, the GPU runs at 95–100% the entire time. No relief, no breaks, no microsecond idle periods. A 30-second gaming burst becomes a 3-minute inference run at the same load. Thermal management that's "fine" for gaming can throttle under sustained AI workloads.

This matters for cooler selection. You want sustained thermal capacity, not peak burst capacity.
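To put rough numbers on the burst-vs-sustained difference, here's an illustrative sketch. The 60% gaming duty cycle and 450 W board power are assumed numbers for the comparison, not measurements:

```python
# Illustrative heat-output comparison for the same 3-minute window:
# a bursty gaming session vs. sustained token generation.
# The 60% gaming duty cycle and 450 W board power are assumptions.

def heat_joules(power_w: float, seconds: float, duty_cycle: float) -> float:
    """Energy dissipated as heat, given the fraction of time spent at load."""
    return power_w * seconds * duty_cycle

WINDOW_S = 180  # 3 minutes

gaming_kj = heat_joules(450, WINDOW_S, duty_cycle=0.6) / 1000
inference_kj = heat_joules(450, WINDOW_S, duty_cycle=1.0) / 1000

print(f"gaming:    {gaming_kj:.0f} kJ")    # ~49 kJ
print(f"inference: {inference_kj:.0f} kJ")  # 81 kJ
```

The sustained run dumps roughly 1.7x the heat into the cooler over the same window, with no idle gaps for the heatsink to shed what it has already absorbed.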

Note

Thermal throttling on GPUs: NVIDIA GPUs start pulling back boost clocks when GPU temperature exceeds ~83°C (the exact limit varies by card). During LLM inference, a throttling GPU means fewer tokens per second — you see it in real time as responses slow down mid-generation.
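If you want to watch for this on your own box, a minimal sketch along these lines polls `nvidia-smi`. The ~83°C threshold is an assumption; check your card's actual limit with `nvidia-smi -q -d TEMPERATURE`:

```python
# Sketch: flag likely thermal throttling from nvidia-smi's CSV output.
# THROTTLE_TEMP_C is an assumed threshold (~83°C on many GeForce cards);
# verify your card's limit with `nvidia-smi -q -d TEMPERATURE`.
import subprocess

THROTTLE_TEMP_C = 83

def parse_gpu_line(csv_line: str) -> dict:
    """Parse one line of --query-gpu=temperature.gpu,clocks.sm
    --format=csv,noheader,nounits output, e.g. "84, 2205"."""
    temp, sm_clock = (field.strip() for field in csv_line.split(","))
    return {"temp_c": int(temp), "sm_clock_mhz": int(sm_clock)}

def is_throttling(sample: dict) -> bool:
    return sample["temp_c"] >= THROTTLE_TEMP_C

def read_gpu(index: int = 0) -> dict:
    """Query the live GPU; requires an installed NVIDIA driver."""
    out = subprocess.check_output(
        ["nvidia-smi", f"--id={index}",
         "--query-gpu=temperature.gpu,clocks.sm",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return parse_gpu_line(out.splitlines()[0])
```

Call `read_gpu()` in a loop during a long generation: if temperature pins at the threshold while the SM clock drifts downward, the cooler isn't keeping up with the sustained load.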

Single GPU Builds: Air Cooling Is Fine

An RTX 4090 running at 450W under LLM inference is a serious thermal challenge — but a quality air cooler handles it. The GPU has its own cooler (you can't put an AIO directly on the GPU without major modding). What you're managing with case cooling is the ambient temperature around the card and the CPU.

For a single-GPU build, prioritize:

  • Good case airflow: Front-to-back airflow, not the side-panel glass-box design that traps heat. Full build guide with case recommendations here.
  • A 240mm or 280mm AIO for the CPU: if your CPU is doing any model-offloading work, it will see sustained load too, and an AIO handles that comfortably.
  • GPU card position: Keep at least one PCIe slot of gap between your GPU and any other expansion card for airflow clearance.

A tower air cooler like the Noctua NH-D15 handles sustained inference CPU loads reliably without the leak risk or pump noise of an AIO. Full CPU cooler breakdown here.

Dual-GPU Builds: This Is Where Liquid Makes Sense

Two RTX 4090s in a standard ATX case is where thermal management becomes genuinely difficult. Each card pulls 400W+ under inference. In a standard setup they're 1–2 PCIe slots apart — the bottom card exhales directly onto the top card's intake.

Custom water blocks on both GPUs plus a large radiator (480mm or dual 360mm) completely solve this. GPU temperatures drop from 85–90°C to 55–65°C under sustained dual-inference load. Boost clocks are maintained. No throttling.

The cost: $400–800 for two GPU water blocks, fittings, pump/reservoir, tubing, radiator, and coolant. Add several hours of build time and ongoing coolant maintenance.
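For loop sizing, a common enthusiast rule of thumb is roughly 150 W of heat per 120 mm of radiator with performance-tuned fans (closer to 100 W for quiet builds); both numbers are community planning conventions, not specs. A quick sketch:

```python
# Rough radiator sizing from total loop heat load, using the assumed
# enthusiast rule of thumb of ~150 W per 120 mm radiator segment with
# performance-tuned fans (a planning number, not a spec).
import math

def radiator_mm_needed(component_watts: list[float],
                       watts_per_120mm: float = 150) -> int:
    """Total radiator length in mm, rounded up to whole 120 mm segments."""
    segments = math.ceil(sum(component_watts) / watts_per_120mm)
    return segments * 120

# Two RTX 4090s under sustained inference:
print(radiator_mm_needed([450, 450]))       # 720 -> e.g. dual 360 mm
# Add a CPU block at an assumed ~250 W and the target grows:
print(radiator_mm_needed([450, 450, 250]))  # 960
```

Note how the dual-GPU case lands on dual 360mm radiators, consistent with the recommendation above; drop to 100 W per segment for a quieter target and the requirement grows accordingly.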

Tip

Compromise option for dual GPU: deshroud both cards — remove the stock fan shrouds and mount 120mm fans directly on the existing heatsinks, exhausting upward — and combine that with aggressive case exhaust. Cheaper than a full loop and notably better than stock coolers stacked in a closed case. GPU cooling mods guide here.

Pre-built Liquid-Cooled AI Servers

A few vendors (Bizon, Comino) sell factory liquid-cooled multi-GPU systems purpose-built for inference. These aren't cheap — a liquid-cooled 6x RTX 4090 rack from Comino runs $30,000+ — but for production inference infrastructure they eliminate the integration work and come with support.

For home lab and small team use cases, this is overkill. But worth knowing the category exists if you're scaling.

Caution

Custom loop maintenance is real: A water-cooled loop needs coolant checks every 6–12 months, and fittings can develop slow leaks over years of temperature cycling. If you're uncomfortable doing this maintenance, or if the system sits in an office where a leak would be catastrophic, stick with air or all-in-one solutions.

The Bottom Line

Single GPU (RTX 4090 or below): don't bother with a custom loop. Good case airflow and a 240mm AIO for the CPU are plenty. Put the $400 a custom loop would cost toward a faster NVMe drive or more RAM instead.

Dual GPU or sustained 24/7 production loads on multiple high-TDP cards: liquid cooling pays for itself in maintained performance and extended component life. It's not enthusiasm spending — it's engineering.

For builds between these extremes (a single high-TDP card in a small form factor case, for example), an AIO on the CPU plus aftermarket GPU cooling mods hits a practical middle ground.

See Also

cooling liquid-cooling aio custom-loop gpu workstation local-llm
