News in the Channel - issue #39

HYBRID COOLING FOR DATA CENTRES

Data centres have been part of the working environment for many years, but they are in the spotlight more than ever as AI places greater demands on them. That also means how data centres are cooled is a higher priority than ever.

“Traditional enterprise environments were often designed for more predictable workloads and lower rack densities, while many AI workloads, especially GPU-based training and inference, place far greater pressure on power and cooling,” says Karl Mendez, managing director of CWCS. “In my experience, many existing data centres were not designed for sustained higher-density deployments at scale. Some can be adapted, but operators now need to think much more carefully about power, heat removal, resilience and future growth.

“What is changing most is not just peak demand, but the consistency and intensity of that demand. AI workloads tend to run hotter, for longer and with less tolerance for fluctuation, which exposes limitations in legacy infrastructure more quickly. That is why we are seeing a shift from incremental upgrades to more fundamental infrastructure planning, with operators designing for higher baseline densities and more flexible power and cooling strategies.”

Jason Beckett, head of architecture at Hitachi Vantara, adds that traditional environments were built for steady, lower-density compute. “Whereas AI introduces highly concentrated demand driven by GPU clusters, parallel processing and unpredictable load patterns,” he says. “The impact is most visible at server rack level. Standard racks historically operate at around 5–15kW, but AI deployments can exceed 40kW and in some cases go far higher, putting pressure on power delivery, cooling and physical infrastructure.

“This creates an imbalance. Many facilities hit maximum rack power limits before they can fully populate AI hardware, forcing underutilisation or redesign.
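To make the underutilisation point concrete, the arithmetic behind it can be sketched as follows. This is a purely illustrative calculation, not a model of any specific facility: the rack power budget, server draw and slot counts below are assumed example figures chosen to echo the 5–15kW legacy and 10kW-per-GPU-server scenarios described above.

```python
import math

def racks_needed(servers, server_kw, rack_limit_kw, slots_per_rack):
    """Racks required for a fleet, whichever constraint binds first.

    A rack can host servers only up to its power budget AND its physical
    slot count; for dense AI gear the power budget usually binds first.
    All figures here are hypothetical, for illustration only.
    """
    per_rack = min(int(rack_limit_kw // server_kw), slots_per_rack)
    return math.ceil(servers / per_rack)

# Legacy case: 15 kW rack, 8 slots, 0.5 kW web servers -> space-limited.
legacy = racks_needed(16, 0.5, 15, 8)   # 2 racks (8 servers per rack)

# Same rack with assumed 10 kW GPU servers -> power-limited: only one
# server fits per rack, leaving 7 of 8 slots empty.
ai = racks_needed(16, 10, 15, 8)        # 16 racks (1 server per rack)
```

Under these assumed numbers, the same 16-server fleet jumps from 2 racks to 16, with most slots stranded by the power cap rather than by floor space, which is exactly the "hit maximum rack power limits before they can fully populate" problem Beckett describes.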

“Cooling is another constraint, as traditional air-based systems struggle with the heat generated, accelerating the shift to liquid cooling.

“As a result, operators are having to rethink layouts, retrofitting where possible, or building new AI-specific environments designed for higher density, resilience and energy demand from the outset.”

Adhum Carter Wolde-Lule, a director at Prism Power Group, agrees, adding: “The industry is in a genuine retrofit scramble right now, and new builds are having to make completely different design decisions compared to what was standard five or ten years ago.”

Advantages of hybrid cooling

With data centres requiring greater cooling power, hybrid solutions are becoming increasingly popular. “Hybrid cooling combines air and liquid cooling technologies to address varying heat loads across the data centre,” says Slawomir Dziedziula, senior director of application engineering EMEA - IT Systems at Vertiv.

“Liquid cooling is applied directly at the chip level to manage the highest density heat sources, while air cooling continues to manage lower density areas,” he explains. “This approach improves overall thermal efficiency, supports higher return air temperatures, and reduces energy consumption compared to air-only systems. Hybrid cooling also provides flexibility, allowing data centres to support mixed workloads and gradually adapt as computing densities increase.”

Flexibility is another advantage, Adhum adds. “Hybrid cooling, which typically means combining traditional air cooling with some form of liquid cooling, whether that’s direct-to-chip or rear-door heat exchangers, lets operators handle mixed workloads without locking themselves into one approach,” he explains. “Air cooling on its own simply can’t

Contributors

Karl Mendez

cwcs.co.uk

Jason Beckett

hitachivantara.com

Adhum Carter Wolde-Lule

prismpower.co.uk

CONTINUED


