

The data centre industry faces a fundamental challenge: designing infrastructure that can accommodate ever-increasing workloads while maintaining operational efficiency and financial viability.

Traditional capacity planning models, built around predictable growth curves, have become obsolete in an era where a single AI training cluster can consume as much power as a small city.
Rack densities that averaged 8-10kW three years ago now regularly exceed 60kW for AI workloads, with some specialised AI deployments reaching 120kW per rack. This exponential increase demands fundamental rethinking of cooling architecture, power distribution and physical space utilisation.
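To put those densities in context, the back-of-envelope sketch below compares the row-level power draw implied by the figures above. The per-rack numbers are taken from this article; the 20-rack row size and the 1.3 PUE used to estimate total facility draw are illustrative assumptions, not reported data.

```python
# Rough comparison of row-level power draw at the rack densities cited above.
# Per-rack figures come from the article; row size and PUE are assumptions.

RACKS_PER_ROW = 20   # assumed row size, for illustration only
ASSUMED_PUE = 1.3    # assumed facility overhead (cooling, power distribution)

def row_it_load_kw(kw_per_rack: float, racks: int = RACKS_PER_ROW) -> float:
    """IT load for one row of racks, in kW."""
    return kw_per_rack * racks

for label, density in [("legacy (~9kW/rack)", 9),
                       ("AI workload (60kW/rack)", 60),
                       ("specialised AI (120kW/rack)", 120)]:
    it_load = row_it_load_kw(density)
    facility = it_load * ASSUMED_PUE
    print(f"{label:>26}: IT load {it_load:,.0f} kW, "
          f"~{facility:,.0f} kW facility draw at PUE {ASSUMED_PUE}")
```

Under these assumptions, a single row of specialised AI racks draws more than ten times the power of a legacy row, which is the gap that cooling and distribution designs now have to close.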
Navigating the constraints of power and cooling

Cooling and power constraints are becoming increasingly apparent across the data centre industry. Recent analysis from the Uptime Institute reveals that 45% of operators reported power availability limitations in 2024, up from 36% the previous year. The bottleneck goes beyond raw capacity – it's the ability to deploy that capacity flexibly as requirements evolve.
The rise of liquid cooling technologies reflects this evolution. Direct-to-chip cooling, rear-door heat exchangers and immersion cooling are no longer niche solutions but standard components in scalable infrastructure design.