THE DATA CENTRE INTERVIEW
The acceleration of AI adoption is forcing a fundamental rethink of data centre infrastructure design, according to Andrea Ferro, VP Power and IT Systems EMEA at Vertiv.
The company is responding to projections from McKinsey showing AI-ready data centre capacity rising at 33% annually between 2023 and 2030, while Goldman Sachs forecasts AI could drive a 165% increase in data centre power demand over the same period.
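For scale, a quick back-of-the-envelope sketch turns those percentages into multipliers. It assumes the 33% figure is a compound annual growth rate and that the 165% rise is measured against a 2023 baseline – both assumptions for illustration, not figures from the article:

```python
# Back-of-the-envelope check on the growth figures cited above.
# Assumption: the 33% McKinsey figure is a compound annual growth
# rate (CAGR), and the 165% Goldman Sachs rise is measured against
# a 2023 baseline. Both assumptions are ours, for illustration.

CAGR = 0.33
YEARS = 2030 - 2023  # seven years

capacity_multiplier = (1 + CAGR) ** YEARS   # ~7.4x
power_multiplier = 1 + 1.65                 # a 165% increase = 2.65x

print(f"AI-ready capacity, 2030 vs 2023: ~{capacity_multiplier:.1f}x")
print(f"Power demand, 2030 vs 2023:      ~{power_multiplier:.2f}x")
```

Compounded over seven years, 33% annual growth implies roughly a 7.4x expansion in AI-ready capacity, while a 165% rise puts power demand at more than two and a half times today's level.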
“It’s the speed of AI adoption scaling and its impact on resource use,” Andrea says when asked about the pressures AI is placing on infrastructure.
“Today’s high-end racks already consume 120-132kW of power or more, but next-generation systems launching in 2027 and later are estimated to reach up to 600kW per rack – and potentially beyond 1MW for future generations.
“This isn’t just about scaling existing infrastructure – it’s about rethinking power delivery, thermal management and system integration,” explains Andrea. “We’re no longer dealing with isolated server deployments but with fully integrated AI factories that require increasing amounts of power to enable computing at scale. The challenge is compounded by the evolution of Perception AI through Generative AI to Agentic AI – each generation requiring more sophisticated infrastructure integration.”
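Those per-rack figures translate directly into facility-scale power. A minimal sketch, using the rack densities quoted above and a hypothetical 100-rack AI hall (the hall size is our assumption, not Vertiv’s):

```python
# Illustrative facility-level arithmetic using the per-rack power
# figures quoted above. The 100-rack hall is a hypothetical size
# chosen for illustration, not a figure from Vertiv.

RACKS = 100

rack_generations = [
    ("today's high-end", 132),     # kW per rack
    ("2027+ next-gen", 600),
    ("future generations", 1000),  # ~1MW per rack
]

for label, kw_per_rack in rack_generations:
    total_mw = RACKS * kw_per_rack / 1000
    print(f"{label:>20}: {kw_per_rack:>4} kW/rack -> {total_mw:5.1f} MW total")
```

On those assumptions, the same hall jumps from roughly 13MW to 60MW, and toward 100MW at 1MW per rack – the scale behind the call to rethink power delivery rather than simply extend it.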
Vertiv identifies the training-to-inference shift as a critical transformation
The movement of computational workloads from centralised training to distributed inference is reshaping infrastructure requirements across the sector. Andrea identifies 2025 as the year when computation shifts dramatically toward inference at the edge, moving away from the large-scale training clusters that have dominated recent infrastructure planning.
“From a technical perspective, training workloads can tolerate higher latency and benefit from centralised, high-throughput architectures. Inference, particularly for agentic AI applications, demands microsecond response times with consistent performance,” Andrea says.
Applications now process hundreds of thousands of tokens in microseconds, spanning everything from autonomous decision-making systems to robotics platforms that require real-time environmental analysis.
This shift drives demand for distributed infrastructure with different characteristics: lower latency tolerance, higher reliability requirements and the need to operate in diverse environmental conditions. Andrea describes it as a move from designing for consistent, predictable performance to designing around increasingly demanding GPU power profiles.
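A simple latency-budget calculation shows why microsecond-scale response targets push inference toward the edge: signal propagation alone rules out distant, centralised sites. The distances below are illustrative assumptions, not figures from Vertiv:

```python
# Why tight response targets force inference toward the edge:
# propagation delay alone can exceed the budget for distant sites.
# Distances are illustrative assumptions.

SPEED_IN_FIBRE_KM_S = 200_000  # ~2/3 the speed of light in vacuum

for site, distance_km in [("edge site", 10),
                          ("regional DC", 500),
                          ("remote training hub", 2000)]:
    round_trip_us = 2 * distance_km / SPEED_IN_FIBRE_KM_S * 1e6
    print(f"{site:>20}: {round_trip_us:8.0f} us round trip (propagation only)")
```

Even a 10km hop costs about 100 microseconds in the fibre alone, before any compute happens; a 2,000km round trip to a centralised cluster costs around 20 milliseconds, which is why latency-critical inference has to sit close to where decisions are made.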
“It’s managing the integration of multiple AI models operating in coordination, each with different infrastructure requirements,” says Andrea.