
THE DATA CENTRE INTERVIEW
Edge AI market growth drives data centre power and cooling requirements

The edge AI market is projected to grow from US$20.78bn in 2024 to US$66.47bn by 2030, reflecting what Andrea describes as a fundamental shift in where computation takes place.
“Enterprises need local processing for speed, control and efficiency. Edge data centres help with that,” insists Andrea. “They also support regional resilience and reduce the load on centralised systems. Perhaps more beneficial for the enterprise itself are data sovereignty and the security gains that come with it.
“But there’s another factor: energy efficiency. Processing data at the edge reduces the energy costs of data transmission and allows for more granular power management. With data centre energy consumption set to double by the decade’s end, this efficiency gain is becoming crucial.”

Physical AI represents a shift in how edge computing is conceived. Traditional edge applications might process sensor data and send results to actuators, but physical AI systems need to perceive their environment, reason about it and take physical actions in real time.
“This creates unique infrastructure challenges,” Andrea says. “These systems often need to operate in harsh environments such as manufacturing floors, outdoor installations and mobile platforms, while maintaining the computational power of a data centre.”
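To ground that perceive-reason-act description, here is a minimal sketch in Python of the kind of fixed-deadline control loop a physical AI system runs at the edge. The sensor, model and actuator stand-ins and the 20 ms budget are assumptions for illustration, not details drawn from the article; the point is the hard per-cycle latency budget that rules out a round trip to a distant cloud region.

```python
import time

LOOP_BUDGET_S = 0.020  # assumed 20 ms budget per perceive-reason-act cycle


def perceive() -> dict:
    """Stand-in for reading a local sensor (camera, lidar, PLC tag)."""
    return {"obstacle_distance_m": 0.4}


def reason(observation: dict) -> str:
    """Stand-in for on-device model inference; a cloud round trip
    alone could consume the entire loop budget."""
    return "brake" if observation["obstacle_distance_m"] < 0.5 else "cruise"


def act(command: str) -> None:
    """Stand-in for commanding an actuator (motor, valve, gripper)."""
    print(f"actuator <- {command}")


for _ in range(5):  # bounded here for demonstration; real loops run continuously
    start = time.monotonic()
    act(reason(perceive()))
    elapsed = time.monotonic() - start
    if elapsed > LOOP_BUDGET_S:
        print(f"deadline miss: {elapsed * 1000:.1f} ms")  # a real system would degrade safely
    time.sleep(max(0.0, LOOP_BUDGET_S - elapsed))
```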

“Enterprises need local processing for speed, control and efficiency. Edge data centres help with that”

Andrea Ferro, VP Power and IT Systems EMEA, Vertiv
The shift is not about abandoning cloud infrastructure, but about intelligent workload distribution. “AI needs a hybrid approach that leverages both centralised and distributed capacity,” says Andrea. “Training large models still happens in cloud clusters, but inference is increasingly happening at the edge. What we’re seeing is cloud rebalancing: organisations are strategically placing workloads based on performance requirements, data sovereignty regulations and cost optimisation.
“Some training workloads are moving closer to data sources, while inference workloads are distributed to minimise latency. The key is matching the right workload to the right infrastructure,” he adds. “Batch processing and model training can tolerate centralised processing, but real-time inference, especially for physical AI applications, requires edge deployment.”
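As a rough illustration of that matching logic, here is a hypothetical placement rule in Python. The thresholds, field names and tier labels are assumptions made for the sketch, not a policy Andrea or Vertiv describes.

```python
from dataclasses import dataclass


@dataclass
class Workload:
    name: str
    kind: str                  # "training", "batch" or "inference"
    max_latency_ms: float      # end-to-end latency the application can tolerate
    must_stay_in_region: bool  # data sovereignty constraint


def place(w: Workload) -> str:
    """Toy placement rule following the article's logic: real-time
    inference goes to the edge, sovereignty-bound work stays regional,
    latency-tolerant training and batch jobs run centrally."""
    if w.kind == "inference" and w.max_latency_ms < 50:
        return "edge site"
    if w.must_stay_in_region:
        return "regional edge data centre"
    return "centralised cloud cluster"


for w in (
    Workload("robot vision", "inference", 20, False),
    Workload("patient records analytics", "batch", 60_000, True),
    Workload("foundation model training", "training", 3_600_000, False),
):
    print(f"{w.name}: {place(w)}")
```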