CLOUD & COLOCATION
How SIN01 tackles the AI infrastructure challenge
The challenge of meeting AI workload demands sustainably has grown from a concern into a global issue.
Traditional facilities, designed for steady workloads, now struggle to accommodate the intensive processing demands of AI applications – and the challenge now extends beyond raw computational power to the fundamental architecture of how data centres operate.
“When you look at the difference between an AI data centre and a traditional one, the main difference is in the cooling,” Pablo explains.
“The difference is in how you cool down and extract the heat, because the heat is higher. The size of data hall we had usually held 150 racks – and the power was pretty much around one to two megawatts, two and a half megawatts.
“Now in the same size, we are able to bring much more IT capacity, but it is pretty much the same look and feel. The only difference is how you extract the heat.”
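Taking the quoted figures at face value – roughly 150 racks per hall at somewhere between one and two and a half megawatts – the implied per-rack density is easy to estimate. The megawatt range in the quote is approximate, so this is an illustrative sketch, not a figure from the article:

```python
# Rough per-rack power density for a traditional data hall,
# using the approximate figures quoted above (assumptions, not
# exact SIN01 specifications).
RACKS_PER_HALL = 150
HALL_POWER_MW = (1.0, 2.5)  # quoted range, roughly 1 to 2.5 MW

for mw in HALL_POWER_MW:
    kw_per_rack = mw * 1000 / RACKS_PER_HALL
    print(f"{mw} MW hall -> {kw_per_rack:.1f} kW per rack")
```

At roughly 7–17 kW per rack, such halls sit well below the 50 kW+ densities commonly discussed for AI training clusters, which is why the same floor space can now host far more IT capacity once the cooling is redesigned.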
As liquid cooling emerges as the preferred solution for high-density AI workloads, traditional data centre operators face infrastructure overhauls as well as cost implications – especially since energy typically accounts for 60% of a data centre’s operational expenditure, with half of that energy consumption directly linked to cooling solutions.
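One way to read those figures is that energy makes up about 60% of opex and cooling consumes half of that energy. Under that reading – an interpretation, not a calculation from the article – cooling alone drives roughly 30% of total operating cost, which is why efficiency gains there compound:

```python
# Sketch of why cooling efficiency dominates operating cost,
# assuming the article's rough figures: energy ~60% of opex,
# cooling ~half of that energy.
ENERGY_SHARE_OF_OPEX = 0.60
COOLING_SHARE_OF_ENERGY = 0.50

cooling_share_of_opex = ENERGY_SHARE_OF_OPEX * COOLING_SHARE_OF_ENERGY
print(f"Cooling ~ {cooling_share_of_opex:.0%} of total opex")

# A hypothetical 20% cut in cooling energy (e.g. from a move to
# liquid cooling) would then trim total opex by:
saving = cooling_share_of_opex * 0.20
print(f"A 20% cooling-energy reduction saves ~ {saving:.0%} of opex")
```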
“The more you can optimise the cooling piece, the more efficient you can be,” Pablo says.
The blueprint for sustainable AI infrastructure
Another strength of Start Campus’s approach lies in its integration with existing infrastructure.
Rather than constructing an entirely new facility, the developers recognised the potential of repurposing the decommissioned power station’s maritime connections.
The facility utilises solutions from Schneider Electric’s EcoStruxure portfolio, providing real-time monitoring and control systems that optimise energy usage across the entire operation.
July 2025