System architecture presents another opportunity. The CDU combined with a well-designed TCS loop provides a structured and scalable method for moving thermal energy between the compute environment and facility infrastructure.
When the system is designed correctly, operators can deploy capacity and scale as AI densities rise without reworking core infrastructure.
“There is also longer-term potential around reducing or even eliminating chillers, although that remains a utopian ambition rather than an immediate reality. For now, liquid cooling is the only way to efficiently, reliably, and sustainably run the chip and server technologies required for AI,” Richard says.
He identifies the primary opportunity as eliminating risk at scale through proven, repeatable liquid cooling solutions aligned with AI infrastructure development.
AI server requirements mandate cooling technology adoption

Data centre operators no longer face a choice about liquid cooling adoption for AI deployments. “If you want to deploy advanced AI systems and leverage the power of AI in your business, liquid cooling is the only way those servers are being delivered,” Richard says.
This reality represents a tipping point for the industry. Current and
30 March 2026