Data center operators are abandoning air cooling for liquid systems as AI compute clusters push power density beyond what traditional infrastructure can handle. A single AI training rack now draws 40-100 kW versus 5-10 kW for standard servers, forcing wholesale redesign of cooling and power delivery.
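To see why those densities break air cooling, consider a rough back-of-the-envelope sketch of the airflow required per rack. The 15 C intake-to-exhaust temperature rise and air properties below are illustrative assumptions, not figures from this article:

```python
# Airflow needed to remove a rack's heat load with air alone.
# Assumptions: 15 C temperature rise across the rack, air density
# 1.2 kg/m^3, specific heat of air 1005 J/(kg*K).

AIR_DENSITY = 1.2    # kg/m^3 at roughly room temperature
AIR_CP = 1005.0      # J/(kg*K)
DELTA_T = 15.0       # K, intake-to-exhaust rise

def airflow_m3s(rack_kw: float) -> float:
    """Volumetric airflow (m^3/s) required to carry away rack_kw of heat."""
    mass_flow_kg_s = rack_kw * 1000 / (AIR_CP * DELTA_T)
    return mass_flow_kg_s / AIR_DENSITY

for kw in (5, 10, 40, 100):
    flow = airflow_m3s(kw)
    print(f"{kw:>3} kW rack -> {flow:5.2f} m^3/s (~{flow * 2118.88:,.0f} CFM)")
```

Under these assumptions a 100 kW rack needs roughly 20x the airflow of a 5 kW rack, on the order of 11,000 CFM, which is why operators move the heat into liquid rather than pushing ever more air.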
Supermicro announced expanded Red Hat AI Factory certification for its liquid-cooled accelerated computing systems, targeting enterprise AI deployments that require predictable scaling across hybrid cloud environments. The systems integrate NVIDIA accelerators with purpose-built thermal management designed for sustained high-power operation.
Extreme environments are becoming testbeds for next-generation cooling. Underwater data centers eliminate freshwater consumption while using seawater for direct cooling, though engineers face corrosion and fouling challenges. "The marine environment is pretty brutal to engineer around because there's increased salinity, debris, and various kinds of corrosion of metal piping," said Daniel King, noting the added complexity compared with freshwater systems.
Network infrastructure is equally strained. Nokia is deploying AI-RAN (AI Radio Access Network) technology to support distributed AI workloads across network layers, recognizing that "Physical AI requires an intelligent network underpinned by AI-RAN so operators can fully harness distributed intelligence," according to NVIDIA telecom executive Ronnie Vasishta. The technology enables real-time coordination between edge devices and centralized compute.
Semiconductor companies are shipping 224G retimers and optical scale-up interconnects to address bandwidth bottlenecks in training clusters. These next-generation links double the per-lane data rate of today's 112G standards, critical for models trained across thousands of accelerators.
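A quick sketch makes the jump concrete. The lane count and encoding overhead below are illustrative assumptions (link widths vary by design), not specifications from any vendor named here:

```python
# Per-link bandwidth comparison between 112G and 224G SerDes lanes.
# Assumptions: an x8 link and ~3% FEC/encoding overhead.

LANES_PER_LINK = 8
ENCODING_OVERHEAD = 0.03

def link_gbytes_per_s(lane_gbps: float) -> float:
    """Approximate usable one-direction link bandwidth in GB/s."""
    raw_gbps = lane_gbps * LANES_PER_LINK
    return raw_gbps * (1 - ENCODING_OVERHEAD) / 8  # bits -> bytes

for lane in (112, 224):
    print(f"{lane}G x{LANES_PER_LINK} link ~ {link_gbytes_per_s(lane):.0f} GB/s per direction")
```

Under these assumptions an x8 link moves from roughly 109 GB/s to roughly 217 GB/s per direction, bandwidth that directly limits how fast gradients and activations can be exchanged across a training cluster.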
Edge deployment requires new security models. Veea Inc. open-sourced its Lobster Trap scanning system, which validates AI agents in under one millisecond, fast enough to add no meaningful latency. The company's TerraFabric platform enables autonomous system updates at edge sites while maintaining stability across large-scale deployments.
The infrastructure race extends beyond cooling and networks to power delivery. Facilities require dedicated substations and backup systems as single AI clusters draw 50-150 megawatts—enough to power 50,000 homes. Grid constraints now determine data center locations more than network connectivity or real estate costs.
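As a sanity check on the household comparison, here is a minimal sketch. The per-home figures are assumptions for illustration (about 1.2 kW average US household draw, or 3 kW if sized toward peak demand), not numbers from this article:

```python
# How many homes a cluster's draw corresponds to, under two
# assumed per-home loads: average draw vs. a peak-leaning figure.

HOME_LOADS_KW = {"average (~1.2 kW)": 1.2, "peak-leaning (3 kW)": 3.0}

def homes_powered(cluster_mw: float, home_kw: float) -> int:
    return int(cluster_mw * 1000 / home_kw)

for mw in (50, 150):
    for label, kw in HOME_LOADS_KW.items():
        print(f"{mw} MW at {label}: ~{homes_powered(mw, kw):,} homes")
```

At a peak-leaning 3 kW per home, 150 MW works out to the 50,000-home figure cited; at average draw, the same cluster would cover well over 100,000 homes, so the comparison depends heavily on the assumed per-home load.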
Industry analysts estimate AI-native infrastructure buildout will exceed $50 billion through 2027 as hyperscalers and enterprises retrofit existing facilities and build greenfield sites designed for AI workloads from the ground up.

