AI infrastructure vendors are shipping interconnect technologies at 1.6 terabits per second and higher to handle escalating model training and deployment workloads. The push spans Ethernet advances, optical solutions, and specialized fabrics connecting AI accelerators at data center scale.
Supermicro is delivering Red Hat-certified systems for NVIDIA-powered AI factories. "Our validated solutions for the Red Hat AI Factory with NVIDIA help ensure customers can combine high-performance, purpose-built systems with a robust, enterprise-grade software platform," said Vik Malyala of Supermicro. The integration aims to simplify deployment and scaling of enterprise AI workloads across hybrid cloud environments.
Nokia is advancing AI-RAN partnerships to enable distributed intelligence across network layers. "Physical AI requires an intelligent network underpinned by AI-RAN so operators can fully harness distributed intelligence across every layer of the network," said Ronnie Vasishta of NVIDIA, positioning the technology as foundational for AI-native 6G systems.
Offshore wind-powered underwater data centers are emerging as an alternative to land-based facilities. These installations aim to leverage ocean cooling and renewable energy directly at the source. However, the marine environment presents engineering challenges. "There's the increased salinity, debris, and various kinds of corrosion and fouling of metal piping that you wouldn't have in a freshwater environment," said Daniel King, highlighting material and maintenance hurdles that must be overcome for commercial viability.
Veea Inc. released TerraFabric, a platform for edge AI and autonomous systems, alongside Lobster Trap security scanning. The company claims scans complete in under a millisecond, adding no meaningful delay. "Based on large scale deployments to date, we believe this allows organizations to accelerate updates and deploy new capabilities without compromising overall system stability," Veea stated.
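Sub-millisecond latency claims of this kind are straightforward to sanity-check with a microbenchmark. The sketch below shows a generic harness for doing so; `scan_payload` is a hypothetical stand-in for whatever scanning call is under test, not Veea's actual API, and the iteration counts are arbitrary.

```python
import statistics
import time

def scan_payload(data: bytes) -> bool:
    """Hypothetical stand-in for the scanning call under test."""
    return b"\x00" not in data  # placeholder check, not real scanning logic

def benchmark(payload: bytes, iterations: int = 10_000) -> None:
    # Warm up caches and any lazy initialization before timing.
    for _ in range(100):
        scan_payload(payload)
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        scan_payload(payload)
        samples.append((time.perf_counter() - start) * 1e3)  # milliseconds
    samples.sort()
    p50 = statistics.median(samples)
    p99 = samples[int(len(samples) * 0.99)]
    print(f"p50={p50:.4f} ms  p99={p99:.4f} ms  (budget: <1 ms)")

benchmark(b"example payload" * 64)
```

Reporting tail latency (p99) rather than just the average matters here, since a scan that is fast on average but occasionally stalls would still disrupt update pipelines.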
The infrastructure buildout addresses bottlenecks in model training clusters and inference deployment. Connectivity between GPU nodes, storage systems, and edge locations determines throughput for distributed training and real-time inference workloads. Providers are advancing from 400G to 800G and 1.6T line rates, while semiconductor manufacturers move to newer process nodes to support higher-bandwidth chips.
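To make the link between line rate and training throughput concrete, the back-of-envelope sketch below estimates gradient synchronization time for a ring all-reduce at 400G, 800G, and 1.6T link speeds. The model size, node count, and efficiency factor are illustrative assumptions, not figures from any vendor mentioned above.

```python
# Back-of-envelope estimate of gradient all-reduce time at different link rates.
# All parameters below are illustrative assumptions, not vendor figures.

MODEL_PARAMS = 70e9      # assumed 70B-parameter model
BYTES_PER_GRAD = 2       # fp16/bf16 gradients
NODES = 64               # assumed cluster size
LINK_EFFICIENCY = 0.8    # assumed achievable fraction of line rate

def ring_allreduce_seconds(line_rate_gbps: float) -> float:
    """Time for one ring all-reduce over the full gradient set.

    A ring all-reduce sends roughly 2 * (N - 1) / N of the payload
    over each link, i.e. about 2x the gradient size in total traffic.
    """
    payload_bits = MODEL_PARAMS * BYTES_PER_GRAD * 8
    traffic_bits = 2 * (NODES - 1) / NODES * payload_bits
    effective_bps = line_rate_gbps * 1e9 * LINK_EFFICIENCY
    return traffic_bits / effective_bps

for rate in (400, 800, 1600):
    print(f"{rate}G link: ~{ring_allreduce_seconds(rate):.2f} s per full sync")
```

Under these assumptions, each doubling of line rate halves the synchronization stall per step, which is the direct mechanism by which the 400G-to-1.6T progression raises accelerator utilization in distributed training.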
The ecosystem transformation reflects the compute intensity of foundation models and the deployment scale of AI applications. Organizations face a choice between traditional data center expansion and novel approaches like offshore facilities, which trade operational complexity for gains in energy and cooling efficiency.