Co-packaged optics (CPO) laser orders are increasing as AI datacenter operators deploy optical circuit switching to solve GPU-to-GPU communication bottlenecks in large training clusters.
Optical switching fabrics are replacing electrical interconnects in AI infrastructure as cluster sizes grow beyond 10,000 GPUs. Traditional copper-based connections struggle to meet bandwidth and latency requirements at the 100,000+ accelerator scale planned for 2026-2027 deployments.
CPO technology integrates laser light sources directly with switch silicon, eliminating discrete optical transceivers. This architecture reduces power consumption by 30-40% compared to pluggable optics while increasing bandwidth density. The approach supports 1.6 Tbps per port today, with roadmaps to 3.2 Tbps.
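A back-of-the-envelope sketch shows what those percentages mean at fleet scale. The per-port wattage, switch radix, and installation size below are illustrative assumptions, not vendor specifications:

```python
# Back-of-the-envelope check of the 30-40% power claim at fleet scale.
PLUGGABLE_W_PER_PORT = 25.0   # assumed draw of a 1.6 Tbps pluggable module
PORTS_PER_SWITCH = 64         # assumed switch radix
SWITCHES = 1_000              # assumed hyperscale installation

for savings in (0.30, 0.40):  # the 30-40% range cited above
    cpo_w_per_port = PLUGGABLE_W_PER_PORT * (1 - savings)
    fleet_delta_kw = savings * PLUGGABLE_W_PER_PORT * PORTS_PER_SWITCH * SWITCHES / 1_000
    print(f"{savings:.0%} savings -> {cpo_w_per_port:.1f} W/port, "
          f"~{fleet_delta_kw:,.0f} kW saved fleet-wide")
```

Under these assumptions, the savings land in the hundreds of kilowatts per thousand switches, which is why the figure matters to operators at hyperscale.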
Optical circuit switches reconfigure light paths in nanoseconds without optical-to-electrical conversion, enabling dynamic bandwidth allocation across GPU clusters. Meta, Microsoft, and Google are testing optical switching architectures to support training runs that shuttle petabytes of gradient data between thousands of accelerators.
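To make "dynamic bandwidth allocation" concrete, here is a minimal Python sketch of a circuit scheduler: given a traffic-demand matrix between GPU groups, it greedily dedicates one light path per source-destination port pair. The greedy matching is an illustrative stand-in, not any vendor's scheduling algorithm:

```python
# Minimal sketch of how an optical circuit switch scheduler might pick
# light paths from a traffic-demand matrix. Greedy matching is used here
# purely for illustration.

def greedy_circuit_assignment(demand):
    """demand[i][j] = pending bytes from GPU group i to group j.
    Returns (src, dst) circuits, at most one per input and output port."""
    edges = sorted(
        ((demand[i][j], i, j) for i in range(len(demand))
         for j in range(len(demand)) if i != j),
        reverse=True,
    )
    used_src, used_dst, circuits = set(), set(), []
    for volume, src, dst in edges:
        if volume > 0 and src not in used_src and dst not in used_dst:
            circuits.append((src, dst))  # dedicate a light path to this pair
            used_src.add(src)
            used_dst.add(dst)
    return circuits

# Example: 4 GPU groups with a skewed, ring-like demand pattern.
demand = [
    [0, 90, 10, 5],
    [5, 0, 80, 10],
    [10, 5, 0, 70],
    [60, 10, 5, 0],
]
print(greedy_circuit_assignment(demand))  # -> [(0, 1), (1, 2), (2, 3), (3, 0)]
```

Fast reconfiguration is what makes this kind of per-phase circuit assignment practical: the fabric can be rewired between collective-communication steps rather than fixed at deployment time.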
The commercial traction follows years of optical interconnect research at chip companies and hyperscalers. NVIDIA's NVLink-over-fiber and similar technologies demonstrated viability but required external optics. CPO integration reduces cost per bit while improving thermal performance in dense rack configurations.
AI inference infrastructure is also driving optical adoption. Recommendation systems and large language model serving require low-latency communication between distributed GPU pools. Optical switching offers sub-microsecond reconfiguration, versus the milliseconds it takes to reroute traffic across an electrical fabric.
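A rough calculation shows why that difference matters at these link rates. The 1.6 Tbps port speed comes from the figures above; the serving-traffic framing and reconfiguration windows are assumptions for illustration:

```python
# Rough illustration: bytes left queued on a 1.6 Tbps port while the
# fabric reconfigures. Windows are assumed values for illustration.
LINK_BPS = 1.6e12  # 1.6 Tbps port, per the figure cited earlier

for label, window_s in (("sub-microsecond (optical)", 1e-6),
                        ("milliseconds (electrical reroute)", 1e-3)):
    stalled_bytes = LINK_BPS / 8 * window_s
    print(f"{label}: ~{stalled_bytes / 1e6:.1f} MB queued per port per event")
```

Under these assumptions, a millisecond-scale reroute strands roughly a thousand times more in-flight data per port than a sub-microsecond optical switch, a gap that shows up directly in tail latency for serving workloads.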
Supply chain indicators show laser manufacturers ramping production for CPO deployments. Industry sources report orders spanning 2026-2027 with volumes suggesting thousands of optical switches for hyperscale installations.
The technology shift addresses a fundamental scaling constraint: GPU compute performance doubles roughly every 18 months, while electrical interconnect bandwidth grows 20-30% annually. Optical interconnects bridge this gap with bandwidth scaling that can keep pace with compute roadmaps.
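The gap compounds quickly, as a quick calculation with the growth rates cited above shows:

```python
# Quick arithmetic behind the scaling gap: compute doubling every 18
# months versus 20-30% annual interconnect bandwidth growth.
compute_growth = 2 ** (12 / 18)  # per-year compute factor, ~1.59x

for interconnect_growth in (1.20, 1.30):
    for years in (3, 5):
        gap = (compute_growth / interconnect_growth) ** years
        print(f"{interconnect_growth - 1:.0%}/yr interconnect, {years} yrs: "
              f"compute outpaces I/O by {gap:.1f}x")
```

Even at the optimistic 30% annual interconnect growth, compute pulls ahead by nearly 3x over five years, which is the shortfall optics is being asked to close.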
Challenges remain in standardization and thermal management. CPO requires precise temperature control for laser stability, since laser wavelength and output power drift with temperature. Industry groups are developing specifications, but deployment timelines depend on resolving integration complexity with existing datacenter cooling infrastructure.
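As an illustration of what "precise temperature control" entails, here is a toy proportional-integral loop driving a thermoelectric cooler (TEC). The setpoint, gains, and readings are invented for illustration and are not drawn from any CPO specification:

```python
# Toy sketch of closed-loop laser temperature control. All constants
# below are illustrative assumptions, not real device parameters.
SETPOINT_C = 45.0   # assumed laser operating temperature
KP, KI = 0.8, 0.05  # assumed proportional/integral gains

def pi_controller(temps):
    """Return TEC drive commands for a sequence of temperature readings."""
    integral, commands = 0.0, []
    for temp in temps:
        error = SETPOINT_C - temp
        integral += error
        # Negative drive = cooling, positive = heating.
        commands.append(KP * error + KI * integral)
    return commands

# Example: readings drifting upward as rack inlet temperature rises.
readings = [45.0, 45.4, 45.9, 46.3, 46.0, 45.5]
for t, cmd in zip(readings, pi_controller(readings)):
    print(f"{t:.1f} C -> TEC drive {cmd:+.2f}")
```

The point of the sketch is the coupling the article describes: this control loop must hold its setpoint inside a switch package whose heat output depends on the surrounding rack cooling, which is why integration with existing datacenter cooling is the open question.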

