HPE and Juniper unveiled cloud-native routers and high-density routing platforms at Mobile World Congress 2026, targeting telecommunications infrastructure optimized for AI workloads.
"AI infrastructure is a critical growth driver for service providers," said Rami Rahim, an HPE executive. "HPE is committed to helping these customers lead in the AI era by building intelligent, next-generation networks that can support complex operations, rising data traffic, and the transformative capabilities of AI."
HPE's approach combines high-performance routing and switching with AI-native automation and telco cloud architectures. The company aims to help customers virtualize and modernize networks while simplifying operations across compute, storage, and networking layers.
"HPE delivers the architectural and operational foundation service providers need to fully participate in the AI value chain," said Ray Mota, an industry analyst. According to the company, the infrastructure enables secure, autonomous, on-demand digital services at scale.
AI workloads are reshaping traffic patterns and creating new demands for uplink capacity, latency reduction, and overall network throughput. Traditional network architectures struggle with the bandwidth requirements of distributed AI training and inference operations.
The announcements accompany broader industry shifts in GPU virtualization, power delivery systems, and financing models designed to accelerate enterprise AI adoption. Service providers face pressure to upgrade infrastructure as AI applications proliferate.
HPE positions its routing, switching, and automation portfolio as preparing operators for AI-driven connectivity demands. The company's integrated approach addresses compute, storage, and networking simultaneously rather than treating them as separate upgrade cycles.
Cloud-native router designs enable faster deployment and more flexible scaling compared to legacy hardware. High-density platforms pack more routing capacity into smaller footprints, reducing data center space requirements.
The convergence of AI optimization across networking, compute, and power domains reflects infrastructure providers' recognition that AI workloads require holistic system design. Bottlenecks in any single component limit overall performance.
With modernized network infrastructure, service providers can offer differentiated AI capabilities to enterprise customers. Built-in security features address growing concerns about AI system vulnerabilities and data protection in distributed computing environments.