From Edge to Core: Building the Next Generation of AI Infrastructure

Leveraging the benefits of local computation for next-generation applications

Oct 29th, 2025

 

Dr. Carlos Berto, Director of Network Engineering at Axiom

 

Edge computing has been a talking point for years but has yet to truly make its mark on the IT landscape. It maximizes the efficiency of real-time data processing across a wide variety of applications, but to harness this potential, our data centers and network infrastructures need further optimization.

Let’s discuss the potential benefits of edge computing, as well as the ideal solutions that will help usher in this next generation of data center computing. 

Bridging Edge Nodes and Hyperscale Data Centers

Edge computing brings computation closer to users. Processing data near its point of use significantly reduces latency, optimizes bandwidth, and enhances overall performance.

Combined with local caching and content delivery networks (CDNs), it accelerates access to data and creates redundant connectivity paths that serve as a contingency during outages.

One of the key benefits edge computing brings to the table is a more efficient interconnect between servers at the source and the hyperscale data center. Edge nodes, which are compact, localized data centers, have a symbiotic relationship with hyperscale facilities: they vastly improve the efficacy of local decision-making, freeing hyperscale data centers to focus on long-term storage and analytics.

Cutting latency is critical for applications like autonomous driving, augmented reality (AR), and IoT, many of which demand sub-millisecond response times. Edge computing enables immediate decision-making, leaving broader computational tasks to hyperscale centers.

Edge nodes also optimize bandwidth, cutting bandwidth costs by up to 35% by filtering and compressing data before transmission; this is especially beneficial in areas with limited connectivity. The sketch below illustrates the idea.
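As a concrete illustration of that filter-then-compress pattern, here is a minimal Python sketch. The sensor readings, alert threshold, and summary fields are hypothetical stand-ins for values a real deployment would tune:

```python
import gzip
import json

# Hypothetical thresholds for illustration; real values are deployment-specific.
ALERT_THRESHOLD = 75.0   # only readings above this are forwarded individually

def preprocess_for_uplink(readings: list[dict]) -> bytes:
    """Filter sensor readings at the edge, then compress the survivors.

    Routine data points are collapsed into summary statistics, so only
    alerts and a compact summary ever cross the WAN link to the core.
    """
    alerts = [r for r in readings if r["value"] > ALERT_THRESHOLD]
    values = [r["value"] for r in readings]
    summary = {
        "count": len(values),
        "mean": sum(values) / len(values) if values else 0.0,
        "max": max(values, default=0.0),
    }
    payload = json.dumps({"alerts": alerts, "summary": summary}).encode()
    return gzip.compress(payload)  # shrink the payload before transmission

# Example: 1,000 raw readings collapse to a handful of alerts plus one summary.
readings = [{"sensor": f"s{i}", "value": float(i % 100)} for i in range(1000)]
uplink_bytes = preprocess_for_uplink(readings)
print(f"compressed uplink payload: {len(uplink_bytes)} bytes")
```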

Hybrid ecosystems and their impact on AI

Modern infrastructure is shifting toward hybrid models that combine terrestrial or floating edge nodes with hyperscale centers. This architecture supports AI factories, smart cities, and distributed cloud platforms, offering speed, sustainability, and resilience.

In AI workloads, edge nodes enable real-time inferencing near users or devices, a core function for generative AI and autonomous systems. Hyperscale centers handle massive model training, while edge nodes deploy updates, collect telemetry, and preprocess data. Modular edge centers also manage thermal output efficiently, supporting sustainable AI scaling.
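Here is a rough sketch of that division of labor, assuming a placeholder scoring model and an in-memory telemetry buffer in place of a real inference runtime and transport layer:

```python
import json
import time
from collections import deque

class EdgeInferenceNode:
    """Illustrative split of duties: the edge node serves low-latency
    inference from a locally deployed model, while raw telemetry is
    batched for shipment to the hyperscale center for training."""

    def __init__(self, model_version: str = "v1"):
        self.model_version = model_version  # pushed down from the core
        self.telemetry = deque(maxlen=10_000)

    def infer(self, features: list[float]) -> float:
        # Stand-in for a real lightweight model (e.g. a distilled or
        # quantized network); here just a weighted sum for illustration.
        score = sum(w * x for w, x in zip([0.4, 0.3, 0.3], features))
        self.telemetry.append({"ts": time.time(), "x": features, "y": score})
        return score

    def flush_telemetry(self) -> bytes:
        # Batched, preprocessed telemetry destined for the hyperscale
        # center; in practice this would travel over the interconnect.
        batch = json.dumps(list(self.telemetry)).encode()
        self.telemetry.clear()
        return batch

    def apply_model_update(self, new_version: str) -> None:
        # Updates flow the other way: trained centrally, deployed here.
        self.model_version = new_version

node = EdgeInferenceNode()
print(node.infer([0.9, 0.1, 0.5]))   # fast local decision
payload = node.flush_telemetry()     # periodic bulk upload to the core
node.apply_model_update("v2")        # centrally trained model pushed down
```

Inference stays local and fast; telemetry ships upstream in bulk, and model updates flow back down from the core.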

Enhancing global CDNs

Edge computing is also changing the way content is delivered. Urban edge nodes cache popular content such as videos, games, and software updates, which reduces strain on central servers and improves the user experience. Hybrid models create multi-tiered delivery paths, ensuring uptime during outages or traffic surges. Strategically placed edge nodes extend coverage to underserved regions, enhancing global reach.
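The edge-caching behavior can be illustrated with a toy TTL cache; the keys, TTL value, and origin_fetch callback below are assumptions for demonstration, not a production CDN design:

```python
import time

class EdgeCache:
    """A toy TTL cache standing in for an edge node's caching layer:
    popular objects are served locally; misses fall back to the origin."""

    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        self.store: dict[str, tuple[float, bytes]] = {}

    def get(self, key: str, fetch_from_origin) -> bytes:
        entry = self.store.get(key)
        if entry is not None:
            stored_at, content = entry
            if time.time() - stored_at < self.ttl:
                return content              # cache hit: served at the edge
        content = fetch_from_origin(key)    # cache miss: one trip to the core
        self.store[key] = (time.time(), content)
        return content

def origin_fetch(key: str) -> bytes:
    # Placeholder for a request to the central/origin server.
    return f"content for {key}".encode()

cache = EdgeCache(ttl_seconds=60)
cache.get("video/episode-1", origin_fetch)  # miss: fetched from origin
cache.get("video/episode-1", origin_fetch)  # hit: served locally
```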

Hybrid models across industries

Hybrid models are reshaping key industries by combining low-latency edge processing with hyperscale scalability. We have seen strong indicators that edge computing can play a prominent role in major industries, including:

  • Healthcare: Wearables analyze data locally for instant alerts. Telemedicine benefits from reduced video lag and on-site AI diagnostics. Local processing supports HIPAA and regional compliance.
  • Finance: Edge computing minimizes latency in high-frequency trading. Local AI detects fraud in real time. Corporate actions are processed locally for faster communication and compliance.
  • Telecommunications & ISPs: Edge nodes support real-time streaming and automation in 5G networks. IoT data is processed locally, improving responsiveness. AR/VR apps benefit from nearby computing power.
  • Entertainment: Remote edge centers handle video editing and storage. Streaming platforms use edge nodes for adaptive bitrate and local caching. Real-time analytics personalize content and ads.

These hybrid models empower industries to process data locally for speed and privacy, scale globally with hyperscale backbones, optimize costs and energy use, and deliver personalized, real-time services.

400G and 800G transceivers optimize performance in edge-hyperscale interconnects

400G and 800G LR, ER, and ZR/ZR+ optical transceivers bridge edge nodes and hyperscale data centers. These new-generation transceivers enable high-capacity links between these two critical data processing points, with greater synergy than ever before, making them a seamless fit for data centers that specialize in AI, IoT, and real-time analytics.

400G and 800G transceivers offer:

  • High-speed connectivity: LR modules support campus- and metro-scale links up to 10 km; ER modules reach 40 km; ZR/ZR+ modules enable long-haul transmission of 120 km and beyond
  • Interoperability: Multi-vendor compatibility allows flexible deployment and avoids vendor lock-in
  • Scalable bandwidth: Ideal for AI training, video analytics, and smart infrastructure (see the worked example after this list)
  • Energy efficiency: Low power consumption and compact form factors suit space-constrained edge environments
  • Distributed cloud support: Optics enable ultra-fast, real-time content delivery
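To put the scalable-bandwidth point in perspective, here is a quick back-of-the-envelope calculation, assuming an illustrative 2 TB model checkpoint and idealized line rates (ignoring FEC and protocol overhead):

```python
# Back-of-the-envelope transfer times for a hypothetical 2 TB model
# checkpoint over a single edge-to-core link.
checkpoint_bytes = 2e12            # 2 TB, an illustrative checkpoint size

for link_gbps in (400, 800):
    link_bytes_per_s = link_gbps * 1e9 / 8   # line rate in bytes/second
    seconds = checkpoint_bytes / link_bytes_per_s
    print(f"{link_gbps}G link: ~{seconds:.0f} s per checkpoint")
# 400G link: ~40 s per checkpoint
# 800G link: ~20 s per checkpoint
```

Doubling the link rate halves the wall-clock cost of every checkpoint or dataset sync between edge and core.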

The future of edge computing is within reach. The first step is to optimize our data center infrastructures and establish more efficient links between edge nodes and hyperscale data centers. To learn more about edge computing, ask our engineers.

 

About the Author

Carlos Berto
Director of Network Engineering, Axiom

Carlos Berto, Ph.D., leads Axiom’s Network Engineering division, where he helps enterprise and hyperscale data centers maximize performance, reliability, and energy efficiency.

With more than 25 years of leadership experience in the telecommunications and data infrastructure industries, Dr. Berto has overseen the development of next-generation optical, memory, and interconnect technologies that power modern AI and HPC systems.

A recognized expert in advanced networking, Dr. Berto holds a Ph.D. in Engineering and has authored numerous technical insights on topics ranging from 1.6T transceivers to liquid cooling for AI clusters. His work bridges theory and practice, translating complex engineering concepts into actionable strategies that IT leaders can use to future-proof their infrastructure.

Focus Areas

  • Optical and Interconnect Technologies
  • AI and High-Performance Computing (HPC) Infrastructure
  • Network Design and Power Efficiency

Connect

Connect with Carlos on LinkedIn
View all articles by Carlos Berto