Date: 10/29/25

From Edge to Core: Building the Next Generation of AI Infrastructure

How AI workloads are redefining where compute lives

 


 

Dr. Carlos Berto, Director of Network Engineering at Axiom

 

Artificial intelligence is no longer just accelerating compute demand — it’s restructuring where compute must live. While hyperscale data centers remain essential for large-scale model training, AI inference is increasingly moving to the edge, where real-time responsiveness and bandwidth efficiency become mission-critical.

Edge computing has been discussed for years, but AI has made it urgent. With explosive data generation, model-driven applications, and latency-sensitive workloads, centralized compute alone can’t keep up. The future isn’t core or edge — it’s the bridge between hyperscale and distributed edge nodes, forming a unified fabric that delivers intelligence wherever it’s needed.

Let’s examine the forces driving this shift, and the infrastructure required to support it.

Bridging Edge and Hyperscale Data Centers

AI workloads demand more than raw compute: they demand low-latency inference, localized decision-making, and efficient data movement. Edge nodes bring processing closer to users and devices, reducing round-trip latency and offloading hyperscale GPU clusters from handling every inference request.

 

  • Latency: Sub-10ms response windows are required for autonomous control systems, real-time analytics, and immersive AR/VR.
  • Bandwidth: Moving raw data to hyperscale for inference is costly and inefficient. Preprocessing and compressing data streams at the edge can cut bandwidth load by up to 35%.
  • Data Gravity & Sovereignty: As AI systems ingest more sensor, imaging, financial, or behavioral data, processing at the edge helps retain data locally when privacy or compliance demands it.
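The edge-filtering pattern above can be sketched in a few lines: summarize a window of high-rate sensor readings locally and send only the compact summary upstream. This is a minimal illustration, not any particular SDK; the schema, window size, and device naming are assumptions.

```python
import json
import statistics

def summarize_window(readings, device_id):
    """Reduce a window of raw sensor readings to one compact record.

    Instead of shipping every sample to the hyperscale core, the edge
    node sends a single summary per window (hypothetical schema).
    """
    return {
        "device": device_id,
        "count": len(readings),
        "mean": statistics.fmean(readings),
        "max": max(readings),
        "min": min(readings),
    }

# One second of 1 kHz telemetry: 1,000 raw samples...
raw = [20.0 + 0.001 * i for i in range(1000)]
summary = summarize_window(raw, device_id="sensor-7")

# ...collapses to a single small upstream record.
raw_bytes = len(json.dumps(raw))
summary_bytes = len(json.dumps(summary))
print(f"raw={raw_bytes} B, summary={summary_bytes} B")
```

Real deployments layer compression, batching, and anomaly-triggered escalation on top of this, but the bandwidth win comes from the same idea: move the reduction step to where the data is generated.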

 

Edge nodes are not replacing the core — they’re relieving it. Hyperscale clusters continue handling model training, heavy inference bursts, and long-term analytics. Edge nodes execute immediate inference, send summarized telemetry upstream, and serve as distributed AI control planes. The result:

Faster decisions, lower bandwidth costs, and scalable AI deployment.
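One simple way to realize this edge/core split is a latency-budget dispatcher: requests with tight deadlines run on the nearest edge node, and everything else falls through to the hyperscale core. The sketch below is illustrative; the latency figures and request fields are assumptions, not a product API.

```python
from dataclasses import dataclass

EDGE_LATENCY_MS = 5.0    # assumed round-trip to a metro edge node
CORE_LATENCY_MS = 40.0   # assumed round-trip to the hyperscale core

@dataclass
class InferenceRequest:
    model: str
    deadline_ms: float  # end-to-end latency budget for this request

def route(request: InferenceRequest) -> str:
    """Pick a tier: edge for tight budgets, core for everything else."""
    if request.deadline_ms <= CORE_LATENCY_MS:
        # The core round-trip alone would blow the budget; serve locally.
        return "edge"
    # Budget is loose enough for the core, which has far more capacity.
    return "core"

print(route(InferenceRequest("gesture-detect", deadline_ms=10)))
print(route(InferenceRequest("batch-summarize", deadline_ms=500)))
```

Production schedulers also weigh model placement, edge capacity, and data locality, but the deadline check captures the core trade-off described above.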

The Rise of Hybrid AI Ecosystems

The next generation of infrastructure blends:

 

  • Hyperscale AI campuses with GPU/HBM cores
  • Modular and micro-edge nodes near end users
  • Distributed cloud fabrics spanning metro and long-haul links

 

This hybrid architecture enables:

Hyperscale Core                   Edge Nodes
Train models                      Run inference close to users/devices
Store & analyze large datasets    Preprocess & filter sensor/stream data
Long-horizon AI tasks             Real-time responsiveness
High-density cooling & power      Low-latency, distributed scale

 

Modern AI factories demand this division of labor: hyperscale for deep learning, edge for real-time intelligence at scale.

Transforming CDNs, Networks, and Real-Time Services

Edge-AI compute extends beyond data centers and into the network fabric:

Content & Compute Delivery

Urban edge nodes cache content and perform inference for personalization, speech processing, and AI-driven media delivery. This hybrid model reduces central compute load and improves resilience during peak traffic.

Industry Examples

Sector                     Edge-AI Use Case
Healthcare                 Imaging & patient telemetry inference near care sites; privacy-preserving diagnostics
Finance                    Millisecond-sensitive anomaly detection & HFT edge acceleration
Telecom                    Local inference for 5G/6G automation, RAN optimization, and AR/VR streaming
Manufacturing & Robotics   On-prem AI for motion control, defect detection, and autonomy
Entertainment & Media      Real-time rendering, adaptive streaming, and fan-facing immersive experiences

In each case, edge inference reduces backhaul cost, protects sensitive data, and accelerates decision cycles.

Axiom Optical Solutions for Edge-Hyperscale Interconnects

To unlock this hybrid AI era, high-bandwidth, low-latency optical links are essential.

Axiom 400G and 800G LR, ER, and ZR/ZR+ transceivers are engineered to connect edge and hyperscale environments with scale, efficiency, and open-ecosystem flexibility.

Axiom optics enable:

  • High-speed edge-to-core transport: Metro & long-haul links up to 120km
  • Multi-vendor interoperability: No vendor lock-in across switches and routers
  • Scalable bandwidth for AI pipelines: Model updates, telemetry, media, and inference streams
  • Power-efficient deployment: Optimized for dense, space-constrained edge builds
  • Distributed cloud support: Ultra-fast delivery & inference paths for CDN and AI workloads

Edge-AI performance is only as strong as the fabric that connects it — and Axiom ensures the interconnect layer scales alongside the compute layer.

Future-Proof Your Network for AI

As AI reshapes compute geography, organizations need infrastructure that:

  • Moves intelligence closer to users and devices
  • Scales inference without straining the core
  • Supports open, flexible, heterogeneous networks
  • Delivers cost-effective bandwidth across metro and long-haul paths

Axiom helps build this future — accelerating data, inference, and intelligence across edge and hyperscale environments.

Contact Axiom to learn how our optical and interconnect solutions enable AI at scale.

About the Author

Carlos Berto
Director of Network Engineering, Axiom

Carlos Berto, Ph.D., leads Axiom’s Network Engineering division, where he helps enterprise and hyperscale data centers maximize performance, reliability, and energy efficiency.

With more than 25 years of leadership experience in the telecommunications and data infrastructure industries, Dr. Berto has overseen the development of next-generation optical, memory, and interconnect technologies that power modern AI and HPC systems.

A recognized expert in advanced networking, Dr. Berto holds a Ph.D. in Engineering and has authored numerous technical insights on topics ranging from 1.6T transceivers to liquid cooling for AI clusters. His work bridges theory and practice, translating complex engineering concepts into actionable strategies that IT leaders can use to future-proof their infrastructure.

Focus Areas

  • Optical and Interconnect Technologies
  • AI and High-Performance Computing (HPC) Infrastructure
  • Network Design and Power Efficiency

Connect

Connect with Carlos on LinkedIn
View all articles by Carlos Berto

Follow Inside The Stack: Trends & Insights