Date: April 15, 2026

What Engineers Are Actually Buying in Q2 2026

 

A reality check on optics, fabrics, and what survives contact with deployment

By Dr. Carlos Berto, Director of Network Engineering | April 2026

 

Bottom line: 400G still carries the installed base and most brownfield work. 800G is now the default design point for new AI fabrics and spine tiers. Linear optics have moved from evaluation into controlled production, while 1.6T remains strategically important but operationally early.

 

By Q2, most infrastructure programs are no longer debating architecture. The architecture is already defined, the budget is already committed, and the first BOM has already been through review. What changes in Q2 is something more practical: which parts can actually be validated, approved, and deployed without creating a new operational problem.

That is where a lot of clean Q1 diagrams start to break down. Original part selections get swapped for what is in stock. Optics get pulled back to shorter reach to reduce heat. A fabric that was modeled as all-optical ends up with more DAC and AOC than anyone planned. Engineers are not changing direction. They are converging on what the plant, the thermal envelope, and the schedule will tolerate.

 

What changed between Q1 and Q2

• The question shifts from design intent to deployment survivability.
• Validation, availability, and thermal fit start outranking theoretical peak performance.
• Mixed-speed fabrics become more common because 400G and 800G are being used for different jobs, not because one side lost the argument.
• Linear pluggable optics are no longer a science project, but they still require a tightly controlled stack.

 

Where 400G still wins

400G is still the volume winner, but not because the market stalled. It wins because it remains the most forgiving option in the places where forgiveness matters: installed-base expansion, brownfield refresh, enterprise fabrics, and any environment where interoperability and operational stability still outrank maximum density.

• QSFP-DD 400G DR4 and FR4 remain easy to justify when teams need known-good behavior.
• Short-reach copper and AOC links still absorb a surprising amount of real deployment volume because they simplify validation and reduce thermal overhead; the sketch after this list shows the trade-off in rough numbers.
• For operators extending platform life, 400G is often the fastest path to a stable upgrade rather than the most ambitious one.
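
To make that concrete, here is a minimal Python sketch of the selection logic. It is illustrative only: the MEDIA_400G table and the pick_media helper are constructs for this article, and the reach and power figures are assumed round numbers rather than vendor specifications.

    # Illustrative sketch only: the media table and pick_media helper are
    # constructs for this article, and the reach and power figures are
    # assumed round numbers, not vendor specifications.

    # (media option, max reach in meters, approx. power per end in watts)
    MEDIA_400G = [
        ("400G DAC (passive copper)", 3, 0.5),
        ("400G AOC", 30, 8.0),
        ("400G DR4 (SMF)", 500, 10.0),
        ("400G FR4 (SMF)", 2000, 11.0),
    ]

    def pick_media(reach_m: float) -> str:
        """Pick the lowest-power option that covers the required reach."""
        candidates = [m for m in MEDIA_400G if m[1] >= reach_m]
        if not candidates:
            raise ValueError(f"no listed 400G media covers {reach_m} m")
        name, _, watts = min(candidates, key=lambda m: m[2])
        return f"{name}, ~{watts} W per end"

    for reach in (2, 25, 400, 1500):
        print(f"{reach:>5} m -> {pick_media(reach)}")

The pattern it encodes is the one that keeps showing up in Q2 reviews: use the shortest, lowest-power media that covers the run, and save the optical budget for links that genuinely need reach.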

 

Where 800G becomes the default

800G is no longer just the aggressive option. In new AI fabrics and modern spine tiers, it is increasingly the default design point. The reason is not marketing. It is math, and the back-of-envelope sketch after the list below works through it. Once port density, cable count, and power per delivered bit start dominating the discussion, 800G stops looking exotic and starts looking efficient.

• New AI back-end fabrics want fewer optical endpoints for the same aggregate bandwidth.
• Spine tiers benefit from better bandwidth density and cleaner scaling behavior.
• At the system level, 800G can be more efficient than 400G when the host, optics, and cooling profile were designed for it from the start.
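
A back-of-envelope version of that math, assuming a 102.4 Tb/s tier and round module power figures (the fabric_optics helper and every wattage here are illustrative assumptions, not measurements):

    # Back-of-envelope only: fabric_optics is an illustrative helper, and the
    # module power figures are assumed round numbers that ignore host-side
    # power. The point is how endpoint count and watts-per-bit scale.
    import math

    def fabric_optics(aggregate_tbps: float, port_gbps: int, module_w: float):
        """Optical endpoints and optics power for one side of a fabric tier."""
        ports = math.ceil(aggregate_tbps * 1000 / port_gbps)
        return ports, ports * module_w, module_w / (port_gbps / 100)

    # Example target: 102.4 Tb/s of delivered bandwidth in one tier
    for label, port_gbps, module_w in [("400G DR4", 400, 10.0),
                                       ("800G DR8", 800, 15.0)]:
        ports, total_w, w_per_100g = fabric_optics(102.4, port_gbps, module_w)
        print(f"{label}: {ports} endpoints, ~{total_w:.0f} W of optics, "
              f"~{w_per_100g:.2f} W per 100G")

Halving the endpoint count for the same delivered bandwidth is what drives the cable-count and power-per-bit argument; the exact wattages matter far less than the ratio.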

 

The quiet shift: linear optics are now production tools

This is the shift most people still underestimate. Linear pluggable optics have crossed from trial into controlled production, especially in hyperscaler and AI-centric environments where the same organization effectively owns the switch ASIC, NIC, board channel, and optics policy. That control matters because LPO is not a universal drop-in replacement for retimed modules.

Where the stack is well controlled, the payoff is obvious: lower module power, less heat concentrated at the faceplate, and lower latency than a fully retimed path. Where the environment is heterogeneous, those same advantages can disappear quickly under interoperability risk, channel-margin sensitivity, and operational ambiguity.

 

Representative optics efficiency
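
As a minimal sketch of that efficiency comparison, with assumed, illustrative power figures (real numbers vary by vendor, reach, and generation):

    # Illustrative only: both power figures are assumed round numbers, not
    # vendor specifications; LPO savings in practice depend on the channel.

    MODULES_800G = {
        "800G retimed (DSP in module)": 15.0,  # assumed watts per module
        "800G LPO (no DSP in module)": 9.0,    # assumed watts per module
    }

    PORTS_PER_SWITCH = 64  # a typical 51.2T box fully populated with 800G

    for name, watts in MODULES_800G.items():
        print(f"{name}: {watts:.1f} W/module, "
              f"{watts / 8:.2f} W per 100G, "
              f"~{watts * PORTS_PER_SWITCH:.0f} W at the faceplate")

The absolute values matter less than the two deltas: the per-module saving from dropping the DSP, and what that saving becomes once it is multiplied across a fully populated faceplate.

 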

Why optics power is now a rack-level design variable
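
Multiply module power across every switch port and NIC in a rack and it stops being a datasheet detail and becomes a line item the cooling design has to carry. A minimal sketch, with assumed counts and wattages chosen only to show the magnitude:

    # Assumed, illustrative rack: every count and wattage below is a round
    # number chosen to show the shape of the problem, not a reference design.

    SWITCHES_PER_RACK = 2     # e.g. a pair of leaf/ToR boxes
    PORTS_PER_SWITCH = 64
    SWITCH_MODULE_W = 15.0    # assumed 800G retimed module
    SERVERS_PER_RACK = 16
    NICS_PER_SERVER = 4       # AI nodes often carry several fabric NICs
    NIC_MODULE_W = 15.0       # assumed matching module on the NIC side

    switch_optics_w = SWITCHES_PER_RACK * PORTS_PER_SWITCH * SWITCH_MODULE_W
    nic_optics_w = SERVERS_PER_RACK * NICS_PER_SERVER * NIC_MODULE_W

    print(f"Switch-side optics: {switch_optics_w:.0f} W")
    print(f"NIC-side optics:    {nic_optics_w:.0f} W")
    print(f"Total optics load:  {(switch_optics_w + nic_optics_w) / 1000:.2f} kW per rack")

A few kilowatts of optics per AI rack is an illustrative figure, not a measurement, but it is enough to explain why optics power now appears in rack-level power and cooling reviews rather than only on module datasheets.

 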

Why 1.6T is still early

1.6T has moved out of the slideware category. The standards work is real, the form factors are real, and the design assumptions are already shaping roadmaps. But that is different from saying engineers are buying it at scale today. They are not.

• Absolute module power is still high enough to stress thermal budgets; the rough numbers after this list show why that holds even as efficiency per bit improves.
• Operational practices for 200G-per-lane environments are not mature everywhere yet.
• Most teams still have unexploited headroom at 800G before they need to absorb 1.6T risk.
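
The tension behind the first bullet is easiest to see with rough numbers. These module power figures are assumed for illustration, not specifications:

    # Assumed round numbers for illustration; real module power varies widely
    # by reach, vendor, and generation.

    GENERATIONS = [
        ("400G (8 x 50G lanes)", 400, 10.0),
        ("800G (8 x 100G lanes)", 800, 15.0),
        ("1.6T (8 x 200G lanes)", 1600, 25.0),
    ]

    for name, gbps, watts in GENERATIONS:
        print(f"{name}: ~{watts:.0f} W per module, "
              f"~{watts / (gbps / 100):.2f} W per 100G")

    # Efficiency per delivered bit keeps improving, but each module also
    # concentrates more absolute heat at the faceplate, and that concentration
    # is what stresses thermal budgets before the efficiency gain can be banked.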

The practical posture right now is straightforward: design with 1.6T in mind, but deploy 800G where you need dependable volume in the next cycle.

 

What engineers are actually approving in Q2

The common thread is not conservatism. It is accountability. Engineers are buying what they can defend to operations, facilities, and program management at the same time.

 

Final read

Most data center strategies look clean in Q1. By Q2, they look more practical. That is not a failure of design. It is the normal compression that happens when architecture meets validation, supply, thermals, and approval gates.

That is the real Q2 pattern: 400G remains the safe volume workhorse, 800G becomes the preferred speed for new AI-scale fabrics, linear optics graduate into controlled production, and 1.6T stays important mainly as a design horizon. The engineers who move fastest are usually not the ones with the boldest roadmap. They are the ones who know which compromises still ship cleanly.

 

If you're validating 400G or 800G deployments this quarter, the biggest risk isn't architecture; it's compatibility and thermals in production. Axiom's in-house engineering teams validate optics, cabling, and memory in real-world environments before deployment.

Deployment validation review: send an email to an Axiom engineer (no cost, no obligation)

About the Author

Carlos Berto
Director of Network Engineering, Axiom

Dr. Carlos Berto leads Axiom’s Network Engineering team, working directly with enterprise and hyperscale data centers on real-world deployment challenges across optical, memory, and interconnect infrastructure.

With over 25 years in telecommunications and data infrastructure, he has been involved in the design, validation, and troubleshooting of high-speed systems from early 10G networks through today’s 400G, 800G, and emerging 1.6T environments.

His work focuses on where systems fail outside controlled lab conditions: signal integrity breakdowns, thermal constraints, and power delivery instability in production environments, particularly in AI and HPC deployments.

Dr. Berto holds a Ph.D. in Engineering and contributes technical insights that translate field experience into practical guidance for engineering teams responsible for performance and reliability.

Focus Areas

  • Optical and Interconnect Systems (400G / 800G / 1.6T)
  • AI and HPC Infrastructure
  • Signal Integrity, Thermals, and Power Delivery

Connect

Connect with Carlos on LinkedIn
View all articles by Carlos Berto
