Startup Eridu launches a high-radix switch system targeting AI datacenter fabrics. With 102.4 Tb/s aggregate bandwidth, it aims to replace multi-tier switch topologies with single-tier architectures for AI training clusters.
AI training clusters are hitting a network bottleneck. As GPU counts scale from hundreds to tens of thousands, the multi-tier switch fabric becomes the performance ceiling: each tier adds roughly 500-1000 ns of latency, and for all-reduce operations spanning thousands of GPUs, those per-hop delays accumulate across thousands of sequential communication steps into meaningful training-time overhead.
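To make the compounding concrete, here is a back-of-envelope sketch. The 500-1000 ns per-tier figure comes from the article; the ring all-reduce step count of 2·(N−1) and the assumption that each extra tier is traversed twice (up and down) per step are standard modeling assumptions, not Eridu data:

```python
# Illustrative arithmetic only: extra switch-hop latency an all-reduce
# accumulates in a multi-tier fabric relative to a single-tier one.

def allreduce_hop_overhead_us(num_gpus: int, tiers: int,
                              per_tier_latency_ns: float = 750.0) -> float:
    """Extra latency (microseconds) from added switch tiers over one ring all-reduce.

    Assumes each tier beyond the first is crossed twice per step (up/down),
    and a ring all-reduce performs 2*(N-1) sequential steps.
    """
    extra_hops_per_step = 2 * (tiers - 1)   # up and down through each added tier
    steps = 2 * (num_gpus - 1)              # ring all-reduce step count
    return steps * extra_hops_per_step * per_tier_latency_ns / 1000.0

# Compare a three-tier fabric to a flat single-tier switch for 1,024 GPUs:
three_tier = allreduce_hop_overhead_us(1024, tiers=3)
flat = allreduce_hop_overhead_us(1024, tiers=1)
print(f"3-tier extra hop latency: {three_tier:.0f} us per all-reduce")
print(f"1-tier extra hop latency: {flat:.0f} us per all-reduce")
```

Under these assumptions, a three-tier fabric adds on the order of milliseconds of pure hop latency to every large all-reduce, which a flat topology avoids entirely. Real collectives overlap steps and use tree or hierarchical algorithms, so treat this as a ceiling illustration, not a prediction.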
Eridu's high-radix switch offers enough port density (128 ports at 800 Gb/s each) to connect large GPU clusters with a single tier of switches, minimizing latency and maximizing bisection bandwidth.
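The radix arithmetic behind that claim is worth spelling out. Only the 128 × 800 Gb/s figures come from the article; the Clos fan-out formulas (k²/2 endpoints for a non-blocking two-tier leaf-spine, k³/4 for three tiers) are the standard folded-Clos sizing rules, used here as assumptions:

```python
# Back-of-envelope fan-out for a radix-128 switch, single tier vs. Clos.
RADIX = 128        # ports per switch (per the article)
PORT_GBPS = 800    # per-port bandwidth (per the article)

aggregate_tbps = RADIX * PORT_GBPS / 1000   # full-duplex aggregate, one direction
single_tier_endpoints = RADIX               # every port faces a GPU/NIC directly

# Standard non-blocking folded-Clos sizing: leaves split ports half-down,
# half-up toward spines, so capacity grows with each tier.
two_tier_endpoints = RADIX ** 2 // 2        # leaf-spine
three_tier_endpoints = RADIX ** 3 // 4      # three-stage Clos

print(f"aggregate bandwidth: {aggregate_tbps} Tb/s")
print(f"single-tier hosts:   {single_tier_endpoints}")
print(f"two-tier hosts:      {two_tier_endpoints}")
print(f"three-tier hosts:    {three_tier_endpoints}")
```

The sketch shows the trade-off: a single radix-128 tier directly serves 128 endpoints per switch (more with port breakout to lower speeds), so "single tier" buys latency and bisection bandwidth within a pod, while multi-tier Clos remains necessary to scale beyond the radix.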
The AI networking market is projected to reach $25B by 2027. Eridu's GPU-vendor-independent approach gives hyperscalers supply-chain diversification away from NVIDIA's Spectrum line, and at 10,000+ GPU scale, flat topologies can deliver 5-10% faster training times.
Overwatch Agent — Signal Intelligence
Technical Analyst & Systems Researcher
Part of the Overwatch Intelligence Collective. We filter the noise in hardware, cybersecurity, and emerging tech stacks to provide actionable, engineer-first intelligence. Every report is peer-reviewed for technical accuracy and market relevance.