The challenge of building efficient cloud AI infrastructure has always been about scale – not simply adding more servers, but making those servers work together seamlessly. At Huawei Connect 2025, the Chinese technology giant unveiled an approach that changes how cloud providers and enterprises can pool computing resources.
Instead of managing thousands of independent servers that communicate via traditional networking, Huawei’s SuperPod technology creates what executives describe as unified systems in which physical infrastructure behaves as a single logical machine. For cloud providers building AI services and enterprises deploying private AI clouds, this represents a significant shift in how infrastructure can be architected, managed, and scaled.
The cloud infrastructure problem SuperPod solves
Traditional cloud AI infrastructure faces a persistent challenge: as clusters grow larger, computing efficiency actually decreases. This happens because the individual servers in a cluster remain largely independent, communicating through network protocols that introduce latency and complexity. The result is what industry professionals call “scaling penalties” – adding more hardware doesn’t proportionally increase usable computing power.
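As a rough illustration of how a scaling penalty erodes usable compute, consider this minimal Python sketch. The 1 PFLOPS-per-node figure and the logarithmic efficiency decay are illustrative assumptions, not measurements from Huawei or any vendor:

```python
import math

# Toy model of cluster "scaling penalties" -- an illustrative sketch only.
# The 2% log-scaled penalty and 1 PFLOPS-per-node figure are assumptions
# for demonstration, not numbers from Huawei or any other vendor.
def effective_pflops(nodes: int, per_node_pflops: float = 1.0,
                     penalty: float = 0.02) -> float:
    """Usable compute when per-node efficiency decays as the cluster grows."""
    efficiency = 1.0 / (1.0 + penalty * math.log2(nodes))
    return nodes * per_node_pflops * efficiency

for n in (8, 64, 512, 4096):
    ideal = n * 1.0  # perfect linear scaling
    actual = effective_pflops(n)
    print(f"{n:>5} nodes: ideal {ideal:6.0f} PFLOPS, "
          f"effective {actual:6.1f} PFLOPS ({actual / ideal:.0%})")
```

Under these toy assumptions, a 4,096-node cluster delivers only about 81% of its nominal compute – the gap SuperPod is pitched at closing.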
Yang Chaobin, Huawei’s Director of the Board and CEO of the ICT Business Group, explained that the company developed “the groundbreaking SuperPod architecture based on our UnifiedBus interconnect protocol. The architecture deeply interconnects physical servers so that they can learn, think, and reason like a single logical server.”
This isn’t just faster networking; it’s a re-architecting of how cloud AI infrastructure can be built.
The technical foundation: UnifiedBus protocol
At the core of Huawei’s cloud AI infrastructure approach is UnifiedBus, an interconnect protocol designed specifically for large-scale resource pooling. The protocol addresses two critical infrastructure challenges that have limited cloud AI deployments: maintaining reliability over long distances in data centres, and optimising the bandwidth-latency trade-off that affects performance.
Traditional data centre connectivity relies on either copper cables (high bandwidth, short range, typically connecting just two racks) or optical cables (longer range, but with reliability concerns at scale). For cloud providers building infrastructure to support thousands of AI processors, neither option is ideal.
Eric Xu, Huawei’s Deputy Chairman and Rotating Chairman, said solving these fundamental connectivity challenges was essential to the company’s cloud AI infrastructure strategy. Drawing on what he described as Huawei’s three decades of connectivity expertise, Xu detailed the breakthrough features: “We have built reliability into every layer of our interconnect protocol, from the physical layer and data link layer, all the way up to the network and transmission layers. There is 100-ns-level fault detection and protection switching on optical paths, making any intermittent disconnections or faults of optical modules imperceptible at the application layer.”
The result is what Huawei describes as an optical interconnect that is 100 times more reliable than conventional approaches, supporting connections over 200 metres in data centres while maintaining the reliability characteristics typically associated with copper connections.
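To put the 100-ns claim in perspective, a quick back-of-envelope comparison helps. The switchover time is Huawei’s figure; the round-trip and collective-operation timescales below are generic industry ballparks assumed for illustration:

```python
# Back-of-envelope: why a 100 ns protection switch can be "imperceptible
# at the application layer". The switchover figure is Huawei's claim; the
# reference timescales are generic ballpark values, not from the keynote.
FAULT_SWITCH_S = 100e-9    # claimed fault detection + optical switchover
INTRA_DC_RTT_S = 10e-6     # ~10 us: a typical intra-data-centre round-trip
COLLECTIVE_OP_S = 1e-3     # ~1 ms: a large all-reduce across many NPUs

print(f"switchover vs. one round-trip:  {FAULT_SWITCH_S / INTRA_DC_RTT_S:.1%}")
print(f"switchover vs. one collective:  {FAULT_SWITCH_S / COLLECTIVE_OP_S:.3%}")
# -> 1.0% of a single round-trip and 0.010% of one collective operation,
#    so a transient optical fault is absorbed below the software's noise floor.
```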
SuperPod configurations: From enterprise to hyperscale
Huawei’s cloud AI infrastructure product line spans multiple scales, each designed for different deployment scenarios. The Atlas 950 SuperPod represents the flagship implementation, featuring up to 8,192 Ascend 950DT AI processors configured in 160 cabinets occupying 1,000 square metres of data centre space.
The system delivers 8 EFLOPS in FP8 precision and 16 EFLOPS in FP4 precision, with 1,152 TB of total memory capacity. The interconnect specifications reveal the architecture’s ambitions: 16 PB/s of bandwidth across the entire system.
As Xu noted, “This means a single Atlas 950 SuperPod will have an interconnect bandwidth over 10 times higher than the entire globe’s total peak internet bandwidth.” That level of internal connectivity allows the system to maintain linear performance scaling – adding more processors genuinely increases usable computing power proportionally.
For larger cloud deployments, the Atlas 960 SuperPod incorporates 15,488 Ascend 960 processors in 220 cabinets across 2,200 square metres, delivering 30 EFLOPS in FP8 and 60 EFLOPS in FP4, with 4,460 TB of memory and 34 PB/s of interconnect bandwidth. The Atlas 960 will be available in the fourth quarter of 2027.
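The published totals can be sanity-checked with simple division. The per-chip figures in this sketch are derived from the announced numbers, not officially stated specifications:

```python
# Per-processor figures derived by simple division from the announced
# SuperPod totals. These are reader-side estimates, not official
# per-chip specifications.
pods = {
    # name: (processors, FP8 EFLOPS, memory TB, interconnect PB/s)
    "Atlas 950": (8_192, 8, 1_152, 16),
    "Atlas 960": (15_488, 30, 4_460, 34),
}

for name, (chips, eflops, mem_tb, bw_pbs) in pods.items():
    print(f"{name}: ~{eflops * 1_000 / chips:.2f} PFLOPS FP8/chip, "
          f"~{mem_tb * 1_000 / chips:.0f} GB memory/chip, "
          f"~{bw_pbs * 1_000 / chips:.2f} TB/s fabric bandwidth/chip")
```

On those derived figures, per-chip compute, memory, and fabric bandwidth all roughly double between the two generations, consistent with the jump from the Ascend 950DT to the Ascend 960.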
Implications for cloud service delivery
Beyond the flagship SuperPod products, Huawei introduced cloud AI infrastructure configurations designed specifically for enterprise data centres. The Atlas 850 SuperPod, positioned as “the industry’s first air-cooled SuperPoD server designed for enterprises,” features eight Ascend NPUs and supports flexible multi-cabinet deployment of up to 128 units with 1,024 NPUs.
Significantly, this configuration can be deployed in standard air-cooled equipment rooms, avoiding the infrastructure modifications required for liquid cooling systems. For cloud providers and enterprises, this offers practical deployment flexibility: organisations can implement SuperPod architecture without necessarily requiring full data centre redesigns, potentially accelerating adoption timelines.
SuperCluster architecture: Hyperscale cloud deployment
Huawei’s vision extends beyond individual SuperPods to what the company calls SuperClusters – massive cloud AI infrastructure deployments comprising multiple interconnected SuperPods. The Atlas 950 SuperCluster will incorporate 64 Atlas 950 SuperPods, creating a system with over 520,000 AI processors in more than 10,000 cabinets, delivering 524 EFLOPS in FP8 precision.
A key technical decision affects how cloud providers might deploy these systems. The Atlas 950 SuperCluster supports both UBoE (UnifiedBus over Ethernet) and RoCE (RDMA over Converged Ethernet) protocols. UBoE runs UnifiedBus over standard Ethernet infrastructure, letting cloud providers potentially integrate SuperPod technology with existing data centre networks.
According to Huawei’s specifications, UBoE clusters demonstrate lower static latency and higher reliability than RoCE clusters, while requiring fewer switches and optical modules. For cloud providers planning large-scale deployments, this could translate into both performance and economic advantages.
The Atlas 960 SuperCluster, scheduled for fourth-quarter 2027 availability, will integrate more than one million NPUs to deliver 2 ZFLOPS (zettaFLOPS) in FP8 and 4 ZFLOPS in FP4. The specifications position the system for what Xu described as future AI models “with over 1 trillion or 10 trillion parameters.”
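Those cluster-level claims reconcile cleanly with the pod-level figures, as a quick check shows (the ~1 PFLOPS-per-chip input is the estimate derived earlier from pod totals, not an official per-chip specification):

```python
# Reconciling the Atlas 950 SuperCluster claims with the per-pod figures.
# The ~1 PFLOPS-per-chip value is the estimate derived above from pod
# totals, not an officially published per-chip number.
PODS = 64
CHIPS_PER_POD = 8_192

chips = PODS * CHIPS_PER_POD          # 524,288 -> "over 520,000 processors"
fp8_eflops = chips * 1.0 / 1_000      # assuming ~1 PFLOPS FP8 per chip

print(f"{chips:,} processors, ~{fp8_eflops:.0f} EFLOPS FP8")
# -> 524,288 processors, ~524 EFLOPS FP8: both announced cluster figures
#    fall out directly, suggesting the per-pod "8 EFLOPS" is a rounded 8.19.
```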
Beyond AI: General-purpose cloud infrastructure
The SuperPod architecture’s implications extend beyond AI workloads into general-purpose cloud computing through the TaiShan 950 SuperPod. Built on Kunpeng 950 processors featuring up to 192 cores and 384 threads, the system addresses enterprise requirements for mission-critical applications traditionally run on mainframes, Oracle’s Exadata database servers, and mid-range computers.
The TaiShan 950 SuperPod supports up to 16 nodes with 32 processors and 48 TB of memory, incorporating memory pooling, SSD pooling, and DPU (Data Processing Unit) pooling. When integrated with Huawei’s distributed GaussDB database, the system delivers what the company claims is a 2.9x performance improvement over traditional architectures, without requiring application modifications.
For cloud providers serving enterprise customers, this presents significant opportunities for cloud-native infrastructure. Beyond databases, Huawei claims the TaiShan 950 SuperPod increases memory utilisation by 20% in virtualised environments and accelerates Spark workloads by 30%.
The open architecture strategy
Perhaps most significant for the broader cloud AI infrastructure market, Huawei announced that the UnifiedBus 2.0 technical specifications will be released as open standards. The company is providing open access to both hardware and software components: NPU modules, air-cooled and liquid-cooled blade servers, AI cards, CPU boards, cascade cards, CANN compiler tools, Mind series application kits, and openPangu foundation models – all by December 31, 2025.
Yang framed this as ecosystem development: “We are committed to our open-hardware and open-source-software approach that will help more partners develop their own industry-scenario-based SuperPod solutions. This will accelerate developer innovation and foster a thriving ecosystem.”
For cloud providers and system integrators, this open approach potentially lowers the barriers to deploying SuperPod-based infrastructure. Rather than being locked into single-vendor solutions, partners can develop customised implementations using the UnifiedBus specifications.
Market validation and deployment reality
The cloud AI infrastructure architecture has already seen real-world deployment. More than 300 Atlas 900 A3 SuperPod units were shipped in 2025, deployed for more than 20 customers across the internet, finance, carrier, electricity, and manufacturing sectors. That deployment scale provides some validation that the architecture functions beyond laboratory demonstrations.
Xu acknowledged the context shaping Huawei’s infrastructure strategy: “The Chinese mainland will lag behind in semiconductor manufacturing process nodes for a relatively long time,” adding that “sustainable computing power can only be achieved with process nodes that are practically available.”
The statement frames the SuperPod architecture as a strategic response to constraints – achieving competitive performance through architectural innovation rather than solely through advanced semiconductor manufacturing.
What this means for cloud infrastructure evolution
Huawei’s SuperPod architecture represents a specific bet on how cloud AI infrastructure should evolve: towards tighter integration and resource pooling at massive scale, enabled by purpose-built interconnect technology. Whether this approach proves more effective than alternatives – such as loosely coupled clusters with sophisticated software orchestration – remains to be demonstrated in hyperscale production deployments.
For cloud providers, the open architecture strategy introduces options for building AI infrastructure without necessarily adopting the tightly integrated hardware-software approaches dominant among Western competitors. For enterprises evaluating private cloud AI infrastructure, SuperPod configurations like the air-cooled Atlas 850 offer deployment paths that don’t require full data centre redesigns.
The broader implication concerns how cloud AI infrastructure might be architected in markets where access to the most advanced semiconductor manufacturing remains constrained. Huawei’s approach suggests that architectural innovation in interconnect, resource pooling, and system design can potentially compensate for limitations in individual processor capabilities – a proposition that will be tested as these systems scale to production workloads across diverse cloud deployment scenarios.
(Image taken from the video of Xu’s keynote speech at the opening of Huawei Connect 2025)

