Huawei has publicly unveiled a detailed multi-year plan for its Ascend AI processors and supercomputing platforms, signalling China’s move to rival US-led AI hardware dominance amid ongoing sanctions.
Huawei has made a bold move by publicly unveiling a detailed multi-year roadmap for its Ascend line of artificial intelligence processors, underscoring China's strategic intent to lessen its dependence on U.S. technology and challenge Nvidia's dominance in the global AI computing market. The plan covers chip releases and large-scale computing system developments through 2028, a departure from Huawei's traditionally cautious approach to disclosing its semiconductor ambitions.
The roadmap begins with the Ascend 910C, launched earlier in 2025, and continues with the Ascend 950 series slated for 2026, followed by the 960 in 2027 and the 970 in 2028. For the first time, Huawei has spotlighted its integration of proprietary high-bandwidth memory (HBM) technology, aimed at addressing a longstanding vulnerability in China's semiconductor supply chain: its reliance on U.S. and South Korean memory manufacturers. The Ascend 950 series will come in two variants, the 950PR for inference prefill and recommendation workloads and the 950DT for decoding and training tasks, and promises sizeable improvements in memory capacity and bandwidth. These chips will support low-precision data formats such as FP8 and Huawei's proprietary HiF8, delivering up to 2 PFLOPS in some configurations alongside interconnect bandwidths of up to 2 TB/s.
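To make concrete why these 8-bit formats matter, here is a rough back-of-the-envelope Python sketch of the memory and bandwidth savings from halving weight precision. The model size is a hypothetical assumption and the exact layout of HiF8 is not public; only the 2 TB/s interconnect figure comes from the roadmap above.

```python
# Illustrative only: why 8-bit formats (FP8, or Huawei's undisclosed HiF8)
# help memory-bound AI accelerators. The 70B-parameter model is a
# hypothetical assumption; 2 TB/s is the roadmap's interconnect figure.
PARAMS = 70e9                  # hypothetical 70-billion-parameter model
INTERCONNECT_BPS = 2e12        # 2 TB/s, as quoted for the Ascend 950 series

for fmt, bytes_per_weight in (("FP16", 2), ("FP8", 1)):
    size_gb = PARAMS * bytes_per_weight / 1e9
    stream_ms = PARAMS * bytes_per_weight / INTERCONNECT_BPS * 1e3
    print(f"{fmt}: {size_gb:.0f} GB of weights, "
          f"~{stream_ms:.0f} ms to stream once at 2 TB/s")
```

Halving the bytes per weight effectively doubles the usable capacity and bandwidth of the same memory system, which is why the new formats sit alongside the HBM announcement.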
Huawei’s AI processor advancements are closely linked to the unveiling of its Atlas brand supercomputing platforms. The Atlas 950 supernode, expected in Q4 2025, is designed to cluster up to 8,192 Ascend chips, while the Atlas 960, arriving in late 2027, will nearly double that at 15,488 processors. These supernodes can be stacked into “superclusters” potentially scaling to half a million or even a million Ascend processors. Such configurations could position China’s domestic AI computational capacity in direct competition with global leaders, potentially exceeding the scale of Nvidia’s largest training systems such as the GB200 NVL72. Huawei claims the Atlas nodes lead on compute capacity, memory size, interconnect bandwidth, and overall card count, further reinforced by a new all-optical interconnect fabric that reduces latency and significantly improves reliability.
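A quick sketch of what those figures imply, assuming purely for illustration that the 2 PFLOPS-per-chip peak quoted for the Ascend 950 series holds across an entire supercluster:

```python
# Rough scaling arithmetic for the claimed supernode and supercluster sizes.
# The 2 PFLOPS per chip is the roadmap's "up to" number and is assumed here
# only for illustration; sustained real-world throughput will be lower.
CHIPS_PER_ATLAS_950 = 8_192
CHIPS_PER_ATLAS_960 = 15_488
CLUSTER_CHIPS = 1_000_000        # upper end of the "supercluster" claim
PFLOPS_PER_CHIP = 2

nodes_950 = -(-CLUSTER_CHIPS // CHIPS_PER_ATLAS_950)   # ceiling division -> 123
nodes_960 = -(-CLUSTER_CHIPS // CHIPS_PER_ATLAS_960)   # ceiling division -> 65
peak_eflops = CLUSTER_CHIPS * PFLOPS_PER_CHIP / 1_000  # 1 EFLOPS = 1,000 PFLOPS

print(f"Atlas 950 supernodes for 1M chips: {nodes_950}")
print(f"Atlas 960 supernodes for 1M chips: {nodes_960}")
print(f"Theoretical low-precision peak:    {peak_eflops:,.0f} EFLOPS")
```

Even under these generous assumptions, the numbers say as much about interconnect and facility engineering as about the silicon itself: stitching dozens of supernodes together is exactly where the all-optical fabric Huawei describes would be stressed.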
This disclosure comes amid escalating U.S. export restrictions targeting China’s semiconductor industry. Advanced manufacturing technologies, especially sub-7nm lithography, remain largely out of reach for Chinese fabs due to these controls. Huawei’s current leading-edge chips reportedly utilise approximately 7nm-class processes, impressive under sanctions but still trailing the 3nm processes deployed in Taiwan and South Korea. The production feasibility, chip yield, efficiency, and cost remain open questions that could affect Huawei’s ability to deliver these ambitious targets at scale.
Energy efficiency presents another critical challenge. Previous Chinese supercomputing efforts have often scaled by sheer volume rather than efficiency, resulting in high power consumption relative to Western counterparts. Reports indicate Huawei’s recent CloudMatrix deployments consume up to four times as much energy as Nvidia’s GB200 platforms, raising concerns about the power and cooling demands of the far larger Atlas 950 and 960 supernodes. If those systems replicate such inefficiencies at scale, they could prove costly to operate and less competitive globally.
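To illustrate the stakes, here is a hypothetical calculation; the per-rack power figure for the baseline, the deployment size, and the electricity price are all assumptions chosen for round numbers, and only the four-times multiplier comes from the reports cited above.

```python
# Hypothetical illustration of what a 4x efficiency gap means at data-centre
# scale. BASELINE_RACK_KW is an assumed round number for a dense GPU rack,
# not an official specification; the 4x factor is the reported ratio above.
BASELINE_RACK_KW = 120      # assumed power draw of one dense baseline rack
EFFICIENCY_GAP = 4          # reported CloudMatrix-vs-GB200 energy ratio
RACKS = 100                 # hypothetical deployment size
HOURS_PER_YEAR = 8_760
PRICE_PER_KWH = 0.08        # assumed industrial electricity price, USD

def annual_power_cost(rack_kw: float) -> float:
    """Annual electricity cost in USD for RACKS racks running continuously."""
    return rack_kw * RACKS * HOURS_PER_YEAR * PRICE_PER_KWH

baseline = annual_power_cost(BASELINE_RACK_KW)
less_efficient = annual_power_cost(BASELINE_RACK_KW * EFFICIENCY_GAP)
print(f"Baseline deployment:     ${baseline / 1e6:.1f}M per year")
print(f"4x less efficient:       ${less_efficient / 1e6:.1f}M per year")
```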
Software and ecosystem support compound the hurdles. Huawei hopes to grow adoption of its Ascend chips by promoting its own development stack, chiefly the CANN toolkit (its counterpart to CUDA) and the MindSpore framework, but it faces stiff competition from Nvidia’s entrenched CUDA software ecosystem and NVLink interconnect, which together dominate AI development today. Real-world validation through workload performance and developer adoption will be crucial to Huawei’s success.
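For a sense of what "ecosystem support" means in practice, the sketch below shows the kind of minimal MindSpore workflow Huawei needs developers to reach for instead of CUDA-based tooling. It assumes a standard MindSpore 2.x installation; the network shape and data are invented purely for illustration.

```python
# Minimal MindSpore sketch: define and run a tiny network. On a machine
# with an Ascend NPU the device_target would be "Ascend"; "CPU" is used
# here so the example runs anywhere MindSpore is installed.
import numpy as np
import mindspore as ms
from mindspore import nn, Tensor

ms.set_context(device_target="CPU")  # switch to "Ascend" on Ascend hardware

class TinyNet(nn.Cell):
    """Two-layer perceptron used only to illustrate the API surface."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Dense(16, 32)
        self.relu = nn.ReLU()
        self.fc2 = nn.Dense(32, 4)

    def construct(self, x):
        return self.fc2(self.relu(self.fc1(x)))

net = TinyNet()
x = Tensor(np.random.randn(8, 16).astype(np.float32))
print(net(x).shape)  # (8, 4)
```

The API mirrors familiar frameworks by design; the open question the article raises is whether performance, tooling maturity, and library coverage on Ascend hardware can pull developers away from the CUDA ecosystem.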
Beyond chip and system specifications, Huawei’s disclosure serves as a political and industrial signal. By breaking its longstanding secrecy, the company aims to demonstrate confidence in its roadmap and secure alignment with domestic customers and policymakers under China’s push for semiconductor self-reliance. Executing even part of this roadmap on time would significantly bolster China’s AI sector resilience against international supply chain disruptions.
Future milestones to watch include the Ascend 950 series launch in early 2026, which will mark the first critical test of whether Huawei can convert its announced plans into working products with competitive performance and manageable power consumption. Complementing the Ascend line, Huawei is also developing general-purpose Kunpeng 950 processors, aimed at finance and mainframe workloads, further expanding its AI and compute ecosystem.
In the broader context, Huawei’s roadmap exemplifies China’s determination to build an indigenous AI computing infrastructure from chip design through to massive supercomputing assemblies. While challenges remain in fabrication technology, efficiency, software maturity, and global adoption, the company’s unveiling signals a shift toward a more transparent, ambitious, and strategically coordinated effort to rival established global players in artificial intelligence hardware.
Source: Noah Wire Services