Nvidia to Launch New AI Chip for China at Lower Price Amid Export Restrictions

Nvidia is set to launch a new artificial intelligence (AI) chip for the Chinese market at a significantly lower price than its recently restricted H20 model, Reuters reports.
The new graphics processing unit (GPU), based on Nvidia’s Blackwell architecture, will be priced between $6,500 and $8,000, well below the H20’s $10,000-$12,000 range. The move comes as US export restrictions continue to limit Nvidia’s access to China’s data center market, estimated to be worth $50 billion.
Challenges Amid Export Curbs
Despite the new GPU’s reduced computing power relative to the H20, Nvidia aims to maintain its presence in China, which accounted for 13% of its sales in the past financial year. Before US export restrictions took effect in 2022, Nvidia held a 95% share of the Chinese market.
According to Reuters, that share has since dropped to 50%, with competitors such as Huawei gaining ground; Huawei’s Ascend 910B chip is the primary rival to Nvidia’s offerings in China. Nvidia’s advantage, however, lies in its CUDA programming platform, which developers widely use to build AI models on its GPUs.
A Series of Adjustments to Meet Market Demands
Reuters reports that the new GPU, while less powerful than the H20, is designed to keep Nvidia competitive despite its substantial loss of market share. Nvidia is also developing another Blackwell-based chip variant, expected to be available by September. The company had initially considered a downgraded version of the H20 for China, but scrapped that plan due to US export restrictions.
Impact of US Restrictions and Nvidia’s Strategy
The US export bans have forced Nvidia to adjust its strategy: the company has written off $5.5 billion in inventory and forfeited an estimated $15 billion in potential sales.
The latest restrictions specifically cap memory bandwidth, a key metric for AI workloads that process large volumes of data, at 1.7-1.8 terabytes per second; by comparison, the H20 is capable of 4 terabytes per second.