China’s DeepSeek just revealed how much it spent training its flagship AI model — and the number is raising eyebrows. According to a new paper published in Nature, the company says its R1
model cost only $294,000 to train. That's a fraction of the sums U.S. firms such as OpenAI have suggested their own systems cost to train, figures that often run into the hundreds of millions of dollars.
This is the first time DeepSeek has put a dollar figure on R1’s training bill. The company, based in Hangzhou, said it used 512 Nvidia H800 chips to build the reasoning-focused model. The disclosure comes months after DeepSeek made waves in January by releasing cheaper AI systems, which spooked investors and briefly rattled big tech stocks like Nvidia.
Since then, DeepSeek and its founder Liang Wenfeng have kept a low profile, aside from occasional product updates. The Nature article — co-authored by Liang — not only revealed the training cost but also clarified that while the final training was done on H800 chips, the team did use A100 GPUs in early experiments with smaller models.
That detail matters because of U.S. export restrictions. Washington banned shipments of Nvidia's high-end H100 and A100 chips to China back in 2022, which is why Nvidia created the less powerful H800 specifically for that market. U.S. officials have previously claimed DeepSeek somehow gained access to large numbers of H100s, though Nvidia maintains the company used only lawfully acquired H800s.
Still, the reported $294,000 price tag is stunningly low compared to what Sam Altman, CEO of OpenAI, has said about foundational AI models — that training them costs “much more” than $100 million. Even if DeepSeek’s figures are accurate only for one stage of development, it underscores how quickly costs, hardware availability, and competition in the AI race are shifting.
For now, one thing is clear: DeepSeek has managed to grab the spotlight again, and its approach will likely keep fueling the debate over China's role in the global AI race.


























