
DeepSeek R1: Has the AI World Been Scamming Us to Build Massive Financial Reserves?

Rich Washburn


For years, the AI industry has been synonymous with jaw-dropping funding rounds and mind-blowing expenditures. OpenAI spent hundreds of millions training models like GPT-4, while companies like Meta, Microsoft, and Google collectively sank billions into AI infrastructure. The narrative was simple: building cutting-edge AI required staggering amounts of capital and compute power.


Then, on January 20, 2025, DeepSeek R1 entered the scene—and shattered that narrative. A small Chinese research lab claimed to have trained a state-of-the-art AI model for just $5 million, a fraction of what Western companies have been shelling out. Even more shocking? R1 wasn’t just competitive; it went head-to-head with the cutting-edge models of AI heavyweights like OpenAI.


This revelation raises a provocative question: Has the AI industry been artificially inflating costs to hoard capital and dominate markets, or is something deeper going on?



DeepSeek’s Bombshell: David vs. Goliath


DeepSeek R1 wasn’t just another AI model—it was a paradigm shift. Not only did it achieve performance on par with models trained on budgets 10 to 20 times larger, but it was also entirely open-source. This meant developers worldwide could download the model, tweak it, and even train it themselves.


The shockwaves didn’t stop there. DeepSeek’s developers published their training recipe in unusual detail, leaving little room for speculation about their methods. Suddenly, the high costs touted by U.S. tech companies for training AI models seemed questionable. If a small team in China could train a competitive model for just $5 million, were Western companies overspending—or overselling—their need for massive funding?



The “Scam” Narrative: Are We Being Played?


Let’s confront the elephant in the room: Are U.S. AI giants inflating the cost of AI development as a strategy to consolidate financial and technological power?


Critics point to several suspicious patterns:


  • Massive fundraising efforts: Companies like OpenAI and Meta have raised billions, citing the need for unprecedented compute power.

  • Opaque cost structures: Unlike DeepSeek’s transparent methods, Western companies often keep their training costs and strategies under wraps.

  • Overbuilt infrastructure: With AI breakthroughs like R1 showing that smaller budgets can deliver similar results, the billions spent on AI infrastructure appear questionable.


If DeepSeek’s $5 million model is legitimate, then the hundreds of billions poured into AI infrastructure might look more like a strategic money pit designed to ward off competition and secure long-term dominance.



Alternative Theories: Is DeepSeek Too Good to Be True?


While the “AI scam” narrative is tantalizing, there are counterarguments to consider.


  1. Hidden GPUs? Some experts suggest that DeepSeek may have access to more resources than they’re letting on. Due to U.S. export restrictions on advanced chips like Nvidia’s H100, DeepSeek’s claim of using second-tier GPUs could be a smokescreen. If they secretly leveraged thousands of high-end GPUs, their $5 million price tag could be a strategic misdirection.

  2. Subsidized Costs? DeepSeek R1 was developed by a team with ties to the Chinese quant trading firm High-Flyer. This raises questions about whether the project was subsidized by other business operations. Could the true cost have been hidden behind the company’s broader financial activities?

  3. Geopolitical Maneuvering? Some analysts view DeepSeek as a deliberate move by China to destabilize U.S. dominance in AI. By making R1 open-source and free, they could force U.S. companies to question their high-cost models, ultimately undermining confidence in their business models and market valuations.



The Case for Efficiency: DeepSeek as a Wake-Up Call


Assuming DeepSeek’s claims hold up under scrutiny, the model represents more than just an embarrassment for Western tech giants. It’s a wake-up call about efficiency in AI.


  1. Necessity Breeds Innovation: DeepSeek’s success highlights how constraints—like limited access to cutting-edge GPUs—can drive creative breakthroughs. Instead of relying on brute force, their team optimized every aspect of training, proving that clever engineering can rival raw spending power.

  2. Jevons Paradox in AI: Even if DeepSeek lowers the cost of AI development, overall spending may increase due to greater demand. Lower costs make AI accessible to more industries, leading to an explosion of use cases and higher demand for compute power (see the back-of-the-envelope sketch after this list).

  3. A Win for Open Source: DeepSeek R1 also reaffirms the value of open-source AI. By sharing their methods and model freely, they’ve leveled the playing field and challenged the closed-source dominance of companies like OpenAI.
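
To make the Jevons Paradox point concrete, here is a quick back-of-the-envelope sketch in Python. Every figure in it (the $100 million “old” training run, the $5 million “new” one, and the counts of models trained) is a made-up illustrative assumption, not an industry estimate; the point is only to show how a steep drop in per-model cost can coexist with a rise in total spending once demand expands.

```python
# Back-of-the-envelope illustration of the Jevons Paradox argument.
# All numbers below are hypothetical, chosen only to show the shape of
# the effect, not to estimate real industry spending.

old_cost_per_model = 100_000_000   # assumed cost of a frontier training run ($)
new_cost_per_model = 5_000_000     # assumed DeepSeek-style efficient run ($)

old_models_trained = 20            # hypothetical: only a few labs can afford it
new_models_trained = 1_000         # hypothetical: cheaper training invites many more entrants

old_total_spend = old_cost_per_model * old_models_trained
new_total_spend = new_cost_per_model * new_models_trained

print(f"Old total training spend: ${old_total_spend:,}")   # $2,000,000,000
print(f"New total training spend: ${new_total_spend:,}")   # $5,000,000,000

# Even though each model is 20x cheaper, aggregate spending rises whenever
# demand grows faster than the per-unit cost falls -- the Jevons Paradox.
```

Flip the assumed demand numbers and total spend falls instead; the paradox only bites when adoption grows faster than costs shrink, which is exactly the bet the big spenders are making.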



Reactions from the Industry


The reactions to DeepSeek R1 have been explosive—and divided.

  • Supporters of Open Source: Leaders like Yann LeCun (Meta’s chief AI scientist) celebrated DeepSeek as proof that open research and collaboration can yield world-class results. “DeepSeek profited from open research,” LeCun noted, emphasizing the benefits of building on prior open-source efforts like Meta’s LLaMA.

  • Skeptics of DeepSeek’s Claims: Others, like Scale AI CEO Alexandr Wang, questioned whether DeepSeek was hiding resources. He speculated that the lab may have secretly used tens of thousands of high-end GPUs in violation of U.S. export controls.

  • U.S. Policy Hawks: Some commentators framed DeepSeek as a geopolitical threat, arguing that the U.S. needs tighter export controls to maintain its edge. Former Google CEO Eric Schmidt summed it up: “DeepSeek is a wake-up call for America.”



The Bigger Question: Do Costs Really Matter?


DeepSeek’s low-cost model might shake confidence in the high-spending strategies of U.S. tech giants, but it doesn’t invalidate the need for massive investment in AI. At the end of the day, the AI race is about intelligence at scale—and scale still costs money.


Why Big Spending Isn’t Going Away:


  • Inference Costs Are Rising: While training costs might drop, inference—running the model for real-world applications—still requires significant compute power. This is where infrastructure investments by companies like OpenAI and Meta could pay off.

  • The Superintelligence Race: When (not if) the world reaches artificial superintelligence, the winner will likely be the entity with the most compute resources. Efficiency matters, but scale remains king.

  • Applications Drive Growth: The cheaper AI becomes, the more industries will adopt it. This ensures that demand for compute power—and the need for infrastructure investment—will only grow.



So, Was the AI Industry “Scamming” Us?


The answer isn’t black and white. Yes, DeepSeek has proven that AI can be developed more efficiently, and its success may push U.S. companies to rethink their strategies. But claims of a grand conspiracy to inflate costs overlook the complexity of the AI landscape.


If anything, DeepSeek R1 is a reminder that innovation comes from competition, not complacency. Whether or not the industry was overspending before, one thing is clear: the race is far from over, and the stakes have never been higher.



