TechTalk Daily

DeepSeek R1: The Real Worry Behind R1 and Other Tools

By: Daniel W. Rasmus for Serious Insights

I haven’t written about DeepSeek R1 because I’ve spent most of my time commenting on it via podcasts and phone calls. I’ve been asked, “Why now?” The answer to that is Apple downloads. Had DeepSeek not piqued a viral curiosity streak, the newly minted foundations of the AI establishment would not have shaken so quickly. I was not surprised by the event but disappointed in the timing.

AI fatigue comes not just from learning and interacting with so many tools; it also comes from an overwhelming sense of intense fluidity. Those who are not agile thinkers (we have an answer for that here) can become bound to a new reality very quickly and struggle to realign when that reality meets a disruption.

R1’s release clearly shook some people. Stock prices for several well-established AI firms took a hit almost immediately. It remains to be seen if the market reaction was merely a knee-jerk response to something new or if the market knows something substantive. At this point, I’m leaning toward knee-jerk because most investors don’t understand AI. The reaction came from the cost models, not the knowledge models.

Let me back up for those who have not been keeping up with the AI news cycle. Those inside the news cycle assume everyone is paying attention; most people are not.

DeepSeek R1

What is DeepSeek R1?

This week, DeepSeek dropped R1, causing consternation among those in the AI community for whom reality has become big-data-driven machine learning, built from across the webs, light and dark, and powered by massive, energy-gnawing computing centers. DeepSeek claims to be able to train a model at a fraction of the costs associated with OpenAI, Meta, or Anthropic. R1, a new large language model, is the result of seemingly secretive development in China, funded quietly by a mix of private investors and corporate partnerships.

While many see R1 as a quantum leap forward—delivering not just faster response times but higher accuracy and more nuanced reasoning—others wonder if it is as special as it appears.

First, the public model, not the open-source version, is heavily censored in favor of the Chinese version of history and reality; that will create a level of skepticism as organizations test DeepSeek R1’s edges.

Second, the cost of computing comes mostly from training AI, not using it. Enterprises that want to leverage existing AI systems, open source or commercial, pay less for queries than for training, with some of the training costs baked into the consumption costs. While it may cost less to train the model, most businesses aren’t training models. For the big players, however, R1 challenges their assertions about reality, which gets back to my not being surprised.

The current models are unsustainable. A new approach was going to appear at some point that would undermine the assumptions about training, energy and processor farms. Even if the R1 story isn’t as pure as it seems on the surface, it challenges the current narrative, which isn’t good for marketing or credibility for U.S. AI executives.

Third, the claims may be overstated. My first inclination was that DeepSeek R1 must have used preprocessed data, which would push the cost back to the preprocessed data and away from the accounting related to the foundation model. Pre-GRPO (Group Relative Policy Optimization) pipeline finetuning likely accounts for some of the apparent gains.

Fourth, open source is not a panacea. Companies that adopt open source will need to implement internal governance, address skill shortages, develop approaches to integrating enterprise content, and take on many other activities that many may not be able to afford. For those who can afford it, acquiring talent, internally or outsourced, will constrain the total bandwidth available to develop high-quality AI-based applications.

If R1 is an evolutionary revolution (because it is clearly not entirely new), it has the following implications:

  • Enterprises that previously invested heavily in other AI platforms now find themselves questioning whether those platforms remain the right choice.
  • It makes people question the assumptions about AI model training and all of the ancillary components of that training, including chip manufacturing and energy use. Secondary implications include challenges regarding energy investment, server farms, AI skills, sunk costs in current AI projects, and other items. No market, the stock market included, likes uncertainty. DeepSeek R1 offered uncertainty, not certainty, and that rippled through the markets.
  • AI leaders will need to answer to their customers about the validity of their approaches and the efficacy of continued partnerships.
  • I see few European or U.S. companies jumping to DeepSeek as a partner (beyond use of the open-source version) because of the risks of integrating with a Chinese platform that already displays misinformation characteristics related to Chinese history and politics (our 2025 AI Forecast suggested that biased AI could become purposeful at some point, and R1 seems to toe a party line), along with issues of data privacy.


To find out what’s next for the AI market, read the rest of the article here: DeepSeek R1: The Real Worry Behind R1 and Other Tools. For more serious insights on strategy, click here. For more Serious Insights in the News, click here.

About the Author:

Daniel W. Rasmus, the author of Listening to the Future, is a strategist and industry analyst who has helped clients put their future in context. Rasmus uses scenarios to analyze trends in society, technology, economics, the environment, and politics in order to discover implications used to develop and refine products, services, and experiences. He leverages this work and methodology for content development, workshops, and for professional development.