DeepSeek Day Two: Focus Turns to Enterprise AI Adoption


Chinese AI startup DeepSeek’s deeply discounted foundation models could herald lower prices for businesses seeking to adopt AI — by removing one of the biggest blockers to enterprise deployment.

“Cost is the biggest hurdle to adoption of AI applications,” wrote BofA Global Research analysts Brad Sills and Carly Liu in a note published on Tuesday (Jan. 28). “We believe advancements in cost could drive price even lower and therefore adoption higher.”

DeepSeek tanked shares of AI companies on Monday (Jan. 27) after revealing that it had trained a foundation model for only $5.58 million on 2,048 Nvidia H800 chips. OpenAI and Anthropic have pegged the cost of training their own frontier models at $100 million to $1 billion, using thousands of Nvidia’s AI chips.

DeepSeek proved that “small companies, individual developers, and even researchers are now able to harness the power of AI without breaking the bank,” Roy Benesh, CTO at eSIMple, told PYMNTS. “This can lead to the development of new ideas and technologies due to the increased competitiveness in the field. This can alter the current state of affairs by providing new options for customers where older established AI companies are likely to charge less and improve their technology faster.”

The analysts said Microsoft’s 365 Copilot Chat, for example, charges 1 cent to 30 cents per prompt depending on complexity, while Salesforce’s Agentforce for Service Cloud charges a flat rate of $2 per conversation.
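To put those rates in perspective, here is a minimal back-of-the-envelope sketch comparing the two pricing models. The monthly volumes and the specific per-prompt rate (picked from within Microsoft’s reported 1-cent-to-30-cent range) are illustrative assumptions, not figures from the BofA note.

```python
# Back-of-the-envelope comparison of metered per-prompt pricing vs. a flat
# per-conversation rate. The monthly volumes and the $0.05 per-prompt figure
# are illustrative assumptions, not numbers from the BofA note.

def monthly_per_prompt_cost(prompts: int, price_per_prompt: float) -> float:
    """Cost under metered per-prompt pricing."""
    return prompts * price_per_prompt


def monthly_per_conversation_cost(conversations: int, price_per_conversation: float = 2.00) -> float:
    """Cost under flat per-conversation pricing."""
    return conversations * price_per_conversation


if __name__ == "__main__":
    # Hypothetical workload: 50,000 prompts and 10,000 support conversations per month.
    print(f"Per-prompt pricing at $0.05/prompt: ${monthly_per_prompt_cost(50_000, 0.05):,.2f}")
    print(f"Flat $2 per conversation:           ${monthly_per_conversation_cost(10_000):,.2f}")
```

The takeaway is simply that usage-based AI pricing scales directly with volume, which is why any drop in the underlying cost of compute can flow through to what enterprises pay.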

Notably, BofA said the $5.58 million is “misleading” because DeepSeek did not include costs related to research, experiments, architectures, algorithms and data. However, the analysts said the bigger picture is that the startup introduced innovations that show less costly training is possible.

Pre-Training vs Inferencing

Foundation AI models such as OpenAI’s GPT-4o and Google’s Gemini are trained (pre-trained, in industry lingo) on vast swaths of internet data to be generalist models. Enterprises typically have to further train, or fine-tune, these models on their own data to make them company- and industry-specific, and therefore more useful.

Once fine-tuned, the AI model is ready to take a user’s prompts and return responses that are relevant to the company. However, every prompt and response incurs inferencing costs: the fees charged each time the model processes new data to understand, analyze and respond to it.

Most companies do not incur the cost of training foundation models. Only developers of these models do: OpenAI, Google, Meta, Amazon, Microsoft, Anthropic, Cohere, Hugging Face, Mistral AI, Stability AI, xAI, IBM, Nvidia, certain research labs and Chinese tech giants Baidu, Alibaba and others.

Businesses instead pay inferencing costs to process their AI workloads, and those fees make up the majority of what they spend on AI.
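As a rough illustration of how those inference fees accrue, consider the sketch below. API inference is typically metered per input (prompt) token and per output (completion) token; the rates and request sizes used here are hypothetical placeholders, not any provider’s published pricing.

```python
# Minimal sketch of per-token inference billing: a price per million input
# (prompt) tokens plus a price per million output (completion) tokens.
# The rates and request sizes below are hypothetical placeholders.

def inference_cost(input_tokens: int, output_tokens: int,
                   usd_per_million_input: float, usd_per_million_output: float) -> float:
    """Dollar cost of one request under per-token pricing."""
    return (input_tokens / 1_000_000) * usd_per_million_input + \
           (output_tokens / 1_000_000) * usd_per_million_output


if __name__ == "__main__":
    # Hypothetical request: a 1,500-token prompt and a 500-token response.
    premium = inference_cost(1_500, 500, usd_per_million_input=10.0, usd_per_million_output=30.0)
    budget = inference_cost(1_500, 500, usd_per_million_input=0.5, usd_per_million_output=1.5)
    print(f"Premium-priced model: ${premium:.4f} per request")
    print(f"Budget-priced model:  ${budget:.4f} per request")
```

Multiplied across millions of requests, the gap between two rate cards like these is what makes cheaper inference the main lever for broader enterprise adoption.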

The China Connection

DeepSeek itself offers inferencing, at much lower costs than Silicon Valley rivals (see OpenAI’s rates). But there are caveats to using it directly.

In its privacy policy, DeepSeek stated that it stores users’ information on servers located in China. The startup also said it would “comply with legal obligations” and perform tasks in the public interest or to “protect the vital interests of our users and other people.”

China’s national intelligence law, article 7, states: “All organizations and citizens shall support, assist, and cooperate with national intelligence efforts in accordance with law, and shall protect national intelligence work secrets they are aware of.”

Kevin Surace, CEO of Appvance, told PYMNTS that “privacy is an issue because it’s China. It’s always about collecting data from users. So user beware.”

In an experiment, PYMNTS asked DeepSeek’s chatbot to explain how the 1989 Tiananmen Square protests have influenced Chinese politics; it replied, “Sorry, I’m not sure how to approach this type of question yet.”

“DeepSeek is a 100% Chinese-owned company located in China. Already, it’s clear that you can’t rely on DeepSeek for information on Tiananmen Square or senior Chinese government figures,” Tim Enneking, CEO at Presearch, told PYMNTS. “So, while the technology is exciting, control of it is not.”

However, Enneking added that since DeepSeek’s models are open source, they can be modified to remove government and corporate controls. The startup’s engineering creativity opens “the opportunity for smaller companies and countries to play and succeed in the generative AI sandbox.”

How DeepSeek Could Lower Inference Costs for All

DeepSeek’s inventiveness in coming up with a cheaper way to train foundation models bodes well for companies like Microsoft, which could continue to lower the cost of AI computing and drive scale, according to Sills and Liu. “A lower cost of compute could bring down AI computing costs of sales and drive better margin on AI-enabled offerings.”

“Lower AI compute costs should enable broader AI services from autos to smartphones,” added BofA analysts Alkesh Shah, Andrew Moss and Brad Sills, in a separate research note.

That’s not to say that foundation model developers like OpenAI will see training costs plunge to DeepSeek levels. “DeepSeek’s innovative training and post-training techniques will likely be incorporated by competing frontier-model developers to enable greater efficiencies, but these current models will still require significant investment as they form a foundation for AI agents,” according to Shah, Moss and Sills.

But in the long term, the three analysts expect accelerated adoption of AI by enterprises, as “chatbots, copilots and agents become simultaneously smarter and cheaper, a scenario known as Jevons paradox.”

Microsoft CEO Satya Nadella posted on X that the “Jevons paradox strikes again! As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we just can’t get enough of.”
