DeepSeek Upgrades AI Reasoning Model to Rival OpenAI and Google


Highlights

DeepSeek’s upgraded R1-0528 model improves reasoning and now rivals top AI models from OpenAI and Google.

Accuracy jumped from 70% to 87.5% on a benchmark math test, driven by deeper reasoning and more tokens per query.

Open-source and customizable, the model is free to use under an MIT license and can run on private servers for data control.

Chinese artificial intelligence startup DeepSeek has upgraded its open-source reasoning model R1 to perform nearly on par with the best models from OpenAI and Google.


In a post on the AI model repository Hugging Face, DeepSeek said its R1-0528 model underwent a “minor version upgrade” that led to “significantly improved” reasoning and inference capabilities.

DeepSeek boosted the model by applying more computation and adding mechanisms to optimize its algorithms, the post said. As a result, its overall performance now approaches that of OpenAI’s top model, o3, and Google’s best model, Gemini 2.5 Pro.

The AI startup said in the post that R1-0528 improved its math, programming and general logic capabilities, and that its accuracy on the AIME 2025 math benchmark rose to 87.5% from 70%. The gain comes from deeper reasoning: the model now uses 23,000 tokens per question, up from 12,000. (A token is a word, a piece of a word or punctuation.)
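Back-of-the-envelope, the figures reported in the post imply the model now spends nearly twice as many tokens reasoning through each question:

```python
# Figures reported in DeepSeek's Hugging Face post for the AIME 2025 benchmark.
old_tokens, new_tokens = 12_000, 23_000
old_acc, new_acc = 0.70, 0.875

token_ratio = new_tokens / old_tokens  # ~1.92x more reasoning per question
acc_gain = new_acc - old_acc           # 17.5 percentage points

print(f"{token_ratio:.2f}x tokens, +{acc_gain:.1%} accuracy")  # → 1.92x tokens, +17.5% accuracy
```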

The model also hallucinates less, has enhanced support for function calling and offers a better experience in vibe coding, where developers use natural language prompts in an AI chatbot to write code, per the post.
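Function calling means the model can return a structured request to invoke developer-defined tools rather than free text. The article does not show DeepSeek’s exact format, but open models are commonly served behind OpenAI-compatible endpoints, so a tool definition would look roughly like this sketch (the function name, fields and model ID are illustrative, not DeepSeek’s documented schema):

```python
import json

# Illustrative tool definition in the common OpenAI-style JSON schema;
# DeepSeek's exact wire format may differ.
get_price_tool = {
    "type": "function",
    "function": {
        "name": "get_stock_price",  # hypothetical tool name
        "description": "Look up the latest price for a ticker symbol.",
        "parameters": {
            "type": "object",
            "properties": {
                "ticker": {"type": "string", "description": "e.g. 'NVDA'"},
            },
            "required": ["ticker"],
        },
    },
}

# The tool list is passed alongside the chat messages in the request body;
# the model can then answer with a structured call to get_stock_price.
request_body = {
    "model": "deepseek-r1",  # placeholder model name
    "messages": [{"role": "user", "content": "What is Nvidia trading at?"}],
    "tools": [get_price_tool],
}
print(json.dumps(request_body, indent=2))
```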

DeepSeek took the AI world by storm with the original release of R1, an AI model that cost a fraction as much to train and used fewer Nvidia GPUs, while still performing on par with the top AI models globally.

DeepSeek founder Liang Wenfeng became a tech celebrity in China and was invited by President Xi Jinping to a meeting with other high-profile entrepreneurs, Bloomberg reported Thursday (May 29). He was seated next to other renowned executives, such as Alibaba founder Jack Ma.

Read also: DeepSeek Debuts Upgrade to AI Model That Improves Reasoning and Coding

Why Businesses Should Care

As an open-source model, R1-0528 is free to use. It is released under an MIT license, which lets users download, run and modify the model.

Cloud providers Amazon Web Services (AWS) and Microsoft Azure offer DeepSeek’s R1 model to their clients as part of their respective AI platforms. But they strip out any connection to Chinese servers, so data stays on the client’s chosen servers, AWS has told PYMNTS.

Companies with their own developers can also download DeepSeek’s R1-0528 model and customize it for their own use cases. As long as they don’t use DeepSeek’s API, the data stays on servers they designate.
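Keeping data on designated servers means serving the downloaded weights behind an in-house endpoint rather than calling DeepSeek’s hosted API. As a minimal sketch, assuming a self-hosted, OpenAI-compatible server on the company’s own network (the host, port and model ID below are illustrative), a request body would be built and sent entirely inside that network:

```python
import json

# Hypothetical in-house endpoint; nothing here leaves the local network.
LOCAL_ENDPOINT = "http://ml-server.internal:8000/v1/chat/completions"

payload = {
    "model": "deepseek-r1-0528",  # placeholder ID for the locally hosted weights
    "messages": [
        {"role": "user", "content": "Summarize this quarter's churn report."}
    ],
}

# Serialized request body; in production this would be POSTed to LOCAL_ENDPOINT
# with an HTTP client, keeping prompts and outputs on company servers.
body = json.dumps(payload)
print(LOCAL_ENDPOINT, len(body), "bytes")
```

The design point is that both the prompt and the model’s output stay behind the company firewall, which is what self-hosting buys over a third-party API.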

DeepSeek and Meta offer some of the most popular and powerful open-source AI models as a counter to proprietary models offered by OpenAI, Google, Microsoft, Anthropic and others.

While open-source models are free to license, they are not always cheaper to use, depending on how much customization is needed. A company without the staff to customize a model would need to hire an outside firm to do so. Running the model in the cloud also incurs per-token costs, unless users can host it on their own servers.

“DeepSeek and [Alibaba’s] Qwen from China are among the best open-source AI models released freely,” Nvidia CEO Jensen Huang said during a Wednesday (May 28) earnings call with analysts. “They’ve gained traction across the U.S., Europe and beyond.”

