OpenAI’s ‘Deep Research’ Aims to Impact Business Intelligence


OpenAI’s new “Deep Research” tool is poised to transform how businesses gather intelligence and shape their corporate strategy, according to experts. But it does come with some key caveats.

Deep Research is an artificial intelligence (AI)-driven research assistant that can search the web for in-depth information about a topic and then generate a detailed report at the level of a research analyst in “tens of minutes,” according to OpenAI, which publicly demonstrated the tool on Sunday (Feb. 2).

“It is particularly effective at finding niche, non-intuitive information that would require browsing numerous websites,” OpenAI said in a blog post. “Deep Research frees up valuable time by allowing you to offload and expedite complex, time-intensive web research with just one query.”

OpenAI Chief Research Officer Mark Chen said in a YouTube video that Deep Research is one step closer to the company’s goal of achieving artificial general intelligence, or AGI — the point at which machines reach human-level intelligence and can broadly apply that knowledge across a range of tasks.

But OpenAI is aiming even beyond it: “Our ultimate aspiration is a model that can uncover and discover new knowledge for itself,” Chen said.

Deep Research is now available through ChatGPT. It has been rolled out to Pro users with a cap of 100 queries a month, with Plus and Team users getting it next, followed by Enterprise accounts. OpenAI will raise the cap once it develops a faster and cheaper version of Deep Research.

Implications for Business

“OpenAI’s Deep Research tool can change the way companies conduct research by performing complicated tasks fast and efficiently. It can scan the internet, collect data, and generate thorough reports in a matter of minutes — tasks that would take human researchers hours to complete,” Sergio Oliveira, director of development at DesignRush, told PYMNTS.

“Businesses can use it for market research, evaluating potential business partners, or keeping up with new technology and trends,” Oliveira said. “The primary advantage is speed. It saves time, provides a broad overview of subjects and lowers expenses by decreasing the requirement for manual research.”

Peter Morales, CEO of Code Metal, added that Deep Research’s agentic workflow is “valuable” and a natural fit for industry verticals.

For example, in pharmaceuticals, an analyst tasked with creating a report on drug interactions and usage data for a specific drug would start by querying the web or drug databases for known variants of the drug, Morales told PYMNTS. Then, the analyst would manually research and correlate data on interactions and usage for each of these variants.

“This entire activity can now be automated” by Deep Research, Morales said.

In marketing, Deep Research lets a company “streamline the voice of the customer and competitor research,” Colby Flood, founder of Brighter Click, told PYMNTS.

Flood said this process normally entails gathering content from competitors’ websites and manually reviewing it to understand the rational and emotional motivators used to attract customers, then compiling customer reviews to analyze sentiment and determine what customers like or dislike about a company’s products and those of its competitors.

“It also entails scraping text from sites like Reddit to understand the general market consensus,” Flood said. Now, Deep Research “could eliminate the need for many expensive social listening tools by providing an out-of-the-box AI solution.”

Alexey Chyrva, Chief Product Officer of Kitcast, told PYMNTS that Deep Research benefits businesses by offering efficiency and accessibility. Importantly, its research also can help “ensure that new products or features don’t violate existing intellectual property. That will save companies from possible lawsuits and other problems.”

Deep Research’s Drawbacks

But like all early experimental tools, it comes with caveats.

Deep Research can “sometimes hallucinate” or make “incorrect inferences,” though less often than current ChatGPT models, OpenAI said. Deep Research may also “struggle with distinguishing authoritative information from rumors,” often “failing to convey uncertainty accurately.”

Chyrva noted that this weakness in separating fact from rumors “affects the reliability of those reports.” Deep Research also “struggles with conveying uncertainty, which may lead to overconfidence in the findings,” he said.

Nathan Brunner, CEO of Boterview, tested Deep Research and said it was an “excellent” tool for gathering statistics and facts. However, he noticed that the results are “only as good as the websites it gets information from,” which could include less reliable sources.

After Deep Research generates a report, it is “always crucial to have someone verify that the sources are reliable and not random forums,” Brunner told PYMNTS.

Brunner is also concerned that content quality could decline over time if websites decide to block these AI agents because they are not being compensated.

Morales summed it up this way: “Deep Research has some of the inherent limitations of being non-deterministic that exists for generative AI. Additionally, in the successive refinement steps, it is unable to assess the reliability of information sources. This may cause it to produce unreliable data where there is a preponderance of non-authoritative sources.”