
Patronus AI and MongoDB Partner to Enhance Generative AI Testing and Evaluation


Patronus AI, an automated evaluation and security platform, has announced a partnership with MongoDB, a database platform, to bring automated large language model (LLM) evaluation and testing capabilities to enterprise customers.

By combining the strengths of Patronus AI and MongoDB’s Atlas Vector Search product, the partnership seeks to address the challenges faced by enterprises in the realm of generative artificial intelligence (AI) testing, Patronus AI said in a Wednesday (Jan. 10) press release.

Businesses using generative AI for internal operations can run into problems, as the software can fail or produce unexpected behavior.

Patronus AI said in the release that according to its research, even state-of-the-art AI systems can hallucinate in real-world scenarios — particularly in industries like financial services — or struggle with reasoning and numerical calculations.

The partnership between Patronus AI and MongoDB offers a solution that enables enterprises to develop reliable document-based LLM workflows. With the support of MongoDB Atlas, customers can build these systems and use Patronus AI to evaluate, test and monitor them. 
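The press release does not detail the implementation, but a document-based LLM workflow of the kind described typically retrieves relevant documents with Atlas Vector Search before generating an answer. Below is a minimal sketch of that retrieval step: it builds a MongoDB `$vectorSearch` aggregation pipeline. The index name, field names, and collection are hypothetical; in a real deployment the pipeline would run against an Atlas cluster, the retrieved text would feed the LLM prompt, and the outputs would then be scored with an evaluation tool such as Patronus AI.

```python
def build_vector_search_pipeline(query_embedding, limit=5):
    """Build a $vectorSearch aggregation pipeline for Atlas Vector Search.

    query_embedding: the query text's embedding vector (list of floats).
    limit: how many documents to return.
    """
    return [
        {
            "$vectorSearch": {
                "index": "docs_vector_index",   # hypothetical index name
                "path": "embedding",            # hypothetical field holding each document's vector
                "queryVector": query_embedding,
                "numCandidates": limit * 10,    # candidates to consider before ranking
                "limit": limit,                 # documents to return
            }
        },
        # Keep only the fields the LLM prompt needs, plus the similarity score.
        {"$project": {"text": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]

pipeline = build_vector_search_pipeline([0.1, 0.2, 0.3], limit=3)
# With pymongo, this would be executed as:
#   results = collection.aggregate(pipeline)
```

The `numCandidates` parameter trades recall for speed: Atlas considers that many approximate-nearest-neighbor candidates before returning the top `limit` results.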

“Enterprises are excited about the potential of generative AI, but they are concerned about hallucinations and other unexpected LLM behavior,” said Anand Kannappan, CEO and co-founder of Patronus AI. “We are confident our partnership with MongoDB will accelerate enterprise AI adoption.”

AI is becoming an increasingly popular tool, but still needs a human touch to operate properly. 

As PYMNTS reported, at the center of many enterprise concerns around adopting innovative AI solutions are questions about the data used to train AI models, as well as protections around that data’s provenance and security.

“LLMs are prone to hallucination and returning information that is at best inaccurate and at worst misleading — that’s because if bad data becomes the source of a response, it can then be further propagated by serving as an informational foundation for future responses an AI is tasked with,” PYMNTS said in May.