
Essential AI Raising $40 Million to Build LLM Software


Artificial intelligence (AI) company Essential AI has reportedly raised $40 million in new funding.

The funding round for the startup, which a Bloomberg News report Wednesday (Sept. 13) described as a “secretive” company, comes amid a wave of financing for the AI sector.

The Bloomberg report — citing a source familiar with the deal — noted that Essential had raised $8 million a few months ago in a round led by Thrive Capital, which also invested in OpenAI.

According to the report, Essential was founded by Ashish Vaswani and Niki Parmar. They are among the authors of the paper “Attention Is All You Need,” which introduced the transformer architecture underpinning large language models (LLMs).

LLMs, the technology behind text-based chatbots, have helped drive the recent boom in AI that began with the rise of OpenAI’s ChatGPT. A report by Reuters in May said Essential AI is working to build LLM-related software for companies.

As noted here in August, LLMs have taken AI to new heights by expanding its capabilities beyond text to include images, speech, video and music. 

“As they build, companies developing LLMs will contend with the challenges of collecting and classifying large amounts of data — as well as understanding the intricacies of how models now operate and how that differs from the previous status quo,” PYMNTS wrote.

Tech giants like Alphabet and Microsoft, along with investors such as Fusion Fund and Scale VC, are investing in LLMs and forging partnerships. In doing so, they are taking on a big task: ensuring the models they back gather and train on large data sets shaped to produce the desired results.

“LLMs require data, classifications, context and process to fulfill their promise,” the report said. “Data in very large quantities is the main ingredient for LLMs. Richer data sets provide more material or input with which the model can train to learn how to generate a relevant response.”

As the report notes, data by itself is meaningless. To be useful to models, it needs to be sorted, labeled, measured, clustered and categorized in a variety of ways. Classification and annotation can also give the data the right context and intent, conveying what a human user actually meant to say.
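To make that classification step concrete, here is a minimal, hypothetical sketch in Python of what annotated training data might look like. The category and intent labels, field names and example texts are invented for illustration; they are not drawn from Essential AI’s product or PYMNTS’ reporting.

```python
# Illustrative only: a hypothetical sketch of labeling and annotating raw text,
# not Essential AI's or any vendor's actual data pipeline.
from dataclasses import dataclass, field

@dataclass
class AnnotatedExample:
    """A raw text snippet enriched with labels that give it context and intent."""
    text: str
    category: str               # e.g. "billing", "account" (hypothetical taxonomy)
    intent: str                 # what the user was trying to accomplish
    tags: list[str] = field(default_factory=list)

# Raw data by itself is just strings; annotation turns it into training material.
raw_texts = [
    "I was charged twice for my subscription last month.",
    "How do I reset my password?",
]

annotated = [
    AnnotatedExample(raw_texts[0], category="billing", intent="dispute_charge", tags=["refund"]),
    AnnotatedExample(raw_texts[1], category="account", intent="reset_password", tags=["self_service"]),
]

# A downstream fine-tuning or retrieval step would consume these structured
# records rather than the unlabeled strings.
for example in annotated:
    print(f"{example.category}/{example.intent}: {example.text}")
```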

“Steering these volumes of data through rule sets with correct context is a work in progress,” PYMNTS wrote. “The effort requires that the model reviews and connects the dots with whatever happened earlier or happens later in the chat or text.”