Touchcast Plans to Raise $100 Million to Improve AI Queries


Artificial intelligence startup Touchcast is reportedly raising $100 million from backers including Microsoft.

The New York-based company stores and serves responses to commonly used AI prompts, reducing the energy and computational resources needed to run AI models, Bloomberg reported Tuesday (June 4).

CEO Edo Segal said in the report that his company will have a valuation of at least $350 million after the new funding, although he declined to say how much Microsoft invested.

Companies are turning to AI to streamline everyday tasks, but rising demand has led to a bottleneck in electronics needed to train and operate AI models, the report said. Segal said his company’s approach could make better use of computing and energy resources.

Touchcast has unveiled what it calls cognitive cache content delivery technology, which the report likened to placing several small library desks throughout a vast library instead of relying on one main desk: material becomes easier to reach, improving the performance of the AI system.
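The report does not detail how Touchcast's caching works, but the general idea of serving stored answers to repeated prompts instead of re-running a model can be sketched as follows. This is a minimal illustration, not Touchcast's implementation; the `PromptCache` class, its normalization step, and the stand-in model function are all hypothetical.

```python
import hashlib


class PromptCache:
    """Illustrative prompt-response cache: answers to frequently seen
    prompts are stored, so repeated queries skip the expensive model call."""

    def __init__(self, model_fn):
        self.model_fn = model_fn  # fallback: the full (expensive) model call
        self.store = {}
        self.hits = 0
        self.misses = 0

    def _key(self, prompt: str) -> str:
        # Trivial normalization (case and whitespace) before hashing,
        # so near-identical prompts map to the same cache entry.
        normalized = " ".join(prompt.lower().split())
        return hashlib.sha256(normalized.encode()).hexdigest()

    def ask(self, prompt: str) -> str:
        key = self._key(prompt)
        if key in self.store:
            self.hits += 1  # cached: no model inference needed
            return self.store[key]
        self.misses += 1
        answer = self.model_fn(prompt)  # expensive path
        self.store[key] = answer
        return answer


# Hypothetical stand-in for a real LLM call.
cache = PromptCache(lambda p: f"answer to: {p}")
first = cache.ask("What is AI?")
second = cache.ask("what  is AI?")  # normalizes to the same key: cache hit
print(cache.hits, cache.misses)  # → 1 1
```

A production system would replace the exact-match key with semantic matching (e.g. embedding similarity) and add eviction and freshness policies, but the resource saving comes from the same principle: every cache hit is a model inference that never runs.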

Large language models (LLMs) have emerged as forces in AI, deriving their power from their ability to recognize patterns and extract knowledge from immense textual datasets. The models craft rich representations of concepts, facts and skills, which they draw on in response to prompts or queries to engage in dialogue, answer questions, write articles and generate code.

“However, the rise of LLMs has also sparked concerns and challenges,” PYMNTS reported last month. “LLMs may make up information, affecting their credibility and reliability. The models can perpetuate biases found in their training data and generate misinformation. Their use to produce online content at scale may accelerate the spread of fake news and spam.”

Policymakers are concerned about the impact on jobs as these models encroach on knowledge work. There are also questions emerging about intellectual property, as these models are trained on copyrighted material.

“Companies and researchers are now working to address these issues,” the report said. “Model developers use techniques like ‘value alignment’ to constrain LLM outputs and build truthfulness rewards. Efforts are underway to watermark AI-generated content and equip LLMs with fact-checking abilities. Governments are weighing regulations and considering social safety nets for displaced workers.”

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.