
Princeton, DARPA Partner On AI-Accelerating Chips 


A new computer chip could accelerate artificial intelligence (AI) applications, enhancing speed and efficiency in business operations.

On Wednesday (March 6), EnCharge AI announced a partnership with Princeton University, supported by the U.S. Defense Advanced Research Projects Agency (DARPA), to develop advanced processors capable of running AI models. DARPA’s Optimum Processing Technology Inside Memory Arrays (OPTIMA) program is a $78 million effort to develop faster, more power-efficient and scalable compute-in-memory accelerators for commercial AI. 

“Companies are in the early stages of understanding how AI will transform business,” Jonathan Morris, vice president of government affairs and communications at EnCharge, told PYMNTS in an interview. “But what we do know is that the potential of AI will only be half realized if AI remains locked away in the cloud and behind a high cost to entry. A new generation of efficient AI processors can bring AI inference on-device, overcoming prohibitive costs of the cloud and enabling a variety of new use cases and experiences while reducing energy use and privacy concerns.” 

Widespread Implementation of AI Chips

The project will examine the latest advances in compute-in-memory technology and how AI applications can be run end to end on the new chips. The goal is to make AI work outside of big data centers so it can be used in everyday devices like phones, cars and even factory equipment.

EnCharge AI is already working to bring these chips to market, and with support from DARPA, it hopes to make them faster and more efficient.

The new chips use switched-capacitor analog in-memory computing, a technique commercialized by EnCharge AI. The company claims the chips deliver order-of-magnitude improvements in efficiency compared to digital accelerators while retaining a precision and scalability that current-based analog computing approaches cannot match.
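To make the idea concrete, here is a minimal, hypothetical sketch of how charge-domain in-memory computing performs a multiply-accumulate. This is an idealized toy model, not EnCharge's actual circuit: the capacitor sizes, supply voltage, ADC resolution and all function names are illustrative assumptions. Weights are stored as signed capacitor sizes, binary inputs gate whether each capacitor is charged, and charge sharing on a read line sums the products before a single ADC conversion, which is where the claimed efficiency gain over bit-by-bit digital arithmetic comes from.

```python
import numpy as np

# Hypothetical toy model of switched-capacitor in-memory computing
# (illustrative only; not EnCharge AI's actual design).

rng = np.random.default_rng(0)

C_UNIT = 1e-15   # assumed unit capacitance (1 fF)
V_DD = 1.0       # assumed supply voltage (V)
ADC_BITS = 8     # assumed ADC resolution

weights = rng.integers(-8, 8, size=(4, 16))  # 4 rows of 16 signed weights
inputs = rng.integers(0, 2, size=16)         # binary input activations

def charge_domain_mac(w_row, x):
    # Each active input deposits charge w * C_UNIT * V_DD on the shared
    # read line; the summed charge is the analog dot product.
    return float(np.dot(w_row, x)) * C_UNIT * V_DD

def adc(charge, full_scale_charge):
    # Quantize the shared charge to a signed digital code.
    levels = 2 ** (ADC_BITS - 1)
    code = round(charge / full_scale_charge * (levels - 1))
    return max(-levels, min(levels - 1, code))

# Full scale is the largest possible |dot product| for these operands:
# 16 inputs, each weight magnitude at most 8.
full_scale = 16 * 8 * C_UNIT * V_DD

for w_row in weights:
    exact = int(np.dot(w_row, inputs))                 # digital reference
    code = adc(charge_domain_mac(w_row, inputs), full_scale)
    # Map the ADC code back to dot-product units for comparison.
    recovered = round(code / (2 ** (ADC_BITS - 1) - 1) * 16 * 8)
    print(f"exact={exact:4d}  adc_code={code:4d}  recovered={recovered:4d}")
```

In this idealized model the recovered values match the digital reference up to ADC quantization; in real silicon, capacitor mismatch and noise set the achievable precision, which is the trade-off the company says its approach manages better than current-based analog designs.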

The new computer chips could make personal computers much faster, letting users do more with their business software without worrying about privacy or security issues, Morris said. 

“A new generation of on-device AI applications could include AI assistants with awareness of local files, real-time language translation, meeting transcription/summarization, and personalized and dynamic content generation,” he added. “As with the smartphone revolution, we are just starting to understand what productivity gains will be realized with AI close to users in PCs.”

EnCharge faces competition in the crowded market for AI accelerator hardware. Axelera and GigaSpaces are working on in-memory hardware to speed up AI tasks. NeuroBlade has also secured venture capital funding for its in-memory inference chip, designed for both data centers and edge devices. 

More Power-Efficient Chips 

The compute demands of AI software greatly surpass what current hardware can deliver, especially where power is constrained, Morris said. As a result, many AI applications today run on large, expensive and power-hungry server farms in the cloud. He said that moving AI from cloud servers to personal computers requires the hardware to be much more efficient, a goal the new chips could achieve.

“We have seen the rise of GPU-accelerated computing as the need for 3D rendering created compute demands that could not be met efficiently by the CPU,” Morris said. “Similarly, the nascent category of AI PCs will require a dedicated accelerator (NPU) for AI applications.”