
Meta Science Chief Says AI Still Has Much to Learn


Some people think artificial intelligence (AI) could be a threat to human life.

Meta’s chief AI scientist thinks the technology has less learning capacity than a cat.

With that in mind, Yann LeCun told the Financial Times (FT) in an interview Thursday (Oct. 19) that calls to regulate AI are premature and will only hinder competition.

“Regulating research and development in AI is incredibly counterproductive,” LeCun said. “They want regulatory capture under the guise of AI safety.”

The push to regulate AI comes from a “superiority complex” at some of the leading tech companies, which contend that only they can be trusted to develop AI safely, said LeCun, whom the FT described as one of the world’s leading AI experts.

“I think that’s incredibly arrogant. And I think the exact opposite,” he said, likening it to regulating the aviation industry before modern airplanes even existed.

“The debate on existential risk is very premature until we have a design for a system that can even rival a cat in terms of learning capabilities, which we don’t have at the moment,” he said.

PYMNTS explored the dangers of over-regulating AI last month, writing that “a healthy, competitive market is one where the doors are open to innovation and development, not shut to progress.”

“And the danger of leading too strongly with regulation of the novel technology is that it could force AI development to other areas of the globe where its benefits can be captured by other interests,” that report said.

The FT report points out that Meta has veered away from other Big Tech companies with the open-source release of its Llama 2 large language model (LLM).

That means, as PYMNTS wrote in August, that “software fans, researchers and developers around the world are now able to access the model’s artificial intelligence (AI) capabilities and tinker with the technology without needing to train their own systems.”
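For readers curious what that tinkering looks like in practice, the snippet below is a minimal sketch of loading a Llama 2 checkpoint and generating text with the Hugging Face transformers library. The model ID, prompt and settings are illustrative assumptions rather than anything drawn from the article, and Meta gates the official checkpoints behind a license agreement.

```python
# Minimal sketch: running open Llama 2 weights locally via Hugging Face
# transformers. Assumes the gated "meta-llama/Llama-2-7b-chat-hf" checkpoint
# has been approved for your account; model ID and prompt are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-2-7b-chat-hf"  # gated; requires accepting Meta's license

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

prompt = "Explain in one sentence why open model weights matter to researchers."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate a short completion; no training of your own system is required.
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```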

Meanwhile, a number of the leading foundational AI models are closed source, including those from Google, OpenAI, Microsoft and Anthropic.

Open-source AI allows for greater interoperability, customization and integration with third-party software or hardware, though critics warn that this openness could also invite misuse and abuse by bad actors.

“The interaction between AI and nuclear weapons, biotechnology, neurotechnology, and robotics is deeply alarming,” U.N. Secretary-General António Guterres said in July, stressing that “generative AI has enormous potential for good and evil at scale.”