‘Godfather’ of Neural Networks Changes His Mind About AI’s Potential

Generative artificial intelligence (AI) captured the public imagination by trying to model and recreate it. 

Now, academic and business interests are clashing around the future use and development of the powerful, data-hungry models, with AI pioneer Dr. Geoffrey Hinton resigning from Google this week (May 1) to “speak freely” about potential risks that widespread integration of the technology may pose. 

Hinton’s concerns center on AI’s unparalleled capacity for creating and spreading misinformation at scale, as well as on the job losses it may drive. The so-called “Godfather of AI” even told The New York Times he considers the technology he spent the past half-century developing to be a “threat to humanity.”

“It is hard to see how you can prevent the bad actors from using it for bad things,” he said.

Already, businesses like IBM are planning to suspend hiring for jobs AI could someday do, with the tech giant freezing or slowing hiring for some 26,000 back-office roles. 

But are Hinton’s fears around AI’s dangerous consequences valid, or is his sudden about-face a bit rich coming from someone who already made his fortune and career on the back of the tech’s development?

The most valuable human innovations throughout history transformed the economy while changing the realities of daily life.

PYMNTS has previously covered how a healthy, competitive market is one where the doors are open to innovation and development, not shut to progress.


AI Will Figure Centrally in Future Infrastructures 

“People are referring to [Hinton’s resignation] to mean: look, AI is becoming so dangerous, even its pioneers are quitting. I see it as: the people who have caused the problem are now jumping ship,” tweeted Dr. Sasha Luccioni of Hugging Face, a machine learning (ML) and AI platform and dataset provider. 

Hinton was a key proponent of neural networks, a crucial technique used in today’s generative AI models. 

But the fundamental technology he shepherded for his entire career wasn’t fully embraced by either the academic or business communities until the last decade and a half. 

Neural networks are mathematical systems that learn skills by analyzing data via backpropagation, a training technique that Hinton and his colleagues popularized. 
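To make the idea concrete, here is a minimal sketch of backpropagation, not Hinton’s own formulation: a tiny NumPy network that learns the XOR function by repeatedly passing its prediction error backward through its layers and nudging each weight accordingly. The dataset, network size and hyperparameters are all illustrative.

```python
# A minimal backpropagation sketch: a one-hidden-layer network
# learns XOR by adjusting weights to reduce prediction error.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR inputs and targets (illustrative only).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights for a 2 -> 4 -> 1 network.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate, chosen arbitrarily for this toy problem
for step in range(5000):
    # Forward pass: compute the network's predictions.
    h = sigmoid(X @ W1)  # hidden-layer activations
    p = sigmoid(h @ W2)  # output predictions

    # Backward pass: propagate the error back through each layer,
    # applying the chain rule to get squared-error gradients.
    d_out = (p - y) * p * (1 - p)          # error signal at the output
    d_hid = (d_out @ W2.T) * h * (1 - h)   # error signal at the hidden layer

    # Gradient descent: move each weight against its gradient.
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_hid

# Predictions should approach [0, 1, 1, 0] after training.
print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))
```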

Because neural networks prioritize the results they produce over any interpretable account of how those results are reached, they were long anathema within leading academic and business research circles.

Academics and other thought leaders looked askance at neural networks for years, partly because they were comparatively opaque and partly because the models relied on a “common task framework” rather than the mathematical purity and elegance favored at the time. 

It turns out, however, that common task frameworks are particularly well-suited for focusing ML and AI algorithms around business goals and KPIs (key performance indicators), such as page views or time spent on an article or video, as well as other engagement metrics. 

That’s because, within a common task framework, what matters is measured performance on the shared task, not an interpretable account of how the model produces its outputs. 
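As a rough illustration of this results-first mindset, consider the sketch below, in which the benchmark data, candidate models and the `kpi_score` helper are all made up: candidates are treated as black boxes and ranked purely by their score on a shared metric, with no credit for being explainable.

```python
# A hypothetical "common task framework" leaderboard: models are
# ranked solely by a shared engagement-style KPI on fixed data.
import numpy as np

rng = np.random.default_rng(42)

# Made-up held-out benchmark: user features and whether the
# user actually engaged (clicked) -- the ground truth for the KPI.
features = rng.normal(size=(1000, 5))
clicked = (features @ rng.normal(size=5) + rng.normal(size=1000)) > 0

def kpi_score(predict, features, clicked):
    """Score a model by accuracy on the shared benchmark --
    no questions asked about how it works internally."""
    return np.mean(predict(features) == clicked)

# Two black-box candidates: an opaque heuristic and a coin flip.
candidates = {
    "heuristic": lambda f: f.sum(axis=1) > 0,
    "coin_flip": lambda f: rng.random(len(f)) > 0.5,
}

for name, model in candidates.items():
    print(name, round(kpi_score(model, features, clicked), 3))
# The leaderboard crowns whichever model scores best on the metric,
# regardless of whether anyone can explain its decisions.
```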


Ethical Concerns Around AI Emphasize the Need for Responsible Regulation 

As time passed, predictive accuracy began to prevail over modeling the underlying process of the thing being predicted, and prediction likewise came to outweigh concerns about being able to interpret and understand how an algorithm arrives at its predictions.

That’s why the ethical and political concerns around contemporary AI now center on the recasting of AI as the optimization of metrics, rather than as a technical capability designed with mathematical provability and input-output parity in mind. 

Hinton won the 2018 Turing Award for this work, as neural networks and deep learning increasingly came to dominate the AI landscape. 

As the world moves from data-poor environments to data-rich ones, AI tools will increasingly figure centrally in the infrastructures mediating communication, politics, science and news. 

Hinton, a British expatriate, helped open Google’s artificial intelligence lab, and his journey from AI pioneer to naysayer underscores how remarkable the present moment is for AI and the technology industry at large. 

This comes as big tech companies have increasingly leaned on AI applications to stabilize their businesses after a brutal year. 

PYMNTS has reported on how lawmakers in Europe want to give regulators more authority over AI companies as governments and corporations alike grapple with a technology that’s projected to disrupt industries across the planet.

By enacting guardrails around the provenance of data used to train LLMs (large language models) and other AI models, and by making it obvious when a model is generating synthetic content, including text, images and even voice, and flagging its source, governments and regulators can protect consumer privacy without hampering private-sector innovation and growth.

“The algorithm is only as good as the data that it’s trained on,” Erik Duhaime, co-founder and CEO of data annotation provider Centaur Labs, told PYMNTS earlier this week.