Can Humanity Ever Match, Much Less Control, AI’s Hyper-Rapid Growth?

There is nothing artificial about the potentially century-defining impact of generative artificial intelligence (AI) capabilities.

Google and Alphabet CEO Sundar Pichai called AI and its implications across business and society “more profound than fire or electricity” in a televised interview on Sunday (April 16).

But who is winning control of the technology industry’s next big thing?

Samsung is reportedly considering changing its default smartphone search engine from Google to Microsoft's OpenAI-powered Bing. Meanwhile, this generation's jack-of-all-trades and master of many, Elon Musk, has announced plans to launch a generative AI company to compete with OpenAI, the developer behind the buzzy ChatGPT tool that kicked off the contemporary tech arms race.

Musk was part of OpenAI's founding team, which he left in 2018, somewhat ironically, over disagreements about the commercialization of its generative AI work.

Read more: ChatGPT-4 Looks to Push AI Innovation Beyond Shiny Object Category

“OpenAI was created as an open source (which is why I named it ‘Open’ AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all,” Musk tweeted in February.

The Tesla, SpaceX, Boring Company, Neuralink, X Corp. (formerly Twitter), and now X.AI leader was also among the first signatories to an open letter on the potential dangers of AI published in March by the AI watchdog group Future of Life Institute.

The ongoing marketplace frothiness has since pushed Alphabet, the parent company of Google and long viewed by industry observers as the smartest business in the room, to begin building an all-new AI search engine as it feels the pressure of mounting competition from rivals.

Mismatch Between AI’s Advancement and Regulation

Alphabet’s CEO has said that AI will “impact every product across every company.”

The rapid development of AI capabilities, paired with attractive, industry-agnostic integration use cases, is already proving to be a challenge for regulators and lawmakers around the world as they race to keep pace.

As reported by PYMNTS, U.S. Senate Majority Leader Chuck Schumer last week (April 13) unveiled a new framework of rules designed to chart a path for the U.S. to regulate and shape the emergent AI industry.

Schumer’s proposed framework came just days after the Biden administration laid out a formal request for comment meant to help shape specific U.S. policy recommendations around AI.

China’s internet regulator has also released its own set of detailed measures to keep AI in check, including mandates that would ensure accuracy and privacy, prevent discrimination and guarantee protection of intellectual property rights.

“It is imperative for the United States to lead and shape the rules governing such a transformative technology and not permit China to lead on innovation or write the rules of the road,” wrote Schumer.

The latest example of the call for more AI oversight came Monday (April 17), when a group of European lawmakers issued a letter saying they want to make sure AI legislation moves “the development of very powerful artificial intelligence in a direction that is human centric, safe and trustworthy,” adding that the issue requires “significant political attention.”

Read more: AI Regulations Need to Target Data Provenance and Protect Privacy

What Is Driving Acceleration of AI?

Hard problems such as formal analysis, interpretability and alignment provide the foundational guardrails of AI development, yet even within those guardrails, the technology is progressing at an unprecedented rate.

“Things are doubling every few weeks, two months … Moore’s law has been completely blown away,” said Patrick Murphy, founder and CEO at construction technology company Togal.AI, in an earlier conversation with PYMNTS that touched on modern advances in generative AI’s commercial applications.

Still, OpenAI’s CEO Sam Altman has recently suggested that GPT-4 could be the last major advance to emerge from OpenAI’s strategy of making its large language models (LLMs) bigger and feeding them more data as a way of increasing the capabilities of AI tools.

OpenAI's GPT-4 model is an LLM trained on trillions of words of text using many thousands of powerful computer chips, in a process that industry observers have estimated may have cost more than $100 million.

OpenAI’s technical report describing GPT-4 suggests that scaling up LLM size will eventually result in diminishing returns.

What that might mean is that by the time competitors have caught up to OpenAI's current capabilities, the developer may already be onto the next big thing.

Today's leading models, GPT-4 included, are built on transformers, deep learning models that process entire inputs all at once rather than sequentially, the way earlier recurrent neural networks (RNNs) did. One hypothesis for the future of AI is that the next leap will move beyond simply scaling up today's transformer-based LLMs.
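For readers curious about that distinction, the toy Python sketch below (purely illustrative, with made-up sizes and weights, and not drawn from OpenAI's or any production system) contrasts sequential, RNN-style processing, where each step must wait for the previous one, with the all-at-once self-attention that transformers use.

```python
# Toy illustration: sequential RNN-style processing vs. parallel self-attention.
import numpy as np

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))  # 5 token embeddings, 8 dimensions each

# RNN-style: each step depends on the previous hidden state, so the
# tokens must be consumed one at a time.
W = rng.normal(size=(8, 8)) * 0.1
hidden = np.zeros(8)
for t in tokens:
    hidden = np.tanh(W @ hidden + t)

# Transformer-style self-attention: every token attends to every other
# token in a single matrix operation, parallel across the whole input.
scores = tokens @ tokens.T / np.sqrt(tokens.shape[1])
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
attended = weights @ tokens  # all positions updated at once

print(hidden.shape, attended.shape)  # (8,) vs. (5, 8)
```

The practical upshot, and one reason transformers displaced RNNs, is that the parallel formulation maps far better onto the thousands of chips used to train models like GPT-4.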

The impact these advances might have is impossible to know, and likely just as impossible to prepare for.