Year in Review: The Milestones That Transformed AI in 2023


2023 was the year of the generative artificial intelligence (AI) boom.

Perhaps the innovation's greatest trick to date was fitting a decade's worth of technical and marketplace advances into just one year. And what a year it was.

Over the past 365 days, both the hopes and the fears around AI’s transformative capabilities went mainstream, as did global questions about how to handle the new technology.

And PYMNTS was there to cover all of it.

From the rise of the LLM (large language model) and the emergence of the LMM (large multimodal model), to the regulatory dynamics among Washington, Brussels and Beijing, and even the boardroom squabbles at industry pioneer OpenAI, these are the top stories about AI this year.

Read also: What Superintelligent Sentience (AGI) Means for the AI Ecosystem

Generative AI’s Text-Based Interface Goes Multimodal

The first wave of AI was about classification, prediction and text-based interaction.

But as the year progressed, and despite early calls to pause the technology's development, the foundational models underlying commercially available AI platforms became more multimodal, allowing end users to move seamlessly among text, image and video prompts.

Among other applications, the increasingly multimodal capabilities of generative AI gave a shot in the arm to voice tech and voice AI capabilities, which had been languishing under the weight of consumer expectations that exceeded technical capability.

And while AI-generated content like music, videos, computer code and more opened up exciting new possibilities for individuals and businesses alike, it also raised greater concerns around misinformation and copyright violations.

AI technology more broadly is undergoing a fundamental upgrade, even at the level of nomenclature, as large multimodal models (LMMs) begin to extend the capabilities of large language models (LLMs).

The AI companies behind these advances are showing no interest in slowing down, pushing the boundaries of what is possible with deep learning neural networks by working to develop AI systems capable of performing multi-step math operations.

As for what’s on the horizon, experts are split over the prospect of artificial general intelligence (AGI), an AI system more capable than the average human.

Read more: It’s a ’90s Browser War Redux as Musk and Meta Enter AI Race

Technical Advances Lead to Marketplace Shifts

The rocket-ship commercialization of AI systems has reshaped the tech landscape in some ways, while at the same time further entrenching the position of Big Tech in others.

OpenAI and Microsoft fired the first shot across the bow of the AI ecosystem, surprising Google, Amazon and Meta while spurring a handful of promising startups, including Anthropic, to increase their research and development.

While still in its infancy, the AI ecosystem is becoming increasingly multifaceted.

The idea of AI-specific hardware devices is gaining traction, as is the notion of purpose-built GPTs, or task-specific AI models. OpenAI is even developing an app store for them.

As PYMNTS CEO Karen Webster wrote, generative AI LLMs with scale, such as GPT, that today operate as apps inside existing operating systems will likely see the potential to break out and create their own, gaining more control over the customer, the experience, the data and the revenues.

As the ecosystem matures from a capabilities standpoint, providers like Amazon, Google, Microsoft, OpenAI, Anthropic, Meta and others are also starting to jostle for position on the software pricing models and unit economics of their various AI systems.

PYMNTS covered how the pricing structures of AI systems introduce a new vernacular: tokens for OpenAI, Microsoft, Amazon and Anthropic, or characters for Google. And those go beyond the various acronyms already populating the AI landscape.
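To make token-based pricing concrete, here is a minimal sketch of how per-token billing is typically computed. The rates and token counts below are placeholders for illustration, not any provider's published prices.

```python
# Hypothetical per-token pricing sketch; the rates used here are
# placeholders, not any provider's actual published prices.
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  input_rate: float, output_rate: float) -> float:
    """Estimate one API call's cost; rates are USD per 1,000 tokens."""
    return (prompt_tokens / 1000) * input_rate \
         + (completion_tokens / 1000) * output_rate

# Example: 1,200 prompt tokens and 300 completion tokens at
# placeholder rates of $0.01 (input) and $0.03 (output) per 1K tokens.
cost = estimate_cost(1200, 300, 0.01, 0.03)
print(f"${cost:.4f}")  # → $0.0210
```

Providers that bill by characters rather than tokens follow the same shape, with character counts substituted for token counts.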

But as reported here, the real winners of 2023 were the infrastructure vendors. They include cloud platform providers like Google, Microsoft and Amazon, as well as chip designers like Nvidia, Arm and others.

That’s because AI models are notoriously expensive to build and operate. LLMs and other vast, data-driven AI operations frequently require tens of thousands of GPUs running complex, resource-hungry workloads 24/7 for weeks or even months in purpose-built data centers.
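A back-of-envelope calculation shows why those numbers add up so quickly. The GPU count, duration and hourly rate below are illustrative placeholders, not figures from any real training run.

```python
# Back-of-envelope training-cost sketch; GPU count, duration and the
# hourly rate are illustrative placeholders, not real project figures.
def training_cost(num_gpus: int, days: float, usd_per_gpu_hour: float) -> float:
    """Rough cost of running a GPU cluster 24/7 for the given duration."""
    return num_gpus * days * 24 * usd_per_gpu_hour

# Example: 10,000 GPUs for 30 days at a placeholder $2 per GPU-hour.
print(f"${training_cost(10_000, 30, 2.0):,.0f}")  # → $14,400,000
```

Even at modest placeholder rates, a month-long run on a cluster of that size lands in the tens of millions of dollars, which is why cloud and chip vendors captured so much of 2023's AI spending.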

Read also: Is the EU’s AI Act Historic or Prehistoric?

AI Regulations Need to Target Data Provenance and Protect Privacy

The speed with which AI is radically transforming global economies has not escaped regulators’ attention.

While China was the first to establish a rulebook for AI, the European Union’s Artificial Intelligence Act officially reached a provisional agreement this month (Dec. 8).

The U.S., spurred by the White House’s executive order on AI, is currently weighing its own approach following a series of congressional hearings, both public and closed-door, featuring in-depth testimony from AI executives and industry experts.

In previous discussions with PYMNTS, industry insiders have compared the purpose of AI regulation in the West to both a car’s airbags and brakes and the role of a restaurant health inspector.

And of course, the AI firms themselves have their own ideas on how they should be regulated, as do industry policy groups.

“Trying to regulate AI is a little bit like trying to regulate air or water,” University of Pennsylvania Law Professor Cary Coglianese told PYMNTS earlier this month as part of the “TechReg Talks” series presented by AI-ID. “It’s not one static thing.”

Complicating matters somewhat is the questionable provenance of the data within many popular training sets used to build out today’s LLMs and LMMs. AI providers like OpenAI are increasingly facing lawsuits over the use of copyrighted material for training purposes.

See also: What Does it Mean to be Good for Humanity? Don’t Ask OpenAI Right Now

Where AI Will Be Applied Next

While the 72-hour OpenAI drama, quelled with the return of Sam Altman, put the corporate structure of AI firms under a microscope, the technology’s applications in enterprise settings will be what makes or breaks the innovation’s future.

“AI is going to be an imperative for every company, and what you do with AI is what will differentiate your products,” Heather Bellini, president and chief financial officer at InvestCloud, told PYMNTS. “Functionally, it might get rid of a lot of the manual work people don’t want to do anyway and extract them up to a level where they can do more things that have a direct impact on the business.”

The generative AI industry is expected to grow to $1.3 trillion by 2032, and is projected at the same time to increase worker productivity by optimizing legacy processes.

As PYMNTS reported, around 40% of executives said there is an urgent necessity to adopt generative AI, and 84% of business leaders believe generative AI’s impact on the workforce will be positive.

While AI systems have the potential to free up huge swaths of human work hours, their success when it comes to the critical measure of ROI (return on investment) depends on how those regained hours are repurposed.

For example, PYMNTS Intelligence found that 72% of lawyers doubt the legal industry is ready for AI, while just 1 in 5 believe that the advantages of using AI surpass the disadvantages.

At the same time, nearly two-thirds of Americans want an AI copilot to help them do things like book travel — and travel companies are already leaning into the technology’s applications in their industry.

Tailoring AI solutions by industry is key to scalability. For example, more than 3 in 4 (77%) retailers think AI is the emerging technology that will have the biggest impact on their industry, and 92% of retailers are already tapping AI-driven personalization to drive growth.

When it comes to the future of AI, only one thing is certain: 2024 is shaping up to be the most transformative year yet.