
This Week in AI: Politics, Policies and Payments


As another week has gone by, generative artificial intelligence (AI) continues to work its way further and further into daily life.

The only problem, from the perspective of governments and policy groups around the world, is that the technology has so far marched forward largely unfettered by oversight or regulatory frameworks, apart from a few voluntary commitments made by some of the world’s largest companies.

As those firms work to develop AI products that warrant trust, here are the key stories PYMNTS has been tracking this week.

White House Executive Order to Come

The U.K. is holding an AI safety summit next week (Nov. 1 and 2), and in advance of the global meeting, U.S. President Joe Biden is reportedly preparing an executive order that will regulate AI.

The order is expected to be released this coming Monday (Oct. 30).

But not everyone is fully on board with the presidential directive. Senate Majority Leader Chuck Schumer reportedly believes action on AI needs to come from Congress, not the White House.

The Senate’s top Democrat on Thursday (Oct. 26) said the president’s executive order won’t be enough to properly deal with AI.

Sen. Schumer reportedly also stressed the need for federal investment — to the tune of up to $32 billion — in AI safeguards.

On the private sector side, Microsoft, OpenAI, Google and Anthropic on Wednesday (Oct. 25) named the first-ever director for their regulation-focused Frontier Model Forum and said they’ll commit $10 million to an AI safety fund.

The tech quartet appointed Chris Meserole from the Brookings Institution as their organization’s executive director. The companies announced the launch of the forum in July.

“We’re probably a little ways away from there actually being regulation,” said Meserole, who is stepping down from his role as an AI director at the Washington, D.C.-based think tank. “In the meantime, we want to make sure that these systems are being built as safely as possible.”

Kathleen Yeh, director of product compliance at Galileo Financial Technologies, told PYMNTS on Thursday (Oct. 26) that in 2024 and beyond, companies are going to grapple with the larger questions that arise at the intersection of consumer-level information and technology.

As Yeh noted, “currently, at the Federal level, we don’t have specific regulations or laws that pertain, specifically, to AI and the risks surrounding it.”

AI Steals the Earnings Spotlight

As the 2023 earnings season nears an end, public companies have been talking up the strides they are making on AI.

“In terms of investment priorities,” CEO Mark Zuckerberg said on Meta Platforms’ third-quarter earnings call with analysts on Wednesday (Oct. 25), “AI will be our biggest investment area in 2024 — in engineering and computing resources.”

The company’s biggest loss area, on the other hand, remains its Reality Labs segment, which houses Meta’s metaverse efforts and logged a $3.7 billion operating loss with revenues down 26% from last year.

On Alphabet’s third-quarter 2023 earnings call Tuesday (Oct. 24), CEO Sundar Pichai told investors that “more than half of all funded generative AI startups are Google Cloud customers.”

Last quarter, Pichai reported that 70% of generative AI unicorns were using Google Cloud. Between the second and third quarters of 2023, Pichai added, the number of active generative AI projects being built on Alphabet-owned platforms grew sevenfold.

During Spotify’s quarterly earnings call on Tuesday (Oct. 24), founder, CEO and Chairman Daniel Ek said AI at Spotify was intended to boost engagement and generate “even more compelling value” for users.

Ek also noted that AI voice translation holds immense potential, particularly in non-English language content, where availability is limited.

“It can personalize things. It can contextualize things. It can provide this thing at a scale that would be impossible to do by humans,” Ek said.

News broke this past Sunday (Oct. 22) that Apple is reportedly investing $1 billion per year to integrate generative AI across its product line.

Building on the Technology

Two top executives at AI pioneer OpenAI believe the technology will be capable of doing any job a human can do within the next 10 years.

Mastercard, for its part, on Monday (Oct. 23) announced it was expanding its consulting business to include practices dealing with AI and economics, while also enhancing Digital Labs, its business transformation service.

According to the release, Mastercard’s AI consulting practice works with businesses to adopt relevant and responsible AI strategies, with experts identifying and integrating AI tools for improved customer experiences, operational efficiency and sustainable revenue generation.

On Wednesday (Oct. 25), Brady Harrison, director of customer analytics solution delivery at Kount, an Equifax company, told PYMNTS that, “from a consumer perspective, I think some stuff is going to get way better … With the economic overhang, organizations are somewhat maxed out on new customer acquisition and are increasingly looking to grow and retain their existing customer wallet. We’re seeing machine learning and AI play a pretty substantial part of that on the risk side [by juicing authorizations].”

“By analyzing payment data, merchants can gain valuable insights because AI can identify trends and customer behaviors and help optimize pricing strategies and marketing channels [around those insights], and even predict and prevent customer churn,” Justin Shoolery, head of data science and analytics at sticky.io, told PYMNTS on Monday.
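To make that use case concrete, the minimal sketch below (in Python, using pandas and scikit-learn) shows one common way churn prediction from payment data is set up: aggregate each customer’s payment history into a handful of features, train a classifier, and score customers for churn risk. The feature names and figures are invented for illustration and do not reflect sticky.io’s actual models.

```python
# A minimal, hypothetical sketch of churn modeling from payment data:
# train a classifier on per-customer payment features, then score customers
# for churn risk. Feature names and numbers are invented for illustration.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical per-customer features aggregated from payment history.
customers = pd.DataFrame({
    "orders_last_90d":       [1, 6, 0, 3, 8, 2, 0, 5],
    "avg_order_value":       [20.0, 55.5, 15.0, 42.0, 60.0, 25.0, 18.0, 48.0],
    "days_since_last_order": [75, 5, 120, 20, 3, 60, 150, 9],
    "failed_payments":       [2, 0, 3, 1, 0, 1, 4, 0],
    "churned":               [1, 0, 1, 0, 0, 1, 1, 0],  # label: lapsed within 90 days
})

X = customers.drop(columns="churned")
y = customers["churned"]
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Score a new customer whose recent payment behavior suggests churn risk.
new_customer = pd.DataFrame([{
    "orders_last_90d": 1,
    "avg_order_value": 22.0,
    "days_since_last_order": 80,
    "failed_payments": 2,
}])
print("Churn probability:", model.predict_proba(new_customer)[0, 1])
```

In practice, a merchant would train on thousands of customers and far richer features, but the shape of the workflow is the same: payment history in, churn risk score out.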

On Tuesday, anti-fraud tech firm Featurespace introduced TallierLTM, which it describes as the first AI “large transaction model.”

“What OpenAI’s LLMs have done for language, TallierLTM will do for payments,” David Excell, founder of Featurespace, said in a news release.

PYMNTS and Featurespace collaborated on the recent report “The State of Fraud and Financial Crime in the U.S.,” which drew on interviews with 200 executives at financial institutions (FIs) holding at least $5 billion in assets and found that fraud attacks are becoming more commonplace.

The new tool is designed to offer a significant improvement when it comes to differentiating between genuine consumers and bad actors.
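For readers curious what “LLMs for payments” might look like in miniature, the toy sketch below treats a customer’s transaction history as a sequence of discrete tokens and scores how surprising a new transaction is given that sequence. It illustrates the general sequence-modeling idea only; Featurespace has not published TallierLTM’s architecture, and this simple bigram model is far cruder than any production system.

```python
# Toy illustration of sequence modeling over payments: tokenize each transaction,
# learn which tokens tend to follow which for a given customer, and flag new
# transactions the model finds surprising. This is NOT Featurespace's TallierLTM,
# whose architecture is not public; it only sketches the general idea.
from collections import defaultdict
import math

def tokenize(txn):
    # Token = merchant category plus a coarse amount bucket (hypothetical schema).
    bucket = "high" if txn["amount"] > 500 else "low"
    return f'{txn["category"]}|{bucket}'

def train_bigram(tokens):
    # Count how often each token follows each other token in the history.
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def surprise(counts, prev_token, new_token):
    # Negative log-probability of the new token given the previous one,
    # with add-one smoothing; higher means more unusual for this customer.
    row = counts[prev_token]
    total = sum(row.values()) + len(row) + 1
    return -math.log((row[new_token] + 1) / total)

history = [tokenize(t) for t in [
    {"category": "grocery", "amount": 42},
    {"category": "fuel", "amount": 60},
    {"category": "grocery", "amount": 38},
    {"category": "fuel", "amount": 55},
    {"category": "grocery", "amount": 47},
]]
model = train_bigram(history)

routine = {"category": "fuel", "amount": 58}
unusual = {"category": "electronics", "amount": 1800}
print("Routine transaction surprise:", surprise(model, history[-1], tokenize(routine)))
print("Unusual transaction surprise:", surprise(model, history[-1], tokenize(unusual)))
```

A production-scale “large transaction model” would learn from vast volumes of transactions across many customers, but the underlying bet is the one Excell describes: sequences of payments carry learnable structure that can separate genuine consumers from bad actors.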