This Week in AI: Big Tech Comes to Capitol Hill, Arm Goes Public


Another week, another growth spurt for the artificial intelligence (AI) sector.

The U.S. Senate returned from its summer recess primed to tackle AI legislation with gusto (and countless hearings), meaning there were enough fireworks in both the public and private sectors to light up even the darkest sky.

Buttressing the industry’s momentum and the political wheel-spinning were insights from the just-published PYMNTS Intelligence “Consumer Interest in Artificial Intelligence” report, which dug into everyday Americans’ evolving perceptions of AI in their daily lives and work.

With global governments and multinational technology companies standing back-to-back, trying to measure themselves with a pencil, these are the AI stories PYMNTS has been tracking.

Big Tech Comes to Capitol Hill

It was a good week to be a chauffeur in D.C., as top industry executives and leading experts made their way en masse to Washington to speak with top policymakers and help shape legislation to protect the U.S. against AI harms while supporting innovation.

On Tuesday (Sept. 12), the Senate Judiciary Subcommittee on Privacy, Technology, and the Law held a hearing titled “Oversight of AI: Legislating on Artificial Intelligence,” where both Nvidia Chief Scientist and Senior Vice President of Research William Dally and Microsoft Vice Chair and President Brad Smith testified.

“Uncontrollable general AI is science fiction. At the core, AIs are based on models created by humans. We can responsibly create powerful and innovative AI tools,” Dally said. Microsoft’s Smith repeatedly expressed support for creating a licensing agency for “advanced AI in high-risk scenarios.”

Smith later got some pushback from lawmakers, who labeled him a “tech elitist” after he claimed that “AI will replace drive-through workers” and added that drive-through work does not require creativity.

PYMNTS Intelligence finds that as the debate surrounding AI’s role and impact across industries continues to broaden, many consumers are worried about the technology’s role in the workplace — and particularly the safety of their jobs.

And after the Tuesday (Sept. 12) hearing, eight more tech giants joined the White House’s voluntary commitment on safe AI development, bringing the total number of safety- and security-focused signatories to 15.

Read also: US Eyes AI Regulation That Tempers Rules and Innovation

Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI had all previously agreed to guidelines set by the White House in July, and now Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI and Stability AI have joined them.

The move comes as firms look to get ahead of any upcoming AI regulation.

“If you go too fast, you can ruin things,” U.S. Senate Majority Leader Chuck Schumer reportedly told journalists after a closed-door meeting he held Wednesday (Sept. 13), where nearly two dozen tech executives and AI experts, including Meta Platforms CEO Mark Zuckerberg and Tesla and X CEO Elon Musk, gathered in Washington to discuss AI.

The European Union went “too fast,” Schumer added.

Schumer’s big meeting included many of the richest men in the world, and reports after the closed-door session indicated that while there is general consensus on the need for AI legislation, the details remain slim.

Lawmakers already want answers from the AI industry on one question: the working conditions of the data workers responsible for tagging the corpora of information that today’s bleeding-edge large language models (LLMs) are trained on.

“Contrary to the popular notion that AI is entirely machine-based and autonomous, AI systems in fact depend heavily on human labor,” Sen. Edward J. Markey and Rep. Pramila Jayapal wrote in a letter to AI executives. “Despite the essential nature of this work to AI, the working conditions are grueling.”

Private Sector AI Gets Leg Up With Arm IPO

In the private sector, PYMNTS Intelligence finds that financial institutions (FIs) have had to elevate their systems and processes to combat the growth in fraudulent transactions and resulting financial losses, and that they are increasingly tapping AI to help them do so.

Interestingly, rather than outsourcing fraud detection and protection, they are moving to develop solutions in-house.

In one of the biggest pieces of news this week, British chip design company Arm launched its initial public offering (IPO) on Thursday (Sept. 14).

“AI on Arm is literally everywhere … Arm and its ecosystem have boundless opportunities because everything today is a computer, and in the AI era, the world’s computing needs are insatiable,” Arm CEO Rene Haas said in a statement.

Apple’s Low-Key Approach

And in contrast to the drumbeat of buzzy announcements from its peers, tech giant Apple has reportedly been quietly reshaping its core software products with AI behind the scenes for months.

As announced during its September product event, these utility-focused AI applications include making its voice assistant Siri 25% more accurate on Apple Watch, integrating a voice isolation feature for calls, improving camera quality and capturing gesture-based prompts with new precision, among other streamlined workflows.

In news from other firms, Adobe unveiled a series of generative AI tools for its software Wednesday (Sept. 13), while remote assistant service Double debuted an AI tool designed explicitly for executive assistants, and Emburse added an AI-powered receipt scanning engine to its range of travel and expense management solutions.