PYMNTS.com

#pymntsAI

Tech-Enabled Solutions Put the Brakes on Returns Fraud

An integral aspect of the retail cycle, returns have surged in recent years with the growth of online channels. However, accompanying this increase in returns is a concerning trend: the escalation of returns fraud and policy abuse.

According to a report by the National Retail Federation (NRF), retailers encountered various forms of return fraud in the past year. Among these, 44% dealt with returns of shoplifted or stolen goods, while 37% experienced returns involving fraudulent or stolen tender. Another 20% cited return fraud orchestrated by organized retail crime groups.

But it’s not just fraudulent actors exploiting the system. Returns abuse, wherein customers take advantage of lenient return policies or return used, non-defective merchandise — commonly known as wardrobing — further adds to the strain on retailers. 

Richard Kostick, CEO of 100% PURE, highlighted the issue in a recent interview with PYMNTS, noting that a significant portion of U.S. consumers (56%) have “confessed to returning” purchased items “after using them once or twice.” This, coupled with the 25% of consumers who have admitted to making purchases with the intent of returning them after use, has further compounded the financial strain on retailers, he said.

Translating this strain into numbers, return fraud contributed to $101 billion in overall losses in 2023 alone, with retailers projected to lose $13.70 for every $100 in returned merchandise, according to data from the NRF. Moreover, fraudulent activities and abuse accounted for nearly 14% of total returns last year.
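The NRF figures lend themselves to a quick back-of-the-envelope check: a loss rate of $13.70 per $100 returned, applied to total losses of $101 billion, implies the volume of returned merchandise behind those losses. A minimal sketch (the helper function is illustrative; the figures are the NRF data cited above):

```python
# NRF's projected loss rate: $13.70 lost per $100 of returned merchandise.
LOSS_RATE = 13.70 / 100  # dollars lost per dollar of returns


def projected_fraud_loss(returned_merchandise_dollars: float) -> float:
    """Projected fraud/abuse loss for a given volume of returns."""
    return returned_merchandise_dollars * LOSS_RATE


# The $101 billion overall loss figure implies roughly
# $101B / 0.137 in total returned merchandise.
implied_returns = 101e9 / LOSS_RATE
print(f"${implied_returns / 1e9:.0f}B")  # prints "$737B"
```

At that rate, the $101 billion in losses corresponds to roughly $737 billion in returned goods, which gives a sense of the scale retailers are working against.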

However, amid these challenges, retailers are leveraging data-driven and tech-enabled solutions to detect and combat returns fraud and abuse more effectively. 

To start, connected devices offer promising solutions, with technologies like surveillance cameras and monitoring systems aiding in the detection of in-store swaps and prevention of price-switching at checkout. Additionally, advanced technologies can prove vital in analyzing customer behavior, distinguishing between legitimate and fraudulent patterns, and alerting store employees in real-time to potential abuses.

Moreover, artificial intelligence (AI)-driven chatbots deployed for online purchases can help flag customers exhibiting suspicious behavior, deterring fraudulent activities. These technological advancements not only enhance fraud detection but can also support employee training and education, empowering frontline staff to combat return fraud effectively.

When it comes to customers who create new online identities to exploit the returns process, data can play a pivotal role in separating genuine, transacting customers from fraudulent actors, Doriel Abrahams, head of risk, U.S., at Forter, told PYMNTS.

Forter’s platform, for instance, helps uncover user identities across its partner firms, employing advanced analytics and generative AI intelligence to map out these identities and link them to specific behaviors. This comprehensive approach enables retailers to identify and differentiate between trustworthy customers and potential threats effectively. 

“The key is to teach your AI models and the systems to ‘think’ the ways these people think and ask the right questions at the right time,” Abrahams said. 
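Forter has not published its implementation, but the identity-linking idea Abrahams describes, connecting a "new" account back to a known identity through shared signals, can be sketched as a simple union-find over attributes like device fingerprints and payment hashes. All names and signals below are hypothetical, a sketch of the general technique rather than Forter's actual system:

```python
from collections import defaultdict


def cluster_accounts(accounts: dict[str, set[str]]) -> list[set[str]]:
    """Group account IDs that share at least one signal (union-find)."""
    parent = {a: a for a in accounts}

    def find(x):
        # Follow parent pointers to the root, compressing as we go.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Index accounts by signal, then link every pair sharing a signal.
    by_signal = defaultdict(list)
    for acct, signals in accounts.items():
        for s in signals:
            by_signal[s].append(acct)
    for accts in by_signal.values():
        for other in accts[1:]:
            union(accts[0], other)

    # Collect connected components.
    clusters = defaultdict(set)
    for a in accounts:
        clusters[find(a)].add(a)
    return list(clusters.values())


# A "fresh" account sharing a device fingerprint with a known
# serial returner lands in the same identity cluster:
accounts = {
    "acct_1": {"device:abc", "card:111"},
    "acct_2": {"device:abc"},  # new account, same device
    "acct_3": {"card:999"},
}
```

Here `cluster_accounts(accounts)` groups `acct_1` and `acct_2` into one identity while `acct_3` stays separate, so return-history checks can follow the identity rather than the account.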

Meta Earnings Announcement Goes Big on AI

If anyone had any doubts about Meta’s long-term commitment to artificial intelligence (AI), they were erased in the first minute of the company’s earnings call on Wednesday (April 24).

While analysts were waiting to hear what return on investment the technology would bring, CEO Mark Zuckerberg made it clear that AI is on a two-to-three-year development roadmap. AI and the metaverse were front and center well before a single number was mentioned by a Meta executive.

“So, let’s start with AI and the metaverse,” Zuckerberg said, less than a minute into the call. “We’re building a number of different AI services, from our AI assistant to augmented reality apps and glasses, to APIs [application programming interfaces] that help creators engage their communities and that fans can interact with, to business APIs that we think every business eventually on our platform will use.

“AI will help customers buy things and get customer support, it will write internal coding and development APIs for hardware, and a lot more,” he added. 

The only statement from Meta that preceded Zuckerberg’s comments was a press release that detailed better-than-expected earnings and customer engagement metrics, all of which, he said, had been positively impacted by current usage of AI on the company’s various platforms.

AI Investments

The earnings showed that, far from accelerating ROI on its AI efforts, the company will spend $5 billion more than it initially forecast to develop new AI products for consumers, developers, businesses and hardware manufacturers.

Capital expenditures on AI and the metaverse-development division Reality Labs will range between $35 billion and $40 billion by the end of 2024.

Referring to last week’s introduction of the latest version of its AI assistant, Meta AI, which is powered by the latest advances of its large language model (LLM) Meta Llama 3, Zuckerberg said: “I expect that our models are just going to improve further from open source contributions.

“Overall … our teams have achieved another key milestone in showing that we have the talent, data and ability to scale infrastructure to build the world’s leading AI models and services. And this leads me to believe that we should invest significantly more over the coming years to build even more advanced models and the largest scale AI services in the world,” Zuckerberg added.

The Meta AI assistant is free and can be used on Facebook, Instagram, WhatsApp and Messenger, the company said in a Thursday (April 18) press release. It’s also available on a website, meta.ai, for use on computers, according to the release.

As Zuckerberg said on the call, Meta AI is being offered in more countries. Previously available only in the United States, the assistant is being rolled out in English in more than a dozen other countries, the release said.

By the Numbers

The company showed notable growth across various metrics in the first quarter.

Family daily active people (DAP) averaged 3.24 billion, marking a 7% increase compared to the previous year. Total revenue for the period amounted to $36.46 billion, with revenue on a constant currency basis slightly lower at $36.35 billion, both figures reflecting robust growth of 27% year-over-year.

Ad impressions within Meta’s Family of Apps experienced a significant uptick, rising by 20% year-over-year. Concurrently, the average price per ad also saw a healthy 6% increase from the prior year.

Despite this growth, the company managed to control costs and expenses, which totaled $22.64 billion, representing a modest 6% increase year-over-year. Capital expenditures, including principal payments on finance leases, were reported at $6.72 billion. 

CFO Susan Li said during the earnings call that Meta anticipates strong financial performance in the second quarter of 2024, with total revenue projected to fall between $36.5 billion and $39 billion.

Looking at total expenses for the full year of 2024, Meta expects them to range between $96 billion and $99 billion, slightly higher than previously forecasted due to increased infrastructure and legal costs.

The company also foresees significant operating losses for Reality Labs throughout the year, primarily due to ongoing product development efforts and investments aimed at expanding its ecosystem.

While Meta isn’t offering guidance beyond 2024, Li echoed Zuckerberg’s comments that she anticipates continued growth in capital expenditures in the following year to support its aggressive AI research and product development endeavors. 

UK Set to Revise AI Oversight Amid Big Tech Data Boom

The head of the U.K.’s financial regulator announced plans to explore how big tech companies’ access to extensive data might lead to improved financial products and more options for consumers.

The regulatory shift seeks to maximize artificial intelligence’s (AI’s) potential for innovation, competitive pricing and expanded options for consumers and businesses. The move underscores a global trend of examining and potentially harnessing tech companies’ power with new regulations.

“This announcement is interesting in that the U.K. seems to be taking a different approach to innovation than the EU,” Gal Ringel, co-founder and CEO at Mine, a global data privacy management firm, told PYMNTS. “The EU, having just passed the AI Act, regularly goes out of its way to regulate technology before it reaches the market. The U.K. taking the approach of working hand-in-hand with Big Tech to help harness data insights and build better products puts a lot more faith in businesses and the free market.” 

He added, “One approach is not better than the other, as the EU prioritizes end-user safety and privacy and the traditional places like the U.S. have prioritized the end output, but seeing the U.K. start to diverge more from the EU is something to watch as the AI space heats up.”

Call for Action on Data

During a presentation at a Digital Regulation Cooperation Forum event, Nikhil Rathi, who leads the U.K.’s Financial Conduct Authority (FCA) and chairs the forum, explained his main concerns with big technology companies. Rathi said that if the FCA’s analysis shows that tech firms’ data can benefit financial services, the regulator would encourage more data sharing between tech and financial companies.

“The dominance of a handful of firms and further entrenchment of powers will imperil competition and innovation,” Rathi said in the speech. “And, alongside promoting effective competition, the FCA has a primary objective to protect consumers from harm.”

The FCA also released a feedback statement regarding its request for input on data-sharing practices between Big Tech and financial services firms. While Big Tech companies have access to financial data through open banking, they are not obligated to reciprocate by sharing their data with the financial sector.

Ringel noted that the larger the dataset, the more insight you can draw from it and the more reliable the baseline you can build for AI or other products. 

“Those benefits, especially when combined with data collection and scraping practices that do not violate user privacy or safety, can drive innovation that leads to faster and more intuitive technologies on the consumer market,” he added.

Growing Call for Regulations

The decision by U.K. regulators to revisit their approach to AI and data use in Big Tech reflects a broader global trend evident in several recent regulatory actions. For instance, the EU has passed a sweeping AI Act.

In the United States, there has been increased scrutiny under the Biden administration, which has advocated for more rigorous enforcement of antitrust laws, particularly concerning Big Tech’s data practices. The Chinese government has implemented strict data protection laws and has cracked down on the previously unregulated expansion of tech firms like Alibaba and Tencent. 

As PYMNTS previously reported, the U.K. is adopting a distinctly “pro-innovation” stance on AI regulation, diverging from its EU counterparts, who have unanimously agreed on the final text of the EU’s AI Act. The AI Act adopts a risk-based framework for regulating AI applications. Once it is enacted, it will affect every AI company serving the EU market and any users of AI systems within the EU, though it does not extend to EU-based providers serving outside the bloc.

In contrast, the U.K. government prefers an alternative regulatory framework that differentiates AI systems based on their capabilities and the outcomes of AI risks rather than just the risks alone. According to the U.K. government’s response in February to the consultation on AI regulation, the plan is to implement sector-specific regulation guided by five core AI principles rather than enacting specific AI legislation. This approach aims to foster innovation by tailoring regulation more closely to different sectors’ particular needs and risks.

Benoît Koenig, co-founder of Veesion, which makes AI-powered gesture recognition software, told PYMNTS that the EU AI Act is necessary for building trust in AI technologies. 

“For businesses operating within the EU, this will necessitate a greater focus on compliance, particularly for AI applications deemed high risk, which could include areas like surveillance and biometric identification,” he added. “This might increase operational costs and demand more rigorous testing and documentation processes.”

U.S. companies with EU operations or customers must adapt their AI strategies to comply with the forthcoming AI Act, Koenig said. 

“It could also serve as a precursor to similar regulations in the U.S., prompting businesses to proactively adopt more stringent ethical standards for AI development and use,” he added. 

“Overall, while the act presents certain challenges, it also offers an opportunity for businesses to lead in the ethical use of AI, fostering innovation that is not only technologically advanced but also socially responsible and trusted by the public,” he said.

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.

As British Regulators Scrutinize Microsoft and Amazon AI Deals, Industry Braces for Impact

British antitrust authorities are soliciting opinions on the implications of partnerships between tech giants Microsoft and Amazon and smaller generative artificial intelligence (AI) model developers amid growing concerns about competition and innovation in the sector.

This inquiry could reshape the AI industry’s landscape. Some experts warned that a strict antitrust ruling may not only alter how major corporations interact with emerging AI firms but could also dampen enthusiasm for new partnerships, possibly stalling the pace of innovation. 

“A direct ruling that prohibits exclusive partnerships or creates substantial barriers to building direct partnerships between generative AI companies and the major tech companies will likely make capital more difficult to obtain and would therefore slow their growth,” Ryan M. Yonk, a senior research faculty member at think tank The American Institute for Economic Research, told PYMNTS.

“A ruling that limits these partnerships would certainly create greater reluctance to attempt such partnerships and would likely cause a reevaluation of whether new startups will be able to get off the ground without them,” he added. 

Growing Deals for Smaller AI Firms

The U.K.’s Competition and Markets Authority (CMA) is asking for input from stakeholders by May 9 to determine whether the business dealings in question should be classified as mergers. This request for comments is an initial step in the information-gathering phase, which precedes the start of a formal Phase 1 investigation by U.K. regulators. However, according to the CMA, this request does not initiate the formal review. 

“Foundation Models have the potential to fundamentally impact the way we all live and work, including products and services across so many U.K. sectors — healthcare, energy, transport, finance and more,” Joel Bamford, executive director of mergers at the CMA, said in a news release.

“So open, fair and effective competition in Foundation Model markets is critical to making sure the full benefits of this transformation are realised by people and businesses in the U.K., as well as our wider economy where technology has a huge role to play in growth and productivity,” he continued. 

Microsoft has invested 15 million euros ($16 million) in Mistral AI, an emerging French AI company founded by ex-employees of Meta and Google’s DeepMind. As part of the agreement, Mistral, recently valued at 2 billion euros ($2.14 billion), will make its advanced large language models (LLMs) available on Microsoft’s Azure cloud platform. Azure will be the second platform to host Mistral’s LLM technology, following OpenAI.

Meanwhile, Amazon has invested $4 billion in the U.S. AI company Anthropic, known for its LLM chatbot Claude. Amazon has stated it will keep a minority stake in Anthropic and not take a board position.

More Scrutiny 

In recent years, concerns about the market dominance of large technology companies, particularly those heavily invested in AI, have led to increased scrutiny and antitrust rulings.

In 2021, the European Commission investigated whether Google’s use of data for advertising constituted an abuse of its dominant market position. Similarly, the Federal Trade Commission (FTC) filed an antitrust lawsuit against Facebook in the United States, alleging that the company’s acquisitions of Instagram and WhatsApp aimed to eliminate potential competitors. 

Rulings on competition are based on the belief that large companies can unfairly dominate the market, necessitating protection for new competitors, Yonk said. 

“While this sentiment sounds good to the general public, it replaces questions about consumer welfare with questions about company welfare (especially the competing firms’ welfare) with the perverse result of helping competitors while leaving consumers worse off,” he said. “I would expect that a ruling that makes partnerships more difficult or disallows them will further embolden those looking to increase the regulation of competition and do little to make consumers better off.”