It can be difficult these days to remember the almost idyllic promise of social media when it first entered the general consumer consciousness. That’s not to tempt one into nostalgia, or to suggest that social media has become marred beyond recognition. Rather, it’s to remind one of how much has changed when it comes to views and uses of Facebook, Twitter and other such platforms.
That reminder sets up a story recently told via a PYMNTS podcast discussion with Yinglian Xie, CEO and co-founder at DataVisor, a company that uses artificial intelligence (AI) to defend against online fraud and other such digital attacks.
Like it or not, social media has evolved from a novelty into an intimate part of daily life — with impacts on business, politics and culture that can hardly be overstated. Social media isn’t going anywhere (people who try to quit tend either to fail or to return after a while), and that means it is becoming an increasingly juicy target for fraudsters and other bad actors bent on crime and other misdeeds (such as so-called “fake news” and often-harmful political propaganda).
During that podcast, Xie told Karen Webster a story of how to best protect social media from those criminals — and how to best defend its value in the world of commerce and payments. This effort is probably among the toughest in the digital world, given the global scale of social media, the ease with which bad actors can create fake profiles and conduct other harmful deeds, and the reputational and financial stakes involved if something goes wrong and those criminals make it past fraud defenses.
Whatever innocence social media had is long gone, and it is becoming routine — even cliché — to hear barbs about putting the Facebook and Twitter genies back in the bottle. However, that doesn’t mean social media isn’t worth defending, of course.
Much of the discussion between Xie and Webster was anchored to data points from fresh PYMNTS research into fraud and AI-enabled fraud prevention. For instance, attacks on social media — some of them centered on payments and financial crime — increased 43 percent over the past year, and total losses due to fraud topped $4.2 trillion in 2018.
Those increases don’t indicate that social media had, until recently, been immune to fraud attacks, Xie said. Rather, the scale of attacks has changed, and criminals have switched tactics and strategies as they become more sophisticated about social media and new technologies. Indeed, according to Xie, in the earlier days of social media, spam was the bigger force behind those attacks. Today, the worries span areas such as fake news and fake profiles, which can enable fraudulent credit applications and other financial harm.
“These are not lonely hackers,” she said. “These are professional attacks with strong financial and political incentives.”
The changing nature of those attacks on social media (to say nothing of attacks on other digital properties) requires new methods of defense, Xie noted — in general, less manual monitoring and review, and more reliance on machine learning and AI.
As PYMNTS readers likely know, AI still exists more as a dream than a reality, at least when it comes to the technology’s use by financial institutions.
Role Of AI
There still exists a good deal of confusion about what constitutes true AI (simply put, unsupervised machine learning), with many people in payments seeming to think that supervised machine learning is indeed true AI. Among the obstacles to moving to full AI systems, according to Xie, are the costs involved, the data and other infrastructure needed to support a strong AI system, and the challenge of proving that AI can provide real-time fraud protection (along with customer experience benefits). That is not always an easy sell, at least in 2019, given all the investments in legacy systems.
Yet, AI (along with machine learning) does play a vital role in stopping fraud and other attacks on social media and elsewhere, as Xie told it during the PYMNTS podcast. In general, the main advantage AI offers — assuming the system is properly configured and deployed — is its ability to look deep into consumer and payments data to find patterns of fraud by examining information across users, instead of examining one user at a time.
That can not only provide an early warning for banks, social media platforms, retailers and others, but also cut down the number of false positives — those instances of wrongly flagged fraud, after all, just add friction to the online consumer experience.
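To make the cross-user idea concrete, here is a minimal, hypothetical sketch of one such signal: flagging clusters of accounts that share signup infrastructure and were created in lockstep. This is an illustration of the general technique described above, not DataVisor’s actual system; the field names, thresholds and data are all invented for the example.

```python
from collections import defaultdict

# Hypothetical signup records: (user_id, signup_ip, minutes_since_midnight).
# Scored one at a time, each account looks normal; viewed across users,
# four accounts sharing one IP within a few minutes stand out as coordinated.
signups = [
    ("u1", "203.0.113.5", 10),
    ("u2", "203.0.113.5", 11),
    ("u3", "203.0.113.5", 11),
    ("u4", "203.0.113.5", 12),
    ("u5", "198.51.100.9", 300),
    ("u6", "192.0.2.44", 870),
]

def flag_coordinated_accounts(records, min_cluster=3, window=5):
    """Flag accounts whose signup IP is shared by at least `min_cluster`
    accounts created within a `window`-minute span."""
    by_ip = defaultdict(list)
    for user, ip, minute in records:
        by_ip[ip].append((minute, user))

    flagged = set()
    for ip, entries in by_ip.items():
        entries.sort()  # order by signup time
        times = [minute for minute, _ in entries]
        if len(entries) >= min_cluster and times[-1] - times[0] <= window:
            flagged.update(user for _, user in entries)
    return flagged

print(sorted(flag_coordinated_accounts(signups)))  # ['u1', 'u2', 'u3', 'u4']
```

A production system would of course use far richer features (devices, behavior sequences, payment instruments) and unsupervised learning rather than a fixed rule, but the core insight is the same: coordination is only visible when accounts are examined together.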
“Attackers are constantly evolving,” Xie said, which means they are always coming up with new methods — criminals are committed to innovation, too. AI holds the promise of detecting those new types of attacks well before any human would.
The price of development — whether for a business, technology or person — is increased contact with bad actors and influences. When it comes to social media and other digital operations (all of which continue to take on bigger roles in our lives), the best remedy is to continue building better defensive technologies, and to never take anything for granted.