Socure CEO: Facebook’s Big Data Fix

It is easy to overlook the incredible power of Facebook, Sunil Madhu, CEO of Socure, told Karen Webster. But with two billion users – a billion or so of whom check in with the social network once a day – the reality, he said, is that what happens on Facebook can have massive ripple effects throughout the rest of society.

“Microsoft did a study a few years ago,” he noted, “where they found that by looking at ‘likes’ alone – what comments and things people liked – they could find out all kinds of things about them. Race, gender, political affiliation, education level, how much money they made – they could know all of this just by looking at their likes alone, with 94 percent accuracy.”

That kind of deep reach into their user group is great news for Facebook, and is why they are second only to Google when it comes to capturing digital advertising revenue (and also why both are light years ahead of everyone else on the web).

But it also means Facebook has a much “higher burden than most companies to treat privacy with a much greater degree of caution and care than most other platforms,” Madhu said.

It is Facebook’s failure to meet that burden, Madhu noted, that has set the stage for the current crisis. How well it lives up to that burden in the near future will have a significant effect on how well its business survives and thrives going forward.

With Great Power Comes Great Responsibility

As anyone following the headlines already knows, Facebook has had some difficulty managing that great power over the last few weeks.

Those difficulties sprang from three revelations and one action.

The first was that a political consulting firm called Cambridge Analytica used a personality testing app to vacuum up data from around 50 million of Facebook’s users, causing a massive stir. Cambridge Analytica is a subsidiary of a U.K. firm that uses data to target information at a country’s citizens in order to influence their thinking on political issues.

That combined with the second “revelation” – the speculation that said data was leveraged to build the models Donald Trump’s presidential campaign used to stage an upset in the 2016 election.

And following all of that were the revelations in the Mueller indictment that Russian trolls used fake IDs to create Facebook accounts and influence the outcome of the election.

The icing on the cake was that it took Mark Zuckerberg five days to say anything – at all.

High-profile people called for users to delete their accounts, advertisers (and some investors) withdrew, and Washington found a rare moment of bipartisan agreement that something needs to be done to rein in a platform that connects billions and wields tremendous power as a social influencer.

And while Madhu said some changes are inevitable, it is important that people understand the details of what happened, and keep some perspective on what was “right” and what was “wrong.”

What Facebook Didn’t Do Wrong

There are many reasons to believe that Cambridge Analytica was a bad faith actor in a lot of regards, Madhu noted.

First of all, Facebook maintains that Cambridge Analytica wasn’t honest about the purpose of the app or the data it gathered. Cambridge Analytica disputes that, saying it was always upfront about its intentions for how the data would be sold and used.

But even if that is true, Madhu noted, there is significant reason to doubt their story about how they ended up with 50 million users’ worth of data. Cambridge Analytica officially claims that about 200,000 people took the personality test directly, thus giving the app access to all of their own data, and the data of everyone on their friends lists. Because of all that friend data, the firm claims, they were able to build profiles on 50 million Americans.

But Madhu explained that isn’t really how it works – opening up one’s friends list to a third-party app doesn’t actually grant the app makers unlimited access to all of that data. Generally, the network decides how much of that friend data it will share based on what is being requested and on which of those friends are using the same app.
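Madhu’s point about scoped friend data can be sketched in a few lines. The function and data structures below are purely illustrative – not Facebook’s actual API or logic – but they capture the idea that a third-party app typically sees only the friends who have themselves authorized that same app:

```python
# Illustrative sketch, not Facebook's real permission model: an app's view of a
# user's friends list is filtered down to friends who also installed the app.

def visible_friends(user, app_id, friends, app_installs):
    """Return only the friends of `user` who have themselves installed `app_id`."""
    return [f for f in friends.get(user, []) if app_id in app_installs.get(f, set())]

# Toy social graph: alice authorized the quiz app; of her friends, only carol did.
friends = {"alice": ["bob", "carol", "dave"]}
app_installs = {
    "alice": {"quiz_app"},
    "carol": {"quiz_app"},   # carol also uses the app
    "bob": set(),            # bob never authorized it
}

print(visible_friends("alice", "quiz_app", friends, app_installs))  # ['carol']
```

Under a model like this, 200,000 test takers could never yield full profiles on 50 million people – which is why Madhu finds the firm’s official account implausible.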

Given that, he noted, it is extremely unlikely that Cambridge Analytica was able to generate 50 million user profiles from 200,000 voluntary personality test takers. It’s much more likely that they engaged in an extensive bot-based data scraping operation to get a lot of that information from Facebook – a practice that is already against Facebook’s rules, and something they work very hard to prevent.

And, Webster said, someone should have noticed.

Not necessarily so, Madhu said. Scraping has gotten smarter, and the bots that do it have gotten a lot better at imitating human behavior – to the point that sometimes, he said, it’s extremely difficult for networks to protect themselves against it.

Moreover, Madhu noted, because of the various dramatic details embedded in the Cambridge Analytica story, it can be easy to overlook the fact that it is no secret that Facebook monetizes its data and operates as an open authentication provider. Both facts have been true and well known about Facebook for over a decade.

In its terms of service, Facebook makes it very clear they are only the custodians, not the owners, of the data – though they can use it however they want without royalty.

“And if you really consider those first things in the terms of service, from their perspective, if a user consents to their data being shared with an application that relies on Facebook login, who is Facebook to come in and tell them no?”

This has been the normal operating modality at Facebook for a very long time, and is how they make money with advertising. It also allows them to work with security providers like Socure, and credit underwriting apps that use social media performance as part of evaluating risk.

And Facebook as an open authentication provider – in benign contexts – is also something consumers like, want and use as a friction reducer.
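Open authentication of the kind described here usually means an OAuth 2.0 “Log in with Facebook”-style flow: the app sends the user to the provider’s authorization page, and the user consents to sharing specific data. As a rough sketch – the endpoint and app values below are illustrative, though the parameter names are standard OAuth 2.0 – the app builds an authorization URL like this:

```python
from urllib.parse import urlencode

def build_authorization_url(auth_endpoint, client_id, redirect_uri, scope, state):
    """Build a standard OAuth 2.0 authorization-code request URL."""
    params = {
        "client_id": client_id,        # identifies the third-party app
        "redirect_uri": redirect_uri,  # where the provider sends the user back
        "response_type": "code",       # authorization-code grant
        "scope": scope,                # the data the app is asking to see
        "state": state,                # CSRF token the app verifies on callback
    }
    return f"{auth_endpoint}?{urlencode(params)}"

# Illustrative values only; a real integration uses the app's registered settings.
url = build_authorization_url(
    "https://www.facebook.com/dialog/oauth",
    client_id="123456",
    redirect_uri="https://example-app.com/callback",
    scope="public_profile,email",
    state="opaque-csrf-token",
)
print(url)
```

The `scope` parameter is the consent mechanism at issue in the whole controversy: it enumerates exactly what the user is agreeing to share with the app.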

“There is a lot of easy Facebook bashing going on – but in a lot of ways, they didn’t do anything wrong,” Madhu pointed out.

Which isn’t to say there haven’t been many, many things Facebook could have done better, or that there aren’t many, many things they need to do better going forward.

Facebook Faced the Music Too Slowly

The first step in every 12-step program is always the same: Admit that you have a problem.

Facebook had a really hard time with that first step, Madhu noted, because they fundamentally underestimated the significance of the event going on around them.

First up, he said, early (false) reporting indicated that Facebook had been breached or hacked, which stirred up unnecessary panic. Once that quieted down, it became clear that this was a permitted data-sharing arrangement with an app Facebook knew was gathering the data, via practices the site has since disallowed – and that Facebook had simply failed to respond quickly enough to the anger it provoked.

“It took them way too long to come up with a right response, and it gave the strong impression they were trying to ignore it, hoping it would go away. One of the problems with being as large as Facebook, though, is that they have a huge impact and so they get lots of scrutiny.”

And, Webster chimed in, this has been the year for scrutinizing Facebook, as the latest controversy has been the capstone on almost a year of increasingly disturbing Facebook-related headlines. There was also the scandal of its live streaming services being used to broadcast murder, violence, terrorism and animal cruelty; or the fact that it has become the planet’s leading repository of “fake news”; or the fact that it is in the process of becoming one of the internet’s premier destinations for “revenge porn.”

Users don’t like it, investors don’t like it, advertisers don’t like it – and regulators are beginning to wonder if Facebook has turned one too many blind eyes to these situations. After all, monetizing eyeballs – even the bad ones – is how the site makes money.

“[Facebook] failed to get out ahead of this, or to recognize that they have a higher burden than most companies to treat privacy with a greater degree of concern," said Madhu.

What’s Next

At this point, at the end of the current investigation into all that went wrong with Cambridge Analytica, there is almost no doubt that new regulations are coming. In fact, Zuckerberg said that he welcomes regulation – as long as it is the “right type” of regulation.

In this instance, it may not be for him to decide right from wrong. And that’s where there is a real potential downside for everyone, including the user.

How “right” the regulation is, Madhu noted, will come down to how well lawmakers understand what open authentication is, and how it works. Zuckerberg will have a chance to explain all of that when he appears before Congress on Capitol Hill.

“I think if anything, they are probably going to be required to step up their default authentication controls,” Madhu told Webster.

Because, as it turns out, Facebook already gives its users the capability to safeguard their data and privacy pretty tightly, if they so choose. Madhu told Webster that as a security professional, he has his Facebook secured “like my mobile banking app,” so that it sends him all kinds of notifications whenever his account is used in any new or non-standard ways.
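The kind of notification Madhu describes can be approximated by a simple novelty check: alert on any login from a device or rough location the account has never used before. The field names below are hypothetical, a sketch of the idea rather than any real Facebook setting:

```python
# Hedged sketch of "notify on non-standard use": flag a login when the device
# or country has never been seen on this account. Field names are illustrative.

def should_notify(login, known_devices, known_countries):
    """Alert when either the device or the country is new to this account."""
    return (login["device"] not in known_devices
            or login["country"] not in known_countries)

known_devices = {"trusted-phone"}
known_countries = {"US"}

# A familiar device from a familiar country triggers nothing...
print(should_notify({"device": "trusted-phone", "country": "US"},
                    known_devices, known_countries))   # False
# ...but an unrecognized device does.
print(should_notify({"device": "unknown-android", "country": "US"},
                    known_devices, known_countries))   # True
```

The point of the example is that the capability already exists; the open question is whether checks like these stay opt-in or become the default.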

But those security settings are “off” by default – and he strongly suspects that may not be the case for much longer. “Facebook may have to start turning on some of those protections by default,” he said.

Depending on how high those default security walls are, that could have a devastating effect on Facebook’s monetization strategy – not to mention fundamentally undercutting the point of the platform: social sharing.

“The social network is all about sharing,” said Madhu. “It was built to share information like the internet. And it has grown like the internet.”

It’s a tough problem, he noted – and one that will have an impact for firms bigger than Facebook: Amazon, Apple and Google could all be affected by any changes regulators make on how large platforms share data with third parties.

We’re already seeing that play out. Last week, Facebook denied data access to some third-party ad brokers, such as Acxiom, WPP and Experian. It’s also pushed up some privacy control updates and postponed a big hardware reveal in response to the scandal.

As Madhu noted, the problems Facebook faces can no longer be ignored – and its actions going forward will certainly be less about acting first and asking forgiveness rather than permission.

The degree to which that permission will be granted by lawmakers and regulators – and under what circumstances – will be interesting to watch, as Facebook now throws its weight behind regulation in an effort to mute the calls to do something much worse: break it up.
