Facebook: Too Big To Govern?

In December of 2008, Lindsay Ronson tried to log into her Facebook account with a new password — one she had recently created after experiencing, she said, numerous attempts to hack her account. She was presented with a message telling her that her account had been disabled.

Confused, she followed the link provided to Facebook’s FAQ page for more information. There she eventually discovered — after many more clicks — that her account was disabled for violating Facebook’s terms of service by using a fake name.

Lindsay Ronson, aka Lindsay Lohan, took to Myspace to air her complaint.

There, she said she used a “fake” name on Facebook because too many others had claimed her real name and created fake Lindsay Lohan profiles with it. Using the last name of her partner at the time, Samantha Ronson, was the only way Lohan felt she could share real Lindsay Lohan updates with her true Facebook friends.

Her account was eventually restored by Facebook, where, The LA Times reported, it coexisted at the time with 14 other accounts bearing similar names and pictures of Lohan.

 

The Facebook Fakes

Tomorrow (October 31, 2017), Facebook executives will make their third trip to Capitol Hill to answer more questions about the use of its platform by the Russians to influence the outcome of the 2016 U.S. presidential election. The topic of discussion is the ability of Russian operatives to establish more than 470 fake profiles and spend $100,000 buying 3,000 ads promoting fake news, reportedly over the course of nearly two years: June 2015 through May 2017.

Facebook acknowledged in a post by its chief security officer that such ads were present on its platform and focused on “divisive” social issues, including gun control, immigration, gay rights and race, rather than advocating for candidates by name.

Citing a March 2017 U.S. intelligence report, Time provided insight into some of the tactics Russian operatives used to spread fake news to Facebook users, executing plans that, these intelligence officers said, were five years in the making.

For instance, they reported that a male Russian soldier established a fake identity as a 42-year-old American housewife to build a following and then spread shareable messages about political issues at the center of the U.S. election. The Wall Street Journal reported yesterday that the six of those 470 fake accounts that Facebook has disclosed so far, this housewife persona among them, were shared a total of 340 million times. Collectively, the 470 profiles distributed viral content that reached an enormous audience over that time frame.

In September of 2017, Facebook announced it would overhaul its policies for disclosing the sponsorship of election ads and add 1,000 more human reviewers to its ad review process.

For Facebook, the question now is whether all of this is too little, too late.

And whether it’s even a solution to the right problem, now that Facebook has become the unregulated platform from which 45 percent of its U.S. users say they get their news and information. With two-thirds of Americans on Facebook, that works out to some 90 million people.

But I’m not writing today to debate the political or social or moral issues of fake news impacting our election.

Instead, I’m writing about the important lessons in platform governance that Facebook’s fake news issue raises.

And why it’s critical to manage the inevitable tensions among how platforms make money, the expectations of their investors, the interests of their stakeholders and, ultimately, the trust those stakeholders place in the platform, before the government tells you how it sees it all working out.

 

Walking the Platform Governance Straight and Narrow

Facebook launched in 2004 as an open platform for the free exchange of information among its users that, soon, would be monetized by advertising to those users.

From day one, Facebook wanted to establish itself as a trusted, safe and credible place for friends to visit and feel comfortable sharing updates and pictures.

And, one suspects — given what was going on at Myspace at the time — to be regarded by its advertisers as a trusted, safe and credible place to advertise their brands.

Facebook built a number of safeguards to instill that trust.

The social media platform required an authentic identity to create a user profile, minimizing the fake accounts that were then pervasive on its competitor’s site, Myspace. It also gave users the ability to decide who could become part of their individual social networks and post updates that would appear in their News Feeds.

Facebook also created a strict process for keeping pornography and other inappropriate content off the site. Users could flag content they perceived as unsavory; Facebook moderators monitored the site for violations and blocked such content. Users could also block other users whose content they no longer wanted to see, or even “unfriend” someone entirely.
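To make that workflow concrete, here is a minimal sketch, in Python, of a flag-and-review pipeline of the kind just described. Everything in it (the Post type, the ModerationQueue, the three-flag threshold) is invented for illustration; Facebook’s actual moderation tooling is not public.

```python
from collections import deque
from dataclasses import dataclass

# Illustrative model only: these types and thresholds are invented for
# this sketch, not drawn from Facebook's actual moderation systems.

@dataclass
class Post:
    post_id: int
    author: str
    text: str
    flags: int = 0
    blocked: bool = False

class ModerationQueue:
    """User flags feed a queue; human moderators review and block violations."""

    def __init__(self, flag_threshold: int = 3):
        self.flag_threshold = flag_threshold
        self.pending: deque = deque()

    def flag(self, post: Post) -> None:
        # Each user report increments the flag count; once a post crosses
        # the threshold, it is queued for human review.
        post.flags += 1
        if post.flags >= self.flag_threshold and post not in self.pending:
            self.pending.append(post)

    def review(self, violates_policy) -> list:
        # A moderator works through the queue and blocks confirmed violations.
        blocked = []
        while self.pending:
            post = self.pending.popleft()
            if violates_policy(post):
                post.blocked = True
                blocked.append(post)
        return blocked

# Usage: three user reports queue the post; a moderator then blocks it.
queue = ModerationQueue()
post = Post(post_id=1, author="spammer", text="inappropriate content")
for _ in range(3):
    queue.flag(post)
queue.review(lambda p: "inappropriate" in p.text)  # post.blocked is now True
```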

But as an advertising platform wrapped around a social network, the Facebook News Feed soon became populated with a mix of ads targeted to users based on their interests and demographics. The more users interacted with those ads, which increasingly took the form of sponsored content and video, the more of them, and others like them, users saw.
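As a toy illustration of that feedback loop, here is a naive engagement-weighted ad selector in Python. It is invented for illustration and is in no way Facebook’s actual ranking algorithm; it simply shows how quickly interaction skews what gets shown.

```python
import random

# A naive engagement-weighted ad selector, invented to illustrate the
# feedback loop described above; not Facebook's actual ranking algorithm.

class AdSelector:
    def __init__(self, ads):
        # Every ad starts with the same baseline weight.
        self.weights = {ad: 1.0 for ad in ads}

    def pick(self) -> str:
        # Sample ads in proportion to accumulated engagement, so past
        # clicks make future impressions of the same ad more likely.
        ads = list(self.weights)
        return random.choices(ads, weights=[self.weights[a] for a in ads])[0]

    def record_click(self, ad: str) -> None:
        # Each interaction boosts the ad's weight: the feedback loop.
        self.weights[ad] *= 1.5

selector = AdSelector(["gun control ad", "immigration ad", "sports ad"])
for _ in range(100):
    shown = selector.pick()
    if shown == "immigration ad":  # simulate a user who engages with one topic
        selector.record_click(shown)

# After the loop, "immigration ad" dominates what this user is shown next.
```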

Facebook’s successful shift to mobile made the News Feed the front and center of the Facebook user experience, one its 1.3 billion daily active users now spend an average of 35 minutes a day interacting with. That combination of mobile plus ads and content in the News Feed has driven the value of the Facebook platform and its mobile ad revenue to record heights.

Last quarter, Facebook reported that mobile ad revenue drove 87 percent of its total ad revenue, up from 84 percent a year earlier. A Needham & Company analyst described Facebook and its Instagram, WhatsApp and Messenger properties as “mobile advertising monopolies,” while others, pointing to its 2 billion worldwide users and $516 billion market cap, call it a “remarkable success story.” The day following its Q2 earnings report, Facebook’s stock was trading 7 percent higher at the market’s opening bell.

Facebook also acknowledged on its Q2 earnings call that the News Feed, as the source of its ad revenue, is approaching the limit of available and acceptable inventory for ad-sponsored content, which has accelerated its efforts to monetize WhatsApp and Messenger in new ways.

But getting those platforms to parity with the mega giant that is the Facebook News Feed is a long haul. For now and for the foreseeable future, it’s Facebook’s News Feed that will drive the bulk of how Facebook makes its money.

 

Hard Times at the Facebook News Feed

Fake accounts on Facebook are nothing new.

In an SEC filing made in August of 2012, four months after its May 2012 IPO, Facebook reported that nearly 9 percent of Facebook profiles (83 million of the 955 million active users) were fake, defined as inappropriate, duplicate or misclassified accounts. Investor concerns that fake accounts could overstate its advertising reach, and thus its ad revenue potential, pushed the stock price below $20 at the time, down more than 50 percent from its IPO price of $38 just a few months before. Facebook deleted those 83 million accounts.

Five years later, after evidence of U.S. election tampering via fake accounts, Facebook said it removed “tens of thousands” of fake accounts prior to the German election and 30,000 fake accounts before the French election, as well.

Spoofing links and altering headlines aren’t new, either.

In September of 2011, an article appeared on Search Engine Watch that laid out step-by-step instructions showing how easy it was to create fake news on Facebook using real news sources and logos. The examples in the article were silly, harmless headlines intended to get a laugh, news like “so-and-so from this or that company has just been knighted for being cool,” but appearing as if that “news” had come from a legitimate news outlet.
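The mechanics behind the trick are worth spelling out. A shared link’s preview card is built from the page’s Open Graph meta tags; the sketch below, which assumes the third-party requests and BeautifulSoup libraries and a hypothetical example URL, shows how a crawler might derive those preview fields. The og:* property names are the real Open Graph standard, but Facebook’s actual crawler logic is more involved.

```python
from typing import Optional

import requests
from bs4 import BeautifulSoup

def build_preview(url: str) -> dict:
    """Derive a link-preview card from a page's Open Graph meta tags.

    Illustrative sketch only: the og:* properties are the standard
    Open Graph vocabulary, not Facebook's internal crawler code.
    """
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    def og(prop: str) -> Optional[str]:
        tag = soup.find("meta", property=f"og:{prop}")
        return tag["content"] if tag and tag.has_attr("content") else None

    return {
        "headline": og("title"),    # the text users actually see
        "source": og("site_name"),  # e.g. a trusted outlet's name
        "image": og("image"),
        "url": url,
    }

# The weakness the 2011 article exploited: before Facebook's July 2017
# change (described below), the sharer could overwrite the derived fields
# at post time, so the displayed headline no longer had to match the page
# it linked to.
preview = build_preview("https://example.com/article")  # hypothetical URL
preview["headline"] = "So-and-so has just been knighted for being cool"
```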

It was the same technique used four years later to create some of the election-oriented fake news stories.

In July of 2017, Facebook eliminated the link-preview editing capability that made this headline spoofing possible for anyone but accredited publishers.

In mid-October, Facebook COO Sheryl Sandberg promised lawmakers that Facebook would fully cooperate with the Congressional investigation of alleged Russian involvement in the U.S. election. Following her testimony, Sandberg also emphasized the importance of preserving Facebook as a “free and open platform,” saying that “the responsibility of an open platform is to let people express themselves” and adding that [Facebook] does not “check the information that people put on Facebook before they run it.”

Yet it does.

It’s still not possible to post pornography on Facebook. Remember the outrage over the removal of the iconic Vietnam War “napalm girl” photo in September of 2016? Facebook reconsidered and reinstated the photo after its users, and the community at large, urged it to let the world see and share an important moment in history.

What’s amazing is that a site that has been so successful at stamping out nudity has failed to deal with so many other ways in which its platform gets sullied.

Last week, in what may have been an attempt to preempt regulation, Facebook CEO Mark Zuckerberg outlined steps that Facebook will take to create “more transparency” around the political ads that run on Facebook, including labeling them as political ads and requiring that the identity of the sponsor be disclosed, much as is required today for politically driven TV and radio ads.

It’s not clear whether lawmakers will be satisfied with those voluntary concessions. Many still want to know why it took so long for Facebook to identify a problem that went undetected for nearly two years, and why action was taken only when there was mounting and nearly indisputable evidence that its platform was used to circulate fake news created by the Russians.

The Honest Ads Act now circulating through Congress has bipartisan support and would force more stringent disclosures for political ads running on any social network. But, as it now stands, it does nothing to address content that isn’t an obvious political ad: sensational content from real people, or divisive content from fake accounts like the Russian soldier posing as a housewife, with a big social following amassed over time and messages that go viral.

Remember, the intelligence reports suggest that the election tampering was a plan five years in the making.

Here and around the world, lawmakers are closing in, asking questions about the unregulated nature of a platform as big as Facebook, one that has the attention of 1.32 billion people every day and where, in the blink of an eye, a post can be shared, unfettered, and become impossible to pull back before it has reached millions and millions of people.

And Facebook has proven that where there is a will to eliminate offensive and questionable content, there has always been a way to make it happen.

One wonders whether, in the absence of public and political pressure, including the pressure to regulate the living daylights out of it around the world, Facebook would be as quick to fix the platform governance issues it seems to have overlooked on its way to scale.

It’s a lesson in platform governance that its once-competing social network, Myspace, learned the hard way.

Myspace was also established as an open platform for the free expression of musicians and artists. But it became a place for sexual predators and pedophiles, where inappropriate, often pornographic, content flourished. The ability to use fake identities made it easy for people to masquerade as someone else on the site. Myspace, under pressure to drive ad revenue, let almost anything go, and the site devolved into what some called a “digital ghetto.” Users, increasingly soured by the off-putting content, no longer trusted Myspace and left in droves, and parents refused to give their kids access. No eyeballs meant no ad revenue; no ad revenue meant no viable platform.

What was the biggest social network in the world in 2006 sold in 2011 for $35 million.

The squeeze that state attorneys general put on Myspace in the mid-2000s to get its act together and monitor the site for child predators and other bad actors isn’t what shut it down. What did was a big competitor named Facebook and the loss of its users’ trust, driven by Myspace’s failure to put their interests first.

Today, Facebook’s problem seems not too dissimilar from the “too big to fail” dance that banks now enact with regulators — too big with 2 billion users worldwide (and growing) and too complicated now to even begin to know how to regulate.

So, it will be a race to see whether Facebook can step back and get its platform governance house in order before governments around the world do it for it. Doing that will come at a cost for Facebook, but so will regulation that changes the nature of what it was founded to do and interferes with all of the innovation that it and platforms like it are positioned to create.