Election day in the U.S. is tomorrow. There is no shortage of issues that will be top of mind as Americans head to the polls, from healthcare to taxes to immigration. Add security to the list, of course, as in the security of the election process itself.
Anyone who scans the headlines, recent and not-so-recent, remembers the specter of Russian influence over the elections two years ago and, depending on where one gets their news, this election, too. We may not know for quite a while what impact, if any, that influence has on Election Day. However, the notion of transparency is key in the age of social media, and transparency is lacking.
That sentiment was brought home recently when two senators sent an “open letter” to Facebook’s Mark Zuckerberg, demanding that he fix the firm’s new tools geared toward political ads. As had been reported, the social media giant debuted new rules in May that require the people or organizations that buy those ads to disclose who paid for them.
They are relatively new features in an arsenal aimed at bad actors, yes, but the tools can be easily manipulated. Vice News said a few days ago that it was able to impersonate 100 senators seeking to buy ads … and Facebook gave the go-ahead for each of them.
The letter from Senators Mark Warner of Virginia and Amy Klobuchar of Minnesota, issued in late October, stated that “the fact that Facebook’s new security tools allow users to intentionally misidentify who placed political ads is unacceptable. That Facebook is unable to recognize ads connected to a well-established foreign interference operation is also deeply troubling.” There’s a “central vulnerability” extant, they said, that comes as Facebook fails to use human reviewers to catch that fraud.
The vulnerabilities, and the way to address them, were front and center in an interview between Karen Webster and Philipp Pointner, chief product officer at Jumio.
When it comes to political ads in particular, there is a disconnect between what Facebook is doing and the ability to effectively police the platform. After all, simply labeling a political ad as bought by a certain person or entity does not mean that the ad, the buyer or both cannot be fake.
Pointner said, “Over the past few years, [when] it comes to political influence, social media has begun to play a much larger role than people originally expected. And I think that lawmakers were surprised by the impact that social media can have.”
Where The Process Breaks Down
The typical algorithm-driven models that have been in place at Facebook have been around for a while, Pointner said, and are evidently less than effective when it comes to spotting, say, hate speech or the aforementioned misleading ad buys. Yet there is a conundrum: manual oversight cannot hope to manage the sheer volume of postings that go up on a platform like Facebook.
The company, he said, “deserves credits and points for trying … as the political division [seen today] is something for which they are not to blame. But the company is being used to … create that division.”
Facebook clearly needs to change its tactics, Pointner told Webster. One idea for an “authorization system”: senators could flag the accounts permitted to post on their behalf. Winnowing down who may be legitimately tied to campaigns, and who may not, could thus have some merit. Overall, however, vetting ads, especially manually, before they go live on the Facebook platform is no easy task.
“It’s an arms race,” he said, where the bad guys are going to constantly strive to get ahead of any new systems or tactics deployed by Facebook and other social media firms. Fraudsters will become smarter and better when it comes to gaining access to the system and passing any number of tests designed to thwart misleading ads.
When asked what advice Pointner might offer to Facebook COO Sheryl Sandberg, he said, “The solution of this issue has to become an objective for the entire company.” If not addressed effectively, Facebook could lose the trust of voters, lawmakers and users in general.
Using Account Openings As A Model
Pointner explained that the process of setting up a Facebook account and submitting an ad should mirror opening an account at a financial institution (FI). In that setting, individuals and firms must show proof of identity, he said, “among the first steps” in tying ads back to legitimate buyers. Think of it as an onboarding process with verification done through digital means, where, for example, the company could ask for copies of driver’s licenses when would-be ad buyers seek to use the “Paid for by” feature. Of Facebook, he added, “they should be serious about checking those IDs and making sure that people are who they say they are in the real world.”
He also noted that, at Facebook, joint manual efforts and algorithms should be able to figure out if a poster is allowed to post such an ad on behalf of a politician or political entity. Here, a bit of friction, tied to a wait period while digital verification takes place, is no real problem, according to Pointner.
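The two-part check Pointner describes, verify the buyer’s identity, then confirm the buyer is authorized by the campaign, can be sketched as a simple gate before an ad goes live. This is purely illustrative; every name below (`AdBuyer`, `Campaign`, the document check) is a hypothetical stand-in, not Facebook’s actual system.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch of the flow described above: an ad buyer
# submits proof of identity, waits while verification runs, and may
# use "Paid for by" only if (a) the ID checks out and (b) the named
# campaign has authorized that buyer to post on its behalf.

@dataclass
class AdBuyer:
    name: str
    id_document: Optional[str] = None  # e.g., a driver's license scan
    verified: bool = False

@dataclass
class Campaign:
    candidate: str
    authorized_buyers: set = field(default_factory=set)

def verify_identity(buyer: AdBuyer) -> bool:
    # Placeholder for a real document review (software plus a human
    # reviewer); here we only check that a document was submitted.
    buyer.verified = buyer.id_document is not None
    return buyer.verified

def may_run_political_ad(buyer: AdBuyer, campaign: Campaign) -> bool:
    # Both gates must pass before the ad goes live; the wait this
    # introduces is the "bit of friction" deemed acceptable for
    # low-volume political ads.
    return verify_identity(buyer) and buyer.name in campaign.authorized_buyers

campaign = Campaign("Sen. Example", authorized_buyers={"Official PAC"})
impostor = AdBuyer("Anonymous Shell Co.", id_document="some-scan.png")
legit = AdBuyer("Official PAC", id_document="license-scan.png")

print(may_run_political_ad(impostor, campaign))  # False: not authorized
print(may_run_political_ad(legit, campaign))     # True
```

The point of the sketch is that an identity check alone is not enough; the authorization step is what ties the ad back to the campaign it claims to represent.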
“Policing the user-created content for each and every account is a completely different problem” from policing political ads, he said, which are marked by far less volume than might be seen with more traditional social media postings.
Should platforms fail to protect the transparency of civic and democratic life by verifying the legitimacy of such ads (a “trust, but verify” approach), said Pointner, “the damage is far greater than dollars and cents.”