Fake Social Profiles Blur Line Between Scamming, Legitimate Marketing

Take a close look at the profile of the “person” sending business pitches to your LinkedIn account, or perhaps through Twitter or another social networking platform. They may not really exist.

The digital shift and the connected economy have given rise to the use of artificial intelligence (AI) to generate convincing deepfakes of employees — and even entire companies.

Complicating matters further is the belief that some of this activity comes from legitimate companies using well-crafted bot profiles to inexpensively scale lead-generation efforts. Distinguishing the scams from the marketing is difficult.

As first reported by The Register Monday (March 28), a pair of researchers at Stanford University accidentally discovered more than 1,000 fake LinkedIn profiles after receiving a software sales pitch from a LinkedIn profile that, on second glance, didn’t look right.

“Renée DiResta and Josh Goldstein from the Stanford Internet Observatory made the discovery after DiResta was messaged by a profile purporting to belong to a ‘Keenan Ramsey,’” The Register reported. “It looked like a normal software sales pitch at first glance, but upon further investigation, it became apparent that Ramsey was an entirely fictitious person.”

On closer examination, DiResta noticed that the eye alignment seemed unnatural, as did other minuscule discrepancies, from the jewelry to the way the subject’s hair seemed blended into the image.

See also: Fake LinkedIn Accounts and ‘Job Fishing’ Fraud on Rise, Google Warns

Commenting on Twitter, DiResta said: “It’s not a story of mis- or [disinformation], but rather the intersection of a fairly mundane business use case w/AI technology, and resulting questions of ethics & expectations. What are our assumptions when we encounter others on social networks? What actions cross [the] line to manipulation?”

Some Fakes Aren’t Total Frauds

NPR dug deeper into what DiResta and Goldstein uncovered, reporting Sunday (March 27) that rather than being the work of malicious actors, some phony LinkedIn profiles seem to have a less sinister purpose: generating sales leads.

“By using fake profiles, companies can cast a wide net online without beefing up their own sales staff or hitting LinkedIn’s limits on messages,” NPR reported. “Demand for online sales leads exploded during the pandemic as it became hard for sales teams to pitch their products in person.”

In a PYMNTS interview in February, Carl Churchill, managing director at payment services provider (PSP) technologi, said, “Everyone talks about consumer fraud, the consumer defrauding their supplier, the person they’re buying the goods or services from. What people don’t talk about is the significant levels of fraud that the acquirers, the payments providers, are seeing as well when they’ve got these kind of merchant businesses that are entirely fabricated.”

Read more: Fake Businesses Emerge as New Front in War on ID Fraud

In its most recent Transparency Report, covering the first half of 2021, LinkedIn said its automated defenses caught 97% of the fake accounts it stopped: 11.6 million attempts were detected and blocked at registration, 3.7 million suspect accounts were restricted proactively, and almost 86,000 were restricted after members reported them.

More Human Than Human

In a February blog post, the Proceedings of the National Academy of Sciences (PNAS), the journal of the National Academy of Sciences, said, “Artificial intelligence (AI)-synthesized text, audio, image and video are being weaponized for the purposes of nonconsensual intimate imagery, financial fraud and disinformation campaigns.”

PNAS noted that its “evaluation of the photorealism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable — and more trustworthy — than real faces.”

Showing the extent of these activities, be they malicious or merely unethical, a March 17 blog post by Google’s Threat Analysis Group described its monitoring of a “financially motivated threat actor,” identified as EXOTIC LILY, using AI and deepfakes to infiltrate companies with false business offers.

For an example of AI-generated profile pictures, check out thispersondoesnotexist.com, which generates a new — and totally made-up — image of a nonexistent person each time the page is refreshed.

See also: Cyber Insurance Sees Price Hikes Ahead as Cyberwar Compounds Fraud Wave