That’s according to a Wednesday (Aug. 20) report by The Wall Street Journal (WSJ), which cites the example of a scam that developed following Joann Fabrics’ bankruptcy earlier this year.
Days after that announcement, a host of bogus websites appeared, designed to look almost identical to the retailer’s actual site. These sites offered enticing discounts, with the goal of stealing visitors’ credit card information and personal data.
“The whole look and feel of the website was very similar to the real website,” Melanie McGovern, director of public relations and social media for the Better Business Bureau, told WSJ. “If you’re on your mobile phone, you’re not looking at that URL when you click on an ad or a link in an email that says ‘shop here.’”
These sorts of scams have been around for years, the report said, though cybersecurity experts worry that artificial intelligence (AI) tools now allow scammers who might otherwise lack the proper skills to create near-perfect copies of actual websites. Scammers can deploy them in minutes without writing a single line of code.
“The scary thing is just how easy it is,” said Robert Duncan, vice president of intelligence and strategy at cybersecurity firm Netcraft. “It allows more nontechnical people access to the tools, lowering the barrier of entry.”
His company has identified close to 100,000 domains created with the help of illicit AI tools, impersonating 194 different brands in 68 countries. Netcraft estimates these sites now account for 6% to 7% of all online phishing activity.
The report also includes ways consumers can avoid being scammed, such as navigating to a company’s official website by typing its address directly rather than clicking links in texts or emails. It also cautions users that AI-generated content tends not to contain spelling errors, long a hallmark of phishing emails and webpages.
While scammers are turning to AI to carry out fraud, research by PYMNTS Intelligence has found that a growing number of companies are turning to AI-powered tools to bolster their cybersecurity protections.
The share of chief operating officers who said their companies had employed such measures reached 55% in August of last year, up from 17% three months earlier.
Those executives, PYMNTS wrote earlier this year, “are moving to proactive, AI-driven frameworks — and away from reactive security approaches — because the new AI-based systems can identify fraudulent activities, detect anomalies and provide real-time threat assessments.”