Dolphin Hacking And The Invisible Threats To The Connected Commerce Economy


In a world where one of the three big credit bureaus can be hacked — and, as an American adult, one’s most sensitive financial information is probably already on the dark web — paranoia is not unreasonable. The fear that an international network of loosely connected criminals is out to defraud you of all your money is not the work of a delusional mind, but a rational response to the actual news headlines of the past year.

Being afraid of the dolphins, however — until very recently — might have been a touch weird.

Admittedly, it is unlikely that aquatic mammals are going to try to hack your Siri, Bixby, Alexa or Google-enabled device. And yet, Chinese researchers at Zhejiang University discovered a way to use dolphin-like ultrasonic sounds to essentially hack voice assistants.

Humans can’t hear ultrasonic frequencies of this kind, but they’re clear as a bell to a device’s microphone. Those ultrasonic commands can be used to trick any voice-activated AI assistant into all kinds of trouble.

To make use of the “dolphin attack,” all the researchers had to do was translate human voice commands into ultrasonic frequencies (above 20,000 Hz). They then played these sounds from a regular smartphone equipped with an amplifier, an ultrasonic transducer and a battery.

All in, it’s about a $3 investment.
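For readers curious about the mechanics, the sketch below shows the basic principle under simplified assumptions: an audible voice command is amplitude-modulated onto an ultrasonic carrier, so the command rides on a frequency humans can’t hear, while a device’s slightly nonlinear microphone circuitry ends up recovering something a speech recognizer can understand. The 96 kHz sample rate, 25 kHz carrier and stand-in “voice” signal are illustrative choices for this example, not the researchers’ actual parameters.

```python
# Illustrative sketch: amplitude-modulating a voice command onto an
# ultrasonic carrier, the general principle behind the "dolphin attack".
# Sample rate, carrier frequency and the stand-in signal are assumptions
# chosen for this example, not values from the Zhejiang University paper.
import numpy as np

SAMPLE_RATE = 96_000  # high enough to represent a 25 kHz carrier
CARRIER_HZ = 25_000   # above the roughly 20 kHz limit of human hearing

def modulate_command(voice: np.ndarray) -> np.ndarray:
    """Shift an audible voice signal onto an ultrasonic carrier via AM."""
    t = np.arange(len(voice)) / SAMPLE_RATE
    carrier = np.sin(2 * np.pi * CARRIER_HZ * t)
    # Classic amplitude modulation: the voice envelope rides on the carrier.
    modulated = (1.0 + 0.8 * voice) * carrier
    return modulated / np.max(np.abs(modulated))  # normalize to [-1, 1]

if __name__ == "__main__":
    # Stand-in for a real recorded command: one second of a 440 Hz tone
    # playing the role of the "voice" signal.
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
    fake_voice = 0.5 * np.sin(2 * np.pi * 440 * t)
    ultrasonic = modulate_command(fake_voice)
    print(f"{len(ultrasonic)} samples, peak {np.max(np.abs(ultrasonic)):.2f}")
```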

Most concerning is how widely applicable the hack seems to be, as it works across a variety of platforms and devices: smartphones, Apple iPads, MacBooks, Amazon Echo devices and even an Audi Q3 — 16 devices and seven systems in total.

And, to top it off, “the inaudible voice commands can be correctly interpreted by the SR (speech recognition) systems on all the tested hardware.”

The good news is that the attack range is only five or six feet, so unless the hacker is also a home burglar, the cyberattack needs a power upgrade to be really effective. Additionally, the user would have to activate the assistant first, and the system’s audible reply to the inaudible command would alert anyone at home that something weird was happening.

The cybersecurity loophole could be closed by programming home assistants to ignore commands above 20,000 Hz or at other frequencies that humans can’t possibly speak or hear. While some connected commerce devices — the Chromecast, for example — make some use of ultrasonic frequencies, the conventional wisdom is that a small patch could fix what seems to be a big flaw.
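One plausible shape such a patch could take is sketched below, under assumed parameters: before handing audio to the speech-recognition engine, check how much of its energy sits above the range of human speech and drop anything that looks ultrasonic. The 20 kHz cutoff, the 10 percent threshold and the assumption that the microphone path captures ultrasound at all are illustrative choices, not figures from any vendor’s implementation.

```python
# Minimal sketch of one possible defense: reject audio whose energy is
# concentrated above the range of human speech before it reaches the
# speech-recognition system. Cutoff and threshold values are assumptions
# for illustration, not taken from any vendor's actual patch.
import numpy as np

SAMPLE_RATE = 96_000        # assumes the microphone path captures ultrasound
CUTOFF_HZ = 20_000          # roughly the upper limit of human hearing
MAX_ULTRASONIC_RATIO = 0.1  # tolerate a little high-frequency noise

def looks_ultrasonic(audio: np.ndarray) -> bool:
    """Return True if a large share of the signal's energy is above the cutoff."""
    spectrum = np.abs(np.fft.rfft(audio)) ** 2
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / SAMPLE_RATE)
    total = spectrum.sum()
    if total == 0:
        return False
    ultrasonic = spectrum[freqs >= CUTOFF_HZ].sum()
    return (ultrasonic / total) > MAX_ULTRASONIC_RATIO

def handle_command(audio: np.ndarray) -> None:
    if looks_ultrasonic(audio):
        print("Ignoring command: energy concentrated above 20 kHz")
    else:
        print("Passing audio to the speech-recognition engine")
```

In practice a real fix would likely live in the microphone hardware or firmware rather than in application code, but the idea is the same: treat anything outside the band of human speech as suspect.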

“We have tested these attacks on 16 [voice control system] models, including Apple iPhone, Google Nexus, Amazon Echo and automobiles,” the researchers wrote in their paper. “Each attack is successful on at least one [speech recognition] system. We believe this list is by far not comprehensive. Nevertheless, it serves as a wake-up call to reconsider what functionality and levels of human interaction shall be supported in voice-controllable systems.”

For now, unfortunately, no such patch is available, nor, as of the most recent announcements, is one in the works.

While the practicality of the ultrasonic cyberattack is questionable in its current iteration, its mere existence emphasizes that society’s shift into the world of voice-controlled devices is not just a new variation on the digital commerce of the past, but an entirely separate animal.

And, with that shift, comes an entirely separate set of vulnerabilities. While dolphin hacking in connected commerce is unlikely to become the next big thing in cybercrime, stay tuned. There may be ways into those devices that no one has even imagined, let alone tried to prevent.