Between smart speakers, smartphones and smart cars, speech has become an accepted and natural way to interact with electronics, both at home and on the go.
Whether consumers are simply looking for information (like the current weather, driving directions or how many cups to a pint) or actually carrying out activities, such as ordering goods online, paying bills or setting the thermostat, “Just ask Alexa” has become almost a reflexive reaction for many.
Voice-activated technology is cool, but how secure is it?
Hijacking connected devices with ultrasonic commands, an exploit dubbed "DolphinAttack," is unlikely to put anyone's data or privacy seriously at risk: the attacker would have to be within five to six feet of the device, and the smart speaker's audible response to the inaudible command would likely alert its true owner.
Other threats, however, are far less hypothetical and are already putting individuals at risk. Consumers tend to see voice authentication as preferable to the old-fashioned static password, but in practice it falsely authenticates about once in every thousand attempts.
Compare that to one in 50,000 attempts for Touch ID and one in a million for facial recognition technologies like Apple's forthcoming Face ID or Samsung's iris scan, and it is clear that voice is a weak channel for identity verification. Researchers say a voice can be spoofed by hackers using social engineering, or even by a mediocre impersonator.
University of Michigan researchers have addressed the shortcoming by adding a second channel to verify that the voice is genuine.
This takes the form of a wearable device – a necklace, earbuds or eyeglass attachment – running a program called VAuth, which continuously registers speech-induced vibrations on the user’s body, then pairs them with the audio.
Speaking causes the face, throat and chest to vibrate, so the wearable security token is able to measure those using an accelerometer and pair the signal with input from the electronic device. An algorithm compares the two inputs to determine whether they match.
The electronic device, in turn, only allows users to use voice authentication if the security token is also being worn, and blocks access if the signals do not match.
This two-factor approach to voice authentication creates a vocal signature that is much harder to spoof than the single audio channel of simply speaking. In tests, the VAuth creators measured a false-positive rate of less than one in a thousand, and the requirement that both channels agree makes the combined system more secure than voice authentication alone.
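The matching step described above can be sketched in a few lines. This is an illustrative simplification, not the actual VAuth algorithm: it assumes the comparison is a simple normalized correlation between the microphone audio and the accelerometer's vibration signal, with the command accepted only when the two agree.

```python
import numpy as np

def normalized_correlation(a, b):
    """Pearson-style similarity between two equal-length signals."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.dot(a, b) / len(a))

def signals_match(mic_signal, accel_signal, threshold=0.8):
    """Accept the voice command only if the microphone audio and the
    wearable's body-vibration signal are strongly correlated."""
    return normalized_correlation(mic_signal, accel_signal) >= threshold

# Simulated example: genuine speech vibrates the wearer's body, so the
# accelerometer sees a noisy copy of the audio; a replayed recording
# produces audio with no matching vibration on the token.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
speech = np.sin(2 * np.pi * 5 * t)
vibration = speech + 0.1 * rng.standard_normal(t.size)  # wearer speaking
replay = rng.standard_normal(t.size)                    # no one speaking

print(signals_match(speech, vibration))  # genuine command: True
print(signals_match(speech, replay))     # replay attack: False
```

In this toy version, a replay attack fails because the loudspeaker cannot produce the correlated vibration on the wearer's body, which is the same intuition behind the real system.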
A study on the device, “Continuous Authentication for Voice Assistants,” was presented Oct. 19 at the International Conference on Mobile Computing and Networking, MobiCom 2017, in Snowbird, Utah.
The developers say that the wearable security token can thwart replay attacks, mangled voice attacks and impersonation attempts, in which fraudsters use a few samples of the user’s voice to generate a matching voice print.
Plus, if either the electronic device or the token is stolen or lost, the user can simply unpair it to prevent an attacker from using their device.
Finally, setting up the token is relatively easy, requiring no training on the user’s part – unlike traditional voice recognition, which requires each individual user to “train” the device to recognize their voice.
Consumers surveyed by the research team were largely ready to give the device a shot, likely inspired by some combination of its ease of use and anxiety over security following a season of blockbuster data breaches at Equifax and others. In a survey of 952 consumers, the team found that 70 percent were willing to try a product like VAuth, and half of those said they would pay more for it than the team planned to charge.
The positive response to the survey shows a real desire for security on consumers' parts, and a willingness to try something different to get it – even at the risk of making a somewhat wacky fashion statement.