Amazon, Google Smart Speakers Could Be Breached By Hackers

Security researchers at SRLabs have discovered a new vulnerability that could leave Google and Amazon smart speakers exposed to hackers.

“Through the standard development interfaces, SRLabs researchers were able to compromise the data privacy of users in two ways: 1) request and collect personal data, including user passwords, 2) eavesdrop on users after they believe the smart speaker has stopped listening,” the researchers wrote.

The researchers showcased a flaw in several videos that detail how both assistants could allow hackers to secretly listen to a user. This happens when the assistants are given a series of characters they can’t pronounce, so while they don’t say anything, they do continue to listen for further commands. Anything the user says is then transcribed and sent to the hacker.

In addition, a hack impacting Google Home allows the user to ask for a random number to be generated, with the software continuing to listen long after the initial command. Another  — this time, a horoscope skill for Alexa — can ignore a “stop” command given by the user, and continue to listen. Speakers can also be manipulated into giving fake error messages, which leads to asking for the user’s password.

Fortunately, there’s no evidence that any of these have been used in the real world, and SRLabs shared its findings with both Amazon and Google before making them public.

“To prevent ‘Smart Spies’ attacks, Amazon and Google need to implement better protection, starting with a more thorough review process of third-party Skills and Actions made available in their voice app stores,” the researchers concluded.

Amazon said in a statement that it has launched new mitigations to prevent and detect this type of behavior. For its part, Google said it has review processes to detect these vulnerabilities, and has removed the actions created by the security researchers.