Alexa and Google Assistant Speakers Open to Laser Hack Attacks

These two smart speakers are just sitting ducks for anyone with a laser outside the window.
(Image credit: Corey Olsen/Tom's Guide)

An unmarked van pulls up outside your house. A laser beam shoots out of the van's back window, through your living-room window and onto the microphone of your Google Home or Amazon Echo smart speaker. 

The speaker says, "OK, opening garage door." The garage door lifts and a gang of thieves enters your house. Or the speaker says, "OK, unlocking and starting car," and a thief climbs into your Tesla and drives away.

Sound like the opening to the next Purge movie? It could really happen, say a team of American and Japanese researchers.

The researchers discovered that precisely modulated lasers could silently send "voice" commands to smart speakers from hundreds of feet away. The attack also worked on smartphones and on an iPad, but only at short distances.

How can you defend yourself if this starts happening in real life? The best bet is to make sure your Amazon Echo, Google Home, Facebook Portal, Amazon Fire TV Cube and other smart speakers aren't facing windows. Putting black tape over the microphone may not work because a high-powered laser beam could shine, or even burn, right through.

The technical details

These attacks work because the microphones in smart speakers and smartphones are tiny micro-electro-mechanical systems (MEMS) chips with two layers, a flexible membrane and a stiff backplate, that together form a capacitor holding an electric charge.

Sound waves cause the membrane to flex and vary its distance from the backplate, and the resulting changes in electric capacitance are registered by the backplate, which converts the changes into an electric signal. The smart speaker or smartphone interprets this signal as sound.

But a laser beam can short-circuit this process, although it's not yet clear exactly how. It may be that the laser creates the same sorts of changes in electric capacitance on the microphone's backplate as sound would. Or it may be that the laser heats up the air around the microphone to move the membrane.  

In any case, the microphone will think there is sound, even if there is none, and send the resulting signal to the device's CPU. 
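The capacitive mechanism described above can be sketched in a few lines. This is a toy parallel-plate model, not a real microphone's specs; all the constants (plate area, gap, bias voltage) are illustrative assumptions:

```python
import math

# Toy model of a capacitive (MEMS) microphone, as described above.
# All numeric values are illustrative assumptions, not real part specs.

EPSILON_0 = 8.854e-12      # permittivity of free space, F/m
PLATE_AREA = 1e-6          # membrane area, m^2 (assumed)
REST_GAP = 4e-6            # membrane-to-backplate gap at rest, m (assumed)
BIAS_VOLTAGE = 10.0        # DC bias across the capacitor, volts (assumed)

def capacitance(gap_m):
    """Parallel-plate capacitance for a given membrane-to-backplate gap."""
    return EPSILON_0 * PLATE_AREA / gap_m

def output_signal(displacements_m):
    """Convert membrane displacements (from sound, or from a laser) into a
    voltage-like signal: the stored charge is roughly constant, so voltage
    tracks the changing capacitance."""
    c_rest = capacitance(REST_GAP)
    charge = c_rest * BIAS_VOLTAGE
    return [charge / capacitance(REST_GAP - d) - BIAS_VOLTAGE
            for d in displacements_m]

# A 1 kHz "sound wave" flexing the membrane by up to 0.1 micron:
wave = [1e-7 * math.sin(2 * math.pi * 1000 * t / 48000) for t in range(48)]
signal = output_signal(wave)
```

The key point is that the device's chip only ever sees the voltage signal; anything that moves the membrane, or mimics the resulting capacitance change, is indistinguishable from sound.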

You can do this too for less than $700

The researchers, from the University of Michigan and the University of Electro-Communications in Tokyo, found that a setup involving a laptop, a standard photographer's tripod, a $30 audio amplifier, a $350 laser "driver" and a cheap laser pointer ($15-$20) could be used to modulate the laser beam to mimic actual voice commands. That's enough equipment to send fake voice commands to microphones a few feet away.

Add a telephoto lens -- the researchers used a $200 one -- and you can send that laser beam hundreds of feet and have it still activate smart speakers. Excluding the laptop, which plays recorded voice files to the laser driver, the entire setup costs about $600-$700. 
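The trick the laser driver performs is simple amplitude modulation: beam intensity is varied in step with a recorded voice clip, and the microphone responds to the intensity envelope as if it were sound pressure. A minimal sketch, with assumed power and modulation-depth values:

```python
import math

# Sketch of the amplitude modulation the attack relies on: the laser
# driver varies beam intensity in step with a recorded voice command.
# The power level and modulation depth below are assumptions.

SAMPLE_RATE = 48000   # audio samples per second
CARRIER_MW = 60.0     # average laser power, milliwatts (assumed)
DEPTH = 0.8           # modulation depth, 0..1 (assumed)

def modulate(audio_samples):
    """Map audio samples in [-1, 1] to laser-intensity samples in mW."""
    return [CARRIER_MW * (1.0 + DEPTH * s) for s in audio_samples]

# A 440 Hz test tone standing in for a recorded voice command:
tone = [math.sin(2 * math.pi * 440 * t / SAMPLE_RATE)
        for t in range(SAMPLE_RATE // 100)]
intensity = modulate(tone)
```

Keeping the modulation depth below 1 ensures the intensity never goes negative, so the beam stays on continuously while its brightness traces the voice waveform.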

The researchers climbed to the top of a bell tower on the Michigan campus and were able to control a Google Home speaker on the fourth floor of a building more than 200 feet away.

Right now, most Google Home and Amazon Echo devices are essentially defenseless against this sort of attack. They don't check for voice recognition by default -- anyone can give them a voice command if they just say "Alexa" or "OK, Google." 

Smartphones and tablets, which unlike smart speakers do tend to leave the house, are a bit better protected. The device owner often has to register his or her voice with the device in order to trigger voice commands. 

You can optionally turn on voice-recognition requirements on smart speakers. However, in those cases, only the wake words -- "Alexa," "Hey, Siri," or "OK, Google" -- need to be in the owner's voice. The command that follows the wake words can be in any voice.

The researchers are working with Amazon, Facebook and Google, as well as Tesla, to develop ways to stop these attacks. (Their paper found that Ford vehicles, like Teslas, were vulnerable to this attack through linked smart speakers.)

One possible solution would be to have smart speakers ask the user a varied follow-up question before a command can be carried out. Another: future generations of smart speakers could have more than one microphone and require that commands be audible on all of the microphones.
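The multi-microphone idea works because a focused laser spot excites only one microphone, while real sound reaches them all. A minimal sketch of that consensus check, with an assumed energy threshold:

```python
# Sketch of the multi-microphone defense described above: a command
# registered on only one channel is rejected. The threshold and the
# RMS-energy detection measure are assumptions for illustration.

DETECTION_THRESHOLD = 0.01  # minimum RMS energy to count as "heard" (assumed)

def rms(samples):
    """Root-mean-square energy of one microphone channel."""
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def accept_command(channels):
    """Accept only if every microphone channel registered the command."""
    return all(rms(ch) >= DETECTION_THRESHOLD for ch in channels)

# Real sound reaches both mics; a laser spot excites only one:
ambient_sound = [[0.2, -0.2, 0.2], [0.18, -0.19, 0.2]]
laser_spot = [[0.2, -0.2, 0.2], [0.0, 0.0, 0.0]]
print(accept_command(ambient_sound))  # True
print(accept_command(laser_spot))     # False
```

An attacker would then need one laser per microphone, aimed simultaneously, which raises the bar considerably.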

But in the meantime, move those smart speakers away from the window.