Alexa, Siri, and Google Assistant Can Be Hacked Remotely With Lasers

The attack highlights a vulnerability in “smart” assistant microphones that can be targeted from up to 350 feet away using lasers.
Amazon Alexa. Image: Getty Images

Many consumers already have privacy reservations about giving Apple’s Siri, Google Assistant, or Amazon Alexa access to intimate details of their daily lives. New research isn’t likely to dissuade them. Researchers in Japan and at the University of Michigan this week announced that they’ve discovered a way to compromise Siri, Google Assistant, Facebook Portal, and Alexa digital home assistants from hundreds of feet away using laser light and even ordinary flashlights. The problem stems from a vulnerability in the MEMS (micro-electro-mechanical systems) microphones used in home assistant products from all major companies. According to the researchers, the poorly understood vulnerability allows attackers to inject inaudible and invisible commands into light beams, which can then be used to remotely control the voice assistants.

“The main discovery behind light commands is that in addition to sound, microphones also react to light aimed directly at them,” the researchers said. “Thus, by modulating an electrical signal in the intensity of a light beam, attackers can trick microphones into producing electrical signals as if they are receiving genuine audio.” To prove it, the researchers climbed a 140-foot bell tower at the University of Michigan, then successfully controlled a Google Home device on the fourth floor of an office building roughly 230 feet away. They also demonstrated that by focusing the laser light, they were able to compromise home assistants from as far as 350 feet away.
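The core trick is ordinary amplitude modulation: the attacker’s recorded voice command is superimposed on the laser’s drive current, so the beam’s brightness rises and falls in step with the audio waveform, and the microphone’s diaphragm responds as if it were hearing sound. Here is a minimal sketch of that modulation step; the bias current and modulation depth are made-up illustrative values, not figures from the paper:

```python
import numpy as np

SAMPLE_RATE = 44_100        # audio sample rate in Hz
BIAS_CURRENT_MA = 200.0     # hypothetical laser-diode DC bias (illustrative)
MODULATION_DEPTH = 0.3      # fraction of the bias swung by the audio signal

def audio_to_drive_current(audio: np.ndarray) -> np.ndarray:
    """Map an audio waveform onto a laser drive-current waveform.

    Beam intensity tracks drive current, so the MEMS microphone "hears"
    the audio encoded in the light: i(t) = I_dc * (1 + m * s(t)).
    """
    s = audio / np.max(np.abs(audio))             # normalize to [-1, 1]
    current = BIAS_CURRENT_MA * (1.0 + MODULATION_DEPTH * s)
    # Clamp so the diode never turns off or exceeds its rated current.
    return np.clip(current, 0.0, 2.0 * BIAS_CURRENT_MA)

# Example: a 1 kHz test tone standing in for a recorded voice command.
t = np.linspace(0, 1.0, SAMPLE_RATE, endpoint=False)
tone = np.sin(2 * np.pi * 1000 * t)
drive = audio_to_drive_current(tone)
print(drive.min(), drive.max())  # stays within the clamped driver range
```

In the researchers’ setup, a waveform like this one feeds the laser driver, which does the electrical-to-optical conversion; the sketch stops at the current waveform because everything past that point is hardware.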

The attack isn’t easy to accomplish: attackers need a direct line of sight to the target, the light will be visible, and the targeted assistants will still issue vocal confirmation of the commands, likely alerting any potential victim to the attack. That said, the researchers noted that while the complexity of some of the lab equipment could stump novices, much of the gear was relatively cheap and easy to obtain. Their demonstration attack used a $14 laser pointer, a $340 laser driver, a $28 sound amplifier, and a $200 telephoto lens mounted on a standard camera tripod to help focus the laser.

Still, the problem reflects an industry that isn’t doing enough to secure devices with unprecedented access to everything from our door locks to our daily calendars. Of particular concern to the researchers was the fact that these devices, and the systems they connect to, don’t require any sort of PIN before carrying out potentially sensitive commands. Even when PINs are present on connected devices, the researchers believe it’s possible to brute-force four-digit codes once the laser attack provides access to a digital assistant.
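The arithmetic behind that claim is simple: a four-digit PIN has only 10,000 possibilities, so an attacker who can inject one spoken guess every few seconds exhausts the whole space in well under a day if there is no lockout. A hypothetical sketch of that enumeration follows; the idea of rendering each PIN as a spoken-digit phrase is an assumption about how guesses would be delivered, not a procedure from the paper:

```python
import itertools

DIGIT_WORDS = ["zero", "one", "two", "three", "four",
               "five", "six", "seven", "eight", "nine"]

def pin_to_speech(pin: str) -> str:
    """Render a numeric PIN as the spoken-digit phrase an assistant expects."""
    return " ".join(DIGIT_WORDS[int(d)] for d in pin)

def enumerate_pins():
    """Yield all 10,000 four-digit PINs as (pin, spoken phrase) pairs."""
    for digits in itertools.product("0123456789", repeat=4):
        pin = "".join(digits)
        yield pin, pin_to_speech(pin)

# At one injected guess every 5 seconds, 10_000 * 5 s is roughly 14 hours.
for i, (pin, phrase) in enumerate(enumerate_pins()):
    if i >= 3:
        break  # in the attack scenario, each phrase would be light-injected
    print(pin, "->", phrase)
```

The point of the sketch is the search-space size, not the delivery mechanism: without rate limiting or lockouts on the assistant side, the PIN adds hours, not real security.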

“We find that VC systems are often lacking user authentication mechanisms, or if the mechanisms are present, they are incorrectly implemented,” the researchers said, noting that the threat opens the door to the broader compromise of any number of “smart” systems marketed as making your home more secure. “We show how an attacker can use light-injected voice commands to unlock the target’s smart-lock protected front door, open garage doors, shop on e-commerce websites at the target’s expense, or even locate, unlock and start various vehicles (e.g., Tesla and Ford) if the vehicles are connected to the target’s Google account,” they added.

While the researchers say they only tested Facebook Portal, Google Assistant, Alexa, Siri, and a handful of tablets and phones, they believe any device that uses a MEMS microphone is open to attack, something that could force the entire industry to rethink microphone design.

Outside of closing your blinds, the researchers say there’s no real defense against the attack at the moment, though they are currently working with Google, Apple, and Amazon to develop countermeasures for future models. Ultimately, though, it’s just another example of how sometimes dumber technology is the smarter choice.