It’s still a mystery to researchers at the University of Michigan and The University of Electro-Communications (Tokyo): just what physically enabled them to inject commands into the embedded microphones of Amazon Alexa, Google Home, and other digital voice assistant devices via laser pointers.
The team in 2019 used light to remotely control Google Assistant, Amazon Alexa, Facebook Portal, and Apple Siri by exploiting a vulnerability in the devices' MEMS (micro-electro-mechanical systems) microphones. They used the light beams to inject invisible and inaudible commands into the digital voice assistants, as well as voice-controlled smartphones and tablets, through glass windows from as far away as 110 meters (120 yards).
They’re now taking their research to a new phase.
“There’s still some mystery around the physical causality on how it’s working. We’re investigating that more in-depth,” says Benjamin Cyr, a Ph.D. student at Michigan who, along with researcher Sara Rampazzi, will be presenting the latest iteration of the research at Black Hat Europe on Dec. 10. Why do the mikes respond to light as if it were sound? “We want to try to nail down what’s happening on a physical level, so that future hardware designs” can protect against light-injection attacks, he says.
They are now studying the security of sensing systems overall as well, including those found in medical devices, autonomous vehicles, industrial systems – and even space systems.
Cyr, Rampazzi, an assistant professor at the University of Florida, and Daniel Genkin, an assistant professor at the University of Michigan, plan to show at Black Hat Europe how a security camera could be manipulated via a hijacked voice assistant with which it interfaces. They’ll be demonstrating their light-injection hack against the Amazon Echo 3, a newer model of the smart speaker system that was not available last year when they first tested Echo, Siri, Facebook Portal, and Google Home. Cyr says they haven’t had the opportunity yet to test the fourth-generation Echo speaker.
As a bonus, Cyr says he plans to demonstrate what the laser beam actually sounds like when it hits the mike of the digital assistant. “I’ll be taking some recordings of the mike” to play during the demo, he says.
At the heart of the research is the broader problem of an explosion of Internet of Things devices on the market that were not built with security in mind.
“We want to understand … how to defend against these vulnerabilities. Our final goal is to protect the system and make it more resilient, not only for the attack we found but for future attacks that have not yet been discovered,” Rampazzi says.
Cat Toys and Light Commands
The researchers spent just $2,000 on equipment to conduct the attack, which they dubbed “Light Commands.” The setup included laser pointers, a laser driver, and a sound amplifier. However, they say it could be done for as little as $100, including a low-end laser pointer for cats that can be bought on Amazon.
“The Amazon lasers we bought were for cats” that came with cat toys, Cyr says. “So we were giving away cat toys” after that.
For longer-range attacks, they purchased a $200 telephoto lens, which allowed them to shoot the light beam down a long hallway. They encode the voice command onto the beam by modulating the laser’s intensity with the audio signal.
“You shoot it to the acoustic part of the mike that then gets converted into an acoustic signal. So the voltage signal looks exactly the same as if it’s being done by an acoustic signal,” Cyr says.
This allows them to issue commands to voice-enabled devices, such as garage door openers, smart locks, and home security system cameras.
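A rough way to picture the mechanism Cyr describes is amplitude modulation: the voice command waveform rides on the laser’s intensity, and the MEMS diaphragm converts those intensity fluctuations into the same voltage swing a sound wave would produce. The Python sketch below is a minimal illustration under that assumption; the sample rate, bias, and modulation depth are made-up parameters, not the researchers’ actual tooling.

```python
import numpy as np

SAMPLE_RATE = 48_000  # audio sample rate in Hz (assumed for this sketch)

def amplitude_modulate(command_audio: np.ndarray,
                       bias: float = 0.5,
                       depth: float = 0.4) -> np.ndarray:
    """Map an audio waveform in [-1, 1] to a laser intensity in [0, 1].

    The laser stays on at a DC bias level; the voice command rides on top
    as small intensity variations, which the MEMS mic's diaphragm picks up
    as if it were sound pressure. Bias/depth values are illustrative only.
    """
    audio = np.clip(command_audio, -1.0, 1.0)
    intensity = bias + depth * audio        # intensity-modulated drive signal
    return np.clip(intensity, 0.0, 1.0)     # assume the driver accepts 0..1

# Example: a 1 kHz test tone standing in for a spoken command
t = np.arange(0, 1.0, 1.0 / SAMPLE_RATE)
tone = 0.8 * np.sin(2 * np.pi * 1000 * t)
drive_signal = amplitude_modulate(tone)
```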
The researchers shared their findings with Amazon, Google, and the other vendors before they went public last year with the initial research. Rampazzi says Amazon has since made some slight updates to Alexa’s software, for example, such that an attacker would be unable to brute-force the device PIN.
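The PIN hardening Rampazzi describes presumably amounts to throttling repeated guesses. Below is a minimal sketch, assuming a simple failed-attempt lockout; the class name, thresholds, and logic are illustrative, not Amazon’s actual implementation.

```python
import time

MAX_ATTEMPTS = 3        # assumed threshold before lockout
LOCKOUT_SECONDS = 300   # assumed lockout window

class PinGuard:
    """Illustrative brute-force guard: lock out PIN entry after repeated failures."""

    def __init__(self, correct_pin: str):
        self._pin = correct_pin
        self._failures = 0
        self._locked_until = 0.0

    def try_pin(self, attempt: str) -> bool:
        now = time.monotonic()
        if now < self._locked_until:
            return False                    # still locked out; ignore the attempt
        if attempt == self._pin:
            self._failures = 0
            return True
        self._failures += 1
        if self._failures >= MAX_ATTEMPTS:
            self._locked_until = now + LOCKOUT_SECONDS  # back off after repeated failures
            self._failures = 0
        return False
```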
“The new generation of devices also have a cover” over the mike, she notes, although the researchers don’t know whether that was in response to their attack. “The cover makes it harder to find the location of the mike and to be able to inject [light commands] into the device.”
Vendors could make other hardware adjustments to protect the devices from the Light Command attack, she says, such as ensuring the mike isn’t susceptible to light, or adding authentication techniques to the software so an unauthorized user can’t commandeer the digital voice assistant.
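One way to realize the kind of check Rampazzi describes would be sensor fusion across a device’s microphone array: a laser spot typically excites only one mic, while genuine speech reaches all of them. The sketch below assumes that idea; the function name, threshold, and channel layout are illustrative, not a vendor design.

```python
import numpy as np

def looks_like_injection(mic_channels: np.ndarray, threshold: float = 0.5) -> bool:
    """Flag a possible light-injection attempt before accepting a command.

    mic_channels: array of shape (n_mics, n_samples), one row per microphone.
    Returns True when the loudest channel carries energy the other channels
    do not corroborate, which is consistent with a single mic being driven
    by light rather than by an acoustic source. Threshold is an assumption.
    """
    energy = np.sqrt(np.mean(mic_channels ** 2, axis=1))  # RMS level per mic
    loudest = energy.max()
    others = np.delete(energy, energy.argmax())
    if loudest == 0 or others.size == 0:
        return False
    # Real sound excites every mic; a laser excites mostly one.
    return others.max() / loudest < threshold
```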