Researchers at the University of Michigan and the University of Electro-Communications, Tokyo, have devised a new attack technique against smart voice assistants. The technique leverages a new ‘Light Commands’ vulnerability that can be used to remotely hijack smart speakers powered by Alexa and Siri.
How does it work?
“By modulating an electrical signal in the intensity of a light beam, attackers can trick microphones into producing electrical signals as if they are receiving genuine audio,” the researchers outlined in their research paper.
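The principle the researchers describe can be sketched in a few lines of code. This is an illustrative model only, not the researchers' actual setup: the audio command modulates a laser's intensity around a constant bias (light intensity cannot go negative), and an idealized MEMS microphone that responds to intensity variations recovers a signal proportional to the original audio once the constant component is removed. All parameter values below are arbitrary.

```python
import numpy as np

# Illustrative parameters -- arbitrary values, for demonstration only.
SAMPLE_RATE = 44_100   # audio samples per second
DC_BIAS_MW = 200.0     # constant laser power (mW); keeps intensity positive
MOD_DEPTH_MW = 150.0   # how strongly the audio swings the intensity

def modulate_intensity(audio: np.ndarray) -> np.ndarray:
    """Map an audio waveform in [-1, 1] onto a light-intensity signal.

    The intensity rides on a DC bias because a laser cannot emit
    'negative' light.
    """
    return DC_BIAS_MW + MOD_DEPTH_MW * np.clip(audio, -1.0, 1.0)

def microphone_response(intensity: np.ndarray) -> np.ndarray:
    """Idealized MEMS microphone: reacts to intensity *variations*.

    Removing the constant (DC) component leaves a signal proportional
    to the original audio -- which is why the assistant 'hears' a
    genuine voice command.
    """
    return intensity - intensity.mean()

# A 1 kHz test tone standing in for a spoken command.
t = np.arange(0, 0.01, 1 / SAMPLE_RATE)
audio = np.sin(2 * np.pi * 1000 * t)

light = modulate_intensity(audio)
recovered = microphone_response(light)

# The recovered signal tracks the original audio up to a scale factor.
print(np.allclose(recovered / MOD_DEPTH_MW, audio - audio.mean(), atol=1e-6))
```

Because the microphone cannot distinguish pressure-driven diaphragm movement from light-driven movement, the recovered waveform is processed like any other voice command.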
Risks associated with the vulnerability
In a real-world scenario, an attacker could misuse the vulnerability to instruct a voice assistant to unlock a door or perform other malicious operations.
“We show how an attacker can use light-injected voice commands to unlock the target’s smart-lock protected front door, open garage doors, shop on e-commerce websites at the target’s expense, or even locate, unlock and start various vehicles (e.g., Tesla and Ford) if the vehicles are connected to the target’s Google account,” noted researchers.
Researchers said they tested the attack across a variety of devices that use voice assistants, including the Google Nest Cam IQ, Amazon Echo, Facebook Portal, iPhone XR, Samsung Galaxy S9, and Google Pixel 2. But they caution that any system that uses MEMS microphones and acts on data without additional user confirmation might be vulnerable.
Although there is no evidence of mass exploitation of the vulnerability in the wild, the researchers have proposed countermeasures. These include adding a second layer of authentication, acquiring audio input from multiple microphones, or fitting a cover that physically blocks light from reaching the microphones.
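The multi-microphone countermeasure rests on a simple observation: genuine sound reaches every microphone on a device, while a narrow laser beam excites only the one it hits. A minimal sketch of such a check, assuming a hypothetical two-microphone device and an arbitrary energy-ratio threshold, might look like this:

```python
import numpy as np

def looks_like_injection(mic_a: np.ndarray, mic_b: np.ndarray,
                         ratio_threshold: float = 10.0) -> bool:
    """Flag input when one microphone's signal energy dwarfs the other's.

    Genuine speech produces comparable energy on both microphones; a
    laser beam aimed at one port does not. The threshold is an
    arbitrary illustrative value.
    """
    energy_a = float(np.sum(np.square(mic_a)))
    energy_b = float(np.sum(np.square(mic_b)))
    hi, lo = max(energy_a, energy_b), min(energy_a, energy_b)
    return lo == 0 or hi / lo > ratio_threshold

rng = np.random.default_rng(0)
voice = rng.normal(size=1000)

# Genuine speech: both microphones pick up roughly the same signal.
print(looks_like_injection(voice, voice * 0.9))                         # False

# Light injection: only the targeted microphone sees a strong signal.
print(looks_like_injection(voice, rng.normal(scale=0.01, size=1000)))   # True
```

A production implementation would likely compare cross-correlation or spectral content rather than raw energy, but the idea is the same: reject commands that are physically inconsistent with airborne sound.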