• The flaw stems from micro-electromechanical systems (MEMS) microphones, which convert voice commands into electrical signals, responding to light as if it were sound.
  • An attacker can misuse the vulnerability to instruct a voice assistant to unlock a door or perform other malicious operations.

Researchers at the University of Michigan and the University of Electro-Communications, Tokyo, have devised a new attack technique against smart voice assistants. The technique leverages a vulnerability dubbed ‘Light Commands’ that can be used to remotely hijack Alexa- and Siri-powered smart speakers.

How does it work?

  • The attack exploits a design flaw in micro-electromechanical systems (MEMS) microphones, which convert voice commands into electrical signals.
  • By modulating voice commands onto the intensity of a laser beam, the researchers demonstrated successful injection of inaudible malicious commands into several voice-controlled devices, including smart speakers, tablets, and phones, across large distances and through glass windows.
  • The tests showed that it is possible to send inaudible commands via a laser beam from as far as 110 meters away, and even between two separate buildings.

“By modulating an electrical signal in the intensity of a light beam, attackers can trick microphones into producing electrical signals as if they are receiving genuine audio,” the researchers outlined in their research paper.
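The modulation the researchers describe is essentially amplitude modulation: the audio waveform is superimposed on a constant laser bias so the beam's intensity varies in step with the "voice". A minimal sketch, assuming a hypothetical `modulate_onto_light` helper (the function name, bias level, and modulation depth are illustrative, not from the paper):

```python
import numpy as np

def modulate_onto_light(audio, depth=0.5):
    """Amplitude-modulate an audio waveform onto laser intensity.

    Hypothetical illustration: the laser drive is biased to a constant
    level (1.0 here) and the audio varies the intensity around it.
    Intensity cannot be negative, so depth < 1 keeps it positive.
    """
    audio = np.asarray(audio, dtype=float)
    audio = audio / np.max(np.abs(audio))   # normalise to [-1, 1]
    return 1.0 + depth * audio              # intensity around the bias

# A 1 kHz tone standing in for a voice command, sampled at 16 kHz
t = np.linspace(0, 0.01, 160, endpoint=False)
tone = np.sin(2 * np.pi * 1000 * t)
intensity = modulate_onto_light(tone, depth=0.5)

# The intensity stays positive while its AC component tracks the audio,
# which is what the MEMS diaphragm ultimately "hears".
assert intensity.min() > 0
```

A photodiode (or, per the paper, a MEMS diaphragm) recovering the AC component of this intensity would reproduce the original waveform, which is why the microphone outputs signals "as if receiving genuine audio".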

Risks associated with the vulnerability

In a real-world scenario, an attacker could misuse the vulnerability to instruct a voice assistant to unlock a door or perform other malicious operations.

“We show how an attacker can use light-injected voice commands to unlock the target’s smart-lock protected front door, open garage doors, shop on e-commerce websites at the target’s expense, or even locate, unlock and start various vehicles (e.g., Tesla and Ford) if the vehicles are connected to the target’s Google account,” noted researchers.

Vulnerable devices

Researchers said they tested the attack across a variety of devices that use voice assistants including the Google Nest Cam IQ, Amazon Echo, Facebook Portal, iPhone XR, Samsung Galaxy S9, and Google Pixel 2. But they caution that any system that uses MEMS microphones and acts on data without additional user confirmation might be vulnerable.

Bottom line

Although there is no evidence of mass exploitation of the vulnerability in the wild, the researchers have proposed countermeasures. These include adding a second layer of authentication, acquiring audio input from multiple microphones, and implementing a cover that physically blocks light from hitting the microphones.

Cyware Publisher