
OPLIN 4Cast #595: The sound of another security headache for smart speakers

Posted in 4cast and Voice-activated assistants

Audio viruses are not new, although they certainly haven’t gotten the attention that other kinds of hacks and malware have. As early as 2013, security researchers confirmed that it was possible to transmit malware from a speaker to a nearby microphone. Now, however, these attacks have a new target: voice-activated assistants like Siri and Alexa. Attack vectors can include YouTube videos, radio shows, and even TV programs.

Some researchers believe it’s also possible to hide attacks in music or spoken text. Right now, there’s no protection from what security experts have dubbed the “Dolphin Attack.” Practically speaking, though, there may not be much danger in this…at least, not yet.

  • Audio Virus is Coming? [Medium] "This situation carries a potential threat because someone can make your phone call somebody, open websites or even buy something and unlock the door of the smart home through the speech recognition systems."
  • Inaudible ultrasound commands can be used to secretly control Siri, Alexa, and Google Now [The Verge] "As with the rest of the research, this method is satisfyingly clever, but a little too impractical to be a widespread danger. For a start, for a device to pick up an ultrasonic voice command, the attacker needs to be nearby — as in, no more than a few feet away. The attack also needs to take place in a fairly quiet environment."
  • Hackers send silent commands to speech recognition systems with ultrasound [TechCrunch] "Security researchers in China have invented a clever way of activating voice recognition systems without speaking a word. By using high frequencies inaudible to humans but which register on electronic microphones, they were able to issue commands to every major 'intelligent assistant' that were silent to every listener but the target device." (A sketch of this high-frequency trick follows the list.)
  • ‘Dolphin Attack’ hides secret commands for Alexa and Siri inside music [Tampa Bay Times] "With audio attacks, the researchers are exploiting the gap between human and machine speech recognition. Speech-recognition systems typically translate each sound to a letter, eventually compiling those into words and phrases. By making slight changes to audio files, researchers were able to cancel out the sound that the speech-recognition system was supposed to hear and replace it with a sound that would be transcribed differently by machines while being nearly undetectable to the human ear." (The toy calculation after the list illustrates how small such a change can be.)
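
The Verge and TechCrunch summaries above point at the core trick: the spoken command is amplitude-modulated onto an ultrasonic carrier above the roughly 20 kHz limit of human hearing, and the slight nonlinearity of an ordinary microphone front end demodulates it back into an audible-band signal the assistant can recognize. Here is a minimal signal-processing sketch of that modulation step, assuming Python with NumPy and SciPy; the 25 kHz carrier, 0.8 modulation depth, and command.wav file name are illustrative assumptions, not values taken from the research.

```python
# Conceptual sketch of the ultrasonic-carrier idea: amplitude-modulate a
# recorded voice command onto a carrier above human hearing. The carrier
# frequency, modulation depth, and file names are illustrative assumptions.
import numpy as np
from scipy.io import wavfile

CARRIER_HZ = 25_000   # above the ~20 kHz limit of human hearing
OUT_RATE = 96_000     # output sample rate high enough to represent 25 kHz

rate, voice = wavfile.read("command.wav")       # hypothetical recording
voice = voice.astype(np.float64)
if voice.ndim > 1:                              # mix stereo down to mono
    voice = voice.mean(axis=1)
voice /= np.max(np.abs(voice))                  # normalize to [-1, 1]

# Resample the command to the output rate with simple linear interpolation.
t_in = np.arange(len(voice)) / rate
t_out = np.arange(int(len(voice) * OUT_RATE / rate)) / OUT_RATE
baseband = np.interp(t_out, t_in, voice)

# Classic amplitude modulation: carrier * (1 + m * signal). A microphone's
# nonlinear front end effectively squares its input, which recreates the
# audible baseband term that the assistant then recognizes.
m = 0.8
carrier = np.cos(2 * np.pi * CARRIER_HZ * t_out)
am = carrier * (1 + m * baseband) / (1 + m)     # keep samples within [-1, 1]

wavfile.write("ultrasonic_command.wav", OUT_RATE, (am * 32767).astype(np.int16))
```

Note that the sketch only produces the modulated file; whether a given speaker can reproduce a 25 kHz carrier cleanly, and whether the target microphone demodulates it, depends on the hardware, which is part of why the researchers quoted above consider the attack impractical beyond a few feet.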
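
The Tampa Bay Times piece describes a related but different technique: instead of ultrasound, the attacker makes tiny changes to an ordinary audio file so that a speech-recognition system transcribes something other than what a human hears. Crafting such a perturbation requires optimizing against the target recognizer, which is beyond a sketch; the toy snippet below, again assuming Python with NumPy, only illustrates the scale involved, using plain noise at an assumed level of 40 dB below the signal as a stand-in for a crafted perturbation.

```python
# Toy illustration of how small an adversarial audio perturbation is in
# signal terms. Real attacks optimize the perturbation against a specific
# recognizer; here plain noise at an assumed -40 dB level stands in for it.
import numpy as np

rate = 16_000
t = np.arange(rate * 2) / rate                    # two seconds of audio
speech = 0.5 * np.sin(2 * np.pi * 220 * t)        # placeholder "speech"

rng = np.random.default_rng(0)
noise = rng.standard_normal(len(speech))
noise *= np.sqrt(np.mean(speech ** 2) / np.mean(noise ** 2))  # match RMS
perturbation = noise * 10 ** (-40 / 20)           # drop it 40 dB below

adversarial = speech + perturbation               # near-identical to humans
snr_db = 10 * np.log10(np.mean(speech ** 2) / np.mean(perturbation ** 2))
print(f"perturbation sits {snr_db:.0f} dB below the signal")  # ~40 dB
```

At that level the added signal is roughly a hundredth of the speech amplitude, which is why a carefully crafted version of it can steer the transcription while remaining nearly undetectable to the human ear.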
