Americas

  • United States

Asia

Oceania

Voice squatting attacks: Hacks turn Amazon Alexa, Google Home into secret eavesdroppers

News
May 17, 20185 mins
HackingInternet of ThingsSecurity

Researchers devise new two new attacks -- voice squatting and voice masquerading -- on Amazon Alexa and Google Home, allowing adversaries to steal personal information or silently eavesdrop.

mute google home amazon echo
Credit: Florence Ion

Oh, goody, Amazon Alexa and/or Google Home could be hit with remote, large-scale “voice squatting” and “voice masquerading” attacks to steal sensitive user information or eavesdrop on conversations.

Third-party skills are what make virtual personal assistants like Alexa so handy; by enabling skills, your interactions with Alexa can be more relevant to your life and what you like. Skills are also what the group of researchers exploited to come up with voice squatting attacks. It’s doubtful that you’d even notice if you were hit with such an attack; unlike the researchers, adversaries are unlikely to have the skill tell you about the hack.

Similar invocation name is the first example of voice squatting attack provided by researchers from Indiana University, Bloomington; the University of Virginia; and the Chinese Academy of Sciences. Basically, they hijacked the voice command meant for a different skill since the attack skill trigger sounds very similar to a real and non-malicious skill.

Voice squatting attacks on Amazon Alexa and Google Home

The researchers explained:

We registered an attack skill “rap game” that has similar invocation name with target skill “rat game” on Alexa. We showed that when user tried to invoke “rat game,” attack skill “rap game” will be invoked instead.

We registered an attack skill “intraMatic opener” that has similar invocation name with target skill “Entrematic Opener” on Google Assistant. We showed that when user tried to invoke “Entrematic Opener,” attack skill “intraMatic opener” will be invoked instead.

An adversary could also register a skill such as Capital Won, which is what the virtual assistant might hear instead of Capital One if there is background noise or perhaps if English isn’t a person’s native language. Then the attack would be privy to sensitive financial information.

But that’s not the only way to pull off voice squatting attacks. The researchers also exploited politeness, although any extra word could be made to work to trigger a malicious skill instead of the intended one. One video example shows what could happen to the above “rat game” skill on Alexa if a user were to add “rat game please.”

They provided a different example in their research paper titled, “Understanding and Mitigating the Security Risks of Voice-Controlled Third-Party Skills on Amazon Alexa and Google Home” (pdf). It reads, “One may say, ‘Alexa, open Capital One please,’ which normally opens the skill Capital One, but can trigger a malicious skill Capital One Please once it is uploaded to the skill market.”

As another example of how users’ voice commands could be exploited while interacting with Alexa or Google Home, a person might say “play some sleep sounds” instead of saying “sleep sounds” which would trigger the malicious version.

Voice masquerading attacks

And why open virtual personal assistants to only one type of remote, large-scale attack when the devices could be vulnerable to more? You may recall hearing about voice masquerading-esque attacks by Checkmarx last month, but this group of researchers who responsibly disclosed the vulnerabilities noted, “Both Amazon and Google acknowledge us as the first one who discovered these vulnerabilities.”  

Thanks to voice masquerading attack tactics, say hello to the eavesdropping Alexa or Google Home spy in your house. One fake skill switch example described how a user might trigger a new skill, but the old malicious skill is still listening.

In this example masquerading attack on Google Home, the researchers said, “We registered an attack skill that pretends to open target skill ‘United’ on Google Assistant when user tried to open it during the interaction with attack skill.”

They gave additional examples of masquerading attacks based on faking termination of a skill. The skill may seem to be done, but it is still running even after providing “Goodbye” or silence. Alexa and Google Home have a reprompt skill, meaning it will normally continue to listen for a period of time before forcefully terminating a skill.

The researchers, however, developed an extra creepy silent audio file reprompt attack skill which allowed them to ultimately extend the silent eavesdropping for “192 seconds on Alexa and 384 on Google Assistant and indefinitely whenever Alexa or Google Assistant picks up some sound made by the user. In this case, the skill can reply with the silent audio and in the meantime, record whatever it hears.”

Researchers’ mitigations and conclusions

As mentioned previously, these attacks were responsibly disclosed to Google and Amazon. The researchers also came up with ways to mitigate these types of security risks. The researchers concluded:

In this paper, we report the first security analysis of popular VPA ecosystems and their vulnerability to two new attacks, VSA and VMA, through which a remote adversary could impersonate VPA systems or other skills to steal user private information. These attacks are found to pose a realistic threat to VPA IoT systems, as evidenced by a series of user studies and real-world attacks we performed.

To mitigate the threat, we developed a skill-name scanner and ran it against Amazon and Google skill markets, which leads to the discovery of a large number of Alexa skills at risk and problematic skill names already published, indicating that the attacks might already hap-pen to tens of millions of VPA users. Further we designed and implemented a context-sensitive detector to mitigate the voice masquerading threat, achieving a 95% precision.

ms smith

Ms. Smith (not her real name) is a freelance writer and programmer with a special and somewhat personal interest in IT privacy and security issues. She focuses on the unique challenges of maintaining privacy and security, both for individuals and enterprises. She has worked as a journalist and has also penned many technical papers and guides covering various technologies. Smith is herself a self-described privacy and security freak.