Oh, goody, Amazon Alexa and\/or Google Home could be hit with remote, large-scale \u201cvoice squatting\u201d and \u201cvoice masquerading\u201d attacks to steal sensitive user information or eavesdrop on conversations.Third-party skills are what make virtual personal assistants like Alexa so handy; by enabling skills, your interactions with Alexa can be more relevant to your life and what you like. Skills are also what the group of researchers exploited to come up with voice squatting attacks. It\u2019s doubtful that you\u2019d even notice if you were hit with such an attack; unlike the researchers, adversaries are unlikely to have the skill tell you about the hack.Similar invocation name is the first example of voice squatting attack provided by researchers from Indiana University, Bloomington; the University of Virginia; and the Chinese Academy of Sciences. Basically, they hijacked the voice command meant for a different skill since the attack skill trigger sounds very similar to a real and non-malicious skill.Voice squatting attacks on Amazon Alexa and Google HomeThe researchers explained:We registered an attack skill \u201crap game\u201d that has similar invocation name with target skill \u201crat game\u201d on Alexa. We showed that when user tried to invoke \u201crat game,\u201d attack skill \u201crap game\u201d will be invoked instead.We registered an attack skill \u201cintraMatic opener\u201d that has similar invocation name with target skill \u201cEntrematic Opener\u201d on Google Assistant. We showed that when user tried to invoke \u201cEntrematic Opener,\u201d attack skill \u201cintraMatic opener\u201d will be invoked instead.An adversary could also register a skill such as Capital Won, which is what the virtual assistant might hear instead of Capital One if there is background noise or perhaps if English isn\u2019t a person\u2019s native language. Then the attack would be privy to sensitive financial information.But that\u2019s not the only way to pull off voice squatting attacks. The researchers also exploited politeness, although any extra word could be made to work to trigger a malicious skill instead of the intended one. One video example shows what could happen to the above \u201crat game\u201d skill on Alexa if a user were to add \u201crat game please.\u201dThey provided a different example in their research paper titled, \u201cUnderstanding and Mitigating the Security Risks of Voice-Controlled Third-Party Skills on Amazon Alexa and Google Home\u201d (pdf). It reads, \u201cOne may say, 'Alexa, open Capital One please,' which normally opens the skill Capital One, but can trigger a malicious skill Capital One Please once it is uploaded to the skill market.\u201dAs another example of how users\u2019 voice commands could be exploited while interacting with Alexa or Google Home, a person might say \u201cplay some sleep sounds\u201d instead of saying \u201csleep sounds\u201d which would trigger the malicious version.Voice masquerading attacksAnd why open virtual personal assistants to only one type of remote, large-scale attack when the devices could be vulnerable to more? You may recall hearing about voice masquerading-esque attacks by Checkmarx last month, but this group of researchers who responsibly disclosed the vulnerabilities noted, \u201cBoth Amazon and Google acknowledge us as the first one who discovered these vulnerabilities.\u201d \u00a0Thanks to voice masquerading attack tactics, say hello to the eavesdropping Alexa or Google Home spy in your house. One fake skill switch example described how a user might trigger a new skill, but the old malicious skill is still listening.In this example masquerading attack on Google Home, the researchers said, \u201cWe registered an attack skill that pretends to open target skill 'United' on Google Assistant when user tried to open it during the interaction with attack skill.\u201dThey gave additional examples of masquerading attacks based on faking termination of a skill. The skill may seem to be done, but it is still running even after providing \u201cGoodbye\u201d or silence. Alexa and Google Home have a reprompt skill, meaning it will normally continue to listen for a period of time before forcefully terminating a skill.The researchers, however, developed an extra creepy silent audio file reprompt attack skill which allowed them to ultimately extend the silent eavesdropping for \u201c192 seconds on Alexa and 384 on Google Assistant and indefinitely whenever Alexa or Google Assistant picks up some sound made by the user. In this case, the skill can reply with the silent audio and in the meantime, record whatever it hears.\u201dResearchers\u2019 mitigations and conclusionsAs mentioned previously, these attacks were responsibly disclosed to Google and Amazon. The researchers also came up with ways to mitigate these types of security risks. The researchers concluded:In this paper, we report the first security analysis of popular VPA ecosystems and their vulnerability to two new attacks, VSA and VMA, through which a remote adversary could impersonate VPA systems or other skills to steal user private information. These attacks are found to pose a realistic threat to VPA IoT systems, as evidenced by a series of user studies and real-world attacks we performed.To mitigate the threat, we developed a skill-name scanner and ran it against Amazon and Google skill markets, which leads to the discovery of a large number of Alexa skills at risk and problematic skill names already published, indicating that the attacks might already hap-pen to tens of millions of VPA users. Further we designed and implemented a context-sensitive detector to mitigate the voice masquerading threat, achieving a 95% precision.