2020-09-08
Amazon Alexa, Google Assistant and other digital voice assistants are rapidly moving from the home to the workplace. Along with the many potential benefits, there are concerns to consider.
The idea of asking a voice assistant to set reminders, answer questions, and remember ideas is compelling. I can see using them in meetings and training events. At work, I use Alexa on my Amazon Echo for some of the research when I write, and I similarly use Google Assistant on my phone for quick information searches such as flight schedules and exchange rates. Organizations are also developing custom applications for the assistants to perform HR functions, among other tasks. Having written a "skill" (a custom voice application) for Alexa, I can genuinely see the benefits, but I am also wary of the security implications.
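For readers who have not built one: a skill is essentially a small web service, often an AWS Lambda function, that receives a transcribed request and returns text for the device to speak. The sketch below shows roughly the shape of one, using the ask-sdk-core Python library; the greeting text and handler name are my own illustrative choices, not anything from Amazon's documentation.

```python
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_request_type


class LaunchRequestHandler(AbstractRequestHandler):
    """Handles the request sent when a user opens the skill by voice."""

    def can_handle(self, handler_input):
        return is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        speech = "Welcome. Ask me a research question."
        return (
            handler_input.response_builder
            .speak(speech)                  # what the device says aloud
            .set_should_end_session(False)  # keep listening for the question
            .response
        )


sb = SkillBuilder()
sb.add_request_handler(LaunchRequestHandler())

# Entry point when the skill is deployed as an AWS Lambda function.
lambda_handler = sb.lambda_handler()
```

Note that `set_should_end_session(False)` line: the developer, not the user, decides whether the microphone stays open after the device finishes speaking. That detail is at the heart of the first attack below.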
Gizmodo explained last October how developers could use some very specific techniques to keep the digital assistant devices listening after users believe their interactions with them are over. The article includes videos demonstrating how a developer could send an inaudible reprompt to a device and then wait for a response, transcribing and saving text all the while. The device could even ask for "confirmation" of a user's Amazon or Google password, thus implementing a phishing attack! I can imagine one recording a brainstorming session or other meeting.
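The reported mechanism is deceptively simple: a skill's response can end with a "reprompt" that keeps the session, and thus the microphone, open, and if that reprompt is built from characters the speech engine cannot pronounce, the device listens in silence. The fragment below sketches the shape of such a response in ask-sdk-core; it is illustrative only, not a working exploit, the placeholder string is my own, and the platforms have since tightened their skill-review processes against this trick.

```python
# Illustrative sketch of the reported eavesdropping trick; the actual
# unpronounceable sequence used in the research is not reproduced here.
UNPRONOUNCEABLE = "\u00a0. "  # placeholder standing in for the real sequence


def handle(handler_input):
    return (
        handler_input.response_builder
        .speak("Goodbye!")        # the user hears a normal sign-off
        .ask(UNPRONOUNCEABLE)     # the reprompt keeps the session open...
        .response                 # ...but the device has nothing to say aloud
    )
```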
It may seem as though users would be safe if they only invoked skills they trusted. Unfortunately, a skill can be started without any sound at all: researchers have shown that one can modulate a laser to trigger the microphone on a digital voice assistant. A website describing the vulnerability shows how simple and inexpensive it is to exploit. For around USD 400 one can build a rig that triggers the assistant from, in some demonstrations, more than 50 meters away. A line of sight to the assistant is needed, but that is often easy to obtain in urban areas.
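The physics is worth a moment: the MEMS microphones in these devices respond to fluctuations in light intensity as if they were sound pressure, so the attacker amplitude-modulates the laser with the audio of a voice command. The toy Python sketch below shows that modulation step; the bias and depth values are my own illustrative numbers, not parameters from the published research.

```python
import numpy as np

SAMPLE_RATE = 44_100  # audio sample rate in Hz


def am_modulate(audio: np.ndarray, bias: float = 0.5, depth: float = 0.4) -> np.ndarray:
    """Map an audio waveform onto a laser intensity signal.

    The laser must stay on (intensity > 0), so the audio rides on a
    DC bias: intensity = bias + depth * audio.
    """
    audio = audio / np.max(np.abs(audio))        # normalize to [-1, 1]
    return np.clip(bias + depth * audio, 0.0, 1.0)


# Example: a 440 Hz test tone standing in for a recorded voice command.
t = np.linspace(0, 1.0, SAMPLE_RATE, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)
intensity = am_modulate(tone)  # drive signal for the laser diode current
```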
There are countermeasures to these attacks:
- Only use skills you trust. This can be difficult, since in theory anyone within earshot (or wielding a laser) can enable a potentially risky skill on an Amazon Echo, Google Home, or other device by voice. Where the platform allows it, that capability should be disabled and the devices otherwise secured.
- Be aware that users can activate the assistants through apps on tablets, laptops, or smartphones. In a BYOD (Bring Your Own Device) environment, policy should at a minimum prohibit their use in company-confidential situations.
- Ensure that the devices are positioned so they cannot be seen through an exterior (or, for that matter, interior) window.
- Educate users on the appropriate uses for digital voice assistants and their associated risks.
- Develop, enforce, and audit a device and application policy.
We discuss various attacks and countermeasures in Learning Tree's cyber security introduction course.
I think these devices are a positive development, particularly for individual use and for training. I can see them assisting with needed research in both settings. But I am not oblivious to the risks. I hope these tools evolve to be both more secure and even more useful.