With more users taking to voice-activated devices with assistants such as Amazon Alexa, Google Assistant, Apple's Siri and Microsoft Cortana (the latter on a much smaller scale), a group of researchers from Ruhr-Universität Bochum and the Max Planck Institute for Security and Privacy in Germany claims to have identified over 1,000 words and phrases that accidentally trigger a response from these assistants. The study ties directly into privacy concerns around such speakers, which have repeatedly been found listening in on private or sensitive conversations after being accidentally activated by a mistaken wake phrase. The researchers behind the study believe this is a deliberate engineering trade-off, made to keep smart home products responsive rather than overly strict about what counts as a wake word.
To uncover this, the researchers placed these speakers in a room and played episodes of popular TV shows such as Game of Thrones, House of Cards and Modern Family. While the shows played, the researchers noted every time a speaker's light came on, signalling that it had been activated. They then rewound and replayed the sequences that appeared to trigger the speakers, recording the specific words responsible. The findings may point to improvements sorely needed in smart home products that combine embedded microphones, connectivity, AI and voice recognition.
For Amazon's Alexa, the research found that the assistant was activated by commonly used words such as 'unacceptable', 'election', 'a letter' and 'tobacco'. Apple's Siri appeared to respond to phrases such as 'a city' and 'hey Jerry'. Google Assistant, which is present on most Android smartphones today as well as Google's host of smart home hardware, responded to phrases such as 'OK cool' and 'OK, who's reading?'. Microsoft's Cortana, meanwhile, would respond to being called 'Montana'.
While some of the accidental trigger phrases are outright funny, if phonetically understandable, it is a genuine privacy concern that these keywords can cause the speakers to record unexpected snippets of everyday conversation. A considerable furore erupted when it was revealed that Amazon, Apple, Google and Microsoft each employed third-party human contractors, alongside their AI models, to vet snippets of audio recordings collected from these speakers 'for quality monitoring purposes'. While all of these companies have since claimed that they no longer employ humans or outsource this review work, smart speakers that activate when they are not deliberately called still make for a serious privacy risk.