Thanks to smart speakers like the Amazon Echo and Google Home, having artificial intelligence-powered devices in your home is no longer the novelty it might have been just a few years ago. Reflecting the devices’ growing worldwide adoption, users are increasingly comfortable talking to their virtual home assistant about the weather or the latest sports scores.
But as these speakers become even more integrated with smart devices like lights and home appliances, a question of safety arises: How secure will home AI devices be for users? While the technology is still in its early stages, security analysts are keeping an eye on several areas of home AI development.
Home AI remains popular
Much of the growth in the home smart speaker market has come from manufacturers opening their hardware systems to third-party developers. On speakers like the Amazon Echo, Google Home, and Apple HomePod, users can do things like play music, program a thermostat to save on energy, and make home intercom calls.
If you’re concerned about the security of your voice command history, speaker manufacturers typically maintain several safeguards for this data. For instance, Amazon has protections that allow you to remotely clear your command history, and its speakers (early bugs aside) only record audio after a user says an activation keyword aloud. However, Candid Wueest, principal threat researcher at Symantec, notes that datacenters are always vulnerable to threats from outside attackers.
“The data is only stored on the backend servers, but it is impossible to judge from the outside how well it is protected,” Wueest said. “Hence, there might still be a risk of a datacenter breach or malware infection getting access to personal data.”
Companies don’t keep your personal voice history just for archiving, though; advertisers have eyed Amazon Alexa and Google Assistant as a potential future avenue for targeted ads. How these sponsorships might look remains to be seen. Amazon has long denied that it would bring ads to Alexa, and Google suffered a minor social media uproar over a daily update that name-checked Disney’s live-action Beauty and the Beast movie. Still, the trove of personal information that AI speakers gather about users will likely play a significant role in the future.
IoT devices and AI speakers
At the moment, the limitations of AI-powered speakers protect homeowners from serious security problems. For example, door locks have become the latest piece of home hardware to connect to your home network, but most either don’t allow you to unlock them verbally or require a spoken PIN first. This safeguard exists in part because AI-powered home speakers can’t flawlessly recognize a user by voice, so companies need to guard against the scenario in which an intruder gets into a home by merely asking a speaker to unlock the door.
That’s not to say AI-powered smart home speakers are safe from outside threats, however. In 2017, The Hacker News reported that nearly 20 million Google Home and Amazon Echo speakers were found to be vulnerable to a Bluetooth-based hacking exploit. Additionally, security researcher Mark Barnes discovered a way to turn an older Amazon Echo into a wiretapping device.
But as proactive as AI-powered speaker manufacturers are about information security, researchers say that much of the potential threat to home users comes from third-party Internet of Things (IoT) devices that connect online and pair with home speakers. Especially as companies add IoT connectivity to major home utilities (such as lights and thermostats), proper security standards for smart home devices will be a long-term concern for home users.
Researchers see this problem as stemming from the differing goals of major AI home speaker manufacturers and IoT device companies. While speaker manufacturers can afford to invest in data security alongside other priorities, makers of IoT devices often prioritize ship dates and product design over security during development.
As a result, the cybersecurity climate for IoT devices has been dismal. According to Symantec, IoT device-based attacks rose by 600 percent last year. These incidents include exploits like cryptojacking — harnessing a device to mine cryptocurrency — and developing botnets for distributed denial of service attacks. For home users, vulnerable IoT devices can also be an entry point for attackers to reach and tamper with an entire AI-connected smart home system and network.
Gary Davis, chief consumer security evangelist at McAfee, says users should follow basic home network security precautions, such as changing default device login credentials and ensuring that their wireless networks use robust security settings. On the manufacturer side, he argues that companies need to take device security as seriously as any other part of the development process.
“It can no longer be viewed as an afterthought,” Davis said. “[These companies] have to sit down while they’re designing and building these products and say ‘What can we be doing with this product to make it [as] secure as it can be?'”
Hilary Thompson is a digital journalist who writes for Yahoo!, Men’s Health, and Training Journal.