
10 ways Siri should become a better assistant in 2019

Apple VP and Siri chief Craig Federighi introduces Siri shortcuts onstage at the San Jose Convention Center June 4, 2018 in San Jose, California.



Artificial intelligence and machine learning now touch our lives in more ways than we can possibly imagine, but if there’s one particularly tangible, ubiquitous example of these technologies, it’s the digital assistant that lives in your smartphone, smart speakers, tablet, and computer. Whether you say “Hey Google,” “Hey Siri,” or “Alexa,” you’re conjuring up an advanced collection of AI and ML tools designed to listen to you, understand you, and do things you ask.

None of these digital assistants is perfect, and even after years of interaction with people, they all have some non-trivial issues to address this year. This week, three VentureBeat writers are spotlighting 10 important issues with the digital assistant they use most. On Wednesday, we focused on Google’s Assistant, with Siri following today and Amazon’s Alexa tomorrow. We hope you enjoy all three articles.

Siri

Apple’s digital assistant Siri had a rough 2018 across almost every possible metric. It was the signature but weakest feature of the HomePod smart speaker, suffered through internal Apple leadership shuffles and exits, and became even more unreliable at performing some of its most basic features. When Apple said it was hiring someone to help executives understand Siri’s problems so the issues could be fixed, my initial thoughts were “Only now? Seriously?” and “Thank god!”

Superficially, Siri is such a train wreck right now that I could write this entire article in one sentence: Throw it out and completely replace it with something else. It doesn’t live up to Apple’s “it just works” standard of performance, and if it were a brand-new product on the cusp of being announced, it would clearly call for more time in the oven.




But Apple’s not going to shut Siri down and swap it with something else. Siri’s an actively used service that’s tied into virtually all of Apple’s operating systems and devices. Instead, the assistant’s going to be patched, repaired, and augmented until it works better — a process that already began last year with Siri Shortcuts and that will no doubt continue with additional tweaks this year.

Here’s my list of 10 ways Apple should improve Siri this year, even if I’m not holding my breath for any of them to happen.

1. 99% response accuracy

Last July, Loup Ventures published a “digital assistant IQ test,” claiming — through seemingly rigorous testing — that Siri on a smartphone understood user inquiries 99% of the time and correctly answered them 78.5% of the time. That was below Google Assistant (85.5% correct answers), but better than Amazon’s Alexa (61.4%) and Microsoft’s Cortana (52.4%). The numbers were similar in a December test of Siri on HomePod: 99.6% understanding and 74.6% correct answers.

Even if Loup’s numbers were generally accurate across real-world Siri usage, the assistant would provide incorrect answers roughly one-fifth to one-fourth of the time. That’s terrible in a world that typically expects Apple (and most non-Apple) devices to work all of the time. After over eight years of development, if Siri understands something, it should be able to answer at least partially, if not fully and correctly. Counting partial answers as correct, a 99% correct response rate — one incorrect answer in 100 responses — should be achievable.
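Loup’s two headline figures, the share of queries understood and the share answered correctly, can also be combined into a conditional accuracy: of the queries Siri actually understands, how many get a right answer? A quick back-of-the-envelope calculation using the July smartphone figures above:

```python
# Back-of-the-envelope math on Loup Ventures' July 2018 smartphone figures.
understood = 0.99   # share of queries Siri parsed correctly
correct = 0.785     # share of queries answered correctly

# Of the queries Siri understood, how many got a correct answer?
print(f"{correct / understood:.1%}")        # ≈ 79.3%

# Wrong answers per 100 queries today vs. at a 99% correct-response target.
print(f"{(1 - correct) * 100:.1f}")         # ≈ 21.5 wrong per 100
print(f"{(1 - 0.99) * 100:.1f}")            # 1.0 wrong per 100
```

In other words, understanding is essentially a solved problem by Loup’s measure; nearly all of the gap to a 99% target comes from answering what was understood.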

In my experience, Siri has actually gone in the opposite direction over the past year. Common queries it used to handle now frequently return wildly incorrect results, which makes using the “digital assistant” pointless. Take one look at Reddit’s SiriFail discussion group and you’ll see why this is the biggest area Apple needs to address, and right away.

2. Offline mode

Siri desperately needs a capable “offline” mode so it can do things even when it’s struggling with an internet connection. As bad as the issue is for iPads, iPod touches, and Apple Watches when they’re out of wireless data range, it even happens with cellular iPhones, and it’s annoying to completely lose Siri functionality for stretches of time.

The problem is that Siri relies almost entirely on distant servers for its responses, which means it can be slowed down, or stopped entirely, by a bad internet connection. Every current Apple device has at least an A8-class chip and enough memory to store a subset of Siri’s features and handle some requests on its own. That’s actually how the iPhone 3GS’ Voice Control feature started, and it was surprisingly reliable almost a decade ago for playing music, making calls, and other basic device-specific features.
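The fallback described above, handling a small set of device-local intents when the network is unavailable, amounts to a simple dispatch rule. A minimal sketch of the idea; the intent names and canned responses here are hypothetical illustrations, not Apple’s actual design:

```python
# Sketch of cloud-first intent handling with a device-local fallback.
# All intent names and responses below are hypothetical illustrations.

LOCAL_INTENTS = {
    "play_music": lambda: "Playing your library on-device.",
    "set_timer": lambda: "Timer set.",
    "call_contact": lambda: "Dialing.",
}

def handle(intent: str, online: bool) -> str:
    if online:
        # Normal path: send everything to the server for a full answer.
        return f"(cloud) full response for {intent!r}"
    if intent in LOCAL_INTENTS:
        # Offline path: degrade gracefully to the on-device subset.
        return LOCAL_INTENTS[intent]()
    return "Sorry, I need an internet connection for that."

print(handle("play_music", online=False))  # works offline
print(handle("web_search", online=False))  # genuinely needs the network
```

The point of the sketch is that only queries that truly require fresh server-side data need to fail offline; device-local actions like playback and calling can always be answered.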

3. Multi-user support via voice-printing

Regardless of whether it’s on a speaker that sits in one room or an iPad that gets handed off to family members, Siri is inevitably going to be controlled by multiple people. It goes without saying that, like rival digital assistants, Siri should be able to use voice prints to distinguish between them.

Moreover, Siri should be able to meet users’ individual needs in at least basic ways. If so requested, each user’s voice print should be linked to a separate iCloud account so Siri can provide correct responses based on a person’s email, contacts, and other data. This isn’t to say that the device needs to store everything from a user’s full iCloud account — just that if a person is asking a question, Siri should be able to know who’s asking and temporarily retrieve information pertinent to the request.
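The per-user behavior described above boils down to matching an incoming utterance’s voice embedding against enrolled profiles, then loading that user’s data. A toy sketch using cosine similarity; real voice prints come from a trained speaker-recognition model, and the vectors and names below are made up for illustration:

```python
import math

# Toy voice-print matching via cosine similarity against enrolled embeddings.
# The vectors and user names are made up; a real system derives embeddings
# from audio with a trained speaker-recognition model.

ENROLLED = {
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.2, 0.8, 0.5],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def identify(embedding, threshold=0.8):
    """Return the best-matching enrolled user, or None if nobody is close."""
    user, score = max(
        ((u, cosine(embedding, v)) for u, v in ENROLLED.items()),
        key=lambda pair: pair[1],
    )
    return user if score >= threshold else None

print(identify([0.85, 0.15, 0.25]))  # close to Alice's print
print(identify([0.0, 0.0, 1.0]))     # no confident match, so None
```

The threshold matters: a confident “no match” should fall back to a generic, non-personal response rather than guessing and exposing the wrong person’s email or contacts.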

4. Alternate Siri names (or gain adjustments)

Going beyond its other Siri-capable devices, Apple noted in its Machine Learning Journal that it took impressive measures to let HomePod’s microphones recognize voice commands through almost any kind of ambient interference. In practice, HomePod is legitimately capable of picking up voices from more than a full room away, even when a TV or other sonic source is nearby — conditions that make Echo and Google speakers struggle to hear commands.

Unfortunately, HomePod’s super hearing can make Siri incredibly annoying. If you say “Hey Siri” to an iPhone or iPad in the same room as a HomePod — or even a room away — you may notice that HomePod intercepts the request intended for another device and perhaps fails to properly address it. I wasn’t entirely surprised to encounter this problem frequently when HomePod was sitting on my office desk, but I later found myself slack-jawed when I moved the speaker into my family room and it picked up my voice from two rooms away.

Letting users rename Siri on a given device would fix this, as would letting users adjust Siri’s sensitivity (gain control) per device so it stops responding outside a wanted radius. This is especially likely to be a problem on HomePods, but the tweak would be useful on other devices, too.

Above: Siri on Apple TV.

Image Credit: Apple

5. Make Siri a consistent experience across devices

A related problem is that HomePod’s Siri can’t do everything Siri can on an iPhone or iPad. Imagine directing a request at one Siri device, only to have it intercepted and answered by another that “can’t handle your request.” That actually happens.

Siri works differently (and barely) on the Apple TV compared with even HomePod and Apple Watch, while macOS and iOS users have a substantially different experience. It’s easy for Apple to say this is in the service of customizing each Siri experience for its host device, but an Apple Watch or Apple TV user may well ask the same question as an iPhone user — and want a good answer rather than a prompt to try the same request on another device. Apart from hardware limitations, such as an Apple TV’s lack of an integrated microphone for phone calls, there shouldn’t be barriers to Siri’s performance.

6. Make Proactive Assistance per-user and useful

For years, iOS devices have been able to offer widget-like personalized summaries of weather, calendar appointments, reminders, news, and other details, all available at a glance — features that Apple referred to as making Siri a more proactive assistant, with an eye to rivaling Google Now.

But Siri’s feature has made little progress since it was rolled out. iOS now serves up all-but-useless “Siri App Suggestions” based on … well, it’s hard to know. At one point, the recommendations appeared to be based on device usage patterns at various times, but now they just look random. The most proactive things Siri does are basics Google mastered years ago: automatically calculating time and distance to the location of your next calendar event and telling you what’s coming up next.

Apple has long used privacy concerns as a crutch to explain why its personal digital assistant doesn’t get too personal or offer much assistance. It’s time for Siri to start acting like an actual concierge, rather than someone who doesn’t want to do any work — even when asked and given the permissions to do so. If you tell Siri it’s okay to sync certain user-specific details across multiple devices or from your iCloud account so it can provide better services to you, it should be able to do so without further questions.

7. Bring Proactive Assistance to speakers

Beyond making Proactive Assistance more useful on screened devices, HomePod should offer an audio version of the feature for any user. This would bring the “digital assistant” concept to life by presenting a summary of key things you should know before starting or ending your day, spoken to you while you’re getting dressed or washing up — rather than forcing you to look at a screen.

8. Give speakers full FaceTime audio capabilities

Apple’s Siri speakers should be able to make audio calls of some sort without depending on an iPhone for service. Google offers users a free Voice telephone number; Amazon similarly offers free phone calling. But Apple requires you to have an iPhone nearby to facilitate calls.

Allowing Siri speakers to initiate and end FaceTime Audio calls on their own would be the bare minimum needed to make Apple’s options competitive. Enabling them to serve as a full telephone alternative for the room they sit in would be much better.

9. Create easier onboarding for Siri Shortcuts

Apple’s implementation of Siri Shortcuts — its alternative to Alexa Skills — gave Siri devices a real chance to expand their capabilities. But the current process of setting up those Shortcuts is very much manual rather than automatic: Users need to dive into either the Shortcuts app or an iOS Settings menu to start using them.


Above: Siri Shortcuts is based upon Workflow, an app Apple acquired in 2017.

Image Credit: Apple

With a user’s consent, a freshly set up Apple device could ask to share a device’s existing “Proactive Assistance” information about recently used apps and use that to surface and set up helpful Siri Shortcuts. There are other ways to do this, such as a “help me set up Shortcuts” voice command, but the end goal should be the same — get more users using more of the existing Shortcuts.

10. Expand Siri to a wider range of speakers and devices

Amazon took a proven approach to creating a lineup of Echo speakers: Start affordable, then add cheaper and more expensive options. The strategy proved so popular with customers that late-to-the-party Apple had to choose whether to copy the Echo family outright or do something different. (Google basically mimicked Amazon’s strategy with Home, then more quickly went to very low and very high price points with Mini and Max.)

A HomePod lineup with options at $199, $299, and $399 would be totally fine; the just-reduced $299 model could fit in the middle, flanked by “small monaural” and “bigger, true stereo” alternatives. It’s unclear whether Apple will actually release the new devices necessary to do this, but having a lineup rather than one speaker would go a long way toward making HomePods more viable in the marketplace and Siri more useful in homes.

I would also be in favor of an Apple TV revision that had microphones built into the box itself, or perhaps as an element of the HDMI cable (for hidden home theater Apple TV installations), in addition to or instead of the remote control. Depending on how this was implemented, Siri could become as useful on an Apple TV as on HomePods and iOS devices.

Final thoughts

For the time being, all of these hardware and software tweaks to improve Siri’s performance can be fairly described as wishful thinking. Even if Apple engineers have been working feverishly behind the scenes, consumer-facing enhancements have been glacial and uneven, more often than not offset by new or deepening problems elsewhere.

Going forward, my biggest hope for Siri is to see it continue to eliminate issues one by one without adding new problems to the list. Regardless of how many people are using the service today, the number who have given up on Siri is much higher than it should be, and it’s a rare example of an Apple offering that’s truly third-rate compared to competitors.