
Amazon, Microsoft, Salesforce, and others launch initiative to bring multiple voice assistants to devices

Amazon Alexa. Image Credit: Shutterstock



Amazon and more than 30 other industry partners hope to give consumers greater choice in voice services. To this end, they together announced the Voice Interoperability Initiative today, a new program to ensure that voice-enabled products like smart speakers and smart displays provide users with “choice and flexibility” through multiple, interoperable intelligent assistants.

The lengthy list of signatories includes Baidu, BMW, Bose, Cerence, Ecobee, Harman, Logitech, Microsoft, Salesforce, Sonos, Sound United, Sony Audio Group, Spotify, and Tencent; telecommunications operators like Free, Orange, SFR, and Verizon; hardware solutions providers like Amlogic, InnoMedia, Intel, MediaTek, NXP Semiconductors, Qualcomm, SGW Global, and Tonly; and systems integrators like CommScope, DiscVision, Libre, Linkplay, MyBox, Sagemcom, StreamUnlimited, and Sugr. (Notably absent from the list are Apple, Samsung, and Facebook.) All have pledged to adopt similar technological approaches going forward, whether building voice-enabled products or developing voice services and assistants of their own.

For its part, Google says it was contacted about the Voice Interoperability Initiative over the weekend and that it will “need to review the details” before making a commitment. “In general, we’re always interested in participating in efforts that have the broad support of the ecosystem and uphold strong privacy and security practices,” a spokesperson told VentureBeat via email.

The Voice Interoperability Initiative is organized around four core principles, the first of which is developing voice services that work “seamlessly” with others while preserving consumer privacy and security. Additionally, members will seek to build voice-enabled devices that support multiple simultaneous wake words and integrate multiple voice services on a single product. Finally, they’ll work to accelerate machine learning and conversational AI research to improve the breadth and quality of voice services.
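The initiative has not published an API, but the idea of a device supporting multiple simultaneous wake words can be sketched conceptually: the device listens for several wake words at once and routes each utterance to whichever voice service was invoked. The function and service names below are purely illustrative assumptions, not anything specified by the initiative.

```python
# Illustrative sketch only: routing each utterance to the voice service
# whose wake word it begins with. The wake-word-to-service mapping and
# the route_utterance() helper are hypothetical, not a published API.

WAKE_WORDS = {
    "alexa": "Amazon Alexa",
    "cortana": "Microsoft Cortana",
    "einstein": "Salesforce Einstein Voice Assistant",
}

def route_utterance(utterance: str):
    """Return the voice service matching the utterance's wake word, or None."""
    words = utterance.lower().split()
    if not words:
        return None
    # Strip trailing punctuation such as "Alexa," before lookup.
    return WAKE_WORDS.get(words[0].rstrip(",.!?"))
```

In this sketch, saying “Alexa, play music” would reach Amazon Alexa, while “Cortana, what’s on my calendar?” would reach Microsoft Cortana, matching the utterance-by-utterance choice the initiative describes.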




Ultimately, explained Amazon founder and CEO Jeff Bezos in a statement, the goal is to enable customers to enjoy the unique skills and capabilities afforded by each voice service on a range of devices, from Alexa and Cortana to Salesforce’s Einstein Voice Assistant and any number of emerging platforms. To achieve it, participating hardware providers will develop products and services that make it easier for OEMs to support multiple wake words, while they and other companies work with researchers and universities to develop algorithms that allow wake words to run on portable, low-power devices.

“Multiple simultaneous wake words provide the best option for customers,” added Bezos. “Utterance by utterance, customers can choose which voice service will best support a particular interaction. It’s exciting to see these companies come together in pursuit of that vision.”

Amazon says that additional details and compatible devices will be revealed in the coming months.

The Voice Interoperability Initiative’s launch comes a year after Microsoft and Amazon brought Alexa and Cortana to all Echo speakers and Windows 10 users in the U.S., following the formation of a partnership first made public in a 2017 co-announcement featuring Microsoft CEO Satya Nadella and Bezos. Each of the assistants brought distinctive features to the table. Cortana, for example, can schedule a meeting with Outlook or draw on LinkedIn to tell you about people in your next meeting. As for Amazon’s Alexa, it has more than 90,000 voice apps made to tackle a broad range of use cases.

Separately, Facebook recently announced the AI Language Research Consortium, a community of partners the company said will “work together to advance priority research areas” in NLP. Through funding and an annual research workshop, the consortium is intended to foster collaboration on challenging tasks like representation learning, content understanding, dialog systems, information extraction, sentiment analysis, summarization, data collection and cleaning, and speech translation.