Presented by Qualcomm Technologies, Inc.
How are researchers working to advance on-device AI and speed up adoption? Join this VB Live event to learn about the exciting breakthroughs in AI on the edge already pushing the boundaries of what’s possible and helping to shape the future of AI.
On-device AI is the next evolutionary leap for artificial intelligence, says Jilei Hou, senior director of engineering at Qualcomm Technologies, Inc., and it’s big news for businesses and entrepreneurs. Gartner predicts that by 2022, 80 percent of smartphones shipped will have on-device AI capabilities — a leap up from just 10 percent in 2017. And the share of AI tasks that move from the cloud to run on edge devices instead will balloon more than sevenfold, from 6 percent in 2017 to 43 percent in 2023.
“One of our company-wide missions is making sure on-device AI is ubiquitous,” Hou says.
Taking the cloud out of the equation and enabling devices to process data locally reduces latency, boosts privacy, and increases reliability and efficiency. AI on the edge can analyze and learn from its environment in real time, unlocking the kind of use cases that used to look like science fiction across industries: automotive, mobile devices, wearables, robotics, smart homes, smart manufacturing, smart retail, smart video, and smart buildings.
To make that goal happen, a few aspects come into the research picture, says Hou. Compared to the cloud, one important factor is the very limited compute resources available on device.
“We’re still confined by the area and the power constraints we have,” he explains. “But in such a limited space we still have to provide a great user experience, allowing the use cases to perform in real time in a very smooth manner.”
One of the most compelling use cases on-device AI enables is advances in voice UI, Hou says, particularly around user verification: recognizing who is speaking and reacting accordingly.
“Your sound, your pitch, your tone changes constantly and in response to your environment,” Hou says. “To achieve the best performance possible we need to conduct what we call continuous learning. We can allow the model itself to adapt to sound changes over different seasons or different temperatures, or even just moisture in the weather.”
This kind of self-adaptation is one of the most important research areas for on-device computing, says Hou. If you want a model to adapt to different user tones, pitches, acoustic environments, or user preferences, the model itself needs enough generalizability to adapt in a very efficient way.
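The continuous learning Hou describes can be sketched minimally. Assume a speaker-verification system that stores one enrollment embedding per user; each accepted utterance nudges the stored profile toward the new sample, so the model tracks gradual changes in voice or environment. The class, names, and exponential-moving-average rule below are illustrative assumptions, not Qualcomm’s actual technique.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class SpeakerProfile:
    """Stores one user's voice embedding and adapts it over time."""

    def __init__(self, enrollment_embedding, threshold=0.7, alpha=0.05):
        self.embedding = list(enrollment_embedding)
        self.threshold = threshold  # minimum similarity to accept the speaker
        self.alpha = alpha          # adaptation rate for accepted utterances

    def verify(self, utterance_embedding):
        """Return True if the utterance matches the profile; on a match,
        drift the stored embedding slightly toward the new sample."""
        score = cosine(self.embedding, utterance_embedding)
        if score < self.threshold:
            return False
        # Exponential moving average: adapt only on confident matches,
        # so an impostor cannot slowly hijack the profile.
        self.embedding = [
            (1 - self.alpha) * p + self.alpha * u
            for p, u in zip(self.embedding, utterance_embedding)
        ]
        return True
```

Adapting only on confident matches is what keeps the scheme efficient on device: there is no retraining pass, just a vector update per accepted utterance.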
Another essential focus is multitasking in autonomous use cases. For example, in object detection, you want your model to recognize every type of object, he explains.
“I don’t want to have models that can only look at, say, humans,” he says. “I need models for cars. I need models for road infrastructure. Ideally, I want all the use cases combined into a single model. That’s why it comes down to model generalizability.”
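The single-model goal Hou describes is commonly realized as one shared backbone feeding lightweight per-task heads, so features are computed once per image and reused. The toy class below is a stand-in sketch under that assumption — the “backbone,” task names, and linear heads are all hypothetical, not an actual detection architecture.

```python
class MultiTaskDetector:
    """One model, many tasks: a shared backbone computes features once,
    and lightweight per-task heads reuse them."""

    def __init__(self, head_weights):
        # head_weights: task name -> weight vector producing one score per task
        self.head_weights = head_weights

    def backbone(self, image):
        # Stand-in for a shared CNN: reduces the input to a small feature vector.
        return [sum(image) / len(image), max(image), min(image)]

    def forward(self, image):
        features = self.backbone(image)  # computed once, shared by all heads
        return {
            task: sum(w * f for w, f in zip(weights, features))
            for task, weights in self.head_weights.items()
        }

# Usage: three task heads (people, vehicles, road infrastructure) share one backbone.
detector = MultiTaskDetector({
    "person": [1, 0, 0],
    "vehicle": [0, 1, 0],
    "road": [0, 0, 1],
})
scores = detector.forward([0.2, 0.8, 0.1])  # one dict of scores, one forward pass
```

The design point is the compute budget: on an edge device, running one backbone for N tasks costs far less than running N separate models.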
It also boils down to model interpretability. For mission-critical use cases, whether decision-making in autonomous driving or assisted diagnosis in medical imaging, a model that can provide a certain amount of transparency into how it arrives at a classification or recommendation will be hugely beneficial to companies interested in adopting these models in their own industries.
The next step for the AI and deep learning community, Hou says, is model prediction and reasoning, or using logic to predict the future, in a way very similar to how humans are able to size up a scene or a collection of data points and understand not just the content, but the past and present context, as well as what might happen next.
Hou and his team are fired up about the advances they’re making in their research, from quantization techniques that allow companies to efficiently optimize their current AI models for edge computing use cases, to geometric deep learning, which allows AI to understand 3D space and objects with unpredictable surfaces, to computer vision techniques that can take data from radar and ultrasonic sensors and transform it into images.
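The quantization work mentioned above can be illustrated with a generic post-training recipe: map 32-bit float weights onto 8-bit integers via a scale and zero point, then dequantize to see how little accuracy is lost. This is a textbook asymmetric uniform quantizer, offered as a sketch rather than Qualcomm’s specific technique.

```python
def quantize(weights, num_bits=8):
    """Asymmetric uniform quantization: floats -> ints in [0, 2^bits - 1]."""
    qmin, qmax = 0, 2 ** num_bits - 1
    wmin, wmax = min(weights), max(weights)
    scale = (wmax - wmin) / (qmax - qmin)          # float value of one int step
    zero_point = round(qmin - wmin / scale)        # int that represents 0.0
    q = [max(qmin, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the quantized ints."""
    return [(qi - zero_point) * scale for qi in q]

# Round-trip a small weight vector and measure the quantization error.
weights = [-1.0, 0.0, 0.5, 1.0]
q, scale, zero_point = quantize(weights)
recovered = dequantize(q, scale, zero_point)
max_error = max(abs(w - r) for w, r in zip(weights, recovered))
```

Storing 8-bit integers instead of 32-bit floats cuts model size by about 4x and lets integer hardware do the math, which is exactly the power-and-area trade-off Hou describes for on-device inference.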
“Our research is helping apply deep learning technology into a much greater number of use cases,” Hou says. “Those are just three of the exciting research areas we’ve recently investigated, and we believe they’re going to make a lot of impact on the industry.”
For more of the “very cool” technical use case directions Hou and his team are pursuing, a look at the tools, techniques, and research findings that are helping to make on-device AI pervasive, and insight into how companies can take their products to the next level, don’t miss this VB Live event!
Don’t miss out!
In this webinar, we’ll discuss:
- Several research topics across the entire spectrum of AI, such as generalized CNNs and deep generative models
- AI model optimization research for power efficiency, including compression, quantization, and compilation
- Advances in AI research to make AI ubiquitous
Speakers:
- Jilei Hou, Senior Director, Engineering, Qualcomm Technologies, Inc.
- Jack Gold, Founder & Principal Analyst, J. Gold Associates