Google today launched TensorFlow Lite to give app developers the ability to deploy AI on mobile devices. The mobile version of Google’s popular open source AI framework was first announced at the I/O developer conference in May.
TensorFlow Lite is available to both Android and iOS app developers.
Since the debut of TensorFlow Lite in May, several competing products have emerged, including Core ML from Apple, Clarifai’s cloud service for training AI on a mobile device, and hardware like the Kirin 970 AI processor inside the Huawei Mate 10 smartphone.
“As you may know, TensorFlow already supports mobile and embedded deployment of models through the TensorFlow Mobile API,” the TensorFlow team wrote in a blog post today. “Going forward, TensorFlow Lite should be seen as the evolution of TensorFlow Mobile, and as it matures it will become the recommended solution for deploying models on mobile and embedded devices. With this announcement, TensorFlow Lite is made available as a developer preview, and TensorFlow Mobile is still there to support production apps.”
TensorFlow Lite launches with access to a limited number of pre-trained AI models, such as MobileNet and Inception-v3 for object identification with computer vision, and Smart Reply, a natural language processing model that suggests responses and is already deployed in Google offerings such as Gmail and the chat app Allo.
Custom models trained with users’ own datasets can also be deployed.
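For Android developers, deploying one of these models amounts to bundling a .tflite file with the app and feeding it to the TensorFlow Lite interpreter. The sketch below is a minimal example, assuming a floating-point MobileNet-style classifier with a 224×224 RGB input and a 1,001-class output; the file name and tensor shapes are illustrative, and the exact interpreter surface may differ between the developer preview and later releases of the org.tensorflow.lite Android library.

```kotlin
import android.content.Context
import java.io.FileInputStream
import java.nio.MappedByteBuffer
import java.nio.channels.FileChannel
import org.tensorflow.lite.Interpreter

// Memory-map a .tflite model bundled in the app's assets folder.
// "mobilenet_v1_224.tflite" is a placeholder for whichever pre-trained
// or custom model the app ships.
fun loadModel(context: Context, assetPath: String): MappedByteBuffer {
    context.assets.openFd(assetPath).use { fd ->
        FileInputStream(fd.fileDescriptor).channel.use { channel ->
            return channel.map(FileChannel.MapMode.READ_ONLY, fd.startOffset, fd.declaredLength)
        }
    }
}

fun classify(context: Context): FloatArray {
    val interpreter = Interpreter(loadModel(context, "mobilenet_v1_224.tflite"))

    // MobileNet-style input: one 224x224 RGB image as floats.
    // In a real app this would be filled from a camera frame or bitmap.
    val input = Array(1) { Array(224) { Array(224) { FloatArray(3) } } }

    // MobileNet-style output: confidence scores over 1,001 classes.
    val output = Array(1) { FloatArray(1001) }

    interpreter.run(input, output)
    interpreter.close()
    return output[0]
}
```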
More models and functionality will be added in the future based on demonstrated user need, the TensorFlow team said in the blog post.
TensorFlow Lite uses the Android Neural Networks API for hardware acceleration and falls back to CPU execution when accelerator hardware is not available, ensuring that models can still run on a range of devices.
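In the Android library, that acceleration path is requested when the interpreter is constructed. A hedged sketch, assuming the Interpreter.Options API found in later releases of the org.tensorflow.lite package (the developer-preview surface may differ): supported operations are handed to the Neural Networks API driver, and anything unsupported, or the whole model on devices without an accelerator, runs on the built-in CPU kernels.

```kotlin
import java.nio.MappedByteBuffer
import org.tensorflow.lite.Interpreter

// Build an interpreter that requests NNAPI acceleration where the device
// supports it; TensorFlow Lite's CPU kernels are used otherwise, so the
// same model file runs across a range of devices.
fun buildInterpreter(model: MappedByteBuffer): Interpreter {
    val options = Interpreter.Options()
        .setUseNNAPI(true)   // prefer the Android Neural Networks API when available
        .setNumThreads(4)    // the CPU path uses multi-threaded kernels
    return Interpreter(model, options)
}
```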