At its I/O 2018 developer conference last May, Google introduced ML Kit, a cross-platform suite of machine learning tools for its Firebase development platform. The Mountain View company has given it lots of love since, mostly in the form of prebuilt AI models. And from the looks of it, that’s not poised to change anytime soon.
Today at I/O 2019 in San Francisco, Google debuted three new ML Kit capabilities in beta. Starting sometime this afternoon, ML Kit will begin shipping with the on-device Translation API, which will allow developers to use the same offline models that power Google Translate to translate in-app text into 58 languages. Also in tow will be the Object Detection and Tracking API, which will let apps locate and track objects of interest in a live camera feed in real time.
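For a sense of what the two betas look like in code, here's a minimal Kotlin sketch against the beta Firebase ML Kit SDK. The class and method names (FirebaseNaturalLanguage, FirebaseTranslatorOptions, getOnDeviceObjectDetector, and so on) reflect the beta surface as announced and could shift before a stable release:

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.firebase.ml.naturallanguage.FirebaseNaturalLanguage
import com.google.firebase.ml.naturallanguage.translate.FirebaseTranslateLanguage
import com.google.firebase.ml.naturallanguage.translate.FirebaseTranslatorOptions
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import com.google.firebase.ml.vision.objects.FirebaseVisionObjectDetectorOptions

fun translateOffline(text: String) {
    // Configure an English-to-Spanish translator backed by the same
    // offline models that power Google Translate.
    val options = FirebaseTranslatorOptions.Builder()
        .setSourceLanguage(FirebaseTranslateLanguage.EN)
        .setTargetLanguage(FirebaseTranslateLanguage.ES)
        .build()
    val translator = FirebaseNaturalLanguage.getInstance().getTranslator(options)

    // The language pack downloads once; after that, translation runs on-device.
    translator.downloadModelIfNeeded().addOnSuccessListener {
        translator.translate(text)
            .addOnSuccessListener { translated -> Log.d("MLKit", translated) }
    }
}

fun detectObjects(frame: Bitmap) {
    // STREAM_MODE is meant for live camera feeds: detected objects carry
    // a trackingId that stays stable across successive frames.
    val options = FirebaseVisionObjectDetectorOptions.Builder()
        .setDetectorMode(FirebaseVisionObjectDetectorOptions.STREAM_MODE)
        .enableClassification()
        .build()
    val detector = FirebaseVision.getInstance().getOnDeviceObjectDetector(options)

    detector.processImage(FirebaseVisionImage.fromBitmap(frame))
        .addOnSuccessListener { objects ->
            for (obj in objects) {
                Log.d("MLKit", "id=${obj.trackingId} box=${obj.boundingBox}")
            }
        }
}
```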
AutoML Vision Edge, which will let developers easily create custom image classification models in TensorFlow Lite format, will round out the ML Kit additions. Devs will be able to "teach" these models by uploading training data, collected piecemeal or sourced from Google's collaborative open source app, to the console in Google's Firebase development platform, then deploy the trained models on-device via TensorFlow Lite, Google's lightweight machine learning framework for mobile devices.
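Once AutoML Vision Edge exports a model, wiring it into an app might look like the sketch below. The FirebaseAutoMLLocalModel and getOnDeviceAutoMLImageLabeler names come from the beta SDK and may change, and the asset path is a placeholder:

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.automl.FirebaseAutoMLLocalModel
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import com.google.firebase.ml.vision.label.FirebaseVisionOnDeviceAutoMLImageLabelerOptions

fun labelWithCustomModel(photo: Bitmap) {
    // The manifest points at the TFLite model, labels, and metadata
    // exported by AutoML Vision Edge and bundled in the app's assets.
    val localModel = FirebaseAutoMLLocalModel.Builder()
        .setAssetFilePath("automl/manifest.json")
        .build()

    val options = FirebaseVisionOnDeviceAutoMLImageLabelerOptions.Builder(localModel)
        .setConfidenceThreshold(0.7f)  // discard low-confidence labels
        .build()
    val labeler = FirebaseVision.getInstance().getOnDeviceAutoMLImageLabeler(options)

    labeler.processImage(FirebaseVisionImage.fromBitmap(photo))
        .addOnSuccessListener { labels ->
            labels.forEach { Log.d("MLKit", "${it.text}: ${it.confidence}") }
        }
}
```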
For the uninitiated, ML Kit uses the Neural Networks API on Android devices and leverages the power of Google Cloud Platform's machine learning technology for "enhanced" accuracy. In addition to the three APIs announced today, it comprises prebuilt models for text recognition, face detection, barcode scanning, image labeling, landmark recognition, and other applications.
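Those prebuilt models all follow the same basic pattern: wrap the input in a FirebaseVisionImage, grab a detector from FirebaseVision, and process it asynchronously. A rough text recognition sketch to illustrate, matching the shipping SDK as of this writing but best treated as illustrative:

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

fun recognizeText(bitmap: Bitmap) {
    val recognizer = FirebaseVision.getInstance().onDeviceTextRecognizer
    recognizer.processImage(FirebaseVisionImage.fromBitmap(bitmap))
        .addOnSuccessListener { result ->
            // result.textBlocks nests blocks -> lines -> elements, each
            // with its own bounding box and recognized text.
            Log.d("MLKit", result.text)
        }
        .addOnFailureListener { e -> Log.e("MLKit", "recognition failed", e) }
}
```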
In October, Google enhanced ML Kit’s face detection API with face contours in beta, which enable apps to detect over 100 detailed points in and around a user’s face and overlay masks, accessories (on facial features like ears, eyes, nose, mouth, and so on), or beautification elements (like skin smoothing and coloration).
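In code, enabling contours is a one-line option on the face detector. A hedged Kotlin sketch, with UPPER_LIP_TOP standing in for any of the supported contour types:

```kotlin
import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import com.google.firebase.ml.vision.face.FirebaseVisionFaceContour
import com.google.firebase.ml.vision.face.FirebaseVisionFaceDetectorOptions

fun detectFaceContours(selfie: Bitmap) {
    val options = FirebaseVisionFaceDetectorOptions.Builder()
        .setContourMode(FirebaseVisionFaceDetectorOptions.ALL_CONTOURS)
        .build()
    val detector = FirebaseVision.getInstance().getVisionFaceDetector(options)

    detector.detectInImage(FirebaseVisionImage.fromBitmap(selfie))
        .addOnSuccessListener { faces ->
            for (face in faces) {
                // Each contour is a list of points tracing a facial feature;
                // an AR mask or beautification filter would anchor to these.
                val upperLip =
                    face.getContour(FirebaseVisionFaceContour.UPPER_LIP_TOP).points
            }
        }
}
```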
More recently, Google introduced the language identification API, which can discern which of 103 different languages a string of text is written in. It debuted alongside Smart Reply, an on-device natural language processing model that suggests text responses based on the last 10 exchanged messages. (It's worth noting that Smart Reply has made its way into Gmail, Hangouts Chat for G Suite, and Google Assistant on smart displays and smartphones.)
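Both APIs live under the same natural language entry point. A rough sketch, with the message contents and user ID invented for illustration:

```kotlin
import android.util.Log
import com.google.firebase.ml.naturallanguage.FirebaseNaturalLanguage
import com.google.firebase.ml.naturallanguage.smartreply.FirebaseTextMessage

fun identifyAndSuggest() {
    // Language identification returns a BCP-47 code, or "und" when unsure.
    FirebaseNaturalLanguage.getInstance().languageIdentification
        .identifyLanguage("Bonjour tout le monde")
        .addOnSuccessListener { code -> Log.d("MLKit", "language: $code") }

    // Smart Reply takes recent conversation turns and proposes responses.
    val conversation = listOf(
        FirebaseTextMessage.createForRemoteUser(
            "Are we still on for dinner tonight?",
            System.currentTimeMillis(),
            "friend-123"  // hypothetical remote user ID
        )
    )
    FirebaseNaturalLanguage.getInstance().smartReply
        .suggestReplies(conversation)
        .addOnSuccessListener { result ->
            result.suggestions.forEach { Log.d("MLKit", it.text) }
        }
}
```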
Custom models trained with TensorFlow Lite can be deployed with ML Kit via the Firebase console. Developers have the option of decoupling machine learning models from apps and serving them at runtime, shaving megabytes off of app install sizes and ensuring models always remain up to date.
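In practice, serving a hosted model at runtime looks roughly like this; the custom-model classes have been renamed across SDK releases, so take the names (FirebaseCustomRemoteModel, FirebaseModelManager) as one iteration of the API, and the model name as a placeholder:

```kotlin
import com.google.firebase.ml.common.modeldownload.FirebaseModelDownloadConditions
import com.google.firebase.ml.common.modeldownload.FirebaseModelManager
import com.google.firebase.ml.custom.FirebaseCustomRemoteModel

fun fetchHostedModel() {
    // "my_model" is a hypothetical name assigned in the Firebase console.
    val remoteModel = FirebaseCustomRemoteModel.Builder("my_model").build()
    val conditions = FirebaseModelDownloadConditions.Builder()
        .requireWifi()  // only pull model updates over Wi-Fi
        .build()

    // The model ships separately from the APK and is fetched at runtime,
    // so installs stay small and Firebase can serve updated versions.
    FirebaseModelManager.getInstance().download(remoteModel, conditions)
        .addOnSuccessListener { /* hand the model to a TFLite interpreter */ }
}
```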
ML Kit works with Firebase features like A/B testing, which lets users test different machine learning models dynamically, and Cloud Firestore, which stores image labels and other data.
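A sketch of how that A/B testing hookup might work, with the Remote Config parameter name invented for illustration:

```kotlin
import com.google.firebase.ml.custom.FirebaseCustomRemoteModel
import com.google.firebase.remoteconfig.FirebaseRemoteConfig

fun loadExperimentModel() {
    // A hypothetical A/B test assigns each user a Remote Config value,
    // "ml_model_name", naming one of two hosted model variants.
    val config = FirebaseRemoteConfig.getInstance()
    config.fetchAndActivate().addOnCompleteListener {
        val modelName = config.getString("ml_model_name")
        val model = FirebaseCustomRemoteModel.Builder(modelName).build()
        // Download and run the model as in the previous snippet, then log
        // outcome metrics so the experiment can declare a winner.
    }
}
```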