Apple’s investing heavily in artificial intelligence (AI). That much was clear from today’s iPhone and Apple Watch unveiling in Cupertino, California.
The new iPhone Xs and iPhone Xs Max boast the A12 Bionic, a 7-nanometer chip that Apple characterized as its “most powerful ever.” It packs a six-core CPU (two performance cores and four efficiency cores), a four-core GPU, and a neural engine — an eight-core dedicated machine learning processor, up from two cores in the A11 — that can perform five trillion operations per second (compared with 600 billion for the last-gen neural engine). Also in tow is a smart compute system that automatically determines whether to run algorithms on the CPU, GPU, neural engine, or a combination of the three.
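On the developer side, Core ML exposes that scheduling choice through MLModelConfiguration, introduced with Core ML 2 in iOS 12. A minimal sketch, assuming a compiled model named "Classifier.mlmodelc" is bundled with the app (the model name is hypothetical):

```swift
import CoreML

// Let Core ML schedule work across the CPU, GPU, and neural engine as
// it sees fit; .cpuOnly or .cpuAndGPU would restrict it instead.
let config = MLModelConfiguration()
config.computeUnits = .all

do {
    // "Classifier.mlmodelc" is a hypothetical compiled model in the app bundle.
    let url = Bundle.main.url(forResource: "Classifier", withExtension: "mlmodelc")!
    let model = try MLModel(contentsOf: url, configuration: config)
    print(model.modelDescription)
} catch {
    print("Failed to load model: \(error)")
}
```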
Apps created with Core ML 2, Apple’s machine learning framework, can crunch numbers up to nine times faster on the A12 Bionic silicon with one-tenth of the power. Those apps launch up to 30 percent faster, too, thanks to algorithms that learn your usage habits over time.
Real-time machine learning-powered features enabled by the new hardware include Siri Shortcuts, which lets users create and run app macros via custom Siri phrases; Memoji, customizable Animoji avatars that can be made to look like you; Face ID; and Apple’s augmented reality toolkit, ARKit 2.0.
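Siri Shortcuts, in particular, depends on developers “donating” in-app actions the system can learn to predict. A hedged sketch using the NSUserActivity route added in iOS 12 (the activity type and invocation phrase here are hypothetical):

```swift
import Intents

// Donate a hypothetical "reorder coffee" action so Siri can learn when
// to suggest it and the user can bind a custom phrase to it.
let activity = NSUserActivity(activityType: "com.example.app.reorder-coffee")
activity.title = "Reorder coffee"
activity.isEligibleForPrediction = true             // opt in to Siri suggestions
activity.suggestedInvocationPhrase = "Coffee time"  // phrase the user can record
// Assigning it to a visible view controller's userActivity property
// donates it to the system:
// viewController.userActivity = activity
```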
The news follows on the heels of Apple’s Core ML 2 announcement this summer.
Core ML 2 is 30 percent faster, Apple said at its Worldwide Developers Conference in June, thanks to a technique called batch prediction. Furthermore, Apple said the toolkit would let developers shrink the size of trained machine learning models by up to 75 percent through quantization.
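Batch prediction shows up in the Core ML 2 API as MLBatchProvider, which hands the framework a whole array of inputs at once instead of looping over single predictions. A minimal sketch, assuming `inputs` is an array of already-prepared feature providers:

```swift
import CoreML

// Run all inputs through one model call; Core ML can then schedule
// the batch more efficiently than one-at-a-time predictions.
func classify(_ inputs: [MLFeatureProvider], with model: MLModel) throws -> [MLFeatureProvider] {
    let batch = MLArrayBatchProvider(array: inputs)
    let results = try model.predictions(from: batch, options: MLPredictionOptions())
    return (0..<results.count).map { results.features(at: $0) }
}
```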
Apple introduced Core ML in June 2017 alongside iOS 11. It lets developers run trained machine learning models on an iPhone or iPad, and convert models from frameworks like XGBoost, Keras, LibSVM, scikit-learn, Caffe, and Facebook’s Caffe2. Core ML is designed to optimize models for power efficiency, and because models run entirely on-device, apps don’t need an internet connection to benefit from them.
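End to end, a converted model is just a file the app loads and queries locally. A hedged sketch, assuming a bundled model "Classifier.mlmodelc" that takes a single numeric feature named "input" (both names are hypothetical):

```swift
import CoreML

do {
    // Load a hypothetical compiled model shipped in the app bundle.
    let url = Bundle.main.url(forResource: "Classifier", withExtension: "mlmodelc")!
    let model = try MLModel(contentsOf: url)

    // Build the input and run inference entirely on-device; no network
    // round-trip is involved.
    let input = try MLDictionaryFeatureProvider(dictionary: ["input": 42.0])
    let output = try model.prediction(from: input)
    print(output.featureNames)
} catch {
    print("Inference failed: \(error)")
}
```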
News of Core ML’s update came shortly after Google announced ML Kit, a machine learning software development kit for Android and iOS, at its I/O 2018 developer conference in May. In December 2017, Google released a tool that converts models built with TensorFlow Lite, its mobile machine learning framework, into Core ML’s model format.
Core ML is expected to play a key role in Apple’s future hardware products.
In a hint at the company’s ambitions, Apple hired John Giannandrea, the former Google executive who oversaw AI-powered features in Gmail, Google Search, and the Google Assistant, to head up its machine learning and AI strategy. It is also looking to hire more than 150 people to staff its Siri team.