Apple debuts Core ML 3 with on-device machine learning

Apple today introduced Core ML 3, the latest iteration of its machine learning model framework for iOS developers who bring machine intelligence to smartphone apps. For the first time, Core ML 3 will support on-device model training, letting iOS apps deliver personalized experiences. A new Create ML app on macOS will also let developers train multiple models with different data sets for applications like object detection and sound identification.
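As a rough sketch, on-device personalization in Core ML 3 centers on the new `MLUpdateTask` API, which fine-tunes an updatable model locally so user data never leaves the device. The file paths and the empty training set below are hypothetical placeholders, not part of Apple's announcement:

```swift
import CoreML

// Hypothetical locations: a compiled, updatable model bundled with the app,
// and a writable path for the personalized copy.
let bundledModelURL = URL(fileURLWithPath: "/path/to/Updatable.mlmodelc")
let personalizedModelURL = URL(fileURLWithPath: "/path/to/Personalized.mlmodelc")

// Training examples gathered on the device, wrapped in a batch provider.
// In a real app, `featureProviders` would be built from the user's own data.
let featureProviders: [MLFeatureProvider] = []  // placeholder
let trainingData = MLArrayBatchProvider(array: featureProviders)

do {
    // Kick off an update task that retrains the model on-device; the
    // completion handler receives the updated model to persist.
    let updateTask = try MLUpdateTask(
        forModelAt: bundledModelURL,
        trainingData: trainingData,
        configuration: MLModelConfiguration(),
        completionHandler: { context in
            // Save the personalized model for future predictions.
            try? context.model.write(to: personalizedModelURL)
        }
    )
    updateTask.resume()
} catch {
    print("Could not start model update: \(error)")
}
```

Because training runs locally, this pattern pairs naturally with the privacy argument Apple makes for on-device machine learning.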

Apple’s machine learning framework will be able to support more than 100 model layer types.

On-device machine learning is growing in popularity as a way to deploy models quickly at the edge while respecting user privacy. In recent months, popular frameworks like Google’s TensorFlow and Facebook’s PyTorch have gained solutions for on-device machine learning through approaches like federated learning.

Today’s news was announced at Apple’s Worldwide Developers Conference (WWDC) being held this week in San Jose, California.

VentureBeat has reached out to an Apple spokesperson for more details about Core ML 3, and we will update this story as they become available.

Also announced today: watchOS 6 with Voice Memos and menstrual cycle tracking, iOS 13 with a more expressive Siri and personalized results on Apple’s HomePod, a modular Mac Pro on wheels, and the ability to control Apple’s tvOS with PlayStation and Xbox controllers.

Apple uses the Core ML framework internally for features like Siri and the QuickType keyboard, while third-party developers rely on it for tasks like language learning in the Memrise app and photo predictions in the Polarr editing app.

At last year’s WWDC, Apple introduced Core ML 2, a framework the company said was 30% faster than its predecessor, along with Create ML, a GPU-accelerated framework for training custom AI models with Xcode and the Swift programming language. The initial Core ML framework was introduced at WWDC in 2017 and incorporated into iOS 11. Capabilities that work right out of the box include Apple’s Vision API and Natural Language framework.

Unlike Google’s ML Kit, which serves both Android and iOS developers, Core ML is made exclusively for developers creating apps for Apple’s operating systems. Google added Core ML support to its TensorFlow Lite tooling back in late 2017.

More to come.

Apple WWDC 2019: Click Here For Full Coverage