Microsoft announced a new set of tools for developers aimed at helping them build Windows apps that leverage hardware acceleration for AI. The WinML API will take in a trained machine learning model and optimize its execution based on the hardware available on host devices.
That way, developers can rely on local processing for their applications’ AI algorithms, running them as fast as users’ Windows hardware allows. For example, the API will make it possible for developers to use GPUs to speed up AI calculations without a whole lot of heavy lifting.
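To give a concrete sense of what that looks like in code, here is a minimal sketch in C++/WinRT using the Windows.AI.MachineLearning API as it later shipped in Windows 10; the model file name is a placeholder, and class names in the earlier preview SDK may have differed.

```cpp
#include <winrt/Windows.AI.MachineLearning.h>

using namespace winrt;
using namespace winrt::Windows::AI::MachineLearning;

int main()
{
    init_apartment();

    // Load a trained model that has been exported to the ONNX format
    // (the file name here is just a placeholder).
    LearningModel model = LearningModel::LoadFromFilePath(L"model.onnx");

    // Ask for a DirectX (GPU) device; Windows ML takes care of the
    // hardware-specific details of running the model there.
    LearningModelDevice device(LearningModelDeviceKind::DirectX);

    // The session compiles the model for the chosen device and is what
    // the app reuses for every subsequent inference.
    LearningModelSession session(model, device);
}
```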
Doing local AI computation is important for a few reasons. First, it can be faster than phoning home to a cloud service, since there’s no network latency involved. Second, customers can get intelligent results without any network connection, which is important for applications that run in isolated environments. Third, some applications aren’t suited to cloud processing because of compliance concerns, and this API makes it possible to run them quickly without sending data off the device.
Microsoft is following in Apple’s and Google’s footsteps here. Both companies offer their own tools for developers to run hardware-optimized AI computation on iOS and Android, respectively, for much the same reasons Microsoft is offering this Windows ML API.
Right now, developers can use programming libraries like Nvidia’s CUDA to optimize their AI applications on PCs, but those libraries are specific to individual hardware components. Targeting Microsoft’s new API will let developers write their code once and have it work across a broad spectrum of different devices, even if some are better equipped to run fast AI computation than others.
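To illustrate that write-once idea, here is a hedged sketch of a single inference routine in which the hardware target is just an enum value passed by the caller. It again uses the Windows.AI.MachineLearning names as they later shipped, and the tensor shape and the "input"/"output" feature names are placeholders that a real model would declare itself.

```cpp
#include <winrt/Windows.AI.MachineLearning.h>
#include <winrt/Windows.Foundation.Collections.h>
#include <vector>

using namespace winrt;
using namespace winrt::Windows::AI::MachineLearning;

// Run a single inference on whichever device kind the caller picks:
// Default lets Windows choose, Cpu forces the CPU path, and DirectX
// targets a GPU if one is available. The inference code itself does
// not change across those choices.
TensorFloat RunOnce(LearningModel const& model,
                    LearningModelDeviceKind kind,
                    std::vector<float> const& input)
{
    LearningModelSession session(model, LearningModelDevice(kind));
    LearningModelBinding binding(session);

    // "input"/"output" and the shape are placeholders; a real model
    // declares its own feature names and tensor dimensions.
    std::vector<int64_t> shape{ 1, 3, 224, 224 };
    binding.Bind(L"input", TensorFloat::CreateFromArray(shape, input));

    auto results = session.Evaluate(binding, L"run");
    return results.Outputs().Lookup(L"output").as<TensorFloat>();
}
```

Passing LearningModelDeviceKind::Default lets Windows pick the best available hardware for the session, so the same routine runs on a machine with a capable GPU and on one without.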
That’s especially important considering Microsoft’s push for Windows PCs powered by ARM processors. Some of Qualcomm’s chips already contain AI accelerators, and this new API will help developers tap into that without doing extra work.
The API is expected to arrive in consumers’ hands with the next major release of Windows 10. It will work across Windows applications, so developers building Universal Windows Platform and Win32 apps alike will be able to take advantage of the new capabilities.
Microsoft’s system takes in trained networks that have been exported to the Open Neural Network Exchange (ONNX) format. ONNX was originally created by Microsoft and Facebook, and has been adopted by a number of other companies and projects in the months since its launch. The company will be releasing a set of Windows ML Tools designed to convert models built with popular open source frameworks into the ONNX format.
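As a rough sketch of what a converted model looks like from the app side, the snippet below opens an ONNX file and lists the input and output features its graph declares, which are what an application later binds its data to. The file name is illustrative, and the API shown is, again, the one that later shipped.

```cpp
#include <winrt/Windows.AI.MachineLearning.h>
#include <winrt/Windows.Foundation.Collections.h>
#include <iostream>

using namespace winrt;
using namespace winrt::Windows::AI::MachineLearning;

int main()
{
    init_apartment();

    // Open a converted ONNX file (the name is illustrative) and list the
    // input and output features its graph declares, i.e. the names an
    // app binds its tensors to at run time.
    LearningModel model = LearningModel::LoadFromFilePath(L"converted_model.onnx");

    for (auto const& feature : model.InputFeatures())
    {
        std::wcout << L"input:  " << feature.Name().c_str() << std::endl;
    }
    for (auto const& feature : model.OutputFeatures())
    {
        std::wcout << L"output: " << feature.Name().c_str() << std::endl;
    }
}
```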