Intel’s hardware for accelerating AI computation is finally on its way to customers. The company announced today that its first-generation Neural Network Processor, code-named “Lake Crest,” will soon roll out to a small set of partners to help them drastically accelerate their machine learning workloads.
The NNPs are designed to very quickly tackle the math that underpins artificial intelligence applications, specifically neural networks, a currently popular branch of machine learning. One of the big problems with today’s large, deep neural networks is that they can be very computationally intensive, which makes them harder to test and deploy rapidly.
At first, the NNPs will be released only to a small number of Intel partners, which the company plans to begin outfitting before the end of this year. The hardware is being developed in close collaboration with Facebook, one of the companies trying to push the boundaries of rapid development and testing of neural nets.
Customers will be able to access the NNPs through Intel’s Nervana Cloud service, though the company plans to make the hardware more available in the future, according to Naveen Rao, the vice president and general manager of Intel’s AI products group.
Observers should expect rapid iteration on the new silicon, with a faster release cadence than some of Intel’s other products. Rao said that the current fast-moving nature of the AI field means that customers want new neural network chips with new capabilities as quickly as possible, in contrast to the stability needs for CPUs and other hardware.
“When you’re working with a CPU, there are a lot of expectations put on a CPU,” Rao said in an interview. “And we’re very thoughtful about additions and changes to the CPU architecture. When you’re in a vastly changing field like neural networks, it’s valued more to iterate quickly.”
Three generations of the silicon are currently in flight at Intel, and the company plans to hit at least a yearly cadence for its hardware releases.