Accelerator chips that use light rather than electrons to carry out computations promise to supercharge AI model training and inference. In theory, they could process algorithms at the speed of light — dramatically faster than today’s speediest logic-gate circuits — but so far, light’s unpredictability has foiled most attempts to emulate transistors optically.
Boston-based Lightelligence, though, claims it’s achieved a measure of success with its optical AI chip, which today debuts in prototype form. The company says latency is up to 10,000 times lower than on traditional hardware, and it estimates power consumption at “orders of magnitude” less.
The technology underpinning it has its origins in a 2017 paper coauthored by CEO Yichen Shen. Shen — then a Ph.D. student studying photonic materials at MIT under Marin Soljacic, a professor at MIT’s Department of Physics who runs the school’s photonics and modern electro-magnetics group — published research in the journal Nature Photonics describing a novel way to perform neural-network workloads using optical interference.
Lightelligence was founded months later, and Soljacic was one of the first to join its board of directors.
“A student like Yichen only comes through rarely in a Professor’s career, even at MIT. Yichen is a real visionary and a pioneer in this field of using integrated optics for AI,” said Soljacic.
The chip in question — which is about the size of a printed circuit board — packs photonic circuits, similar to optical fibers, that transmit signals. It requires only limited energy, because light produces less heat than electricity, and it is less susceptible to changes in ambient temperature, electromagnetic fields, and other noise. It’s designed to slot into existing machines at the network edge, like on-premises servers, and will eventually ship with a software stack compatible with algorithms written in commonly used frameworks such as Google’s TensorFlow, Facebook’s Caffe2 and PyTorch, and others.
Lightelligence has so far demonstrated MNIST — a benchmark computer vision task of recognizing handwritten digits — on its accelerator. And it’s recorded matrix-vector multiplications and other linear operations — key components of AI models — running roughly 100 times faster than state-of-the-art electronic chips.
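To see why matrix-vector multiplication matters here, note that it is the dominant operation in a neural network’s dense layers — the linear step an optical accelerator would offload, while nonlinearities remain electronic. Below is a minimal illustrative sketch in NumPy; the layer sizes are assumptions chosen to match a flattened 28×28 MNIST image, not specifications of Lightelligence’s chip.

```python
import numpy as np

# A dense layer computes y = W @ x + b: one matrix-vector multiply
# plus a bias add. The multiply is the linear workload reported above.
rng = np.random.default_rng(0)

in_features, out_features = 784, 128   # 784 = flattened 28x28 MNIST image
W = rng.standard_normal((out_features, in_features))  # layer weights
b = rng.standard_normal(out_features)                 # bias vector
x = rng.standard_normal(in_features)                  # one input image

y = W @ x + b                   # the matrix-vector multiplication step
activation = np.maximum(y, 0.0) # ReLU nonlinearity, done electronically

print(activation.shape)
```

The matrix-vector product costs O(out_features × in_features) multiply-accumulates per input, which is why speeding up this single operation accelerates the model as a whole.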
“We are very pleased to reveal our working optical chip AI computing system,” Shen said. “Our prototype … is 100,000 times faster than the system demonstrated in our Nature Photonics paper and a fraction of the size. The system is a true testament to our team.”
To date, Lightelligence has raised $10.7 million in venture financing and has over 20 employees, including a number of industry veterans hailing from Columbia, Georgia Tech, Peking University, and UC Berkeley. Headlining the roster is Dr. Gilbert Hendry, who’s held various roles at Google and Microsoft, and Maurice Steinman, a former AMD senior fellow.
Lightelligence has few rivals in the optical AI accelerator space, the most prominent being Lightmatter, which has raised roughly triple the funding ($33 million) for its own chip. (Lightmatter’s CEO, Nicholas Harris, was interestingly a coauthor on that same Nature Photonics paper.)