This week, Google released a standalone version of Google Lens, its artificial intelligence-powered computer vision service for Android.
It has been available for some time on Google devices like the Pixel and Pixel 2, in addition to the Sony Xperia XZ2, Sony XZ2 Compact, a few OnePlus devices, and select smartphones from third-party manufacturers. (In May, Google said Lens would come to the native camera app on Android phones from LG, Motorola, Xiaomi, Sony, HMD Global, Transsion, OnePlus, BQ, and Asus.) But now, it is on the Google Play Store as a separate app.
It is worth noting that Lens is not compatible with all devices. A Google spokesperson told me that the Lens app is available for “most Android devices,” but (somewhat oddly) not the Huawei Mate 10 Pro specifically.
So what’s new in the standalone app? Not much. Google Photos users have had Lens since March — that’s the month Google announced Lens integration with the Google Photos app on Android and iOS, which arrived in the form of a Lens menu button that retroactively scans individual photos saved to the gallery.
I’ve been using Google Lens in Photos for a while now, and the results haven’t exactly impressed me (to put it mildly). I was curious to see whether the standalone Lens would perform any better, so I wandered around my apartment like a madman, snapping photos of random knick-knacks to see which version of Google Lens performed better, if either. Here is what I discovered.
Google Lens vs. Google Lens
Google Lens can recognize lots of things, according to Google. The growing list includes phone numbers, dates, addresses, furniture, clothing, books, movies, music albums, video games, landmarks, points of interest, notable buildings, and barcodes. Google also claims that it can extract network names and passwords from Wi-Fi labels and automatically connect to the network that has been scanned, as well as recognize certain beverages such as wine and coffee.
Lens has improved a lot recently. At Google I/O 2018, Google announced smart text selection, which lets users copy and paste text from printouts, business cards, and brochures; and style match, which identifies similar clothes and “home decor items” recognized by Lens’ algorithms. Sometime in March, Google Lens gained the ability to recognize celebrities.
Perhaps it is just bad luck, but I’ve never been able to get Google Lens in Photos to identify half of those things. Sure, it will extract text like a pro and recognize a barcode in seconds flat, but when it comes to drinks and books, the results are invariably a mixed bag. As for landmarks and points of interest, I find I’m usually better off with Google Maps. It is simply less frustrating.
So, is the standalone Lens app any better? Sort of.
It certainly offers a better user experience — the Lens app scans in real time as opposed to the static Lens in Google Photos, which unintuitively requires that you launch the Photos app, find a photo to analyze, and tap the Lens button before spitting out results. With the standalone Lens installed, a shortcut on the home screen or app drawer throws you into the app’s bare-bones interface, which lets you immediately begin scanning things. Easy peasy.
As far as recognition accuracy is concerned, the Lens app seemed to do a slightly better job at identifying random assortments of things than Lens in Google Photos. A jar of jalapenos and a can of Diet Pepsi prompted product searches for pickled peppers and soda, respectively, and Lens had no trouble finding a book’s corresponding listing from both its cover and ISBN. It didn’t find an exact match for my living room lamp, but it pulled up products that bore more than a passing resemblance. It even came close to recognizing a glass sculpture of a snowman (it guessed “snow”).
The standalone Google Lens app wasn’t perfect. It misidentified a flavor of energy bar (it thought “coconut” instead of “white chocolate”) and had trouble making out the aforementioned book’s front cover.
But Lens in Google Photos fell short more often. It found the same jar of pickled jalapenos, but thought the aforementioned Diet Pepsi was “vodka.” It couldn’t make out the book’s barcode. And it gave up on the lamp.
No clear winner
So, which version of Google Lens came out on top? Neither. That is not terribly surprising, given that they’re ostensibly based on the same machine learning model.
What my little experiment really demonstrated is how subtle (and sometimes imperceptible) differences in lighting and sharpness can have a huge impact on the accuracy of Lens’ object recognition. I snapped a photo of the energy bar from a slightly different angle and got related but different results. Same with the Diet Pepsi can: My photo was slightly out of focus, hence the vodka.
This is a well-understood phenomenon among researchers. In a 2010 study conducted by researchers at Colorado State University, poor lighting — specifically side lighting — was found to affect the accuracy of facial recognition algorithms. “Side lighting strongly degrades algorithm performance,” the authors concluded. “More generally … lighting remains a problem for state-of-the-art algorithms.”
The more data Google’s algorithms ingest, of course, the better Google Lens will become. But even though it is becoming easier to use with the standalone Lens app, it remains something of a parlor trick — and an inconsistent one, at that.