
Google is adding Lens AI to Pixel, LG, and Sony camera apps

Google's Aparna Chennapragada shares the logos of brands that will incorporate Lens directly into their camera apps at the I/O conference on May 8, 2018.
Image Credit: Khari Johnson / VentureBeat



Google’s Lens computer vision service is going to be directly integrated into the camera app for Pixel, LG G7, and other smartphones. Lens is also getting three new features in the coming weeks.

Google Lens made its debut at I/O last year, showcasing use cases like taking photos of flowers to learn the species or seeing an act on the marquee at the Fox Theater in Oakland and buying a ticket to the show.

Lens first became available last fall to owners of Google’s Pixel smartphones, with features like landmark recognition, text recognition in images, and the ability to recognize works of art. Since then, the computer vision service has been rolled out to Google Photos and has learned to identify species of plants and animals, scan business cards, and even recognize famous people.

“This way it makes it super easy for you to use Lens on things right in front of you already in the camera,” Google senior director of product Aparna Chennapragada said onstage today. The news was announced onstage at the Google I/O developer conference being held May 8-10 at the Shoreline Amphitheater in Mountain View, California.




Google Lens is also getting a series of new features such as smart text selection, style match, and real-time results.

Text recognition was made available through Lens last year, and smart text selection will allow people to copy and paste text directly from images.

Style match, another new feature, will allow Lens users to take a photo of a piece of clothing or furniture they like so that they can find similar items.

The third new feature is real-time analysis. Whereas Lens today requires a tap of the screen for photo capture, real-time visual search will give people the option to get answers about the world around them by simply pointing their phone camera at a subject.

Real-time results will give Lens the ability to serve up related content and allow, for example, the delivery of a music video when you scroll over the top of a concert poster, Chennapragada said.

“This is an example of how the camera is not just answering questions but it’s putting the answers right where the questions are, and it’s really exciting,” she said.

Lens has been spreading to more devices since it was first introduced at I/O one year ago.

In February, at the release of ARCore, Google pledged to expand Lens to iOS devices later this year. Lens was made available to non-Pixel Android users in March.

Visual search has been listed among the main features of other high-end smartphones, like Samsung’s Galaxy S8 last year and S9 this year, as well as the LG V30S ThinQ, which includes a dedicated button for Google Assistant and Lens access.

Lens competes with other services like Amazon’s visual search, as well as Pinterest’s computer vision service, also named Lens — which, ironically, is led by prominent former Googlers, including VP of engineering Li Fan and former Google computer vision lead Chuck Rosenberg.
