We have known about Google Lens since May 2017, when it was presented at the Google I/O conference. It is an image recognition service powered by artificial intelligence that gives you additional information about what you are photographing. In short, it is like Google Goggles, but adapted to the times.

Now we see it spreading a little further across the Google ecosystem. A month ago it arrived on the first-generation Google Pixel via Google Photos; today it begins to reach the second place from which you can access it: the Google Assistant. Of course, it is still a feature exclusive to Pixel phones.

Writing, voice and image

At first you could only interact with the Google Assistant by voice: "Ok Google, do this", "Ok Google, do that". Later came interaction through text, that is, typing. Now the circle is completed, for the time being, with the adoption of images as a way to interact with the Assistant.

Some users have found a new button in their Google Assistant: the Lens button. Tapping it opens the camera, which, as we know, uses real-time image recognition to offer you all kinds of information about what you are seeing.

Once the camera is open, you tap the object you want more information about. If Google Lens recognizes it, it displays the information on a floating card, with additional actions where appropriate.
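Google has not published the recognition pipeline behind Lens, but the basic idea (classify what the camera sees and attach labels with confidence scores) can be illustrated with the public ML Kit image labeling library. The following Kotlin sketch is only an illustration under that assumption, not the Assistant's actual implementation; the labelFrame function and the bitmap parameter (standing in for a frame captured from the camera) are hypothetical names chosen for this example.

    import android.graphics.Bitmap
    import com.google.mlkit.vision.common.InputImage
    import com.google.mlkit.vision.label.ImageLabeling
    import com.google.mlkit.vision.label.defaults.ImageLabelerOptions

    // Hypothetical sketch: label a camera frame the way a Lens-like
    // feature might, using ML Kit's public on-device image labeler.
    fun labelFrame(bitmap: Bitmap) {
        // Wrap the captured frame; 0 is the rotation in degrees.
        val image = InputImage.fromBitmap(bitmap, 0)
        val labeler = ImageLabeling.getClient(ImageLabelerOptions.DEFAULT_OPTIONS)
        labeler.process(image)
            .addOnSuccessListener { labels ->
                // Each label carries a description and a confidence score:
                // the raw material for a "floating card" of results.
                for (label in labels) {
                    println("${label.text}: ${"%.2f".format(label.confidence)}")
                }
            }
            .addOnFailureListener { e -> e.printStackTrace() }
    }

The real Lens goes well beyond this kind of generic labeling, adding entity recognition and the contextual actions shown on the card, but the label-plus-confidence output is the common starting point.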

For now, the deployment of Google Lens in the Google Assistant seems to be in its first phase, and few users have it activated. The feature is still officially exclusive to the Pixel, although it will likely end up spreading to other phones later, just as happened with the Google Assistant itself.