Google Lens actually shows how AI can make life easier

During Google's I/O developer conference keynote, artificial intelligence was once again the defining theme and Google's guiding light for the future. AI is now interwoven into everything Google does, and nowhere are the benefits of CEO Sundar Pichai's AI-first approach more apparent than with Google Lens.

The Lens platform combines the company's most cutting-edge advances in computer vision and natural language processing with the power of Google Search. In doing so, Google makes a compelling argument for why its way of developing AI will generate more immediately useful software than its biggest rivals, like Amazon and Facebook. It also gives AI naysayers an illustrative example of what the technology can do for consumers, instead of just for under-the-hood systems like data centers and advertising networks or for more limited hardware use cases like smart speakers.

Lens is effectively Google's engine for seeing, understanding, and augmenting the real world. It lives in the camera viewfinder of Google-powered software like Assistant and, following an announcement at I/O this year, within the native camera of top-tier Android smartphones. For Google, anything a human can recognize is fair game for Lens. That includes objects and environments, people and animals (or even photos of animals), and any scrap of text as it appears on street signs, screens, restaurant menus, and books. From there, Google uses the expansive knowledge base of Search to surface actionable info like purchase links for products and Wikipedia descriptions of famous landmarks. The goal is to give users context about their environments and any and all objects within those environments.
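Google hasn't detailed how Lens is built internally, but the recognize-then-search flow it describes maps loosely onto tools Google already ships to Android developers. Here is a minimal Kotlin sketch, assuming ML Kit's on-device image labeler as a stand-in for the recognition step; recognizeAndSearch and searchKnowledgeBase are hypothetical names for illustration, and the Search lookup is a placeholder, not a real Lens API.

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.defaults.ImageLabelerOptions

// Hypothetical helper: label whatever is in the camera frame, then hand the
// strongest guesses to a search step, loosely mirroring Lens' flow.
fun recognizeAndSearch(frame: Bitmap) {
    val image = InputImage.fromBitmap(frame, /* rotationDegrees = */ 0)
    val labeler = ImageLabeling.getClient(ImageLabelerOptions.DEFAULT_OPTIONS)

    labeler.process(image)
        .addOnSuccessListener { labels ->
            // Each label carries a confidence score; keep only strong guesses.
            labels.filter { it.confidence > 0.7f }
                .forEach { label -> searchKnowledgeBase(label.text) }
        }
        .addOnFailureListener { e -> e.printStackTrace() }
}

// Placeholder for the knowledge-base lookup Lens performs via Search.
fun searchKnowledgeBase(query: String) {
    println("Would search Google for: $query")
}
```

The confidence threshold is the interesting design point: it is presumably tuning something like this cutoff that decides when Lens commits to an answer versus falling back to a tentative guess.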


The platform, first announced at last year's I/O conference, is now being integrated directly into the Android camera on Google Pixel devices, as well as on flagship phones from LG, Motorola, Xiaomi, and others. Google also announced that Lens now works in real time and can parse text as it appears in the real world. Lens can now even recognize the style of clothing and furniture to power a recommendation engine the company calls Style Match, which is designed to help users decorate their homes and build matching outfits.

Lens, which until today existed only within Google Assistant and Google Photos, is also moving beyond those apps and the camera to help power new features in adjacent products like Google Maps. In one particularly eye-popping demo, Google showed off how Lens can power an augmented reality version of Street View that calls out notable locations and landmarks with visual overlays.

In a live demo today at I/O, I got a chance to try some of the new Google Lens features on an LG G7 ThinQ. The feature works in real time, as advertised, and it was able to identify a number of different products, from shirts to books to paintings, with only a few understandable hiccups.

For instance, in one situation Google Lens thought a shoe was a Picasso painting, simply because it momentarily got confused about which object it was looking at. Moving closer to the object I wanted identified, the shoe in this case, fixed the issue. Even when the camera was too close for Lens to identify the object, or when it was having trouble figuring out what it was, you could tap the screen and Google would give you its best guess with a short phrase like, "Is it... art?" or, "This looks like a painting."


Most impressive is Google Lens' ability to parse text and extract it from the real world. The groundwork for this was laid by products like Google Translate, which can turn a street sign or restaurant menu in a foreign language into your native tongue just by snapping a photo. Now that those advances have been refined and built into Lens, you can do this in real time with dinner menu items or even big chunks of text in a book.

In our demo, we scanned a page of Italian dishes to surface photos of those items on Google Image Search, as well as YouTube videos on how to make the food. We could also translate the menu headers from Italian into English just by selecting that part of the menu, an action that automatically transforms the text into a searchable format. From there, you can copy and paste the text elsewhere on your phone and even translate it on the fly. This is where Google Lens really shines, merging the company's strengths across a number of products simultaneously.
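Google hasn't said exactly what Lens does under the hood here either, but comparable text-recognition and on-device translation capabilities are exposed to developers through the ML Kit SDK. Below is a minimal Kotlin sketch of the menu scenario under that assumption; extractAndTranslate is a hypothetical helper name, and ML Kit is a stand-in for Lens' own pipeline, not a confirmed part of it.

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.nl.translate.TranslateLanguage
import com.google.mlkit.nl.translate.Translation
import com.google.mlkit.nl.translate.TranslatorOptions
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.text.TextRecognition
import com.google.mlkit.vision.text.latin.TextRecognizerOptions

// Hypothetical menu-scanning helper: pull Latin-script text out of a camera
// frame, then translate it from Italian to English on-device.
fun extractAndTranslate(frame: Bitmap, onResult: (String) -> Unit) {
    val image = InputImage.fromBitmap(frame, /* rotationDegrees = */ 0)
    val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)

    recognizer.process(image)
        .addOnSuccessListener { visionText ->
            // visionText.text is the recognized block: at this point it is
            // plain text, so it can be copied, searched, or translated.
            val translator = Translation.getClient(
                TranslatorOptions.Builder()
                    .setSourceLanguage(TranslateLanguage.ITALIAN)
                    .setTargetLanguage(TranslateLanguage.ENGLISH)
                    .build()
            )
            // The language model downloads once; later calls run offline.
            translator.downloadModelIfNeeded()
                .addOnSuccessListener {
                    translator.translate(visionText.text)
                        .addOnSuccessListener { translated -> onResult(translated) }
                }
        }
        .addOnFailureListener { e -> e.printStackTrace() }
}
```

The two-step structure, recognize first and then treat the result as ordinary text, is what makes the copy, search, and translate actions described above fall out almost for free.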

We unfortunately didn't get to try Style Match or the Street View features that were shown off during the keynote, the latter of which is a more experimental feature without a concrete date for when it will arrive for consumers. Still, Google Lens is much more powerful one year into its existence, and Google is making sure it can live on as many devices, including Apple-made ones, and within as many layers of those devices as possible. For a company that's betting its future on AI, there are few examples as compelling as Lens for what that future will look like and enable for everyday consumers.


