The Big News Out Of I/O

So, what did we learn this week at Google I/O?

As Alphabet kicked off its annual developer conference in Mountain View, California, this week, the name of the game was artificial intelligence (AI). Google announced that its research division will henceforth be known as Google AI, highlighting how Google’s research and development efforts are increasingly centered on computer vision, natural language processing and neural networks.

The goal of all that updating? As Google noted in its own promotional video, the answer is simple: It wants to do “it” for its customers.

What is it?

Well, John Legend can help explain that (see above).

For those who can’t see the video, Google is advertising its ability to help you — whether you’re famous like John Legend or Sia or are just an average surfer or lunar explorer needing Google Assistant to help you take a selfie or lock your front door. The idea is simple: Going forward, whatever it is, however it works, odds are you can get Google to do it for you — unless you want to break up with your boyfriend; Google confirms that one is pretty much on you.

To make this work, Google is raising the level of its game when it comes to consumers using — and interacting with — that AI. Conversations, for example, will become more like, well, conversations. Users will no longer have to start every query with “Hey, Google” or “OK, Google” before beginning the first in a series of commands. Over the next few weeks, the company will roll out a feature that lets users string multiple questions into a single request, with Google following along.

When necessary, Google will take the lead — like turning the AI assistant loose on users’ photos to make it easier for them to tap into built-in editing features, expanding its AI-powered features beyond automatically creating collages, movies and stylized photos. Going forward, Google’s AI will be able to enhance pictures with black-and-white photo colorization, brightness correction and suggested rotations. The photo editing is good for customers, but it also helps Google learn more about images with each picture it gets to “look” at. The AI gets better at image recognition with more training, and what Google needs more than anything else to perfect visual search is a lot of pictorial data.

Speaking of getting visual, it seems Google’s answer to the Amazon Echo Show is finally coming: The company’s Smart Display devices are supposedly arriving in July (just in time for Prime Day). Smart Displays will, of course, be powered by YouTube and Google Assistant. Google Maps is also getting an AI upgrade — whether one is accessing the service via Android or iOS. Instead of merely offering directions, Maps will also improve its recommendations for places of local interest.

Maps will also be integrated with Google’s computer vision technology: Consumers will be able — from within Maps and using Lens — to identify buildings, dog breeds and clothing brands merely by aiming their camera at them. Lens will also be able to identify text.

Google also announced a major AI upgrade for Google News. According to the company, the redesigned news destination app will “allow users to keep up with the news they care about, understand the full story and enjoy and support the publishers they trust.” The new feature will combine elements from Newsstand (Google’s digital magazine app) and YouTube and then layer on entirely new features like “newscasts” and “full coverage” for customers who want to dig deeper or explore a certain topic.

And then there were the shiny new ideas…

One particularly attention-grabbing demo saw Google Assistant truly assisting — by making phone calls for users and quite successfully imitating a human calling customer service.

A Google AI that can call customer service personnel might also in the future — Google speculated — be used by busy parents to stay on top of scheduling doctor’s appointments, or even just to navigate through an automated service to help users reach a live person.

Yes, in the future, one’s virtual assistant might actually wrestle with the phone company so that eventually two humans can actually talk to each other.

It’s a brave new world out there.

And, as I/O continues to roll on, the developmental hits will keep on coming.

We’ll keep you posted.
