Google Developer Knocks Future of Voice Search Despite Major AI Advances


A top Google software engineer is causing some controversy — and possibly dropping hints — by downplaying the future of voice-enabled search in favor of other modalities.

During a recent episode of Google’s “Search Off the Record” podcast titled “The Future of SEO,” developer Martin Splitt was asked by host John Mueller, “What about voice search? Will SEOs have to optimize for voice search?”

Splitt replied, “Oh God, the future that never will be.”

Recalling futuristic ideas for computers without keyboards, he added, “I think that has been a recurring theme from the ‘90s. But I think in the future, it won’t change so that (voice) will naturally or magically become the number one thing that we need to worry about.”

Noting that voice search remains imperfect because of “how queries are phrased,” and because of differences among assistants from Alexa to Siri, Splitt walked the comment back a bit, saying, “I don’t know.”

The comment triggered headlines like “Voice Search Is Not the Future” from media outlets like Search Engine Journal, renewing the debate on search inputs — text, image and voice.

Yet it’s hard to find voice search doubters in the connected economy. Consulting firm TrueList estimates that nearly 123 million Americans are using voice search in 2021, and Google itself has doubled down on voice, while Amazon continues to integrate Alexa across its product portfolio.

See also: Amazon Leans on Alexa to Connect With ‘All Aspects of Consumers’ Lives’

MUM’s the Word

In May, Google unveiled Multitask Unified Model (MUM), its next-generation language understanding AI.

In a company blog post, Google technologist Pandu Nayak said, “MUM not only understands language, but also generates it. It’s trained across 75 different languages and many different tasks at once, allowing it to develop a more comprehensive understanding of information and world knowledge than previous models.”

Nayak said MUM is “1,000 (times) more powerful” than the BERT natural language processing (NLP) AI Google introduced in 2018. The MUM AI is also multimodal, meaning “it can understand information from different formats like webpages, pictures and more, simultaneously.”

According to PYMNTS’ Connected Economy research, 26% of consumers own a device with a voice-controlled assistant, one-third of all consumers have created shopping lists using voice-activated devices, and 25% of consumers say they’ve used voice to make a purchase.

However, visual search is making greater inroads as image recognition AIs improve.

For example, PYMNTS reported in October that shopping optimization platform Fast Simon launched its AI Visual Discovery Suite.

In a statement, Fast Simon CEO Zohar Gilad said, “Images are one of the most powerful inspirations for shoppers, especially in fashion. Using advanced AI, we are letting merchants provide their shoppers with a powerful engagement medium — visual discovery — and better adapt to the shifts in users’ shopping experience preferences.”

See also: Fast Simon Debuts Shopping by Image Search