Voice recognition could soon play a bigger role in retail — especially in the world of quick service restaurants.
That’s the signal McDonald’s sent recently when the fast food operator announced an agreement to acquire Apprente. The company said in an announcement that the deal is another bold step in advancing employee- and customer-facing innovations while bolstering the chain’s technological capabilities.
Founded in Mountain View, California, in 2017, Apprente builds voice-based platforms for complex, multilingual, multi-accent and multi-item conversational ordering. Within McDonald’s restaurants, the technology is expected to enable simpler, faster and more accurate order-taking at the drive-thru, with the potential to extend to mobile ordering and kiosks.
McDonald’s Corporation President and Chief Executive Officer Steve Easterbrook said in the announcement, “Building our technology infrastructure and digital capabilities are fundamental to our Velocity Growth Plan and enable us to meet rising expectations from our customers, while making it simpler and even more enjoyable for crew members to serve guests.”
That’s not the only recent major move that involves voice recognition and the biometric authentication method’s potential use in retail and payments.
Google, for one, has moved speech recognition onto the device itself. “Our new all-neural, on-device Gboard speech recognizer is initially being launched to all Pixel phones in American English only,” wrote Johan Schalkwyk, a Google Fellow on the company’s Speech Team, in a blog post. “Given the trends in the industry, with the convergence of specialized hardware and algorithmic improvements, we are hopeful that the techniques presented here can soon be adopted in more languages and across broader domains of application.”
While voice recognition on smartphones is nothing new, there is always a slight delay when virtual assistants — such as Siri, Alexa and Google Assistant — respond to a user’s query. That happens because the audio of the user’s voice has to travel from their phone to the service provider’s servers, where it is analyzed before the result is sent back. Google’s new technology, by contrast, is an end-to-end speech recognizer compact enough to run on the phone itself.
“This means no more network latency or spottiness — the new recognizer is always available, even when you are offline,” explained Schalkwyk. “The model works at the character level so that, as you speak, it outputs words character by character, just as if someone was typing out what you say in real time, and exactly as you’d expect from a keyboard dictation system.”
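The distinction Schalkwyk describes can be illustrated with a toy sketch. The code below is purely hypothetical — it is not Google’s model, and both function names are invented for illustration — but it contrasts a cloud-style recognizer, which returns a transcript only after a round trip, with an on-device streaming recognizer that emits characters as the audio arrives, the way a keyboard dictation system types along with the speaker.

```python
from typing import Iterator, List

def server_recognizer(audio_chunks: List[str]) -> str:
    """Cloud-style recognition (hypothetical): the full utterance is
    uploaded, analyzed remotely, and the transcript returns in one piece.
    In reality this would be a network call; here we simply join the
    stand-in text chunks."""
    return "".join(audio_chunks)

def on_device_recognizer(audio_chunks: List[str]) -> Iterator[str]:
    """On-device streaming recognition (hypothetical): output is emitted
    character by character as the user speaks, with no network round trip
    and no connectivity required."""
    for chunk in audio_chunks:
        for char in chunk:  # character-level output, as the blog post describes
            yield char

# Stand-in for incoming audio frames of a spoken phrase.
audio = ["hel", "lo wo", "rld"]

# Both approaches produce the same transcript; the on-device version
# just delivers it incrementally instead of after a round trip.
streamed = "".join(on_device_recognizer(audio))
assert streamed == server_recognizer(audio)
```

The difference a user perceives is not the final transcript but the latency and availability: the streaming version keeps working offline and updates the screen as each character is recognized.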
It’s not surprising that Google is working to improve its voice recognition capabilities. Data shows that voice-activated devices have become an important product for consumers.
In the second annual edition of the PYMNTS and Visa How We Will Pay survey, 28 percent of all U.S. consumers reported owning a voice-activated device that was used to listen to music, check the weather and ask “fun questions.” More important, 27 percent used them to make a purchase in the seven days that the survey tracked their purchasing behavior.
Ads for Illness
Amazon is also active in this game, of course, and is expanding its use of voice recognition technology. For example, Amazon is exploring ways for Alexa, its voice-activated personal assistant, to act as a doctor or nurse of sorts, detecting illness from changes in the user’s voice.
Amazon plans to send ads to users based on how they are feeling. For instance, if Alexa determined that a user had a cold, it could present ads for cough medicine.
Voice promises to become a bigger part of retail in the months and years to come.