Microsoft Enters The AI Conversation With Semantic Machines Acquisition

The tech giants are racing to see who can make their artificial intelligence (AI) the least artificial and the most human-like. Now, Microsoft has become the latest to make a move, announcing Monday (May 21) that it has acquired Berkeley, California-based AI startup Semantic Machines.

Semantic Machines leverages machine learning to add context to chatbot conversations, making the AI smarter each time it talks to a user, as past knowledge can be applied to future conversations. The startup’s speech recognition team previously worked on automatic speech recognition development for Apple, powering the company’s virtual assistant Siri.

David Ku, CVP and chief technology officer of Microsoft AI & Research, explained in a blog post, “Most of today’s bots and intelligent assistants respond to simple commands and queries, such as giving a weather report, playing a song or sharing a reminder, but aren’t able to understand meaning or carry on conversations. For rich and effective communication, intelligent assistants need to be able to have a natural dialogue instead of just responding to commands. We call this ‘conversational AI.’”

But just how conversational is too conversational?

Amazon has said that its smart assistant, Alexa, will be gaining new skills, including just such a “memory” to power contextual conversation. The capability will start out simple: Alexa could help a customer remember his wife’s favorite brand of wine, for example.

One day, however, Alexa could learn to make recommendations based on buying habits, the same way Amazon’s website does. Or she could remember the last time a customer ordered toilet paper and remind them to reorder. With the upcoming “Context Carryover” feature, the virtual assistant could respond to follow-up commands, such as “What is Bruce Springsteen’s first album? Play it now.”

This is the sort of thing that Google is already doing with its own AI-powered voice assistant, but Google's Assistant may have found and crossed that too-conversational line, plunging headfirst into the uncanny valley. The tech giant demonstrated its progress at its recent annual I/O conference, and people were horrified.

In the demo, the Google Assistant successfully phoned a hair salon and booked an appointment. Then it called a restaurant and went back and forth with the receptionist to make a reservation. In both scenarios, the AI used the same types of pauses and fillers (“um” and “uh”) that a real person would use in a conversation — successfully fooling the person on the phone into believing it was a human caller.

With its foray into conversational AI, Microsoft, too, will have to wrestle with the question of how conversational is too conversational.

The tech giant has put decades into researching the building blocks of conversational AI, including speech recognition and natural language understanding, with the goal of creating a world where computers can “see, hear, talk and understand as humans,” Ku said.

However, as Google may have learned the hard way, the world may not be ready to welcome artificial intelligence that is quite so … intelligent. It will be up to innovators such as Microsoft to walk the very thin line between smart and too smart.
