Amazon’s Alexa Gets A New Set Of Speaking Skills To Sound More Human

Amazon announced that Alexa is going to sound more human, with a new set of speaking skills that allow her to do things like whisper, take a breath to pause for emphasis, adjust the rate, pitch and volume of her speech, “bleep” out her words and more.

According to TechCrunch, Alexa can already answer questions about herself, tell jokes, respond to “I love you,” and even sing a song if you want.

But her voice can still sound robotic at times, so the new tools to make her sound more human were provided to Alexa app developers in the form of a standardized markup language called Speech Synthesis Markup Language (SSML), which will allow them to code Alexa’s speech patterns. This enables the creation of voice apps – “Skills” on the Alexa platform – where developers can control the pronunciation, intonation, timing and emotion of their Skill’s text responses.

The five new SSML tags are: whisper, expletive (which bleeps out words), emphasis, sub (which lets Alexa say something other than what’s written), and prosody, which controls the volume, pitch and rate of speech.
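A sketch of how these tags might appear in a Skill’s text response, following standard SSML conventions (the exact tag names and attribute values here are illustrative, not taken from the article):

```xml
<speak>
    I have a secret.
    <amazon:effect name="whispered">Alexa is everywhere.</amazon:effect>
    <prosody rate="slow" pitch="low" volume="loud">
        This sentence is slow, low-pitched, and loud.
    </prosody>
    The <sub alias="World Wide Web">WWW</sub> is
    <emphasis level="strong">really</emphasis> big.
</speak>
```

Developers return markup like this instead of plain text, and the speech engine renders the annotated delivery.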

In addition, Amazon introduced new “speechcons,” which are words and phrases that Alexa can express in a more colorful way. Some already available in the U.S. include “abracadabra!,” “ahem,” “aloha,” “eureka!,” “gotcha,” “kapow,” “yay,” and more. Now, Alexa can also use regionally specific terms such as “Blimey” and “Bob’s your uncle” in the U.K., and “Da lachen ja die Hühner” (roughly, “what a joke”) and “Donnerwetter” (“wow”) in Germany.
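In Alexa’s SSML, speechcons are marked up as interjections so the engine uses the pre-recorded, expressive pronunciation; a minimal sketch (the phrasing around the speechcon is invented for illustration):

```xml
<speak>
    <say-as interpret-as="interjection">abracadabra!</say-as>
    Your spell has been cast.
</speak>
```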

There are now over 12,000 Alexa Skills in the marketplace, but it’s unknown how many developers will actually put the new tools to work.