The science fiction genre is going to need a new name, or some new tropes, as many of its greatest hits are becoming a right-now reality.
The combination of Bluetooth headsets and voice-based generative artificial intelligence (AI) is changing the way users engage with the technology. People are increasingly having full, spoken conversations with GPT platforms out loud, the same way sci-fi characters have talked to their computers, from “Star Trek” to “Star Wars” and beyond.
The expansive capabilities of multimodal AI are letting people have hours-long discussions with the latest versions of today’s leading AI programs.
All they need is a connected device and a subscription to a GPT program. The suggested prompt? Ask the generative AI program to “act as a friend you’re having a phone call with.”
As GPT programs from Google, Meta, OpenAI, Anthropic and other industry players become further fine-tuned and move beyond text-based interactions, voice recognition and voice synthesis features offer a new avenue for users to interact with the technology.
The companies behind the GPT platforms are leaning into the usability of their voice features, adding simulated vocal tics and breathing sounds to their spoken responses.
After all, voice is one of the more effortless ways for end-users to engage — and even transact — with AI platforms.
But effortless doesn’t equate to flawless, and there are still some kinks to work out beyond conversational realism.
Brainstorming ideas and holding long conversations to pass the time are two areas where GPT users are already beginning to form deeper relationships with simulated generative AI “people.”
And while firms like OpenAI have conditioned their platforms to keep conversations from becoming too intimate or personal, in part by limiting the voice AI’s ability to maintain a long-term memory, users are still turning to AI for out-loud chats when other people aren’t around.
The intersection of other modern connected technologies, such as Bluetooth headphones (Apple’s AirPods line alone brings in more revenue than KFC, FedEx, Spotify or Twitter) and smart cars, has primed the marketplace for voice-activated, AI-refined experiences.
As noted in the PYMNTS Intelligence report “Consumer Interest in Artificial Intelligence,” consumers interact with about five AI-enabled technologies every week on average, including browsing the web, using navigation apps, and reviewing online product recommendations. Nearly two-thirds of Americans want an AI copilot to help them do things like book travel.
Younger consumers have shown the greatest interest in AI. Data showed that 56% of Generation Z consumers are interested in AI-enhanced communication, 62% are interested in AI-enhanced entertainment, and 60% have shown interest in AI-enhanced shopping.
According to the report “How Consumers Want to Live in the Voice Economy,” 54% of consumers said they would prefer voice technology in the future because it is faster than typing or using a touchscreen. Nearly 1 in 3 U.S. millennials already use a voice assistant to pay their bills.
The ability to provide both individual and enterprise end-users with an ambient, always-on AI companion is a future-fit vision that many tech companies are working toward.
Meta in particular has planted its flag in the AI companion space, introducing 28 AI personas Sept. 27, as well as a product that lets celebrities and public figures create their own AI chatbots to interact with fans.
Each of the new AIs has a personality and unique interests. One bot in development, known as “Bob the robot,” is a self-described “sass master general” with “superior intellect, sharp wit and biting sarcasm.” The dozens of AIs will also have their own social profiles on Facebook and Instagram, enabling users to learn more about them.
“We’ve been creating AIs that have more personality, opinions and interests, and are a bit more fun to interact with,” Meta said in a blog post.
“You can imagine a world where over time every business has an AI agent that basically people can message and interact with them,” said Meta Co-founder and CEO Mark Zuckerberg on the company’s second-quarter 2023 earnings call in July. “… It’s quite human labor intensive for a person to be on the other side of that interaction.”
The company also unveiled a set of smart glasses that allow users to engage with the AI bots by using their voice, bringing the connected and voice-activated future one step closer.
Amazon, whose Alexa smart assistant has gone from leader of the pack to somewhere near the back in voice capabilities, will need to hope its Anthropic investment starts paying off soon to keep up.