Artificial intelligence (AI) has been a buzzword in technology and science fiction for decades.
Now, in the 21st century, it is finally beginning to have its star turn in the workforce.
“Computers can now behave like humans. They can articulate, they can write and can communicate just like a human can,” Beerud Sheth, CEO at conversational AI platform Gupshup, tells PYMNTS as part of the AI Effect series.
“And [Large Language Models] are generative. They can generate an intelligent response. No one ever thought a bulldozer could behave like a human, or fire, or any of the prior inventions throughout history,” Sheth added. “AI has animated society in a way that no other technology has before.”
That transformative potential positions generative AI to uniquely shape the future of enterprise workflows and processes, marking a significant step forward in the tools organizations have at their disposal.
Elon Musk has even said that AI will one day render all jobs obsolete.
By automating routine tasks, AI is poised to disrupt traditional enterprise workflows and processes in numerous domain-specific ways, allowing organizations to capture value unique to their goals while freeing up human workers to focus on more advanced or complex issues.
“Enterprise use of AI has to be accurate and relevant — and it has to be goal oriented. Consumers can have fun with AI, but in a business chat or within an enterprise workflow, the numbers have to be exact, and the answer has to be right,” Sheth said.
“But in terms of how [businesses] acquire new customers, sell their services, engage and manage the relationship and collect payments, AI will transform virtually every touchpoint,” he added, noting that a conversational interface is “much more natural and intuitive.”
Old interfaces forced humans to behave like computers, but new interfaces now allow computers to behave like humans, Sheth explained.
As for what the Gupshup CEO sees the future holding?
“Humans aren’t going anywhere,” he said. “Ultimately, if you can combine AI plus human, or natural and artificial intelligence, it will be better than either of the two alone.”
“Importantly, it doesn’t have to be perfect,” he said. “If AI can automate 20%, 50%, 70%, depending on the use case, even that is enough to free up natural human intelligence for more strategic thinking. You can think of it as a dial that you can turn up or down as the AI gets better.”
But the shift in AI capabilities comes with a dark side, as the technology can potentially misrepresent information and empower bad actors — not to mention the fact that some across society are concerned, more broadly, about the very notion of computers able to act like humans.
The convergence of these abilities has triggered a societal debate, presenting both opportunities and challenges. Governments, including that of the U.S., are beginning to recognize that AI’s transformative power extends far beyond previous technological advancements.
“It’s good that the government is setting up guardrails for AI development. These are good directional statements that give time for the industry, legislature and society to debate and define the details around these guidelines,” Sheth said.
“Businesses that are developing AI have to be mindful of both the positive and the negative use cases, and enterprises that are users of AI should also be aware of how this applies in their respective domains,” he added.
After all, while regulation can provide safety and security standards, it must still strike a delicate balance to avoid stifling innovation.
“Regulations can slow down innovation, and can also lead to what’s called regulatory capture, meaning a few large companies could use regulations to increase the cost for everyone else, which reduces the ability of startups to create new innovation,” Sheth explained, while adding that guardrails around certain applications of AI are necessary, and effective oversight can still have a very positive impact.
The ethical and compliance considerations surrounding AI integration are substantial. Generative AI has the potential to gain consumer trust by impersonating humans, but this can lead to privacy violations and security issues as individuals may unknowingly share personal information. The need to navigate these complex issues is apparent in various sectors, from healthcare to commerce, where AI can have far-reaching implications.
But at the same time, Sheth noted that “the opportunities and the possibilities are endless.”
“Worldwide, very few people have access to doctors — and the opportunity to have an AI doctor, even [one with] just 30%, 50% of an average provider’s knowledge and capability, is still a massive value add,” he said.
Other examples of simulated humans include AI tutors and customer support agents, each able to provide personalized support on a global scale to even the most vulnerable populations.
“The ability to guide and support [individuals at scale] dramatically improves the productivity of every person, by augmenting and amplifying [their workflows] with the help of generative AI,” Sheth said.
“There’s no question that the technology touches almost everything humans do, unlike prior waves of technology, and most of the world is now technologically connected, aware and sophisticated,” he added.