The commercial viability of artificial intelligence (AI) is officially here, and so are its pitfalls.
At the center of many enterprise concerns around generative AI is the same thing at the center of the tools themselves: the data fed to the AI models, and questions around that data’s provenance and security.
Microsoft’s private ChatGPT solution is designed to allay firms’ fears about employees inadvertently giving the chatbot access to proprietary information when they use it — as Samsung engineers did last month.
Many businesses worry that AI platforms store their data on external servers and often continually retrain their large language models (LLMs) on user-submitted information.
This means a query about one company’s proprietary process could end up informing an answer to a competitor’s similar request, so long as both organizations use ChatGPT.
That’s why the private solution from Microsoft will run on its own dedicated servers, separate from the ones used by other companies and individuals using ChatGPT for less sensitive or business-critical tasks. Per the report, the solution’s dedicated private server space won’t be cheap and may run interested organizations up to 10 times the normal cost.
Businesses are racing to integrate AI solutions that can connect historically disparate and fragmented data to get a more unified picture of their operations, as well as identify previously obscured opportunity areas.
And tech companies are racing to be the ones that provide those next-generation solutions to them.
PYMNTS research found that 54% of consumers said they would prefer using voice technology in the future because it is faster than typing or using a touchscreen.
Still, the increasing adoption of generative AI tools and automated machine learning (ML) solutions isn’t without its accompanying disruptions and growing pains.
“There is a lot of value [around generative AI capabilities], but the key question is when can we use it without the fear of bias and where this information is coming from,” Bank of America CEO Brian Moynihan said in April. “We need to understand how the AI-driven decisions are made…”
Data rests at the heart of the generative AI tools and capabilities that represent the next wave of economic innovation.
“Data is foundational to building the models, training the AI,” Michael Haney, head of Cyberbank Digital Core at FinTech platform Galileo, the sister company of Technisys, told PYMNTS in March. “The quality and integrity of that data is important…”
By enacting guardrails around the provenance of the data used in LLMs and other training models, governments and regulators can protect consumer privacy without hampering private sector innovation and growth. Such guardrails could include making it obvious when an AI model is generating synthetic content — text, images and even voice — and flagging its source.
“AI is one of the most powerful technologies of our time, but in order to seize the opportunities it presents, we must first mitigate its risks,” the White House said Thursday (May 4).
While policymakers continue to struggle to enact effective oversight of generative AI, sectors like healthcare have the opportunity to serve as standard bearers of best practices around data privacy protections and data set integrity and provenance.
As the world continues to undergo a tectonic shift driven by the technical capabilities of AI applications, both private enterprises and public leaders will need to work together to promote fair competition while protecting end-users.