Moving Generative AI Beyond Its ‘Bronze Age’ Moment

According to Shaunt Sarkissian, CEO of AI-ID, artificial intelligence (AI) is in the midst of its “Bronze Age.”

Over the past three decades, we’ve seen the rise of the internet age, the rise of the mobile device age and, more recently, a blossoming (and continuing) period of greater connectivity between businesses, consumers, devices and omnichannel experiences. Now we’re in what might be termed the “smart” age, as data is harnessed, continually refined and fed into models that can help change the fabric of everyday life.

As Sarkissian told Karen Webster, “out of all the early ages of humanity, the Bronze Age seemed to be the most significant. Why? Because we finally went from sticks — and we started making metals.”

The AI industry is seemingly changing with the speed of each browser refresh, and the heady pace of change will last several years. But look a bit closer, and things are shifting. The Silicon Valley frenzy of throwing money at the space is now being combined with a closer examination of business models. The rush of companies adding “AI” to their names with abandon is slowing. And now, at water coolers up and down Wall Street, in the Valley and in the marbled halls of Congress, attention is turning to what’s next.

Looking Beyond the Initial Excitement

“Just going out now and saying, ‘AI has built something — let’s all clap and go look at it.’ That’s phase one,” Sarkissian said. “Now the question is: ‘What are we going to do to make [AI] responsible, traceable, accountable and accurate?’”

The industry is learning in real time, so to speak, as scores of companies rush to apply data sets and large language models (LLMs) to various use cases in a bid to advance everything from science to medicine, mathematics, social policy and commerce.

As it stands now, it’s “such a black box with these large language models that to really ‘go in there’ and do a postmortem of what has been trained is going to be difficult,” Sarkissian said.

Data sharing agreements are needed between stakeholders, he said, between major publishers, content firms and AI companies, so that there’s transparency in the mix, the models become more accurate and the AI enterprises themselves are held accountable for their actions.

Such multilateral agreements may go a long way toward addressing the frictions already inherent in the AI landscape. Earlier this month, two authors sued OpenAI, alleging that the ChatGPT creator violated copyright law and that the technology generates summaries of their written work so accurate they could only have been produced if the models were trained on their books. The suit also alleges that the training was done without the authors’ consent, leading to financial damages.

Taking Some Cues From the World Wide Web

As Sarkissian noted, “people should start looking at these large language models and what’s being trained much like the ways in which they look at the internet.”

With a nod to the development of the web over the past several decades, he said that web search grew by leaps and bounds as web crawlers collected information every second of every day for Google and other search engines. They amassed reams of data to help fine-tune what’s displayed as users look for the goods and services they need on a day-to-day basis.

“If you were to ask somebody today, ‘Do you want to not be on Google, and not have these search tools go out and source your information,’ no one would say yes,” Sarkissian said.

The search engines are simply too critical for commerce, he added.

We’re likely to see some parallel in AI’s use in the everyday world, said Sarkissian, who added that as the LLMs continue to take shape, they still need to be trained on something, and always will. And for the businesses or even governments that opt to restrict data access or collection, he cautioned: Be careful what you wish for.

“Because if that ends up being the case, and your particular ‘slices of information’ are left out of these models, well, that data would have been used for things in the future — and now will not include your business or information,” he said.

One solution, Sarkissian said, rests with an approach in which the owners of intellectual property are compensated when AI output draws on a component of their work. The data should be tracked, traced and tagged, he said, adding that “the ingestion can be compensated.”

In one hypothetical scenario, he said, a publisher with 200,000 books in its catalog could be paid a “flat rate” for allowing all of those titles to be ingested into LLMs to train them.
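To make the idea concrete, the minimal Python sketch below shows one way a “track, trace and tag” ledger might record ingested works and total up what is owed to each rights holder. The names, structures and per-token rate are purely hypothetical illustrations of the concept Sarkissian describes, not a description of any existing system or of AI-ID’s product.

```python
from dataclasses import dataclass, field

@dataclass
class IngestedWork:
    """A single piece of content tagged with provenance before ingestion."""
    work_id: str          # hypothetical identifier, e.g. an ISBN
    rights_holder: str    # publisher or author to be compensated
    tokens_ingested: int  # rough measure of how much content was used

@dataclass
class IngestionLedger:
    """Tracks tagged works so that ingestion can later be compensated."""
    rate_per_million_tokens: float      # illustrative flat rate
    works: list[IngestedWork] = field(default_factory=list)

    def ingest(self, work: IngestedWork) -> None:
        # In a real pipeline, this is where a provenance tag would be
        # attached to the training data before it reaches the model.
        self.works.append(work)

    def payouts(self) -> dict[str, float]:
        """Sum the compensation owed to each rights holder for ingestion."""
        owed: dict[str, float] = {}
        for w in self.works:
            fee = (w.tokens_ingested / 1_000_000) * self.rate_per_million_tokens
            owed[w.rights_holder] = owed.get(w.rights_holder, 0.0) + fee
        return owed

# Example: a publisher's catalog ingested at a flat per-token rate.
ledger = IngestionLedger(rate_per_million_tokens=2.50)
ledger.ingest(IngestedWork("isbn-001", "Example Publishing", 120_000))
ledger.ingest(IngestedWork("isbn-002", "Example Publishing", 95_000))
print(ledger.payouts())  # {'Example Publishing': 0.5375}
```

The same ledger idea could be extended to attribute individual model outputs back to tagged sources, which is the traceability piece of the “responsible, traceable, accountable and accurate” framing above; the sketch covers only the ingestion-and-compensation step.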

Looking beyond the Bronze Age, Sarkissian said there’s a lot to be learned and a lot of evolution still to come, and that needs to be embraced before AI reaches its full potential. For AI, he said, “this is a Bronze Age moment where we’re going in and creating things that I think are going to be much more impactful than even people can realize yet.”