AI Adds Fresh Parameters to Open- vs Closed-Source Software Debate 

Since the dawn of software, there have been two types of development: open source and closed source.

It is a simple binary with complex ramifications: from these two models has sprung the entire digital landscape we inhabit and operate across.

As one might expect, under the closed-source model the source code is not released to the public, whereas under the open-source model, also referred to as free and open-source software (FOSS), the source code is openly shared so that anyone is encouraged to voluntarily improve its design and function.

Traditionally, businesses prefer closed source as it protects their trade secrets, while academics and researchers prefer open source as it allows for democratized tinkering and exploration.

Chinese tech giant Alibaba announced that it will be open-sourcing its 7 billion-parameter large language model (LLM), Tongyi Qianwen.

Tongyi Qianwen joins Meta’s Llama 2 among open-source foundational LLMs, meaning that software fans, researchers and developers around the world can now access the models’ artificial intelligence (AI) capabilities and tinker with the technology without needing to train their own systems, saving both time and expense.
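In practice, that tinkering can be as simple as downloading a model’s published weights and prompting it locally. The sketch below assumes the Hugging Face transformers library and the hosted Qwen/Qwen-7B-Chat checkpoint; the checkpoint name and generation settings are illustrative assumptions, not details from Alibaba’s announcement.

    # Minimal sketch: loading an open-source LLM for local experimentation.
    # Assumes the Hugging Face transformers library and the hosted
    # "Qwen/Qwen-7B-Chat" checkpoint; swap in any open model you have access to.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Qwen/Qwen-7B-Chat"  # assumed checkpoint name

    # trust_remote_code is needed for models that ship custom loading code
    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

    # Generate a completion from a prompt; no training required
    inputs = tokenizer("What is open-source software?", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Because the weights are openly published, the same few lines also serve as a starting point for fine-tuning or integrating the model into other software, which is precisely the kind of access closed-source providers do not offer.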

Many of the leading foundational AI models, including Google’s, OpenAI’s, Microsoft’s and Anthropic’s, remain closed source.

While the same choice of keeping foundational code open or closed exists across most software models, AI has completely upended the conversation. That is because, as it relates to AI, the technology’s capabilities have compounded both the benefits and the pitfalls of either choice.


AI’s Next Battle: Open or Closed

The first AI models brought to market were closed source. Part of the reason was that their makers, namely OpenAI and Microsoft, wanted to protect their first-mover advantage and avoid giving unnecessary clues to their later-to-the-game competitors.

Now, open-source AI models are growing in popularity as startups and tech giants alike seek to compete with incumbent market leaders like OpenAI, which, despite its name, has kept the source code to its popular products, including ChatGPT, under lock and key.

This March, open-source software community and nonprofit Mozilla announced an open-source initiative for developing AI, saying they “intend to create a decentralized AI community that can serve as a ‘counterweight’ against the large profit-focused companies.”

“The AI inflection point that we’re in right now offers a real opportunity to build technology with different values, new incentives and a better ownership model,” Mozilla wrote.

Alphabet and Google CEO Sundar Pichai told investors on his company’s second quarter 2023 earnings call that he sees open-source AI having a “critical role to play in this emerging ecosystem,” while demurring when asked about Alphabet’s plans to open up its own models.

There exist at least 37 open-source LLMs, and once their code is out in the wild, it is almost impossible to pen it back up.

And that’s what worries governments around the world.


Implications of Open vs Closed AI

With closed, black-box AI models, research findings are neither reproducible nor verifiable, and the companies behind the models may change them at any time without warning, or revoke access altogether. That is why critics of closed-source AI models have called for firms like OpenAI to open up their foundational code.

But closed source code also remains firmly in the hands of the organizations that developed it, meaning it cannot be manipulated by outside actors.

While open-source AI allows for greater interoperability, customization and integration with third-party software or hardware, this openness could also allow for misuse and abuse by bad actors.

“The interaction between AI and nuclear weapons, biotechnology, neurotechnology, and robotics is deeply alarming,” said U.N. Secretary General António Guterres on July 18, emphasizing that “generative AI has enormous potential for good and evil at scale.”

Some observers, and even nations, fear that open-source models could help dictators and terrorists looking to weaponize AI. MIT researchers found that, within just one hour, AI chatbots could be coaxed into suggesting step-by-step assembly protocols for producing four potential pandemic pathogens.

“Widely accessible artificial intelligence threatens to allow people without formal training to identify, acquire, and release viruses that are highlighted as pandemic threats,” the paper stated.

But at the same time, much of AI’s creation and evolution has happened thanks to open-source development, which guards against black-box systems and makes companies more accountable while fostering innovation across the economy.

The go-forward answer for AI is not black and white and will certainly require increased cooperation between public and private interests, particularly as ethical issues around data privacy, provenance, and more begin to swirl.