
The Tech Industry Can’t Agree on What Open-Source AI Means. That’s a Problem

March 29, 2024

By: Edd Gent (MIT Technology Review)

Suddenly, “open source” has become the latest buzzword in AI circles. Meta has committed to developing open-source artificial general intelligence, while Elon Musk is suing OpenAI over its lack of open-source AI models.

Simultaneously, an increasing number of tech leaders and companies are positioning themselves as advocates for open source.

However, a fundamental issue arises: there is no consensus on the definition of “open-source AI.”

On the surface, open-source AI holds the promise of a future where anyone can participate in its development, potentially accelerating innovation, enhancing transparency, and giving users greater control over systems poised to affect many aspects of society. But what exactly constitutes open-source AI, and what attributes disqualify an AI model from that label?

These questions carry significant implications for the technology’s future. Without a universally agreed-upon definition, powerful companies could exploit the concept to align with their own interests, potentially reinforcing the dominance of current industry leaders.

Stepping into this debate is the Open Source Initiative (OSI), which positions itself as the authority on what qualifies as open source. Established in 1998, this nonprofit organization oversees the Open Source Definition, a widely recognized set of criteria determining whether software can be labeled as open source.

The OSI has now convened a diverse group of 70 participants, including researchers, legal experts, policymakers, activists, and representatives from major tech firms such as Meta, Google, and Amazon, to hammer out a working definition of open-source AI.
