The expanded partnership, announced Tuesday (July 11), will help Shutterstock “solidify its position as a leading provider of high-quality training data for OpenAI models,” the stock image, video and music provider said in a news release.
With the new collaboration, OpenAI — creator of the artificial intelligence (AI) chatbot ChatGPT — has secured a license to access additional Shutterstock training data from the company’s library.
Shutterstock, meanwhile, gets priority access to OpenAI’s latest technology.
The companies say they will work together to bring generative AI capabilities to mobile users through the Giphy platform, acquired by Shutterstock from Meta in May.
“We’re pleased to be able to license Shutterstock’s high-quality content library,” said Brad Lightcap, chief operating officer of OpenAI.
“This extended collaboration not only enhances the capabilities of our image models but also empowers brands, digital media, and marketing companies to unlock transformative possibilities in content creation and ideation.”
Shutterstock is no stranger to the AI world. Its partnership with OpenAI is two years old, and earlier this year, the company released its AI image generator, which lets users instantly create customized visuals.
The partnership expansion with OpenAI was one of several AI-related announcements Tuesday as businesses increasingly integrate the technology into their products and services.
But as PYMNTS wrote last week, the cost of running and training AI solutions is often higher than companies realize.
“Training generative AI requires either owning or renting time on hardware, significant data storage needs and intensive energy consumption,” the report, published July 8, said. “The cost of simply training OpenAI’s GPT-3 — the version before the one employed in ChatGPT — was more than $5 million.”
Still, there has been some progress in making generative AI more accessible to companies. One method developed at the Massachusetts Institute of Technology (MIT) claims to cut the cost of training a large language model (LLM) by 50% while also halving training time.
Meanwhile, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory have also explored the idea of smaller, specialized LLMs to reduce cost and boost efficiency.
“Limiting the data set that a model is working from not only allows it to outperform models with 500 times as many parameters but also promises to address some privacy and accuracy concerns,” PYMNTS wrote.