Meta said its latest artificial intelligence (AI)-related venture will help developers write code.
“Today, we’re releasing Code Llama, a large language model (LLM) that can use text prompts to generate and discuss code,” the social media giant wrote in a Thursday (Aug. 24) blog post. “Code Llama is state-of-the-art for publicly available LLMs on coding tasks. It has the potential to make workflows faster and more efficient for developers and lower the barrier to entry for people who are learning to code. Code Llama has the potential to be used as a productivity and educational tool to help programmers write more robust, well-documented software.”
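The announcement describes a model driven by plain-language text prompts rather than a conventional API. As a purely illustrative sketch, the snippet below shows the kind of prompt a developer might construct for a code-generating LLM; the framing convention here is an assumption for illustration, not Meta's documented prompt format for Code Llama.

```python
# Illustrative only: LLMs like Code Llama are steered with natural-language
# text prompts. This helper frames a coding task as such a prompt; the
# wording template is a hypothetical convention, not Meta's official format.

def build_code_prompt(task: str, language: str = "Python") -> str:
    """Frame a natural-language task as a code-generation prompt."""
    return f"Write a {language} function that does the following:\n{task}\n"

# Example: a developer asks for a string-reversal helper.
prompt = build_code_prompt("reverse a string without using slicing")
print(prompt)
```

In practice, a prompt like this would be sent to the model (for example, through an inference library or hosted endpoint), which would return generated code for the developer to review.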
The launch marks the Facebook owner’s latest push into the AI space, joining other companies trying to capitalize on the technology.
“We believe an open approach is the right one for the development of today’s AI models, especially those in the generative space where the technology is rapidly advancing,” the company said on its blog. “By making AI models available openly, they can benefit everyone.”
Meta said this open approach is safer, as providing access to today’s AI models lets developers and researchers stress test them and collectively spot and solve problems faster.
“By seeing how these tools are used by others, our own teams can learn from them, improve those tools, and fix vulnerabilities,” it said in the July blog post.
Earlier this year, Meta made LLaMA (Large Language Model Meta AI) available to researchers working in the AI field. LLaMA requires less computing power than larger models and was created for researchers with limited access to infrastructure.
This month, PYMNTS examined the debate surrounding open-source and closed-source AI models following the release of tools from several tech giants.
“As one might expect, under the closed-source model, source code is not released to the public, whereas under open-source, also referred to as the free and open-source software (FOSS) model, the source code is openly shared so that people are encouraged to voluntarily improve its design and function,” that report said.
While open-source AI offers greater interoperability, customization and integration with third-party software or hardware, this openness could also allow abuse by bad actors.