Sam Altman today revealed that OpenAI will release an open-weight artificial intelligence model in the coming months.
“We are excited to release a powerful new open-weight language model with reasoning in the coming months,” Altman wrote on X.
Altman said in the post that the company has been thinking about releasing an open-weight model for some time, adding “now it feels important to do.”
The move is partly a response to the runaway success of the R1 model from Chinese company DeepSeek, as well as the popularity of Meta’s Llama models.
OpenAI may also feel pressure to show that it can train models just as cheaply, since DeepSeek’s model was reportedly trained at a fraction of the cost of most large AI models.
OpenAI currently makes its AI available through a chatbot and through the cloud. R1, Llama, and other open-weight models can be downloaded for free and modified. A model’s weights are the numerical values inside a large neural network that are set during training. Open-weight models are cheaper to use and can be tailored for sensitive use cases, like handling highly confidential information.
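For readers unfamiliar with what downloading and running an open-weight model looks like in practice, here is a minimal sketch using the Hugging Face transformers library. The model ID is purely illustrative (OpenAI’s forthcoming model is not yet available), and the exact arguments will vary with your hardware and the license terms of the model you choose.

```python
# Minimal sketch: running an open-weight model on your own hardware with
# the Hugging Face transformers library. The model ID is a placeholder for
# any open-weight checkpoint you have access to.
from transformers import pipeline

# The weights are downloaded once and cached locally; after that, generation
# runs entirely on your own machine with no calls to a cloud API.
generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.1-8B-Instruct",  # illustrative open-weight model
    device_map="auto",                          # spread layers across available GPUs/CPU
)

result = generator(
    "Explain what model weights are in one sentence.",
    max_new_tokens=60,
)
print(result[0]["generated_text"])
```

Because the weights live on local disk, the same checkpoint can be fine-tuned or kept entirely offline, which is what makes open-weight models attractive for confidential workloads.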
Steven Heidel, a member of the technical staff at OpenAI, reposted Altman’s announcement and added “we’re releasing a model this year that you can run on your own hardware.”
OpenAI today also posted a webpage inviting developers to apply for early access to the forthcoming model. Altman said in his post that the company would host events for developers with early prototypes of the new model in the coming weeks.
Meta was the first major AI company to pursue a more open approach, releasing the first version of Llama in July 2023. A growing number of open-weight AI models are now available. Some researchers note that Llama and certain other models are not as transparent as they could be, because the training data and other details remain secret. Meta also imposes a license that limits other companies’ ability to profit from applications and tools built using Llama.