Despite its name, OpenAI can’t claim to be open. All recent language models from the ChatGPT team are closed and proprietary, something for which it has been widely criticised, even by Elon Musk, one of its founders and now a direct rival.
But the company seems poised to change that narrative. Its current CEO, Sam Altman, has announced that OpenAI will release in the coming months a new open-weight language model that can be run locally, rather than exclusively on the company’s servers. The last model OpenAI released with these features was GPT-2, six years ago.
Altman made the announcement in a post on X, noting that the move had been under consideration for a long time and is now seen as a priority.
TL;DR: we are excited to release a powerful new open-weight language model with reasoning in the coming months, and we want to talk to devs about how to make it maximally useful: https://t.co/XKB4XxjREV
we are excited to make this a very, very good model!
— Sam Altman (@sama) March 31, 2025
“We’re planning to release our first open weights language model since GPT-2,” Altman wrote. “We’ve been thinking about this for a long time, but other priorities took precedence. Now we feel it’s important to do so.”
‘Open-Weight’, but Not Open-Source
Unlike an open-source model—where developers have access to the source code, training data, and the complete architecture—an open-weight model offers more limited openness.
This approach lets users fine-tune the model to their needs, but not rebuild it from scratch, since they won’t have access to essential components such as the original dataset or full architectural details.
The strategy marks a significant shift from OpenAI’s more conservative stance in recent years, during which its most powerful models have been accessible only through APIs.
The decision comes amid growing competition in the world of artificial intelligence, with rivals such as Meta and its Llama family of models; Google, with its recent Gemma models and Gemini 2.5, reportedly its most advanced model yet; and the Chinese lab DeepSeek, whose latest open-source model, DeepSeek-V3-0324, has easily surpassed OpenAI’s o1.
Altman acknowledges internal debate at OpenAI
The move hasn’t been entirely unanimous within the company. In a recent Reddit thread, reported by Decrypt, Altman acknowledged that opinions within OpenAI are divided on the open-source approach.
“Yes, we are discussing (releasing some model weights and publishing some research),” he wrote. “Personally, I think we’ve been on the wrong side of history in this regard and need a different open-source strategy. Not everyone at OpenAI shares this view, and it’s not our top priority right now.”
Despite this, Altman made it clear that the decision to move forward with an open-weight model reflects a need to reconnect with the community and adapt to the new competitive environment.
As part of its collaborative approach, OpenAI has posted a form on its website to gather developer feedback and plans to host in-person events in the coming weeks. The first sessions will be in San Francisco, followed by meetups in Europe and the Asia-Pacific region.
“We are excited to collaborate with developers, researchers, and the broader community to gather input and make this model as useful as possible,” OpenAI wrote in its statement.
Steven Heidel, a member of the OpenAI API team, also revealed that this new model will be able to run on local hardware. This feature will offer users a greater degree of autonomy, in contrast to the current dependence on cloud services or proprietary APIs.
Seeking greater competitiveness
Despite the enthusiasm generated, OpenAI hasn’t provided any key details about the model. The parameter count, the size of the context window, the training methods used, and the exact license under which it will be released all remain unknown.
That last point is particularly relevant: depending on the conditions, there could be significant restrictions, such as a ban on reverse engineering or fine-tuning in certain countries.
Altman indicated that the model will have reasoning capabilities comparable to o3-mini, which would make it the most powerful reasoning model within the category of open-weight models. This would give it a competitive advantage even against DeepSeek R1, which recently garnered attention for its performance.
As Altman acknowledged on Reddit, the model is in any case not the top priority for OpenAI, which just closed a new funding round valuing the company at $300 billion and plans to continue developing proprietary models. The company is seeking input and suggestions from researchers and developers to decide what features the model will ultimately have, and plans to launch it “in the coming months.”
The funding round comes at a sweet spot for the company. Following the launch of GPT-4o’s new image-generation capabilities, millions of users flooded social media with ChatGPT-generated illustrations in the style of Studio Ghibli, the famous Japanese animation studio.
The tool’s user growth skyrocketed, registering more than a million new users in just one hour and forcing OpenAI to impose limits on free accounts due to the burden that creating these images placed on the company’s servers.