Early this morning, OpenAI announced GPT-5-Codex, a new model optimized for agentic coding, and it is now live in Codex, the company's AI programming assistant. The model's headline upgrade is dynamic compute allocation: it can flexibly scale its "thinking time" from a few seconds up to seven hours, which translates into improved performance on coding benchmarks.
Unlike earlier "routing" mechanisms, which predict a task's complexity up front, GPT-5-Codex adjusts its compute budget in real time while a task is running. For example, five minutes into a job it may decide that another hour of processing is needed, and on certain complex tasks it can extend this to seven hours. This mechanism gives it a clear advantage in scenarios such as refactoring large codebases, and test data show it outperforming the standard GPT-5 on tasks such as SWE-bench Verified.
GPT-5-Codex is currently available to ChatGPT Plus, Pro, Business, Edu, and Enterprise users across the terminal, IDEs, GitHub, and the ChatGPT platform, with API access planned for a later date. OpenAI emphasized that the model was specifically trained for code review: in evaluations by experienced engineers, it showed a lower error rate and provided more valuable code feedback.
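For developers planning ahead for that API access, a request would presumably go through OpenAI's standard Python SDK. The snippet below is only a minimal sketch under that assumption; the model identifier "gpt-5-codex" and its availability in the Responses API are not confirmed by the announcement.

```python
# Minimal sketch: calling GPT-5-Codex via OpenAI's Python SDK once API access opens.
# The model name "gpt-5-codex" is an assumption; check OpenAI's model list for the
# actual identifier when API availability is announced.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-5-codex",  # assumed identifier, not confirmed at time of writing
    input=(
        "Refactor this function to return an empty string instead of None "
        "when the file is missing:\n"
        "def load(path):\n"
        "    try:\n"
        "        return open(path).read()\n"
        "    except OSError:\n"
        "        return None\n"
    ),
)

print(response.output_text)  # the model's reply as plain text
```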
Product head Alexander Embiricos said that dynamic thinking is at the heart of this breakthrough and makes the model particularly well suited to solving complex programming problems. With the model's launch, developer tools are expected to reach a new level of efficiency and intelligence.