
OpenAI’s Codex Max: Faster AI Coding, Fewer Annoyances

Summary

– OpenAI announced GPT-5.1-Codex-Max, a new AI model for coding that will be available to ChatGPT Plus, Pro, Business, Edu, and Enterprise users.
– The Max model handles larger context windows through compaction, enabling it to work on bigger tasks like complex refactors and run for up to 24 hours.
– It achieves the same performance as the previous model but uses 30% fewer tokens and runs 27% to 42% faster, improving efficiency and extending usage limits.
– Enhanced cybersecurity features include secure sandboxing and better long-horizon reasoning to detect and disrupt malicious activity during extended tasks.
– This is the first OpenAI model trained to operate effectively in Windows environments, improving its collaboration in the Codex CLI for cross-platform development.

OpenAI has launched Codex Max, a significant upgrade to its AI coding model that delivers faster execution, reduced token consumption, and enhanced handling of complex programming tasks. This new version, designated GPT-5.1-Codex-Max, becomes available tomorrow for ChatGPT Plus, Pro, Business, Edu, and Enterprise subscribers, with API access planned for the near future. It replaces the earlier GPT-5.1-Codex as the primary model recommended for agentic coding tasks within Codex and similar environments.

A central improvement involves the model’s capacity to manage much larger workloads. Every AI system operates with a context window, essentially its working memory limit, measured in tokens. While the previous Codex handled substantial projects capably, it could struggle with extremely large inputs, such as extensive crash log dumps. Codex Max overcomes this through compaction, a technique that intelligently compresses parts of the ongoing context when the token limit approaches, allowing the AI to maintain focus over significantly longer tasks.

This compaction process enables Codex Max to work coherently across millions of tokens, supporting extended operations like complex, system-wide code refactoring. OpenAI states the model can manage a single coding task continuously for up to twenty-four hours. Compaction itself isn’t entirely new (competing tools like Claude Code also use it), but OpenAI emphasizes that Max operates effectively across far larger token volumes than previously possible.
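OpenAI has not published the details of its compaction technique, but the general pattern can be sketched as a loop that summarizes the oldest context whenever the token budget is exceeded, keeping recent messages verbatim. Everything below (the whitespace token count, the placeholder `summarize` function, the `keep_recent` threshold) is an illustrative assumption, not Codex Max’s actual implementation:

```python
# Illustrative sketch of context compaction -- NOT OpenAI's actual algorithm.

def count_tokens(text: str) -> int:
    # Real systems use a tokenizer; whitespace splitting is a rough stand-in.
    return len(text.split())

def summarize(messages: list[str]) -> str:
    # Placeholder: a real implementation would ask the model to compress
    # these messages into a short summary that preserves key facts.
    return f"[summary of {len(messages)} earlier messages]"

def compact(history: list[str], budget: int, keep_recent: int = 4) -> list[str]:
    """Compress the oldest messages whenever the history exceeds the budget,
    keeping the most recent messages verbatim so the model stays on task."""
    total = sum(count_tokens(m) for m in history)
    if total <= budget or len(history) <= keep_recent:
        return history
    old, recent = history[:-keep_recent], history[-keep_recent:]
    return [summarize(old)] + recent

history = [f"step {i}: " + "detail " * 50 for i in range(20)]
compacted = compact(history, budget=500)
print(len(history), "->", len(compacted), "messages")
```

In this toy version the compressed summary frees budget for new work while the last few messages stay intact, which mirrors the stated goal: keeping the model focused over tasks far longer than the raw context window allows.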

Performance benchmarks show that Codex Max matches the problem-solving accuracy of its predecessor on the SWE-Bench Verified evaluation, but does so while using 30% fewer “thinking” tokens and achieving speed improvements between 27% and 42% on real-world coding assignments. This efficiency gain means users on token-limited plans, such as the $20 monthly ChatGPT Plus subscription, could enjoy additional productive hours for the same cost.

In practical tests provided by OpenAI, the model consistently generated working code with fewer lines and tokens. For example, one task required only 27,000 tokens and 707 lines of code with Max, compared to 37,000 tokens and 864 lines for the older model, while running 27% faster. Producing functional code with greater conciseness generally leads to programs that are easier to maintain and often execute more efficiently.
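The savings in that example can be checked with simple arithmetic; the raw figures below come directly from the article, and only the derived percentages are computed:

```python
# Token and line counts reported for the example task (from the article).
max_tokens, old_tokens = 27_000, 37_000
max_lines, old_lines = 707, 864

token_savings = (old_tokens - max_tokens) / old_tokens * 100
line_savings = (old_lines - max_lines) / old_lines * 100

print(f"tokens: {token_savings:.0f}% fewer")  # ~27% fewer tokens
print(f"lines:  {line_savings:.0f}% fewer")   # ~18% fewer lines
```

The roughly 27% token reduction on this single task is consistent with the 30% figure OpenAI cites across its benchmark as a whole.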

Security has also been strengthened in this release. Since the debut of GPT-5, OpenAI has integrated cybersecurity monitoring to identify and block malicious activities. Codex Max demonstrates significantly improved performance in sustained, long-horizon reasoning, which contributes to better security outcomes. The model operates within a secure sandbox where file writing is confined to a designated workspace and network access remains disabled by default. OpenAI strongly advises keeping these restrictions active, as enabling internet search could expose the system to prompt-injection attacks from untrusted sources.
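OpenAI has not published how Codex’s sandbox is implemented, but the general pattern of confining writes to a workspace can be illustrated with a path check that rejects any target resolving outside the allowed root. The `safe_write` helper below is hypothetical, a minimal sketch of the idea rather than Codex’s actual mechanism:

```python
# Illustrative workspace-confinement check -- not Codex's actual sandbox.
from pathlib import Path

def safe_write(workspace: Path, relative: str, data: str) -> None:
    """Write only if the resolved target stays inside the workspace,
    rejecting traversal attempts like '../../etc/passwd'."""
    target = (workspace / relative).resolve()
    if not target.is_relative_to(workspace.resolve()):
        raise PermissionError(f"write outside workspace refused: {target}")
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(data)

ws = Path("workspace")
ws.mkdir(exist_ok=True)
safe_write(ws, "notes/todo.txt", "refactor module")  # allowed
try:
    safe_write(ws, "../escape.txt", "oops")          # blocked
except PermissionError as err:
    print("blocked:", err)
```

Production sandboxes enforce this at the operating-system level (e.g. with process isolation) rather than in application code, but the principle is the same: every write is checked against a single permitted root before it happens.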

Another notable advancement is Windows environment support. While Codex has always excelled on macOS, the platform favored by many OpenAI developers, GPT-5.1-Codex-Max is the first model specifically trained to operate effectively in Windows. Training included Windows-specific tasks to make the AI a better collaborator within the Codex CLI. This focus aligns with OpenAI’s deepening partnership with Microsoft.

As Codex Max rolls out, developers can expect to tackle larger projects with improved efficiency and cross-platform capability. The combination of compaction for massive context handling, token economy for cost savings, and specialized Windows training may reshape how coding teams approach extensive development workflows.

(Source: ZDNET)
