Claude AI Now Supports Longer Prompts for Better Responses

Summary
– Anthropic has expanded Claude Sonnet 4’s context window to 1 million tokens, allowing it to process requests as long as 750,000 words or 75,000 lines of code, surpassing OpenAI’s GPT-5.
– The update is available for enterprise API customers and through cloud partners like Amazon Bedrock and Google Cloud’s Vertex AI, targeting AI coding platforms.
– Anthropic’s business focuses on selling AI models to enterprises, with Claude being popular among developers, though GPT-5 poses competition with its pricing and performance.
– Larger context windows improve AI performance, especially for coding tasks, but effectiveness may plateau; Anthropic emphasizes “effective context” over sheer size.
– Anthropic will charge higher rates for prompts exceeding 200,000 tokens, at $6 per million input tokens and $22.50 per million output tokens.
Anthropic has significantly expanded Claude AI’s capabilities, now allowing enterprise users to submit prompts up to 1 million tokens, equivalent to roughly 750,000 words or 75,000 lines of code. This major upgrade positions Claude Sonnet 4 ahead of competitors like OpenAI’s GPT-5, which currently supports a 400,000-token context window. The enhanced capacity is particularly valuable for developers working with extensive datasets or complex coding projects.
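Using the article's rough equivalence (1 million tokens is about 750,000 words), a quick back-of-the-envelope check can estimate whether a document fits in the new window. The 0.75 words-per-token ratio below is derived directly from those figures and is only an approximation; real tokenization varies by content.

```python
WORDS_PER_TOKEN = 0.75  # derived from the article's 1M tokens ≈ 750,000 words

def estimated_tokens(word_count: int) -> int:
    """Crude token estimate from a word count, using the article's ratio."""
    return round(word_count / WORDS_PER_TOKEN)

def fits_in_window(word_count: int, window_tokens: int = 1_000_000) -> bool:
    """Check whether a document of the given length fits the 1M-token window."""
    return estimated_tokens(word_count) <= window_tokens

print(fits_in_window(750_000))  # True: right at the ~1M-token ceiling
print(fits_in_window(900_000))  # False: ~1.2M tokens, over the limit
```

For comparison, the same estimate puts GPT-5's 400,000-token window at roughly 300,000 words.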
The expanded context window enables Claude to process and retain far more information in a single interaction, making it especially useful for long-term coding tasks where maintaining project continuity is critical. Unlike consumer-focused AI models, Anthropic primarily targets enterprise clients through its API, with major platforms like GitHub Copilot and Cursor relying on Claude’s capabilities.
While some competitors boast even larger context windows (Google’s Gemini 2.5 Pro handles 2 million tokens, and Meta’s Llama 4 Scout claims 10 million), studies suggest diminishing returns beyond a certain point. Anthropic emphasizes that Claude’s strength lies not just in raw capacity but in its “effective context window”: the amount of input the model can actually comprehend and use, rather than merely accept.
Pricing adjustments accompany the upgrade, with API costs increasing for prompts exceeding 200,000 tokens. Input tokens now cost $6 per million, while output tokens are priced at $22.50 per million, reflecting the higher computational demands of processing larger datasets.
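As a rough illustration of the premium tier described above, the sketch below estimates the cost of a single long-context request using the rates quoted in this article. The 200,000-token threshold and per-million rates come from the article; everything else (function name, example token counts) is illustrative only, and standard-tier pricing for smaller prompts is not modeled here.

```python
LONG_CONTEXT_THRESHOLD = 200_000  # input tokens above which premium rates apply

# Premium rates (USD per million tokens), as reported in the article.
PREMIUM_INPUT_RATE = 6.00
PREMIUM_OUTPUT_RATE = 22.50

def long_context_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of one request billed at the long-context rates.

    Applies only when the prompt exceeds the 200K-token threshold;
    smaller prompts are billed at standard (lower) rates not shown here.
    """
    if input_tokens <= LONG_CONTEXT_THRESHOLD:
        raise ValueError("Prompt is in the standard tier; premium rates do not apply.")
    return (input_tokens / 1_000_000) * PREMIUM_INPUT_RATE \
         + (output_tokens / 1_000_000) * PREMIUM_OUTPUT_RATE

# Example: a full 1M-token prompt with a 100K-token reply:
cost = long_context_cost(1_000_000, 100_000)
print(f"${cost:.2f}")  # $6.00 input + $2.25 output = $8.25
```

Note that output tokens cost nearly four times as much as input tokens at this tier, so long generated responses dominate the bill even for very large prompts.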
The move comes as competition in the AI space intensifies, with OpenAI’s GPT-5 gaining traction among developers. Despite this, Anthropic remains confident in its enterprise-focused approach, recently enhancing its flagship model, Claude Opus 4.1, to further solidify its position in AI-assisted coding.
For businesses handling large-scale projects, Claude’s expanded context window could streamline workflows, reducing the need to split work into fragmented queries and improving accuracy in long-horizon tasks. As AI models continue evolving, effective context utilization may prove just as critical as raw capacity in determining real-world performance.
(Source: TechCrunch)