Anthropic Cuts OpenAI’s Access to Claude AI

Summary
– Anthropic revoked OpenAI’s API access to its models, citing terms of service that prohibit customers from using the API to build competing products or reverse-engineer its services.
– OpenAI had been using Claude’s API to test its coding and safety capabilities against OpenAI’s own models, including the upcoming GPT-5, which is rumored to excel at coding.
– Anthropic says it will continue to allow OpenAI API access for benchmarking and safety evaluations, despite the broader restriction.
– Revoking competitor API access is a common industry tactic, with past examples involving Facebook, Salesforce, and Anthropic itself (e.g., blocking Windsurf).
– Anthropic recently imposed rate limits on Claude Code due to high usage and ToS violations, signaling stricter enforcement of its policies.

Anthropic has abruptly terminated OpenAI’s API access to its Claude AI models, marking a significant escalation in tensions between the two leading artificial intelligence firms. Multiple industry sources confirm the access was revoked after OpenAI allegedly breached Anthropic’s terms of service by using Claude’s capabilities to develop competing technologies.
The decision comes amid speculation about OpenAI’s upcoming GPT-5 release, which is expected to feature advanced coding functionalities. Anthropic’s spokesperson emphasized that using Claude to train or benchmark rival AI systems violates explicit contractual restrictions. “Our tools are designed to assist developers, not fuel direct competitors,” the company stated.
Reports indicate OpenAI had been leveraging Claude’s API for internal testing, analyzing its performance in coding, creative tasks, and safety protocols. This benchmarking process, while common in AI development, crossed into prohibited territory when used to refine OpenAI’s own models. Anthropic clarified it will still permit standardized safety evaluations, but broader access remains suspended.
The move mirrors past tech industry clashes in which companies cut rivals off from critical APIs; Facebook’s block on Vine and Salesforce’s restrictions on Slack data access are notable examples. Anthropic previously took similar action against AI startup Windsurf when rumors surfaced that OpenAI planned to acquire it.
Just before cutting OpenAI’s access, Anthropic imposed stricter usage limits on Claude Code, citing unprecedented demand and policy violations. The timing suggests a broader strategy to protect proprietary advancements as competition intensifies in generative AI. OpenAI responded by underscoring its commitment to open benchmarking while expressing disappointment over the restriction.
The dispute highlights the fine line between collaboration and competition in AI development, where access to cutting-edge tools can dictate market leadership. With both firms racing to dominate next-generation AI, such conflicts may become increasingly frequent as they guard their technological edges.
(Source: Wired)