Anthropic Tests Pulling Claude Code From Pro — And Gets an Instant Lesson in Developer Trust


On Tuesday, April 22, developers started comparing notes. The Anthropic support page that used to say “Using Claude Code with your Pro or Max plan” now read “Using Claude Code with your Max plan.” The Pro plan’s feature list had swapped its checkmark for an explicit X next to Claude Code. No announcement. No email. No changelog entry.

Within hours, the backlash was loud enough that Amol Avasare, Anthropic’s head of growth, posted a social media clarification: it was “a small test on ~2% of new prosumer signups.” Existing Pro and Max subscribers, he emphasized, were unaffected.

The test may have been small. The documentation rewrite was comprehensive.

What Actually Changed

The short version: as of April 22, new Pro subscribers ($20/month) appear to lose access to Claude Code. Access now starts at Max 5x — $100/month.

The longer version is that this was probably inevitable. Avasare’s post acknowledged it directly: “when Max launched a year ago, it didn’t include Claude Code, but since then we bundled Claude Code into Max and it took off after Opus 4.” The compute math is straightforward. Every Claude Code session chains together dozens of model calls — context reads, edits, file writes, test runs — across a full agent loop. A single focused coding session can burn what a month of casual Claude chat costs. Offering that at $20/month was, in retrospect, a promotional loss leader.
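That back-of-envelope comparison can be sketched in a few lines. The call counts and per-call token sizes below are illustrative assumptions, not Anthropic's figures; the $5/$25 per million input/output tokens is the Opus list price mentioned later in this piece.

```python
# Back-of-envelope cost of one agentic coding session vs. casual chat.
# All counts below are illustrative assumptions, not measured figures.
INPUT_PRICE = 5 / 1_000_000    # $ per input token (Opus list price)
OUTPUT_PRICE = 25 / 1_000_000  # $ per output token

def session_cost(calls: int, in_tokens: int, out_tokens: int) -> float:
    """Cost of a session of `calls` model calls, each with the given token counts."""
    return calls * (in_tokens * INPUT_PRICE + out_tokens * OUTPUT_PRICE)

# A focused agent session: ~50 chained calls, each rereading a large context.
agent = session_cost(calls=50, in_tokens=40_000, out_tokens=1_500)
# A casual chat turn: one call against a small context.
chat = session_cost(calls=1, in_tokens=2_000, out_tokens=500)

print(f"agent session ≈ ${agent:.2f}, chat turn ≈ ${chat:.2f}")
```

Under these assumptions a single agent session costs several hundred chat turns' worth of compute, which is the whole tension with a $20/month tier.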

The Register noted that Anthropic has “recently struggled to keep up with demand for its AI models,” and that quota exhaustion complaints have been a recurring theme since late 2025. Bundling Claude Code into Pro without adjusting the plan’s compute budget was always going to create friction.

The Real Problem Isn’t the Price

The jump from $20 to $100 is real money, and some developers will legitimately feel priced out. But the market for serious agentic coding tools was never going to live at $20/month indefinitely. At $100, you’re still paying less per month than the cost of a single hour of a mid-level contractor’s time.

The actual problem is the execution pattern. This is not Anthropic’s first quiet change that required community pressure to surface:

  • The “effort” default controversy in mid-April documented a silent downgrade to medium thinking depth that affected 73% of sessions before anyone noticed
  • The OpenClaw ban was announced without a migration path, then partially walked back
  • Quota limits have tightened multiple times without proactive communication to paying users

Developers who depend on Claude Code for production work — people for whom it has genuinely become load-bearing infrastructure — are accumulating a list of instances where Anthropic moved quietly on things that mattered. Trust is not built from benchmark scores. It’s built from predictability, transparency, and treating your users as partners rather than test subjects.

What “A/B Test” Actually Means Here

An A/B test of pricing is a normal, reasonable thing to run. Showing 2% of new sign-ups a different pricing tier to measure conversion and retention is standard product practice.

What is not standard is simultaneously updating every public-facing documentation page to reflect the experimental state as if it were the new default. When the support documentation, the pricing page, and the feature comparison table all change — while the communication strategy is “say nothing and see what happens” — that’s not an A/B test. That’s a rollout with a plausible deniability clause.

Developers noticed because they pay close attention to tools they depend on. The Wayback Machine is public. Changelog diffs are visible. The community watches.
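That kind of watchdogging takes one call to a public endpoint. As a minimal sketch (the support-page URL in the usage comment is a placeholder, not an exact path), the Internet Archive's availability API returns the archived snapshot of a page closest to a given date:

```python
# Query the Internet Archive's public Wayback availability API for the
# snapshot of a page closest to a given date.
import json
import urllib.parse
import urllib.request

API = "https://archive.org/wayback/available"

def availability_query(url: str, timestamp: str) -> str:
    """Build the API query URL for a page and a YYYYMMDD timestamp."""
    return f"{API}?{urllib.parse.urlencode({'url': url, 'timestamp': timestamp})}"

def closest_snapshot(url: str, timestamp: str):
    """Return the archived snapshot URL closest to the timestamp, or None."""
    with urllib.request.urlopen(availability_query(url, timestamp)) as resp:
        data = json.load(resp)
    return data.get("archived_snapshots", {}).get("closest", {}).get("url")

# Example (requires network; placeholder URL):
# closest_snapshot("https://support.anthropic.com/en/articles/...", "20250421")
```

Diffing a pre-change snapshot against the live page is exactly how quiet documentation edits get surfaced.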

The Max Plan Is Actually the Right Home for Claude Code

Setting aside the communication failure, the pricing structure that emerges from this test makes sense. Claude Code is not a chat add-on. It is a software development platform that:

  • Runs multi-step agent loops across entire codebases
  • Invokes Claude Opus 4.7 (which is itself priced at $5/$25 per million tokens) for reasoning-heavy tasks
  • Integrates with MCP servers, local tools, CI/CD pipelines, and git worktrees
  • Requires persistent context and multi-session management

The Max plan at $100/month includes higher usage limits, Opus access by default, and the enterprise-adjacent features (auto mode, xhigh effort, Routines) that serious users actually want. For anyone using Claude Code more than a few hours per week, the Max plan is not an upsell — it’s the right product.

The Max 20x tier at $200/month exists for teams that need it. Neither of those price points is unreasonable for a tool that demonstrably ships production software.

The Pro plan at $20 was always better understood as Claude-the-assistant: chat, analysis, writing, document work. Bundling a full agentic coding environment into that tier was generous. It was probably also strategic — to grow adoption. That strategy worked. Claude Code now represents a significant fraction of Anthropic’s revenue, and the community it built is large and vocal enough to notice when pricing pages change overnight.

What to Watch

Avasare’s framing as a test implies a decision hasn’t been finalized. A few scenarios worth tracking:

If the change sticks: Anthropic is signaling that Claude Code is a $100+ product. Developers who’ve been casual users will have to decide whether it earns that spend. For most professionals who are actually shipping with it, it does.

If it gets rolled back: Anthropic faces the harder question of how to make the economics work at $20 while demand scales. Tighter per-session limits on Pro are the obvious answer — but they require the kind of transparent communication that hasn’t been Anthropic’s strong suit lately.

The competitive angle: Every hour Claude Code is associated with pricing uncertainty is an hour that OpenAI Codex Desktop, GitHub Copilot Autopilot Mode, and Cursor 3 are running their own marketing. The developer audience watching this episode is the same one those products want to convert.

Anthropic has the best agentic coding product on the market. The Stanford AI Index data, the SWE-bench results, the JetBrains survey loyalty metrics — they all point the same direction. The company’s biggest recurring risk is not a competitor closing the technical gap. It’s an accumulation of quiet-change incidents that erode the trust of the developers who chose Claude Code precisely because they wanted something better.

The pricing test, in isolation, is defensible. The pattern surrounding it is what needs to change.


Sources: The Register, Ed Zitron / Where’s Your Ed At, The New Stack, AIToolly, XDA Developers, Startup Fortune
