---
title: "Anthropic's Silent 'Effort' Default: A Reasonable Decision, a Transparency Failure"
date: 2026-04-16
tags: ["Claude Code","Anthropic","Opus 4.6","Performance","Trust","Adaptive Thinking"]
categories: ["AI Tools","Industry"]
summary: "On March 3, Anthropic quietly changed Claude Opus 4.6's default effort level to 'medium' without telling users. An AMD engineer's analysis of 6,852 sessions showed a 73% drop in visible thinking depth. Fortune, VentureBeat, and The Register covered the fallout. Here is what actually changed, why Anthropic did it, and what it means for developers who depend on Claude Code for serious work."
---


On April 13, 2026, a GitHub issue went viral in the AI tools community. Stella Laurenzo, a machine learning engineer at AMD, posted an analysis of 6,852 Claude Code session files, 17,871 thinking blocks, and 234,760 tool calls, and the picture was damning. Median visible thinking length had dropped from 2,200 characters in January to 600 characters in March. Some tasks showed up to 80 times more API retries. Claude's behavior had shifted from "research first, then edit" to "edit first, ask questions later."
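Analyses like Laurenzo's can be reproduced from local session logs. The sketch below computes median visible thinking length across a directory of session files. The JSONL schema it assumes (one JSON object per line, with a `content` list holding `thinking` blocks) is hypothetical, so adjust it to whatever your Claude Code version actually writes:

```python
import json
import statistics
from pathlib import Path

def median_thinking_length(session_dir: str) -> float:
    """Median character length of visible thinking blocks across
    all .jsonl session files in a directory.

    The record shape assumed here -- {"content": [{"type": "thinking",
    "thinking": "..."}]} -- is a guess at the log format, not a
    documented schema. Adapt it to your own session files.
    """
    lengths = []
    for path in Path(session_dir).glob("*.jsonl"):
        for line in path.read_text().splitlines():
            if not line.strip():
                continue
            record = json.loads(line)
            for block in record.get("content", []):
                if block.get("type") == "thinking":
                    lengths.append(len(block.get("thinking", "")))
    return statistics.median(lengths) if lengths else 0.0
```

Tracking this number over time, bucketed by month, is enough to surface the kind of January-to-March drop described above.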

The thread blew up. Fortune covered it. VentureBeat ran "Is Anthropic nerfing Claude?" The Register reported that Claude was getting worse — according to Claude itself, apparently.

The core complaint was not just that performance had degraded. It was that nobody had been told.

## What Actually Changed

In early February 2026, Anthropic introduced "adaptive thinking" to Claude Opus 4.6. Instead of applying a fixed reasoning budget to every query, the model would decide how much thinking to apply based on the task. A simple request gets a light touch. A complex refactor gets extended reasoning. The idea is resource efficiency — don't spend 2,000 tokens thinking through "what is 2+2."

That part was announced and documented.
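Conceptually, adaptive thinking is a router from request complexity to reasoning budget. The real routing happens inside the model; the toy heuristic below is purely illustrative, with invented signals and thresholds, but it captures the shape of the mechanism:

```python
def choose_effort(prompt: str) -> str:
    """Map a request to an effort tier from rough complexity signals.

    Purely illustrative: these keywords and thresholds are invented
    for this sketch and are not Anthropic's actual routing logic.
    """
    heavy_signals = ("refactor", "architect", "design", "migrate", "debug")
    text = prompt.lower()
    # Longer prompts suggest more context to reason over.
    score = len(text.split()) / 50
    # Certain task types reliably need deeper reasoning.
    score += sum(2 for signal in heavy_signals if signal in text)
    if score >= 2:
        return "high"    # e.g. a multi-file refactor
    if score >= 0.5:
        return "medium"
    return "low"         # e.g. "what is 2+2"
```

The point of the March change was not this routing itself but where the ceiling sits: with the default at "medium," even requests the router judges complex get less headroom than before.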

On March 3, 2026, Anthropic did something else: they changed the default effort level for Opus 4.6 to "medium" (internally, effort level 85 out of 100). This was not in a changelog. It was not in a release note. Boris Cherny — Anthropic's executive lead on Claude Code — eventually explained the decision publicly after the April 13 backlash: Claude had been consuming too many tokens per task, users had been complaining about that, and medium effort was the best balance across intelligence, latency, and cost for most users.

That explanation is probably correct. It is also completely beside the point.

## The Transparency Problem

The complaint that matters is not "Anthropic made Claude worse." The complaint is "Anthropic changed the behavior of a tool that developers have built workflows around, and didn't say so."

This distinction is important because it determines what kind of problem this is. If the issue is pure quality degradation, the fix is improving the model. If the issue is communication, the fix is better process. The evidence suggests it is primarily the second.

Here is why the communication failure hit so hard. Developers building on Claude Code do not experience it as a product with version numbers and changelogs — they experience it as a partner with a certain personality and work style. When that work style changes, the mental model breaks. Is Claude worse? Did I change something in my prompts? Is there a problem with my CLAUDE.md file? Am I hitting rate limits? The invisible nature of the change made debugging impossible.

A developer who sees a new model version knows to re-benchmark. A developer who sees no version change but degraded output has no signal to act on. That is a trust problem, not a performance problem.

## The Compounding Factor

The March effort change did not happen in isolation. It came on top of adaptive thinking (February), which itself had already shifted Claude's visible reasoning in ways that users noticed. Then Claude experienced service disruptions in April — including a significant outage on April 15 that generated thousands of user complaints about login failures and degraded output.

When multiple things change or break in close succession, users cannot attribute degradation to any single cause. The cumulative effect is a generalized distrust: Claude is less reliable than it used to be, for reasons that are not fully explained. The Laurenzo analysis was compelling precisely because it provided data that confirmed what developers had been sensing for months.

Anthropic also compounded the problem by being slow to respond. By the time Cherny's explanation appeared, the narrative had already calcified. Users who had spent weeks debugging their Claude Code setups — adjusting prompts, restructuring CLAUDE.md files, switching models, blaming themselves — were not receptive to "we made a reasonable product decision."

## What Anthropic Should Have Done

The decision itself was defensible. An effort level of 85 is probably the right default for most tasks. Token consumption had been a genuine problem: Anthropic publicly acknowledged in early April that users were hitting usage limits "way faster than expected." Managing that is a legitimate product concern.

The correct approach was a changelog entry and a user-facing setting. Something like: "We've changed the default effort level to medium to optimize for token efficiency. Power users running complex autonomous tasks can set effort to high in their settings, or use `/think` for per-query extended reasoning."

One sentence. One documented setting. The trust gap does not open.

Instead, users discovered the change through statistical analysis of their session data. That is not how you treat developers who are depending on your tool to run production workflows.

## What Power Users Can Do Now

The good news is that the effort level is not locked. There are several ways to get more reasoning depth when you need it:

**Per-query extended thinking**: Use `/think` in Claude Code to trigger extended reasoning on a specific task. This works well for complex, high-stakes tasks where you want Claude to work through the problem carefully before acting.

**Effort setting**: In Claude Code settings, you can set `effort: high` to override the medium default globally. This increases token consumption, but if you are on Max or Enterprise and need the depth, it is worth it. Be aware that this affects your usage limits.

**Ultraplan**: For complex planning tasks, `/ultraplan` spins up a dedicated Opus 4.6 session with extended compute specifically allocated for planning. If you are architecting a major refactor or designing a system, this is the right tool rather than fighting the default effort level in a standard session.

**Well-scoped prompts**: Adaptive thinking uses the complexity of the request as a signal. Vague prompts get lighter thinking. Specific, complex prompts with clear constraints get more. This was always true, but it matters more now that the default effort ceiling is lower.
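If you want the high-effort override to survive restarts, you can pin it in your settings file rather than relying on whatever the current default happens to be. The sketch below assumes a JSON settings file and an `effort` key, both inferred from the setting described above rather than confirmed against a documented schema; check your Claude Code version's settings reference before relying on it:

```python
import json
from pathlib import Path

def set_effort(settings_path: str, level: str = "high") -> None:
    """Pin the effort level explicitly in a JSON settings file.

    The "effort" key name is an assumption based on the setting
    described in this post; verify it against your Claude Code
    version's documented settings schema. Other existing settings
    in the file are preserved.
    """
    path = Path(settings_path)
    settings = json.loads(path.read_text()) if path.exists() else {}
    settings["effort"] = level
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(settings, indent=2))
```

For Claude Code, the user-level settings file conventionally lives at `~/.claude/settings.json`; the broader point is that an explicit, version-controlled setting is the only configuration a silent default change cannot move out from under you.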

## The Bigger Concern

Beyond this specific incident, there is a structural issue worth naming. As AI tools become more central to how developers work, the standards for how vendors communicate changes need to increase, not decrease. A model behavior change that affects thousands of production workflows is functionally a breaking change. It deserves the same communication standards as a breaking API change.

This is not a niche concern. The JetBrains survey from April 13 showed Claude Code growing 6x in adoption over eight months. The Pragmatic Engineer survey found it the most-loved tool among professional developers. As the user base grows, so does the number of people who have built genuine workflow dependencies on Claude's behavior.

Anthropic knows this. Their response to the backlash has been measured — acknowledging the communication failure, explaining the reasoning, and committing to more transparency about effort defaults going forward. That is the right posture.

But the episode is a useful reminder that even the best AI tools are only as trustworthy as the vendor's operational practices. Capability without transparency is a fragile foundation for infrastructure.

---

**Sources**

- [Is Anthropic 'nerfing' Claude? Users increasingly report performance degradation as leaders push back — VentureBeat](https://venturebeat.com/technology/is-anthropic-nerfing-claude-users-increasingly-report-performance)
- [Anthropic is facing a wave of user backlash over reports of performance issues with its Claude AI chatbot — Fortune](https://fortune.com/2026/04/14/anthropic-claude-performance-decline-user-complaints-backlash-lack-of-transparency-accusations-compute-crunch/)
- [Claude is getting worse, according to Claude — The Register](https://www.theregister.com/2026/04/13/claude_outage_quality_complaints/)
- [Claude code performance under scrutiny after viral 67% drop claim — Cryptonomist](https://en.cryptonomist.ch/2026/04/13/claude-code-performance/)
- [Anthropic admits Claude Code users hitting usage limits 'way faster than expected' — DevClass](https://www.devclass.com/ai-ml/2026/04/01/anthropic-admits-claude-code-users-hitting-usage-limits-way-faster-than-expected/5213575)
- [Claude Code Drama: 6,852 Sessions Prove Performance Collapse — Scortier Substack](https://scortier.substack.com/p/claude-code-drama-6852-sessions-prove)

