# Trust

- [Three Bugs, Six Weeks, One Lesson: Anthropic's Claude Code Postmortem](https://sdd.sh/2026/05/three-bugs-six-weeks-one-lesson-anthropics-claude-code-postmortem.md): On April 23, Anthropic published an engineering postmortem admitting that three overlapping changes had caused weeks of Claude Code quality degradation. All three were caught by user complaints, not internal evals. The story matters less for what it says about three bugs than for what it reveals about the risks of depending on black-box AI infrastructure.
- [Anthropic's Silent 'Effort' Default: A Reasonable Decision, a Transparency Failure](https://sdd.sh/2026/04/anthropics-silent-effort-default-a-reasonable-decision-a-transparency-failure.md): On March 3, Anthropic changed Claude Opus 4.6's default effort level to 'medium' without telling users. An AMD executive's analysis of 6,852 sessions showed a 73% drop in visible thinking depth. Fortune, VentureBeat, and The Register covered the fallout. Here is what actually changed, why Anthropic did it, and what it means for developers who depend on Claude Code for serious work.
- [84% of Developers Use AI Code Tools. Only 29% Trust What They Ship.](https://sdd.sh/2026/04/84-of-developers-use-ai-code-tools.-only-29-trust-what-they-ship..md): Stack Overflow's developer survey exposed a paradox: AI coding tool adoption is at an all-time high, but trust in AI-generated code just hit an all-time low. The gap isn't irrational — it's diagnostic. And it points directly to what's broken about the autocomplete paradigm.
