
# Claude Code FAQ: Everything You'd Actually Ask

Florent Clairambault · CTO & Software engineer

This is the question-and-answer compendium for Claude Code — Anthropic’s terminal-native agentic coding tool. Written from a year of using it, watching it, and writing about it. Updated for the May 2026 state of the world: Claude Opus 4.7 as the default model, Claude Code on Max plans only (Pro removal April 22), Routines and Code Review GA, Bedrock + Mantle for enterprise.

If a question you have isn’t here, open an issue on GitHub and we’ll add it.

## Basics

### What is Claude Code?

Claude Code is an agentic coding tool built by Anthropic that runs in your terminal. You describe what you want; it plans the work, edits files, runs shell commands, executes tests, fixes failures, and iterates until the task is done — with as much or as little human supervision as you choose.

The architectural distinction is “terminal-native, not editor-embedded.” Cursor, Copilot, and Windsurf live inside an IDE and ask for approval at every step. Claude Code lives in your shell and runs autonomously when you let it. That difference is the whole product. See the complete deep dive for the long version.

### How is Claude Code different from Cursor, Copilot, or Windsurf?

The other tools are AI-assisted coding inside an editor. Claude Code is AI-autonomous coding from a terminal. The editor-embedded tools optimize for “the human is in every loop, the AI helps.” Claude Code optimizes for “the AI does the loop, the human reviews the result.”

Concretely: you can hand Claude Code a multi-hour refactor, walk away, and come back to a finished branch with passing tests and a commit log. You cannot do that with Cursor or Copilot — their architecture assumes you stay in the seat. See Cursor vs Copilot vs Claude Code vs Windsurf for the full comparison and GitHub Copilot CLI Goes GA for why Microsoft eventually agreed.

### Do I need a Pro or Max subscription?

As of April 22, 2026, Claude Code requires a Max plan for new users. The removal from the $20 Pro plan began as an A/B test and is now permanent for new signups; existing Pro subscribers retained access. Max comes in two tiers, Max 5x ($100/month) and Max 20x ($200/month), where the multiplier refers to usage allowance relative to the legacy Pro tier.

Why the change: Claude Code’s compute costs were unsustainable at $20. The Max tier is the actual home for serious agentic coding. Background and analysis: Anthropic Tests Pulling Claude Code from Pro.

API users (pay-as-you-go) and enterprise (Teams, Bedrock, Vertex) have separate billing paths and are unaffected.

### What’s the difference between Claude Code, the Claude API, and Claude.ai?

Three distinct products:

  • Claude.ai is the chat interface at claude.ai. Conversational, web-based, no code execution.
  • Claude API (developer platform) is the raw model API. You build your own application around it.
  • Claude Code is a CLI agent that wraps the API with a terminal interface, file/shell tools, project context (CLAUDE.md), MCP integration, agent teams, scheduling (Routines), and a managed loop. It runs locally; the model lives in Anthropic’s cloud.

You don’t pick between them — Claude Code subscribers also get full Claude.ai access on the same plan.

## Getting started

### How do I install Claude Code?

```shell
npm install -g @anthropic-ai/claude-code
```

Then claude auth login opens a browser for OAuth; on a headless machine (WSL2, SSH, a container), the same command offers a paste mode (added in v2.1.126). Run claude in any project directory to start a session.

### What is CLAUDE.md and why does it matter?

CLAUDE.md is a project-root markdown file that Claude Code reads at the start of every session. It’s where you tell the agent the things that aren’t in the code: build commands, naming conventions, where tests live, what frameworks you’re using, what NOT to do.

A well-written CLAUDE.md cuts a 30-minute onboarding-by-Q&A session down to thirty seconds. For shared codebases, it’s the single highest-leverage artifact you can author. The trade-off: if you commit secrets or sensitive paths there, they’re exposed to anyone running the agent. See The CLAUDE.md Trap (CVE-2026-21852) for what can go wrong.
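A minimal sketch of what such a file can look like, written to a temp directory here. The project, commands, and paths are entirely hypothetical; there is no required format, only conventions the agent can read:

```shell
# Hypothetical CLAUDE.md for an imaginary Node service; every command
# and path below is illustrative, not a required schema.
dir=$(mktemp -d)
cat > "$dir/CLAUDE.md" <<'EOF'
# Project notes for Claude Code
- Build: `npm run build`; test: `npm test` (tests live in tests/, *.spec.ts)
- Use the existing repository pattern in src/db/; no raw SQL in handlers
- Do NOT edit generated files under dist/ or touch .env
EOF
wc -l "$dir/CLAUDE.md"
```

Short, declarative bullets work best: build commands, conventions, and explicit "do not" rules, with no secrets.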

### Should I use Sonnet or Opus?

Default to Opus 4.7 (the opus API alias since April 23, 2026). It scores 87.6% on SWE-bench Verified and 64.3% on SWE-bench Pro — the highest of any model — and ships with one-third the tool-call errors of Opus 4.6. Over a 25-step agentic loop, one-third the per-step error rate compounds to a dramatic difference in completion rate.
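The compounding claim is easy to check with arithmetic. The per-step error rates below are made-up round numbers for illustration, not Anthropic's published figures:

```shell
# Illustrative only: compound a hypothetical per-step error rate over a
# 25-step agentic loop. 3% vs 1% are invented numbers, chosen only to
# show how a one-third per-step error rate changes the completion rate.
awk 'BEGIN {
  steps = 25
  printf "3%% per-step error: %.0f%% of loops finish clean\n", (0.97^steps)*100
  printf "1%% per-step error: %.0f%% of loops finish clean\n", (0.99^steps)*100
}'
```

A small per-step improvement, raised to the 25th power, is the whole story.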

Use Sonnet when latency matters more than depth (rapid file edits, lots of small back-and-forth, simpler refactors) or when you’re rate-limited on Opus. Both are on the same plan; switch with /model. Details: Claude Opus 4.7 release.

### What’s the right starting workflow for a new project?

  1. Write a CLAUDE.md with build commands, conventions, and “do not” rules.
  2. Write a spec — what you’re building, the contract, what success looks like. See The Spec File as Source of Truth.
  3. Run claude and hand over the spec.
  4. Use /ultraplan (or /plan) for non-trivial tasks. Review the plan before letting the agent execute.
  5. Let it run with auto mode for execution if you trust the plan; otherwise use the default approval mode.
  6. Review the diff, run the tests, commit.

Skip steps 1-2 only for one-off scripts. Skip them on real projects and you’ll spend the saved time correcting drift.

## Core features

### What is “auto mode” and when should I use it?

Auto mode (formerly --enable-auto-mode, now the default for Max plans) lets Claude Code execute shell commands, file edits, and tool calls without prompting for approval at every step. The safety layer still screens for prompt injection and dangerous operations.

Use it when: the spec is clear, the work is contained (a worktree, a branch, a sandbox), and the failure mode of a wrong action is “I’ll review and revert.” Don’t use it when: you’re operating on production state, your repo lacks branch protection, or you haven’t pinned the agent’s working directory. See Claude Code Auto Mode.

### What are skills, plugins, and MCP servers?

Three layers of extension:

  • Skills are reusable instructions — markdown files in ~/.claude/skills/ or your project — that teach Claude Code how to do specific tasks (deploy a service, run a particular test framework, query your DB). Sharing skills across an org is how teams scale agentic workflows. See Scaling Claude Code Skills.
  • Plugins package skills, MCP servers, and slash commands as installable units (/plugin install <url>).
  • MCP servers are external processes that expose tools to the agent over the Model Context Protocol. Anthropic-built servers cover GitHub, filesystem, Slack, Linear; the Pinterest blueprint shows what production MCP looks like at scale.
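A sketch of what a skill can look like, written to a temp directory here; in real use it would live under ~/.claude/skills/<name>/. The skill name, frontmatter fields, and steps are all illustrative:

```shell
# Hypothetical skill file; name, description, and every step are invented
# for illustration. Real skills are markdown the agent reads on demand.
dir=$(mktemp -d) && mkdir -p "$dir/deploy-staging"
cat > "$dir/deploy-staging/SKILL.md" <<'EOF'
---
name: deploy-staging
description: Deploy the current branch to the staging environment
---
1. Run `make build` and stop if it fails.
2. Run `./scripts/deploy.sh staging`. Never pass `production`.
3. Verify with `curl -fsS https://staging.example.com/healthz`.
EOF
grep '^description:' "$dir/deploy-staging/SKILL.md"
```

The point is that a skill is just versionable text: review it, commit it, and share it like any other code.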

### What is /ultrareview and how is it different from /review?

/ultrareview runs a fleet of reviewer agents in a remote sandbox to find bugs in your branch or PR. Five parallel agents look at architecture, logic, security, performance, and maintainability separately and merge findings. Pro and Max subscribers get three free runs per billing cycle; additional runs are billed.

/review is a local single-agent review — fast, free, less thorough. Use /review for routine work, /ultrareview before big merges. Background: Claude Code April 2026 power-user features.

A separate “Code Review” feature (GA at Code with Claude SF, May 6) integrates with GitHub PRs and runs automatically on push — billed per PR ($15-25). Different product, same lineage.

### What are Routines? Can I run Claude Code without my computer on?

Yes. Routines (research preview, April 14; expanded since) let you schedule Claude Code agents to run on Anthropic’s cloud — on a cron schedule, via API trigger, or in response to GitHub events. Your laptop can be off; the agent runs, edits, commits, and pushes from Anthropic’s infrastructure.

Use cases: nightly dependency updates, weekly security scans, “fix CI failures” on PR creation, scheduled documentation regeneration. See Claude Code Routines.

### Can I run multiple Claude Code agents in parallel?

Yes — three different patterns:

  1. Multiple sessions in the desktop app’s sidebar (since the April 14 redesign), each in its own git worktree.
  2. Agent Teams — a coordinator + specialized sub-agents within a single session, talking via mailbox architecture.
  3. Routines + Cloud agents — many independent tasks running on Anthropic’s infrastructure simultaneously.

For a guided tour: Parallel AI Agents. For the architecture story: The Orchestrator Seat.

## Cost and billing

### How does Claude Code billing work?

Two paths:

  1. Subscription (Max 5x or Max 20x): flat monthly fee, capped usage. Hit the cap and you’re throttled until the window resets. Best for individual developers and small teams.
  2. API / pay-as-you-go: token-priced (currently $5/$25 per million input/output tokens for Opus 4.7). No cap, you pay what you use. Best for variable workloads or teams.
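A back-of-envelope estimate at the rates quoted above. The token counts are invented for illustration; real sessions vary widely:

```shell
# Pay-as-you-go estimate at $5 / $25 per million input / output tokens.
# 2M input + 200k output tokens are made-up illustrative numbers.
input_tokens=2000000
output_tokens=200000
awk -v i="$input_tokens" -v o="$output_tokens" \
  'BEGIN { printf "estimated cost: $%.2f\n", i/1e6*5 + o/1e6*25 }'
```

Input tokens dominate most agentic sessions because the whole project context is re-read on every loop step, so the $5 side of the price matters more than it looks.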

Enterprise (Teams, Bedrock, Vertex) is invoiced separately and includes admin controls, RBAC, and the Analytics API.

### What’s the difference between Max 5x and Max 20x?

Max 5x ($100/month) gives roughly 5× the legacy Pro plan’s usage allowance; Max 20x ($200/month) gives 20×. The May 6 SpaceX-Anthropic Colossus deal doubled the five-hour rate limits across all tiers and removed the peak-hours reduction.

Pick based on actual session counts: a developer running a couple of long-form agentic sessions a day is fine on Max 5x; someone running parallel Routines and /ultrareview regularly should consider Max 20x. The Analytics API gives precise per-developer numbers if you’re choosing for a team.

### How do I track usage and cost?

/usage in Claude Code shows your current session’s token cost, plan consumption, and remaining cap. For organization-level visibility, the Analytics API (Admin API key required) returns per-user, per-day metrics on commits, PRs, lines of code, sessions, tool acceptance rates, and token costs. Pipe it into BI / OpenTelemetry / SIEM. Setup details: Claude Code Analytics API.

## Enterprise

### Can I run Claude Code on AWS Bedrock or Azure / GCP?

Yes on AWS via Bedrock (GA, v2.1.94). The Mantle backend gives zero-operator-access — Anthropic engineers cannot reach the inference layer, which is the enterprise air-gap story compliance teams want. Setup is interactive (claude --setup-bedrock). See Claude Code on Bedrock with Mantle.

Vertex AI (GCP) and Azure Foundry support exist via the broader Claude API; native Claude Code interactive setup for those two is in progress as of May 2026.

### Does Anthropic train on my code?

For paid plans (Pro, Max, Teams, Enterprise, API): no, by default. Anthropic doesn’t train on customer data unless you opt in via feedback flows. This is the explicit policy, contractually backed for Teams/Enterprise.

Compare with GitHub Copilot’s April 24 default opt-in for Free/Pro/Pro+ users.

### What’s the Analytics API?

A REST API on the Admin API key that exposes per-user, per-day rollups of Claude Code activity: commits authored, PRs opened, lines added/removed, session counts, tool-call acceptance rates, token spend. Designed for the “prove the ROI” conversation. Integrates with OpenTelemetry, SIEM, and standard BI tools. Full coverage here.

## Trust and security

### Is --dangerously-skip-permissions safe?

It’s safe in the same sense that rm -rf is safe: it does what you asked, very fast, with no second look. The flag bypasses the per-tool approval prompts that auto mode usually injects. Use it inside disposable sandboxes (a worktree, a container, a fresh VM, a dedicated repo) where the cost of the worst-case action is “throw away the sandbox.”

Don’t use it on production checkouts, in directories with secrets, or on shared infrastructure. Anthropic’s safety layer still runs — prompt-injection screening, dangerous-command detection — but the human approval gate is gone.
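One way to honor that advice is a disposable git worktree, whose worst case is deleting a directory. Branch and path names below are illustrative, and the claude invocation is commented out since it needs a live session:

```shell
# Disposable-sandbox pattern: a throwaway git worktree the agent can
# trash without touching your main checkout. Names are illustrative.
repo=$(mktemp -d) && cd "$repo"
git init -q
git -c user.email=demo@example.com -c user.name=demo \
  commit -q --allow-empty -m "init"
git worktree add -q "$repo-wt" -b agent/scratch   # isolated working copy
cd "$repo-wt"
# claude --dangerously-skip-permissions   # worst case: rm -rf this worktree
git worktree list
```

When the run is done, review the branch, merge what you want, and `git worktree remove` the rest.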

### What was the CLAUDE.md trap (CVE-2026-21852)?

A patched supply-chain vulnerability where a malicious project config could escalate the agent’s permissions silently — bypassing user-defined deny rules and exfiltrating credentials. Fixed in v2.1.90. The lesson: a CLAUDE.md cloned from a stranger’s repo is executable trust. Treat it like you treat npm install from an unverified package. Full breakdown: The CLAUDE.md Trap.

### How do I handle secrets safely with Claude Code?

Three rules:

  1. Never put secrets in CLAUDE.md. The file lives in the repo and ends up in the agent’s prompt context.
  2. Use environment variables and reference them by name in CLAUDE.md (use $DATABASE_URL), not by value.
  3. Add deny rules for paths the agent shouldn’t read: .env, secrets/, credentials.json, ~/.aws/. Configure in ~/.claude/settings.json under permissions.deny.
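A sketch of rule 3, written to a temp path here rather than the real ~/.claude/settings.json. The Read(...) strings follow Claude Code's permission-rule syntax; the exact paths are examples, not an exhaustive list:

```shell
# Example deny list for Claude Code's permissions.deny setting.
# Written to a temp dir so this sketch doesn't touch a real config.
dir=$(mktemp -d)
cat > "$dir/settings.json" <<'EOF'
{
  "permissions": {
    "deny": [
      "Read(./.env)",
      "Read(./secrets/**)",
      "Read(./credentials.json)",
      "Read(~/.aws/**)"
    ]
  }
}
EOF
python3 -m json.tool "$dir/settings.json" > /dev/null && echo "valid JSON"
```

Deny rules are a backstop, not a substitute for rule 1: anything already pasted into CLAUDE.md or the prompt is in context regardless.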

If you’re on enterprise, Mantle’s zero-operator-access architecture handles the upstream half — your secrets never leave your AWS account.

## Comparisons

### Claude Code vs Cursor — which is better?

For autonomous, long-horizon agentic work: Claude Code, by a wide margin. Cursor’s editor-embedded architecture has a ceiling: it assumes the human is in the loop, which is a feature for some workflows and a fundamental constraint for others. The autonomy ceiling analysis and Cursor SDK launch cover why.

For inline, real-time coding inside an IDE, Cursor is genuinely excellent and the better choice if you stay in the seat.

The honest answer for many teams is “use both, in different layers.” See The Three-Layer AI Coding Stack.

### Claude Code vs GitHub Copilot CLI?

GitHub Copilot CLI reached GA in February 2026 with autopilot mode and multi-model support. It is, materially, GitHub adopting the architectural model Anthropic pioneered.

Where Copilot CLI wins: tight GitHub-platform integration (PRs, Actions, Issues) and the bundled cost story for teams already paying for Copilot. Where Claude Code wins: more mature agent loop, deeper safety/context-management research, the broader MCP and Routines ecosystem, and Opus 4.7’s lead on the harder benchmarks.

If you’re heavily on GitHub and using Copilot anyway, Copilot CLI is now a credible second tool. For autonomy depth, Claude Code is still the lead.

### Is the free Gemini CLI from Google enough?

For low-volume, latency-tolerant work — yes, surprisingly often. Gemini 3.1 Pro reaches 80.6% on SWE-bench Verified (within 1 point of Opus 4.6) and Google gives you 1,000 free requests per day. See Gemini CLI honest assessment.

Where it falls short: 50% slower task completion in head-to-head tests, no equivalent of CLAUDE.md project memory, no Routines, much smaller MCP ecosystem, no Agent Teams. For serious work, the speed and ecosystem gaps add up.

### Can I use Claude Code with Cursor or OpenAI Codex?

Yes — and a growing number of developers do. The pattern is composition: Cursor for orchestration and inline edits, Claude Code for execution of larger tasks, Codex (or /ultrareview) as a review layer. The codex-plugin-cc plugin makes this concrete — Codex reviews diffs that Claude Code produced. See The Three-Layer AI Coding Stack.

The world isn’t consolidating into one winner. It’s stratifying into composable layers, and the best teams are using whichever tool wins each layer.


Have a question that should be here? Open an issue at github.com/fclairamb/sddsh.
