There is a moment every engineering leader eventually faces after deploying Claude Code at scale: someone in finance asks what the productivity gain actually is, in numbers they can put in a spreadsheet.
Until recently, the honest answer was “hard to measure.” Developers loved the tool and velocity seemed up, but connecting AI coding activity to concrete output metrics required custom instrumentation, opinion surveys, or rough estimation.
Anthropic’s Claude Code Analytics API closes that gap. Released in early 2026 and extended significantly with the Cowork GA launch in April, it gives enterprise organizations programmatic access to daily aggregated usage metrics per developer — commits created through Claude Code, pull requests opened, lines of code added and removed, session counts, tool acceptance rates, token usage by model, and estimated cost. No manual reporting. No developer self-assessment. Direct API access to what actually happened.
## What the API Tracks
The Claude Code Analytics API returns records aggregated at the per-user, per-day level. Each record contains:
Productivity signals
- Lines of code added via Claude Code
- Lines of code removed via Claude Code
- Commits created through Claude Code’s commit functionality
- Pull requests created through Claude Code’s PR functionality
- Number of distinct Claude Code sessions
Tool usage signals
- Tool call acceptance rates (how often developers approve vs. reject Claude’s suggested actions)
- Tool call rejection rates (leading indicator of prompt quality or task mismatch)
- Breakdown by tool type (file edits, bash commands, web fetches, etc.)
Cost and model signals
- Token usage broken down by Claude model
- Estimated cost per user per day
- Customer type and terminal type metadata
The Enterprise Analytics API — a related but broader endpoint — also captures per-user engagement: conversation counts, messages sent, projects created, files uploaded, artifacts created, skills invoked, connectors used, and the Claude Code-specific metrics above rolled up for org-level reporting.
Data is available for up to 90 days of history (with records beginning January 1, 2026). Activity appears in the API within approximately one hour of completion, though the API excludes data newer than one hour to ensure pagination consistency.
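To make the record layout concrete, here is a sketch of what a single per-user, per-day record might look like. The field names below are illustrative assumptions, not the documented schema; consult the official API reference for the exact shape.

```python
# Illustrative shape of one per-user, per-day analytics record.
# All field names here are assumptions for illustration only.
sample_record = {
    "date": "2026-04-07",
    "actor": {"email_address": "dev@example.com"},
    "core_metrics": {
        "num_sessions": 6,
        "lines_of_code": {"added": 412, "removed": 98},
        "commits_by_claude_code": 5,
        "pull_requests_by_claude_code": 1,
    },
    "tool_actions": {
        "edit_tool": {"accepted": 38, "rejected": 4},
        "bash_tool": {"accepted": 21, "rejected": 7},
    },
    "model_breakdown": [
        {"model": "claude-sonnet-4", "tokens": 1_250_000, "estimated_cost_usd": 4.85},
    ],
}

# Net lines of code contributed through Claude Code that day.
net_lines = (sample_record["core_metrics"]["lines_of_code"]["added"]
             - sample_record["core_metrics"]["lines_of_code"]["removed"])
```

Whatever the real field names turn out to be, every downstream calculation in this article reduces to arithmetic over records of roughly this shape.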
## Getting Started
Access requires an Admin API key — a distinct credential from standard API keys, provisioned through the Claude Console. Only organization members with the Primary Owner role can mint Admin API keys.
Once you have one, the Claude Code Analytics API is accessed via `GET /v1/organizations/{org_id}/usage/claude_code` with standard date-range parameters:
```shell
curl "https://api.anthropic.com/v1/organizations/{org_id}/usage/claude_code" \
  -H "x-api-key: $ADMIN_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -G \
  -d "start_date=2026-04-01" \
  -d "end_date=2026-04-13"
```

The response is paginated JSON with one record per (user, date) pair. Each record contains the full metric set described above. Standard REST pagination applies — iterate through pages until you have the full dataset for your date range.
For the Claude Enterprise Analytics API (conversation-level metrics), the endpoint is separate: `GET /v1/organizations/{org_id}/usage/users`.
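A full export is just a loop over pages. The sketch below assumes a cursor-style response with `data`, `has_more`, and `next_page` fields; those names are assumptions for illustration (check the API reference for the real ones), so the actual HTTP call is injected as a callable rather than hard-coded.

```python
from typing import Callable, Iterator, Optional

def iter_claude_code_records(
    fetch_page: Callable[[Optional[str]], dict],
) -> Iterator[dict]:
    """Walk a cursor-paginated endpoint until the cursor is exhausted.

    `fetch_page` takes a cursor (None for the first page) and returns the
    decoded JSON body. The `data` / `has_more` / `next_page` field names
    are assumptions; substitute whatever the API reference specifies.
    """
    cursor = None
    while True:
        body = fetch_page(cursor)
        yield from body.get("data", [])
        if not body.get("has_more"):
            return
        cursor = body.get("next_page")

# Stub fetcher standing in for a real HTTPS call with the Admin API key.
pages = {
    None: {"data": [{"user": "a"}, {"user": "b"}], "has_more": True, "next_page": "p2"},
    "p2": {"data": [{"user": "c"}], "has_more": False},
}
records = list(iter_claude_code_records(lambda cursor: pages[cursor]))
```

Injecting the fetcher keeps the pagination logic testable offline and makes it trivial to swap in retries or rate limiting later.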
## Why Tool Acceptance Rates Matter More Than You Think
Most teams focus on the headline metrics — commits, PRs, lines of code. But tool acceptance and rejection rates are among the most valuable signals in the dataset, and they’re easy to overlook.
When Claude Code proposes a file edit, a bash command, or a web search, the developer either approves or denies it. In Claude Code’s Auto Mode, many actions are approved automatically. In standard mode, the developer reviews each one.
High rejection rates on specific tool types indicate something important:
- Claude is proposing actions the developer doesn’t trust (prompting or spec quality issue)
- The task type is poorly suited to autonomous execution
- The developer’s CLAUDE.md configuration is too permissive for their comfort level
High acceptance rates indicate the inverse: Claude is operating in a zone of trusted, predictable behavior. This is the configuration state you want for autonomous workflows.
Teams that instrument their rejection rates typically discover that new developers reject significantly more tool calls than experienced Claude Code users — useful data for onboarding and training programs. They also discover that certain task types have structurally low acceptance, which surfaces where autonomous workflows need better spec design before they’re handed off.
## Building the ROI Case
Let’s be specific about what this data enables.
Commits per developer per week: The single most comparable metric across teams. If your developers averaged 8 commits per week before Claude Code and are now averaging 14, that is a 75% increase in observable output. Finance can work with that.
Cost per commit: Total Claude Code spend (available from the cost metrics) divided by total commits. This gives you a cost-per-unit-of-output number that can be compared against the alternative: developer time cost per commit, including review cycles.
High-value developer leverage: Sort users by Claude Code session count and compare to commit output. Developers with high session counts and high commit rates are getting full value from autonomous workflows. Developers with high session counts but average commit rates may be using Claude Code conversationally rather than agentically — an onboarding opportunity.
Model cost optimization: The token breakdown by model shows whether your team is over-indexed on Opus when Sonnet would suffice for specific task types. This is typically a 2-5x cost difference per token. At scale, model routing based on task complexity pays for itself.
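The arithmetic behind the first, second, and fourth metrics fits in a few lines. The figures below are made up for illustration; plug in your own exports.

```python
# Commits per developer per week: uplift over the pre-deployment baseline.
baseline_commits_per_dev_week = 8
current_commits_per_dev_week = 14
uplift_pct = 100 * (current_commits_per_dev_week - baseline_commits_per_dev_week) \
    / baseline_commits_per_dev_week  # the 75% figure from the example above

# Cost per commit: total Claude Code spend over total commits, org-wide.
developers = 50                # hypothetical team size
weekly_spend_usd = 1_800.0     # summed from the per-user cost metrics
weekly_commits = developers * current_commits_per_dev_week
cost_per_commit = weekly_spend_usd / weekly_commits

# Model cost optimization: how much of spend is going to the priciest model.
cost_by_model_usd = {"opus": 120.0, "sonnet": 60.0}  # hypothetical daily split
opus_share = cost_by_model_usd["opus"] / sum(cost_by_model_usd.values())
```

A high `opus_share` on task types where Sonnet performs comparably is the routing opportunity the fourth metric points at.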
## OpenTelemetry and SIEM Integration
For security-conscious enterprise deployments, the analytics story goes beyond the API. Claude Code v2.1.94 and Claude Cowork GA introduced expanded OpenTelemetry support on Team and Enterprise plans.
OpenTelemetry events are emitted for:
- Tool calls (what was invoked, by which user)
- File modifications (what was changed)
- Whether AI-initiated actions received manual or automatic approval
These events plug directly into standard SIEM pipelines — Splunk, Cribl, Datadog, and equivalents. The result is that Claude Code activity becomes auditable in the same systems where you audit SSH access, code deployments, and database queries.
For organizations with compliance requirements, this is significant. You are not trusting Claude Code activity on faith — you are treating it as a first-class operational event, logged and auditable like any other privileged action.
## The Adoption Curve Problem (and How Analytics Solves It)
Grassroots adoption of Claude Code creates a specific enterprise problem: some developers are running full autonomous workflows while others are treating it as a better autocomplete. Both appear as “active users” in a license count.
The Analytics API exposes the actual distribution. A team of 50 developers with 50 active licenses might show:
- 12 developers in autonomous mode (high sessions, high commit output, high acceptance rates)
- 23 developers in conversational mode (high sessions, moderate output)
- 15 developers in minimal use (low sessions, low everything)
The 12 autonomous users are generating the ROI. The 23 conversational users have potential. The 15 minimal users are an onboarding problem.
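A segmentation like this is a simple classifier over the per-user aggregates. The thresholds below are illustrative assumptions; calibrate them against your own team's distribution rather than treating them as canonical.

```python
def classify_user(sessions: int, commits: int, acceptance_rate: float) -> str:
    """Bucket a developer by weekly Claude Code aggregates.

    Thresholds are illustrative, not prescriptive: tune them against
    the actual distribution in your org's analytics export.
    """
    if sessions >= 15 and commits >= 10 and acceptance_rate >= 0.8:
        return "autonomous"      # high sessions, high output, high trust
    if sessions >= 15:
        return "conversational"  # high sessions, but output lags
    return "minimal"             # low engagement across the board

# Hypothetical weekly aggregates: (name, sessions, commits, acceptance rate)
team = [
    ("ana", 22, 14, 0.91),
    ("ben", 18, 6, 0.74),
    ("cy", 3, 1, 0.60),
]

buckets = {}
for name, sessions, commits, acceptance in team:
    buckets.setdefault(classify_user(sessions, commits, acceptance), []).append(name)
```

Re-running the classifier monthly turns the adoption curve into a trend line: you can watch conversational users graduate to autonomous workflows as onboarding improves.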
You cannot build that picture from a license dashboard. You can build it from the Analytics API.
This is what separates “we use Claude Code” from “we have deployed Claude Code effectively.” The former is an adoption metric. The latter is an outcome metric. Analytics converts one into the other.
## Connecting to GitHub and BI Tools
The Claude Code Analytics API outputs standard JSON over REST, which means it pipes into any BI tool without custom connectors. Typical integrations:
- Looker / Tableau / Power BI: Pull daily via scheduled API calls, load into a warehouse, build dashboards against it.
- GitHub Actions: Compare Claude Code commit metrics against total repository commit volume to calculate attribution percentage.
- Notion / Confluence: Automated weekly reports generated from the API and posted to engineering wikis.
- PagerDuty / OpsGenie: Alert on anomalous rejection rate spikes (often indicates a bad CLAUDE.md push or a prompt regression in a shared skill).
The 90-day history limit means you have approximately three months of runway before data starts rolling off. If you need longer retention, build the export pipeline early and store your own copy.
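A minimal version of that export pipeline is a daily pull appended to a local archive, deduplicated on the (user, date) key so overlapping 90-day windows are safe to re-fetch. The record field names below are illustrative assumptions.

```python
import json
import tempfile
from pathlib import Path

def archive_records(records: list, path: Path) -> int:
    """Append new (user, date) records to a JSONL archive; return count added.

    Dedupes on (user, date) so a daily pull can safely overlap the API's
    90-day window. "user"/"date" field names are illustrative assumptions.
    """
    seen = set()
    if path.exists():
        for line in path.read_text().splitlines():
            rec = json.loads(line)
            seen.add((rec["user"], rec["date"]))
    added = 0
    with path.open("a") as f:
        for rec in records:
            key = (rec["user"], rec["date"])
            if key not in seen:
                f.write(json.dumps(rec) + "\n")
                seen.add(key)
                added += 1
    return added

# Two overlapping daily pulls: the repeated record is skipped.
with tempfile.TemporaryDirectory() as d:
    archive = Path(d) / "claude_code_usage.jsonl"
    day1 = [{"user": "ana", "date": "2026-04-01", "commits": 5}]
    n1 = archive_records(day1, archive)
    n2 = archive_records(day1 + [{"user": "ana", "date": "2026-04-02", "commits": 3}], archive)
```

In production you would point `path` at durable storage (S3, a warehouse staging table) instead of a local file, but the dedupe-on-key idea carries over directly.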
## The Bottom Line
The Claude Code Analytics API is not a feature for power users. It is a feature for organizations that have moved past “should we deploy Claude Code?” into “how do we optimize the deployment we already have?”
At $30 billion ARR, with 1,000+ enterprise customers spending over $1M per year, Anthropic is clearly building for engineering organizations that need this kind of accountability layer. The API reflects that understanding: it gives you the data to answer the productivity question, the cost question, and the adoption quality question in the same place.
If you are managing a Claude Code deployment of more than 10 developers and you are not pulling from this API, you are operating on intuition. That works until someone with a spreadsheet asks you to defend the budget. Build the pipeline first.
## Sources
- Claude Code Analytics API — Claude API Docs
- Track team usage with analytics — Claude Code Docs
- Claude Enterprise Analytics API Reference Guide — Claude Help Center
- Access engagement and adoption data with the Analytics API — Claude Help Center
- How to Use Claude Code Analytics via API — Apidog
- Claude Cowork Reaches GA with 6 Enterprise Management Features — Lilting Channel