---
title: "OpenAI Lands on Amazon Bedrock — The Cloud That Already Houses Claude"
date: 2026-04-30
tags: ["OpenAI","AWS","Amazon Bedrock","Claude","Anthropic","cloud","enterprise","Codex"]
categories: ["AI Tools","Industry"]
summary: "After Microsoft's exclusivity expired on April 27, OpenAI moved its models, Codex agent, and a new jointly built Bedrock Managed Agents runtime onto AWS. Amazon now hosts both Anthropic and OpenAI. Here's what the infrastructure power shift means for the AI coding landscape."
---

The tech business story of the week sounds almost absurd on its face: the same cloud provider that just committed $25 billion to Anthropic is now also the home of OpenAI's flagship models and coding agent. Welcome to the post-exclusivity era of AI infrastructure.

On April 28, 2026 — one day after OpenAI's exclusivity agreement with Microsoft expired — Amazon Web Services announced that GPT-5.5, GPT-5.4, Codex, and a new jointly built agent runtime called **Amazon Bedrock Managed Agents powered by OpenAI** were all landing on Bedrock in limited preview.

If you thought the cloud wars were confusing before, you ain't seen nothing yet.

## How We Got Here

For the better part of three years, Microsoft had a stranglehold on OpenAI. Azure was the only hyperscaler allowed to host GPT models commercially — a condition baked into the multi-billion-dollar investment agreement that kept OpenAI solvent through its turbulent 2023-2024 period. Enterprise customers who wanted GPT had to go through Azure. Full stop.

That arrangement expired on April 27, 2026. Microsoft and OpenAI amended the agreement: Azure retains first-mover rights (OpenAI ships there first, unless Microsoft can't support the required capability), but the license is now non-exclusive. The old revenue-share arrangement where Microsoft paid OpenAI has ended, though OpenAI still routes payments to Microsoft through 2030.

The day after exclusivity lapsed, AWS had three OpenAI products live on Bedrock. That turnaround was no accident: the launch was built and ready to ship the moment the legal window opened. AWS CEO Matt Garman confirmed the partnership had been in negotiation for months.

## What's Actually on Bedrock

Three distinct offerings shipped together:

**OpenAI Models on Bedrock** — GPT-5.5 and GPT-5.4 are now available through the standard Bedrock model access interface, alongside Claude Opus 4.7, Gemini, Llama, and the growing zoo of frontier models AWS already hosts. Usage counts toward existing AWS cloud commitments, which matters enormously for enterprises with pre-committed cloud spend.
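
To make "standard Bedrock model access interface" concrete, here is a minimal Python sketch of the request shape Bedrock's `converse` API uses, which is identical regardless of which provider's model you target. The model ID strings below are illustrative assumptions — preview identifiers for the OpenAI models haven't been published:

```python
# Sketch: one request shape for every Bedrock-hosted model.
# Model IDs below are illustrative assumptions, not published identifiers.
OPENAI_GPT55 = "openai.gpt-5.5-v1:0"                  # hypothetical
ANTHROPIC_OPUS47 = "anthropic.claude-opus-4-7-v1:0"   # hypothetical

def build_converse_request(model_id: str, prompt: str, max_tokens: int = 1024) -> dict:
    """Build kwargs for the bedrock-runtime `converse` call; the request
    shape stays the same no matter which hosted model is addressed."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens},
    }

# The live call would look like (requires AWS credentials and model access):
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   reply = client.converse(**build_converse_request(OPENAI_GPT55, "hello"))
```

Switching providers is a one-line change of `modelId`, which is precisely why usage from any of these models can roll up into the same AWS commitment.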

**Codex on Amazon Bedrock** — OpenAI's coding agent is accessible via the Codex CLI, desktop app, and VS Code extension, all routing through Bedrock infrastructure. This is straightforward: enterprises that standardized on AWS can now run Codex without setting up separate OpenAI credentials or going through Azure.

**Amazon Bedrock Managed Agents powered by OpenAI** — This is the interesting one. Rather than simply making OpenAI's existing agent harness available, AWS and OpenAI jointly built a new managed agent runtime that runs on AWS infrastructure but uses OpenAI's frontier models and agent harness. It's designed to be AWS-native: memory, orchestration, and agent lifecycle are managed through familiar Bedrock controls and IAM policies. Critically, this product is an AWS exclusive by design — you can't get the jointly engineered runtime on Azure.
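
Because the runtime sits behind ordinary IAM, access can be scoped with a standard policy document. A minimal sketch of what that governance looks like — the GPT-5.5 model ARN is an assumption, while `bedrock:InvokeModel` and `bedrock:InvokeModelWithResponseStream` are the existing Bedrock invocation actions:

```python
import json

# Sketch of an IAM policy letting a role invoke exactly one foundation model.
# The GPT-5.5 ARN is an illustrative assumption; the Action names are the
# standard Bedrock invocation actions.
ALLOWED_MODEL_ARN = (
    "arn:aws:bedrock:us-east-1::foundation-model/openai.gpt-5.5-v1:0"
)

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowSingleFrontierModel",
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            "Resource": ALLOWED_MODEL_ARN,
        }
    ],
}

policy_json = json.dumps(policy, indent=2)
# Attaching it would be the usual IAM workflow, e.g.:
#   boto3.client("iam").create_policy(
#       PolicyName="bedrock-gpt55-only", PolicyDocument=policy_json)
```

The point is that an enterprise's existing least-privilege tooling applies unchanged: which teams may invoke which frontier model is a policy statement, not a new vendor relationship.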

## The Amazon Paradox

The strategic picture here is genuinely novel. Amazon has committed more capital to Anthropic than any other single investor — $25 billion with an additional $100 billion AWS infrastructure commitment over ten years, announced just six days ago. Anthropic's models, including Claude Opus 4.7, are deeply integrated into AWS: there's a native Claude console in the AWS dashboard, Claude Code runs on Bedrock with Mantle's zero-operator-access security model, and Amazon is building Trainium3 chips partly to serve Claude inference.

And yet, as of April 28, AWS is also the home of OpenAI's most powerful models and a jointly built OpenAI agent runtime.

This is not a contradiction from Amazon's perspective. AWS has always been a neutral marketplace for compute and services — it hosts competitors' products constantly. The cloud business model rewards volume and commitment, not exclusivity. If enterprises want to run GPT-5.5 workloads on AWS infrastructure rather than Azure, Amazon captures that spend. If those same enterprises also run Claude workloads on Bedrock, Amazon captures that too.

But it does create a peculiar situation: **Amazon Bedrock Managed Agents** now ships in two flavors — one powered by Claude, one powered by OpenAI — with the same underlying AWS infrastructure, IAM policies, and enterprise tooling. The sales pitch becomes "bring your own frontier model, we'll handle the agent runtime."

## What It Means for Claude Code and Bedrock Deployments

For teams running Claude Code on Bedrock (the v2.1.94 GA release covered in April), the immediate answer is: nothing changes. Claude Code's terminal-native architecture doesn't care what other models are available on Bedrock; it routes to Claude Opus 4.7 and inherits all the enterprise controls — Mantle's zero-operator-access, Bedrock's compliance certifications, IAM policies — that were already in place.
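
For context on how little that routing involves, pointing Claude Code at Bedrock is an environment-variable switch. A sketch in Python — `CLAUDE_CODE_USE_BEDROCK` is Claude Code's documented Bedrock toggle, while the model ID string here is an illustrative assumption:

```python
import os

def bedrock_env(model_id: str, region: str = "us-east-1") -> dict:
    """Environment overrides that route Claude Code through Bedrock.
    CLAUDE_CODE_USE_BEDROCK is Claude Code's documented Bedrock switch;
    the model ID passed in is whatever Bedrock model ID you've enabled."""
    env = dict(os.environ)
    env.update({
        "CLAUDE_CODE_USE_BEDROCK": "1",
        "ANTHROPIC_MODEL": model_id,
        "AWS_REGION": region,
    })
    return env

# Launching the CLI with these overrides (sketch; model ID is hypothetical):
#   import subprocess
#   subprocess.run(["claude"], env=bedrock_env("anthropic.claude-opus-4-7-v1:0"))
```

Everything downstream — IAM, compliance scope, zero-operator-access — is inherited from the Bedrock deployment rather than configured in the tool.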

The longer-term question is organizational politics. Teams that standardized on Claude Code and wanted a second coding-agent option previously had to go to Azure for GPT or stand up a separate OpenAI account. Now everything sits under one cloud contract. For mixed-model shops — a growing majority of large enterprises — this is unambiguously convenient.

There's also a benchmark reality check worth keeping in mind. Claude Opus 4.7 scores 64.3% on SWE-bench Pro and leads the field on multi-agent autonomous tasks. GPT-5.5 hits 58.6% on the same benchmark. For organizations that have been running Cursor or Codex workflows and want to evaluate Claude Code alongside them, having both available through a single Bedrock contract makes that evaluation substantially easier.

## The Azure Angle

Microsoft isn't catastrophically damaged here. Azure retains first-mover rights on new OpenAI models, and the GitHub Copilot stack — deeply integrated with Azure DevOps, GitHub Actions, and enterprise SSO — still runs on Azure. The vast majority of Fortune 500 companies that use GitHub Enterprise aren't going to abandon that stack because GPT-5.5 is now also available on Bedrock.

But Azure loses its moat. The single biggest competitive advantage Azure had in the AI era — you want OpenAI, you come to us — is gone. Every hyperscaler can now offer comparable model menus. AWS already hosts Claude (better coding benchmarks), Gemini 3.1 Pro (Google's multimodal flagship), Llama (Meta's open-weight series), and now GPT. Google Cloud has its own native Gemini stack plus Claude on Vertex. Azure still leads on GitHub integration but is no longer the exclusive GPT gateway.

The implication for the broader market: frontier model access is rapidly commoditizing at the infrastructure layer. The differentiation is moving up the stack — toward agent runtimes, orchestration frameworks, developer tooling, and the quality of the harness the agent runs inside.

## The Harness Is the Product

Here's the insight that gets buried in the partnership announcement hoopla: **the model matters less than the harness it runs in.**

Amazon Bedrock Managed Agents powered by OpenAI is notable not because it offers GPT-5.5, but because AWS and OpenAI jointly engineered the agent runtime — the scaffolding that handles context, memory, tool execution, and task orchestration. That's a product bet on agent infrastructure, not on a specific model.

Claude Code made the same bet two years ago, just from the other direction: Anthropic built the harness first (terminal-native, CLAUDE.md invariants, Agent Teams, Routines, MCP integration), and the model powers it. The model and harness are vertically integrated by design, which is why Claude Code's agentic workflows tend to outperform Codex or Copilot setups even when raw benchmark numbers are closer than people expect.

The Bedrock Managed Agents dual-flavor approach — Claude edition and OpenAI edition — is intriguing precisely because it treats the infrastructure layer (AWS) as neutral and the agent runtime as the product. Whether that separation proves durable or whether integrated harnesses (Claude Code, OpenAI Agents SDK) ultimately win is the central question of the next 18 months of agentic AI infrastructure.

## Bottom Line

OpenAI's arrival on Bedrock is a big deal for the business of AI, and a significant setback for Azure's moat. For developers and engineering teams evaluating AI coding tools, it's mostly good news: everything is now available under one cloud contract, procurement gets simpler, and vendor comparisons get easier.

For the deeper question of which coding agent is actually better at autonomous software development, nothing changed on April 28. The benchmarks haven't shifted. Claude Code's 64.3% SWE-bench Pro advantage over Codex's 58.6% is the same today as it was yesterday. The difference is that enterprise buyers can now reach both without a multi-cloud contract negotiation.

That's a win for flexibility. Whether flexibility beats depth of integration is a question every engineering team will have to answer for itself.

---

**Sources:**
- [Amazon Bedrock now offers OpenAI models, Codex, and Managed Agents — AWS](https://aws.amazon.com/about-aws/whats-new/2026/04/bedrock-openai-models-codex-managed-agents/)
- [OpenAI models, Codex, and Managed Agents come to AWS — OpenAI](https://openai.com/index/openai-on-aws/)
- [OpenAI brings models to AWS after ending exclusivity with Microsoft — CNBC](https://www.cnbc.com/2026/04/28/openai-brings-models-to-aws-after-ending-exclusivity-with-microsoft.html)
- [OpenAI's models land on Amazon Bedrock, one day after Microsoft exclusivity ends — GeekWire](https://www.geekwire.com/2026/openais-models-land-on-amazon-bedrock-one-day-after-microsoft-exclusivity-ends/)
- [OpenAI ends Microsoft legal peril over its $50B Amazon deal — TechCrunch](https://techcrunch.com/2026/04/27/openai-ends-microsoft-legal-peril-over-its-50b-amazon-deal/)
- [An Interview with OpenAI CEO Sam Altman and AWS CEO Matt Garman — Stratechery](https://stratechery.com/2026/an-interview-with-openai-ceo-sam-altman-and-aws-ceo-matt-garman-about-bedrock-managed-agents/)

