
OpenAI Just Built an IT Services Company. That's an Admission.

Florent Clairambault, CTO & Software engineer

On May 11, 2026, OpenAI did something IBM would recognize immediately. It launched the OpenAI Deployment Company, a majority-owned subsidiary backed by more than $4 billion in initial capital from 19 investment firms, consulting houses, and systems integrators — including TPG, Goldman Sachs, SoftBank, Bain Capital, Brookfield, and McKinsey & Company.

Alongside the launch, OpenAI acquired Tomoro, an AI consultancy with roughly 150 engineers that counts Mattel, Red Bull, Tesco, and Virgin Atlantic among its clients.

IT services stocks dropped the same day. That reaction tells you everything about the market OpenAI just entered.

The “Forward Deployed Engineer” Model

The OpenAI Deployment Company’s pitch is straightforward: it will embed “Forward Deployed Engineers” (FDEs) directly into enterprise customer teams. These specialists identify high-impact AI use cases, redesign workflows around them, and — in OpenAI’s framing — “turn those gains into durable systems.”

If that sounds like Accenture or McKinsey Digital, it should. This is the professional services model that has been the dominant route to enterprise software adoption for 40 years. Sell the platform, then sell the integration. Sell the model, then sell the deployment.

The Tomoro acquisition seeds the business immediately. Tomoro was formed in 2023 in alliance with OpenAI, which means it was already building on GPT-4/GPT-5 stacks for clients. The ~150 engineers come with relationships, playbooks, and institutional knowledge about where enterprise AI gets stuck — and it isn’t usually the model quality.

Why OpenAI Needed This

OpenAI’s revenue has been growing — its API and enterprise business reportedly exceeds $3 billion ARR — but converting large enterprises into durable, high-spend customers has proven harder than converting startups.

The gap isn’t model quality. GPT-5.5, Codex, and the Agents SDK are technically excellent. The gap is the last 100 meters: integrating AI into legacy workflows, training employees who didn’t choose this, navigating procurement, satisfying legal and security reviews, and maintaining systems that don’t break when a new model drops.

That work doesn’t happen through an API. It happens through people. The Deployment Company is OpenAI’s bet that it can own that layer before the Accentures and Deloittes of the world do.

The 19 investors include Bain & Company, Capgemini, and McKinsey as “consulting and systems-integration partners” — which means they’re not just writing checks, they’re potential co-delivery partners (or competitors who decided it was better to be inside the tent).

What This Means for the Market

The IT stock reaction was telling. Shares in traditional systems integrators fell on the news, because the OpenAI Deployment Company is essentially entering their market — except with access to frontier AI capabilities that no legacy SI can match in-house.

For smaller AI tool vendors, this changes the enterprise sales dynamic. If OpenAI’s FDEs are already embedded in a customer, that’s a natural GPT-5 moat. The Deployment Company’s presence at an enterprise account will make it harder for Claude Code, Cursor, or any other tool to get a foothold unless the customer explicitly requests multi-vendor diversity.

That said, the IT services model has a structural weakness: it doesn’t scale like a product.

The Product-Led vs. Services-Led Fork

Here is where the OpenAI Deployment Company diverges sharply from what Anthropic has built.

Claude Code is a product. An individual developer downloads it, authenticates via Claude.ai or an API key, and starts using it in their existing terminal. Adoption spreads laterally through engineering teams via demonstrated productivity gains, not through enterprise procurement cycles and embedded consultants.

The OpenAI Deployment Company is services. You get the FDEs if you can afford the engagement. The technology compounds at the speed of human consulting capacity.

Anthropic’s $30B ARR was built largely on this product-led flywheel: Claude Code’s $2.5B ARR component grew from $0 to $2.5B in roughly 18 months, driven by developer adoption that preceded formal enterprise procurement. By the time enterprise IT signed the contract, engineering teams had already made the decision for them.

OpenAI’s approach can generate large-ticket deals faster — an FDE engagement might be a $2M annual contract from day one. But the ceiling is the headcount of the Deployment Company. A product’s ceiling is how good the model is.
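The scaling asymmetry can be put in rough numbers. A toy model (my own illustration; every parameter except the $2M engagement figure above is hypothetical) makes the ceiling visible:

```python
# Toy model of services-led vs product-led revenue scaling.
# Assumptions (hypothetical): 5 FDEs per engagement, $2M per annual
# contract, $1,200 per product seat per year.

def services_revenue(fde_headcount: int,
                     fdes_per_engagement: int = 5,
                     contract_value: int = 2_000_000) -> int:
    """Annual revenue when every engagement needs embedded engineers."""
    engagements = fde_headcount // fdes_per_engagement
    return engagements * contract_value

def product_revenue(active_seats: int, price_per_seat: int = 1_200) -> int:
    """Annual revenue when adoption spreads without delivery headcount."""
    return active_seats * price_per_seat

# 150 engineers (Tomoro's size) staffs ~30 engagements:
print(services_revenue(150))    # 60000000
# The same revenue needs ~50,000 seats, but seats grow without hiring:
print(product_revenue(50_000))  # 60000000
```

The two functions return the same number, but doubling services revenue means doubling headcount, while doubling product revenue means doubling adoption.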

The Deeper Admission

There’s something worth sitting with here. OpenAI built the most capable models in the world, captured the popular imagination with ChatGPT, and is still finding that selling AI to enterprises requires a professional services arm backed by Goldman Sachs and McKinsey.

That’s not a failure. It’s a realistic read of how large organizations adopt new technology. But it’s also an admission: models alone are not enough.

This creates a tension at the heart of OpenAI’s strategy. If FDEs are what makes the difference, then the value of the AI is partly in the deployment expertise, not just in the model. Which means OpenAI is competing simultaneously in two businesses with very different economics: foundation models (high margin, scales with compute) and professional services (low margin, scales with headcount).

Every IT services company that scaled past $10B learned this tradeoff the hard way. The Deployment Company’s investors — who include people from TPG and Brookfield, not just AI enthusiasts — know this too. The $4B bet is that the combination can work; that having the best models and the best deployment organization is an unassailable position.

What to Watch

The Deployment Company’s first major test will be whether its FDE engagements produce the kind of measurable, auditable productivity gains that justify the price tag. Tomoro’s clients — Mattel, Virgin Atlantic, Tesco — are consumer and retail brands, not deep-tech enterprises. Scaling that playbook to financial services, healthcare, and government is a different challenge.

The Anthropic counter-argument writes itself: Mercado Libre targeted 90% autonomous coding across 23,000 engineers by Q3 2026 using Claude Code. No FDEs required. The engineers themselves became the deployment mechanism, because the tool was good enough to adopt without a consultant explaining how.

Watch whether OpenAI’s FDE model generates documented ROI numbers comparable to what Claude Code’s analytics API produces — and whether those numbers hold up after the FDEs leave.

