Google Cloud Next 2026 was the company’s loudest statement yet that the enterprise AI race is no longer about models — it’s about orchestration, interoperability, and scale. The event delivered a stack of meaningful announcements: the Agent2Agent (A2A) protocol is now running in production at 150 organizations, Jules is out of beta and broadly available, and Vertex AI has been renamed and restructured into the Gemini Enterprise Agent Platform. For enterprise buyers, it was impressive. For developers who care about autonomous agents that can actually ship code unattended, it’s a more complicated picture.
## A2A Hits Production at Scale
The headline number from Cloud Next is A2A: Google’s agent interoperability protocol, announced in early 2026, has moved from experiment to infrastructure. 150 organizations are running it in production — not pilots — routing real tasks between agents built on different platforms. Microsoft, AWS, Salesforce, SAP, and ServiceNow are all live.
Version 1.2 of the spec is now out, with signed agent cards using cryptographic signatures for domain verification. Governance has moved to the Linux Foundation’s Agentic AI Foundation, putting it on a trajectory parallel to MCP’s own standardization path. Native A2A support is now built into LangGraph, CrewAI, LlamaIndex Agents, Semantic Kernel, and AutoGen.
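The signed-card mechanism can be sketched offline. Note the hedge: the actual v1.2 spec uses JWS with asymmetric keys for domain verification, and the card fields below are loosely modeled on published A2A examples rather than spec-exact. The HMAC here is a deliberately simplified stand-in so the sign/verify flow is runnable without a network.

```python
import hashlib
import hmac
import json

# Stand-in key: real agent cards are signed asymmetrically, not with a shared secret.
SHARED_KEY = b"demo-key"

def sign_card(card: dict) -> dict:
    """Attach a signature over the canonical (sorted-key) JSON of the card."""
    payload = json.dumps(card, sort_keys=True).encode()
    sig = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {**card, "signature": sig}

def verify_card(signed: dict) -> bool:
    """Recompute the signature over everything but the signature field."""
    card = {k: v for k, v in signed.items() if k != "signature"}
    payload = json.dumps(card, sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["signature"])

# A minimal, hypothetical agent card.
card = {
    "name": "it-asset-agent",
    "url": "https://agents.example.com/it-assets",
    "capabilities": ["asset.lookup"],
}
signed = sign_card(card)
assert verify_card(signed)

# Any tampering after signing breaks verification.
tampered = {**signed, "url": "https://evil.example.com"}
assert not verify_card(tampered)
```

The point of the signed card is exactly what the tampering check shows: a consumer can confirm that the agent's advertised endpoint and capabilities weren't altered in transit before routing any work to it.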
The framing Google is pushing — and it’s actually accurate — is that A2A and MCP are complementary, not competing. MCP handles how an agent connects to tools and data sources. A2A handles how agents communicate with each other across organizational and platform boundaries. A Salesforce agent can hand off a task to a Google agent on Gemini Enterprise, which can query a ServiceNow agent for IT asset data, all through A2A without any of the three systems needing to understand each other’s internal architecture.
For teams running multi-vendor agent deployments, this is genuinely useful. The days of bespoke agent-to-agent glue code are numbered.
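What replaces that glue code is a plain JSON-RPC 2.0 envelope, since that's the wire format A2A rides on. The sketch below builds one; the method name and message fields are illustrative, not copied from the spec, so treat the shapes as assumptions.

```python
import json
import uuid

def make_a2a_request(method: str, text: str) -> str:
    """Build a JSON-RPC 2.0 envelope of the kind A2A traffic rides on.

    The method name and the message/parts layout are illustrative only.
    """
    req = {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),  # correlates the eventual response
        "method": method,
        "params": {
            "message": {
                "role": "user",
                "parts": [{"type": "text", "text": text}],
            }
        },
    }
    return json.dumps(req)

# The Salesforce -> Google -> ServiceNow handoff described above is, on the
# wire, just envelopes like this POSTed to each agent's advertised endpoint.
wire = make_a2a_request("message/send", "List laptops assigned to jane@example.com")
decoded = json.loads(wire)
assert decoded["jsonrpc"] == "2.0"
```

Because the envelope is ordinary HTTP plus JSON-RPC, neither side needs the other's SDK; any stack that can POST JSON can participate.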
## Jules: General Availability, Plus a Next-Generation Preview
Jules, Google’s async coding agent, exited beta at Cloud Next and is now available to all users, integrated into Google AI Pro and Ultra subscriptions. The model: Gemini 3.1 Pro. The workflow: Jules reads your repository, accepts a task, works asynchronously in an isolated branch, and returns a diff with a full explanation of its plan and reasoning.
That async model has always been Jules’ distinguishing feature — and its limitation. Jules is deliberately not interactive. You can’t course-correct mid-task. You fire and wait. For well-scoped, well-defined tasks, that’s fine. For the kind of iterative, exploratory work that characterizes real software engineering, it’s a significant constraint.
The more interesting signal from Cloud Next is what Google is building next. An internal project named Jitro is described as Jules V2, and it’s designed around outcome-based goal-setting rather than task-based prompting. The idea: developers define desired outcomes — better test coverage, lower error rates, improved accessibility compliance — and the agent figures out the path. KPI-driven development, in Google’s framing. Whether Jitro ships as described is an open question, but the direction is notable: even Google recognizes that task-level prompting is a ceiling.
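Since Jitro is unannounced as a product, any concrete API is speculation, but the outcome-based idea itself can be sketched: the caller declares measurable targets instead of tasks, and a loop measures, picks an unmet goal, and lets the agent choose its own next step. Every name below is hypothetical.

```python
from typing import Callable

GoalSpec = dict[str, float]  # metric name -> target value

def run_until_goals_met(
    goals: GoalSpec,
    measure: Callable[[], dict[str, float]],
    improve: Callable[[str], None],
    max_rounds: int = 10,
) -> dict[str, float]:
    """KPI-driven loop: re-measure, find unmet targets, act, repeat."""
    for _ in range(max_rounds):
        metrics = measure()
        unmet = [m for m, target in goals.items() if metrics.get(m, 0.0) < target]
        if not unmet:
            return metrics
        improve(unmet[0])  # the agent chooses its own path for this metric
    return measure()

# Tiny simulated environment: each "improve" call nudges a metric upward.
state = {"test_coverage": 0.62, "a11y_score": 0.80}
result = run_until_goals_met(
    goals={"test_coverage": 0.80, "a11y_score": 0.90},
    measure=lambda: dict(state),
    improve=lambda m: state.__setitem__(m, round(state[m] + 0.10, 2)),
)
assert result["test_coverage"] >= 0.80 and result["a11y_score"] >= 0.90
```

The structural shift is that the prompt ("fix this bug") disappears entirely; the developer's interface is the `goals` dict, and everything inside `improve` is the agent's problem.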
## Gemini Enterprise Agent Platform
Vertex AI has been rebranded to the Gemini Enterprise Agent Platform — a consolidation move that folds Agentspace, Agent Studio, and the underlying runtime into a unified product. The platform ships with a revamped Agent Runtime delivering sub-second cold starts, support for multi-day autonomous workflows, and a low-code interface (Agent Studio) for building agents via natural language description.
Workspace Studio, Google’s no-code agent builder for business users, is now generally available. The numbers cited at the keynote: 3.5 million monthly active users, 170 million tasks automated in a single month. That’s a consumer and SMB signal more than an enterprise engineering signal, but it demonstrates that Google’s distribution advantages — Gmail, Docs, Sheets, Drive — are a real moat for horizontal agent adoption.
For developers, the platform’s most relevant new capability is the combination of A2A routing and the Gemini Enterprise runtime: you can now build agents on Google infrastructure that interoperate with agents running on AWS Bedrock, Azure AI Foundry, or Salesforce Agentforce without custom protocol work. In an increasingly heterogeneous enterprise environment, that is fast becoming a baseline expectation.
## The Editorial Read: Integration Breadth vs. Autonomy Depth
Here’s the honest assessment: Google’s stack at Cloud Next 2026 is the best demonstration yet of AI agents as enterprise integration infrastructure. A2A in production across five major cloud and CRM platforms, Workspace Studio’s 170M monthly automated tasks, Jules available to every Google AI subscriber — these are real adoption numbers.
But there’s a distinction worth maintaining: integration breadth is not the same as autonomy depth.
Jules, at GA, is still an async task executor. It doesn’t iterate. It doesn’t push back on underspecified requirements. It doesn’t hold a context window across a multi-session refactoring effort. The Gemini Enterprise Agent Platform is, at its core, a workflow orchestration and connector platform — extraordinarily useful for enterprise automation, but not the model for autonomous software engineering.
Claude Code’s architecture is different in kind. It runs in your terminal, manages its own context across sessions, integrates directly with your filesystem and shell, and operates with the kind of tight feedback loop that real software development requires. The upcoming agent memory and multi-session work in Claude Code points in a direction Jules hasn’t reached yet.
A2A is actually good news for Claude Code users. As A2A becomes standard infrastructure, Claude Code agents will be able to interoperate with Google-hosted agents, Salesforce agents, and enterprise data systems without bespoke integrations. Anthropic has not announced A2A support explicitly, but the protocol is an open standard built on HTTP and JSON-RPC; integration is a matter of when, not if.
## What Developers Should Take Away
Three things from Cloud Next 2026 matter for working developers:
A2A is becoming table stakes. If you’re building multi-agent workflows that span organizational systems, start treating A2A the same way you treat MCP — as infrastructure rather than an interesting experiment. 150 production deployments in under a year is a strong signal.
Jules is worth revisiting. If you’ve written off Jules as a beta toy, it’s now worth a second look for well-scoped async tasks: fixing specific bugs, adding test coverage, implementing a clearly defined feature against a stable API. It’s not Claude Code, but it’s also not nothing.
Google’s integration moat is real, but narrow. Workspace Studio at 3.5M MAU is impressive, but those are business users automating Gmail and Sheets workflows — not engineers shipping production code. The enterprise automation market and the developer tools market are increasingly adjacent but still distinct, and Google’s dominance in the former doesn’t translate directly to the latter.
Google Cloud Next 2026 was a strong event for enterprise AI infrastructure. The A2A protocol achieving production scale and Jules graduating to GA are genuine milestones. But the companies that will define autonomous software development are still the ones building toward deeper autonomy, not wider integration.
That race is still open.
Sources:
- Google Cloud Next 2026 recap — Google Cloud Blog
- Google Cloud Next 2026: AI agents, A2A protocol, Workspace Studio — The Next Web
- Announcing the Agent2Agent Protocol — Google Developers Blog
- Introducing Gemini Enterprise Agent Platform — Google Cloud Blog
- Jules — An Autonomous Coding Agent
- Google Cloud Next 2026: Every Major Announcement — Oplexa
- 10 more Workspace announcements at Cloud Next 2026 — Google Workspace Blog