---
title: "Meta Avocado Is Closed-Source. The Llama Era Might Be Over."
date: 2026-05-04
tags: ["Meta","open-source","Llama","AI models","industry","open-weight"]
categories: ["Industry","AI Tools"]
summary: "Meta's next flagship model has been delayed twice, is benchmarking below GPT-5.5 and Claude Opus 4.7, and, unlike Llama, won't be open-sourced. Meta is reportedly considering licensing Google Gemini as a stopgap. The open-source AI story Meta spent two years building is quietly unraveling."
---


Meta's next flagship model, codenamed Avocado, was supposed to ship in March. Then May. Now it's drifting toward June, internally benchmarking between Gemini 2.5 and Gemini 3.0 — well below Claude Opus 4.7 and GPT-5.5 "Spud" — and carrying a decision that might matter more than any benchmark: **it won't be open-sourced**.

This is a big deal. The Llama ecosystem is one of the most significant things to happen to AI infrastructure in the past three years. Llama 3, 3.1, 3.2, Llama 4 — Meta's open weights powered thousands of fine-tuned models, local deployments, enterprise AI stacks, and the entire open-source supply chain for language models. Avocado's closed-source designation signals that Meta is pulling back from that commitment at exactly the moment when the open-source community has come to depend on it.

## What We Know About Avocado

The details are thin but consistent. Avocado is being developed by **Meta Superintelligence Labs**, the unit under Alexandr Wang — Scale AI co-founder, now Meta's Chief AI Officer. The model was originally targeted for March 2026, has slipped twice, and the current expectation is a May or June release.

Internal benchmark results are landing between Gemini 2.5 and Gemini 3.0. That puts Avocado meaningfully behind Claude Opus 4.7 (87.6% SWE-bench Verified, 64.3% SWE-bench Pro) and GPT-5.5 "Spud" (82.7% Terminal-Bench 2.0, 58.6% SWE-bench Pro). For a model that's months late and positioned as Meta's frontier play, that's a difficult position to launch from.

The closed-source decision appears strategic rather than temporary. Unlike Llama — where open weights were a deliberate policy choice, a way to build ecosystem dominance and put pressure on OpenAI's proprietary model — Avocado is being positioned as a competitive enterprise product. Meta appears to have calculated that releasing frontier weights is now a liability rather than a brand asset.

## The Gemini Option

Perhaps the most striking detail: Meta reportedly discussed temporarily licensing Google Gemini as an interim model while Avocado catches up.

Let that sink in. The company that built Llama — the model that became the backbone of open-source AI — discussed licensing a competitor's proprietary model because its own frontier offering isn't ready. This is not a company that's winning the AI race. This is a company in the middle of a painful recalibration.

No decision has been announced; Meta may have already ruled it out. But the fact that it reached the conversation stage says something about how badly internal momentum has slipped.

## The Pattern: A Second Closed-Source Turn

Avocado isn't the first sign of Meta's retreat from openness in 2026. In April, [Muse Spark — Meta's next-generation creative AI platform](/posts/meta-muse-spark-closed-source-open-source-ai/) launched as a closed-source product with no announced plans for open weights. The stated reason was protecting competitive advantage in a specific product area. Avocado suggests that reasoning has generalized to Meta's model strategy.

The trajectory is clear:

- **Llama 4** (April 2025): Open, under a community license
- **Muse Spark** (April 2026): Closed, no weights, no timeline for release
- **Avocado** (expected May/June 2026): Closed, enterprise-positioned

The progression isn't subtle. Meta is exiting the open-source frontier — at least for its most capable models.

It's worth understanding why. For the first three years of the Llama era, openness was cheap. Llama 2 and 3 were capable but not at the frontier — releasing them cost Meta little competitive ground while earning enormous ecosystem goodwill, research citations, and developer loyalty. As models get better and the gap between "open-source capable" and "frontier capable" narrows, releasing weights becomes a genuine cost. You're handing competitors a foundation they can fine-tune and deploy without paying your API prices. The math changed.

## What This Means for the Ecosystem

The Llama ecosystem is substantial. Hugging Face hosts hundreds of Llama-derived models. AWS Bedrock, Azure AI, and Google Vertex all offer Llama inference. Thousands of enterprises have built fine-tuned applications on Llama weights. The open-source AI coding stack — tools like Aider and Continue.dev — relies on Llama models as the affordable local option.

Llama 4 and older versions aren't going anywhere. The weights are already out there. But the open-source community has been operating on an implicit assumption: that Meta would continue releasing capable open weights at or near the frontier. If Avocado is closed and whatever follows is also closed, that supply dries up.

The practical consequences:

**Local deployment and fine-tuning pipelines break down.** If Avocado's capabilities exceed Llama 4's — which they should, given the training investment — closed weights mean engineers can't fine-tune on domain-specific data. The model becomes a rented API, not owned infrastructure.

**Cost changes.** The "free" model you could run on a GPU cluster becomes a metered service. For cost-sensitive applications that currently self-host Llama, Avocado's API is a new line item.
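The shape of that cost shift is easy to sketch. Self-hosting open weights is a fixed infrastructure cost; a metered API scales with usage. The sketch below makes the comparison concrete — every number in it is a hypothetical placeholder for illustration, not real Avocado pricing or real GPU rates:

```python
# Back-of-envelope: self-hosted open weights vs. a metered API.
# ALL numbers are hypothetical placeholders, not actual pricing.

def monthly_self_host_cost(gpu_hourly_rate: float, gpus: int, hours: float = 730) -> float:
    """Fixed cost: you pay for the GPU cluster whether or not tokens flow."""
    return gpu_hourly_rate * gpus * hours

def monthly_api_cost(tokens_per_month: float, price_per_million_tokens: float) -> float:
    """Metered cost: scales linearly with token volume."""
    return tokens_per_month / 1_000_000 * price_per_million_tokens

# Hypothetical workload: 2B tokens/month on an 8-GPU node.
self_host = monthly_self_host_cost(gpu_hourly_rate=2.50, gpus=8)
api = monthly_api_cost(tokens_per_month=2_000_000_000, price_per_million_tokens=10.0)

print(f"self-host: ${self_host:,.0f}/mo")  # $14,600/mo
print(f"api:       ${api:,.0f}/mo")        # $20,000/mo
```

The crossover point depends entirely on volume: low-traffic applications favor the API, while high-throughput pipelines — exactly the fine-tuned enterprise workloads Llama attracted — favor owned weights. Closing the weights removes the left side of that comparison.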

**Lock-in returns.** The entire value proposition of open weights — control, privacy, no vendor lock-in, deploy anywhere — evaporates for Avocado users. They're back in the same position as GPT-4 customers in 2023.

## The Beneficiaries

**Anthropic.** Claude on AWS Bedrock, with Mantle's zero-operator-access architecture, already offers the enterprise air-gap story that Avocado will attempt to compete on — but Anthropic got there first. The [$25B Amazon investment](/posts/amazon-anthropic-25-billion-aws-100-billion-deal/) secured the infrastructure partnerships, compliance certifications, and Claude Code integration that make Bedrock a genuine development platform, not just a model API. If enterprise customers are evaluating closed frontier models, Claude Opus 4.7's benchmark lead (64.3% SWE-bench Pro vs. Avocado's unconfirmed but below-GPT-5.5 numbers) is a hard argument for Meta to counter.

**The open-source labs.** If Meta vacates the open-source frontier, the gap won't stay empty. [DeepSeek V4-Pro](/posts/deepseek-v4-open-weight-frontier-huawei-ascend/) (MIT license, 80.6% SWE-bench Verified, at 1/6th the cost of Opus 4.7) and [GLM-5.1](/posts/glm-5-1-open-source-beats-frontier-models-swe-bench-pro/) (MIT, 58.4% SWE-bench Pro from Z.AI) have already demonstrated that frontier-adjacent capability doesn't require a hyperscaler budget or a proprietary license. Chinese labs in particular have shown willingness to release powerful weights openly — partly for ecosystem reasons, partly because the regulatory environment makes closed models less commercially valuable domestically.

## The Honest Assessment

Meta's AI ambitions are real and its resources are massive. Avocado may ship, may surprise, may even get open-sourced eventually if the competitive situation shifts. Alexandr Wang and the Meta Superintelligence Labs team are serious people building serious systems.

But right now, in May 2026, the signals are discouraging. A model that's months late, benchmarking below its main competitors, under consideration for a third-party licensing bridge, and no longer carrying the open-source commitment that made the Llama brand significant — that's a difficult story to tell as a win.

The Llama era gave developers something genuinely valuable: a credible, capable, free alternative to proprietary models. Whether Avocado marks the end of that era — or just a chapter break before Meta recaptures frontier performance and reverts to openness — is the question Meta hasn't answered yet.

The open-source AI community, watching closely, would like that answer soon.

---

**Sources:**
- [Meta postpones Avocado AI model launch — MLQ.ai](https://mlq.ai/news/meta-postpones-avocado-ai-model-launch-to-may-amid-performance-gaps-with-competitors/)
- [Meta Muse Spark closed-source — sdd.sh](/posts/meta-muse-spark-closed-source-open-source-ai/)
- [Amazon $25B Anthropic investment — sdd.sh](/posts/amazon-anthropic-25-billion-aws-100-billion-deal/)
- [DeepSeek V4 open-weight frontier — sdd.sh](/posts/deepseek-v4-open-weight-frontier-huawei-ascend/)
- [GLM-5.1 open-source beats frontier — sdd.sh](/posts/glm-5-1-open-source-beats-frontier-models-swe-bench-pro/)
- [Meta Superintelligence Labs announcement — Meta](https://about.fb.com/news/2025/09/meta-ai-superintelligence-labs/)

