GitHub has been quietly shipping meaningful improvements to Copilot’s coding agent all through early 2026 — cross-agent memory, security scanning baked into the agent’s workflow, Jira integration, a model picker. Taken together, they represent a genuine leap in what the coding agent can do.
Then, on March 25, GitHub announced it would start using interaction data from free, Pro, and Pro+ users to train its AI models — effective April 24.
The improvements and the policy change arrived within 24 hours of each other, which is either coincidence or deliberate sequencing. Either way, they're worth examining together, because the tradeoffs are now explicit.
What’s Actually New
Cross-Agent Memory
Memory went on by default for Pro and Pro+ users on March 4, 2026. The concept is straightforward: knowledge that the coding agent acquires is stored and shared across sessions, across tools (coding agent, CLI, code review), and across time.
Practically, this means the agent can learn that your test suite is slow, that a particular module has unstable tests, that your team prefers a specific error-handling pattern — and apply that knowledge in future tasks without being told again. Memories are repository-scoped and validated against the current codebase before being applied, so stale knowledge doesn’t silently cause problems. They auto-expire after 28 days.
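GitHub hasn't published the memory system's internals, but the behavior described above (repository scoping, validation against the current codebase, 28-day expiry) can be sketched as a small data structure. Everything here is a hypothetical illustration, not GitHub's implementation — the class names and the validation hook are invented:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

TTL = timedelta(days=28)  # memories auto-expire after 28 days

@dataclass
class Memory:
    repo: str   # memories are repository-scoped
    note: str   # e.g. "test suite takes ~20 min"
    created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class MemoryStore:
    def __init__(self):
        self._items: list[Memory] = []

    def remember(self, repo: str, note: str) -> None:
        self._items.append(Memory(repo, note))

    def recall(self, repo: str, still_valid) -> list[str]:
        """Return non-expired memories for this repo that still pass
        validation against the current codebase."""
        now = datetime.now(timezone.utc)
        return [m.note for m in self._items
                if m.repo == repo
                and now - m.created < TTL
                and still_valid(m.note)]

store = MemoryStore()
store.remember("acme/api", "integration tests are flaky on CI")
store.remember("acme/web", "team prefers Result-style error handling")
# The validation hook stands in for checking memories against the codebase;
# here it accepts everything.
print(store.recall("acme/api", lambda note: True))
```

The key design point is that recall filters on three axes at once — repository, age, and current validity — which is why stale or cross-repo knowledge can't silently leak into a session.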
GitHub ran A/B tests on the feature. The results: a 7-percentage-point increase in PR merge rates for coding agent sessions with memory (90% vs. 83% without), and a 2-point bump in positive code review feedback. Both results were statistically significant (p < 0.00001). They also tested adversarial conditions — deliberately seeding memory with false information — and found agents caught and corrected contradictions rather than propagating bad data.
For Business and Enterprise plans, memory is off by default and must be enabled in org settings. The likely reason: organizations that require audit trails need to understand what the agent remembers before turning it loose.
Security Scanning in the Agent Workflow
Since March 18, 2026, the Copilot coding agent runs a security validation layer before opening a pull request — automatically, with no configuration needed. The agent’s output goes through:
- CodeQL scanning: Static analysis for code vulnerabilities
- Secret scanning: Detecting API keys, tokens, and credentials in new code
- Dependency vulnerability checks: New packages are checked against the GitHub Advisory Database for malware advisories and CVSS High/Critical CVEs
These run for free, whether or not a team has GitHub Advanced Security. Repository admins can configure which validation tools run from repo settings.
This is significant. One of the legitimate concerns about AI-generated code is that it can introduce security issues that slip past human reviewers — hallucinated API calls that expose data, dependencies pulled from typosquatted package names, credentials hardcoded because the agent didn’t know better. Running CodeQL and secret scanning inside the agent’s loop, before the PR is even opened, addresses that concern at the right layer.
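Conceptually, the validation layer is a gate: the three checks run over the agent's proposed changes, and the PR opens only if nothing is flagged. The sketch below is a toy illustration of that gate — the check functions are crude stand-ins I invented, not how CodeQL or secret scanning actually work:

```python
# Each check returns a list of findings; an empty list means it passes.
def codeql_scan(diff):
    # Stand-in for CodeQL static analysis.
    return ["possible SQL injection"] if 'f"SELECT' in diff else []

def secret_scan(diff):
    # Stand-in for secret scanning (GitHub tokens start with "ghp_").
    return ["hardcoded token"] if "ghp_" in diff else []

def dependency_check(new_packages, advisories):
    # Flag new packages with known High/Critical advisories.
    return [p for p in new_packages if p in advisories]

def validate_before_pr(diff, new_packages, advisories):
    """Run all checks; the PR opens only when findings is empty."""
    findings = (codeql_scan(diff)
                + secret_scan(diff)
                + dependency_check(new_packages, advisories))
    return (len(findings) == 0, findings)

ok, findings = validate_before_pr(
    diff='token = "ghp_example123"',
    new_packages=["leftpad2"],
    advisories={"leftpad2"},
)
print(ok, findings)  # gate fails: leaked token plus vulnerable dependency
```

The point of running this inside the agent's loop rather than in CI is timing: findings are fixed before a human ever sees the PR, instead of bouncing back after review.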
Model Picker
The coding agent now lets developers select the model for each task. Faster models for routine work (writing unit tests, renaming variables); more capable models for complex architectural changes. An “Auto” option delegates the choice to GitHub.
The model picker was available to Pro/Pro+ users earlier; it extended to Business and Enterprise users in February 2026. GPT-5.4 was added to the picker on March 5 — GitHub reported it “consistently hits new rates of success in agentic software development.”
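The routing logic this implies is simple: an explicit choice wins; "Auto" falls back to a task-to-tier mapping. A minimal sketch, with all model and task names hypothetical:

```python
# Hypothetical mapping from task type to model tier.
ROUTES = {
    "unit-tests": "fast-model",
    "rename": "fast-model",
    "refactor": "capable-model",
    "architecture": "capable-model",
}

def pick_model(task_type: str, choice: str = "auto") -> str:
    """The developer's explicit choice wins; 'auto' delegates to a
    routing table, defaulting to the capable tier for unknown tasks."""
    if choice != "auto":
        return choice
    return ROUTES.get(task_type, "capable-model")

print(pick_model("unit-tests"))            # fast-model
print(pick_model("refactor", "gpt-5.4"))   # explicit override wins
```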
Jira Integration
In public preview since March 5, 2026: you can assign Jira issues directly to Copilot, and it will open a draft PR in the corresponding GitHub repository. No context-switching between Atlassian and GitHub. For teams where planning lives in Jira and code lives in GitHub — which is most enterprise teams — this closes a workflow gap that previously required either manual handoffs or custom tooling.
Self-Review Before Opening PRs
Before the coding agent opens a pull request, it runs Copilot code review against its own changes, incorporates the feedback, and iterates. The PR that lands in your review queue has already gone through a round of automated self-criticism. GitHub still recommends human review, but the signal-to-noise ratio of what reaches humans should improve.
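As a rough mental model (GitHub hasn't documented the loop's internals, and the function names here are invented), self-review is a bounded fixed-point iteration: review the changes, apply the feedback, stop when the review comes back clean or a round budget runs out:

```python
def self_review_loop(changes, review, apply_fix, max_rounds=3):
    """Review own changes, apply feedback, and repeat until the
    review is clean or the round budget is exhausted."""
    for _ in range(max_rounds):
        comments = review(changes)
        if not comments:
            break
        changes = apply_fix(changes, comments)
    return changes

# Toy stand-ins: the review flags a missing docstring; the fix adds one.
def review(changes):
    return ["missing docstring"] if '"""' not in changes else []

def apply_fix(changes, comments):
    return '"""Added by self-review."""\n' + changes

result = self_review_loop("def f(): return 1", review, apply_fix)
print(result)
```

The round budget matters: without it, a reviewer that always finds something would loop forever, so the PR can still land with known (minor) findings attached.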
The Data Policy Change
On March 25, GitHub announced that starting April 24, 2026, it will use interaction data from Copilot Free, Pro, and Pro+ users to train AI models. Interaction data includes: inputs, outputs, code snippets, context, accepted suggestions, chat interactions, and feedback.
Copilot Business, Enterprise, and student tier users are exempt. Data may be shared with Microsoft affiliates but not third-party AI providers. Users can opt out in account settings; prior opt-outs carry over.
GitHub’s stated rationale is “more intelligent, context-aware coding assistance,” citing improved suggestion acceptance rates from internal testing.
The developer reaction has been predictably polarized. Some frame it straightforwardly: if you’re on a free or consumer tier, the product is partly funded by your usage data. That’s how consumer software works. Others argue that code is proprietary by default — developers on Pro plans who’ve written internal tooling, personal projects, or client code didn’t sign up to have their work become training data, even if GitHub claims it’s non-identifying.
A few things worth noting:
Business and Enterprise users are exempt. The policy draws a clear line between individual developers and organizational deployments. Organizations paying for Copilot Business or Enterprise get explicit data protection commitments; individuals on consumer plans don't get the same ones.
Opt-out exists but requires action. Users who care must find the setting and disable it. Most won’t notice the change. This is a deliberate design choice.
The timing is notable. Releasing major capability improvements — memory, security scanning, Jira integration — and then announcing a data policy change in the same week is a reasonable PR strategy: lead with value, bury the controversy.
How to Think About the Package
GitHub Copilot’s coding agent, in March 2026, is meaningfully more capable than it was six months ago. Cross-agent memory reduces repetitive context-setting. Security scanning addresses a real gap in AI-generated code quality. The model picker gives developers control over cost and quality tradeoffs. Self-review reduces the noise that reaches human reviewers.
The data policy change doesn’t negate those improvements. But it does clarify the terms.
If you’re building anything sensitive on a Pro or Pro+ plan — client code, proprietary algorithms, internal tooling — you should either opt out, upgrade to a Business plan, or reconsider what you share with the coding agent. Not because GitHub is necessarily doing something malicious with the data, but because “used for AI model training” is a broad category with unclear boundaries, and your code is yours until you decide otherwise.
The improvements are real. The tradeoff is real. Now you can make an informed choice.
Sources:
- What’s New with GitHub Copilot Coding Agent — GitHub Blog
- Configure Copilot Coding Agent Validation Tools — GitHub Changelog
- Copilot Memory Now On by Default — GitHub Changelog
- Building an Agentic Memory System for GitHub Copilot — GitHub Blog
- GitHub Copilot Coding Agent for Jira — DevOps.com
- GitHub to Use Copilot Data for AI Training from April 24 — Roboin