Last week, PostgreSQL shipped versions 18.2, 17.8, 16.12, 15.16, and 14.21. Eleven security vulnerabilities fixed in a single quarterly release. For context: PostgreSQL typically ships one to four CVEs per release. The project has a 30-year track record of quiet, disciplined engineering. Eleven is not normal.
But it’s not an anomaly either. It’s the new baseline.
## The Numbers Across the Stack
PostgreSQL is one data point in what NIST now confirms is a structural shift. CVE submissions in Q1 2026 were 33% higher than Q1 2025 — and 2025 was already a record year. NIST enriched nearly 42,000 CVEs in 2025, more than any prior year, and still could not keep pace with submissions.
The per-project numbers are harder to ignore:
| Project | Change (YTD 2026) |
|---|---|
| Chrome | +563% |
| GitHub | +476% |
| Apache | +170% |
| Mozilla | +157% |
| Spring Framework | 17 CVEs in all of 2025 → 30 in 2 months of 2026 |
| Linux kernel | 3 local-root privilege escalation CVEs in the same code area, weeks apart |
Spring Security released emergency patches on April 21, 2026 fixing multiple CVEs, including an infinite recursion OOM in Spring Cloud Function and a filter-expression injection in Spring AI. The Linux kernel disclosed Copy Fail (CVE-2026-31431), then Dirty Frag (CVE-2026-43284 / CVE-2026-43500), then Fragnesia (CVE-2026-46300) — three separate local-privilege-escalation vulnerabilities in related kernel code, each allowing any unprivileged user to reach root via a public proof-of-concept, each disclosed within weeks of the last.
This is not the fingerprint of a sudden regression in code quality. These projects haven’t gotten worse. The tooling for finding what was already broken has gotten dramatically better.
## AI Found the Bugs. AI Is Also Looking for Them on the Other Side.
The proximate cause is well-documented at this point. CSO Online reported in early 2026 that AI tooling had uncovered 20-year-old bugs in PostgreSQL and MariaDB — latent vulnerabilities that had been sitting in plain sight through dozens of human security audits. In April 2026, Anthropic disclosed that Claude Mythos Preview had identified thousands of zero-day vulnerabilities across major operating systems and browsers.
The economics have inverted. A skilled security researcher running manual analysis might audit one component of one project in a week. An AI model can sweep an entire codebase in minutes, flag plausible vulnerability patterns across every execution path, and do it again tomorrow after the next commit. Every major open-source project is now subject to continuous, automated re-examination at a scale that would have required a large, dedicated red team a year ago.
The bugs being found are real. These aren’t false positives — the PostgreSQL CVEs carry CVSS scores of 8.2 to 8.8. The pgcrypto heap buffer overflow (CVE-2026-2005), the intarray arbitrary code execution (CVE-2026-2004), the pg_trgm heap overflow (CVE-2026-2007) — all high-severity, all in extensions that have been shipped and trusted for years.
The uncomfortable flip side: the same AI capability that finds these bugs can be used to weaponize them. Barracuda Networks’ May 2026 threat report documents a measurable collapse in the time between CVE disclosure and functional exploit availability. The exploit window — historically measured in weeks — is now measured in hours for well-documented vulnerabilities. AI doesn’t just find the bug; it can write the PoC faster than the patch reaches most production systems.
## The Triage Crisis Nobody Planned For
Here is the operational problem that doesn’t make headlines: the humans responsible for validating and fixing these vulnerabilities were not resourced for this volume.
Most major open-source projects are maintained by small teams — often partially or entirely volunteers. PostgreSQL, Spring, and the Linux kernel are better-resourced than most, but even they are absorbing a materially higher triage load with the same team sizes. For the thousands of smaller open-source projects that underpin the modern stack, the math is worse.
A CVE report is not a fix. It’s a claim that requires validation: Is this actually exploitable? Under what conditions? Does the proposed patch address root cause or just the reported surface? The cost of generating a vulnerability report with AI has dropped to near-zero. The cost of verifying one has not changed.
Security teams downstream are experiencing this as an advisory flood. In ProjectDiscovery’s 2026 AI Coding Impact Report, two-thirds of security teams were already spending more than half their time manually triaging AI-generated findings rather than remediating them. And that survey closed before the CVE surge reached its current rate.
## What This Means If You Run Production Software
The practical implications are not subtle.
**Your patching cadence is now wrong.** If you’re on quarterly patch cycles, you are structurally behind. PostgreSQL shipped 11 CVEs with CVSS scores up to 8.8. Linux had a local-root exploit with a public PoC. Both in May 2026. If you patched in March and your next window is June, you have a gap.
**Extensions and embedded dependencies are the attack surface.** The PostgreSQL CVEs weren’t in the core engine — they were in pgcrypto, intarray, and pg_trgm. The Spring CVEs included Spring AI and Spring Cloud Function. AI vulnerability discovery is thorough: it doesn’t skip the extension ecosystem the way human auditors sometimes do. Your threat surface is larger than your primary dependency list.
**AI-generated code is being scanned by the same tools.** If 51% of GitHub commits in 2026 are AI-assisted, and AI models generate code that contains OWASP top-10 vulnerabilities at a high base rate, then the CVE surge isn’t only about old bugs in legacy code. It’s also about new bugs in recently shipped AI-generated features. Both populations are being scanned simultaneously.
**The time between disclosure and exploit is now too short for slow response.** When a public PoC for a local-root Linux vulnerability is available within hours of CVE publication, the margin for “we’ll patch it in the next maintenance window” is gone. Automated patching infrastructure — KernelCare, live patch pipelines, dependency bots — stops being a nice-to-have and becomes a baseline requirement.
## The Correct Response Is Not Panic
None of this argues for slowing down your stack or auditing it into paralysis. The bugs being found are real, but most of them are also patchable. The CVE surge is, in a meaningful sense, good news: these vulnerabilities existed before AI started finding them. The only thing that changed is that we now know about them.
The practical response is architectural:
**Treat your dependency update pipeline as infrastructure, not maintenance.** Renovate, Dependabot, automated patch PRs — these should be running continuously and merging on green CI. A project with a working automated update pipeline will absorb the CVE surge without additional human load. A project that patches manually on a quarterly schedule will not.
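A minimal sketch of what “merge on green CI” looks like in a Renovate config — the option names come from Renovate’s documented schema, but the specific automerge policy (patch updates and security alerts only) is an illustrative choice, not a recommendation for every risk profile:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchUpdateTypes": ["patch"],
      "automerge": true
    }
  ],
  "vulnerabilityAlerts": {
    "labels": ["security"],
    "automerge": true
  }
}
```

Automerge only fires when your CI passes, which is the point: the pipeline absorbs routine patch volume, and humans only see the updates that break something.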
**Scope your exposure by extension and plugin.** The PostgreSQL and Spring CVEs were concentrated in optional extensions that not everyone uses. Before a patch is available, the fastest risk reduction is confirming whether the vulnerable component is actually deployed in your environment. pgcrypto, intarray, pg_trgm — if you don’t use them, disable or remove them.
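For the PostgreSQL case, that check is a standard catalog query (extension names taken from the advisories above; run it in each database, since extensions are installed per-database):

```sql
-- List every extension actually installed in this database.
SELECT extname, extversion FROM pg_extension ORDER BY extname;

-- If a vulnerable extension is installed but unused, remove it.
-- RESTRICT (the default) fails if any objects still depend on the
-- extension, which is a useful safety check before assuming it is unused.
DROP EXTENSION IF EXISTS pgcrypto RESTRICT;
```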
**Build agentic security review into the generation loop.** If AI is generating a meaningful fraction of your code, the same AI capability that finds old vulnerabilities can review new ones. A Claude Code pre-commit hook running security-focused static analysis isn’t a future aspiration — it’s a deployable pattern today. AI-generated code with an AI security reviewer in the loop produces fewer vulnerabilities than human-reviewed AI code, because the reviewer doesn’t fatigue.
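One deployable shape for this, assuming the widely used pre-commit framework, with Semgrep’s community security ruleset standing in for whatever scanner (AI-driven or otherwise) you actually run:

```yaml
# Illustrative .pre-commit-config.yaml: block commits that introduce
# findings from a security-focused ruleset. The hook id and flags are
# Semgrep's; the rev below is a placeholder -- pin a real release tag.
repos:
  - repo: https://github.com/semgrep/pre-commit
    rev: "v1.0.0"  # placeholder; pin an actual Semgrep release
    hooks:
      - id: semgrep
        args: ["--config", "p/security-audit", "--error"]
```

The mechanism is what matters, not the scanner: the review runs on every commit, before the code reaches CI, and it never gets tired on the fortieth diff of the day.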
**Monitor disclosure feeds, not just release notes.** The time between CVE publication and patch availability can be hours for some projects. If your threat intelligence is “wait for the vendor release email,” you’re reading about exploits after the fact. NIST NVD, VulnCheck, and OpenCVE all offer real-time feeds that can be piped into automated triage workflows.
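A minimal polling sketch against the public NVD 2.0 REST API — the endpoint and parameter names come from NIST’s published API documentation, but the severity threshold and polling window are illustrative choices, and a production version would add an API key, rate limiting, and retries:

```python
"""Sketch: poll the NIST NVD 2.0 API for recently modified high-severity CVEs."""
import json
import urllib.parse
import urllib.request
from datetime import datetime, timedelta, timezone

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"


def high_severity_ids(payload: dict, min_score: float = 8.0) -> list[str]:
    """Extract CVE IDs at or above min_score from an NVD 2.0 response body."""
    hits = []
    for item in payload.get("vulnerabilities", []):
        cve = item.get("cve", {})
        for metric in cve.get("metrics", {}).get("cvssMetricV31", []):
            score = metric.get("cvssData", {}).get("baseScore", 0.0)
            if score >= min_score:
                hits.append(cve["id"])
                break  # one qualifying metric is enough
    return hits


def fetch_recent(hours: int = 4) -> dict:
    """Fetch CVEs modified in the last `hours` hours (requires network access)."""
    now = datetime.now(timezone.utc)
    params = urllib.parse.urlencode({
        "lastModStartDate": (now - timedelta(hours=hours)).isoformat(),
        "lastModEndDate": now.isoformat(),
    })
    with urllib.request.urlopen(f"{NVD_API}?{params}") as resp:
        return json.load(resp)
```

Piped into a pager or a ticketing webhook, this turns “wait for the release email” into a feed you see within hours of publication.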
## The Broader Shift
The CVE surge is the security industry’s version of the broader AI acceleration pattern: AI is increasing the rate at which consequential things happen, in both directions. Code gets written faster. Bugs get found faster. Exploits get developed faster. Patches need to ship faster.
The organizations that will absorb this well are the ones that have already automated the low-value, high-frequency work: dependency updates, basic security scanning, patch deployment. The ones that will struggle are the ones whose security posture still depends on human reviewers moving at human speed against a threat surface that is now being probed at machine speed.
Your stack is being scanned right now. Whether the results show up in a responsible disclosure report or in an attacker’s toolbox first depends partly on luck and partly on how fast your patching infrastructure runs.
Probably a good time to find out which one you have.
Sources:
- PostgreSQL 18.2, 17.8, 16.12, 15.16, and 14.21 Released — postgresql.org
- AI finds 20-year-old bugs in PostgreSQL and MariaDB — CSO Online
- AI Vulnerability Discovery and the Open Source CVE Surge — Security Boulevard
- The First CVE Wave: AI-Assisted Vulnerability Discovery — VulnCheck
- 30 CVEs in Two Months: What the Spring Numbers Tell Us — HeroDevs
- Dirty Frag Linux Kernel CVEs — TuxCare
- Fragnesia CVE-2026-46300 — AlmaLinux
- AI-Driven Vulnerability Discovery and Exploit Trends — Barracuda Networks
- NIST CVE Prioritization as AI Speeds Up Discovery — Penligent