GitHub Merge Queue Bug: What Happened and Why It Matters
On April 23, 2026, GitHub’s merge queue silently produced incorrect merges that reverted previously shipped code. Pull requests that were reviewed, approved, and queued landed on main as something completely different from what anyone approved.
Not a crash. Not downtime. The UI told you everything was fine. You hit merge. What went in was not what you reviewed.
What Actually Happened
GitHub’s merge queue takes your branch, squashes it into a single commit, and merges it to main. Standard stuff. Thousands of teams depend on it every day.
But on April 23rd, a faulty code path changed how the squash operation computed its base reference. Instead of building on top of main’s current tip, it built on a stale reference. The squash operation overwrote whatever was on main with just your changes applied to that stale snapshot. Everything that had landed between the stale base and main’s actual tip? Gone from the visible history.
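To make that concrete, here’s roughly what the two code paths look like in plain git. This is a sketch, not GitHub’s actual implementation (which isn’t public); feature stands in for your PR branch, and stale-base is a hypothetical ref pointing at an outdated commit on main.

    # Correct behavior: squash the PR's changes onto main's current tip.
    git checkout main
    git pull origin main                  # make sure we're at the real tip
    git merge --squash feature            # stage feature's combined diff on HEAD
    git commit -m "squash of feature"     # main grows by exactly one commit

    # The April 23 failure mode, approximated: same squash, wrong base.
    git checkout -B queue-tmp stale-base  # hypothetical outdated snapshot of main
    git merge --squash feature
    git commit -m "squash of feature"
    # If this commit becomes main's new tip, everything that landed between
    # stale-base and the real tip vanishes from main's history.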
The root cause was an incompletely gated feature flag. A new code path for an unreleased feature was supposed to be hidden behind a flag. The gating was incomplete. The experimental behavior leaked into production for every squash merge group containing more than one PR.
658 repositories. 2,092 pull requests. That’s GitHub’s official count. The incident window ran from 16:05 to 20:43 UTC. About four and a half hours of silent corruption.
[Diagram: how squash merge works]
[Diagram: what went wrong on April 23]
The Detection Problem
Here’s what makes this different from a normal outage. GitHub’s automated monitoring didn’t catch it. The merge commits were structurally valid. Git didn’t complain. CI didn’t fail on the merge itself. The commits existed; they just contained the wrong code. As The Stack reported, the company didn’t even notice until customers howled.
Customer reports caught it. People started opening tickets saying “my merge doesn’t match my PR.” Think about that for a second. The system designed to protect branch integrity was the system breaking it, and the safety net was humans reading diffs after the fact.
The UI just lied to you. You looked at your PR. It looked correct. You hit merge. What actually went in was someone else’s nightmare.
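If you want to run that human check mechanically, git makes it scriptable. A sketch, assuming pr-branch is your local copy of the PR and <merge-sha> is the commit the queue landed on main: git patch-id hashes the change itself, ignoring line-number drift, so the reviewed diff and the landed diff should produce the same ID.

    # What the PR was supposed to change:
    git diff origin/main...pr-branch | git patch-id --stable

    # What the squash commit actually changed:
    git show <merge-sha> | git patch-id --stable

    # Same first column: the landed change is the reviewed change.
    # Different: go read the diff, because something changed it.

A clean rebase keeps the ID stable; a real conflict resolution changes it, which is exactly the case you want a human to look at.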
The Numbers Don’t Sit Right
GitHub’s official postmortem says 2,092 PRs across 658 repos. But community reports paint a different picture. At least one commenter claimed their company alone had 200+ affected PRs. If a single customer accounts for 200+, the 2,092 total feels thin.
GitHub initially framed the number against total merges to make it sound small. Whether it’s 0.07% or 0.3%, that framing misses the point. If your team’s repo was one of the 658, the percentage is 100% for you. You spent the afternoon untangling your main branch.
The Week That Kept Getting Worse
The merge queue incident didn’t happen in isolation. Four days later, on April 27, GitHub’s search went down. The Elasticsearch subsystem that powers search across pull requests, issues, and projects buckled under load, likely from a botnet attack. GitHub’s CTO acknowledged both incidents in the same public statement on scaling and reliability.
Two major incidents in five days. The merge queue broke code integrity. The search outage broke workflows. Different systems, different root causes, same week.
And if you zoom out, the pattern gets worse. GitHub has been having reliability problems since early 2026. The reason is the same every time: the infrastructure can’t keep up with the traffic.
The AI Traffic Problem
Here’s the part GitHub doesn’t love talking about. Pull requests opened by AI agents more than quadrupled between September 2025 and March 2026. GitHub’s own data shows commit volume is on track to hit 14 billion in 2026, roughly a 14x increase over the previous year. Weekly commit volume hit 275 million by April.
GitHub started planning for a 10x capacity increase in October 2025. By February 2026, they realized they needed to plan for 30x. All of this is documented in their CTO’s scaling and availability roadmap.
The platform is drowning in machine-generated traffic. Agents clone repos, create branches, push code, open PRs, and trigger CI pipelines at speeds no human team ever could. Every one of those operations hits the same infrastructure that handles your git push. When that infrastructure buckles, you get merge queue bugs, search outages, and authentication failures. Not because the code was bad. Because the system was overwhelmed.
The Status Page Question
Go check GitHub’s status page for April 23rd. You’ll find an incident report there now. But the incident wasn’t caught by automated monitoring. It was detected through customer support tickets. That gap between “things are breaking” and “we know things are breaking” is the part that stings.
This wasn’t a network outage or a service degradation that trips an alert. The system was operating normally. It was just operating wrongly. And that’s a harder problem than downtime. Downtime is obvious. Silent data corruption is not.
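There is one cheap invariant anyone can test, though: a commit you shipped should remain reachable from main forever. A sketch, where <shipped-sha> is any commit you know landed before the incident window:

    git fetch origin
    git merge-base --is-ancestor <shipped-sha> origin/main \
      && echo "still reachable from main" \
      || echo "gone: main no longer contains this commit"

Run that over your release tags and you’ve turned silent corruption into a failing check.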
The Force Push Question
Here’s a question that hasn’t gotten a clean answer. If the merge queue was overwriting main’s history instead of appending to it, does that mean GitHub is force-pushing to main internally?
This is speculation, not confirmed fact. GitHub’s postmortem says they “reverted the faulty code change and force-deployed the fix.” Trunk.io published an analysis of what happens when a merge queue builds on the wrong commit. But the question of how the incorrect history replaced the correct history is worth asking. In normal git, you can’t rewrite a remote branch’s history without a force push. Branch protection rules exist specifically to prevent this.
If the merge queue operates outside normal git push constraints, that’s a design decision worth documenting publicly. Not because it’s inherently wrong. Lots of internal systems operate with elevated permissions. But because the contract between GitHub and its users assumes git semantics. When that contract breaks silently, trust erodes.
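You don’t have to take the remote’s word for it, either. Every clone keeps a reflog for its remote-tracking refs by default, so a fetch that moves origin/main somewhere that doesn’t contain its previous position is direct evidence of a rewrite. A sketch:

    git fetch origin
    old=$(git rev-parse 'origin/main@{1}')  # where origin/main pointed before this fetch
    new=$(git rev-parse origin/main)
    if ! git merge-base --is-ancestor "$old" "$new"; then
      echo "origin/main moved non-fast-forward: history was rewritten"
    fi

git fetch also flags these updates itself, printing "(forced update)" next to the ref. On a protected main branch, that string should never appear.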
It’s Not Just the Merge Queue
The force push question leads to an uncomfortable place. Less than a month before the merge queue incident, security researchers at Wiz disclosed CVE-2026-3854, a CVSS 8.7 command injection vulnerability in GitHub’s git push infrastructure.
The mechanism was simple. Push options weren’t being sanitized properly. They were passed into an internal service header that used a semicolon as a delimiter. By injecting a semicolon, an authenticated user with push access could execute arbitrary commands on GitHub’s backend servers.
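The client-side surface here is an ordinary git feature. What follows is a sketch of the bug class, not GitHub’s actual internals: the header format isn’t public and the payload below is hypothetical, but joining untrusted strings with a delimiter and splitting on it later always fails the same way.

    # Legitimate use: push options ride along with a push for the server to read.
    git push -o "deploy=staging" origin main

    # The bug class: if a backend joins options into one semicolon-delimited
    # header and later splits on ';', an embedded semicolon smuggles in a
    # second, attacker-controlled field. Hypothetical payload shape:
    git push -o "deploy=staging; <injected-command>" origin main

Rejecting or escaping the delimiter at the boundary, before the string ever reaches the internal header, is the textbook fix for this class of bug.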
GitHub patched it fast. Their investigation found no evidence of exploitation in the wild. But the vulnerability itself is the point.
Normal git doesn’t have this attack surface. Your local git installation doesn’t process push options through an internal service header with a delimiter-based parser. GitHub’s does. Because GitHub’s “git” isn’t actually git. It’s a massive distributed system that speaks the git protocol on the outside while running its own machinery on the inside.
The merge queue bug broke trust by silently corrupting correctness. The CVE broke trust by revealing that the platform layer can be weaponized through git commands. Two different failure modes, same root issue: the system you’re interacting with is not the system you think it is.
The Vibe Coder Defense
Here’s a fun thought experiment. Somewhere out there, developers using AI assistants hit this bug and just asked their AI to fix it. “Weird merge conflict, fix it.” And the AI did. And they moved on with their day.
For those developers, this incident was invisible. The AI absorbed the damage. Which means there are repos out there where the GitHub incident was “fixed” by an AI that didn’t understand it was fixing a platform-level corruption, not a normal conflict.
And here’s the real kicker. The next time any company gets breached or ships broken code, they now have the world’s most legitimate excuse: “GitHub silently dropped our security commit.” You can’t prove they’re wrong. Not for repos affected on April 23rd.
Almost Nobody’s Leaving
GitHub created such a dominant product that even after it stopped being git for a day, the user base won’t move. GitLab exists. Bitbucket exists. Self-hosted Gitea exists. The migration conversations happen every time there’s an outage, and they go nowhere.
Almost nobody. There’s one notable exception. Mitchell Hashimoto, co-founder of HashiCorp, announced he’s moving Ghostty off GitHub, citing the platform’s declining reliability. When the person who built Terraform, Vagrant, and Vault decides your platform isn’t reliable enough for his terminal emulator, that’s not a random complaint on Hacker News. That’s a signal.
But it’s also the exception that proves the rule. It took a co-founder of one of the most influential infrastructure companies in tech to actually pull the trigger. Everyone else is still here.
That’s not complacency. That’s network effects. Your CI/CD, your Actions workflows, your issue tracking, your code review culture, your package registry, your Dependabot, your Copilot integration. The switching cost is enormous. GitHub knows it. You know it.
GitHub Knows It Too
To their credit, GitHub isn’t pretending this is fine. CTO Vlad Fedorov published a statement with an explicit new priority order: availability first, capacity second, new features third.
Read that order again. Features are now explicitly last. That’s GitHub publicly admitting that they shipped features faster than they could keep the lights on. The 30x scaling target, the service decoupling work, the migration from Ruby to Go for performance-sensitive code. These are all signs that the company is treating this as a structural problem, not just a bad week.
Whether that’s enough depends on how fast they execute. The AI traffic isn’t slowing down. The agents are getting more capable, more numerous, and more hungry for CI cycles. GitHub is racing to rebuild its foundations while the building is on fire. They might pull it off. They’ve done hard things before.
But until they do, every git push you make to GitHub goes through a system that isn’t git. It’s a system that had a command injection vulnerability in its push pipeline. A system that silently reverted code for four and a half hours. A system that’s currently being hammered by 275 million commits a week from machines that don’t sleep.
Git is supposed to be immutable. On April 23rd, GitHub proved it’s not. Not when someone else controls the remote.