GerriScary Exploit: How a Gerrit Vulnerability Almost Compromised Google’s Core Projects

A Silent Threat to Google’s Development Pipeline

In an age where code collaboration fuels innovation, even minor configuration oversights can lead to major breaches. A newly discovered vulnerability in Google’s Gerrit code review system, tracked as CVE-2025-1568 and dubbed GerriScary, is a case in point. Researchers showed that any registered user could exploit misconfigured permissions in Gerrit to inject malicious code into high-value Google projects such as ChromiumOS, Dart, and Bazel. The flaw was serious enough to threaten Google’s continuous integration (CI) infrastructure, underscoring how fragile and high-stakes modern software supply chains have become.

How the Exploit Worked: A Step-by-Step Breakdown

GerriScary’s Components Unveiled

The GerriScary exploit wasn’t a single vulnerability but a dangerous combination of three interlinked flaws. First was addPatchSet overprivileging: by default, Gerrit granted registered users the addPatchSet permission on refs/for/ refs, letting them upload new patch sets to other people’s open changes. That default made it possible for outsiders to alter in-flight code changes even though they were neither project owners nor maintainers.
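
For administrators wondering whether their own instance is exposed, the quickest check is to ask Gerrit which permissions a project actually grants. The following is a minimal sketch, not the researchers’ tooling: the host, project name, and credentials are placeholder assumptions, and the exact JSON layout can differ slightly between Gerrit versions. It pulls a project’s access rights over the REST API and flags any ref section that grants addPatchSet.

```python
import json
from urllib.parse import quote

import requests

GERRIT = "https://gerrit.example.com"   # placeholder host
PROJECT = "my/project"                  # placeholder project name
AUTH = ("audit-bot", "http-password")   # placeholder HTTP credentials

# Authenticated REST endpoints live under /a/; Gerrit prefixes JSON with ")]}'" to block XSSI.
resp = requests.get(f"{GERRIT}/a/projects/{quote(PROJECT, safe='')}/access", auth=AUTH)
resp.raise_for_status()
body = resp.text
access = json.loads(body.split("\n", 1)[1] if body.startswith(")]}'") else body)

# "local" maps ref patterns (e.g. refs/for/*) to the permissions defined on them.
for ref, section in access.get("local", {}).items():
    perms = section.get("permissions", {})
    if "addPatchSet" in perms:
        groups = list(perms["addPatchSet"].get("rules", {}))  # keys are group UUIDs
        print(f"{ref}: addPatchSet granted to groups {groups}")
```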

Second, the label persistence flaw took advantage of misconfigured Copy Conditions. When a new patch set was uploaded to a change, even a malicious one, previously granted labels such as Code-Review+2 or Verified+1 were copied forward instead of being reset. Approvals earned by a legitimate patch therefore carried over to its modified successor, effectively rubber-stamping malicious changes.
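
Whether approvals survive a new patch set is governed by each label’s copy rules. As a rough illustration, assuming a Gerrit 3.6+ instance where copy behaviour is expressed as a single copyCondition (the host, project, and credentials below are placeholders, and field names vary across versions), an administrator could list a project’s labels and review how permissive their copy conditions are:

```python
import json
from urllib.parse import quote

import requests

GERRIT = "https://gerrit.example.com"   # placeholder host
PROJECT = "my/project"                  # placeholder project name
AUTH = ("audit-bot", "http-password")   # placeholder HTTP credentials

resp = requests.get(f"{GERRIT}/a/projects/{quote(PROJECT, safe='')}/labels/", auth=AUTH)
resp.raise_for_status()
body = resp.text
labels = json.loads(body.split("\n", 1)[1] if body.startswith(")]}'") else body)

for label in labels:
    # A narrow condition such as "changekind:NO_CODE_CHANGE OR changekind:TRIVIAL_REBASE"
    # is usually safe; anything that copies approvals across arbitrary code changes
    # recreates the GerriScary pattern.
    print(label.get("name"), "->", label.get("copy_condition", "<no copy condition set>"))
```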

Third was the Commit-Queue race condition. Once a change collected its final approvals and entered the commit queue, there was a window of up to five minutes before automated merge bots landed it in the repository. That window was long enough for an attacker to slip a corrupted patch set onto the change just in time for it to be merged, undetected.

Technical Phases of the Attack

The attack followed a well-orchestrated three-phase methodology:

Phase 1: Initial Access

An attacker registers on the Gerrit instance using a free Google account, then queries it for changes that are already approved and awaiting submission, using REST API calls or Gerrit’s change search operators.
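
In practice, that reconnaissance amounts to a single change query. The sketch below shows what such a search could look like against the Gerrit REST API, assuming a reasonably recent Gerrit version that supports the is:submittable operator; the host name and result limit are illustrative.

```python
import json

import requests

GERRIT = "https://gerrit.example.com"   # placeholder; public queries need no authentication

# Open changes that already satisfy their submit rules, i.e. approved and waiting to land.
query = "status:open is:submittable"
resp = requests.get(f"{GERRIT}/changes/", params={"q": query, "n": 25, "o": "LABELS"})
resp.raise_for_status()
body = resp.text
changes = json.loads(body.split("\n", 1)[1] if body.startswith(")]}'") else body)

for change in changes:
    print(change["_number"], change["project"], change["subject"])
```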

Phase 2: Payload Injection

They then upload a malicious patch set using API calls that subtly alter the commit message or inject a payload into the code base. During testing, these API interactions returned success and conflict status codes that confirmed write access had been granted where it should not have been.
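
According to the published write-ups, the tell was nothing more than the HTTP status code returned when touching someone else’s change. Below is a hedged, non-destructive sketch of that probe for auditing your own instance, assuming the reported behaviour that re-submitting an unchanged commit message returns 409 Conflict when the caller holds addPatchSet and 403 Forbidden when it does not; exact codes may differ by version, and the change number and credentials are placeholders.

```python
import json

import requests

GERRIT = "https://gerrit.example.com"   # placeholder host
AUTH = ("audit-bot", "http-password")   # placeholder HTTP credentials
CHANGE = 123456                         # placeholder: a change owned by a different account


def strip_xssi(text: str) -> str:
    # Gerrit prefixes JSON responses with ")]}'" to block cross-site script inclusion.
    return text.split("\n", 1)[1] if text.startswith(")]}'") else text


# Fetch the change's current commit message...
resp = requests.get(f"{GERRIT}/a/changes/{CHANGE}/revisions/current/commit", auth=AUTH)
resp.raise_for_status()
message = json.loads(strip_xssi(resp.text))["message"]

# ...then try to "set" it to the identical text, so no content is changed either way.
probe = requests.put(f"{GERRIT}/a/changes/{CHANGE}/message",
                     json={"message": message}, auth=AUTH)

if probe.status_code == 403:
    print("addPatchSet is NOT granted to this account for that change.")
elif probe.status_code == 409:
    print("addPatchSet appears to be granted: only the unchanged message was rejected.")
else:
    print(f"Unexpected response {probe.status_code}; inspect manually.")
```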

Phase 3: Automated Exploitation

By leveraging label persistence, malicious patches retained prior approvals. Within 300 seconds of this approval, CI bots in the commit queue would automatically merge the malicious changes into the main branch—no human review required.

Who Was at Risk?

Several major projects were marked as critically or highly vulnerable:

ChromiumOS and Bazel: Critical

Dart, BoringSSL, Android-KVM, Quiche: High

Google swiftly acted to mitigate the issue by disabling addPatchSet for regular users, revising label carryover policies, and introducing manual checkpoints in automated review pipelines. However, Gerrit instances outside of Google’s infrastructure remain exposed unless they’ve implemented similar safeguards.

What Undercode Say:

The Underlying Problem: Overtrust in Automation

This vulnerability speaks to a broader issue in modern development—overreliance on automation without adequate oversight. Gerrit’s design is centered around speed, enabling rapid peer reviews and quick merges. But when permissions are poorly configured, that very efficiency becomes a liability.

Granting default permissions such as addPatchSet and label-Code-Review to all registered users was a fundamental misstep. These should never have been enabled without stringent validation workflows, especially in projects as critical as ChromiumOS or BoringSSL.

The Illusion of Secure Defaults

What’s most concerning is how this exploit relied not on unknown zero-days, but on misconfigurations. That means countless other organizations using Gerrit could be similarly exposed without even realizing it. The Gerrit documentation itself warns about careful permission management, yet most developers prioritize usability over security. This incident forces a rethink of what should constitute “safe defaults.”

Supply Chain Integrity at Risk

Attacks like this mirror the threat landscape of supply chain compromises, akin to SolarWinds or the Codecov breach. If an attacker had succeeded, they could have subtly poisoned Google’s production builds, slipping malware into widely-distributed binaries without detection. Given the scale at which ChromiumOS and Dart are deployed, the ripple effects could’ve been catastrophic.

DevSecOps Must Evolve

CI/CD pipelines are often regarded as purely technical assets, but they are now prime security targets. Development teams must integrate real-time monitoring, label integrity validation, and tighter submission windows to close automated exploitation gaps like the five-minute commit-queue race.
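
As one concrete starting point, the sketch below is an illustrative monitor rather than a hardened product; the host, credentials, and polling query are assumptions. It flags open changes whose newest patch set was uploaded by someone other than the change owner while a Code-Review+2 is still attached, which is precisely the pattern GerriScary abused.

```python
import json

import requests

GERRIT = "https://gerrit.example.com"   # placeholder host
AUTH = ("audit-bot", "http-password")   # placeholder HTTP credentials

params = {
    "q": "status:open label:Code-Review=2",  # approved changes still waiting to land
    "o": ["ALL_REVISIONS"],                  # include every patch set with its uploader
    "n": 100,
}
resp = requests.get(f"{GERRIT}/a/changes/", params=params, auth=AUTH)
resp.raise_for_status()
body = resp.text
changes = json.loads(body.split("\n", 1)[1] if body.startswith(")]}'") else body)

for change in changes:
    owner_id = change["owner"]["_account_id"]
    # Pick the most recent patch set by its patch-set number.
    latest = max(change["revisions"].values(), key=lambda rev: rev["_number"])
    uploader_id = latest.get("uploader", {}).get("_account_id")
    if uploader_id is not None and uploader_id != owner_id:
        print(f"ALERT: change {change['_number']} ({change['project']}): latest patch set "
              f"uploaded by account {uploader_id}, owner is {owner_id}, "
              f"yet Code-Review+2 is still applied.")
```

Run on a schedule well inside the commit-queue delay, a check like this turns the five-minute race window into an alerting opportunity rather than a blind spot.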

Fixes Are Not Enough Without Cultural Change

While Google has patched their systems, the root cause—trusting automation and poorly configured access controls—requires a cultural shift. Security must be embedded from the first config file, not tacked on after a breach or disclosure. Companies must treat infrastructure-as-code permissions as privileged code deserving the same scrutiny as production systems.

Broader Industry Implications

This is a wake-up call not just for Google but for any organization using Gerrit, GitHub Actions, Jenkins, or similar tools. The attack method used here could be repurposed in other systems with similar automation pipelines. What was exposed is not just a flaw in Gerrit, but a systemic issue in how we approach continuous deployment.

🔍 Fact Checker Results:

✅ The CVE-2025-1568 vulnerability has been officially acknowledged and patched by Google
✅ The exploit allowed code injection via flawed permission settings and automation windows

❌ No confirmed real-world exploitation occurred before mitigation

📊 Prediction:

Expect Gerrit and other code review platforms to roll out stricter default permission schemas over the next year 🚨. CI/CD pipelines will likely begin to feature real-time anomaly detection, particularly around automated label approvals and submission workflows. Developers and security teams will increasingly collaborate to ensure that configuration is treated as code—and audited just as thoroughly 🔐.

References:

Reported By: cyberpress.org