It is 3:00 PM on a Thursday. A developer pushes a commit for a critical feature release. The CI/CD pipeline spins up, runs the tests, and then halts abruptly. Red lights flash on the dashboard. A security scan has failed.
What happens next defines the culture of an engineering organization.
In dysfunctional teams, a game of “hot potato” begins. The developer argues it’s a false positive from a tool they didn’t configure. The DevOps engineer complains that the scanner is slowing down the build time. The security engineer, often sitting in a different building or Slack channel, insists the risk is critical but lacks the context to explain why.
Everyone stares at the finding, but nobody wants to hold it.
The question of who owns security findings in a CI/CD pipeline is one of the most contentious issues in modern software development. We talk endlessly about “DevSecOps” and “shifting left,” but these buzzwords often fail to survive contact with reality. When the pipeline breaks, high-level philosophy dissolves into a practical turf war between speed, stability, and safety.
The Trilemma of Ownership
To solve the ownership puzzle, we have to understand the three distinct perspectives colliding in the pipeline.
1. The Developer:
They own the logic. Their goal is to ship features that solve user problems. To them, a security finding often looks like unplanned work. If the finding is obscure or requires upgrading a library that breaks six other things, their natural instinct is to push back. They own the code, but they often feel they shouldn’t own the consequences of a security tool they didn’t choose.
2. The DevOps Engineer:
They own the flow. Their goal is a green pipeline that delivers code to production efficiently. A security scanner that takes 40 minutes to run or blocks deployment for minor issues is an obstacle to their primary metric: velocity. They own the pipe, but they aren’t equipped to judge the sewage flowing through it.
3. The Security Engineer:
They own the risk. Their goal is to prevent a breach. They configure the policies and select the tools, but they rarely have commit access to the application repositories. They own the finding, but they lack the power to execute the fix.
This disconnect creates a vacuum where vulnerabilities sit in limbo. A report from GitLab’s Global DevSecOps Survey highlights that confusion over responsibility is a top reason why security vulnerabilities persist in production code.

The Context Problem: Not All Findings Are Equal
Part of the confusion stems from treating all security alerts as a single monolithic problem. In reality, different types of findings require different owners.
For instance, consider the difference between Static Application Security Testing (SAST) and Software Composition Analysis (SCA).
SAST flags issues in the proprietary code written by your developers: things like SQL injection flaws or hardcoded secrets. Because this is code the developer just wrote, the ownership line is clearer. The developer broke it; the developer should fix it.
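To make the SAST case concrete, here is a minimal Python sketch of the kind of flaw a static analyzer flags and the fix the developer owns. The table and function names are illustrative, not from any particular codebase:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # SAST finding: user input concatenated into the SQL string (injection risk)
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Remediation: parameterized query; the driver handles escaping
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # 2 -> leaks every row
print(len(find_user_safe(conn, payload)))    # 0 -> no user literally named that
```

The scanner can point at the first function, but only the developer knows whether the fix belongs here or at an API gateway upstream.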
SCA, on the other hand, flags vulnerabilities in open-source libraries and dependencies. This is murkier. If a vulnerability is found in a sub-dependency of a logging framework, is that the developer’s fault? Or is it an infrastructure issue?
Understanding the nuances of SAST vs SCA is crucial because it dictates who is best equipped to respond. SAST findings are logic errors (Developer domain). SCA findings are supply chain risks (often shared between DevOps and Security). When organizations fail to distinguish between these, they assign blanket ownership that inevitably fails.
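One way to operationalize that distinction is a small triage function that assigns a default owner by finding type. The categories and owner labels below are illustrative assumptions, not a standard taxonomy:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    tool: str     # "sast" or "sca"
    direct: bool  # for SCA: direct dependency vs. transitive sub-dependency

def route(finding: Finding) -> str:
    """Pick a default owner based on the type of finding."""
    if finding.tool == "sast":
        return "developer"        # logic error in first-party code
    if finding.tool == "sca" and finding.direct:
        return "developer"        # they chose and import this dependency
    return "security+devops"      # transitive supply-chain risk, shared

print(route(Finding("sast", True)))    # developer
print(route(Finding("sca", False)))    # security+devops
```

The exact routing rules will differ per organization; the point is that the routing is explicit rather than decided in a post-incident argument.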
Shifting Left Without Shifting Blame
The industry response to this chaos has been “Shift Left”: moving security testing earlier in the development process. The theory is sound: if developers catch bugs while coding, they fix them faster and cheaper.
However, in practice, “Shift Left” often feels like “Shift Blame.” Security teams purchase automated tools, integrate them into the CI/CD pipeline, and effectively tell developers, “Good luck with the noise.”
If a developer is flooded with 500 alerts, 490 of which are false positives or low-priority warnings, they will not feel ownership. They will feel harassed. True ownership requires that the findings are actionable, accurate, and relevant.
The OWASP DevSecOps Guideline suggests that for developers to accept ownership, security tools must behave like developer tools. They need to be fast, integrated into the IDE, and provide specific remediation advice—not just a generic PDF report generated at the end of the week.
A Model for Shared Accountability
So, who should own the finding? The answer is that ownership must be split into three distinct layers:
1. Security Owns the Policy (The “What”)
The security team defines the rules of the road. They decide that “no critical vulnerabilities are allowed in production” or “all public-facing APIs must have authentication.” They own the configuration of the scanners to ensure these policies are enforced with minimal noise. Their job is to ensure the signal-to-noise ratio is high enough that developers trust the alerts.
2. Developers Own the Remediation (The “How”)
Once a valid finding hits the pipeline, the developer owns the fix. They are the only ones with the context to know if a library upgrade will break the application or if input sanitization should happen at the API gateway or the database level. They don’t need to be security experts, but they need to be the mechanics who swap the parts.
3. DevOps Owns the Guardrails (The “When”)
The platform team owns the implementation of the checks. They ensure scans run on every PR, that they don’t time out, and that they fail the build only when the Security Policy says they should. They are the enforcers of the contract between Security and Development.
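Put together, the contract can be expressed as a policy gate in the pipeline: security defines the thresholds, DevOps wires the check into every PR, and a failure hands the developer a concrete finding to fix. The policy format below is a sketch, not any particular tool's schema:

```python
# Security owns this policy (the "What"): thresholds, not fixes
POLICY = {"max_critical": 0, "max_high": 3}

def gate(findings, policy=POLICY):
    """DevOps runs this on every PR (the "When"); a non-empty result
    means the build fails and the developer owns the fix (the "How")."""
    counts = {"critical": 0, "high": 0}
    for f in findings:
        if f["severity"] in counts:
            counts[f["severity"]] += 1
    violations = []
    if counts["critical"] > policy["max_critical"]:
        violations.append(f'{counts["critical"]} critical (max {policy["max_critical"]})')
    if counts["high"] > policy["max_high"]:
        violations.append(f'{counts["high"]} high (max {policy["max_high"]})')
    return violations

print(gate([{"severity": "critical"}]))  # ['1 critical (max 0)'] -> fail the build
print(gate([{"severity": "high"}]))      # [] -> build passes
```

Because the policy lives in code rather than in a meeting, a blocked build is a policy decision everyone agreed to in advance, not an ambush.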
Moving From Gatekeepers to Guides
Ultimately, the goal is to stop viewing security findings as an accusation and start viewing them as quality assurance. Just as a developer owns a unit test failure or a linting error, they should own a security finding, provided the tooling respects their time.
The organizations that solve the ownership problem are those that move security from a gatekeeping role (“Stop! You can’t pass!”) to a guidance role (“Here is a guardrail to keep you on track.”).
When security teams curate findings and provide clear context, and DevOps teams build frictionless pipelines, developers naturally step up to own the code they write, security flaws and all. The pipeline stops being a battleground and starts being what it was always meant to be: a delivery mechanism for high-quality, secure software.