This is What's Making Your DevSecOps Inefficient

AVC, DevSecOps, Vulnerability Correlation, Vulnerability Management
Rahul Raghavan | August 12, 2020


Development and engineering teams that actually work together. Flawless communication between security and DevOps divisions. Both teams actually seeing eye-to-eye for once.

No, stop. Wake up. That was just a dream. The DevSecOps dream. Remember that one? Remember when you actually thought it would be easy to shove a bunch of security engineers in the middle of your development pipeline and expect it to be a bed of roses?

Except that the roses were full of thorns, and achieving that kind of synergy between the teams is hard. Even when there's an overarching software security initiative or a maturity model to guide them, product engineering teams (especially those with more than 50 members) often squabble over security reports.

These conflicts come at a great cost to efficiency, and they're painfully common, even in big organisations. If you find developers or security engineers routinely falling into these traps at your company, it’s a good sign you need to change the way you operate.


1. Stop treating developers like security experts

As a security engineer, you need to know a developer’s limitations. It’s your job to know everything about vulnerabilities and security exploits, and it’s their job to fix them. The difference between good and bad vulnerability management often lies in all the little ways you make it easier for developers to understand complex vulnerability data so they can remediate issues efficiently.

  • Dissonant Categorisation
    The key to faster, more successful remediation is helping engineering teams tackle vulnerabilities based on some kind of pattern. For example, the primary strategy for remediating most database injection flaws is broadly the same.

    With mixed-up datasets, developers have to keep switching between fix strategies, which both wastes time and is mentally draining.
    What would be ideal for a developer is a report that bundles together flaws with similar or allied remediation strategies; there’s a small sketch of what that could look like after this list.

  • False Positive Clutter
    While security practitioners might consider it utopian to predict and eliminate every false positive in an application, it would be hugely useful to separate the wheat from the chaff. Something as simple as a sectional divider between true vulnerabilities and potential false positives would save development teams several hours during remediation.

Read more: An online learning platform got rid of false positive results and reduced vulnerabilities by 75%

  • Specific Jargon & Phrasing
    When vulnerability reports carry disparate information within their datasets, they become difficult for developers to understand. A common issue is vulnerability descriptions that point to the same underlying problem but are worded differently.

    For example, the common vulnerability Sensitive Cookie Without HttpOnly Flag is described as “Cookie without HttpOnlyFlag set” by Burp and as “Cookies-flag-missing” by FindSecBugs. This kind of ‘noise’ in a report usually occurs when results are ported directly from raw tool output; the sketch below shows one way to normalise it.

Read more: How vulnerability correlation simplifies tool scan results.
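
To make those last three points concrete, here’s a rough Python sketch of what correlation could look like at its simplest: normalising tool-specific titles into one canonical name, bundling flaws by remediation strategy, and keeping likely false positives out of the main list. The mapping table, field names and confidence values are my own illustrative assumptions, not the real output format of Burp or FindSecBugs:

```python
# A rough sketch of normalising and bundling raw scanner output.
# The mapping table, field names and confidence values are illustrative
# assumptions, not the actual output format of Burp or FindSecBugs.
from collections import defaultdict

# Map each tool's wording to one canonical title and a remediation group.
CANONICAL = {
    "Cookie without HttpOnlyFlag set": ("Sensitive Cookie Without HttpOnly Flag", "cookie-hardening"),
    "Cookies-flag-missing": ("Sensitive Cookie Without HttpOnly Flag", "cookie-hardening"),
    "SQL injection": ("SQL Injection", "injection-sanitisation"),
    "NoSQL injection": ("NoSQL Injection", "injection-sanitisation"),
}

def correlate(raw_findings):
    """Bundle findings by remediation group, keeping likely false
    positives in their own section instead of mixed into the list."""
    report = defaultdict(lambda: {"confirmed": [], "potential_false_positives": []})
    for finding in raw_findings:
        title, group = CANONICAL.get(finding["title"], (finding["title"], "uncategorised"))
        bucket = "confirmed" if finding.get("confidence") == "firm" else "potential_false_positives"
        report[group][bucket].append({**finding, "title": title})
    return dict(report)

findings = [
    {"tool": "Burp", "title": "Cookie without HttpOnlyFlag set", "confidence": "firm"},
    {"tool": "FindSecBugs", "title": "Cookies-flag-missing", "confidence": "tentative"},
    {"tool": "Burp", "title": "SQL injection", "confidence": "firm"},
]

for group, buckets in correlate(findings).items():
    print(group, {k: len(v) for k, v in buckets.items()})
```

Even something this crude hands a developer one list per fix strategy, instead of one list per tool.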

2. There’s no ‘one size fits all’ solution

Developers and security engineers each have deep skills in their own domain but only generic skills in the other’s. For example, a developer generally understands security threats, but knows the platform’s workflow like the back of their hand. Similarly, while a security engineer understands security threats in granular detail, they might not know how to apply that information in the context of the specific platform. A lot of time gets wasted when the two groups go back and forth over minor details.

  • Prioritisation Confusion
    When there’s a fundamental difference in how the two teams perceive the impact of a vulnerability, neither can settle on a single severity score. For example, a vulnerability that a security engineer rates “Critical” might be considered a moderate-level threat by development. (A toy example of a shared score follows this list.)

  • Not Enough Coverage
    When security engineers don’t know enough about the product, they might end up missing some pretty major edge cases when testing for vulnerabilities. This is typical in applications with many security-oriented functional flaws.

  • Remediation Without Context
    The security team might suggest a remediation that was effective in another situation, but isn’t technically feasible to implement for this particular application.

Check it out: Advanced remediation tips and Good code vs. Bad code example
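
One way to take the heat out of prioritisation arguments is to make both perspectives explicit inputs to a single agreed score, rather than fighting over a label. The weights and scales in this toy sketch are entirely illustrative assumptions, not any standard:

```python
# A toy sketch of a shared prioritisation score. The weights, scales
# and field values are illustrative assumptions, not any standard.

SEVERITY = {"Critical": 4, "High": 3, "Medium": 2, "Low": 1}  # security team's view
IMPACT = {"High": 3, "Medium": 2, "Low": 1}                   # development's view

def shared_priority(security_severity: str, business_impact: str) -> float:
    """Blend the security team's technical severity with the development
    team's assessment of real-world business impact."""
    return 0.6 * SEVERITY[security_severity] + 0.4 * IMPACT[business_impact]

# A flaw the security team rates "Critical" but developers consider low
# impact (say, the vulnerable endpoint is internal-only):
print(round(shared_priority("Critical", "Low"), 2))   # 2.8: worth scheduling, not a fire drill
print(round(shared_priority("High", "High"), 2))      # 3.0: top of the queue
```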

3. Reproducing vulnerabilities isn’t straightforward

So the security engineers have combed through the application, sent their shiny new set of vulnerability reports to the developers, and are now kicking back, watching Netflix. Job done, right? Well, not exactly. The developers still have to be able to reproduce those vulnerabilities in the software before they can fix them. And that creates a new set of problems.

  • Manual Testing vs. Tool Generated
    Was the issue uncovered through manual testing, tool-assisted tests, or purely by tools? In most teams, developers aren’t formally trained on security tools (especially DAST tools), so reproducing tool-generated issues is hard for them.

  • Too Many Variables and Dependencies
    Whether a vulnerability can be simulated often depends on factors like the environment and its associated variables, which can make the application behave inconsistently even when you use the same attack vector patterns. (See the sketch after this list.)
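
One mitigation is to attach a small, structured ‘reproduction manifest’ to every finding, recording how it was discovered and under exactly what conditions, so a developer can replay it without owning the scanner. The shape below is hypothetical; every field name and the example request are made up:

```python
# A sketch of a per-finding "reproduction manifest". All field names and
# the example request below are hypothetical, not a standard format.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReproManifest:
    found_by: str          # "manual", "tool-assisted" or "tool"
    tool: Optional[str]    # e.g. the DAST scanner used, if any
    environment: dict      # app version, feature flags, test data
    request: dict          # method, URL, headers, body of the attack

    def as_curl(self) -> str:
        """Render the attack request as a copy-pasteable curl command."""
        parts = ["curl", "-X", self.request["method"]]
        for key, value in self.request.get("headers", {}).items():
            parts += ["-H", f"'{key}: {value}'"]
        if self.request.get("body"):
            parts += ["--data", f"'{self.request['body']}'"]
        parts.append(f"'{self.request['url']}'")
        return " ".join(parts)

manifest = ReproManifest(
    found_by="tool",
    tool="DAST scanner (hypothetical)",
    environment={"app_version": "2.4.1", "feature_flags": {"new_login": True}},
    request={
        "method": "POST",
        "url": "https://staging.example.com/login",
        "headers": {"Content-Type": "application/x-www-form-urlencoded"},
        "body": "user=admin'--&pass=x",
    },
)
print(manifest.as_curl())
```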

4. PDF reports are kind of…obsolete

Despite all the issues I’ve talked about, teams still manage to figure out their own unique solutions to routine development problems. But a common factor that almost always aggravates the rift between security and development is the way they handle vulnerability reporting.

  • Huge, Immutable PDF Reports
    Long-drawn-out PDF reports are often not flexible enough to include the necessary level of detail, or to let teams track and amend results. Meanwhile, more and more engineering teams are adopting security within agile, which means a lot more people are coming into contact with security reports.

See more: Tracking vulnerability data over time, across multiple parameters 

This basically means that security teams now have to tailor security reports to a wider array of groups in engineering (QA, DevOps etc.), each of which has its own specific needs.

  • Security Teams Need to Adapt
    Development teams prefer to use their existing platforms, like JIRA, to manage security vulnerabilities alongside functional defects. Not only are these tools easy and intuitive to use, they also give developers access to functionality that’s standard in tracking platforms (a sketch of filing a finding this way follows below).

    AppSec programs can seriously benefit from these changes, but it means security engineers need to change how they operate to keep up.
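
As a sketch of what meeting developers on their own platform can look like, here’s a finding being pushed straight into Jira through its REST API (the POST /rest/api/2/issue endpoint). The base URL, project key, labels, credentials and the finding structure are all placeholders to swap for your own:

```python
# A sketch of pushing a finding straight into Jira via its REST API
# (POST /rest/api/2/issue). The URL, project key, labels and credentials
# below are placeholders; the finding dict is an assumed shape.
import requests

JIRA_URL = "https://your-org.atlassian.net"       # placeholder
AUTH = ("security-bot@example.com", "api-token")  # placeholder credentials

def file_vulnerability(finding: dict) -> str:
    """Create a Jira issue so the vulnerability is tracked alongside
    functional defects, with the usual assignee/sprint workflow."""
    payload = {
        "fields": {
            "project": {"key": "SEC"},            # placeholder project key
            "issuetype": {"name": "Bug"},
            "summary": f"[{finding['severity']}] {finding['title']}",
            "description": finding["description"],
            "labels": ["security", finding["category"]],
        }
    }
    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["key"]  # e.g. "SEC-123"

issue_key = file_vulnerability({
    "severity": "High",
    "title": "Sensitive Cookie Without HttpOnly Flag",
    "description": "Session cookie is readable from JavaScript...",
    "category": "cookie-hardening",
})
print("Filed as", issue_key)
```

Once findings live in the same tracker as functional defects, they inherit sprints, assignees and audit history for free.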

Look, I know we’ve been talking as if security and product engineers are spiteful siblings forced to stay under the same roof. It’s unfortunate, but that’s what the narrative around DevSecOps has turned into for a lot of people. Sure, integrating security into DevOps wasn’t the plug-and-play solution we were all hoping for, but we were wrong to expect that in the first place.

What we should do instead is acknowledge that security engineers and developers are ultimately doing the same thing: building stable, secure applications. When both teams redefine the way they interact with each other, they can operate in tandem instead of independently.