The Problem in Your Vulnerability Management Lifecycle

DevSecOps, Manual Pentesting, Vulnerability Management, False Positive Management, Application Security, AppSec
Bharat Kishore | August 24, 2020

This isn’t a new story. An organisation’s developing several new applications, and they’re ready to bring out the big guns. Hundreds of developers working tirelessly, hundreds of thousands of dollars pumped into this new mega-project that’s sure to knock the socks off anyone who sees it. 

But behind the scenes, that image is kind of falling apart. The security team is small and overworked, and they're swamped with vulnerabilities they have no way of organising in time. Without the right data, developers aren't able to resolve older vulnerabilities even as new ones pop up, and release timelines are going way off schedule.

This isn’t a hypothetical situation. It’s exactly what our clients were going through just a couple of years ago.

When an organisation’s security department is lagging behind product development, it’s usually due to bad vulnerability management. And yeah, managing vulnerabilities isn’t exactly easy. That’s why they’re having problems in the first place. But even though every company has different needs and workflows, there are still general guidelines you can follow (and pitfalls you can avoid) if you want your organisation to not suck at vulnerability management.

Let’s talk about the 4 stages of your vulnerability management lifecycle, and what could be going wrong:

  • Discovery
  • Prioritisation
  • Reporting
  • Remediation

Discovery

The discovery stage involves identifying the flaws that need to be addressed by the development team. The security team has to analyse the results of the pentest they perform on the target app and infrastructure. Ideally, once vulnerabilities have been discovered, the development team should have everything they need to start remediation.

But as you can imagine, things are never that simple. Typically, the results of the assessments aren't presented in a way that the dev team can consume and act on. Let me tell you why:

Usually when assessments are done, the output is a PDF report. These PDF reports are upwards of 50 pages, and in most cases, security just forwards them to the devs to address the flaws. The devs need to sift through the report to figure out what the security team found and then try to reproduce it in their environment to come up with the appropriate remediation.

This tends to be a very time-consuming process, and any of the following problems might crop up:

  • Devs aren't able to simulate the flaws as their environment differs from the one that security used
  • The steps given in the report are not adequate to reproduce the bug
  • The report gives no steps to reproduce. It only gives the vulnerability name and impact

Developers are forced to deal with this on top of the constant pressure on them to develop and maintain the product. Ultimately, it results in security flaws being ignored and remediation never taking place.

On the security side of things, it’s not looking much better. Security teams are constantly inundated with a mountain of results that need to be analysed and triaged before they can be pushed to the devs. Security tools are notorious for reporting inconsistent results and a whole lot of false positives. Teams have no choice but to go through all these reports in addition to manual testing results.

Check it out: How Orchestron automatically manages false positives in your apps

This problem is further exacerbated by the push to automate security scans. Where before they just had pentest reports to contend with, now automation buries the security team in results generated by their tool pipeline. These results are riddled with false positives, incomplete information and duplicates, which makes effective vulnerability discovery very hard.
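To give a sense of what that mess looks like in practice, here's a minimal sketch of the kind of common record all of those results have to be squeezed into before anyone can triage them. This isn't Orchestron's data model or any scanner's real output format; the field names and example findings are illustrative assumptions:

```python
# A minimal sketch (not Orchestron's actual model) of a normalised finding
# record once scanner output lands in one place. Field names are
# illustrative assumptions, not any real tool's schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Finding:
    tool: str                      # e.g. "zap", "burp", "acme-sast" (last one is made up)
    name: str                      # vulnerability name as the tool reports it
    severity: str                  # the tool's own severity label
    cwe: Optional[int] = None      # CWE ID, if the tool provides one
    location: str = ""             # URL, file path, or package affected
    parameter: str = ""            # input/parameter involved, if any
    false_positive: bool = False   # set during triage

# Three scanners reporting what is arguably the same flaw, three different ways:
raw = [
    Finding("zap", "Cross Site Scripting (Reflected)", "High", 79, "/search", "q"),
    Finding("burp", "Reflected XSS", "Medium", 79, "/search", "q"),
    Finding("acme-sast", "XSS", "Critical", None, "views/search.py"),
]
```

Multiply that by every tool in the pipeline and every scan it runs, and you can see why the security team drowns.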

Prioritisation

Security teams are not able to prioritise vulnerabilities because tool results can often be inconsistent. Tools range from giving accurate information about a flaw and its impact to giving no information at all. Different tools also tag vulnerabilities differently: one tool may rate a bug as high severity while another tags it as low. This leads to confusion, and security ends up spending even more time reconciling the results from both tools.

Inconsistency also comes in the form of differing nomenclature. The same bug is sometimes reported under three different names by three different tools, creating duplicates. And false positives are always a problem. Weeding all of this out takes a lot of analysis and triage time, and with a completely manual process, errors inevitably flow down into the results. These imperfect results then get sent to the dev team for remediation.
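To make that concrete, here's a rough sketch of the kind of severity normalisation and deduplication that has to happen before prioritisation is even possible. The severity mapping and the fingerprinting rule are simplifying assumptions, not how any particular scanner (or Orchestron) actually does it:

```python
# A rough sketch of severity normalisation and deduplication.
# The mapping and the fingerprint rule are simplifying assumptions.
SEVERITY_MAP = {
    "informational": 0, "info": 0, "low": 1,
    "medium": 2, "moderate": 2,
    "high": 3, "critical": 4,
}

def normalise_severity(label: str) -> int:
    """Map each tool's own label onto a single 0-4 scale."""
    return SEVERITY_MAP.get(label.strip().lower(), 2)   # default to medium if unknown

def fingerprint(finding: dict) -> tuple:
    """Treat findings with the same CWE, location and parameter as the same
    bug, regardless of what each tool decided to call it."""
    return (finding.get("cwe"), finding.get("location"), finding.get("parameter"))

def deduplicate(findings: list[dict]) -> list[dict]:
    merged = {}
    for f in findings:
        key = fingerprint(f)
        f = {**f, "severity_score": normalise_severity(f["severity"])}
        if key not in merged or f["severity_score"] > merged[key]["severity_score"]:
            merged[key] = f   # keep the most severe report of the same bug
    return list(merged.values())

findings = [
    {"tool": "zap",  "name": "Cross Site Scripting (Reflected)", "severity": "High",
     "cwe": 79, "location": "/search", "parameter": "q"},
    {"tool": "burp", "name": "Reflected XSS", "severity": "Medium",
     "cwe": 79, "location": "/search", "parameter": "q"},
]
print(deduplicate(findings))   # one finding at severity_score 3, not two
```

Simple enough in a sketch. Doing it by hand across thousands of findings, every scan, is where teams fall over.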

See more: How prioritisation and deduplication works on Orchestron

While prioritising, it's also important to take into account the context of the target application. Tools sometimes report a bug with ‘critical’ severity when, in fact, there might be mitigating controls present in the app that reduce the impact of the bug.

For example, a client was once seeing hundreds of results in ‘red’ being reported by their SCA scanner. After investigating, we found that the scanner was pointing to vulnerable libraries that were present in the app's code base but couldn't be accessed or queried by the end user. Essentially, it was a lot of old code sitting in the app that needed cleaning up. In this case, the severity being reported by the tool was inaccurate in the context of the app.
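A simplified sketch of the contextual adjustment that situation called for might look like this. The reachable flag is an assumption standing in for whatever reachability analysis or manual review tells you; no scanner hands you that field for free:

```python
# A simplified sketch of contextual severity adjustment for SCA findings.
# The "reachable" flag stands in for reachability analysis or manual review;
# it is not something a real scanner emits on its own.
def adjust_for_context(finding: dict) -> dict:
    severity = finding["severity_score"]            # 0-4 scale from normalisation
    if finding.get("source") == "sca" and not finding.get("reachable", True):
        # Vulnerable library sits in the code base but is never exercised by
        # the app: flag it for clean-up rather than treating it as a critical fix.
        severity = min(severity, 1)
        finding = {**finding, "note": "unreachable dependency - schedule clean-up"}
    return {**finding, "effective_severity": severity}

finding = {"name": "CVE in old-xml-lib", "source": "sca",
           "severity_score": 4, "reachable": False}
print(adjust_for_context(finding)["effective_severity"])   # 1, not 4
```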

Read about this client and how Orchestron solved their problems.


Reporting

By now you'd agree that getting a vulnerability report isn’t as simple as running a scan and downloading the results. More often than not, reports tend to focus only on the technical details of the bugs found, which the dev and security testing teams would be comfortable looking at. 

But think about this: leadership and management also need to be clued in to what’s going on with the security of their apps. Executives need a way to compare and analyse vulnerabilities across release cycles or across different apps. That means the security and dev teams need to communicate this to them in the form of reports.

Leadership also need to know where most of the bugs are coming from. Are static flaws being reported more, or dynamic ones? If it’s the former, it would mean that corrective measures need to be focussed more on development to ensure that coding errors are reduced, and consequently the number of SAST results goes down.

High-level summaries and comparative reports also help the leadership understand recurring flaws that are being found in every scan. All of these inputs can be very valuable to executives in making informed decisions. 

But these sound like an awful lot of reports, don’t they? Generating them manually takes the security team a lot of time, time that could be better spent on testing. Manual reports are also error-prone, giving an inaccurate analysis of the results.
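For a sense of what those summaries boil down to, here's a minimal sketch that counts findings by source and severity per release. It assumes findings have already been normalised into a common format; real comparative reporting obviously goes a lot further than counting:

```python
# A minimal sketch of a comparative summary across release cycles, assuming
# findings are already normalised into dicts. Real executive reporting goes
# well beyond counting, but this is the core of it.
from collections import Counter

def summarise(findings_by_release: dict[str, list[dict]]) -> None:
    for release, findings in findings_by_release.items():
        by_source = Counter(f["source"] for f in findings)       # sast vs dast vs sca
        by_severity = Counter(f["severity"] for f in findings)
        print(f"{release}: {len(findings)} findings | "
              f"sources={dict(by_source)} | severity={dict(by_severity)}")

summarise({
    "v1.4": [{"source": "sast", "severity": "high"},
             {"source": "dast", "severity": "medium"}],
    "v1.5": [{"source": "sast", "severity": "high"},
             {"source": "sast", "severity": "low"}],
})
# If the SAST count keeps climbing release over release, that points to coding
# errors that secure-coding training or code review should be catching.
```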

Remediation

Effective remediation—although critical—is sometimes not possible due to the lack of information provided on reported vulnerabilities. When flaws are reported, they don’t usually come with recommendations for remediation. Sometimes, when a fix is provided, it's just a single high-level statement with no viable steps for actually implementing it. It's left to the devs to figure out the remediation. This leads to delays in the remediation process, which consequently delays the release.

See more: Rich metadata and remediation suggestions with ORL

There's also no clear indication of the number of instances or parameters where the bug has occurred. So even when the devs do remediate, the fix is incomplete and ends up being flagged in subsequent tests, leading to unnecessary friction between the security and development teams.
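Here's a small sketch of what tracking instances buys you: grouping findings by bug so that a single fix ticket lists every affected endpoint and parameter, not just the first one the report happened to mention. The grouping key is an assumption, as in the earlier sketches:

```python
# A small sketch of grouping findings by bug so one remediation ticket lists
# every affected endpoint/parameter. The grouping key (CWE + name) is an
# assumption, as in the earlier sketches.
from collections import defaultdict

def group_instances(findings: list[dict]) -> dict:
    grouped = defaultdict(list)
    for f in findings:
        grouped[(f["cwe"], f["name"])].append((f["location"], f.get("parameter")))
    return dict(grouped)

findings = [
    {"cwe": 89, "name": "SQL Injection", "location": "/login", "parameter": "username"},
    {"cwe": 89, "name": "SQL Injection", "location": "/orders", "parameter": "id"},
]
for bug, instances in group_instances(findings).items():
    print(bug, "->", instances)
# Fixing only /login leaves /orders to be flagged again in the next test.
```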


Vulnerability management can be very challenging to get right, but it’s also the key to consistently effective DevSecOps that stays on schedule while still releasing secure applications.