The AppSec Waltz

AVC, Vulnerability Management
Rahul Raghavan | May 29, 2019

Effective application security programs in organisations are often the result of well-oiled operational machinery between development and security teams. But getting to this stage of an almost flawless waltz between the teams isn’t easy. Whether there is an overarching software security initiative or a maturity model that governs their functioning, product engineering teams (especially those with more than 50 members) often squabble over security reports. In this article, we look at some common cause-and-effect patterns during a rift between engineering and security teams.


1. Vulnerability Aesthetics

Irrespective of the mode, scope and frequency of testing, the types of tools used in the process, or the number and severity of issues found, vulnerability management is realised only when the presentation of a vulnerability hits the right chord with the developer. Common areas of concern include -

    • Categorization: The key to successful and faster remediation is helping engineering teams tackle vulnerabilities based on some kind of pattern. For example, the primary strategy for remediating database injection flaws would be similar across instances. Hence what would be ideal is for a developer to be presented with a report that bundles together flaws with similar or allied remediation strategies, as opposed to a potpourri report. With disjoint datasets, developers often need to switch between fix strategies, which becomes mentally draining.
    • False Positive Clutter: The problem of false positives in vulnerability remediation is as old as the hills. While most security practitioners might consider it utopian to successfully predict and eliminate false positives in an application context, what would help hugely is to separate what looks like husk from the wheat. Something as simple as a sectional divider between true vulnerabilities and potential false positives would help development teams save considerable time and speed up remediation.
    • Slang and Verbiage: Developers are hampered by reports that carry a lot of noise within vulnerability datasets. A common issue is descriptions of vulnerabilities that point to the same flaw but are worded differently. This noise usually occurs when results are ported directly from tool output. For example, the common vulnerability Sensitive Cookie Without HttpOnly Flag is described as “Cookie without HttpOnly flag set” by Burp and as “Cookies-flag-missing” by FindSecBugs.
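The three concerns above can be sketched together: normalise tool-specific titles to one canonical name, bundle findings by remediation category, and fence potential false positives behind their own divider. This is a minimal illustration, not a real tool integration; the finding records and alias table are hypothetical, with the two alias entries taken from the Burp and FindSecBugs wordings quoted above.

```python
# Illustrative alias table: tool-specific titles -> one canonical name.
ALIASES = {
    "Cookie without HttpOnly flag set": "Sensitive Cookie Without HttpOnly Flag",  # Burp
    "Cookies-flag-missing": "Sensitive Cookie Without HttpOnly Flag",              # FindSecBugs
}

def normalize(title):
    """Map a tool-specific finding title to its canonical name."""
    return ALIASES.get(title, title)

def bundle(findings):
    """Group findings by remediation category, keeping potential
    false positives in a separate section of the report."""
    report = {"confirmed": {}, "potential_false_positives": []}
    for f in findings:
        f = dict(f, title=normalize(f["title"]))
        if f.get("confidence") == "tentative":
            report["potential_false_positives"].append(f)
        else:
            report["confirmed"].setdefault(f["category"], []).append(f)
    return report

# Hypothetical findings as two tools might report them
findings = [
    {"title": "Cookie without HttpOnly flag set", "category": "session", "confidence": "firm"},
    {"title": "Cookies-flag-missing", "category": "session", "confidence": "firm"},
    {"title": "SQL Injection", "category": "injection", "confidence": "tentative"},
]
report = bundle(findings)
```

With this shape, the two HttpOnly findings collapse into one "session" bundle with a single remediation strategy, and the tentative finding waits behind the divider instead of cluttering the confirmed list.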

2. The AppSec Blindspot

Development and security personas usually have specific skills in their own domain but only generic skills in the other’s. For example, a developer “generally” understands security threats, but completely excels in the nooks and corners of the platform’s workflow. Similarly, while a security engineer understands the granularity of security threats, there are usually gaps in their relevance and applicability to the platform in scope. This creates a multitude of issues -


  • Prioritisation: A constant back and forth on severity scores due to a fundamental difference in impact assertions. For example, a vulnerability that a security engineer considers “Critical” might not be considered so by development.
  • Coverage: Falling short in test coverage; usually gaps in edge cases emerging from a lack of understanding of deeper product workflows. This is typical in cases where the application in scope has many security-oriented functional flaws.
  • Remediation: Suggestions from security that are effective in isolation, but contextually not feasible for development teams to implement.
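One way to defuse the prioritisation back-and-forth is to make the contextual adjustment explicit rather than arguing over a single label. The sketch below is a simple illustration of that idea; the severity labels, weights, and the two context flags are assumptions for the example, not a standard scoring model.

```python
# Illustrative severity reconciliation: start from the security team's
# base rating, then downgrade using context only development can assert.
# Labels and step sizes here are assumptions, not a standard model.

SEVERITY = {"Low": 1, "Medium": 2, "High": 3, "Critical": 4}
LABELS = {v: k for k, v in SEVERITY.items()}

def contextual_severity(base, reachable=True, data_sensitive=True):
    """Adjust a base severity: step down one level if the vulnerable
    path is unreachable in the platform in scope, and one more if it
    touches no sensitive data. Never drop below Low."""
    score = SEVERITY[base]
    if not reachable:
        score -= 1
    if not data_sensitive:
        score -= 1
    return LABELS[max(score, 1)]
```

For example, a finding the security engineer rates "Critical" but which development knows sits behind a disabled feature flag would land at "High" - the disagreement becomes a recorded, reviewable input instead of a squabble.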


3. Reproducibility of Exploits

The developers’ ability to reproduce documented vulnerabilities is critical to maintaining the ecosystem between product and security engineering. Factors that influence the ability to reproduce issues on the developer side include -

  • Means of the Vulnerability: Was the issue uncovered through manual testing, tool-assisted testing, or was it purely a tool-generated flaw? In most teams, developers are not formally trained on tools (especially DAST) well enough to reproduce tool-based issues.
  • Dependencies: The simulation of vulnerabilities often depends on factors such as the environment and associated variables, which can result in inconsistent application behaviour for similar attack vector patterns.
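Both factors argue for recording a self-contained reproduction recipe with each finding - the environment, the exact request, and the evidence observed - so a developer does not need the security tool at all. The sketch below shows one possible shape for such a record; the field names and the example URL are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Repro:
    """A self-contained reproduction recipe attached to a finding.
    Field names are illustrative, not from any real report format."""
    environment: str            # e.g. "staging", with build/config notes
    method: str                 # HTTP method of the attack request
    url: str
    headers: dict = field(default_factory=dict)
    body: str = ""
    evidence: str = ""          # what the tester observed on success

    def as_curl(self):
        """Render the recorded request as a copy-pastable curl command."""
        parts = [f"curl -X {self.method}"]
        for k, v in self.headers.items():
            parts.append(f"-H '{k}: {v}'")
        if self.body:
            parts.append(f"-d '{self.body}'")
        parts.append(f"'{self.url}'")
        return " ".join(parts)
```

Pinning the environment alongside the request also surfaces the dependency problem early: if the recipe only reproduces on staging with a particular config, that constraint is written down rather than rediscovered during triage.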


4. The Problem of Reporting

While teams might have their own ways of applying either a band-aid or an organic solution to some or most of the issues described above, a common factor that almost always aggravates the rift between security and development is conventional reporting structures.

Long-drawn PDF reports are often constrained in their flexibility to include the needed detail, or in the ability to track and amend results. With current engineering teams adopting security within agile, there is a rise in the sheer number of people who consume the contents of a security report. This means that security teams now have to cater to a cross-section of user groups within engineering (QA, DevOps etc.) in terms of relevance and subsequent actioning. Additionally, teams prefer to use their existing platforms such as JIRA to manage security vulnerabilities alongside functional defects, thereby leveraging the additional functionality that tracking platforms bring to AppSec programs.
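Feeding findings into a tracker instead of a PDF can be as simple as mapping each finding onto a create-issue payload. The sketch below targets the shape of JIRA's REST create-issue endpoint (`/rest/api/2/issue`); the project key, issue type, and label are hypothetical, and the actual authenticated HTTP POST is left out.

```python
# Sketch: turn a security finding into a JIRA create-issue payload so
# vulnerabilities live alongside functional defects. Project key,
# issue type, and label below are assumptions for the example.

def to_jira_issue(finding, project_key="APPSEC"):
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": f'[{finding["severity"]}] {finding["title"]}',
            "description": finding["description"],
            "issuetype": {"name": "Bug"},
            "labels": ["security"],
        }
    }

issue = to_jira_issue({
    "title": "Sensitive Cookie Without HttpOnly Flag",
    "severity": "Medium",
    "description": "Session cookie is readable from JavaScript.",
})
# The payload would then be POSTed to <base_url>/rest/api/2/issue
# with real credentials, e.g. via the requests library.
```

Once findings are ordinary tracker issues, the cross-section of consumers - QA, DevOps, developers - gets filtering, assignment, and status history for free, which a static PDF cannot offer.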

In Conclusion

While development teams continue their efforts in developer training and security orientation programs, security engineering teams need to redefine their points of interaction with their engineering counterparts. After all, vulnerability management by security teams is only as effective as the efficiency with which development teams mitigate vulnerabilities.
