“Trying to fix all mistakes is a mistake.” — Toba Beta, Master of Stupidity
Let’s get something straight. There’s no such thing as ‘unlimited’ anything. Unlimited resources, time, and skills—they simply don’t exist. But if you think I’m overly cynical, let’s consider the flip side. Even vulnerabilities that sometimes appear bottomless in their numbers are still finite. And believe me, fixing them isn’t as impossible as you might think.
Remediating application vulnerabilities often feels like a daunting task: the cycle of testing a new build and finding yet more bugs to fix can seem unending. But there are a few ways you can achieve quick or comprehensive wins with vulnerability remediation. Having worked with dozens of clients who have faced serious challenges in development, I’ve learned that there are some common appsec best practices every dev team can employ.
Let's look at three of my top tips for making your vulnerability remediation both manageable and effective.
Tip #1: Fix the source
The Fiordland National Park in New Zealand uses a catchy little phrase to remind tourists of their strict garbage disposal policy: “Pack it in, pack it out.” You literally can’t find a single dustbin there except for in the Visitors’ Centre. Whatever you bring into the park, you take out with you, so be careful how and what you pack for the trip.
It’s not so different with vulnerabilities. I’m referring specifically to the oodles of false positives that tools throw at us when we run vulnerability scanning, Static Analysis, or Software Composition Analysis. A lot of these false positives boil down to the following factors:
- Bad tooling - The tools you use are more noise than signal
- Bad/No tuning - You haven't tuned your tool's signatures to reduce its propensity for identifying false positives
- Bad Triage/Vulnerability Management - Ineffective triaging processes that allow false positives to go unchecked, creating problems when vulnerability remediation has to happen
I see a liberal mix of all three issues in several vulnerability management programs. There's this inherent expectation that security tools should magically produce only true-positive results. This couldn't be further from the truth. Choose your tools wisely. I tend to prefer tools that do one thing and do it well. My general rule of thumb: the fancier the tool, the less effective it is.
Good tools also tend to add additional attributes to their findings, like:
- Confidence: A (usually quantitative) score for how confident the tool is in the finding it has just identified
- Metadata: Extensive details about the payload or signature that was used to identify the finding
- Potential False Positives: Some tools recognise that false positives are always likely, and capture certain findings with a caveat that they need to be manually reviewed before being considered real flaws
This helps in the triaging process. It also lets you create threshold metrics like "Let's not even report Low Confidence vulnerabilities".
A bad vulnerability scanning tool can lower confidence in your overall security program. You need to evaluate well and tread carefully.
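As a sketch of how those confidence scores can drive triage in practice, here's a minimal Python example. The `Finding` shape and the 0.5 threshold are hypothetical illustrations, not the output format of any particular tool:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    rule_id: str
    severity: str
    confidence: float  # 0.0-1.0, as reported by the (hypothetical) tool

def triage(findings, min_confidence=0.5):
    """Split findings into auto-reportable ones and ones needing manual review."""
    report, review = [], []
    for f in findings:
        if f.confidence >= min_confidence:
            report.append(f)
        else:
            # Below the threshold: park it for a human, don't flood the backlog
            review.append(f)
    return report, review
```

The threshold itself becomes a tunable policy knob: raise it and you report less noise at the risk of missing true positives; lower it and you accept more triage work.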
In addition, tuning the signatures that the tool fires at its targets is equally important. Let's say you're performing Static Analysis against a Python Flask application: you can tune the rules to remove signatures for other Python frameworks or non-applicable technologies (NoSQL databases, for example), depending on your tech stack.
Left untuned, tools often fire all possible payloads at the target, leading to false positives. Save yourself precious scanning and triaging time by tuning your tool's rules.
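To make the tuning idea concrete, here's a small Python sketch. The rule registry and tag scheme are invented for illustration (they're not the rule format of any real scanner); a rule is considered applicable only if everything it targets is actually in your stack:

```python
# Hypothetical rule registry: each rule is tagged with the tech it applies to.
RULES = [
    {"id": "flask-debug-enabled", "tags": {"python", "flask"}},
    {"id": "django-csrf-disabled", "tags": {"python", "django"}},
    {"id": "nosql-injection", "tags": {"mongodb"}},
    {"id": "sql-injection-format-string", "tags": {"python"}},
]

# What this (hypothetical) application actually runs on.
TECH_STACK = {"python", "flask", "postgresql"}

def applicable_rules(rules, stack):
    """Keep only rules whose tags are all present in the declared tech stack."""
    return [r for r in rules if r["tags"] <= stack]
```

For a Flask-on-Postgres app, this drops the Django and NoSQL signatures before a single scan runs, which is exactly the time-saving the tip is about.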
Tip #2: Secure Defaults
We have this established way of doing things, especially when it comes to "Security Training for Developers". We train developers on how to NOT do a lot of things, like:
- "Don't roll your own crypto"
- "Don't use blocklist validation"
- "Don't use XML parsers that auto-resolve DTDs"
This approach doesn’t help developers clearly understand what TO do. We still find the same kinds of flaws, in the same apps, written by the same developers. And yet we don't change this pattern, locking ourselves into an endless doom loop.
This needs to change. Instead of trying to fix bugs, why can't we eliminate entire classes of bugs?
Really? Eliminate, you say? How?
Well, let's start here:
- First, find or build libraries that are secure by default. These are libraries that DO NOT even allow insecure behaviour in the first place; they don't even expose an insecure API.
- Then you train your developers on these secure by default libraries.
- Reinforce the usage of these libraries through static checks, code reviews, etc
This helps ensure that developers are not dropped into this twilight zone of doubt when it comes to implementing "secure behaviour". In addition, the developers can use the secure default without thinking, because...well, it's secure by default.
Libraries like LibSodium, Tink, and DOMPurify are great examples of secure defaults.
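To illustrate what "no exposure to insecure API" can look like, here's a minimal Python sketch using only the standard library. This `TokenSigner` is a hypothetical example of the pattern, not one of the libraries above: the algorithm and comparison are fixed internally, so a developer simply cannot pick MD5 or write a timing-unsafe equality check through this interface:

```python
import hashlib
import hmac
import secrets

class TokenSigner:
    """Secure-by-default message signing.

    The hash (SHA-256) and the constant-time comparison are hard-wired;
    callers get no parameter through which to select a weaker option.
    """

    def __init__(self, key=None):
        # A fresh 256-bit random key if none is supplied
        self._key = key or secrets.token_bytes(32)

    def sign(self, message: bytes) -> bytes:
        return hmac.new(self._key, message, hashlib.sha256).digest()

    def verify(self, message: bytes, signature: bytes) -> bool:
        # Constant-time comparison, always -- not an option the caller chooses
        return hmac.compare_digest(self.sign(message), signature)
```

The design choice is the point: the insecure paths aren't discouraged, they're unrepresentable, which is what lets developers use the library "without thinking".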
Tip #3: Analyse Threats and Impacts
Threat modelling is important. In today's age of rapid-release apps, with multiple security checkpoints in the SDLC, it's actually more critical than ever. That’s because, among other things, threat modelling helps you cut through the fluff and prioritise what's important, especially when it comes to vulnerability remediation.
Not all vulnerabilities are alike, and neither are their impacts. A vulnerability may look severe on paper, but you might realise you already have enough compensating controls and redundancies built in that even a CVSS 9 vulnerability is reduced to a Medium or Low priority flaw.
When you do a good job of threat modelling, you have a good understanding of risk. And when you have a good understanding of risk, you have an idea of the:
- Existing Controls
- Redundant Security Controls
All of these go a long way helping you form a good understanding of residual risk.
This is the risk we're worried about: residual risk. If there's significant residual risk beyond our existing controls for a particular vulnerability, then it’s clearly a case for immediate fixing. But if the residual risk is low or nearly non-existent, then even a paper tiger may well turn out to be a tame housecat.
Threat modelling helps you uncover this. And continuous, story-driven threat modelling helps you uncover this faster and more effectively across releases and sprints.
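As a rough sketch of how this prioritisation might be mechanised, here's a Python example that downgrades a raw CVSS base score according to compensating controls. The two-points-per-control weighting is an invented illustration for this post, not part of the CVSS standard; real environmental scoring is more nuanced:

```python
def residual_priority(cvss_base: float, compensating_controls: list) -> str:
    """Map a CVSS base score to a remediation priority, downgraded by
    compensating controls (hypothetical 2.0-point weight per control)."""
    adjusted = max(cvss_base - 2.0 * len(compensating_controls), 0.0)
    if adjusted >= 9.0:
        return "Critical"
    if adjusted >= 7.0:
        return "High"
    if adjusted >= 4.0:
        return "Medium"
    return "Low"
```

So a CVSS 9.1 flaw behind, say, a WAF and network segmentation lands at Medium, while the same flaw with no controls in front of it stays Critical. The exact weights matter less than the habit: score residual risk, not paper risk.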