Earlier this year, VP of Security Strategy Mike Pittenger presented a webinar on risk-ranking open source vulnerabilities and how that process can increase security effectiveness while maintaining developers' agility. As developers continue their rapid adoption of both containers and continuous integration tools, integrating static and open source analysis makes finding vulnerabilities part of their build process.
Mike presented a lot of great information in the webinar about how widely open source is used (Forrester Research recently reported that 80-90% of new application code is open source) and about development processes in 2017, but he primarily focused on how to prioritize vulnerabilities in open source. Considering that more than 3,000 open source vulnerabilities were disclosed in 2016 (and they keep rolling in), you need a plan for how to address them in your organization. During the webinar, Mike got some great questions, and I followed up with him to share the answers with you.
So you mentioned risk-ranking applications. Is it ever appropriate not to test an application?
Of course. The decision to test an application, and the extent to which it is tested, is really a business decision. Applications that manage sensitive customer data and have an exposed attack surface (for example, a web interface) will warrant more scrutiny than internal applications that do not manage sensitive data.
Think of this as a pyramid. Low-criticality apps get some baseline of security testing, or perhaps none at all if your risk appetite supports that. The higher up the pyramid you go, the more testing you will do. At the top, the criticality may be such that the outcome of a hack/breach/failure is loss of life. In that case, significant testing is warranted.
Is there a particular vulnerability I should be paying attention to, such as Heartbleed, Dirty COW, or DROWN?
It is less about a particular vulnerability than it is about:
a) whether or not an exploit is publicly available; and
b) the technical impact of a vulnerability.
For example, if an exploit is available publicly, this greatly expands the number of actors that may be able to execute the attack. You no longer need to worry just about the skilled and dedicated hacker, but also about the script kiddie who can "point and shoot" an attack. The technical impact describes "how" an attack affects an application. A simple example is Twitter, which worries about denial-of-service attacks because those attacks block promoted tweets (revenue) and frustrate their community. On the other hand, a bank worries about attacks where the technical impact is the ability to read/modify data or escalate privileges. They'd gladly have an online banking app unavailable rather than have data stolen.
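The two factors above can be combined into a simple triage score. This is a minimal sketch, assuming a hand-rolled weighting scheme; the CVE identifiers, impact categories, and weights are made up for illustration, and a real program would draw them from a feed such as the NVD plus your own business context.

```python
# Hedged sketch: rank vulnerabilities by (a) public exploit availability
# and (b) how the technical impact maps to this business's concerns.
# Weights here model a bank: data theft outranks denial of service.

BUSINESS_CONCERN_WEIGHTS = {
    "data_theft": 3,
    "privilege_escalation": 3,
    "denial_of_service": 1,
}

def risk_score(vuln: dict) -> int:
    """Higher score = fix sooner."""
    impact = BUSINESS_CONCERN_WEIGHTS.get(vuln["impact"], 0)
    # A public exploit means script kiddies can "point and shoot".
    exploit_bonus = 2 if vuln["public_exploit"] else 0
    return impact + exploit_bonus

# Placeholder vulnerability records (IDs are hypothetical):
vulns = [
    {"id": "VULN-A", "impact": "denial_of_service", "public_exploit": True},
    {"id": "VULN-B", "impact": "data_theft", "public_exploit": False},
    {"id": "VULN-C", "impact": "data_theft", "public_exploit": True},
]
ranked = sorted(vulns, key=risk_score, reverse=True)
```

A Twitter-like business would simply flip the weights, putting denial of service at the top, which is the whole point: the ranking is driven by your concerns, not by the CVE alone.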
We’re starting to look at deploying apps in containers. How does that affect how we risk-rank vulnerabilities?
Containers add a second element to the triage process: the Linux stack. Containers are great in that they allow us to deploy new applications quickly, by stripping the application layer off of a base container and replacing it with a new application. A problem occurs when the base Linux stack is not well maintained. If a vulnerability exists in the Linux stack, deploying new containers based on that stack simply propagates that vulnerability across all of those applications.
Who is the right resource to triage an issue Black Duck finds in our code? Engineering or Legal?
It depends on the organization’s concerns. Legal can facilitate the triage of licensing issues, while engineering and security can triage security issues.