
CVEs: The emperor's old clothes

Author: Luke Hinds / 9 min read / Feb 21, 2024

Wandering the expo floor at any cybersecurity conference such as Black Hat or RSA reveals a clear pattern: vendor after vendor pitching dashboards that highlight extensive lists of CVEs to underscore the urgency and severity of threats to a potential customer. CVEs are typically used as a key selling point to illustrate the effectiveness of security solutions (“Look at all these CVEs we found”). I am here to argue that the CVE as a metric, while it initially sounds ominous and urgent, may not accurately reflect the real-world risk posed to an organization. Most of the time it is noise and rarely a threat at all. We should instead look to other signals to establish the risk of software.

This was not always my position. For a good number of years I sat on the frontline of CVE management. I spent a few years as a member of the Kubernetes security team, where we handled all vulnerabilities reported by researchers and through a bug bounty program. I also managed security vulnerabilities for a popular networking application (OpenDaylight) and served as the elected community lead of the OpenStack Security Group. Over time a clear picture emerged: practically none of the vulnerabilities we handled were ever reported to be actively exploited. Finding something that could be exploited in the wild was as rare as hen's teeth.

Yet I had never come across a data set that could confirm my suspicions, until my former employer Red Hat released its Product Security risk report for 2022 (the 2023 edition has not been released yet).

What the data says about CVEs

When it comes to data about CVEs, Red Hat is worth listening to. Red Hat provides commercial support for tens of thousands of open source projects and language dependencies, which means it has SLAs with its customers to triage and fix CVEs within a certain time period. Its product security teams actively monitor and track thousands of CVEs in projects such as the Linux kernel, OpenJDK, and OpenShift (Kubernetes), along with support for Python, Java, Ruby, and Go packages (to mention just a few). They have been on the frontline of assessing the risk of vulnerabilities in open source software for two decades.

At a glance, the report might appear concerning based on the staggering number of CVEs: a total of 1656 documented vulnerabilities. Yet when we dig into the data, a mere 7 of these (a negligible 0.4%) have seen action in the wild. This statistic reveals a significant gap between the potential threat a CVE might represent and the likelihood of it being a weapon in an attacker's arsenal. The perceived risk does not align with the practical one; the Emperor's wardrobe may not be as grand as it appears.

Data from Red Hat's 2022 Product Security risk report

When we break these flaws down by severity, the numbers tell an intriguing tale. Of the 19 'Critical' CVEs, two were exploited, an exploitation rate of 10.5% that is higher proportionately but still amounts to just two actual exploits. At the other end, only 3 of the 276 'Important' CVEs were exploited in the wild (a mere 1.1%), and none of the 275 'Low' severity vulnerabilities were exploited at all.
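
For anyone who wants to reproduce the arithmetic, a few lines of Python over the figures quoted above (taken straight from the report, no new data) tell the whole story:

```python
# Exploitation rates computed from the figures quoted in this post
# (Red Hat 2022 Product Security risk report).
figures = {
    "All severities": (1656, 7),
    "Critical": (19, 2),
    "Important": (276, 3),
    "Low": (275, 0),
}

for severity, (total, exploited) in figures.items():
    print(f"{severity}: {exploited}/{total} exploited in the wild ({exploited / total:.1%})")
```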

These figures challenge the prudence of allocating equal resources across the board, regardless of a CVE's severity or its chances of exploitation. The exploitation rate for 'Important' CVEs is arguably high enough to merit action, yet roughly 99% of that effort goes into vulnerabilities that are never exploited.

You can start to see how low a risk these really are. At the end of the day, most vulnerabilities are honest developer mistakes; they are quite unlike a malicious package, which is crafted to inflict as much damage as possible. Yet these long lists of unreachable CVEs place a significant compliance load on developers and internal security teams, who spend many thousands of hours playing whack-a-mole to reduce the count. ("99% of my CVE efforts are wasted! The problem is that I don't know which 99%.")

We also need to factor in that IT deployments have changed significantly since vulnerabilities became the central risk indicator. The once-clear demarcation of trust boundaries, where everything inside the network was considered safe and anything beyond was perceived as a threat, no longer holds true. This shift is exemplified by the widespread adoption of "Zero Trust" principles in modern production environments: rather than relying on perimeter-based security measures, organizations now largely adhere to the ethos of "never trust, always verify," making many CVEs even more challenging to exploit.

Vulnerability scanners don’t help matters

On top of the very low chance of exploitability highlighted above, vulnerability scanners bring their own less-than-optimal detection algorithms. These tools often produce false positives and (worse still) false negatives, failing to capture the nuanced context in which vulnerabilities exist or might be exploited. The effectiveness of a security scanner should be judged not just by the quantity of CVEs it can detect, but also by its precision and the relevance of its findings to the specific environment it protects (which is very rarely the case).

Inaccuracy in CVE scanners also leads to a skewed perception of security, where the emphasis again is placed on quantity over quality. An inflated number of detected vulnerabilities can give a false sense of insecurity, or worse, a false sense of security, if critical vulnerabilities are missed or misclassified.

Mitigating these detected CVEs can also distract from other security efforts, such as adopting patterns to avoid "Broken Access Control" (the most common OWASP vulnerability). Thus, while CVE counts can be a useful data point, they are a single facet in the multifaceted domain of cybersecurity risk assessment.

Bogus security vulnerabilities

cURL

Back in August 2023, the popular open source URL retrieval tool cURL received a CVE with a Common Vulnerability Scoring System (CVSS) score of 9.8: CVE-2020-19909. That rating stood until Daniel Stenberg, founder and lead developer of cURL, disputed it.

The bug stemmed from an alleged integer overflow issue in cURL's --retry-delay command line option. This option dictates how long cURL should pause before attempting a retry if the previous transfer encountered a transient error. It expects input values in seconds, which are converted to milliseconds internally by multiplying the input by 1000. It was indeed a bug, but labeling it a major security flaw, with a higher CVSS score than the infamous Heartbleed bug ever received, was an overstatement, to put it mildly. While integer overflows can be problematic, they typically pose a significant threat when an external attacker can manipulate memory allocation sizes. In this instance, no such external manipulation is involved.
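
cURL itself is written in C, but the arithmetic that went wrong is easy to sketch. The snippet below is not cURL's code; it is a minimal Python emulation of what happens when a large, user-supplied --retry-delay value in seconds is multiplied by 1000 and the result has to fit in a signed 32-bit integer (the situation on platforms where C's long is 32 bits):

```python
def wrap_int32(n: int) -> int:
    """Emulate the wraparound of a signed 32-bit integer, as with a 32-bit C long."""
    n &= 0xFFFFFFFF
    return n - 0x100000000 if n >= 0x80000000 else n

retry_delay_seconds = 2_147_484                      # large value passed on the command line
retry_delay_ms = wrap_int32(retry_delay_seconds * 1000)

print(retry_delay_ms)  # prints -2147483296: the delay wraps into a negative number
```

The practical consequence is a wrong retry delay rather than attacker-reachable memory corruption, which is the crux of Stenberg's objection to the 9.8 rating.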

In an analysis of the problem, Stenberg also noted that the issue had already been identified on July 27, 2019. It was then addressed and patched in cURL version 7.66.0, which was officially released in September 2019.

Stenberg commented, “It was obvious already before that NVD really does not try very hard to actually understand or figure out the problem they grade.” As he pointed out in an earlier blog post about NVD and curl, NVD “doesn’t even ask us for help or for clarifications of anything. They think they can assess the severity of our problems without knowing curl, nor fully understanding the reported issues.”

PostgreSQL

Another popular open source project left to deal with bogus CVEs was PostgreSQL.

A CVE raised against PostgreSQL, identified as CVE-2020-21469, alleged that PostgreSQL 12.2 was vulnerable to a denial-of-service attack through the repetitive sending of SIGHUP signals. SIGHUP dates back to the era of serial terminal connections; its default action is to terminate a process, though daemons such as PostgreSQL commonly repurpose it to reload their configuration.

Such a vulnerability would indeed be cause for concern, warranting the high severity score of 9.8 assigned to it. However, there's a crucial caveat: unprivileged users cannot send SIGHUP signals to PostgreSQL processes or terminate them. Only PostgreSQL superusers (or root on the host), or roles explicitly granted permission to call pg_reload_conf(), can trigger the SIGHUP that instructs PostgreSQL to reload its configuration. In essence, anyone in a position to exploit this "flaw" to disrupt PostgreSQL could use any standard method to achieve the same outcome, rendering this particular vulnerability moot.
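
To see that privilege boundary for yourself, here is a minimal sketch using Python and psycopg2 against a database you control; the connection details and role name below are placeholders, not anything taken from the CVE report:

```python
# Hypothetical check: an ordinary, unprivileged role asking PostgreSQL to
# reload its configuration (which is what sending SIGHUP amounts to).
# Connection details are placeholders for a database you control.
import psycopg2

conn = psycopg2.connect("dbname=app user=ordinary_user host=localhost")
cur = conn.cursor()
try:
    cur.execute("SELECT pg_reload_conf();")
    print("Reload triggered: this role has been granted the privilege")
except psycopg2.errors.InsufficientPrivilege as exc:
    print("Permission denied, as expected for an unprivileged role:", exc)
finally:
    cur.close()
    conn.close()
```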

Now let’s have a moment of silence for all the admins who rushed to patch these flaws in the middle of the night or over a weekend. That is cognitive load and stress that nobody needed.

So what to do?

CVEs are not devoid of any merit as an indicator, but they are excessively emphasized, leading to alert fatigue among developers and internal security teams.

Updating dependencies is undeniably good practice for multiple reasons, including mitigating vulnerabilities. Yet it's crucial to recognize that a mundane software bug is equally capable of causing significant issues, like system crashes or service outages that can lead to customer churn.

In this regard, I concur with Linus Torvalds' perspective that "security problems are just bugs"; they are not inherently distinct. However, the media tends to sensationalize security-specific bugs due to their association with hackers and espionage, while overlooking ordinary software bugs.

Rather than fixating solely on CVEs, it is imperative to focus on more actionable indicators. For instance, the presence of a malicious package poses a more immediate threat, as it is designed to cause deliberate harm and is readily accessible. Additionally, evaluating a project's activity level and maintenance status provides valuable insights into its reliability and security. Factors such as regular updates, robust test coverage, and continuous improvement signify a project's commitment to quality.

Moreover, monitoring the trajectory of a project's maintenance and ownership is crucial. Projects that stagnate or fall into disrepair may eventually become deprecated or susceptible to hostile takeovers by individuals with malicious intent. At Stacklok, we are looking to leverage these metrics rather than hyperfocusing on CVE counts.

We are also sponsoring and developing initiatives like VEX (Vulnerability Exploitability eXchange), which lets a team refocus its efforts on addressing truly exploitable vulnerabilities rather than thrashing over unexploitable noise. Stacklok Staff Engineer Adolfo García Veytia is the technical lead of the OpenVEX and Protobom projects in the OpenSSF. We also provide our free-to-use web app, Trusty, to allow developers to understand the activity and risk profile of open source packages. In combination with our open source software supply chain security platform, Minder, we can triage the risk profile of a package within a developer's pull request.
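
To make the VEX idea concrete, here is a rough, hand-written illustration of the kind of statement an OpenVEX document carries. The CVE number, package URL, and author below are placeholders rather than real data, and the field names follow the published OpenVEX spec as I read it:

```python
# A sketch of an OpenVEX document declaring that a specific CVE does not
# affect a shipped package. All identifiers here are illustrative placeholders.
import json

vex_document = {
    "@context": "https://openvex.dev/ns/v0.2.0",
    "@id": "https://example.com/vex/2024-0001",
    "author": "Example Project Security Team",
    "timestamp": "2024-02-21T00:00:00Z",
    "version": 1,
    "statements": [
        {
            "vulnerability": {"name": "CVE-2023-12345"},
            "products": [{"@id": "pkg:pypi/example-package@1.2.3"}],
            "status": "not_affected",
            "justification": "vulnerable_code_not_in_execute_path",
        }
    ],
}

print(json.dumps(vex_document, indent=2))
```

A scanner or policy engine that consumes a document like this can suppress the CVE for the listed product, which is exactly the shift from unexploitable noise to truly exploitable vulnerabilities described above.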

It is time for the industry to shift its attention away from sensationalizing CVEs with no view of how reachable those vulnerabilities actually are, and to put more focus on the practical assessment of package quality and security.

Luke Hinds is the CTO of Stacklok. He is the creator of the open source project sigstore, which makes it easier for developers to sign and verify software artifacts. Prior to Stacklok, Luke was a distinguished engineer at Red Hat.
