Slipping through the cracks - the imperfections and nuances of CVE

Introduction

This article is mostly dedicated to people working with vulnerabilities affecting commonly used software: vulnerability management teams, system administrators, red teams, incident responders, threat hunters, threat intelligence analysts, bug hunters, security researchers and software vendors. This subject has, to some extent, been bugging me for years, so I eventually decided to put all my thoughts and experiences together. The main purpose of this article is to spread awareness of not-so-well-known vulnerabilities.

Below, I will present various cases of security issues that are not reflected in the CVE database, as well as potential problems this phenomenon creates. For the most part, I am going to use examples of vulnerabilities that I have personally discovered and reported over the years, along with the different experiences I encountered along the way.

About CVE

Let's start with the most basic concepts. According to its official website (https://cve.mitre.org/), CVE (Common Vulnerabilities and Exposures) is a program whose mission is to "identify, define, and catalog publicly disclosed cybersecurity vulnerabilities".

Whenever an important vulnerability is brought to light, it should be assigned a dedicated CVE record. Each record is identified by a number in the format CVE-YYYY-XXXXX, where YYYY is the year associated with the CVE entry, while XXXXX is a sequence number. The main organization responsible for managing CVE records and infrastructure is MITRE, which also serves as the main CNA (CVE Numbering Authority). There are multiple other partner organizations functioning as CNAs.
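
Because the identifier format is strictly defined, CVE IDs are easy to validate or extract programmatically. Below is a minimal Python sketch (the helper name is my own; note that the sequence part may have four or more digits):

```python
import re

# CVE-YYYY-XXXXX: a literal "CVE-" prefix, a four-digit year,
# and a sequence number of four or more digits.
CVE_ID_RE = re.compile(r"^CVE-\d{4}-\d{4,}$")

def is_valid_cve_id(candidate: str) -> bool:
    """Return True if the string is a well-formed CVE identifier."""
    return bool(CVE_ID_RE.match(candidate))

print(is_valid_cve_id("CVE-2021-44228"))  # True: well-formed
print(is_valid_cve_id("CVE-21-1234"))     # False: two-digit year
```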

As the cve.org website states, "CNAs are vendor, researcher, open source, CERT, hosted service, bug bounty provider, and consortium organizations authorized by the CVE Program to assign CVE IDs to vulnerabilities and publish CVE Records within their own specific scopes of coverage."

The main purpose of CVE is, or at least should be in my opinion, to provide asset owners, system administrators and users with information about the risks associated with the software they use or have running on the IT assets they are responsible for. This way, whenever new vulnerabilities are discovered, they can act accordingly and address those risks. This process is commonly referred to as vulnerability management. To fulfill this purpose, CVE records must exist, be accurate and provide at least the most basic information, such as the name of the affected software product along with the relevant versions. Information such as the type of vulnerability, the related attack vector and the overall risk score (CVSS, https://www.first.org/cvss/v4.0/specification-document) is almost as important, as it helps in dealing with the problem: it indicates how serious the vulnerability is and provides initial guidance on how it could be mitigated. The issues I am addressing in this article revolve around both the existence and the accuracy of CVE records.
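
As an aside, the CVSS vector string itself is machine-readable, which is part of what makes these records useful for automation. Here is a minimal sketch of pulling a v3.1 vector apart into its metrics (it does not compute the score, which requires the full CVSS formula):

```python
def parse_cvss_vector(vector: str) -> dict:
    """Split a CVSS vector string into its individual metric fields."""
    prefix, _, metrics = vector.partition("/")
    if not prefix.startswith("CVSS:"):
        raise ValueError("not a CVSS vector: " + vector)
    parsed = {"version": prefix.split(":", 1)[1]}
    for part in metrics.split("/"):
        key, _, value = part.partition(":")
        parsed[key] = value
    return parsed

# The vector of a typical network-reachable, no-interaction RCE:
v = parse_cvss_vector("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H")
print(v["AV"], v["C"])  # prints: N H
```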

Also, keep in mind that CVE only applies to software products that customers deploy themselves in their own infrastructures, on premises or in the cloud, or that come preinstalled on hardware shipped to them. It does not apply to Software as a Service (SaaS) platforms (online services). The reason is simple: when a vulnerability is discovered in SaaS, fixing it is not within the customer's responsibility or even capability.

The CVE process and its challenges

The usual process for creating a new CVE entry is as follows:

1. The software vendor becomes aware of the vulnerability.

2. A CVE number is reserved (but the entry is not published yet) by the relevant CNA. If the vendor is also a CNA, they just take one of the CVE numbers from the range they received from MITRE for the current year. For smaller vendors (who are not CNAs) and open-source projects, MITRE usually acts as the CNA, and in that case, the CVE number needs to be requested from them.

3. The vendor develops a new version of the software addressing the issue.

4. The vendor releases the new version to the public, along with a release note (the release note should mention the vulnerability and the CVE number).

5. The relevant CNA gets notified about the new release. They verify the release note and update the CVE entry with information such as the product name(s) and version(s), the CVSS score and vector, and the reference pointing to the release note.

6. Everyone who relies on the CVE system gets informed about the new vulnerability, becoming aware of the risk and reasons to upgrade their instances of the affected software. Other ways to deal with the risk could include uninstalling the software, isolating the system, disabling some features, using other workarounds, deploying additional monitoring, accepting the risk, etc.

Of course, this is the ideal scenario, in which every party involved acts swiftly and responsibly. But often, things go differently for various reasons. The scenarios I am going to describe involve cases where software vulnerabilities do not receive dedicated CVE numbers despite vendors being aware of them, leaving users and organizations exposed to attacks. This is an attempt to highlight the imperfections of the CVE program and process and consider possible solutions.

EOL products

The first example in this series of issue types that the CVE system reflects inconsistently is vulnerabilities affecting products that have reached end-of-life (EOL) status, meaning the vendor no longer releases security updates. Although the presence of EOL software in production alone is, and should be, considered high risk, EOL status by itself is oftentimes not treated the same way as software explicitly and visibly affected by CVEs with high CVSS scores. Even though vulnerability scans and penetration tests usually bring up the presence of EOL software as high-risk findings, administrators and asset owners tend to neglect these findings unless there is proof of exploitable vulnerabilities affecting those products. The problem is that when such vulnerabilities are eventually detected and brought to light, sometimes they receive CVE numbers and sometimes they don't.

For example, CVE-2024-3272 and CVE-2024-3273 have been assigned to Remote Code Execution issues affecting EOL D-Link Network-Attached Storage devices, while a Local Privilege Escalation in Intel Power Gadget 3.6 has not received a CVE assignment. Although the former undoubtedly affects more systems globally and is more impactful, the latter could just as well tip the scales in a security incident and, in my opinion, should also have received a CVE number. This would provide a clear and direct incentive for those still using that product to either finally replace it with its successor or address the risk in another suitable way. Had I discovered and reported the issue just two months earlier, it would have been assigned a CVE number.

This inconsistency stems not only from differences in the reach and impact of the two examples, but also from individual vendors' policies. While some vendors decide to issue CVEs for vulnerabilities affecting their EOL products, others don't. Eventually, it all boils down to an arbitrary decision by the vendor, who often also happens to be a CNA.

We don't know exactly how many vulnerabilities of such lower profile affecting EOL products are not included in the CVE system.

Vulnerabilities in embedded dependencies (third-party components)

Now, this issue is, I believe, more significant as it happens more frequently. I have experienced it myself quite often.

What happens when a vulnerability is discovered in a library or other third-party component used by many other software products? Well, usually (though not always), the affected component gets a CVE number assigned. Good examples are Apache Log4j (CVE-2021-44228) and Flexera InstallShield (CVE-2021-41526). Then, as more and more affected software products are discovered, the CVE entry should be updated with references to the respective product release notes. By taking a look at both https://nvd.nist.gov/vuln/detail/CVE-2021-44228 and https://nvd.nist.gov/vuln/detail/CVE-2021-41526, we can clearly see that the list of references does not include all the affected software products, making those entries incomplete.

Many people expect each vulnerable software product to receive its own individual CVE record, and I believe that is the correct approach. Otherwise, the burden of associating vulnerable components with particular products falls on software vendors, administrators and vulnerability management teams. But many vendors, as well as CNAs such as MITRE or the US GOV CERT, do not follow this approach. They advise using the vulnerable library's CVE number to track occurrences of software that depends on it, rather than assigning a separate new number.

To provide just a few examples, my colleague Paweł Karwowski and I recently discovered a Local Privilege Escalation in QlikView and managed to register a CVE number for it (CVE-2024-29863). The root cause was the use of a known vulnerable version of Flexera InstallShield. It seems we obtained an individual CVE number only because QlikView's release note provided to MITRE did not include that detail.

On the other hand, when we attempted to register CVE numbers for similar issues affecting MindManager 23 and Lumivero's @Risk Palisade, both MITRE and the US GOV CERT told us to use CVE-2021-41526 instead. Another similar case involved a Local Privilege Escalation in older versions of Zscaler Client Connector, where the vulnerability, again, lay in an external InstallShield component.

It is very reasonable to assume that such inconsistencies are far more common than these few examples suggest.

Vendors silently patching without registering CVEs or even publishing release notes

Sadly, some vendors do not like having any CVEs associated with their products, as they try to maintain a false image of their software being flawless. They believe that making security bugs public puts them or their software in a bad light. So, they end up silently patching their products, either without mentioning the updates in their release notes or by using very vague statements such as "security update," which provide no context. This lack of transparency makes it more likely for the updates to be ignored and neglected by users, administrators and asset owners, while keeping those issues invisible to vulnerability management teams and scanners.

Some vendors quietly fix issues discovered by their internal teams and external pen-testing contractors, as well as the ones reported through bug bounty programs with nondisclosure terms. Others also try to hide vulnerabilities reported directly by security researchers or their own customers. In situations like this, a trusted third party such as the US GOV CERT can be involved to coordinate the responsible disclosure process via their VINCE portal, with mixed results.

Sometimes the entire process turns into a prolonged struggle, with so much back-and-forth that the reporting party eventually concludes that it is just not worth the effort and decides to let go and focus on something more productive. In one case I am aware of (Don't ask how I know. I just do), a vendor even threatened legal action against its own customer by claiming a potential NDA violation. This threat forced the customer to withdraw a CVE submission after it had already been sent to the US GOV CERT.

Luckily, in my experience, vendors acting this way are in the minority, but there are still enough of them to have a negative impact, not only on the security of their customers, but also on how the security researchers involved will perceive and approach the responsible disclosure process in the future.


The risk of private release notes from vendors with paid customers only

Also flying under the radar of CVE are issues affecting fully commercial products for which vendors publish release notes that are only available to paying customers. This scenario leaves no public references for MITRE to verify. The vendors' reasoning here (those who use the product get informed about the need to upgrade their deployments, so there is no need to publish CVEs) creates a significant gap. More specifically, in large organizations where product administration and vulnerability management are separate teams, this can leave the latter unaware of the risks, while the former may not give the release note the importance it deserves, potentially leaving the product unpatched for an extended period.

Moreover, CVE provides standardization that enables easy automation, allowing for the quick exchange and processing of information at scale. Privately available release notes are not the information source that most vulnerability management teams rely on. The more products are maintained this way, the more difficult and ineffective vulnerability management becomes.
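
To illustrate the kind of automation that standardized CVE data enables (and that private release notes break), here is a sketch of matching an installed version against an affected range. The version numbers are hypothetical, and the parsing is deliberately naive; real tools also handle suffixes such as "-rc1":

```python
def version_tuple(v: str) -> tuple:
    """Naive dotted-version parser, e.g. '2.14.1' -> (2, 14, 1)."""
    return tuple(int(part) for part in v.split("."))

def is_affected(installed: str, introduced: str, fixed: str) -> bool:
    """True if installed falls in the half-open range [introduced, fixed)."""
    return version_tuple(introduced) <= version_tuple(installed) < version_tuple(fixed)

# Hypothetical record: issue introduced in 2.0.0 and fixed in 2.17.1.
print(is_affected("2.14.1", "2.0.0", "2.17.1"))  # True
print(is_affected("2.17.1", "2.0.0", "2.17.1"))  # False: first fixed version
```

This half-open convention mirrors how fixed versions are usually listed: the release that ships the fix is excluded from the affected range.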

Inconsistencies in CVE assignments: a closer look at the process

It sometimes happens that vulnerabilities for which we could expect CVE numbers do not receive them. In 2015, I identified a couple of Local Privilege Escalation issues in Hue 3.7.1 and Ambari 1.7.0. After reporting them to the relevant project teams and not hearing back for a couple of days, I posted them on https://seclists.org/fulldisclosure and then requested CVE numbers from MITRE. Because I had performed full disclosure before requesting them, MITRE did not reserve CVE numbers. Although I admit I should have waited until the products were patched and release notes issued before disclosing the vulnerabilities, the outcome raises questions about the criteria used for including vulnerabilities in the CVE catalog.

On the other hand, there are CVE entries such as CVE-2023-24044 that should have been dismissed during the request stage but were not, despite having no real security impact.

Sometimes, vulnerabilities are acknowledged and addressed by vendors, but they just get lost in the process and never make it into release notes or CVE (e.g., https://github.com/pawlokk/apexone-poc). This can be due to the large volume of cases they handle or simple mistakes.

In my opinion, these few examples demonstrate the variability in the CVE assignment process depending on CNAs. A more standardized approach could enhance the reliability of the CVE system and better serve the security community.

Prolonged process of addressing issues and releasing CVEs

Another scenario worth mentioning is prolonged fix-release periods. Imagine that a vulnerability is reported to the vendor, they acknowledge it and start working on a fix, only to realize later that many more of their products are affected in the same or a very similar way, and fixing all of them takes far longer than dealing with the one originally reported. So, the question is: should the vendor release the CVE shortly after publishing new versions of the products in which they have already addressed the issue, or should they wait until every affected product is fixed? Cases like this can sometimes take more than a year. I think I understand the reasoning behind waiting: the vendor does not want to tip off bad actors to search for the same issue in other products while patches are not yet available. But at the same time, I believe that delaying the CVE release for prolonged periods, such as half a year or longer, is ultimately worse for the customers, because it keeps them unaware of the risk. For reference, Google Project Zero follows a 90+30 disclosure policy (https://googleprojectzero.blogspot.com/p/vulnerability-disclosure-policy.html).
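
As a rough sketch of the timeline arithmetic behind a 90+30-style policy (my simplified reading of it: details become public 90 days after the report, or 30 days after the fix if the fix shipped within that window):

```python
from datetime import date, timedelta
from typing import Optional

def disclosure_date(reported: date, patched: Optional[date] = None) -> date:
    """Simplified 90+30 model: disclose 90 days after the report,
    or 30 days after the patch if it shipped inside that window."""
    deadline = reported + timedelta(days=90)
    if patched is not None and patched <= deadline:
        return patched + timedelta(days=30)
    return deadline

# Reported on Jan 1, patched on Feb 15: details go public on Mar 16.
print(disclosure_date(date(2024, 1, 1), date(2024, 2, 15)))  # 2024-03-16
```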

Cases disputed by vendors

Quite often, issues reported to vendors are not acknowledged by them as security vulnerabilities. Again, just from my own experience, a handful of examples are https://hackingiscool.pl/a-case-of-dll-side-loading-from-unc-via-windows-environmental-variable/ ("feature, not a bug"), https://hackingiscool.pl/cmdhijack-command-argument-confusion-with-path-traversal-in-cmd-exe/ ("expected behavior" and very rarely exploitable) and https://seclists.org/fulldisclosure/2016/Nov/67 (also very rarely exploitable). This situation is quite common with cases that involve authenticated users abusing built-in features, or undocumented/unexpected behaviors that can lead to attacks under a specific set of circumstances. In other words, such disputes never involve critical vulnerabilities. But at the end of the day, from the attacker's perspective it makes no difference whether what they exploit is disputed or what the final verdict was.

A more interesting example is the recent local privilege escalation issue in the Microsoft Xbox Gaming Service on Windows (https://github.com/Wh04m1001/GamingServiceEoP), which was initially dismissed by Microsoft as a non-vulnerability, but the dispute eventually led to it being assigned CVE-2024-28916.

Issues resolved on the OS level

There is one more peculiar scenario I came across. What about vulnerabilities that are exploitable at the time of discovery on the most recent version of the relevant operating system (let's take Windows as an example), and remain exploitable when they are reported to the vendor, but the exploitation method gets mitigated by Microsoft before the vendor acknowledges the issue or registers a CVE? Good recent examples are msiexec-based local privilege escalation attacks involving repair mode and file operations, from before msiexec was switched from using the invoking user's TEMP variable to C:\Windows\SystemTemp, a change that effectively killed local privilege escalation vulnerabilities in a large number of MSI installer packages. Should those issues be dismissed, on the assumption that anyone who does not upgrade their operating system accepts the risk of local privilege escalation exploits anyway, or should customers and administrators still get a heads-up about them? Opinions vary: while the US GOV CERT suggested it could make sense, I thought it was not worth the effort.

CVE not working properly for open source, supply chain issues and much more

While writing this, I came across a very comprehensive two-part article by Mark Curphey, titled "CVE/NVD doesn't work for open source and supply chain security" (Part 1 and Part 2). Not only does it confirm many of the observations I have raised here, but it also explains how CVE does not properly work for open-source projects and supply chain issues, and it provides many other valuable insights I had not considered. I highly recommend reading it.

Conclusion and potential solutions

Let’s start with EOL software: just don't use it. If you do, and a new vulnerability affecting it is published, the vendor will not release a fix, leaving you to deal with the risk in some other way. And it is very likely you are not even going to know about that vulnerability, as the vendor might not even release a CVE for it.

When it comes to the lack of transparency among some vendors, one way to partially reduce the risk is to install updated versions as soon as they are released. I know this is far from ideal: updating installations always requires some resources and attention, and always involves some degree of risk. It is not that uncommon to see things break after updates. Therefore, when availability is critical, it’s best to apply them first in dedicated test or acceptance environments. Only after confirming that things work as expected should we update our production systems as well. For these exact reasons, administrators tend to be reluctant to install newer versions simply because they are available. But this is exactly what I am suggesting here. It would be much better if we could instead trust our vendors to alert us about vulnerabilities and provide the full picture behind every update, so we could make more informed decisions.

This brings me to my second suggestion: we should appreciate and promote vendors who adhere to transparency when it comes to security vulnerabilities and incidents, as those vendors are fair and much more trustworthy. If a product has no CVEs associated with it, it is likely not because it has always been perfectly secure, but rather because it has not been put under enough scrutiny or because the vendor makes sure no CVEs are made public. Again, keep in mind that CVE does not apply to SaaS solutions.

It is also important to recognize that most bug hunters who follow responsible disclosure do so for free, often only expecting public acknowledgment in return. When vendors refuse to even provide a simple “thank you” (despite the fact it costs them nothing), these researchers lose the incentive to report issues, engage in responsible disclosure or research the vendor's products at all. This naturally leaves more room for malicious actors.

Now, looking at the imperfections and inconsistencies of the CVE system and program, we should be aware of these limitations and consider using additional sources of vulnerability information, such as:

- mailing lists (e.g., https://seclists.org/fulldisclosure),

- Twitter/X and blogs,

- GitHub,

- vendor websites,

- release notes,

- additional external non-CVE information feeds provided through paid subscriptions, such as https://flashpoint.io/blog/vulndb-uncovers-hidden-vulnerabilities-cve/.
