A critical vulnerability can send countless organizations into chaos, as security teams read up on the vulnerability, try to figure out whether it applies to their systems, download any available patches, and deploy those fixes to affected machines. But a lot can go wrong in the process of discovering, disclosing, and addressing a vulnerability: an inflated severity rating, a premature disclosure, even a mix-up in names.
In cases like these, the security community braces itself for a major sea change and instead gets a ripple. Here are some of the past year's biggest miscommunications and errors in security vulnerabilities.
1. Wormable vulnerabilities

Some qualifiers send shivers up the spine of the security community as a whole. A vulnerability is called "wormable" when an infected system can act as an active source of infection for other systems, which makes the growth potential of an infection exponential. You'll often see the phrase "WannaCry-like proportions" used as a warning about how bad it could get.
Which brings us to our first example: CVE-2022-34718, a Windows TCP/IP Remote Code Execution (RCE) vulnerability with a CVSS rating of 9.8. The vulnerability could have allowed an unauthenticated attacker to execute code with elevated privileges on affected systems without user interaction, which makes it wormable. In the end, it turned out to be not so bad: it only affected systems with IPv6 and IPsec enabled, and it was patched before an in-depth analysis of the vulnerability was publicly disclosed.
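The exponential spread that makes a wormable bug so feared can be illustrated with a toy model. This is purely a sketch: the population size, contact rate, and the `simulate_spread` helper are made-up for illustration, not measurements of any real worm.

```python
# Toy model of wormable spread (illustrative only; the numbers below
# are invented, not taken from any real incident).
def simulate_spread(population, initial_infected, contacts_per_step, steps):
    """Each infected host attempts `contacts_per_step` infections per step;
    attempts that land on already-infected hosts are wasted."""
    infected = initial_infected
    history = [infected]
    for _ in range(steps):
        attempts = infected * contacts_per_step
        # Only the fraction of attempts hitting still-clean hosts succeeds.
        new_infections = attempts * (1 - infected / population)
        infected = min(population, infected + new_infections)
        history.append(round(infected))
    return history

print(simulate_spread(population=100_000, initial_infected=1,
                      contacts_per_step=2, steps=10))
```

Early on the infection roughly triples every step, which is why a wormable RCE on a widely deployed platform sets off alarm bells even before any exploitation is observed.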
2. Essential building blocks
Something we've learned the hard way is that many applications rely on very popular libraries maintained by volunteers. A library is a set of resources that can be shared among programs. Often these resources are specific functions aimed at a certain goal, which can be called upon when needed so they do not have to be included in the software's own code. A prime example of such a library causing quite some havoc was Log4j.
So, when the OpenSSL team pre-announced a fix for a critical issue, everybody remembered that the last time OpenSSL fixed a critical vulnerability, that vulnerability was known as Heartbleed. Heartbleed was discovered and patched in 2014, but vulnerable systems kept popping up for years afterward.
However, when the patch (OpenSSL 3.0.7) came out for the more recent OpenSSL issue, it turned out to cover two buffer overflow vulnerabilities, CVE-2022-3602 and CVE-2022-3786, and the severity had been downgraded from critical to high. That was good news all around: a patch for the two vulnerabilities was available, the announced issue wasn't as severe as we expected, and no known exploit for the vulnerabilities was doing the rounds.
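Per OpenSSL's advisory, only the 3.0.0 through 3.0.6 releases were affected, with 3.0.7 as the fix; older 1.1.1 builds were never in scope. A minimal version-comparison sketch (the `is_affected` helper is our own name, not part of any official tooling; in practice you'd check the version your system actually links, e.g. with `openssl version` on the command line):

```python
def is_affected(version):
    """Return True if an OpenSSL version tuple (major, minor, patch)
    falls in the range affected by CVE-2022-3602 / CVE-2022-3786:
    3.0.0 up to and including 3.0.6 (fixed in 3.0.7)."""
    return (3, 0, 0) <= version <= (3, 0, 6)

print(is_affected((3, 0, 6)))  # True: needs the 3.0.7 patch
print(is_affected((3, 0, 7)))  # False: already patched
print(is_affected((1, 1, 1)))  # False: the 1.1.1 branch was not affected
```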
3. Zero-day or not?

The different interpretations of the term "zero-day" tend to be confusing as well.
The most accepted definition is:
“A zero-day is a flaw in software, hardware or firmware that is unknown to the party or parties responsible for patching or otherwise fixing the flaw.”
But you will almost as often see something called a zero-day because the patch is not available yet, even though the party or parties responsible for patching or otherwise fixing the flaw are aware of the vulnerability. For example, Microsoft uses this definition:
“A zero-day vulnerability is a flaw in software for which no official patch or security update has been released. A software vendor may or may not be aware of the vulnerability, and no public information about this risk is available.”
The difference is significant. Almost any complex platform or piece of software contains vulnerabilities; someone has to find one before it becomes a risk. Whether it then becomes a threat depends on who finds the flaw. If the researcher follows the rules of responsible disclosure, the vendor will be made aware of the flaw before anyone else, and will have a chance to find and publish a fix before any malicious actors learn about it.
So, for a vulnerability to be alarming, I would argue it has to be used in the wild or a public Proof-of-Concept has to be available before the patch has been released.
As an example of where this went wrong, a set of critical RCE vulnerabilities in WhatsApp got designated as a zero-day by several outlets, including some that should know better. As it turned out, the vulnerabilities listed as CVE-2022-36934 and CVE-2022-27492 were found by the WhatsApp internal security team and silently fixed, so they never posed any actual risk to any user. Yes, the consequences would have been disastrous if threat actors had found the vulnerabilities before the WhatsApp team did, but there never were any indications that these vulnerabilities had been exploited.
4. What's in a name?

Publicly disclosed computer security flaws are listed in the Common Vulnerabilities and Exposures (CVE) database, each under an individual number. CVE numbers are very helpful because they are unique and used by many reliable sources, so they make it easy to find a lot of information about a particular vulnerability. But they are hard to remember (for me at least). Coming up with fancy names and logos for vulnerabilities, such as Log4Shell, Heartbleed, and Meltdown/Spectre, helps us tell them apart.
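The CVE ID format itself is well defined by MITRE: the literal prefix "CVE", a four-digit year, and a sequence number of four or more digits. A small parsing sketch (the `parse_cve` helper is our own, not official CVE tooling):

```python
import re

# CVE IDs: "CVE-" + 4-digit year + "-" + sequence number of 4+ digits,
# per the MITRE CVE numbering scheme.
CVE_PATTERN = re.compile(r"^CVE-(\d{4})-(\d{4,})$")

def parse_cve(cve_id):
    """Return (year, sequence) for a well-formed CVE ID, or None."""
    match = CVE_PATTERN.match(cve_id)
    if not match:
        return None
    return int(match.group(1)), int(match.group(2))

print(parse_cve("CVE-2022-22965"))  # (2022, 22965)
print(parse_cve("CVE-22-1"))        # None: malformed ID
```

Note that the year is the year the ID was assigned or reserved, not necessarily the year of discovery or disclosure, which is one more way the numbers can mislead.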
But when security experts themselves start to confuse different vulnerabilities in the same framework and researchers disclose details about an unpatched vulnerability because they think the information is out anyway, serious problems can arise.
In March, two RCE vulnerabilities were being discussed on the internet. Most of the people talking about them believed they were talking about “Spring4Shell” (CVE-2022-22965), but in reality they were discussing CVE-2022-22963. To add to the stress, a Chinese researcher prematurely spilled details about the vulnerability before the developer of the vulnerable Spring Framework could come up with a patch. This may have been due to the confusion about the two vulnerabilities.
In the end, Spring4Shell fizzled, working only for certain configurations and not for an out-of-the-box install.
Public service or not?
So, are we doing the public a service by writing about vulnerabilities? We feel we are, because it is good to raise awareness about the existence of vulnerabilities. But, to be effective, we need to meet certain criteria.
- First of all, make clear who is affected, who needs to do something about it, and what readers can do to protect themselves.
- Avoid exaggerating the impact, even though assessing the threat level is not always easy when we don't have the exact details of a vulnerability.
- Make it very clear whether or not a threat is being used in the wild if you have that information.
In a recent assessment, security researcher Amélie Koran said on Mastodon that the economic costs of Heartbleed were mostly due to vulnerability assessment and patching, not necessarily lost or stolen data. That's not to say skipping the patch would have been wise, but it is something to keep in mind: a panic can do more harm than the actual threat.