Revolutionizing Cyber Risk Assessment
On May 19, 2025, the National Institute of Standards and Technology (NIST) and the Cybersecurity and Infrastructure Security Agency (CISA) launched a new cybersecurity white paper (CSWP 41) introducing the Likely Exploited Vulnerabilities (LEV) metric. This tool proposes a smarter, data-driven approach to identifying software and hardware vulnerabilities that may already be under attack, even if they haven't been officially reported as exploited. The innovation is set to reshape how cybersecurity teams prioritize their actions and secure infrastructure against emerging threats.
Until now, organizations have leaned heavily on the Exploit Prediction Scoring System (EPSS) and the Known Exploited Vulnerabilities (KEV) list. EPSS calculates the 30-day probability that a given vulnerability might be exploited, while KEV highlights vulnerabilities already known to be used in real attacks. The problem? EPSS doesn't track past exploits, and KEV only catches what's confirmed, often too late.
LEV fills this gap by estimating the likelihood that a vulnerability has already been exploited, even if it hasn't been publicly flagged. It blends historical EPSS data with KEV entries, relying on statistical models to infer past exploitation patterns. This allows cybersecurity teams to act earlier, prioritize smarter, and plug gaps in current defense strategies.
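The core idea can be sketched as a probability composition: treat each day's EPSS score (rescaled from its 30-day window) as an independent chance of exploitation, then accumulate those chances over the vulnerability's history. The rescaling and independence assumptions below are simplifications for illustration, not NIST's published LEV equation.

```python
from math import prod

def lev_estimate(daily_epss_scores, window=30):
    """Rough LEV-style estimate: the probability a vulnerability was
    exploited at least once, given a history of daily 30-day EPSS scores.

    Simplifying assumptions (not NIST's exact formula): each 30-day EPSS
    score is rescaled to a per-day probability, and days are treated as
    independent, so P(ever exploited) = 1 - prod(1 - p_day_i).
    """
    p_never_exploited = prod(1 - epss / window for epss in daily_epss_scores)
    return 1 - p_never_exploited

# 90 days at a steady, moderate EPSS score of 0.15 compound to a
# cumulative exploitation probability far above any single day's score.
print(lev_estimate([0.15] * 90))
```

This compounding effect is exactly why a vulnerability with only moderate forward-looking scores can still carry a high retrospective likelihood of exploitation.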
LEV: How It Works and Why It Matters
LEV is not meant to replace EPSS or KEV; instead, it enhances both. It's built on a probability model that compiles EPSS scores over time and weighs them based on historical trends. There are two versions:
LEV: A streamlined option for organizations with limited resources.
LEV2: A more complex model offering deeper insights but requiring stronger computing capabilities.
Each LEV report provides a snapshot for any CVE (Common Vulnerabilities and Exposures), including:
The estimated probability of past exploitation
Peak EPSS score with date
A 30-day EPSS history
Affected systems and software via CPE (Common Platform Enumeration)
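The report fields listed above can be modeled as a simple record. The field names and example values below are illustrative, not NIST's official schema:

```python
from dataclasses import dataclass, field

@dataclass
class LEVReport:
    """Sketch of the per-CVE fields a LEV report carries.
    Names and types are assumptions for illustration only."""
    cve_id: str                    # CVE identifier
    lev_probability: float         # estimated probability of past exploitation
    peak_epss: float               # highest EPSS score observed
    peak_epss_date: str            # date of that peak (ISO 8601)
    epss_history_30d: list = field(default_factory=list)  # last 30 daily scores
    affected_cpes: list = field(default_factory=list)     # CPE identifiers

# Hypothetical example, not a real report:
report = LEVReport("CVE-0000-00000", 0.42, 0.31, "2025-03-07",
                   [0.28, 0.30, 0.31],
                   ["cpe:2.3:a:vendor:product:1.0:*:*:*:*:*:*:*"])
print(report.lev_probability)
```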
Additionally, NIST introduced a composite model, combining EPSS, KEV, and LEV to create a well-rounded vulnerability prioritization framework.
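One plausible reading of such a composite is to take the strongest signal from the three sources: KEV membership is treated as confirmed exploitation, and otherwise the larger of the forward-looking (EPSS) and retrospective (LEV) probabilities drives priority. This combination rule is an assumption for illustration, not necessarily the paper's exact formula:

```python
def composite_priority(epss_score, lev_probability, in_kev):
    """Composite prioritization sketch: KEV-listed vulnerabilities get
    maximum priority; otherwise the larger of the EPSS (future-looking)
    and LEV (retrospective) probabilities is used. The max-of-signals
    rule here is an assumed simplification."""
    if in_kev:
        return 1.0
    return max(epss_score, lev_probability)

# A low EPSS score alone would bury this CVE; a high LEV estimate lifts it.
print(composite_priority(0.12, 0.70, in_kev=False))
```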
What Undercode Says:
The LEV metric represents a much-needed evolution in cybersecurity intelligence. For years, organizations have been playing a dangerous guessing game, relying on incomplete data, outdated KEV entries, or predictive scores that don't reflect current realities. LEV flips the script. It anticipates exploitation based on accumulated probabilities, helping teams get ahead rather than react late.
At its core, LEV acknowledges a harsh truth in cybersecurity: by the time a vulnerability is confirmed as exploited, it may already be too late. Attackers are fast, and response teams often operate in a haze of uncertainty. LEV injects statistical foresight into this process, enabling better risk management.
Consider this scenario: a vulnerability shows a moderate EPSS score over the past month, but hasn't made it to the KEV list yet. LEV looks at the historical trend and may reveal a 70% likelihood that it's already been used. That one insight could push a security team to act quickly, patching a hole before it's weaponized further.
This isn't just theory. By layering LEV into existing vulnerability management tools, SecOps teams can:
Fill gaps left by KEV delays
Reduce reliance on purely forward-looking models like EPSS
Strengthen incident response and threat hunting initiatives
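The first of these gap-filling workflows can be sketched as a triage filter: surface CVEs that KEV hasn't caught yet but whose LEV probability suggests they were likely already exploited. The data shape and function names are illustrative, not a real tool's API:

```python
def lev_triage(vulns, threshold=0.5):
    """Return CVE ids not yet on the KEV list whose LEV probability
    meets the threshold, highest likelihood first.
    `vulns` maps CVE id -> (lev_probability, in_kev); this shape is
    an assumption for illustration."""
    return sorted(
        (cve for cve, (lev, in_kev) in vulns.items()
         if not in_kev and lev >= threshold),
        key=lambda cve: vulns[cve][0],
        reverse=True)

inventory = {
    "CVE-A": (0.70, False),  # likely exploited, not yet in KEV -> flag
    "CVE-B": (0.90, True),   # already in KEV -> handled by KEV workflow
    "CVE-C": (0.10, False),  # low likelihood -> deprioritize
}
print(lev_triage(inventory))  # ['CVE-A']
```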
What's equally important is transparency. LEV isn't a black box. Its foundation is statistical logic and publicly understood data like EPSS scores and confirmed KEV entries. That makes it easier for organizations to justify their prioritization decisions, something increasingly important in compliance-heavy environments.
Of course, no system is perfect. LEV's accuracy hinges on the quality of EPSS data and the logic behind its weighting models. It doesn't "know" whether a vulnerability has been exploited; it only calculates the probability. But this predictive layer, if validated against real-world cases, could quickly become indispensable.
NIST's openness to collaboration signals a positive trend. They're not pushing LEV as a finished product, but as a living framework, one that can evolve through feedback and real-world use. As cyber threats grow more complex, that adaptability will be crucial.
In short, LEV offers more than just numbers; it offers foresight. And in cybersecurity, that might be the most valuable asset of all.
Fact Checker Results:
The LEV model is a real metric published by NIST and CISA in May 2025.
It does not replace EPSS or KEV, but complements them with a new probability layer.
The approach is based on historical EPSS trends and aims to estimate exploitation likelihood retrospectively.
Prediction:
Over the next 12 to 24 months, LEV is likely to become an industry standard among vulnerability management frameworks. Expect integration into leading cybersecurity platforms, from SIEM tools to patch management suites. With increasing regulatory pressure to act on unconfirmed threats and reduce exposure windows, organizations will adopt LEV not just as a tool, but as a core part of their cyber defense strategy. Those who embrace it early will gain a critical edge in anticipating the next wave of attacks.
References:
Reported By: cyberpress.org