Essential metrics for effective security program assessment
In this Help Net Security interview, Alex Spivakovsky, VP of Research & Cybersecurity at Pentera, discusses essential metrics for evaluating the success of security programs.
Spivakovsky explains how automation and proactive testing can reveal vulnerabilities and improve overall security posture.
What are the most effective metrics for measuring the success of a security program?
The most straightforward metric is: Has your organization been breached? If the answer is yes, it’s clear there’s work to be done. If the answer is no, you’re in better shape—but it’s more complicated than that. Even if you haven’t been breached, if your underlying cybersecurity metrics are declining, your security posture is deteriorating. Here are some metrics that are important for measuring success and improvement:
- Mean Time to Detect (MTTD) and Mean Time to Respond (MTTR) – These measure how quickly threats are identified and addressed, reflecting the agility and efficiency of the security operations team. Every extra minute is time the attacker has access to your environment. Run consistent testing to ensure that average MTTD and MTTR are consistently improving.
- Ability to determine risk profile vs. risk probability – Determining remediation priority involves weighing the potential impact of a vulnerability if exploited against the likelihood of it being exploited within the context of your environment. A vulnerability may rank 10/10 in criticality, but compensating controls can reduce its exploitation probability to 2/10, making it lower priority than a 7/10 criticality with an 8/10 exploitation risk. Without this insight, prioritization is far less effective.
- Overall risk assessment score over time – Repeatedly measure your overall risk profile across your complete attack surface. These assessments are based on factors like vulnerabilities, residual risk, and business impact scores, creating a security baseline score that you can track and improve over time.
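The MTTD/MTTR tracking described above can be sketched in a few lines. This is a minimal illustration, not a Pentera tool: the incident timestamps and the helper `mean_minutes` are hypothetical, standing in for whatever incident log your SOC actually keeps.

```python
from datetime import datetime, timedelta

def mean_minutes(deltas):
    """Average a list of timedeltas and return the result in minutes."""
    total = sum(deltas, timedelta())
    return total.total_seconds() / 60 / len(deltas)

# Hypothetical incident log: (occurred, detected, resolved) timestamps.
incidents = [
    (datetime(2024, 5, 1, 9, 0),  datetime(2024, 5, 1, 9, 30),  datetime(2024, 5, 1, 11, 0)),
    (datetime(2024, 5, 8, 14, 0), datetime(2024, 5, 8, 14, 10), datetime(2024, 5, 8, 15, 0)),
]

mttd = mean_minutes([d - o for o, d, _ in incidents])  # occurrence -> detection
mttr = mean_minutes([r - d for _, d, r in incidents])  # detection -> resolution
print(f"MTTD: {mttd:.0f} min, MTTR: {mttr:.0f} min")   # MTTD: 20 min, MTTR: 70 min
```

Computed consistently over each testing cycle, these two averages give the trend line the interview recommends watching: both numbers should move down over time.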
How can organizations determine whether security programs are practical rather than just meeting compliance requirements?
The short answer is continuous security testing and validation practices. As part of the CTEM framework, Gartner recently introduced a new category called Adversarial Exposure Validation (AEV) which consolidates proactive testing technologies such as Breach and Attack Simulation (BAS) and Autonomous Penetration Testing and Red Teaming. By emulating real-world attack tactics, AEV enables organizations to actively test the effectiveness of their existing defenses.
The proactive testing reveals exploitable vulnerabilities and attack paths within the context of the environment. By focusing on proven exposure, this method ensures that the security controls organizations have put into effect are practical and effective in defending against real threats, not just ticking a regulatory checkbox.
How vital are automation and continuous testing in improving the effectiveness of security programs?
Automation and software are critical to proactive security testing because they enable testing at a frequency and scale that cannot be matched by human pentesters and red-teamers. According to our 2024 State of Pentesting survey, over 60% of enterprises manually pentest their organizations at most twice a year, and these manual pentests cover no more than 20% of the IT environment.
Given how often modern IT environments change, especially those operating in the cloud, this means that most of the time a significant portion of the organization’s attack surface is untested and the effectiveness of its security controls has not been validated.
One very common use case we see is with EDR validation. Organizations believe their EDR coverage is 100%, but when tested, we reveal assets missing agents, or agents that are not leveraging the correct policy. We’ve seen instances where agents were configured to “monitor” instead of “prevent”. Without active prevention, you are reliant on the response time of the internal security team or external SOC, giving the hacker precious time to advance or even complete the exploit. With the frequent changes to IT environments, these types of misconfigurations are relatively common. Continuous testing of your live environments enables security teams to identify and quickly remediate these issues, ensuring that coverage really is 100%.
How critical is a security-first culture to the overall effectiveness of a security program?
According to the 2024 DBIR report, the human element remains the single greatest risk to organizational security, playing a factor in 68% of all successful attacks. If the only security-conscious groups within an organization are the Cyber and IT teams, then your security cannot possibly work. Internal security teams tend to be small relative to the overall organization and cannot be everywhere; all employees need to understand how their actions can impact the overall organizational security posture and practice proper security hygiene.
The other equally important aspect of a security-first culture is executive buy-in. Pentera found that the average enterprise utilizes 53 security solutions across the organization and spends over $1m annually on their IT security. As the types of threats diversify, the cost of security is rising. If executive management is not willing to commit and truly invest in security, then the chances of a breach will rise.
There is some good news on that front. We found that over 50% of CISOs reported they share the results of their pentests with the executive team and the Board of Directors (BoD). With the rise in high profile breaches, management teams are becoming more conscious of cybersecurity and the business risk it represents. We aren’t quite there yet, but the growing awareness will hopefully open the door to more investment in cybersecurity.
What approaches can security leaders use to demonstrate the value of their security programs to executive leadership and stakeholders?
Cybersecurity professionals are increasingly expected to communicate their contributions in business terms. This helps executive management and board members, who are typically more business-focused and less familiar with cybersecurity concepts, better understand the impact of security initiatives. I recommend communicating value based on Return on Security Investment (ROSI), the cybersecurity equivalent of ROI (Return on Investment).
The formula typically used is:
ROSI = (Losses avoided – Cost of security measures) / Cost of security measures
For example, if a security program prevents $1 million in potential breach losses and costs $250,000 to implement, the ROSI would be 3, meaning a return of $3 for every $1 spent on security.
Cost of security measures includes all investments made to implement and maintain the organization’s security defenses.
Calculating losses avoided:
Losses avoided represent the potential financial impact of security incidents that the implemented security measures have successfully prevented. Key components include:
- Single Loss Expectancy (SLE): The cost of a single security incident.
- Annual Rate of Occurrence (ARO): The estimated frequency of such incidents over a year, based on historical trends or industry benchmarks.
- Mitigation Rate: The effectiveness of the security controls in preventing incidents (0 to 1).

Formula: Losses Avoided = SLE × ARO × Mitigation Rate
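Putting the two formulas together, the calculation can be sketched as follows. The dollar figures below are illustrative assumptions chosen so the result matches the $1 million / $250,000 example above; they are not benchmarks.

```python
def losses_avoided(sle, aro, mitigation_rate):
    """Losses Avoided = SLE x ARO x Mitigation Rate."""
    return sle * aro * mitigation_rate

def rosi(avoided, cost):
    """ROSI = (Losses avoided - Cost of security measures) / Cost of security measures."""
    return (avoided - cost) / cost

# Hypothetical inputs: $500k per incident (SLE), 4 incidents/year (ARO),
# controls that stop half of them, at a program cost of $250k/year.
avoided = losses_avoided(sle=500_000, aro=4, mitigation_rate=0.5)  # $1,000,000
print(rosi(avoided, cost=250_000))  # 3.0 -> $3 returned for every $1 spent
```

The same spreadsheet-level arithmetic works for any program: estimate SLE and ARO from incident history or industry benchmarks, estimate the mitigation rate from validation testing, and present the resulting ROSI to the board.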