Security Without Barriers, Part Three: Reporting Cyber Resilience to Stakeholders

The first post in this blog series explained why organizations should shift their security narrative from a prevention-focused strategy to a more flexible cyber resilience strategy. In the second post, we identified some important priorities when implementing cyber resilience in an organization.

In this final installment, we’ll explore common mistakes when reporting cyber security posture to stakeholders and suggest alternate approaches.

Why Report Metrics

Providing metrics and status reports is an important part of any successful cyber security program. CISOs need to tell the C-suite how their security investments are providing value, they need to report to boards how well IT risk is being managed, and they need to explain to other business leaders how security efforts impact their operations. It’s easy to share technical details of a program—but for many business leaders this is a foreign language. Oftentimes stakeholders simply want status boiled down to one thing: Are we secure?

Pitfalls to Avoid

The Wrong Data

One common reporting mistake is providing irrelevant metrics. It’s tempting to build dashboards showing operational data from security tools. These numbers are easy to obtain and can show how much work the security team performs every day. However, metrics such as the number of threats blocked by a firewall do not help stakeholders understand how well a program is performing.

Providing operational data to report the state of a security program is akin to a home inspection report that tells a homeowner how many raindrops the roof stopped. While the metric is factual, the homeowner simply wants to know whether the roof is in good shape or needs repair. Similarly, stakeholders want to know how effective the security program is, not how many attacks were blocked.

Days Without Incident

Another reporting misstep is leaning too heavily on the absence of incidents as a measure of success. Tracking incidents, time to remediation and similar operating statistics is an important way to gauge a team’s ability to respond to attacks. However, reporting “zero incidents” as evidence of success can create a false sense of security and can even undermine a security team’s ability to secure funding and support.

Reporting zero security incidents is often a sign that a security program lacks the tools and resources to detect attacks. In fact, the Mandiant Security Effectiveness Report found that 53% of attacks are neither detected nor prevented by security controls. Rather than reporting on the number of incidents, focus on reporting the time to detect, contain and remediate incidents. Educate stakeholders on how these metrics show whether incident mitigation capabilities are working as intended.
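
To illustrate the kind of metrics worth surfacing, here is a minimal sketch that computes mean time to detect, contain and remediate from incident lifecycle timestamps. The field names and incident data are hypothetical assumptions for illustration only; in practice these timestamps would come from your ticketing or incident response platform.

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records (illustrative data only).
incidents = [
    {"occurred":   datetime(2020, 9, 1, 8, 0),
     "detected":   datetime(2020, 9, 1, 9, 30),
     "contained":  datetime(2020, 9, 1, 13, 0),
     "remediated": datetime(2020, 9, 2, 10, 0)},
    {"occurred":   datetime(2020, 9, 10, 22, 15),
     "detected":   datetime(2020, 9, 11, 6, 45),
     "contained":  datetime(2020, 9, 11, 12, 0),
     "remediated": datetime(2020, 9, 12, 9, 30)},
]

def mean_hours(start_key, end_key):
    """Average elapsed hours between two lifecycle timestamps across all incidents."""
    return mean(
        (i[end_key] - i[start_key]).total_seconds() / 3600 for i in incidents
    )

print(f"Mean time to detect:    {mean_hours('occurred', 'detected'):.1f} h")
print(f"Mean time to contain:   {mean_hours('detected', 'contained'):.1f} h")
print(f"Mean time to remediate: {mean_hours('contained', 'remediated'):.1f} h")
```

Reported over time, trends in these three numbers tell stakeholders far more about incident mitigation capability than a count of incidents ever could.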

Making False Assumptions

Do not report on the number of completed projects while assuming the underlying controls are working as intended. Highlighting completed projects or tool deployments is a useful management metric for showing staff workload. Unfortunately, most organizations are not getting full value from the security tools they’ve deployed. Without evidence that controls have been validated and the security program is effective, reporting on projects is largely irrelevant.

The Mandiant Security Effectiveness Report found that 80% of security tools are under-utilized when deployed with out-of-the-box default settings. For example, one government entity’s firewall blocked only 24% of attacks when tested. After tuning the firewall based on those test results, the entity improved its attack-blocking performance by 74%. Had the organization assumed that simply deploying the firewall made it secure, it would have left its network exposed to threats while also undercutting the value of its firewall investment.

Improve Reporting With Mandiant Security Validation

Assessing and reporting on cyber security can be daunting, even for experienced security leaders. Despite the challenges involved, measuring and reporting are essential. The process helps align the organization around a shared vision of success and, done properly, demonstrates progress and helps prioritize next steps.

Mandiant Security Validation helps organizations measure and report on how well their security program prevents, detects, alerts on and responds to attacks across technology, processes and people. By deploying a security validation platform, organizations can:

  • Assess the effectiveness of existing tools and controls to establish a known-good baseline and determine whether tools are working as expected.
  • Optimize the configuration of their tools and processes to validate the efficacy of controls and get full value from the technologies they’ve deployed.
  • Rationalize their security investments by identifying gaps and overlaps across their security tools, and by understanding the impact of removing or adjusting components of their security infrastructure.
  • Continuously demonstrate improvement over time, particularly as unknown changes in security infrastructure might otherwise impact the performance of controls and leave the organization exposed to threats.

By leveraging Mandiant Security Validation, organizations can prove the value of their security investments to stakeholders. Validation replaces subjective security metrics with evidence-based test results that show whether security controls protect critical assets, ultimately proving cyber readiness.

Head over to our Mandiant Security Validation page for more, and watch the recent on-demand webinar from the FireEye Virtual Summit.