Standardising Cyber-Resilience Metrics

A proposal for Eurostat.

Mansfeld-Südharz, Germany - December 5, 2025

From county dashboards to Union-wide statistics: a seven-indicator draft that fits on one A4 page and still survives an audit

Statistics are only as useful as the door they unlock. For ten years Eurostat has published quarterly figures on broadband penetration and ICT investment, but it has never asked a municipality, a clinic or a nine-person carpentry shop how long they can survive when their last domain controller stops responding. The absence is not accidental: every previous attempt to quantify cyber resilience drowned in the swamp of proprietary risk matrices, each insisting that its own blend of likelihood, impact and control maturity was the one true formula. The Cyber Resilience Alliance has now drafted a seven-indicator sheet that can be filled out by a part-time accountant in twelve minutes, yet still passes the plausibility gate of an ISO 27001 auditor. We sent the draft to Eurostat’s Unit B4 last month and received an invitation to the next Working Group on Digital Indicators. This article explains why we believe the Union needs a common yardstick and how we designed one that does not become a reporting monster.

The starting point is a simple observation: existing regulations already force enterprises to keep records that contain the raw material for resilience metrics, but the data is expressed in incompatible languages. NIS2 asks for “significant incident” notifications, GDPR asks for “personal data breach” timing, and the upcoming Cyber-Resilience Act will ask for “vulnerability handling” evidence. None of the texts prescribes how to normalise the answers, so every national authority reinvents the spreadsheet. The result is a mosaic that cannot be aggregated into a European picture, which means policy makers fly blind when allocating recovery funds or calibrating aid intensity. Our proposal therefore does not introduce new questions; it translates disparate records into a common numerical grammar that can be stitched together at Union level without exposing commercially sensitive detail.

Indicator one is “Maximum Tolerable Downtime exceeded (yes/no)”, recorded once per calendar year. The threshold itself remains a business decision: a dairy that needs its milking parlours online every four hours may declare eight hours intolerable, while a law firm may choose seventy-two. What matters for statistics is not the absolute number but the binary breach, because Eurostat already collects the sector and size classes that allow normalisation. If 12 % of dairy SMEs in a region breach their self-declared limit, the figure is instantly comparable to the 4 % breach rate among legal services, and regional divergence can be mapped without exposing any single firm’s continuity plan. The binary design also blunts threshold gaming: a firm that shortens its downtime target simply raises its own breach probability, while an implausibly generous target still has to pass the same auditor plausibility gate as the rest of the continuity plan.
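The county-level aggregation is easy to reproduce. A minimal sketch in Python, with toy records and illustrative field names that are not part of the draft sheet:

    from collections import defaultdict

    # One record per firm per year: (sector, MTD breached yes/no).
    records = [
        ("dairy", True), ("dairy", False), ("dairy", False),
        ("legal", False), ("legal", False), ("legal", False), ("legal", True),
    ]

    # Aggregate to a breach rate per sector; no firm-level detail
    # survives the aggregation.
    totals = defaultdict(lambda: [0, 0])  # sector -> [breaches, firms]
    for sector, breached in records:
        totals[sector][0] += int(breached)
        totals[sector][1] += 1

    for sector, (breaches, firms) in totals.items():
        print(f"{sector}: {100 * breaches / firms:.1f} % breach rate")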

Indicator two measures “Recovery Time Objective achievement” expressed as a percentage of incidents where the actual recovery time stays below the objective. Again, the objective itself is enterprise-specific, but the percentage creates a European numerator. The Alliance’s shared SOC has piloted the definition for nine months across 127 members and finds that a simple stopwatch from first abnormal log to service restoration produces reproducible figures within a five-percent error margin, small enough for national accounts yet large enough to discourage forensic hair-splitting. The percentage is recorded quarterly, so seasonal businesses such as tourism or agriculture do not distort annual averages, and the median value is reported instead of the mean to neutralise outlier catastrophes that would otherwise dominate small-population regions.
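The per-firm percentage and the regional median compose as follows, sketched with invented incident data and assuming recovery times have already been extracted from the logs:

    from statistics import median

    # Per-incident pairs for one firm in one quarter:
    # (actual recovery time, objective), both in minutes.
    incidents = [(45, 60), (190, 120), (30, 60), (55, 60)]

    # Indicator two: share of incidents recovered within the objective.
    achieved = sum(actual <= objective for actual, objective in incidents)
    rto_achievement = 100 * achieved / len(incidents)
    print(rto_achievement)  # 75.0

    # The regional figure is the median across firms, not the mean,
    # so a single catastrophe cannot dominate a small region.
    firm_values = [75.0, 100.0, 66.7, 90.0, 20.0]
    print(median(firm_values))  # 75.0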

"We do not need more indicators; we need indicators that can travel from a dairy farm in Saxony to a courthouse in Sicily without translation loss."

Indicator three is “Patch latency of critical vulnerabilities”, captured as the median number of days between publication of a CVE and deployment of the remedial package. The data already exists in every WSUS or Ansible log, so no additional tooling is required. We truncate the measurement at ninety days because beyond that horizon the CVE is either mitigated by compensating controls or the asset no longer exists in the same configuration. Early pilots show that the median across Alliance SMEs drops from twelve days to four once the metric is published on a county dashboard, evidence that measurement itself is a behavioural intervention. If replicated at Union scale, the indicator would provide the first cross-border velocity curve for patch management, allowing regulators to calibrate grace periods in future legislation without resorting to the abstract “without undue delay” language that today generates endless litigation.
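One plausible reading of the truncation rule, sketched with invented day counts; in practice the values would be parsed from the same WSUS or Ansible logs:

    from statistics import median

    TRUNCATION_DAYS = 90  # beyond this horizon the CVE is treated as
                          # mitigated or the asset as reconfigured

    # Days from CVE publication to deployment of the remedial package.
    latencies = [2, 5, 7, 12, 40, 130, 210]

    # Cap each observation at the truncation horizon, then take the median.
    truncated = [min(days, TRUNCATION_DAYS) for days in latencies]
    print(median(truncated))  # 12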

Indicator four turns to the human layer: “Security training coverage”, defined as the percentage of employees who completed at least one structured cyber-hygiene course during the past twelve months. The denominator is headcount, the numerator is the number of employees with at least one certified completion, and the threshold is set at eighty percent to align with the upcoming NIS2 supervisory guidance. The indicator is collected via an API that queries existing LMS or payroll systems, so firms are not asked to maintain yet another roster. Because the query is limited to completion status, no personal data leaves the HR database, satisfying both GDPR minimisation and national statistical secrecy. The resulting regional heat-map can be correlated with breach data to quantify the return on awareness investment, a linkage that has so far been claimed but rarely measured.
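The minimisation property is easy to state in code. A sketch that assumes the LMS query has already been reduced to anonymous completion booleans, which is our reading of the design rather than a documented API:

    # Hypothetical LMS response, minimised to completion status only:
    # no names, no course titles, nothing else leaves the HR system.
    completions = [True, True, False, True, False, True, True, True]

    headcount = len(completions)
    covered = sum(completions)
    coverage = 100 * covered / headcount

    NIS2_THRESHOLD = 80  # percent, per the supervisory-guidance alignment
    print(coverage, coverage >= NIS2_THRESHOLD)  # 75.0 False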

Indicator five quantifies “Third-party risk exposure” as the percentage of critical suppliers that have published a current SOC 2 Type II or ISO 27001 statement. The question sounds ambitious, yet the answer is discoverable through open-source intelligence: the Alliance crawler queries the certification databases of TÜV, DEKRA and Swiss Accreditation and returns a simple ratio. Pilots in Saxony-Anhalt show that the average SME relies on forty-three cloud services, but only eleven possess a valid certificate; the twenty-six-percent coverage becomes a baseline against which improvement can be tracked without forcing smaller vendors into audit fatigue. If the metric is adopted Eurostat-wide, certification bodies gain a market incentive to offer lightweight, low-cost schemes for the long tail of micro-providers, indirectly raising the security floor across the entire supply chain.
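The arithmetic behind the pilot figure, using the Saxony-Anhalt averages quoted above:

    # Crawler output for one firm: critical suppliers found, and how
    # many hold a current SOC 2 Type II or ISO 27001 statement.
    suppliers_total = 43
    suppliers_certified = 11

    exposure_coverage = 100 * suppliers_certified / suppliers_total
    print(f"{exposure_coverage:.0f} %")  # 26 %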

Indicator six is “Incident reporting timeliness” measured as the median hours between detection and first notification to the national CSIRT. The clock stops when the structured email leaves the enterprise mail gateway, a timestamp that is machine-readable and impossible to retro-edit. The indicator satisfies the policy imperative to encourage rapid disclosure while avoiding the punitive reflex that usually drives under-reporting. Because only the median is published, firms that report late are not singled out, yet the regional median still exposes systemic sluggishness. During the pilot year the median in Anhalt-Bitterfeld fell from thirty-six hours to nine, primarily because the shared SOC offers a template report that pre-fills technical fields, cutting the human hesitation loop. Eurostat has confirmed that the definition aligns with the Commission’s proposed Cyber-Incident Reporting Framework, so no future adjustment will be necessary when the regulation takes effect.
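A sketch of the median computation, assuming detection and gateway timestamps have already been paired; the values are invented, except that the output echoes the nine-hour pilot median:

    from datetime import datetime
    from statistics import median

    # (detection, first CSIRT notification) pairs; the second timestamp
    # comes from the mail-gateway log and cannot be retro-edited.
    incidents = [
        (datetime(2025, 3, 1, 8, 0), datetime(2025, 3, 1, 14, 30)),
        (datetime(2025, 5, 9, 22, 15), datetime(2025, 5, 10, 7, 15)),
        (datetime(2025, 8, 20, 9, 0), datetime(2025, 8, 21, 21, 0)),
    ]

    hours = [(sent - detected).total_seconds() / 3600
             for detected, sent in incidents]
    print(median(hours))  # 9.0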

Indicator seven is “Resilience investment ratio”, calculated as cyber-security expenditure divided by total ICT expenditure. The ratio is expressed as a percentage and collected annually through the existing ICT survey that Eurostat already sends to enterprises with more than ten employees. No additional questionnaire is therefore required; the numerator is simply an extra row in the same spreadsheet. The ratio captures the behavioural shift from treating security as a compliance cost to treating it as a capital investment. Regions where the ratio exceeds twelve percent show measurably lower breach rates in the shared SOC, providing policy makers with an evidence base to target aid or tax incentives. Because the metric is a ratio, it is scale-invariant: a 50-person bakery and a 5 000-person factory can be compared directly, solving the long-standing problem that absolute spend figures favour large firms and distort regional averages.
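Worked through on invented annual figures, the ratio is a one-line computation:

    # Two rows from the existing ICT survey (figures illustrative).
    ict_spend_eur = 240_000
    security_spend_eur = 31_200  # the extra row in the same spreadsheet

    ratio = 100 * security_spend_eur / ict_spend_eur
    print(f"{ratio:.1f} %")  # 13.0 %, above the twelve-percent band
                             # the pilot associates with lower breach rates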

Taken together, the seven indicators consume less than one person-day per year for a 250-employee firm, because every data point is either already stored in an existing system or can be generated by a single PowerShell script that the Alliance open-sources under EUPL-1.2. The resulting dataset is small enough to be transmitted through the encrypted channel that Eurostat uses for sensitive micro-data, yet rich enough to support regression analysis between resilience inputs and incident outcomes. Early feedback from the Commission’s B4 unit suggests that the framework could be piloted in 2027 across six NUTS-2 regions and, if stable, rolled into the regular ICT statistics cycle by 2029. That timeline would give Europe its first comparable picture of cyber resilience just as the Cyber-Resilience Act reaches full enforcement, turning measurement from a bureaucratic afterthought into a frontline policy instrument.
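What the transmitted record might look like: a sketch of one annual, firm-level payload, with field names of our own invention rather than a published schema:

    import json

    # One record per firm per year; each value corresponds to one of
    # the seven indicators (contents illustrative).
    record = {
        "nuts2": "DEE0",                # reporting region
        "sector": "C16",                # NACE class, e.g. carpentry
        "size_class": "10-49",
        "mtd_exceeded": False,          # indicator 1
        "rto_achievement_pct": 75.0,    # indicator 2
        "patch_latency_days": 12,       # indicator 3 (median)
        "training_coverage_pct": 75.0,  # indicator 4
        "supplier_cert_pct": 26.0,      # indicator 5
        "report_latency_hours": 9.0,    # indicator 6 (median)
        "invest_ratio_pct": 13.0,       # indicator 7
    }

    print(json.dumps(record, indent=2))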


The Cyber Resilience Alliance is a public-private partnership established in 2025, led by CypSec, Validato and the County of Mansfeld-Südharz. The Alliance operates a sovereign private-cloud security stack, a shared SOC and a cyber academy, aiming to make Mansfeld-Südharz the reference site for rural cyber resilience by 2030.

Media Contact: Daria Fediay, Chief Executive Officer at CypSec - daria.fediay@cypsec.de.

