Top Techniques for Measuring Security Value

Choosing the right method to measure security value is important but not necessarily intuitive.

Some years ago, at the prodding of our department training expert, I developed a class teaching how to think critically while calculating information security value.  The benefits of the course are twofold.  It helps security practitioners create more justifiable value assessments for their programs, and it helps the audiences of such assessments question the validity of claims and identify weak justifications.

I offered to teach the class once a year, internally at Intel, and figured the audience would dry up after the first session.  I honestly figured not many people would willingly choose to spend their time on such a dry subject, yet for some odd reason people continued to sign up year after year.  In the first year, mostly information security professionals attended.  In subsequent years, to my surprise, a slew of people from finance, manufacturing, marketing, and product development have taken the course.  Sitting in my inbox is my annual notification for instructing the class, with a list of students from multiple countries already signed up.  Curse you, Bruce (training expert)!

With such a diverse audience, I figured I would share some of the materials with the broader community.  This is just a snippet, but one of the key chapters.  Feel free to comment (all comments will be forwarded to Bruce).

This section of the class touches on recommended methods to show value.  It is not an all-encompassing list, but it covers the techniques most common to information security programs.  These are archetypes of measurement techniques, not specific questions or audits; most techniques in use today can be classified into one of these archetypes.  Each has a set of common characteristics with strengths, weaknesses, and applicability considerations.  Knowing these characteristics is the key to understanding how best to validate or challenge a given metric.

Information Security Metrics Archetypes

#1 Metric Type: Standards-Based Gap Analysis

Method: Compare the current state against a provided list
Measurement Scale: Nominal
Pros: Shows gaps against defined standards.  Can be very fast to accomplish compared to other methods.
Cons: Does not show actual value, only alignment to a defined state.
Applicability: Compliance to regulations, alignment to best-known-methods
Output: Scorecard to expected compliance, gap list of non-compliant areas
Notes: The value of compliance to a predefined standard resides in the applicability and comprehensiveness of the standard itself.  Typically, it is also specific to a particular area of risk.  Interpretation can also skew measures if the standard is vague.  A minimal scorecard sketch follows.
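
To make the mechanics concrete, here is a minimal sketch (in Python) of how a scorecard and gap list might be computed; the controls and assessment results are hypothetical, invented purely for illustration.

```python
# Minimal sketch of a standards-based gap analysis scorecard.
# The control list and assessment results are hypothetical.

STANDARD = {
    "AC-1": "Access control policy documented",
    "AC-2": "Account provisioning reviewed quarterly",
    "IR-1": "Incident response plan tested annually",
    "SC-1": "Data encrypted in transit",
}

# Nominal scale: each control is simply compliant or not.
assessment = {"AC-1": True, "AC-2": False, "IR-1": True, "SC-1": False}

compliant = [c for c, ok in assessment.items() if ok]
gaps = [c for c, ok in assessment.items() if not ok]

print(f"Scorecard: {len(compliant)}/{len(STANDARD)} controls compliant")
print("Gap list:")
for control in gaps:
    print(f"  {control}: {STANDARD[control]}")
```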

#2 Metric Type: Raw Gap Analysis

Method: Brainstorm from knowledgeable persons on what they think needs fixing
Measurement Scale: Nominal
Pros: Identifies the most apparent issues to correct.  May be as simple or complex as the organizer desires.
Cons: Reliant on the expertise of the teams doing the analysis.  Not tied to any quantifiable savings.
Applicability: Response to incidents which already occurred, to prevent recurrence
Output: List of issues to correct
Notes: The value resides in the knowledge of the people conducting the analysis.  A mix of technologists and security practitioners is best; otherwise, the output may lack real benefits.

#3 Metric Type: Project Progress Tracking

Method: Metrics which track the start-to-finish progress of a security project
Measurement Scale: Interval
Pros: Shows advancement and progress of a project.
Cons: Does not tie the project to any savings or benefits.
Applicability: Project management effectiveness
Output: Performance against schedule/budget metrics
Notes: This class of metric is often misused.  Progress toward project completion is largely independent of the value the project provides once instituted.  It is best used when a security project is a critical-path item for another initiative where value is defined.  A sketch of a common schedule/budget calculation follows.
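
The archetype itself does not prescribe a formula, but one widely used way to express performance against schedule and budget is earned value management.  A minimal sketch, with invented project figures:

```python
# Minimal earned-value sketch for tracking a security project.
# All dollar figures are hypothetical.

planned_value = 80_000  # PV: budgeted cost of work scheduled to date
earned_value = 60_000   # EV: budgeted cost of work actually completed
actual_cost = 75_000    # AC: actual cost of the work completed

spi = earned_value / planned_value  # schedule performance index (<1.0 = behind)
cpi = earned_value / actual_cost    # cost performance index (<1.0 = over budget)

print(f"SPI: {spi:.2f}  CPI: {cpi:.2f}")
# Note: neither index says anything about the security value delivered.
```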

#4 Metric Type: Qualitative Risk Assessment

Method: Organized collection of concerns from knowledgeable persons on what they believe needs fixing, with a statement explaining the severity of each problem
Measurement Scale: Ordinal
Pros: Generates a list of areas to address with prioritized descriptions.
Cons: Reliant on the expertise of the teams doing the analysis.  Not tied to any quantifiable savings.  Can be time consuming.  May not be comprehensive and may be skewed toward only the areas evaluated.  Personalities on the team may significantly alter the priority descriptions of items.
Applicability: Basic state of security gap analysis, scalable to an entire organization.
Output: Description of prioritized line-item gaps
Notes: This is one step above the Raw Gap Analysis method.  It is best used to identify and describe the priority of the most severe issues.  Rarely is this method comprehensive.

#5 Metric Type: Qualitative to Quantitative Risk Assessment

Method: Formal severity ranking, typically on a scale, of problems gathered from a Qualitative Risk exercise
Measurement Scale: Ordinal to Interval
Pros: Generates a prioritized list of areas to address, with relative values for comparison.  Can be tracked over time to show incremental changes.
Cons: Reliant on the expertise of the teams doing the analysis.  Relative values are not tied to any quantifiable savings.  Time consuming, and requires tools for scalability.  Expect accuracy of +/- 40%.
Applicability: Advanced state of security gap analysis, scalable to an entire organization.
Output: Ranked descriptions of line-item gaps
Notes: This is one step more advanced than the Qualitative Risk Assessment, assigning numerical values to priority aspects (for example: threat, vulnerability, and consequences).  A minimal scoring sketch follows.
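
As a minimal sketch of the idea, assuming a simple 1-5 scale for each aspect and a multiplicative score (one of several reasonable scoring schemes; the line items below are invented):

```python
# Minimal sketch: converting qualitative concerns into relative risk scores.
# The 1-5 scales and line items are hypothetical; the scores support ranking
# and tracking over time, but they are not dollar values.

concerns = [
    # (description, threat, vulnerability, consequence), each rated 1-5
    ("Unpatched internet-facing servers", 4, 5, 4),
    ("Weak passwords on internal apps",   3, 4, 3),
    ("No disk encryption on laptops",     2, 3, 5),
]

scored = [(desc, t * v * c) for desc, t, v, c in concerns]
for desc, score in sorted(scored, key=lambda item: item[1], reverse=True):
    print(f"{score:3d}  {desc}")
```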

#6 Metric Type: Vulnerability Analysis

Method: Thorough inspection that documents all vulnerabilities
Measurement Scale: Interval
Pros: Produces a list of the vulnerabilities which exist.
Cons: The existence of vulnerabilities is not tied to losses.  Output can be overwhelming and captures only a snapshot in time of a rapidly changing environment.  Can be very time consuming, and requires tools and interpretation.
Applicability: Applied to specific hardening initiatives or fed into a risk assessment
Output: Descriptions of potential vulnerabilities, may be ranked on severity or overall exposure
Notes: Vulnerability analysis correlates poorly to losses.  Just because a vulnerability exists does not mean it will be exploited, and if exploited, it does not necessarily equate to a meaningful loss.  Question any vulnerability analysis that claims specific dollar savings!  A minimal severity-summary sketch follows.
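
As a minimal sketch of turning raw findings into a ranked snapshot, assuming CVSS-style base scores (the findings are invented; a real analysis requires scanning tools and interpretation):

```python
# Minimal sketch: summarizing vulnerability findings into severity buckets.
# Hosts, findings, and CVSS-style scores (0-10) are hypothetical.
from collections import Counter

findings = [
    ("host-01", "OpenSSL version out of date", 7.5),
    ("host-02", "Default admin credentials", 9.8),
    ("host-03", "Weak TLS ciphers enabled", 5.3),
]

def severity(score: float) -> str:
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    return "low"

buckets = Counter(severity(score) for _, _, score in findings)
print(dict(buckets))  # a snapshot only; says nothing about actual losses
```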

#7 Metric Type: Against Previous Performance/Operational Efficiency

Method: Statistical comparison against historical data, known costs, and trends (example: actuary tables)
Measurement Scale: Interval to Ratio
Pros: Uses actual data to derive the measurement.  Can show the value of a program.  Can be used both to predict value and to derive sustaining value after project landing.
Cons: Significant work is required to accomplish this metric.  Accuracy may suffer as historical patterns change, and may quickly become outdated in a fast-changing environment.
Applicability: Before and after comparison of effects for value measurements.
Output: Historical performance and trend graphs showing relative positions.  Net Present Value (NPV) for operational spending.  Forecasts of high-level changes to risk.  Can provide a ‘value’ in terms of dollars.
Notes: Depending upon the historical data, it may not tie to actual security value.  Data trends in the security field tend to be incomplete and limited, and can be manipulated.  Operations costs may not reflect the benefit of security.  Best when used to compare data from before and after landing a security program.  A minimal NPV sketch follows.
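
Since the output can include Net Present Value, here is a minimal NPV sketch; the discount rate, program cost, and annual savings (measured against a historical baseline) are all hypothetical:

```python
# Minimal NPV sketch: discounting hypothetical annual savings attributed
# to a security program, measured against historical operational costs.

def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the year-0 (initial) flow."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Year 0: program cost; years 1-3: annual savings vs. the historical baseline.
flows = [-100_000, 45_000, 45_000, 45_000]
print(f"NPV at an 8% discount rate: ${npv(0.08, flows):,.0f}")
```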

#8 Metric Type: Value Calculation for a Return on Security Investment

Method: Financial model quantifying the dollar benefits of a security program
Measurement Scale: Interval to Ratio
Pros: Uses actual data to derive the measurement, based upon trends and control groups.  Potential to generate dollar values for both losses and losses prevented.  May account for defense-in-depth solutions, showing the individual as well as the cumulative value.  Statistical predictions quantify accuracy.
Cons: Extremely difficult to produce.  Must have significant amounts of accurate data and an understanding of the security environment.  Must use complex calculations and factor in unknowns.  Very difficult to scale.  Tools and processes are not well defined or mature in the industry.
Applicability: When sufficient historical data is available, an intuitive understanding of the security environment is present, and business values can be measured.  For use when a justifiable estimate of the dollar value of a security program is needed.
Output: Incident reduction metrics, estimated losses, and loss-prevented metrics.  Single Loss Expectancy (SLE), incident and loss predictions.  Derived dollar value of individual security projects as well as the value of multiple overlapping/complementary security systems.
Notes: Not for the faint of heart.  These types of analyses are ugly monsters to produce and validate.  All assumptions, calculations, and data sources must be documented, and complete raw data sets must be provided.  The analysis may include limited aspects of other measurement archetypes to fill in gaps, thereby affecting accuracy.  A minimal sketch of the core arithmetic follows.
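
As a minimal sketch of the core arithmetic, using the classic Single Loss Expectancy / Annualized Loss Expectancy formulation (all figures are hypothetical; a real model needs documented data sources, control groups, and far more rigor):

```python
# Minimal return-on-security-investment sketch using SLE and ALE.
# All figures are hypothetical.

sle = 50_000           # Single Loss Expectancy: expected loss per incident
aro_before = 4.0       # incidents per year without the program
aro_after = 1.0        # incidents per year with the program
program_cost = 90_000  # annual cost of the security program

ale_before = sle * aro_before  # ALE = SLE x ARO (annualized loss expectancy)
ale_after = sle * aro_after
loss_prevented = ale_before - ale_after

rosi = (loss_prevented - program_cost) / program_cost
print(f"Loss prevented: ${loss_prevented:,.0f}  ROSI: {rosi:.0%}")
```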

Lastly, there is another choice which can be made: the decision not to measure the value of a security program.  I think this option is pursued more often than not, and for entirely the wrong reasons.  Measuring value is not easy.  It consumes time and resources, requires expertise, and once the analysis is published, the author may be under the spotlight to answer for and justify it for years to come.  But for all the sweat, tears, and pain, having a good understanding of the value has merit for security programs of significant investment.

On the other hand, the simple reality is that in many cases a full-blown analysis does not make sense, for example when a program is required to meet regulatory requirements or when the security investment is very small.  I would not do a comprehensive value assessment to justify the purchase of a $10 cable lock.  Let common sense prevail.  If the value must be understood to compare options, articulate security posture, or justify spending, then do an assessment.  Otherwise, ask yourself if it is really needed.


About Matthew Rosenquist

Matthew Rosenquist is a Cybersecurity Strategist for Intel Corp and benefits from 20+ years in the field of security. He specializes in strategy, measuring value, and developing cost-effective capabilities and organizations which deliver optimal levels of security. Matthew helped with the formation of the Intel Security Group, an industry-leading organization bringing together security across hardware, firmware, software, and services. An outspoken advocate of cybersecurity, he strives to advance the industry, and his guidance can be heard at conferences and found in whitepapers, articles, and blogs.