Traps of Measuring Security Risks

We naturally take comfort in being able to quantify the vague challenges in our lives.  This past week, I was again reminded that the cup of information security is partially filled with the complexities of human perception and the ambiguity of emotions, both of which weigh on our mental models of judgment.  These can be misleading.

This is not a revelation.  I thrive in the trenches of security measures and metrics, and I learned this lesson many seasons ago.  But it is so easy to fall back into the comfort of measuring, calculating, estimating, and even predicting risks from first impressions, forgoing proper data collection and dispassionate analysis.

It is in our very nature to apply our big cognitive brains in an attempt to make sense of whatever troubles us when we encounter situations we cannot fully grasp.  We default to familiar structures of logic and experience to gain some insight, even if that insight is invalid.  If we cannot grasp a cloud, it makes us feel better to at least measure it.

I recently travelled to the beautiful city of Shanghai.  In a sprawling city of 19 million, getting around requires the use of a local taxi.  Drivers are aggressive by American standards.  They creatively use all lanes, including those of oncoming traffic, to weave at high speed between pedestrians, other vehicles, and bicycles.  Roadway controls such as speed signs, stoplights, and lane markers are merely cosmetic.  The concept of ‘right of way’ belongs to whichever vehicle gets there first.  Tens of thousands of taxi drivers vie for pole position at every light and traffic snarl.  I counted no fewer than half a dozen head-on near misses the first day.

Not surprisingly, I was a bit concerned for my safety.  But what was the actual risk?  It seemed high, with all the jockeying, the speed challenges, and the lurching in front of other cars at a moment’s notice.  In formal terms, the security risk calculation was off the map.  Keeping it simple, risk can be defined as (threat) x (consequence) x (vulnerability).  Threats were abundant and vectoring in from every angle.  Vulnerabilities were painfully obvious, as the situation was near-uncontrolled chaos heavily dependent upon human judgment and intervention.  Lastly, the consequences registered as likely life-threatening.  Vehicle safety measures are not equal to US standards, with no airbags and rarely a functioning seatbelt.  My brain did the rough math and formed a mental model whose conclusion landed somewhere near the “I’m screwed” end of the spectrum.
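To make that simple formula concrete, here is a minimal Python sketch of a multiplicative risk score.  The 1-to-5 ordinal scale and the taxi ratings are illustrative assumptions on my part, not measurements, and real risk models are far more nuanced.

```python
# A toy illustration of the multiplicative model above:
# risk = threat x consequence x vulnerability.
# The 1-5 ordinal scale and the example ratings are hypothetical.

def risk_score(threat: int, consequence: int, vulnerability: int) -> int:
    """Combine the three factors into a single relative score (1-125)."""
    for name, value in (("threat", threat),
                        ("consequence", consequence),
                        ("vulnerability", vulnerability)):
        if not 1 <= value <= 5:
            raise ValueError(f"{name} must be on a 1-5 scale, got {value}")
    return threat * consequence * vulnerability

# First impression of the Shanghai taxi ride: every factor looks maxed out.
gut_feel = risk_score(threat=5, consequence=5, vulnerability=5)   # 125

# After a week of observing the actual incident rate, the threat and
# vulnerability estimates drop, even though a crash would still be severe.
revised = risk_score(threat=2, consequence=5, vulnerability=2)    # 20

print(f"First impression: {gut_feel}, after observation: {revised}")
```

The point is not the particular numbers, but how quickly a gut-feel score saturates when every factor is estimated from first impressions rather than observed data.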

Over time, I started to take a different perspective.  By the end of the week, and after too many close calls to count, I observed that the city’s taxis did not show damage consistent with rampant numbers of collisions.  Although chaotic and unpredictable, the drivers found a balance that avoided impacts.  My drivers never appeared nervous.  Many were happy to take calls on their cell phones while racing into oncoming traffic and weaving back into our directional flow at the last second.  Yet they were not worried.  The pedestrians who seemed intent on walking into the direct paths of vehicles always looked up at the last possible moment and jumped out of the way of an untimely demise.

The dangers were still there.  Nothing changed but my perception.  The risks were high and the controls were low, but the incident rate was the telling measure.  The lack of vehicle accidents in such a tremendous population meant the drivers operated in an efficient manner my brain could not comprehend as safe.  But it was.  My initial evaluation misled me to a wrong conclusion: an inaccurate determination of risk.  I felt safer than before.  To this day, I cannot comprehend how they do it.

In our world of information security, we must take a step back from our limitations and biases and stay true to proper forms of analysis in order to see the truth.  It is far too easy to slip backwards and inaccurately measure the risk of situations we don’t understand.  Let’s continue to remind each other of this and challenge risk assessments, especially in situations where concern is more prevalent than fact.

About Matthew Rosenquist

Matthew Rosenquist is a Cybersecurity Strategist for Intel Corp with 20+ years in the field of security. He specializes in strategy, measuring value, and developing cost-effective capabilities and organizations that deliver optimal levels of security. Matthew helped with the formation of the Intel Security Group, an industry-leading organization bringing together security across hardware, firmware, software, and services. An outspoken advocate of cybersecurity, he strives to advance the industry, and his guidance can be heard at conferences and found in whitepapers, articles, and blogs.