Bring Uncertainty Back

Written by tdmv | Published 2018/11/23
Tech Story Tags: risk-management | decision-making | decisions | bring-uncertainty-back | risk-measurement


We need to bring uncertainty back to risk measurements.

Suppose I ask you to measure the wingspan of a Boeing 747. Right now, wherever you may be, with the knowledge and tools you have on hand. You may say this isn’t possible, but Doug Hubbard has taught us that anything can be measured, once you understand what measurement is. With that mental hurdle out of the way, you can now measure the wingspan of a Boeing 747.

There are two different approaches to this in modern business.

Option 1: Think about the size of a passenger jet and say, “Big.”

Technically, this answers my question. There's a problem with this answer, however: it's neither precise nor accurate. In everyday language, the words precise and accurate are used interchangeably. In areas of science where measurements matter, they mean different things: accurate means a measurement is close to the true value, while precise means repeated measurements agree closely with one another.

The word "big" is an adjective describing an attribute of something, but without context or a frame of reference for comparison, it's virtually meaningless. Furthermore, using an adjective in place of a measurement is a little dishonest. It's true that we don't know the exact wingspan of a 747, and wingspans vary by model anyway. Yet we chose a word, "big," that sounds like a confident, settled answer while being neither precise nor accurate. If that weren't bad enough, we've completely obfuscated our level of uncertainty about our ability to estimate the wingspan of a 747.

Option 2: What Would Fermi Do?

Thinkers like Enrico Fermi and Doug Hubbard approach the problem differently. They — just like us — probably don’t know the wingspan of a 747 off the top of their heads. Just like Fermi estimated the number of piano tuners in Chicago simply by thinking through and decomposing the problem, we can do the same.

  • I’ve seen a 747 and even flown on one several times, so I have some frame of reference.
  • I’m 6'2", and I know a 747 is larger than I am.
  • A football playing field is 100 yards (300 feet), and I’m sure a 747’s wingspan is smaller than that.
  • My first estimate is between 6'2" and 300 feet. Let’s improve this.
  • I know what a Chevy Suburban looks like; they are 18 feet long. How many Suburbans, parked front to back, would equal a 747’s wingspan? Maybe 7 is a safe number. That’s 126 feet (the short sketch after this list runs the arithmetic).
  • I’m going to say that the wingspan of a 747 is between 126 feet and 300 feet.
  • Am I 90% sure that the actual number falls within this range (a 90% confidence interval)? Let me think through my estimates again. Yes, I am sure.

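As a quick sanity check, here is a minimal sketch in Python that just runs the arithmetic from the list above. Every figure in it is the illustrative guess from the bullets, not data.

```python
# A minimal sketch of the Fermi-style decomposition above.
# All figures are the guesses from the bullet list, not measurements.

MY_HEIGHT_FT = 6 + 2 / 12        # 6'2" -- the obvious lower bound
FOOTBALL_FIELD_FT = 300          # playing field length -- the obvious upper bound

SUBURBAN_LENGTH_FT = 18          # a familiar object to decompose with
SUBURBANS_PER_WINGSPAN = 7       # "7 is a safe number"

lower_bound_ft = SUBURBAN_LENGTH_FT * SUBURBANS_PER_WINGSPAN  # 126 ft
upper_bound_ft = FOOTBALL_FIELD_FT                            # 300 ft

print(f"First pass: {MY_HEIGHT_FT:.1f} ft to {FOOTBALL_FIELD_FT} ft")
print(f"Refined 90% confidence interval: {lower_bound_ft} ft to {upper_bound_ft} ft")
```
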
Let’s check our estimate against Google. Depending on the model, a 747’s wingspan runs from roughly 196 feet to about 225 feet (around 211 feet for the common 747-400), comfortably inside our range of 126 to 300 feet.

It’s a good measurement.

Two remarkable things happened here. First, using the same data as “big,” but a different mental model, we made a measurement that is accurate. Second, we expressed our uncertainty about the measurement; namely, we introduced error bars.

One missing data point is whether the level of precision is adequate. To answer this, we need to know why I asked for the measurement. Is it to win a pub trivia game, or to build an airplane hangar to store a 747? Our minds are instruments of measurement. We may not be as accurate as a tape measure, which is not as accurate as a laser distance measurer, which is not as accurate as an interferometer. All instruments of measurement have error bars. When deciding how much precision a measurement needs, we always have to weigh the cost of obtaining new information against whether that information is relevant and whether we actually need further uncertainty reduction to make the decision.
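
As a rough illustration of that trade-off, here is a small sketch with hypothetical hangar widths (not from the original example): when the current interval already settles the decision, paying for more precision buys nothing; when the decision threshold falls inside the interval, more precision has real value.

```python
# Does our current interval already answer the question at hand?
# Hangar widths below are hypothetical, chosen only for illustration.

wingspan_low_ft, wingspan_high_ft = 126, 300   # our 90% confidence interval

def decision_is_clear(hangar_width_ft: float) -> bool:
    """True if the whole interval leads to the same decision:
    either every plausible wingspan fits in the hangar, or none does."""
    fits_at_low = wingspan_low_ft <= hangar_width_ft
    fits_at_high = wingspan_high_ft <= hangar_width_ft
    return fits_at_low == fits_at_high

print(decision_is_clear(320))  # True  -- even the upper bound fits; no further measurement needed
print(decision_is_clear(200))  # False -- the answer depends on where in the range the truth lies
```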

If this seems like a nice story to you, but one that’s not too relevant — think again.

Using adjectives like “red” or “high” in place of real measurements of risk components (e.g., probability, impact, control strength) is neither precise nor accurate. Even worse, uncertainty is obscured behind the curtain of an adjective that feels exact but is not. The reader has no idea whether the rating came from a careful measurement, using a mixture of historical data, internal data, and many calibrated subject matter experts, or whether it was made by a guy named Bob sitting in an office, pondering the question for a few seconds and then saying, “That feels High.”
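
To make that contrast concrete, here is a minimal sketch (not the author’s model, and not the FAIR standard itself) of what replacing “High” with calibrated ranges might look like: a tiny Monte Carlo run over an annual event probability and a per-event loss range. Every number and the choice of uniform distributions are assumptions for illustration; a real analysis would use calibrated estimates and better-fitting distributions such as lognormal.

```python
# Replacing "High" with ranges: a toy Monte Carlo over invented inputs.
import random

random.seed(7)
TRIALS = 10_000

# 90% confidence intervals supplied by hypothetical calibrated estimators.
event_prob_low, event_prob_high = 0.05, 0.20   # annual probability of the event
impact_low, impact_high = 50_000, 400_000      # loss per event, in dollars

losses = []
for _ in range(TRIALS):
    p = random.uniform(event_prob_low, event_prob_high)
    if random.random() < p:                    # does the event occur this simulated year?
        losses.append(random.uniform(impact_low, impact_high))
    else:
        losses.append(0.0)

losses.sort()
print(f"Chance of any loss this year: {sum(l > 0 for l in losses) / TRIALS:.0%}")
print(f"90th-percentile annual loss:  ${losses[int(0.9 * TRIALS)]:,.0f}")
```

Even this crude sketch communicates both a magnitude and how uncertain we are about it, which is exactly what “High” hides.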

Managing risk is one of the most important things a business can do to stay in business. It’s time to bring uncertainty back to risk measurements. It’s the honest thing to do.

About the author:

Tony Martin-Vegue is a writer, speaker and risk expert with a passion for data-driven decision making. He brings his expertise in economics, cyber risk quantification and information security to advise senior operational and security leaders on how to integrate evidence-based risk analysis into business strategy. He has led risk teams for several Bay Area financial institutions and, in the words of his eight-year-old son, has spent much of the last 20 years “fighting criminals on the internet.” Tony is also the chair of the San Francisco chapter of the FAIR Institute, a professional organization dedicated to advancing risk quantification. Please visit www.tonym-v.com for more information.

