Rethinking Risk
By Lee Ann Jarousse, Senior Editor, H&HN
The following is a summary of Douglas W. Hubbard's remarks at GE's "Strategies for Success in the U.S. Health Care Market," prepared by H&HN editors who attended the event.
Many organizations may say their biggest cybersecurity threat is their employees, new technology, or perhaps even vendors or organized crime. But the single biggest risk is probably something even more fundamental: how your organization assesses risk. If your risk-assessment process is flawed, that flaw undermines everything else you do in risk management.
“Your organization’s biggest risk in cybersecurity, or anything else, is flawed risk assessment,” Hubbard said.
I call risk assessment a type of “apex” process. An apex process is an executive-level activity that directly influences most other processes. For example, you have to prioritize projects within your organization. If you have a hard time prioritizing projects, what should your priority be? It should be learning how to effectively prioritize projects.
However, even though such apex processes as portfolio prioritization or risk assessment are so critical, their performance is rarely measured even in organizations that diligently measure other processes. Even if an organization routinely measures its return on projects, measures performance and assesses risks, it probably doesn’t know the performance of project selection, performance metrics or risk-assessment processes themselves. Do you know the ROI on your project management method, investment approval process, or risk management itself?
Consider what is perhaps the most widely used risk-assessment tool: the risk matrix, sometimes called a heat map. Clearly, it is meant to have some bearing on many key decisions in the organizations that use it. So it would seem important to know how well the method performs: its measured improvement compared with its costs. Yet, even though these methods are sometimes called a “best practice,” that doesn’t mean proponents have actually measured their performance.
Ideally, we should base best practices on deliberate experiments run to compare one method against another, like what is done in drug trials. This is important because, just as with new drugs, there appears to be a type of placebo effect in analysis. Too often, just by going through a structured and formal process, people will feel better about their estimates. Some people even view a structured and formal process as a positive attribute of their methodology. But merely feeling more confident in your decisions is not the goal. The goal of a decision-analysis process is that your decisions are, in fact, measurably improved.
Fortunately, measurements like this already have been done for critical components of these methods. The measured components include the performance of estimates coming from your subject matter experts, how those estimates are used, how empirical data are used, and how decisions are ultimately optimized based on these inputs. The component that may have been measured the most is the performance of the subject matter experts themselves. It has been shown that there are several ways to improve decision-making just by managing certain types of errors that are common to virtually all experts and decision-makers. Here are just a few examples:
• Research has shown that we are overconfident in our estimates; in other words, our chance of being right is much less than we think. (A simple calibration check, sketched after this list, is one way to measure this.)
• We are also highly inconsistent in our estimates of risks. Measurements done by Hubbard Decision Research showed that, on average, random inconsistency accounted for 21 percent of the variation in expert judgment.
• In addition to random variations in estimates of risk, how much risk we are willing to accept also changes due to irrelevant external factors. Studies have shown how emotional states and even simply being shown smiling faces can have a significant but unconscious influence on risk tolerance.
• Finally, assessing risk is ultimately about dealing with probabilities, and people often have difficulty dealing with probability intuitively. Numerous types of cognitive errors have been documented and, without doing the math, most decision-makers are likely to commit these errors in regard to major risk-management decisions.
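To make the overconfidence point concrete, here is a minimal sketch of one way a calibration check can be run: ask an expert for 90 percent confidence intervals on quantities whose true values are later learned, then count how often the truth falls inside the interval. The intervals and actual values below are purely hypothetical, for illustration only.

```python
# Minimal calibration check: an expert gives 90% confidence intervals
# (low, high) for quantities whose true values become known later.
# A well-calibrated expert should capture the truth ~90% of the time;
# overconfident experts capture it far less often.
# All numbers below are hypothetical.

estimates = [
    # (low, high, actual)
    (100, 300, 450),
    (10, 20, 18),
    (5, 15, 2),
    (1000, 2000, 2500),
    (50, 80, 75),
    (200, 400, 900),
    (3, 6, 4),
    (30, 60, 45),
    (7, 12, 10),
    (500, 900, 1400),
]

hits = sum(1 for low, high, actual in estimates if low <= actual <= high)
hit_rate = hits / len(estimates)

print(f"Stated confidence: 90%, observed hit rate: {hit_rate:.0%}")
# An observed hit rate well below 90% (here 50%) is the signature of
# overconfidence; calibration training is aimed at closing that gap.
```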
So, how does the most popular risk-assessment method, the risk matrix, address these issues? It doesn’t. It even appears to add new errors. Tony Cox Jr., who holds a Ph.D. in risk analysis from the Massachusetts Institute of Technology, has probably studied risk matrices more than anyone else, and he has concluded that they are often "worse than useless." Worse than useless means that organizations have spent time and money implementing these frameworks only to make decisions that are worse than they would have been otherwise.
Fortunately, research also points to methods that control for these errors and improve overall performance. For our book, "How to Measure Anything in Cybersecurity Risk," Richard Seiersen and I interviewed more than 170 cybersecurity professionals and asked them 86 questions, including whether they had experienced any actual data breaches within the past three years and how they assessed risk. We found that professionals who could compute the probability of different levels of losses reported fewer breaches. Even though this result is statistically very strong, if it were the only evidence I had, I wouldn’t put too much stock in it. But the finding also agrees with a lot of other research showing that even simple statistical models outperform human experts in fields as wide-ranging as the outcomes of sporting events, the prognosis of kidney disease and the failure rates of businesses.
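As an illustration of what "computing the probability of different levels of losses" can look like, here is a minimal Monte Carlo sketch along the lines of the simple quantitative models described above. The risks, their annual probabilities and their loss ranges are invented placeholders, not figures from the survey.

```python
import math
import random

# Minimal Monte Carlo sketch: each risk has an annual probability of
# occurring and a 90% confidence interval for the loss if it occurs.
# Losses are modeled as lognormal (a common choice for cost impacts).
# All risks and numbers below are hypothetical placeholders.

risks = [
    # (annual probability, 90% CI lower bound, 90% CI upper bound) in dollars
    (0.10, 50_000, 500_000),     # e.g., ransomware outage
    (0.05, 100_000, 2_000_000),  # e.g., large data breach
    (0.25, 10_000, 100_000),     # e.g., phishing-related fraud
]

def lognormal_from_ci(lower, upper):
    """Draw a loss from a lognormal whose 5th/95th percentiles match the 90% CI."""
    mu = (math.log(lower) + math.log(upper)) / 2
    sigma = (math.log(upper) - math.log(lower)) / (2 * 1.645)
    return random.lognormvariate(mu, sigma)

def simulate_year():
    total = 0.0
    for prob, lower, upper in risks:
        if random.random() < prob:
            total += lognormal_from_ci(lower, upper)
    return total

trials = 10_000
losses = [simulate_year() for _ in range(trials)]

for threshold in (100_000, 500_000, 1_000_000):
    exceed = sum(1 for loss in losses if loss > threshold) / trials
    print(f"P(annual loss > ${threshold:,}): {exceed:.1%}")
```

The output amounts to a few points on a loss exceedance curve: the chance that total annual losses exceed each threshold, which is exactly the kind of statement a heat map cannot make.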
The good news is that we don’t have to invent anything new. The methods that work best are, unsurprisingly, those that have been used by actuaries for decades or longer. Doing the math lets us estimate a return on risk mitigation and prioritize risk-management approaches. In contrast, we don’t know how to do the math for red, yellow and green, and high, medium and low. What’s more valuable — moving four medium risks to a low risk or one high risk to a medium risk? Some object to this approach, arguing that actuaries have the luxury of lots of data points in their analysis. This is inaccurate. You need less data than you think and you have more than you think.
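Once risks are expressed as probabilities and dollar losses, the "four mediums versus one high" question becomes simple arithmetic. The following worked comparison uses invented probabilities and loss figures purely to show the calculation.

```python
# Comparing two mitigation options by expected annual loss reduced.
# All probabilities and dollar figures are hypothetical.

def expected_loss(prob, loss):
    return prob * loss

# Option A: reduce four "medium" risks (say, 10% chance of a $200,000 loss)
# down to "low" (say, 2% chance of the same loss).
option_a = 4 * (expected_loss(0.10, 200_000) - expected_loss(0.02, 200_000))

# Option B: reduce one "high" risk (say, 40% chance of a $1,000,000 loss)
# down to "medium" (say, 10% chance of the same loss).
option_b = expected_loss(0.40, 1_000_000) - expected_loss(0.10, 1_000_000)

print(f"Option A reduces expected annual loss by ${option_a:,.0f}")
print(f"Option B reduces expected annual loss by ${option_b:,.0f}")
# With these made-up numbers, Option B is worth far more. The point is that
# the comparison is only possible once risks are quantified in these terms.
```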
To summarize, here are things we need to do to make progress in assessing risk. First, we ought to start expressing risks in the language of probabilities. Declaring that something is merely “unlikely” or that a risk is “medium” is not a meaningful basis for critical decisions. Second, base your method on components that have been shown to work in a properly measured test. Again, your organization’s biggest risk in cybersecurity, or anything else, is flawed risk assessment.
Review the full supplement, "Strategies for Success in the U.S. Health Care Market."
Douglas Hubbard is President of Hubbard Decision Research.