By Justin Suissa

Are your risk models making you a bad risk manager? (Part 1)

On my reading list for this COVID-19 summer has been a biography of Alan Greenspan. Whatever your thoughts on his role in the 2008 financial crisis, he has an impressive legacy and one of the most distinguished career histories of any economist. One interesting fact about him was his strong skepticism of models, despite the rise, during his lifetime, of econometrics and "big data." It got me thinking about how we use risk models with our clients, and whether my experience building models and risk programs fits within that skeptical viewpoint. This two-part blog entry explores that question.

[Image: overhead view of a business meeting, participants' hands examining risk models]

What risk models do for you… and what they don't do.

First, some context. I am talking about modeling risk to information assets so that the good guys get to information when they need it, and the bad guys can't touch it. True risk managers will tell you that risk management is optimizing decisions under uncertainty, and thus risk is risk regardless of source. I strongly agree, though there are nuances to market risk, liquidity risk, and several others that don't come into play with risks in the digital world, and vice versa. We won't focus on those nuances here.

As with those market risks, though, we're still dealing with a lot of data from various sources (security event logs, threat intel, incident history… to name a few). Models help us distill this data into information. They help us look for patterns and tell us whether we are exceeding or missing benchmarks.

That diverse data is powerful input to your decision making. A model also gives you a framework to talk about a complex topic. Information security in many ways is like insurance against an invisible potential, and our brains have a hard time wrapping around that. Throw in the complexity and abstraction of today's computing systems, along with an executive's limited attention, and you have a recipe for blank stares. With a model that converts risk concepts into something these executives understand or are familiar with -- money being a good example -- you have a fighting chance of getting your point across.
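To make the "convert risk into money" idea concrete, here is a minimal sketch using the classic Annualized Loss Expectancy (ALE) formula from risk analysis. The scenario and every dollar figure below are invented for illustration, not benchmarks.

```python
# ALE = SLE * ARO: single-loss expectancy times annual rate of occurrence.
def annualized_loss_expectancy(single_loss_expectancy, annual_rate_of_occurrence):
    """Expected yearly loss, in dollars, from one risk scenario."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Hypothetical scenario: phishing-driven credential compromise.
sle = 250_000   # assumed cost per incident (response, downtime, fines)
aro = 0.4       # assumed roughly 2 incidents every 5 years
print(f"Expected annual loss: ${annualized_loss_expectancy(sle, aro):,.0f}")
# → Expected annual loss: $100,000
```

A dollar figure like this is far easier to weigh against the cost of a control than an abstract "high" rating, which is exactly the framing advantage described above.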

Like any tool, risk models can be misused. As I reflect on my experiences, or even just open a page from a history book, I can see this happens more often than we might care to admit. For one, in our desire to distill and report, models can oversimplify. A red/yellow/green rating, or a 1-to-5 scale, can hide key concepts and leave the reader unable to make accurate comparisons.
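A quick sketch of how a coarse scale hides information: two hypothetical risks with very different expected losses land in the same bucket once rated. The thresholds and loss figures are invented for illustration.

```python
def to_traffic_light(expected_annual_loss):
    """Map a dollar loss estimate onto a red/yellow/green label (assumed thresholds)."""
    if expected_annual_loss >= 1_000_000:
        return "red"
    if expected_annual_loss >= 100_000:
        return "yellow"
    return "green"

# A $110k risk and a $950k risk look identical once bucketed,
# even though one is nearly nine times the other.
print(to_traffic_light(110_000))  # yellow
print(to_traffic_light(950_000))  # yellow
```

The reader of a dashboard sees two equal "yellows" and has no way to recover the underlying difference.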

Models can also have variance and be biased. Humans build these models, and we provide the weights and inner workings that get abstracted away. Behavioral psychologists will quickly point out that it is almost impossible not to embed some bias into how we create these models. At small scale, bias may not matter much, but it can quickly compound when you string assumptions and biases together, leading to severely skewed results.

Another challenge with risk models is that they are often based on untested assumptions. Car insurers have the advantage of millions of data points (e.g., drivers, historical accidents) to improve their predictions, but doing the same for events that are far less common (like a breach of your HR system) leaves us, to some extent, guessing.
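The difference in sample size translates directly into uncertainty. Below is a rough sketch using the normal-approximation standard error for an event rate; the event counts are invented, and at breach-sized samples the approximation itself is shaky, which only reinforces the point.

```python
import math

def rate_with_std_error(events, observations):
    """Estimated event rate and its (rough) standard error."""
    p = events / observations
    se = math.sqrt(p * (1 - p) / observations)
    return p, se

# Car insurer: 50,000 claims across 1,000,000 policy-years.
p, se = rate_with_std_error(50_000, 1_000_000)
print(f"Insurer:  rate {p:.3f} +/- {se:.5f}")   # error is a rounding detail

# One company: 1 HR-system breach in 10 years of observation.
p, se = rate_with_std_error(1, 10)
print(f"Breaches: rate {p:.3f} +/- {se:.3f}")   # error nearly swamps the estimate
```

At insurer scale the estimate is tight; at breach scale the error bar is almost as large as the estimate itself, which is what "to some extent, guessing" means in practice.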

So should we throw out risk models?

Clearly not. Like Alan Greenspan, we should be skeptical of them while recognizing that their advantages are clear. Part two of this series covers three keys to minimizing the risk in risk models: validation, governance and continuity planning.
