I love this time of year, with March Madness excitement in the air and my Notre Dame Fighting Irish still in the tournament (as of the writing of this column)! More importantly – yes, more importantly – I love monitoring the 538 March Madness prediction website to see how the chances of winning change through the days, after games, and even within their 40 minutes of activity.
I like doing this because it is a better representation of how cybersecurity risk works than the way we typically think about it in our field. We can watch – even in real time – how the chances of success (winning the game, moving on to the next round) and failure (losing) change with the variables during the game and the context outside of it (other games). As I watch those probabilities change – sometimes swinging wildly – I think about how cybersecurity risk changes in a similar manner, with the real-time activity in our computing environments – sessions, messages, transactions, flows, etc. – being established or sent.
Cybersecurity risk changes in real time
I don’t want to take this analogy too far because at some point it will fall flat. Suffice it to say that cyber risk quantification will do for cybersecurity what data analysis has done for March Madness. Anyone serious about winning their office pools is going to hit the books hard!
The point here is that cybersecurity risk is changing in real-time as we identify new vulnerabilities and attacks, but also when we add or remove users, implement or retire systems, or simply use existing systems more. It can be hard to recognize that the more value your IT environment is bringing to your organization, the more you stand to lose. Who really wants to tell their execs that a flourishing company squeezing more and more value from their technology resources also includes progressively increasing risk… and by the way, that’s a good thing?
It can be easy for skeptics to take potshots at cyber risk quantification efforts. Can you really tell the difference between 40% risk and 50%? How do we even know whether these numbers are real? What they often don’t realize is that cybersecurity pros are constantly incorporating and reflecting these risk decisions in the way we allocate resources throughout our programs. You cannot ignore it, because the outcomes are continuous – legitimate or fraudulent transactions, phishing or real messages, attacker or appropriate user sessions, etc.
Cyber risk quantification requires the right models
Cyber risk quantification introduces well-known forecasting methods to the cybersecurity space. With the right models and assessment information, we can manage our risks even better than we currently do.
But nobody really wants to hear about our uncertain future – they want certainty. Alas, it doesn’t exist even when we think it does. Of course, if you aren’t evaluating these predictions – our risk assessments – more closely, you might end up like Putin did, with his FSB simply telling him what he wanted to hear. With data-oriented analyses, not only can we provide predictions, but we can evaluate those predictions over time using well-established methods.
I will be measuring the March Madness predictions from 538 using a Brier score, which provides a feedback loop to help folks evaluate the success of their predictions and constantly update their models for accuracy. We can do that in cybersecurity as well.
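If you want to try this yourself, the Brier score is simple to compute: it is the mean squared difference between your forecast probabilities and what actually happened. A minimal sketch in Python, using made-up game forecasts rather than real 538 numbers:

```python
# Brier score: mean squared difference between forecast probabilities
# and binary outcomes (1 = the event happened, 0 = it didn't).
# Lower is better: 0.0 is a perfect forecaster, and always guessing
# 50% yields 0.25, a useful baseline for "no skill."

def brier_score(forecasts, outcomes):
    """Average of (forecast - outcome)^2 across all predictions."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Illustrative pre-game win probabilities (not actual 538 data)
# and whether the favored team actually won (1) or lost (0).
forecasts = [0.72, 0.55, 0.10, 0.90]
outcomes = [1, 0, 0, 1]

print(f"Brier score: {brier_score(forecasts, outcomes):.4f}")
```

The same scoring works for cybersecurity forecasts: if you predicted a 20% chance of a phishing-driven incident this quarter, record the outcome, score it, and track whether your assessments are getting sharper over time.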
So, while you are watching your favorite teams play this month, keep an eye out for these changing predictions and consider ways you can incorporate a similar approach in your cybersecurity program (cough cough, AI, cough cough).
Copyright © 2022 IDG Communications, Inc.