I want to tell you about a conversation I keep having.
It usually starts in a boardroom, or on the sidelines of a conference, or over coffee with a CISO who’s been burned before. It goes something like this:
“Oz, the problem with using behavioral data to manage human risk is that people are irrational. Unpredictable. You can’t use historical data to say anything meaningful about what a person will do next.”
And they’re right.
They’re also asking the wrong question entirely.
-----
The individual vs. the population
Here’s the thing about human behavior: at the individual level, it’s genuinely hard to predict. People are complex, context-dependent creatures. Mood, stress, distraction, perhaps the argument they had on the commute in. All of it influences what a person does at 2pm on a Tuesday when a suspicious email lands in their inbox.
I don’t dispute that. Neither does the science.
But here’s where the argument quietly falls apart. The critics are answering a question nobody knowledgeable is actually asking.
Nobody in behavioral security is trying to predict what you will do next Tuesday.
We’re trying to understand what tens of thousands of people like you tend to do under similar pressures and contexts, when they’ve had similar experiences. And that is a completely different problem. One that data solves remarkably well.
-----
Understanding the signal and the noise
Think about how an actuary works.
No actuary on Earth knows which of their customers will die this year. Not one. And yet the entire insurance industry is built on the ability to price that uncertainty profitably, reliably, and at scale. They don’t predict individuals; they understand populations. The individual is a coin flip; the portfolio is not.
Or think about epidemiology.
No public health official predicted exactly who would contract the flu last winter. But every decent public health system used historical incidence data to model outbreak probability, allocate vaccines, and save lives. The individual is noise; the population is signal.
This is not a new insight. It’s the operating logic of every field that deals seriously with risk at scale.
And yet, in cybersecurity, we keep getting dragged back into the wrong argument.
-----
The science of uncertainty
In technical terms, we’re looking at the difference between two types of uncertainty:
- Aleatory uncertainty: Irreducible randomness (like a coin flip or a roll of the dice). You can’t model it away.
- Epistemic uncertainty: Ignorance that data can reduce. With enough observations, patterns emerge, and noise averages out. The signal strengthens.
Individual human behavior contains both. At scale, though, populations collapse much of the epistemic uncertainty, leaving us with manageable probabilities.
This is not a philosophical claim. It’s why insurance, public health, and financial risk management exist and function.
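The averaging-out of epistemic uncertainty is easy to demonstrate. A minimal sketch (the 12% click rate and the single-email setup are illustrative assumptions, not real figures): each individual outcome is a coin flip, but the observed population rate converges on the underlying rate as the group grows.

```python
import random

random.seed(42)

TRUE_CLICK_RATE = 0.12  # hypothetical underlying phishing click rate


def simulate_clicks(n_people: int) -> float:
    """Simulate n independent people each facing one phishing email,
    and return the observed click rate for the group."""
    clicks = sum(random.random() < TRUE_CLICK_RATE for _ in range(n_people))
    return clicks / n_people


# One individual: the outcome is 0 or 1 -- pure aleatory noise.
print("one person:", simulate_clicks(1))

# As the population grows, the observed rate converges on the
# underlying rate: the epistemic uncertainty averages out.
for n in (10, 1_000, 100_000):
    print(f"n={n}:", round(simulate_clicks(n), 3))
```

Run it a few times with different seeds: the single-person result jumps between 0 and 1, while the 100,000-person result barely moves.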
-----
What this means for behavioral security
It means we can’t tell you whether a specific person on your team will click a phishing link tomorrow. But we can tell you that people in high-pressure roles, with low security self-efficacy, who haven’t practiced reporting recently, click at significantly higher rates.
That’s not a prediction about a person. It’s a probability distribution across a population. And it’s actionable.
The goal was never to predict individuals. The goal is to reduce the aggregate risk of a population making the wrong choices, by understanding the conditions under which those choices become more or less likely.
Mass behavioral data doesn't provide prophecy; it provides better decisions, narrower confidence intervals, and a more defensible allocation of your budget and attention.
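The point about narrower confidence intervals can be made concrete. A rough sketch using the standard normal-approximation interval for a proportion (the 12% observed rate is an illustrative assumption): the same click rate measured on a larger population yields a much tighter interval, and therefore a more defensible decision.

```python
import math


def click_rate_ci(clicks: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% normal-approximation confidence interval for an observed click rate."""
    p = clicks / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return (max(0.0, p - half_width), min(1.0, p + half_width))


# Same 12% observed rate; the interval narrows as the population grows.
for n in (50, 500, 50_000):
    clicks = int(0.12 * n)
    low, high = click_rate_ci(clicks, n)
    print(f"n={n}: {low:.3f} to {high:.3f}")
```

With 50 people the interval spans roughly ten percentage points; with 50,000 it is a fraction of one. That shrinkage is what "the population is signal" means in practice.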
-----
How CybSafe approaches the problem
At CybSafe, this is the work we do every day. SebDB maps over 100 real security behaviors to risk outcomes, and to frameworks like MITRE ATT&CK and NIST CSF. We identify the interventions most likely to shift those behaviors.
Not because we can read minds. Because the data, at scale, tells us something real and useful that no amount of individual intuition ever could.
-----
The critics are right to be cautious about overreach. Nobody should be claiming that behavioral data gives you a crystal ball.
But the alternative (running behavioral security programs on gut instinct, anecdote, and completion rates) isn’t humility. It’s just a different kind of folly. One with worse odds.
The weather forecaster can’t tell you whether you’ll need an umbrella at 3:47pm on a specific Tuesday three months from now. But they can tell you, with confidence, that London in November is wetter than London in July.
That’s not a failure of the model. That’s the model working exactly as it should.
-----
Oz Alashe MBE is the CEO and founder of CybSafe. CybSafe identifies the behavioral patterns most likely to cause an incident, runs randomized experiments to change them, and proves it worked. If it didn’t, it tells you exactly why.