11 February 2026
You keep saying “the human side of cybersecurity.” Which one do you mean? (There are six.)
A common security phrase is slippery. CybSafe CEO Oz Alashe explores the “human side of cybersecurity” and outlines six ways people use the term

CybSafe CEO Oz Alashe examines what people mean by “the human side of cybersecurity,” why that matters, and how to surface and check your blind spots.

After a recent talk I gave, three people came up to chat about the “human side” of cybersecurity.

  • One talked about developer practices and secure coding.
  • Another talked about culture, leadership, and how we make decisions about risk.
  • The third talked about everyday behaviors that raise or reduce exposure.

They all used the same phrase. But each had a different meaning in mind (and nobody was wrong).

Same words, different worlds.

The “human side” (or indeed “human factors”, or “the human aspect”... the people bit) is broad enough to cover almost anything involving people, decisions, or behavior. That makes it useful as a direction, but as a shorthand it’s risky as f**k.

“The people bit” is a security discipline in its own right, and it’s relevant across many different industries. But we’re in danger of talking past each other, because it covers such a kaleidoscope of issues.

And without clarity, we end up misaligned, we argue while agreeing, and we leave blind spots.

To cut the cross-talk, I’ve mapped the six domains of the “human side”. I’m sharing it here because I think it’ll help. (It might even shave time off your meetings, but no promises.)

Six ways people talk about the “human side”

Below, I break the human side into six domains. This isn’t a hierarchy, nor is it a maturity model, and the edges overlap. Each domain is valid on its own, and together they give you the fuller picture.

I want to give you clarity so you can say which one you mean, see where it overlaps with others, and spot the blind spots before they become regrets.

1) Observed human behavior (what people do)

This is the visible stuff. The clicks, choices, and habits that move risk up or down in the real world. CISOs and security leaders reach for this meaning when they want ground truth on real-world behavior and how it tracks to exposure. Practitioners and program leads use it to keep conversations anchored in evidence and trend lines. It’s the language of “did the behavior move, and by how much?”

This looks like:

  • password reuse vs. password managers
  • MFA on or off
  • locking a device
  • handling data carefully
  • reporting something odd when it pops up
  • saying “no” to a rushed request for access at 4:59 p.m.
  • using AI tools safely instead of pasting sensitive content into a public model

It’s also technical behavior like:

  • writing code with secrets kept out of repos
  • keeping dependencies current
  • configuring access thoughtfully
  • approving changes with care when pressure is high (i.e. always)

Why it matters: Outcomes live here. What people do is what you get.

2) Organizational and contextual influences (why people do what they do)

No one makes choices in a vacuum. People respond to the environment they’re in. CISOs, HR, engineering leadership, and UX leaders use this focus to align culture, incentives, workload, and tooling with risk objectives. It frames the human side as conditions and constraints that shape outcomes at scale. The conversation is about operating norms and making the secure path the easy path.

Influences include:

  • leadership signals
  • incentives
  • time pressure
  • tool friction
  • policy clarity
  • psychological safety
  • peer norms
  • whether raising a hand gets you thanked or scolded

Developer teams feel this in sprint velocity, tooling, and the clarity of security requirements. Operations teams feel it in alert fatigue and the weight of on-call. And *everyone* feels it when the secure path takes ten clicks and the risky path takes two.

Why it matters: If you want different outcomes, change the conditions, not just the slogans.

3) Management and intervention responses (how organizations address what people do)

This is how we try to influence and manage human-related risk. It’s the operating-model view. CISOs, GRC, and program owners use it to talk about mechanisms, measurement, and accountability: what gets deployed, how it’s tested, and how it’s sustained. Platform and automation teams join here to turn good practices into repeatable systems.

That might look like:

  • training and education
  • targeted campaigns
  • well-timed nudges
  • drills and practical exercises
  • measurement and analytics
  • policies that make sense (and get used)
  • insider risk and trust programs
  • human risk platforms
  • automation that removes manual chasing
  • cross-functional work with risk, compliance, and HR
  • feedback loops that learn, then improve

Why it matters: You need systems that turn intent into repeatable outcomes, not one-off heroics.

4) Human–technology interaction and AI (the emerging frontier)

Here the human side meets automation and intelligent systems. CISOs, CTOs, architects, and AI governance groups use this focus to place human decision-making alongside machine decision-making. It’s about roles, boundaries, oversight, and human-in-the-loop expectations. The focus is on a reliable handshake between people and machines.

That looks like:

  • how people rely on (or distrust) AI recommendations
  • prompt hygiene
  • data leakage risks
  • model poisoning from a well-crafted payload
  • human+AI teaming in incident response
  • oversight for machine-made decisions
  • guardrails that are strong enough to help and light enough to use

Why it matters: The surface area is growing, fast, and the human+AI handshake is now a security domain of its own.

5) Security & risk workforce, leadership, and resilience

This is the human side as it applies to the people doing the security work itself. CISOs and people leaders use this lens when talking about the teams behind the program. Capacity, capability, leadership pipeline, and team health sit at the center. Collaboration with IT and engineering shows up here as a core part of sustained performance.

It looks something like this:

  • hiring and growing talent
  • team culture
  • diversity and belonging
  • leadership development
  • burnout and stress
  • crisis response and recovery
  • collaboration with IT, engineering, HR, and the business
  • the real constraints that shape performance when everything’s on fire

Why it matters: Healthy, capable teams make better security decisions, consistently.

6) Governance, ethics, and accountability

This is the leadership layer. Boards, executives, legal, privacy, audit, and the CISO converge on this angle to define ownership and trust. It’s where responsibility is assigned, guardrails are set, and transparency is agreed. The conversation is about aligning human-risk oversight with strategy and values.

The focus here is on:

  • who owns human cyber risk
  • how decisions get made
  • how monitoring is done, and how transparent it is
  • where privacy lines are drawn
  • how fairness is upheld
  • how strategy and values show up in real controls

Why it matters: Trust is an outcome, too. If people don’t trust the system, they route around it.

Why we talk past each other

Each domain is legitimate. But if we don’t say which one we mean, we get crossed wires.

One team says “the human side” and wants to talk about daily behavior. Another hears the same phrase and starts discussing leadership, incentives, and culture. A third heads straight to AI oversight. Everyone is correct, no one is aligned.

The result might sound familiar to you: ambitious goals, fuzzy scope, patchy ownership, and reports that count activity while missing outcomes. It’s not bad intent. It’s fuzzy language. 

The fix is effective and, I’ll admit, a little boring: 

Define terms at the start and keep the map in view.

Then whoever you’re in discussion with knows exactly where you’re coming from. In your next meeting, name your focus.

Name your focus, acknowledge the rest

You don’t need to cover all six domains to do great work. You do need to know which ones you’re addressing and how they relate.

If you’re focused on observed behavior, say so. Measure the behaviors that move risk.

If you’re shaping context and conditions, say so. Show how incentives and friction change outcomes.

If you’re building management responses, say so. Prove what works and automate it.

And if you’re working in the other three domains (that is, AI interaction, workforce and leadership, governance and ethics), call that out. 

So, where on the map does CybSafe sit (and why)?

Most of our effort lives in the first three domains:

1. Observed human behavior, because outcomes depend on what people actually do.

2. Organizational and contextual influences, because capability, motivation, and opportunity shape those choices.

3. Management and intervention responses, because you need systems that identify risky patterns, test what changes them, and automate what works at scale.

That’s where we spend our time understanding real behavior, influencing it with evidence, and turning good interventions into repeatable workflows. And, as you now know, it’s not the whole human side of cybersecurity, but it’s a critical part of it, and it connects into the others rather than replacing them.

How to use the map in practice

Now you have the map, you can:

  • Say which domain you mean when you use “the human side.”
  • Name one behavior or condition you want to change.
  • Pick one small intervention and a simple, credible metric (a rough sketch follows this list).
  • Test it, keep what moves the metric, automate it, and share the outcome.
  • Note the neighbor domains that’ll amplify or constrain your impact (culture, tooling, leadership, governance).
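
To make the “simple, credible metric” step concrete, here’s a minimal sketch in Python of the kind of before-and-after check you might run. It isn’t CybSafe’s product logic; the MFA example, field names, and dates are illustrative assumptions, and the data would come from wherever you already record the behavior.

# A minimal sketch: measuring one behavior (MFA enablement) before and after
# an intervention. Field names and dates are illustrative assumptions.
from datetime import date

INTERVENTION_DATE = date(2026, 1, 15)  # e.g. when the nudge campaign launched

# Hypothetical snapshots pulled from an identity provider or device-management export
records = [
    {"user": "a@example.com", "mfa_enabled": True,  "snapshot": date(2026, 1, 1)},
    {"user": "b@example.com", "mfa_enabled": False, "snapshot": date(2026, 1, 1)},
    {"user": "a@example.com", "mfa_enabled": True,  "snapshot": date(2026, 2, 1)},
    {"user": "b@example.com", "mfa_enabled": True,  "snapshot": date(2026, 2, 1)},
]

def mfa_rate(rows):
    """Share of snapshot rows with MFA enabled."""
    return sum(r["mfa_enabled"] for r in rows) / len(rows) if rows else 0.0

before = [r for r in records if r["snapshot"] < INTERVENTION_DATE]
after = [r for r in records if r["snapshot"] >= INTERVENTION_DATE]

print(f"MFA adoption before: {mfa_rate(before):.0%}")
print(f"MFA adoption after:  {mfa_rate(after):.0%}")

The point isn’t the code, it’s the shape: one behavior, one date, one number you can defend in a meeting.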

Work on clarity first, then progress and scaling will follow more easily.

If it helps…

We’ve written the Ultimate HRM guide for a deeper dive into human risk. It covers what human risk management is, how it actually works, the behaviors that drive risk, the metrics leaders care about, and the first steps to put it to work.

For more thinking like this, the Behave Hub is a good place to hang out.

And remember, I’m with you on this journey, one clear definition and one real outcome at a time.