Most security teams start with education and training to tackle human cyber risk. It’s familiar. It feels responsible. And it’s often required.
But here’s the truth: education and training alone don’t reliably change behavior—especially at scale.
They’re just two tools in a much bigger toolbox. And by focusing only on these, organizations miss out on more effective ways to drive secure behavior and reduce risk.
This guide explores the broader range of behavior change interventions available to security teams. It draws on proven behavioral science frameworks—like the Behavior Change Wheel, COM-B model, and BCT Taxonomy—and shows how each can be applied in a cybersecurity context.
If you want to go beyond awareness and start actually changing behavior, this is your starting point.
And if you're particularly interested in the history, research basis, and use of interventions like nudging, we'd suggest a look at our research-based white paper, Nudging: The (gentle) art of persuasion.
1. Intervention functions
Source: The Behavior Change Wheel (BCW), Michie et al., 2011
Intervention functions are broad categories of activity aimed at changing behavior. They answer the question: What kind of intervention is this? The BCW identifies nine intervention functions:
1. Education
- Application: Providing people with information to build their understanding of cybersecurity threats and how to respond to them. Education helps increase awareness and corrects misconceptions about risk.
- Example: Hosting webinars, sharing explainer videos, and distributing tip sheets on spotting phishing emails, using strong passwords, or managing data securely.
2. Persuasion
- Application: Using communication techniques to influence attitudes, perceptions, and emotional responses. This function aims to make secure behaviors feel more valuable and compelling.
- Example: Launching campaigns that highlight personal stories of cyber incidents—or positive outcomes from secure behaviors—to reinforce why it matters.
3. Incentivization
- Application: Introducing rewards that motivate people to adopt secure behaviors. This can help turn one-off actions into consistent habits by reinforcing what “good” looks like.
- Example: Recognizing team members who consistently report phishing, use password managers, or complete optional security training—with public shoutouts, rewards, or points.
4. Coercion
- Application: Establishing deterrents to discourage risky or non-compliant behaviors. Coercion works by attaching meaningful consequences to insecure actions.
- Example: Making it clear that repeated policy violations (like sharing passwords) could lead to formal warnings, revocation of access, or disciplinary measures.
5. Training
- Application: Building practical skills to improve security decision-making and behavior. Training provides a hands-on way to prepare people to deal with real threats.
- Example: Running phishing simulations, interactive exercises, or live workshops where staff practice identifying and handling suspicious activity.
6. Restriction
- Application: Putting in place controls that limit opportunities to behave insecurely. Restrictions help remove high-risk options from everyday decision-making.
- Example: Preventing installation of unapproved software, enforcing multi-factor authentication, or limiting access to sensitive data by role.
7. Environmental restructuring
- Application: Adjusting the physical or digital environment to promote more secure choices. This function leverages design to make secure behaviors easier or more natural.
- Example: Using privacy screens on desks, auto-lock settings on devices, or interface changes that guide people toward safe file-sharing options.
8. Modeling
- Application: Demonstrating secure behavior through respected or influential individuals. Modeling helps normalize best practices by making them visible and relatable.
- Example: Leaders regularly talking about security in team meetings—and visibly practicing good behaviors like logging out of shared systems.
9. Enablement
- Application: Removing obstacles that prevent people from acting securely. Enablement ensures people have the tools, support, and capacity they need to follow best practices.
- Example: Providing easy-to-use password managers, help desk support for suspicious emails, or one-click access to report a security concern.
2. Policy categories
Source: The Behavior Change Wheel (BCW), Michie et al., 2011
Policy categories represent the mechanisms that support and enable interventions. They are the levers organizations can pull to make intervention functions effective:
- Guidelines – e.g. documented best practices for safe data handling
- Environmental/social planning – e.g. creating quiet zones for secure work or digital defaults that nudge secure behavior
- Communication/marketing – e.g. brand-aligned security awareness campaigns
- Legislation – e.g. mandating compliance through external laws, such as the General Data Protection Regulation (GDPR)
- Service provision – e.g. offering secure communication tools
- Regulation – e.g. compliance and policies that set out how employees should manage passwords, report phishing, or complete regular security training
- Fiscal measures – e.g. investing budget in behavioral tools or user training
These categories act as systemic enablers for behavior change interventions.
3. Behavior change techniques (BCTs)
Source: Behavior Change Technique Taxonomy (v1), Michie et al., 2013
Where intervention functions are categories, BCTs are the granular components of change—specific methods shown to shift behavior. There are 93 identified techniques. Below are five examples frequently used in cybersecurity contexts:
- Prompt/cue – e.g. real-time nudges during risky actions
- Goal setting – e.g. “report 3 phish this week” challenges
- Feedback on behavior – e.g. dashboards showing users their risk scores and what led to them
- Social comparison – e.g. showing team leaderboard for safe behaviors
- Action planning – e.g. step-by-step guides for secure app use
See Appendix A for a curated list of the 20 most relevant BCTs for cybersecurity and human risk.
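To make the "prompt/cue" technique concrete, here is a minimal sketch of a just-in-time nudge fired when a user is about to take a risky action. All names here (`RISKY_DOMAINS`, `nudge_before_download`) are illustrative assumptions, not a real product API:

```python
# Hypothetical sketch of the "prompt/cue" BCT: a real-time nudge at the moment of risk.
# RISKY_DOMAINS and nudge_before_download are illustrative names, not a real API.

RISKY_DOMAINS = {"example-filesharing.net", "unknown-host.io"}

def nudge_before_download(url: str) -> str:
    """Return a nudge message if the download looks risky, else an empty string."""
    host = url.split("//")[-1].split("/")[0]
    if host in RISKY_DOMAINS:
        return (f"Heads up: {host} is not an approved file source. "
                "Consider using the company file share instead.")
    return ""

print(nudge_before_download("https://unknown-host.io/report.xlsx"))
```

The key design point is timing: the cue appears at the decision moment, not in a training module weeks earlier.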
4. NUDGES: six principles of good choice architecture
Source: Nudge theory, Thaler & Sunstein, 2008, and Choice architecture, Thaler, Sunstein & Balz, 2014
NUDGES is a practical framework for structuring environments and decisions so people are more likely to make better choices, without removing their freedom to choose. It stands for:
- N – iNcentives – Align rewards and costs with the desired behavior.
- Example: Offer recognition or small rewards for quickly reporting phishing attempts.
- U – Understand mappings – Make it easy for people to see the link between their choice and the outcome.
- Example: Show how enabling MFA reduces the risk of account compromise by a clear percentage.
- D – Defaults – Set the secure option as the default so inaction leads to a safer outcome.
- Example: Configure systems so MFA is automatically enabled for all new accounts.
- G – Give feedback – Provide clear, timely information on performance.
- Example: Show users their phishing simulation results immediately after completion.
- E – Expect error – Design systems to be resilient to mistakes.
- Example: Add confirmation steps before sending sensitive data externally.
- S – Structure complex choices – Simplify security decisions into clear, manageable steps.
- Example: Provide a guided wizard for setting up secure home networks.
By applying NUDGES, security practitioners can design interventions that make secure behavior the path of least resistance.
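Two of these principles, Defaults and Expect error, translate directly into system design. The sketch below assumes a hypothetical account provisioning flow; `Account` and `confirm_external_send` are illustrative names, not a real system API:

```python
# Hypothetical sketch of two NUDGES principles: secure Defaults and Expect error.
# Account and confirm_external_send are illustrative names, not a real system API.
from dataclasses import dataclass

@dataclass
class Account:
    username: str
    mfa_enabled: bool = True   # Defaults: inaction leaves the user in the safer state

def confirm_external_send(recipient: str, contains_sensitive: bool) -> str:
    """Expect error: add a confirmation step before sensitive data leaves the org."""
    if contains_sensitive and not recipient.endswith("@ourcompany.example"):
        return f"Confirm: send sensitive data to external address {recipient}? (y/n)"
    return "ok"

acct = Account("jsmith")   # MFA is on unless someone deliberately opts out
print(acct.mfa_enabled)    # True
print(confirm_external_send("partner@other.example", contains_sensitive=True))
```

Note that neither mechanism removes choice: users can still opt out of MFA or confirm the external send, but the safe outcome is now the path of least resistance.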
5. Persuasive system design (PSD)
Source: PSD Model, Oinas-Kukkonen & Harjumaa, 2009
The PSD model identifies digital design principles that influence user behavior and motivation. Key persuasive strategies include:
- Tailoring – e.g. personalizing nudges or security content by risk profile
- Self-monitoring – e.g. letting users track their own secure behavior over time
- Reminders – e.g. prompting users periodically to complete actions or review behaviors
- Social support – e.g. using teams or peer reinforcement to drive security culture
These elements can be embedded into apps, workflows, and digital interfaces.
6. COM-B model
Source: The Behavior Change Wheel (BCW), Michie et al., 2011
The COM-B model underpins the BCW and offers a diagnostic lens to identify what needs to change. It states that behavior (B) is influenced by:
- Capability – e.g., do people have the knowledge and skills to behave securely (e.g., using MFA)?
- Opportunity – e.g., are there environmental and social enablers to secure behavior (e.g., regularly backing up documents)?
- Motivation – e.g., do they want to or see value in behaving securely (e.g., using strong passwords)?
Example: Assessing whether employees have the skills, tools, and incentives to follow secure practices, then tailoring interventions accordingly. Interventions should map back to one or more of these domains to be effective.
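As a rough illustration of using COM-B as a diagnostic, the sketch below maps survey scores to the domains an intervention should target. The score fields and thresholds are illustrative assumptions, not part of the model itself:

```python
# Hypothetical sketch: a COM-B style diagnostic for one target behavior.
# The 1-5 scores and the threshold are illustrative assumptions, not part of COM-B.

def comb_diagnosis(capability: int, opportunity: int, motivation: int,
                   threshold: int = 3) -> list[str]:
    """Scores are 1-5 (e.g. from a staff survey); return the COM-B domains to target."""
    gaps = []
    if capability < threshold:
        gaps.append("capability: add training or how-to guidance")
    if opportunity < threshold:
        gaps.append("opportunity: provide tools or restructure the environment")
    if motivation < threshold:
        gaps.append("motivation: use persuasion, incentives, or modeling")
    return gaps

# e.g. staff know how to use MFA (4) and have access to it (4) but don't see the point (2)
print(comb_diagnosis(capability=4, opportunity=4, motivation=2))
```

The value of the diagnostic step is that it stops teams defaulting to training when the real gap is opportunity or motivation.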
7. Human factors interventions
Source: Human Factors and Ergonomics discipline (Reason, 1990; Dekker, 2011)
This approach focuses on designing systems to reduce human error and increase reliability:
- Usability testing – e.g. ensuring security tools don’t create unnecessary friction
- Workload optimization – e.g. avoiding alert fatigue or excessive multi-step processes
- Human-machine interface design – e.g. clear, intuitive feedback in security warnings
This perspective is especially useful when secure behavior fails because of poor system design, not poor intention. These strategies aim to make the secure action the easiest and most natural choice.
8. MINDSPACE
Source: MINDSPACE - Influencing behaviour through public policy, UK Cabinet Office & Institute for Government, 2010
A framework of nine powerful influences on human behavior: Messenger, Incentives, Norms, Defaults, Salience, Priming, Affect, Commitments, Ego. It’s often used in public policy and marketing, but applies directly to security.
- Messenger – People are more influenced by who communicates information than by the message itself.
- Incentives – Rewards and penalties shape decisions.
- Norms – People tend to do what they perceive others are doing.
- Defaults – People tend to go with pre-set options.
- Salience – People focus on what’s novel and relevant to them.
- Priming – Exposure to cues influences later behavior.
- Affect – Emotional responses shape decisions.
- Commitments – People like to follow through on promises.
- Ego – People act in ways that make them feel good about themselves.
Cybersecurity examples:
- Using a respected CISO (Messenger) to deliver security updates.
- Setting MFA as the default (Defaults).
- Sharing stats that “90% of staff report suspicious emails” (Norms).
- Framing phishing reporting as part of being a trusted professional (Ego).
MINDSPACE works by harnessing subtle psychological cues to steer behavior without removing choice.
9. EAST
Source: EAST - Four simple ways to apply behavioural insights, Behavioural Insights Team, 2014
EAST distills behavior change into four practical principles: Easy, Attractive, Social, Timely.
- Easy – Reduce friction and complexity.
- Attractive – Make the desired behavior appealing or rewarding.
- Social – Harness social norms, peer influence, and group identity.
- Timely – Deliver interventions at the right moment.
Cybersecurity examples:
- Easy: One-click “report phish” button in email.
- Attractive: Gamify security challenges with leaderboards.
- Social: Team-based security challenges with public recognition.
- Timely: Send phishing prevention tips immediately after a simulated failure.
EAST offers a quick design checklist for interventions likely to succeed, making it simple to build secure behaviors into everyday workflows.
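The "Timely" principle in particular lends itself to automation: deliver the tip at the teachable moment, immediately after a simulated failure. The sketch below assumes a hypothetical phishing-simulation hook; `on_simulation_click` is an illustrative name:

```python
# Hypothetical sketch of EAST's "Timely" principle: feedback at the teachable moment.
# on_simulation_click is an illustrative callback name, not a real simulation API.

def on_simulation_click(user: str, clicked_link: bool) -> str:
    """Called by a hypothetical phishing-simulation hook when a user acts on a test email."""
    if clicked_link:
        return (f"{user}: that was a simulated phish. "
                "Tip: hover over links to check the real destination before clicking.")
    return f"{user}: nice catch - reported and removed."

print(on_simulation_click("jsmith", clicked_link=True))
```

Delivering the tip seconds after the click, rather than in next quarter's training, is what makes the intervention timely.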
10. Fogg Behavior Model
Source: A Behavior Model for Persuasive Design, BJ Fogg, 2009
The model states that Behavior = Motivation × Ability × Prompt. All three must occur together for behavior to happen.
- Motivation – Desire to act (e.g., avoid risk, gain reward).
- Ability – Capacity to act (skills, time, resources).
- Prompt – The trigger or cue to act.
Cybersecurity examples:
- Motivation: Showing employees how phishing prevention protects their clients.
- Ability: Providing a password manager to make secure credentials easy.
- Prompt: Pop-up warning when a suspicious link is clicked.
The model highlights that missing any one factor will significantly lower behavior adoption.
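The model's core claim can be expressed in a few lines: behavior happens only when motivation, ability, and a prompt converge. This is a minimal sketch with illustrative names, not an implementation of Fogg's full model (which also treats motivation and ability as continuous and trading off against each other):

```python
# Hypothetical sketch of the Fogg model's core claim: behavior happens only when
# Motivation, Ability, and a Prompt occur together. Names are illustrative.

def behavior_occurs(motivation: bool, ability: bool, prompt: bool) -> bool:
    """B = M x A x P: all three factors must be present at the same moment."""
    return motivation and ability and prompt

# A pop-up warning (prompt) only works if the user also cares (motivation)
# and the safe action is easy (ability):
print(behavior_occurs(motivation=True, ability=True, prompt=True))   # True
print(behavior_occurs(motivation=True, ability=False, prompt=True))  # False
```

In practice this means diagnosing which factor is missing before choosing an intervention: a reminder campaign (prompt) cannot fix an ability gap.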
11. Knowledge–Attitudes–Behaviors (KAB)
Source: Health education & communication theory
KAB suggests behavior change happens in three stages:
- Knowledge – People first learn about an issue.
- Attitudes – They form opinions and values around it.
- Behaviors – They act on those attitudes.
Cybersecurity examples:
- Knowledge: Training module on risks of password reuse.
- Attitudes: Campaign showing real-world consequences of a breach.
- Behaviors: Adoption of password managers across the company.
This model reinforces the need to move beyond knowledge alone to influence real-world outcomes.
Summary table
| Intervention type | Source framework | Cybersecurity examples |
| --- | --- | --- |
| Intervention functions | BCW (Michie et al., 2011) | Training, restriction, enablement |
| Policy categories | BCW (Michie et al., 2011) | Guidelines, planning, regulation |
| Behavior change techniques | BCT Taxonomy v1 (Michie et al., 2013) | Feedback, social comparison, action planning |
| Nudging | Thaler & Sunstein, 2008; Thaler, Sunstein & Balz, 2014 | Incentives for reporting, visual mappings of action-outcome, default MFA, feedback on risk, error-proofing, structured setup wizards |
| Persuasive design | PSD Model (Oinas-Kukkonen & Harjumaa, 2009) | Tailoring, reminders, social support |
| COM-B model | BCW (Michie et al., 2011) | Diagnostic lens: capability, opportunity, motivation |
| Human factors | Reason, 1990; Dekker, 2011 | Usability testing, interface design |
| MINDSPACE | Cabinet Office & IfG, 2010 | Messenger, defaults, incentives |
| EAST | Behavioural Insights Team, 2014 | Easy, attractive, social, timely interventions |
| Fogg Behavior Model | BJ Fogg, 2009 | Motivation, ability, prompt convergence |
| KAB | Health communication theory | Knowledge → attitudes → behaviors |
_____
Appendix A: Top 20 behavior change techniques for HRM
The 20 techniques listed here are drawn from the 93 in the Behavior Change Technique Taxonomy (v1) but have been selected for their particular relevance to human risk management in cybersecurity. They are among the most widely used and evidence-supported BCTs in behavior change research, and they map well to the kinds of interventions security teams can realistically deploy in organizational settings.
These techniques are not ranked in order of importance. Their effectiveness depends heavily on context, audience, and how they are combined with other interventions. The same BCT can be transformative in one setting and less effective in another.
Source: BCT Taxonomy v1, Michie et al., 2013
| BCT name | Description | Cybersecurity example |
| --- | --- | --- |
| Feedback on behavior | Provide data or information about the user's actions | Weekly dashboards showing phish reporting or secure behavior rates |
| Prompts/cues | Trigger behavior with well-timed reminders | Just-in-time nudge when a risky file is about to be downloaded |
| Goal setting (behavior) | Encourage people to set specific behavioral goals | "Report 5 phishing emails this month" or "Enable MFA by Friday" |
| Self-monitoring of behavior | Allow users to track their own actions | Risk scorecards or personal progress trackers on security habits |
| Social comparison | Present others' performance for context or motivation | Team leaderboard showing top reporters of phishing |
| Action planning | Help people plan when, where, and how to act | Step-by-step guide for sharing sensitive info securely |
| Info about social/environmental consequences | Explain broader impacts of behavior | "One weak password can expose the entire department" |
| Behavioral practice/rehearsal | Create opportunities to try the behavior | Phishing simulations or password-setting walkthroughs |
| Instruction on how to perform the behavior | Give clear how-to guidance | Video or micro-content on how to use a password manager |
| Demonstration of the behavior | Show what good looks like | Short video of a peer securely transferring a document |
| Credible source | Information from a trustworthy figure | Security leaders or external experts sharing advice or stories |
| Incentive (material/social) | Provide rewards for secure behavior | Recognition schemes for secure habits (e.g. prize draw) |
| Framing/reframing | Change the way choices are described | "Strong passwords protect people, not just data" |
| Salience of consequences | Highlight the significance of potential outcomes | Real case studies of incidents caused by minor missteps |
| Restructuring physical/social environment | Change the environment to support behavior | Lock screens after inactivity or removing risky defaults |
| Monitoring by others without feedback | Let users know their behavior is observable | "Security logs are monitored for unusual data transfers" |
| Identity associated with changed behavior | Reinforce identity linked to behavior | "Cyber Champions" program or secure behavior badges |
| Adding objects to the environment | Introduce tools that support behavior | Installing password managers or secure file transfer tools |
| Habit formation | Encourage repetition in the same context | Daily practice of checking links before clicking |
| Reduce prompts/cues (for bad behavior) | Remove triggers of insecure habits | Disabling links in suspicious emails or blocking risky URLs |
For a complete list of all 93 BCTs, see: https://www.bct-taxonomy.com/pdf/BCTTv1_PDF_version.pdf