What the industry really thinks — and what it means for your 2026-27 strategy.
Crystal-ball gazing? Not quite.
You're reading this because you want the real picture — not recycled LinkedIn takes and AI hype dressed up as insight.
For the fourth year running, CybSafe and the SebDB Community surveyed the people actually doing the work: the CISOs, human risk managers, security awareness leads, and GRC professionals navigating 2026 in real time. This is what they said.
Three things every security leader needs to know right now:
AI is now the number one priority — and the number one anxiety. 47.2% of respondents believe AI will increase human cyber risk. Only 13.6% think it will reduce it. The consensus is in: governance is the gap, and 2026 is the year it gets filled.
Security teams are being locked out of AI decisions. Over two thirds say AI adoption is being driven by IT or senior leadership — yet only 15.6% say security has the authority to restrict AI tools. The people most responsible for risk have the least say.
The maturity gap is real — and closing it matters. Just 1 in 3 organisations has an active human risk strategy. 44.4% describe their maturity as 'developing'. The firms that move now will define what good looks like.
47.2%
say AI will increase human cyber risk — the report's defining finding, for the second year running
<1 in 5
say security teams have any oversight over AI adoption in their organisation
1 in 3
firms has an active, reviewed human risk strategy
60.4%
expect their security awareness investment to increase in 2026
Only 13.8%
are very confident their metrics reflect real human cyber risk — the remaining 86.2% are somewhat confident, not confident, or unsure
28.6% vs 52.5%
cite regulatory compliance as the top leadership metric — down sharply from last year
50.4%
grew their security awareness and human risk teams this year — only 14% shrank
1 in 5
security teams still have no formal human risk program at all
47.2%
say AI will increase human cyber risk
Only 12.6%
feel very prepared for AI-related risks — 63.4% are only somewhat ready
<1 in 5
say security has the authority to restrict AI tools in their organisation
66.8%
say AI adoption is being driven by IT or senior leadership — security is barely in the room
Q: Do you believe AI will ultimately reduce or increase human cyber risk? — n=500
AI will increase human cyber risk: 47.2%
Too early to say: 28.2%
AI will reduce human cyber risk: 13.6%
I'm not sure: 7.8%
Neither / no change: 3.2%
44.4%
describe their human cyber risk maturity as 'developing'
Only 1 in 3
organisations has an active, reviewed human risk strategy
Only 13.8%
are very confident their metrics reflect real human cyber risk
1 in 5
teams still review metrics on an ad-hoc basis — no structured cadence
Q: How would you describe your organisation's overall maturity in managing human cyber risk? — n=500
Developing: 44.4%
Defined: 24.4%
Optimised: 19.0%
Ad-hoc: 5.4%
Leading: 3.8%
I'm not sure: 3.0%
27.8%
top 2026 priority: addressing AI-driven risk — second year at number one
44.2%
say AI governance is the must-have capability for security teams in 2026
60.4%
expect investment in security awareness to increase in 2026
28.6% vs 52.5%
cite regulatory compliance as the top leadership metric — a major year-on-year drop
Q: What will be your top security awareness and/or human risk management priority for 2026? — n=500
Addressing AI-driven risk: 27.8%
Embedding security culture: 25.2%
Reducing phishing risk: 20.0%
Meeting regulatory requirements: 11.2%
Measuring behaviour accurately: 7.4%
Supporting remote workers: 5.2%
I'm not sure: 2.6%
This report is for:
CISOs, CIOs, and data protection officers
Human risk and security awareness managers
Cyber risk analysts and GRC professionals
Security teams navigating AI governance for the first time
Anyone who's tired of the fluff and wants the actual data
This report covers:
Who's really driving AI adoption in your organisation — and who should be
How prepared security teams actually are for AI-related human risk
The metrics leadership cares about most (and the ones they're ignoring)
Where human risk maturity stands across the industry in 2026
What security teams say would actually improve their programs
Unfiltered expert commentary on regulation, culture change, and what comes next