Surge in cybersecurity threats as employees:
- Struggle to identify Generative AI content, with only 21% believing they can discern AI-generated text from human-written text.
- Admit to sharing sensitive information with AI tools that they wouldn’t divulge in conversation with friends at a bar.
- 69% of respondents believe the benefits of Generative AI tools outweigh the security risks. This figure jumps to 74% for US respondents.
- A significant percentage of respondents from both countries would definitely or probably continue to use AI tools, even if their company banned them.
Boston, US, 25th July 2023 – New research conducted by CybSafe, the behavioral science and data analytics company that manages organizational human cyber risk, reveals a concerning lack of awareness and education around the cybersecurity risks associated with Generative AI tools at work.
A study of 1,000 office workers across the US and UK shows that half already use AI tools at work, one-third weekly, and 12% daily. But companies are falling short: they aren’t helping workers understand the cybersecurity risks. As with any emerging cybersecurity threat or shift in workplace behavior, this leaves companies vulnerable. As AI-driven cyber threats rise, businesses are in danger. From phishing scams to accidental data leaks, employees need to be informed, guided, and supported.
Risky behavior change
CybSafe’s research highlights that 57% of US office workers use AI tools. They’re mostly used for research (44%), writing copy such as reports (40%), data analysis (38%), and writing code (15%).
The data reveals a worrying shift in workplace behavior. 64% of US office workers have entered work information into a generative AI tool, and a further 28% aren’t sure if they have. In total, 93% of workers are potentially sharing confidential information with AI tools. 38% of US Generative AI users admit to sharing data they wouldn’t casually reveal to a friend in a bar. This behavior poses significant risks.
“The emerging changes in employee behavior also need to be considered,” says Dr Jason Nurse, CybSafe’s director of science and research and associate professor at the University of Kent. “If employees are entering sensitive data, sometimes on a daily basis, this can lead to data leaks. Our behavior at work is shifting, and we are increasingly relying on Generative AI tools. Understanding and managing this change is crucial.”
Moreover, the research indicates a significant number of employees are not confident in their ability to discern AI-generated text from human-written text. Over 60% in both the UK and the US are either unsure or doubt their ability. This uncertainty could make them more prone to AI-led cyber attacks.
Nurse adds: “Generative AI has enormously reduced the barriers to entry for cyber criminals trying to take advantage of businesses. Not only is it helping create more convincing phishing messages, but as workers increasingly adopt and familiarize themselves with AI-generated content, the gap between what is perceived as real and fake will narrow significantly.
“As Generative AI infiltrates the workplace, it’s building a cyber-superhighway for criminals. Half of us are using AI tools at work, and businesses aren’t keeping pace. We’re seeing cybercrime barriers crumble, as AI crafts ever more convincing phishing lures. The line between real and fake is blurring, and without immediate action, companies will face unprecedented cybersecurity risks.”
The change will keep coming: companies need to adapt to their people’s behavior
A range of benefits were reported by office workers using Generative AI. US respondents say they gain more benefits from Generative AI than UK respondents across all areas. The top benefit in both countries? Increased productivity. However, more US respondents (73%) reported this benefit than their UK counterparts (69%).
Despite the risks, optimism around the benefits shines through in the research. 69% of respondents believe the benefits of Generative AI tools outweigh the security risks. This figure jumps to 74% for US respondents.
But, there’s a glaring gap in company support. More than half of respondents in both countries say their companies haven’t taken steps to educate about these emerging cybersecurity threats. A significant proportion don’t even know if such measures exist in their companies.
People naturally gravitate toward what works for them. Just as the BYOD revolution had to be de-risked by organizations, so too must Generative AI. Productivity gains seem to outweigh perceived risks, with 32% of Generative AI users in the UK and 33% in the US saying they’d probably continue using AI tools even if their company banned them.
In light of these findings, Oz Alashe, MBE, CEO of CybSafe, emphasizes, “This is not about blame, this is about giving your people the tools to succeed. It is about understanding the changing nature of human behavior and helping people when and where they need it, without having to remember endless hours of training and engage with numerous phishing simulations. It requires every organization to develop an understanding of their people’s security behaviors and measurement of risk reduction.”
Of those not using Generative AI tools now, 22% say they will start using them in the near future. CybSafe’s research highlights the pressing need for companies to address this cybersecurity gap by implementing and communicating measures to educate and support their employees about the potential risks associated with Generative AI. As adoption of Generative AI continues to grow, understanding its implications and managing its risks will be critical to maintaining cybersecurity and ensuring the success of businesses in the digital age.
CybSafe helps over 350 organizations across 15 countries, including Credit Suisse, BDO, HSBC, and major healthcare providers. Using behavioral science, data analysis, and reporting metrics, CybSafe improves security professionals’ effectiveness. At its core is SebDB, a comprehensive security behavior database, which enables preemptive action against emerging security threats. This delivers improved security behaviors and fewer incidents. CybSafe is committed to advancing cybersecurity research and practice using behavioral science, driven by its expert team of behavioral scientists, psychologists, and cybersecurity and cybercrime professionals.
CybSafe is the human risk management platform designed to reduce human cyber risk in the modern, remote, and hybrid work environment, by measuring and influencing specific security behaviors.
CybSafe is powered by SebDB—the world’s most comprehensive security behaviors database—and built by the industry’s largest in-house team of psychologists, behavioral scientists, analysts, and security experts. An award-winning, fully scalable, and customizable solution, it’s the smart choice for any organization.
- 91% reduction in high-risk phishing behavior
- 55% improvement in security behaviors
- 4x more likely to engage in cybersecurity initiatives
+44 208 819 3170