The world’s first globally accessible archive of research into the human aspect of cyber security and behavioural science as applied to cyber security awareness and online behavioural change.
To see the latest studies from pioneering academics, scroll down.
Cybersecurity controls are deployed to manage risks posed by malicious behaviours or systems. What is not often considered or articulated is how cybersecurity controls may impact legitimate users (often those whose use of a managed system needs to be protected, and preserved). This characterises the ‘blunt’ nature of many cybersecurity controls. This study presents a synthesis of methods from cybercrime opportunity reduction and behaviour change. It illustrates the method and principles with a range of examples and a case study focusing on online abuse and social media controls, relating in turn to issues inherent in cyberbullying and tech-abuse. The framework descr
As organizations continue to invest in phishing awareness training programs, many chief information security officers (CISOs) are concerned when their training exercise click rates are high or variable, as they must justify training budgets to organization officials who question the efficacy of awareness training when click rates are not declining. This paper argues that click rates should be expected to vary based on the difficulty of the phishing email for a target audience. Past research has shown that when the premise of a phishing email aligns with a user’s work context, it is much more challenging for users to detect a phish. A Phish Scale is thus proposed so
While technical controls can reduce vulnerabilities to cyber threats, no technology provides absolute protection, and we hypothesised that people may act less securely if they place unwarranted trust in these automated systems. This paper describes the development of a Trust in Technical Controls Scale (TTCS) that measures people’s faith in four of these technical controls. In an online study (N = 607), Australian employees demonstrated a greater degree of trust in firewalls and anti-virus software than they did in spam filters and social media privacy settings. Lower scores on the four-item TTCS were related to better information security awareness (ISA) and higher
Formally adopted security policies, well-defined security governance, and clear security-related roles in the business are prerequisites for a successful security program. But in the background behind the visible security governance and security program machinery is the organization’s security culture. A security culture is the part of an organization’s self-sustaining patterns of behavior and perception that determine how (or if) the organization pursues security. A positive security culture can provide your best opportunity to secure the business; a negative one can be your greatest vulnerability.
Security awareness and education programmes are rolled out in more and more organisations. However, their effectiveness over time and, correspondingly, appropriate intervals for refreshing users’ awareness and knowledge remain an open question. In an attempt to address this open question, we present a field investigation in a German organisation from the public administration sector. With 409 employees overall, we evaluated (a) the effectiveness of their newly deployed security awareness and education programme in the phishing context over time and (b) the effectiveness of four different reminder measures – administered after the initial effect had worn off to a degree that
The challenge of changing user cybersecurity behaviour is now in the foreground of cybersecurity research. To understand the problem, cybersecurity behaviour researchers have incorporated theories from the psychology domain into their studies. Psychology makes use of several behavioural theories to explain behaviour. This raises the question of which of these theories are best suited, first, to understanding cybersecurity behaviour and, second, to changing that behaviour for the better. To answer this question, as a prelude to the current paper, previous publications have 1) established a definition for the different categories of cybersecurity behaviour, 2) identified and
Managing how new digital technologies are integrated into different contexts has become a key component of effective international security management. This chapter focuses on rethinking our approach to the integration of digital technologies within (cyber)security work. Most analyses of security take for granted a problematic split between the technologies involved in securing specific contexts and the humans involved with or operating such devices. By shifting to a practice theory approach, we offer a more holistic view of security by examining not only the implementation of technologies or human factors but also how this affects the meaning these practices ho
In today’s competitive world, business security is essential. To secure business processes and confidential data, organizations have to protect their systems by implementing new policies and techniques to detect and control threats. Cybersecurity threats are classified into two types: outsider and insider threats. Both are very harmful to the organization and may escalate into severe attacks on systems in the future. Outsiders must expend more effort to break the security system, whereas insiders are users who are privileged to access the system within the organization. As the data is in digital form, it is straightforward to transfer from
Security breaches nowadays are not limited to technological weaknesses. Research in the information security domain is gradually shifting towards breaches that target weaknesses arising from human behaviour (Workman et al., 2007). Currently, social engineering breaches are more effective than many technical attacks. In fact, the majority of cyber assaults have a social engineering component. Social engineering is the art of manipulating human flaws towards a malicious objective (Breda et al., 2017). In the likely future, social engineering will be the most predominant attack vector within cyber security (Breda et al., 2017). Huma
The workforce shortage and gender disparity in the cybersecurity profession pose a growing risk to digital economies from cyber adversaries. Global efforts and initiatives for women to pursue careers in the cybersecurity field tend to be fewer than those for men, and various societal barriers consequently result in women’s underrepresentation and underutilization in the cyber industry. The G20 states and other nations equally share cyberspace and therefore need to collaborate and complement each other’s efforts to address gender disparity in the cybersecurity profession. Providing education, training, entrepreneurship, and equal opportunities to women in cybersecurity would help to
Acknowledging the importance of information and communication technologies (ICT) in relation to the functioning of contemporary societies, the states of the European High North have endorsed information and/or cybersecurity strategies which aim to safeguard both information and information infrastructure. However, the strategies neither fully recognise the challenges and threats associated with the use of ICT in everyday life nor acknowledge regional peculiarities within the different states. This chapter elaborates the enabling and constraining effects of digitalisation at the regional level. It discusses how a human-centred security approach to digitalisation coul
In this paper, researchers applied gamification techniques to the development of an Augmented Reality game, CybAR, designed to educate users about cybersecurity in an effective and entertaining way. The research incorporates decision-making style into a Technology Threat Avoidance Theory (TTAT) model of CybAR game use, focusing particularly on the role of decision-making style in the avoidance of risky cybersecurity behaviour based on factors derived from TTAT. A cross-sectional survey was conducted among 95 students at Macquarie University to assess the effect of individual differences, namely, decision-making style, as
The ‘human’ element of any digital system is as important to its enduring security posture as any technical element. More research is needed to better understand human cybersecurity vulnerabilities within organizations. This will inform the development of methods (including those rooted in HCI) to decrease cyber-risky and enhance cyber-safe decisions and behaviors: to fight back, showing how humans, with the right support, can be the best line of cybersecurity defense. In this paper, we assert that in order to achieve the greatest positive impact from such research efforts, more human-centric cybersecurity research needs to be conducted with expert teams embedded within industria
Since information security (InfoSec) incidents often involve human error, businesses are investing greater resources into improving staff awareness and compliance with best-practice InfoSec behaviours. This research examined whether employees who feel that they may be personally affected by workplace InfoSec incidents are more likely to behave in accordance with those best-practice behaviours. To further understand this, we also examined organisational commitment and risk perception. Data collection involved an online questionnaire measuring these constructs in relation to three workplace cyber threats: phishing, malware, and mobile devices. The questionnaire was co
Cyber crime is rising at an unprecedented rate. Organisations are spending more than ever combating the human element through training and other interventions, such as simulated phishing. Organisations employ “carrots” (rewards) and “sticks” (sanctions) to reduce risky behaviour. Sanctions (such as locking computers and informing one’s line manager) are problematic, as they lead to unintended consequences for employee trust and productivity. This study explored how organisations use rewards and sanctions both in their campaigns and specifically following simulated phishing. We also assessed what factors (such as control over rewards, tendency to blame users) infl
There has been an increasing prevalence of global cyber attacks. Because of possible breaches in information security, it has become pertinent that organisations change organisational and individual cultures to become more secure. However, there are challenges regarding the implementation of these processes within organisations. Organisations have become dependent on information systems, which store large quantities of data and can be considered one of an organisation’s greatest assets. Whilst employees are considered the next most important asset, their negligence, whether intentional or not, and their possible lack of knowledge regarding information s
There is a lack of consensus when using the term “cyberspace”. Computers and network devices are prominent in definitions of cyberspace; less common is the essential inclusion of human users. However, the human user is both implicitly integral to and actively part of cyberspace. A new human-centric model of cyberspace is proposed (the HCCM), with the user as a physical and integral entity, together with recognition of the cognitive representation of cy
Technological development towards automation has been taking place for years, and a wide range of autonomous systems (AS) have been introduced in homes and retail spaces. Although these AS seem riskless, if they are exploited they can endanger users’ private information, which opens a new stage for the security of AS. Humans have an initial positive bias towards automation that might lead to errors related to unintentional actions or a lack of action. Therefore, the effective adoption of AS relies on users’ attitudes, such as the propensity to take risks and the calibration of human trust to avoid situations of mistrust, overtrust, and distrust, increasin
A construct for intentional habit formation is suggested as a possible mitigation of the disparity between user capability and system requirements. The importance of usable security is well represented in early discussions (Sasse 2001). Twenty years after M. S. Ackerman provided a significant discussion of the “gap” between what humans need and what computers can support, the “social-technical gap” in privacy and security management continues. Humans, for many reasons, cannot make good, consistent decisions regarding security. Current and foundational theoretical understandings of human limitations are outlined, in both individual and social contexts. The differenc