
Human-centric Cybersecurity: How to make ‘People Matter’

“Too many organizations still either ignore the ‘human risk factor’ in their organizational resilience or apply outdated or compliance-driven ‘tick-the-box’ approaches to training their employees about cyber security.” – Professor Phil Morgan, Director of the Human Factors Excellence Research Group (HuFEx) at Cardiff University

Summary

Increasingly, cybercriminals are turning to the ‘human factor’ to execute a cyber-attack. Cybersecurity is therefore much more than a technology problem. Attacks focus on exploiting human behavior to set off a chain of events that results in stolen credentials and other data, ransomware infection, DDoS attacks, and general IT chaos.

The human being in the attack chain is seen as an easy, low-hanging, ‘no-hack’ option for infiltrating a network and carrying out a destructive and often financially lucrative cyber-attack.

  • In 2021, every 11 seconds a company became a victim of a ransomware attack.
  • 88% of security breaches have human error at their core, according to research from Stanford University.
  • Research shows that 96% of data breaches start with a phishing email.
  • Human factors play a major part in cybersecurity risk; a study found that “careless or uninformed staff” are the second most likely cause of a security breach.

This report looks at how a people-first approach to cybersecurity can turn your employees into the ultimate cybersecurity warrior.

Part One: Understanding the Human Factor in The Cybersecurity Threat Landscape

“We must influence the behaviors of people who see Information Security as something technical, and not something they can influence themselves.” - Professor Phil Morgan

Cybercriminals use human behavior as a pawn in their game. Phishing is a perfect example of how behavior is exploited to carry out a cyber-attack. Phishing relies on a human clicking a malicious link or downloading an infected attachment to switch the big red attack button to on. Human psychology, cognitive factors, and applied social cognition must become part of the security professional’s lexicon. To ensure cybersecurity, the industry must take a ‘human-centric’, or socio-technical, approach. The key component of this approach is us, the human.

To achieve this human-centric approach, organizations need to embrace a sub-discipline of psychology known as ‘Human factors’. This emergent area links human behavior to engineering and elements of computer science to develop a holistic approach to cyber-risk management.

Cyberpsychology and ‘Human Factors’

Human Factors

Human factors research merges psychology, engineering, and other traditional disciplines, applying their tools, methods, and theories to better understand human interaction with systems.

The aim is to improve the symbiosis between people and the technology they use. Optimizing that interaction reduces the number of human errors that lead to successful cyber-attacks.

“Ten years of research has shown me that we will never get close to developing effective solutions in cyber-security without disciplines like psychology and its sub-disciplines like human factors research. A socio-technical approach is crucial. A technical-only approach can achieve quite a lot but will not result in seamless cyber-security where humans are part of the systems.” - Professor Phil Morgan

In conversation with Professor Phil Morgan:

"Human factor research aims to change the behavior of people. This means that the human operator must understand their role in influencing technologies. A human-centric approach to cybersecurity is one of education, understanding, and changes the dial from a purely technical fix to one that more addresses that type of threats that modern cyber-attacks use."

Cyberpsychology

Cyberpsychology sits at the intersection of cyber-security and human behavior. The basis of the discipline is to develop human-oriented systems for a given technological area. Cyberpsychology applies to home, work, and Work-from-Anywhere environments.

The challenge is to ensure that anyone who interacts with the digital realm understands how cybersecurity works. The problem stems from the ‘plug-and-play’ view of technology, instilled in human behavior as digital technology has evolved over the decades. Cybersecurity behavior is then seen as a ‘blocker’, something that adds an extra burden on the human user.

In conversation with Professor Phil Morgan:

“Poor cyber-security behavior has often got more to do with our affinity to the technology we use, our perception of the organization(s) that we work for / are affiliated with, and the desire to protect the systems that we work with."

Create a Feeling of Affinity with a Device

Research in cyberpsychology has found that risky cyber behavior may occur because devices used in a workplace setting are seen as not belonging to the individual, so the worker takes less care. The worker has a lower affinity with the device. Experimentation around this issue by Phil Morgan and his team has found that personalizing a device can increase a person’s affinity with it and improve a feeling of ‘symbiosis’ with the device.

This symbiosis is a key factor in developing stronger cyber security behaviors and reducing cyber risk.

Human-centric Security and Learning from Mistakes

“The idea that humans are irrational and keep doing irrational things is quite wrong. Humans can be one of the strongest lines of defense against a cyber-attack.” - Professor Phil Morgan

To build a human-centric approach to cybersecurity attack mitigation requires three vital factors:

  • appropriate knowledge
  • awareness
  • understanding

To build this tripartite of factors, in readiness for a human-centric cyber-risk management program, an organization must accept its mistakes and learn from them. These mistakes can empower an organization and its employees. Mistakes can be reported and used to motivate positive security behavior.

To achieve these three factors, an organization and its employees must be ready to take the path of socio-technical cyber-security; this involves accepting that humans make mistakes some of the time, and then taking those mistakes and learning from them.

In conversation with Professor Phil Morgan:

“Let's not try to pretend that we will not make mistakes in a cyber-security setting – or that we just must continue to accept that people are never going to be very good at cyber-security.”

Reporting Mistakes

Cyberpsychology and human-centric cybersecurity go beyond security awareness training.

Even with security awareness training, people continue to show a fear of cyber-security; they fear being blamed, shamed, and singled out as a weak link.

Organizations that experience this must resolve the situation to prevent staff from hiding mistakes.

Research from Phil Morgan and his team has shown that having a transparent and open culture around security mistakes is more effective in mitigating risk. Phil Morgan said, “A far better approach is to speak-up about security mistakes at the time to allow fast-rectification and to increase awareness of what the issues are – including factors leading to the mistake (or even near miss) being made in the first place.”

In conversation with Professor Phil Morgan:

"Rather than seeing a security mistake as shameful, we should see it as something that is human. We should provide support mechanisms so that employees can do something about the problem."

"Security awareness and training on the basis that if their staff have been told how to behave securely, then they will adapt their behaviors. That is clearly an underestimation of what happens."

The reporting of mistakes should be used as an opportunity to build confidence, collaboration, and motivation. It is important to create a ‘no blame culture’ to give employees the confidence to speak up. This type of ‘black box’ thinking and reporting provides actionable insights into the security behavior, at a specific point in time, that led to risky actions. This approach has shown success in sectors such as aviation, healthcare, and competitive sports such as motor racing.

Using Cyberpsychology to Mitigate Cyber-risk

Cyberpsychology reframes human behavior within a cybersecurity context. Under the tenets of the discipline, a ‘one-size-fits-all’ approach to training and awareness does not work. Cyberpsychology recognizes that human behavior has commonalities but also distinct differences. It is the differences between humans that cyberpsychology can capitalize on to create more structured and effective programs of behavioral change.

In conversation with Professor Phil Morgan:

“While we can be grouped according to various characteristics, every single person is different. We need to tailor interventions towards individuals or personas.”

The Three Pillars of Cyberpsychology

Pillar one: recognize the inherent differences in behavior and tailor interventions.

Pillar two: security awareness training alone isn't enough. Improved security behaviors need to be continuously worked upon. Also, recognize that training needs vary between roles and departments and a one-size-fits-all approach is unlikely to work.

Pillar three: invest in the objective measurement of human cyber behaviors. Avoid inherent flaws, including biases, in how we ‘rate’ ourselves. An objective approach involves the collection of behavioral data in academic research. Examples include how we interact with systems, what's happening in terms of visual processing (e.g., eye movements, fixations, and the like), using an EEG kit to measure brain activity, and various other tools. Bringing these different data sources together to triangulate the data can help to develop better solutions.
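To make the triangulation idea concrete, the following is a minimal sketch, assuming hypothetical, pre-normalized feeds of interaction logs, eye-tracking, and EEG data; the field names, weights, and scoring are purely illustrative and are not part of any specific research toolset.

```python
from dataclasses import dataclass

@dataclass
class BehaviorSample:
    """One observation window for a participant (all values normalized to 0-1)."""
    misclick_rate: float       # interaction logs: proportion of erroneous clicks
    fixation_stability: float  # eye tracking: 1.0 = steady fixations on relevant UI elements
    eeg_attention: float       # EEG-derived attention index (hypothetical pre-processing)

def composite_risk(sample: BehaviorSample, weights=(0.4, 0.3, 0.3)) -> float:
    """Triangulate the three data sources into a single 0-1 risk indicator.

    A higher misclick rate, lower fixation stability, and lower attention
    all push the indicator up. The weighting is purely illustrative.
    """
    w_click, w_eye, w_eeg = weights
    return (w_click * sample.misclick_rate
            + w_eye * (1.0 - sample.fixation_stability)
            + w_eeg * (1.0 - sample.eeg_attention))

# Example: a distracted window of activity scores as higher risk (0.63 here).
print(composite_risk(BehaviorSample(0.6, 0.3, 0.4)))
```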

Discover how SafeTitan can help employees to become the ultimate cybersecurity warrior.

Book Free Demo

Part Two: Factor in Design for Better Cybersecurity

Four Factors in the Design of the Human-Machine Interface (HMI)

Human-machine interface design can have important effects on cyber behavior. Often, the HMI is designed without expert input and tested against small subsets of a test population. These sub-optimal solutions are then adopted on a massive scale. There are four key factors when designing human-machine interfaces:

Factor One: Accessibility

Solutions must be fully compatible with accessibility requirements. An example is providing interaction options such as keyboard, voice, touch, etc.

Factor Two: Usability

The interface should be usable by everyone who needs to use it.

Factor Three: Functionality

Getting the level of functionality just right is a key factor; not too much, not too little.

Factor Four: Adaptability

Interfaces should be adaptable to individuals. Tailored interface designs help develop better cyber behavior.

Tailored interfaces have been found to make people more inclined to use software effectively.

In conversation with Professor Phil Morgan:

“People often say, ‘there is no way on earth I would have made that mistake under normal circumstances, but for some reason at that point in time when I was interacting with the interface, I wasn't really thinking about what I was doing, and I clicked the link.’ That often happens because of bad human-machine interface design.”

Drilling Down: Post-Completion Errors and Cybersecurity Behavior

Systems, software, and solutions can be configured to remove a human behavior known as a post-completion error. An example of this type of behavior is forgetting to pick up a bank card after using an ATM. System design can take these types of errors into account by using ‘Hard and Soft Constraints’.

The Hard Constraint

In the case of the ATM card risk problem, an ATM will now eject the card before the cash is issued. This removes the likelihood that a person will forget to take the card from the machine. This is a hard constraint: no card, no cash. Phil Morgan has researched the application of hard and soft constraints. Twenty years of research into these mechanisms has shown repeatedly that small-scale hard constraints can result in a powerful shift to a more intensive cognitive processing strategy. This can protect against issues associated with post-completion errors such as forgetfulness.
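As a hedged illustration only, the sketch below models the ATM example as a tiny state machine in which cash simply cannot be dispensed until the card has been ejected and taken back; the class and method names are invented for this sketch.

```python
class ATMWithdrawal:
    """Hard constraint: the card must be returned before cash is dispensed."""

    def __init__(self):
        self.card_returned = False

    def eject_card(self):
        # Step 1: force the post-completion step (taking the card) to happen first.
        print("Please take your card.")
        self.card_returned = True

    def dispense_cash(self, amount):
        # Step 2 is impossible until step 1 has happened: no card, no cash.
        if not self.card_returned:
            raise RuntimeError("Hard constraint: take the card before cash is dispensed.")
        print(f"Dispensing {amount}.")

atm = ATMWithdrawal()
atm.eject_card()
atm.dispense_cash(100)  # succeeds only because the card step came first
```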

The Soft Constraint

Hard constraints are useful in certain situations, but sometimes they are a sledgehammer to crack a nut. In a cyber-security context, there will be some situations where it is sensible to force people to face a delay of a few seconds to access sensitive information. Instead of automatically processing information, here the employee is given the time to think before acting.

Soft constraints inform interface design so that humans perform tasks in ways that make them pay more attention to what they are doing; for example, providing time and information to help them spot a phishing attack.
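Below is a minimal sketch of a soft constraint, assuming a hypothetical email-client hook: before an external link opens, the user faces a short enforced pause and sees the link's real destination, giving them time to think rather than click automatically. The function name, delay, and prompt are illustrative.

```python
import time

def open_external_link(url: str, display_text: str, pause_seconds: int = 3) -> bool:
    """Soft constraint: slow the action down and surface the real destination.

    A real client would render this in its own UI; the console prompt is a stand-in.
    """
    print(f"The link text reads: '{display_text}'")
    print(f"It actually points to: {url}")
    print(f"Pausing {pause_seconds} seconds so you can check the two match...")
    time.sleep(pause_seconds)
    answer = input("Open this link? [y/N] ").strip().lower()
    return answer == "y"

# Usage: a mismatch between link text and destination is a classic phishing tell.
# open_external_link("http://example-payroll-login.xyz", "View your payslip")
```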

Merging Soft and Hard Constraints

Hard and soft constraints can work in harmony, resulting in more effective cognitive processing of interface-based tasks and, in turn, fewer errors and mistakes.

In conversation with Professor Phil Morgan:

“Colleagues and collaborators of Phil’s have designed systems where an employee has an ambient light display next to their laptop or other device. When you move away from the device, the light will change from green to amber, like a traffic light system. It acts to warn the person to close or lock the device. The further you move away from that device, the more the light becomes red. In an open office environment, that is not only a warning to the user but also to those around them. In doing so, it encourages safer cyber security behaviors.”
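As a rough sketch of that traffic-light idea, assuming a hypothetical proximity sensor that reports the user's distance from their unlocked device in metres, the thresholds below are invented purely for illustration.

```python
def ambient_light_colour(distance_m: float) -> str:
    """Map distance from an unlocked device to a traffic-light warning colour.

    Illustrative thresholds: close by = green, drifting away = amber,
    walked off = red (at which point the device could also auto-lock).
    """
    if distance_m < 1.0:
        return "green"
    if distance_m < 3.0:
        return "amber"
    return "red"

for distance in (0.5, 2.0, 5.0):
    print(f"{distance} m -> {ambient_light_colour(distance)}")
```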

The Tools and Personas of People-First Security

Changing cyber behavior must recognize the differences in humans as well as take account of the things that we share. Research, including that conducted by Phil Morgan and his team, has explored multiple factors when it comes to individuals; these factors have brought into sharp focus not just individual differences, but things like age and job role, the context in which you are working, distractions, cognitive pressure, and time pressure.

In conversation with Professor Phil Morgan:

“When there are regular interruptions, for example, you will have to task-switch and multi-task. This makes tasks more cyber-risky; this is in addition to the risk associated with individual differences and characteristics. And also, in addition to our relationship with the technology that we use: our affinity with it and how we interact with it."

So, it's not just a case of trying to understand human individual differences; there are many layers of understanding needed. This includes understanding humans and their differences within the context in which they work, as well as the wider context. This intelligence must then be used to build effective cyber-behavior interventions.

Download Free Guide: Six Steps to Build Value from Cyber Threat Intelligence

Gold Standards in Cyberpsychology and Cyber-Behavior

Professor Morgan has spent three years within Airbus developing a set of ‘gold standard tools’, which can account for up to 65-70% of the reasons why people might engage in cyber risky behaviors.

These tools are based on layered understanding involving individual differences, the environments in which we work, and our relationship with the technology that we use.

Examples of Tools and Personas: A Use Case

The Scenario: during a training, awareness, or onboarding session, an employee undergoes a cyber-security vulnerability test.

The Data: the main outcome is that the employee is mapped to a ‘persona’, for example one of personas 1 through 9.

In the test, they seem to identify most with persona 3, but they may also have elements of personas 1, 2, 4, etc. This represents their vulnerabilities and strengths.

Using the Output: this persona is useful as a staff member moves through company roles and pathways. An organization can use this information to work with individuals and motivate them. The HR function can use the persona tests to tailor interventions around the personas.
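One way such a persona result could be represented, purely as a hedged sketch (the persona labels, answer categories, and scoring below are invented for illustration and are not the gold-standard tools themselves), is as a weighted profile rather than a single label:

```python
from collections import Counter

# Hypothetical mapping from vulnerability-test answers to persona evidence.
ANSWER_TO_PERSONA = {
    "clicks_unverified_links": "persona_3",
    "delays_software_updates": "persona_3",
    "reuses_passwords": "persona_1",
    "reports_suspicious_email": "persona_2",
}

def persona_profile(answers: list[str]) -> dict[str, float]:
    """Turn test answers into a weighted persona profile (weights sum to 1)."""
    counts = Counter(ANSWER_TO_PERSONA[a] for a in answers if a in ANSWER_TO_PERSONA)
    total = sum(counts.values()) or 1
    return {persona: n / total for persona, n in counts.items()}

# An employee who identifies most with persona 3, with elements of personas 1 and 2.
print(persona_profile(["clicks_unverified_links", "delays_software_updates",
                       "reuses_passwords", "reports_suspicious_email"]))
```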

Privacy and Ethics Issues and Data Collection: people may be concerned about the collection of personal data by researchers. This is where ethics come in. The research Phil Morgan and his team conduct uses ethical frameworks to ensure that data is anonymized. A participant generates a code; if they want to take part in a future study, they provide that code so that researchers can match the new data to their baseline data and combine the data sets.
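The participant-generated code can be thought of as a pseudonymous join key. A minimal sketch, assuming the records are already anonymized and using an invented code format:

```python
# Hypothetical anonymized study records keyed by a participant-generated code.
baseline_data = {"KX7-2419": {"phishing_test_score": 0.42}}
follow_up_data = {"KX7-2419": {"phishing_test_score": 0.21}}

def combine_by_code(baseline: dict, follow_up: dict) -> dict:
    """Join baseline and follow-up records on the participant's own code.

    Researchers hold no name-to-code mapping; only the participant can
    re-supply the code, which is what keeps the matching consensual.
    """
    return {
        code: {"baseline": baseline[code], "follow_up": follow_up[code]}
        for code in baseline.keys() & follow_up.keys()
    }

print(combine_by_code(baseline_data, follow_up_data))
```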

It is possible to demonstrate that researchers are not targeting employees but rather supporting them. This approach can result in strong buy-in from employees, which is a key step towards optimal cyber-security solutions that better support employees to be even more cyber safe.

The Ultimate Cybersecurity Warrior

The system allows an employee to become an ‘ultimate cyber security warrior’ through persona improvement. The research data is used to advise people on how to make those changes, and the company can support and develop interventions to help employees get there.

Phil Morgan describes the system as “like a fitness regime”. “You're given a journey to travel from a lower stage of fitness to being as fit as possible. It is even possible to use things like gamification to make it more interesting and engaging to get through various stages, and you feel good about being more cyber-secure. It’s all about motivation!”

Part Three: Getting Company Buy-in

The escalation of cyber-attacks is focusing the minds of organizations around the world. Best practice ways to mitigate these risks need organization-wide buy-in. Leadership buy-in is incredibly important: a Gartner, Inc. report says that “by 2025, 40% of boards of directors will have a dedicated cybersecurity committee overseen by a qualified board member.” This ensures that cybersecurity risk mitigation is taken seriously.

Leading organizations are moving away from a checkbox-exercise approach to cybersecurity based on inaccurate graphs that show the delivery of training and awareness packages over six months.

Instead, these organizations are moving towards a more objective data-driven approach.

Objective Data-driven Approach

Business leaders need evidence that an activity is having a positive impact on the organization. They need evidence of actual behavior change within their organization.

As Phil Morgan said, “It's critical that leaders can see the real impact and benefit in terms of the financial and reputational impact of the interventions on their organization. They can then see how the behaviors are improving over time.”

Developments in Objective Data-driven Metrics

The objective measures used by researchers tend to involve devices that log data from all sorts of interfaces over a given period. This could include data on eye movements, not just the object of fixation. Other data points include active movement around a screen, as well as pupil dilation, which offers a strong signal of workload and the degree to which information is being processed.

It is also possible to use an EEG kit to measure neuronal activity within the brain. This activity is an indicator that provides an insight into the extent to which people are attending to information, perceiving information, remembering information, and thinking about information.

With human state monitoring, researchers can use devices like fitness watches to measure heart-rate fluctuations and facial expressions. Put this objective data together with subjective data, and it is possible to paint a very strong picture of what is going on in a particular situation.

Discover how SafeTitan works to protect employees against advanced security threats.

Book Free Demo

Part Four: Summing up

Knowledge is not Enough to Create Cyber-power

Within the cyber-security world, knowledge alone is not enough. We must experience the behavior change first-hand by looking at the outcomes of the interventions that have been implemented. Researchers can do this virtually, by immersing people in scenarios with virtual and augmented reality. They can observe, via behaviors, what the outcomes can be: this is very powerful. Positive outcomes are one thing, but the negatives are also important learning exercises. If you can show what the negative outcomes could be of performing something in a certain way, that tends to stick with humans.

Business leaders now realize that improving behaviors is not a quick-win situation. The statistic that 88% of security breaches have human error at their core will not suddenly drop to 10%. However, with a human-centric approach, a business can move the dial downwards through a better understanding of human behaviors and the development of more targeted and relevant interventions.

The last word goes to Professor Phil Morgan:

“We are already making great improvements in how we’re driving and sustaining stronger security behaviors. Let’s maintain this momentum to stay one click ahead of the cybercriminals. They are – like us – only human after all!”

Real-time intelligent security awareness training is available now from SafeTitan. For further information or a demo on how to make your security awareness training programs more effective, contact us here.
