Better Cybersecurity Starts with Fixing Your Employees’ Bad Habits


Cybercrime is here to stay, and it’s costing companies a lot of money. The average annualized cost of cybercrime for global companies has risen roughly 62% since 2013, from $7.2 million to $11.7 million. And those are just the average direct costs. Target, which suffered a massive data breach in 2013, reported that the total cost of the breach exceeded $200 million. Verizon, which recently purchased Yahoo, may have snagged a $350 million discount because of three large-scale Yahoo data breaches in recent years. Given these costs, what can companies do?

Governments and industry are doing what seems like the obvious thing to do — spending billions of dollars to develop and implement new technologies designed to stop the bad guys before they can get through the front door. Yet, even though we have some of the best and brightest minds on the case, there are still major limitations to what we can do with silicon and code. Despite our predilection for using technology to solve what appear to be technological problems, one lament that echoes in information security circles is that we’re not doing enough to deal with cybersecurity’s biggest, most persistent threat — human behavior.

In 2014, IBM reported that “over 95% of all [security] incidents investigated recognize ‘human error’ as a contributing factor.” In fact, the recent string of malware attacks with cyberpunk appellations such as “WannaCry,” “Petya,” and “Mirai,” as well as the apparent state-sponsored attacks on Equifax and the American electoral system, all started with poor decisions and actions by end users. If it wasn’t an engineer inadvertently building a vulnerability into a piece of software, it was an end user clicking on a bad link, falling for a phishing attack, using a weak password, or neglecting to install a security update in a timely manner. Attackers didn’t need to break down a wall of ones and zeros or sabotage a piece of sophisticated hardware; they simply needed to take advantage of predictably poor user behavior.

When companies focus their attention on solving what they think are technology problems with technology solutions, they neglect to identify simple interventions that can reduce the incidence of bad behaviors and promote good ones. And yes, while some technology investments have sought to prevent these behaviors in the first place by offloading human decisions onto artificial intelligence and machine learning systems, those innovations still have a long way to go. As the security robot that fell into a mall fountain reminds us, AI and related technologies are not yet fully mature. For example, how many times has your spam filter missed a phishing email? In the absence of fail-proof AI, human judgment is still needed to fill the gap between the capabilities of our security technologies and our security needs. But if human judgment isn’t perfect, and technology isn’t enough, what can companies do to reduce behavioral risks?

One major insight from the fields of behavioral economics and psychology is that our behavioral biases are quite predictable. For instance, security professionals have said time and again that keeping software up to date, and installing security patches as soon as possible, is one of the best ways to protect information systems from attack. However, even though installing updates is a relative no-brainer, many users and even IT administrators procrastinate on this critical step. Why? Part of the problem is that update prompts and patches often arrive at the wrong time, when the person responsible for installing them is preoccupied with some other, more immediately pressing issue. Additionally, when it comes to updating our personal computers and devices, we’re often given an easy out in the form of a “remind me later” option. Because of this small contextual detail, users are much more likely to defer the update, no matter how critical it is. How many times have you clicked “remind me tomorrow” before finally committing to an update?

At ideas42, my behavioral science research and design firm, we’ve been documenting the various contexts that lead users and administrators to decide (and act) in less-than-optimal ways, placing themselves and their companies at greater risk of a cyberattack. Through this lens, we’ve generated a number of insights about why people set bad passwords, neglect to install updates, click on malicious links, and fall for phishing emails. We’ve also outlined what organizations and administrators can do to improve their users’ behavior (to learn more, check out our research-based true-crime novella, Deep Thought: A Cybersecurity Story).

Set strong defaults. One of the most influential insights from the behavioral sciences is that whatever is in the “default” position generally sticks. This insight has been used to great effect in the domains of retirement savings and organ donation, where a shift from an “opt-in” to an “opt-out” default has significantly increased participation rates. Similar logic can be applied to the choice context around enterprise user security. Instead of having employees opt in to specific security actions such as installing and using a VPN, turning on two-factor authentication, enabling full-disk encryption, or authorizing auto-update features, employers could set up the computers and systems employees use with these features turned on by default. Doing so could lead to higher rates of compliance than trusting employees to do it for themselves.
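
To make the idea concrete, here’s a minimal sketch of what default-on provisioning might look like. The settings and the provision() function are hypothetical stand-ins; in practice, this logic would live in whatever device-management tool your IT team already uses.

```python
# Hypothetical "secure by default" provisioning sketch. SECURE_DEFAULTS and
# provision() are illustrative stand-ins for a real MDM or configuration tool.

SECURE_DEFAULTS = {
    "vpn_installed": True,          # VPN installed and on, not opt-in
    "two_factor_auth": True,        # 2FA enforced from day one
    "full_disk_encryption": True,   # encryption on before first login
    "auto_updates": True,           # patches applied without user action
}

def provision(machine_id, overrides=None):
    """Build a machine profile in which security features default to ON.

    Opting out requires a deliberate override, flipping the usual
    opt-in choice architecture.
    """
    profile = dict(SECURE_DEFAULTS)
    profile.update(overrides or {})  # opting out is the effortful step
    return {"machine": machine_id, **profile}

print(provision("laptop-0042"))
print(provision("laptop-0043", overrides={"vpn_installed": False}))
```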

Use calendar commitments to nudge system updating. Sometimes enabling auto-updates for system software isn’t possible, leaving the choice of whether to update in the hands of employees. However, when facing the prospect of halting their workflow to update software, employees may defer the action until some ambiguous “better time.” The problem is that because we are present-biased, focused on the task at hand, that “better time” may never come. One way researchers have gotten around this problem is to have users make concrete commitments about when they’ll follow through; the more specific, the better. In the context of software updates, employers could help employees make a concrete commitment to update. For instance, when an update is released, an administrator could send an email instructing employees to block off time on their calendars to complete it. The simple act of getting employees to precommit can stop the procrastination cycle dead in its tracks.
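
The nudge itself could even be automated. The sketch below assumes a hypothetical mail relay (smtp.example.com) and addresses, and emails each employee a calendar invite for a concrete update slot the day a patch is released; any real mail or calendar API would serve the same purpose.

```python
# Sketch: turn a patch release into a concrete calendar commitment.
import smtplib
from email.message import EmailMessage
from datetime import datetime, timedelta

SMTP_HOST = "smtp.example.com"  # assumption: your organization's mail relay

def make_invite(summary, start, minutes=30):
    """Return a minimal iCalendar event so the commitment lands on the calendar."""
    end = start + timedelta(minutes=minutes)
    fmt = "%Y%m%dT%H%M%S"
    return (
        "BEGIN:VCALENDAR\r\nVERSION:2.0\r\nBEGIN:VEVENT\r\n"
        f"SUMMARY:{summary}\r\n"
        f"DTSTART:{start.strftime(fmt)}\r\n"
        f"DTEND:{end.strftime(fmt)}\r\n"
        "END:VEVENT\r\nEND:VCALENDAR\r\n"
    )

def send_commitment(to_addr, patch_name, start):
    """Email one employee an invite that blocks off a specific update slot."""
    msg = EmailMessage()
    msg["Subject"] = f"Block off time to install {patch_name}"
    msg["From"] = "it-admin@example.com"
    msg["To"] = to_addr
    msg.set_content("Pick a slot and keep it: a concrete time beats 'later'.")
    msg.add_attachment(
        make_invite(f"Install {patch_name}", start).encode("utf-8"),
        maintype="text", subtype="calendar", filename="update.ics",
    )
    with smtplib.SMTP(SMTP_HOST) as server:
        server.send_message(msg)

send_commitment("employee@example.com", "Critical Security Patch",
                datetime.now() + timedelta(days=1))
```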

Compare employees to their peers. People tend to look to others, especially those similar to them, to learn how to act. This phenomenon, called social proof, can have powerful effects on behavior, and it is especially influential when the desirable behavior is ambiguous, as it is with cyber hygiene. Opower, a customer engagement platform for utility providers, has put this insight to work. The company sends households a small infographic along with their utility bills showing how much energy the household consumes relative to the average household in the neighborhood, as well as to the most efficient ones. This small indication of what other households are doing reduces average energy consumption by 2% per household, which, at scale, is a huge change.

What this and similar interventions show is that people don’t actually need to see others exhibit a specific behavior; they can simply be told what others do. In an enterprise setting, administrators can leverage social proof to help employees identify desirable behaviors and motivate them to adopt those behaviors. For instance, employers could poll employees about the various behaviors and safeguards they use online and assign each a score based on the responses. They could then produce a report for each individual comparing them to the average employee, as well as to the most diligent employees, and provide actionable steps they can take to improve their score. This simple social proof intervention could lead to greater compliance across an organization.
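
Here’s a rough sketch of what such a scoring report might look like. The survey fields and point weights are invented for illustration; a real program would derive them from the organization’s own security priorities.

```python
# Sketch of a social-proof security report: score self-reported behaviors,
# then compare each employee to the average and to the most diligent peers.
from statistics import mean

# Assumed point values per behavior; purely illustrative.
WEIGHTS = {
    "uses_2fa": 40,
    "uses_vpn": 25,
    "auto_updates_on": 20,
    "unique_passwords": 15,
}

def score(answers):
    """Sum the points for every behavior the employee reports practicing."""
    return sum(pts for behavior, pts in WEIGHTS.items() if answers.get(behavior))

def report(employee, answers, all_scores):
    """Build one employee's comparison report with actionable next steps."""
    s = score(answers)
    missing = [b for b in WEIGHTS if not answers.get(b)]
    steps = ", ".join(missing) if missing else "none; keep it up"
    return (f"{employee}: your score is {s}. "
            f"Average employee: {mean(all_scores):.0f}. "
            f"Most diligent: {max(all_scores)}. "
            f"Next steps: {steps}.")

responses = {
    "alice": {"uses_2fa": True, "uses_vpn": True,
              "auto_updates_on": True, "unique_passwords": True},
    "bob": {"uses_2fa": True, "unique_passwords": True},
}
all_scores = [score(a) for a in responses.values()]
for name, answers in responses.items():
    print(report(name, answers, all_scores))
```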

Turn awareness training into a constant feedback system. One major insight from behavioral science is that training alone may increase people’s knowledge, but it is unlikely to change their behavior. Awareness training often happens something like this: once a year, employees file into a room for an hour or two and get lectured at by a professional awareness trainer, only to go back to their workstations and ignore most of what they were taught. There are many reasons why this happens: people have limited attention and can’t absorb everything they just learned; they may not have a concrete sense of how to make the lessons actionable, and so they don’t change their behavior; they may be overconfident that none of the risks apply to them in particular (“it will never happen to me!”); and the list goes on.

One way enterprises can improve the efficacy of awareness training is to make it an ongoing process and build in feedback, so that employees learn when they slip up and what they can do to avoid the error in the future. For instance, to make your workforce more resilient to phishing attacks, you might employ software like PhishMe, which sends fake phishing emails to employees on a regular basis and provides remediation when users fall for the attack. Similar processes could be put in place to help employees avoid bad links, remember to update their software, and take on other beneficial behaviors, such as using two-factor authentication, turning on their VPN when accessing an insecure network, and choosing stronger passwords.
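
The feedback loop itself is simple enough to sketch. The code below is not PhishMe’s actual API, just the shape of the process: send simulated phishing emails on a regular cadence, record who clicks, and deliver a short, targeted lesson at the moment of the mistake.

```python
# Sketch of a continuous phishing-simulation feedback loop (illustrative only).
import random
from dataclasses import dataclass, field

@dataclass
class Employee:
    email: str
    clicks: int = 0                                 # times they fell for a simulated phish
    trainings: list = field(default_factory=list)   # micro-lessons assigned so far

def simulate_send(email, template):
    """Stand-in for delivering a fake phish and observing whether it was clicked."""
    return random.random() < 0.2  # placeholder click rate

def run_campaign(staff, template):
    for person in staff:
        if simulate_send(person.email, template):
            person.clicks += 1
            # Feedback arrives at the moment of the mistake, when it is most
            # memorable, rather than months later in an annual training session.
            person.trainings.append(f"micro-lesson: spotting '{template}' lures")

staff = [Employee("alice@example.com"), Employee("bob@example.com")]
for month in range(3):  # repeat on a regular cadence
    run_campaign(staff, template="password-reset")
print([(p.email, p.clicks, len(p.trainings)) for p in staff])
```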

If we keep trying to use technology to solve what are, in reality, human problems, we’ll remain vulnerable to attacks. But if we look closely at the contexts in which human beings are liable to make mistakes, we’ll be far more likely to find sustainable solutions that keep us, and our enterprises, safe from the bad guys.

--------------------------------------------------------------------------------------------------------------------------

First published at Harvard Business Review
