If you’ve been given responsibility for network security in a non-technical area of the business, there’s one eternal question that has bedeviled admins for decades. Shelves of books have been written on the subject, to limited effect.
How do I get the user to stop clicking everything?
Everyone with cybersecurity responsibilities has their own crop of horror stories where an intransigent user has clicked furiously on a Dridex installer, wondering why their “invoice” won’t load. A user might enable macros to see the “important notice”, scratch their head at the display issues, then open the document on another machine because theirs obviously had issues. A more recent variant is the user who gets an email from the “CEO” and promptly starts a wire transfer to a dodgy address in Asia without following up with anyone. These problems have been appearing in almost every organization for years. So what is wrong with these people, and how do we fix it?
Theory 1: The Bad User
Let’s call this the BOFH theory, as it’s most commonly used by people in security to explain why we shouldn’t have to do anything about phishing: it’s forever unsolvable. The user, an ignorant, benighted soul, is incapable of looking up from their daily toil to enlighten themselves on security issues. One can never expect a marketing exec to reach our level of security sophistication, and as such, it’s foolish to attempt to uplift them. This is wrong and counterproductive, on several levels.
First is the Ned Flanders’ Parents Corollary: “We’ve tried nothing and we’re all out of answers!” Those of us working these issues on a daily basis rarely, if ever, have conversations with business profit centers on their terms. We have a tendency to shower users with a barrage of horrible outcomes if they click a phish, up to and including compromise of the entire production network. While true-ish, when you take this approach to almost every potential threat, an average user will immediately tune you out as a hysterical Chicken Little. And they’re not necessarily wrong. The most common outcome for a phish-derived compromise on a properly configured network is a reimage of the impacted host, followed by a SOC investigation and a report to the CISO. While irritating and time-consuming, it is not a catastrophe. A much more productive approach is to explain to the user the downtime associated with a reimage and the fiscal cost to the business. (Depending on the org, up to 16 lost working hours for the impacted user, and more for the SOC.)
The kinder, gentler version of the Bad User theory is phishing education. People simply don’t know what a phish looks like and what it can do, and it is incumbent upon us to teach them, and then phishing will be solved forever. There are three problems with this.
- It assumes that the user never has a good reason to click on a message that appears slightly off.
- It assumes a security-savvy user or admin wouldn’t click on a phish. Various APT groups have enjoyed great success proving otherwise. If you think you are not susceptible to this, then it’s you: you are the security vulnerability.
- Email fatigue erodes your judgment. The fiftieth “urgent” message of the day gets a reflexive click, not scrutiny.
- Phishing education courses are, as a rule, terrible.
Theory 2: The Bad Company
Some folks realize that attributing unwanted user behavior to mass, contagious, intractable idiocy is counterproductive, usually wrong, and poisons relationships between security and the rest of the company. These people will tell you it is not the user’s fault for clicking, per se; it is the company’s fault for incentivizing bad actions. This is closer to the truth, as organizational incentives do strongly predict individual outcomes, but it can still be problematic.
On the true side, some companies like to deluge employees with emails that look like phishes. It’s not uncommon for users to receive corporate emails full of HTML, an urgent call to action, and a link to an internal network resource. If this sounds familiar to you, it’s not really a mystery why your users would click on a phish.
On the “Yes, but” side, phishes have more than one lure. For every phish that looks like a legitimate request to update your corporate phonebook entry, there’s one where the ‘CEO’ is asking the user to Western Union money to Vietnam, or offering a ‘corporate discount’ for filling out a survey. The issue here is less learned email helplessness, and more a security culture that doesn’t treat users as partners. A healthy SOC does not scold users for misbehavior; it enlists users as foot soldiers to ferret out malicious indicators that would otherwise fly under the radar. There is precious little that makes a non-technical user more proud than being able to present a new threat to the professionals for disposition. Give your users reasons to take pride in themselves and they will jump at the chance to be helpful.
Theory 3: Driving a nail with a platypus
Why is phishing an intractable problem? Because…
Organizational issues require organizational solutions
There is no patch, update, conversion, or SIEM that will have the slightest impact on human behavior, but that doesn’t seem to stop folks from trying. Users click phishes and will continue to do so because they are incentivized to view clicking as a rational act. Some questions to ask before looking to a technical solution for phishing:
- Does the business pass around document files for frequent, multi-user revisions? Consider a cloud-based document editing solution, or even version control software. No one can click on a malicious attachment that is never sent.
- How closely do your intra-company communications resemble phishes? What is the penalty suffered by ignoring them? A chat between security and company communications can go a long way towards teaching better email hygiene.
- Are you afraid of your CEO? Business Email Compromise is a very lucrative scam that relies on recipients of the phish being too intimidated to question an email from someone they believe to be their boss.
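One way to make that chat with company communications concrete is to run a crude audit of your own outbound mail. The sketch below is a minimal illustration, not a vetted detection model: the trait list, regexes, and weights are made-up assumptions chosen to show the idea. If your internal newsletters score high on phish-like traits, you are training users to click phishes.

```python
import re

# Illustrative phish-like traits and weights; tune these against your own mail.
PHISHY_TRAITS = {
    "urgent_language": (re.compile(r"\b(urgent|immediately|act now|final notice)\b", re.I), 2),
    "generic_greeting": (re.compile(r"^dear (user|employee|customer)", re.I | re.M), 1),
    "raw_ip_link": (re.compile(r"https?://\d{1,3}(\.\d{1,3}){3}"), 3),
    "credential_ask": (re.compile(r"\b(verify|confirm) your (password|account)\b", re.I), 3),
}

def phish_traits(body: str) -> list[str]:
    """Return the names of phish-like traits found in an email body."""
    return [name for name, (pattern, _w) in PHISHY_TRAITS.items() if pattern.search(body)]

def phish_score(body: str) -> int:
    """Sum the weights of all traits present; higher means more phish-like."""
    return sum(w for _n, (pattern, w) in PHISHY_TRAITS.items() if pattern.search(body))
```

Run it over a sample of real internal announcements; the point is not accuracy but conversation fodder for the security-and-communications chat.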
The common thread among these possible solutions is that they are cheap or free, and those that rely on internal resources can be implemented today. Before you spend money engineering a non-engineering problem, it might be more productive to put the platypus down and ask, “Why wouldn’t someone click that?”