The enemy is us: a look at insider threats

They can go undetected for years. They do their questionable deeds in the background. And, at times, one wonders if they’re doing more harm than good.

Although this sounds like we’re describing some sophisticated PUP you haven’t heard of, we’re not.

These are the known attributes of insider threats.

Insider threats are one of a handful of non-digital threats troubling organizations of all sizes today. And—to bang on the hype—the danger they pose is real.

Where companies once thought that risks to their high-value assets could only come from outside actors, they’re slowly realizing that they also face potential dangers from within. The worst part is that no one can tell who the culprits are until the damage is done.

In the Osterman Research white paper entitled White Hat, Black Hat and the Emergence of the Gray Hat: The True Costs of Cybercrime, the researchers found that insider threats account for two of the eight serious cybersecurity risks that significantly affect the private and public sectors. To put it another way, an organization’s current and former employees, third-party vendors, contractors, business associates, office cleaning staff, and other entities who have physical or digital access to company resources, critical systems, and networks are ranked on the same list as ransomware, spear phishing, and nation-state attacks.

The majority of insiders who have caused their employers a headache didn’t necessarily have technical backgrounds. In fact, many had no desire or inclination to do anything malicious against their company to begin with. In the 2016 Cost of Insider Threats [PDF], a benchmark study conducted by the Ponemon Institute, a significant percentage of insider incidents at companies in the United States was caused not by criminal insiders but by negligent staff members.

This finding remains consistent with the 2018 Cost of Insider Threats [PDF], which also covers organizations in the Asia-Pacific region, Europe, Africa, and the Middle East. An insider’s general lack of attention and misuse of access privileges, coupled with little-to-no cybersecurity awareness and training, are some of the reasons why they’re dangerous.

Understanding insider threats

Many have already described what an insider threat is, but none as inclusive and encompassing as the meaning put forward by the CERT Insider Threat Center, a research arm of Carnegie Mellon University’s Software Engineering Institute (SEI). They have defined an insider threat as:

…the potential for individuals who have or had authorized access to an organization’s assets to use their access, either maliciously or unintentionally, to act in a way that could negatively affect the organization.

From this definition, we can classify insiders into two main categories: the intentional and the unintentional. Within those categories, we’ve described the five known types of insider threats to date. They are as follows:

Intentional insiders

They knowingly do harm to the organization, its assets, resources, properties, and people.

The malicious insider 

This type goes by several names, including rogue agent and turncoat. Perhaps its main differentiation from the professional insider (as you will see below) is that insiders of this type did not start off with malicious intent. Some disgruntled employees, for example, may decide to retaliate if they perceive that their company has done them wrong: planting malware, deleting company files, stealing proprietary intellectual property to sell, or even holding essential accounts and data for ransom.

In certain circumstances, employees go rogue because they want to help their home country. Such is the case of Greg Chung, who was found guilty of supplying China with proprietary military and spacecraft intelligence during his tenure at Rockwell and Boeing, stealing nearly three decades’ worth of top-secret documents. The number of boxes of files retrieved from his home was not disclosed, but we can assume it to be in the hundreds.

Employees who are coerced or forced to perform malicious acts on behalf of one or more entities also fall under this type.

The professional insider

This type is usually referred to as a spy or mole in an organization. They enter an organization generally as employees or contractors with the intent to steal, compromise, sabotage, and/or damage assets and the integrity of the company. They can either be funded and directed by nation states or private organizations—usually a competitor of the target company.

When the Jacobs Letter—a 37-page allegation penned by former Uber employee Ric Jacobs—was made public, it seemed that the civil suit between Google and Uber was no longer your usual intellectual property theft case. In this letter, Jacobs claimed that Uber ex-CEO Travis Kalanick was the mastermind behind the theft, with Anthony Levandowski as the actor. Although this allegation has yet to be substantiated, Levandowski would fit this type if it were found true.

The violent insider

Acts that negatively impact organizations don’t start or end in the abuse, misuse, and theft of non-physical assets. They can also include threats of a violent nature. Peopleware is as essential as the software and hardware an organization uses, if not even more crucial. So, what negatively affects employees in turn affects the organization, too.

Therefore, it’s imperative that organizations also identify, mitigate, and protect their staff from potential physical threats, especially those that are born from within. The CERT Insider Threat Center recognizes workplace violence (WPV) as another type of insider threat, and we categorize it under intentional insiders.

WPV is defined as violence, or the threat of violence, committed by insiders against their coworkers and/or themselves. This can manifest in the form of physical attacks; threatening or intimidating behavior and speech (written, verbal, or electronically transmitted); harassment; or other acts that can potentially put people at risk.

This author hopes that CERT and/or other organizations looking into insider threats expand their definition to include workplace bullying, domestic violence (e.g., when an abusive partner comes after their partner in the workplace), and other actions that put employee safety at risk or negatively impact their emotional and psychological well-being.

Read: Of weasels, snakes, and queen bees

Insider Threat Researcher Tracy Cassidy of CERT has identified [PDF] the following indicators that an employee may fall under this type:

  • History of violence
  • Legal problems
  • Loss of significant other
  • Conflict with supervisor
  • Potential loss of employment
  • Increased drinking
  • Concerning web searches

In 2015, Vester L. Flanagan II (aka Bryce Williams) shot and killed two of his former colleagues at WDBJ7, a local TV station in Roanoke, Virginia, during a live interview. Flanagan later posted a clip of the shooting on Facebook and Twitter, claiming that his victims had wronged him.

Two years after the Flanagan incident, Randy Stair posted troubling videos and messages on Twitter about his plot to kill his co-workers at the Weis supermarket in Pennsylvania. No one was entirely sure of his motive, but investigations revealed that he disliked his manager and had shown signs of extreme loneliness in the days before the incident.

Unintentional insiders

They have no ill intent or malice towards their employer, but their actions, inactions, and behavior sometimes cause harm to the organization, its assets, resources, properties, and people.

The accidental insider

They are also called oblivious, naïve, or careless insiders. This type is perhaps the most overlooked and underestimated when it comes to their potential risk and damage to organizations. Yet multiple studies confirm that accidental insiders account for a majority of the significant breaches that make headlines, and insiders of this type are relatively common.

Incidents like unknowingly or inadvertently clicking a link in an email message of dubious origin, accidentally posting or leaking information online, improperly disposing of sensitive documents, and misplacing company-owned assets (e.g., smartphones, CDs, USB drives, laptops), even if they only happen once, may not seem like a big deal. But these actions increase an organization’s exposure to risk that could lead to its compromise.

Here’s an example of an accidental insider’s potential for damage: a publicly accessible Amazon Web Services (AWS) account was used by hijackers to mine cryptocurrencies. Security researchers from RedLock investigated the matter and found misconfigurations in the AWS server that gave the hijackers access to credentials allowing anyone to open the S3 buckets where sensitive information was stored. It turned out that the account belonged to someone at Tesla, so the researchers alerted the company of the breach.

The negligent insider

Employees under this type are generally familiar with the organization’s security policies and the risks involved if they’re ignored. However, they look for ways to circumvent those policies anyway, especially if they feel the policies limit their ability to do their work.

A data analyst working for the Department of Veterans Affairs downloaded and took home the personal data of 26.5 million US military veterans. Not only was this a violation of the department’s policies, but the analyst was also not authorized to do it. The analyst’s home was then burglarized, and the laptop was stolen. The data included names, Social Security numbers, and dates of birth.

Steps to controlling insider incidents

While cybersecurity education and awareness are initiatives that every organization must invest in, there are times when these are simply not enough. Such initiatives may decrease the likelihood of accidental insider incidents, but they do little against negligence-based incidents, professional insiders, or other sophisticated attack campaigns. Organizations must implement controls and use software to minimize insider threat incidents. That said, organizations must also continue to drive education and awareness, as well as provide professional and emotional support for employees, to mitigate potential damage from accidental, malicious, or violent insiders.

Get executive support. As more and more organizations realize the risks insider threats pose, it also becomes easier to get executive buy-in on a program to reduce insider threat incidents in the workplace. Gather and use information about incidents that occurred within the organization (especially those the C-suite may not even be aware of) before pitching the idea of creating an insider threat program.

Build a team. If an organization employs thousands, it would be ideal to have a team that exclusively handles the insider threat program. Members must track, oversee, investigate, and document cases or incidents of insider threats. The team should comprise a multidisciplinary membership from security, IT security, HR, legal, communications, and other departments. If possible, the organization should also bring in outside help, such as a workplace violence consultant, a mental health professional, and even someone from law enforcement, to act as external advisors to the team.

Identify risks. Insider threats vary from one industry to another. It is vital that organizations identify the threats they are exposed to within their industry before they can come up with a plan to detect and mitigate them.

Update existing policies. This assumes that the organization already has a security or cybersecurity policy established; if not, creating one now is essential. It’s also important for the team to create a plan or process for responding to insider threat incidents, especially reports of workplace violence. The team should always remember that there is no one-size-fits-all approach, even for insider threat incidents of a similar nature.

Implement controls. An organization that has little-to-no controls isn’t secure at all. In fact, it is low-hanging fruit for external and internal actors. Controls keep an organization’s systems, network, and assets safe, and they minimize the risk of insider threats. Below are some controls organizations may want to consider adopting. (Again, which ones apply should be based on their risk assessment):

  • Block harmful activity. This includes access to particular websites and the downloading and installation of certain programs.
  • Whitelist applications. This can extend to the file types of email attachments employees are allowed to open.
  • Disable USB drives, CD drives, and web-based email programs.
  • Minimize accessibility of certain data. Organizations should focus on this, too, when it comes to their telework or remote workers.

Read: How to secure your remote workers

  • Apply the principle of least privilege, granting users only the level of access they need.
  • Place flags on old credentials. Former employees may attempt to use the credentials they used when they were still employed.
  • Create an employee termination process.
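To make the last two controls concrete, here is a minimal sketch of how flagging old credentials might work: compare authentication events against the list of accounts disabled during offboarding. The file names and CSV columns (`username`, `timestamp`) are assumptions for the example, not a reference to any particular product or log format.

```python
# Sketch: flag authentication events from accounts belonging to former
# employees. File names and CSV columns are hypothetical; adapt them to
# your own identity store and authentication logs.
import csv

def load_offboarded(path):
    """Read the set of usernames disabled during offboarding."""
    with open(path, newline="") as f:
        return {row["username"].strip().lower() for row in csv.DictReader(f)}

def flag_stale_logins(auth_log_path, offboarded):
    """Yield (timestamp, username) for any login by an offboarded account."""
    with open(auth_log_path, newline="") as f:
        for row in csv.DictReader(f):
            user = row["username"].strip().lower()
            if user in offboarded:
                yield row["timestamp"], user
```

In practice, alerts like these would feed into the organization’s SIEM or ticketing system rather than stand alone, and the offboarding list would come straight from the employee termination process described above.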

Install software. Many organizations may not realize that software helps in nipping insider threats in the bud. Below is a list of some programs the organization may want to consider using:

  • User activity monitoring software
  • Predictive/data analytics software (for looking for patterns collected from employee interactions within the organization’s network)
  • Security information and event management (SIEM) software
  • Log management software
  • Intrusion detection (IDS) and intrusion prevention (IPS) software
  • Virtual machines (for a safe environment to detonate or open potentially harmful files)
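To make the “looking for patterns” idea concrete, here is a toy sketch of the kind of baseline-and-deviation check a user activity analytics tool might perform. The input shape (a per-user list of daily file-access counts) is a hypothetical simplification; real products correlate far more signals than this.

```python
# Sketch: flag users whose latest daily activity deviates sharply from
# their own historical baseline. The data shape is a hypothetical
# simplification of what activity-monitoring software collects.
from statistics import mean, pstdev

def flag_anomalies(daily_counts, threshold=3.0):
    """daily_counts maps user -> [files accessed per day, ...].
    Flags users whose most recent day is more than `threshold`
    standard deviations away from their own baseline."""
    flagged = []
    for user, counts in daily_counts.items():
        if len(counts) < 2:
            continue  # not enough history to form a baseline
        baseline, latest = counts[:-1], counts[-1]
        mu, sigma = mean(baseline), pstdev(baseline)
        if sigma == 0:
            if latest != mu:  # perfectly flat baseline, any change stands out
                flagged.append(user)
        elif abs(latest - mu) / sigma > threshold:
            flagged.append(user)
    return flagged
```

For example, a user who normally touches a dozen files a day and suddenly accesses hundreds would be flagged for review, while routine day-to-day variation would not. A flag is only a prompt for human investigation, not proof of wrongdoing.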

It’s important to note that while software, controls, and policies designed to aid organizations in stopping insider risks are in place, insider threat incidents may never be eradicated entirely. Furthermore, predicting insider threats is not easy.

“To be able to predict when an insider maliciously wants to harm an organization, to defraud them, to steal something from them—it’s really hard with the technology alone to identify someone who is doing something with malicious intent,” said Randy Trzeciak, director of Carnegie Mellon University’s CERT program, in an interview with SearchSecurity.com. “You really do need to combine the behavioral aspects of what might motivate somebody to defraud an organization, or to steal intellectual property, or to sabotage a network or system, which is usually outside of the control of what a traditional IT department is and what they do to prevent or detect malicious activity by insiders.”

Jovi Umawing

Knows a bit about everything and a lot about several somethings. Writes about those somethings, usually in long-form.