Attribution Part II: Don't overthink it

Threat modeling: What are you so afraid of?

When asked “what should we defend against,” a common response by a decision maker is “everything,” operating under the implicit logic that if a threat exists, why on earth wouldn’t an organization defend against it? Firstly, because the cost curve for that security strategy leans towards the exponential. Secondly, because a threat isn’t a threat unless it is a threat towards you. POS skimmers, air gap jumpers, and Gamera all exist as potentially catastrophic security threats, but not all of those threats are directed against all of us, all the time. So, when allocating time, energy, and funds towards a secure network, how do we decide which frightening news story to respond to, and which to file away under “interesting, but not relevant?” That’s where threat modeling comes in.

From Wikipedia: “Threat modeling is a process by which potential threats can be identified, enumerated, and prioritized – all from a hypothetical attacker’s point of view. [emphasis mine] The purpose of threat modeling is to provide defenders with a systematic analysis of the probable attacker’s profile, the most likely attack vectors, and the assets most desired by an attacker.”

The operative phrase is “from a hypothetical attacker’s point of view.” Not all attackers have the same targets, and not all networks attract the same attackers. If you do not own a Tesla, you do not need to defend against the recent hack demonstrated by Chinese researchers. Easy enough. But who are your attackers, and what might they want? Here’s where things get hairy. Defenders have a habit of mind that attributes their own personality and motivations to the attacker. For example, destructive malware used to be seen as an unrealistic threat, because what would a threat actor stand to gain from it? It was unlikely, right up until it wasn’t.

Another common error is to assume the threat actor’s motives are unknowable, that they are simply chaos agents striking out opportunistically, or that criminals have no rational motive other than money. This is a little silly, as everyone, hackers included, acts in ways that satisfy motivations rational to them. We can short-circuit this cognitive error with a liberal application of empathy—by imagining what sort of motivations an attacker would need for our network to be an attractive target. Every hacker conducts targeting. For example:

  • Malware will filter based on geolocation
  • Phishing will imitate one financial institution to the exclusion of others
  • Business email compromise will target one executive to the exclusion of others

These observable facts prove that there are targeting decisions being made on the threat actor’s end. What people don’t always realize is that these decisions are a weakness—they betray motivations and priorities of the hacker behind the screen. We can use them to construct a threat model and orient defenses accordingly.
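To make this concrete, a defender can invert an attacker’s observed targeting criteria to ask, “does this threat actually apply to us?” The following is a minimal, hypothetical Python sketch (all names and attributes are invented for illustration) of how geofencing and sector targeting seen in the wild map onto a simple relevance check:

```python
# Hypothetical sketch: invert an attacker's observed targeting decisions
# to ask whether our organization matches a given threat's profile.
# All profile names and attributes here are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class ThreatProfile:
    name: str
    targets_sectors: set = field(default_factory=set)  # victim sectors observed in campaigns
    targets_regions: set = field(default_factory=set)  # geolocation filtering seen in the malware
    motivation: str = ""

@dataclass
class Organization:
    sector: str
    region: str

def threat_applies(threat: ThreatProfile, org: Organization) -> bool:
    """A threat is *our* threat only if its observed targeting matches us.
    Empty criteria mean the actor targets indiscriminately."""
    sector_match = not threat.targets_sectors or org.sector in threat.targets_sectors
    region_match = not threat.targets_regions or org.region in threat.targets_regions
    return sector_match and region_match

banking_phisher = ThreatProfile(
    name="banking phishing kit",
    targets_sectors={"finance"},
    targets_regions={"US", "UK"},
    motivation="credential theft for financial gain",
)

us_bakery = Organization(sector="food service", region="US")
uk_bank = Organization(sector="finance", region="UK")

print(threat_applies(banking_phisher, us_bakery))  # False: someone else's threat
print(threat_applies(banking_phisher, uk_bank))    # True: ours to defend against
```

The design choice mirrors the point above: targeting criteria the attacker reveals become filters the defender can run against their own profile, separating “interesting” threats from relevant ones.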

There are many, many threat models available on the internet with extensive documentation on how to apply them to your organization. Most are designed to map out data flow, identify soft points in organizational processes, and assign mitigations based on the specific type of probable attacker and their identified motivations. These models are great, they are thorough, and nobody ever uses them. Here’s why:

[Image: a dense, sprawling flowchart]

They tend to look like this. Nobody likes looking at this. Organizational decision makers especially do not like looking at this.

Most threat model processes available online are technically correct to an exquisite degree, but they are still profoundly wrong because they ignore the human factors in the system—namely, that no one in their right mind wants to read a Visio diagram that scrolls across three pages. So these things have a habit of eating enormous numbers of SOC hours, only to be shelved shortly after presentation and never revisited. How, then, do we assess threats in a way people will actually listen to? We ask one question:

Who are our attackers and what do they want?

This is a simple, accessible question that requires minimal technical expertise to answer effectively. If you are a government, your attackers are APTs who want actionable strategic intelligence. If you are not a government or a government contractor, spending on APT mitigations rather than on off-site backups can seem a little foolish, because you’re defending against someone else’s threat. If your company isn’t in the habit of assessing its own threats, large security vendors have taken to publishing the general threats we all face on an annual basis (shameless plug for our own data here).

It’s easy to talk in hypotheticals about modeling threats to the organization, so let’s dig into some practical examples with three case studies on threats from the past.

Case study: Mecha Godzilla

[Image: Super Mechagodzilla head at Abeno Harukas Art Museum, August 31, 2014]

Mecha Godzilla was constructed by the Japanese military to defend Tokyo against incursions by Godzilla. Mecha Godzilla is powered by a heavy water nuclear reactor and can fire missiles, as well as energy beams from its eyes.

  • Targets: Godzilla and secondary mutated animals in the Tokyo region
  • Motivations: Defending the city
  • Secondary impacts: public infrastructure damage
  • Final threat profile: Mecha Godzilla does not pose a direct threat to organizations that are not Godzilla. Secondary impacts can be mitigated via updated organizational resilience plans.

Case study: Gamera

[Image: Gamera statue in Hamajima]

Gamera is a giant mutated turtle that can walk on two legs and fly. It can breathe fire, shoot fireballs, and feed on petroleum products. It is weak to cold.

  • Targets: Energy sources, fire, raw petroleum
  • Motivations: Feeding on energy, finding others of its kind
  • Secondary impacts: public infrastructure damage, energy cost increases, and disruptions to air travel
  • Final threat profile: Gamera poses a direct, significant threat to organizations in the energy sector. Non-renewable energy companies should move Gamera mitigations to a critical priority. Longer term mitigations can include diversifying into renewable energy sources. Organizations not in the energy sector should implement Gamera mitigations in the form of organizational resilience plans, limiting non-essential air travel and maintaining secondary power sources for critical infrastructure.

Case study: Cylons

[Image: Cylon]

Cylons are synthetic humanoids who violently rebelled against mankind and have an unspecified plan, probably for revenge. Cylons have been known to attack humans directly, but are also highly proficient at cyber attacks making use of spyware, destructive malware, and data exfiltration. A strong motivation to kill all humans, combined with best-in-class data backup processes, qualifies the Cylons as an Advanced Persistent Threat.

  • Targets: All humans
  • Motivations: Revenge, religious, and self-actualization
  • Secondary impacts: Overzealous network segmentation can cause targeted organizations to experience a marked drop in data management efficiency.
  • Final threat profile: Cylons are an advanced persistent threat group with a propensity for indiscriminate targeting and multi-vector, long-term attempts at network penetration. Highly skilled at social engineering, Cylons have been observed circumventing security controls by attacking human vectors. Cylons are a critical, high-priority threat to organizations composed of humans and require mitigations incorporating defense in depth.

The common factor in all these case studies is that each assesses threats from the attacker’s point of view. Should a threat actor state a motivation to kill all humans, we take that at face value and implement mitigations accordingly. If a threat actor states a motivation to take down the international banking system, we also take that at face value, regardless of their actual ability to achieve it. Motivations may not predict the consequences of an attack, but they do tell us what that attack will probably look like.
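That assessment pattern—take stated motivations and targeting at face value, then score each threat by whether its targeting includes you—can be sketched as a small prioritization routine. The target tags and priority labels below are invented for illustration:

```python
# Hypothetical sketch: score the case-study threats for a given organization,
# taking each actor's stated targeting at face value. Names, target tags,
# and priority labels are invented for illustration.

CASE_STUDIES = [
    {"name": "Mecha Godzilla", "targets": {"godzilla"}},
    {"name": "Gamera",         "targets": {"energy"}},
    {"name": "Cylons",         "targets": {"humans"}},
]

def priority(threat: dict, org_attributes: set) -> str:
    """If the actor's stated targeting overlaps with what we are, the threat
    is ours and critical; otherwise we only plan for secondary impacts."""
    if threat["targets"] & org_attributes:
        return "critical"
    return "resilience planning only"

# A non-renewable energy company staffed by humans (and not by Godzilla):
org = {"energy", "humans"}
for threat in CASE_STUDIES:
    print(f"{threat['name']}: {priority(threat, org)}")
```

Run against this organization, Mecha Godzilla falls to resilience planning only, while Gamera and the Cylons both score as critical—exactly the prioritization the case studies reach by hand.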

Some final takeaways on threat modeling:

  1. Listen to your threat actors. Whether or not you know who they are, you probably know very well what they want.
  2. Defend against your threats, not someone else’s.
  3. Communicate your threats to decision makers in human-readable form, not impenetrable charts.

And if you’d like an exceptional article going over the theory and practice of formal modeling, please check out:

https://baoz.net/guerrilla-threat-modelling/

ABOUT THE AUTHOR

William Tsing

Breaking things and wrecking up the place since 2005.