Researchers go hunting for Netflix’s Bandersnatch

A new research paper from the Indian Institute of Technology Madras explains how popular Netflix interactive show Bandersnatch could fall victim to a side-channel attack.

In 2016, Netflix began adding TLS (Transport Layer Security) to their video content to ensure strangers couldn’t eavesdrop on viewer habits. Essentially, now the videos on Netflix are hidden away behind HTTPS—encrypted and compressed.

Previously, Netflix had run into some optimisation issues when trialling the new security boost, but they got there in the end—which is great for subscribers. However, this new research illustrates that even with such measures in place, snoopers can still make accurate observations about their targets.

What is Bandersnatch?

Bandersnatch is a 2018 film on Netflix that is part of the science fiction series Black Mirror, an anthology about the ways technology can have unforeseen consequences. Bandersnatch gives viewers a choose-your-own-adventure-style experience, regularly asking them to pick between doing one thing or another. Not every choice is important, but you’ll never quite be sure which ones will steer you to one of 10 endings.

Charlie Brooker, the brains behind Bandersnatch and Black Mirror, was entirely aware of the new, incredibly popular wave of full motion video (FMV) games on platforms such as Steam [1], [2], [3]. Familiarity with Scott Adams text adventures and the choose your own adventure books of the ’70s and ’80s would also be a given.

No surprise, then, that Bandersnatch, essentially an interactive FMV game in movie form, became a smash hit. Also notable, continuing the video game link: it was built using Twine, a popular tool for piecing together interactive fiction in gaming circles.

What’s the problem?

The researchers figured out a way to determine which options were selected in any given play-through across multiple network environments. During testing, browsers, networks, operating systems, connection types, and more were varied across 100 participants.

Bandersnatch presents a choice between two options at multiple points throughout the story. There’s a 10-second window to make each choice. If nothing is selected, the story defaults to one of the options and continues on.

Under the hood, Bandersnatch is divided into multiple pieces, like a flowchart. Larger, overarching slices of script go about their business, while within those slices are smaller fragments where storyline can potentially branch out.
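To make that structure concrete, here’s a minimal sketch of how a branching story can be modelled as a flowchart. The segment names and layout are invented for illustration; this is not Netflix’s actual data format.

```python
# A minimal sketch of a branching story modelled as a flowchart. Segment
# names and structure are invented for illustration; this is not Netflix's
# actual data format.
story_graph = {
    "breakfast": {
        "choices": ["Frosties", "Sugar Puffs"],  # the two on-screen options
        "default": "Frosties",                   # used if the viewer does nothing
        "next": {"Frosties": "bus_ride", "Sugar Puffs": "bus_ride"},
        "window_seconds": 10,                    # time allowed to decide
    },
    "bus_ride": {"choices": [], "next": {}},     # a segment with no branch
}

def next_segment(current, choice=None):
    """Return the next segment ID, falling back to the default choice."""
    node = story_graph[current]
    if not node["choices"]:
        return None
    picked = choice if choice in node["choices"] else node.get("default")
    return node["next"].get(picked)

print(next_segment("breakfast", None))  # -> 'bus_ride', reached via the default
```

The detail that matters for what follows is the default: if the viewer does nothing, the player already knows where the story goes next.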

This is where we take a quick commercial break and introduce ourselves to JSON.

Who is JSON?

He won’t be joining us. However, JavaScript Object Notation will.

Put simply, JSON is an easily readable format for sending data between servers and web applications. In fact, it looks more like a notepad file than a pile of obscure code.
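To see what that readability looks like in practice, here’s a tiny, made-up payload. The field names are illustrative only, not anything Netflix actually sends.

```python
import json

# A made-up example payload; the field names are illustrative only.
viewer_event = {
    "title": "Bandersnatch",
    "segment": "breakfast_choice",
    "selection": "Sugar Puffs",
}

# JSON is simply the text form of this data: readable keys and values,
# not an obscure binary blob.
print(json.dumps(viewer_event, indent=2))
```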

In Bandersnatch, there is a set of choices considered to be the default flow of the story. That data is prefetched, allowing viewers who pick the default option, or do nothing at all, to keep streaming without interruption.

When a viewer reaches a point in the story where they must make a choice, a JSON file is sent from the browser to let the Netflix server know. Do nothing in the 10-second window? Under the hood, the prefetched data continues to stream, and viewers continue their journey along the default storyline.

If the viewer chooses the other, non-default option, however, then the prefetched data is abandoned and a second, different type of JSON file is sent out requesting the alternate story path.

What we have here is a tale of two JSONs.
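Sticking with invented field names, the two paths might be modelled like this. It’s a sketch of the pattern described above, not Netflix’s real wire format.

```python
import json

def build_choice_payload(segment, selection, default):
    """Model the two kinds of messages described above.

    Field names and structure are invented for illustration; they are not
    Netflix's actual wire format.
    """
    if selection == default:
        # Default path: the prefetched video keeps playing, so the browser
        # only needs to confirm the (default) choice.
        return {"type": "choice", "segment": segment, "selection": selection}
    # Non-default path: the prefetched data is thrown away, and a second,
    # different message asks for the alternate branch of the story.
    return {
        "type": "fetch_branch",
        "segment": segment,
        "selection": selection,
        "discard_prefetch": True,
    }

print(json.dumps(build_choice_payload("breakfast", "Frosties", "Frosties")))
print(json.dumps(build_choice_payload("breakfast", "Sugar Puffs", "Frosties")))
```

Even after encryption, those two paths differ in how many requests go out and roughly how large they are, and that is exactly the sort of pattern a snooper on the network can measure.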

Although the traffic between the viewer’s browser and Netflix’s servers is encrypted, the researchers in this latest study were able to work out which choices participants made 96 percent of the time by analysing the number and type of JSON files sent.
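The paper’s precise methodology isn’t reproduced here, but the general shape of such a side channel can be sketched: record what the encrypted traffic around each choice point looks like for known selections, then match new observations against those fingerprints. The features and numbers below are invented purely for illustration.

```python
# Illustrative sketch of traffic fingerprinting: classify an encrypted burst
# of requests by comparing simple features (request count, total bytes) to
# previously observed patterns. The numbers are made up; the study's actual
# features and model are described in the paper itself.

# Fingerprints recorded ahead of time by playing through known choices.
FINGERPRINTS = {
    "default":     {"requests": 1, "approx_bytes": 600},
    "non_default": {"requests": 2, "approx_bytes": 1800},
}

def classify(observed_requests, observed_bytes):
    """Pick the fingerprint closest to what was observed on the wire."""
    def distance(fp):
        return (abs(fp["requests"] - observed_requests) * 1000
                + abs(fp["approx_bytes"] - observed_bytes))
    return min(FINGERPRINTS, key=lambda name: distance(FINGERPRINTS[name]))

# A snooper who saw two requests totalling ~1.7 KB would guess "non_default".
print(classify(2, 1700))
```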

Should we be worried?

This may not be a particularly big problem for Netflix viewers, yet. However, if threat actors could intercept and follow user choices using a similar side channel, they could build reasonably accurate behavioral profiles of their victims.

For instance, viewers of Bandersnatch are asked questions like “Frosties or Sugar Puffs?”, “Visit therapist or follow Colin?”, and “Throw tea over computer or shout at dad?”. The choices made could potentially reveal benign information, such as food and music preferences, or more sensitive intel, such as a penchant for violence or political leanings.

Just as we can’t second-guess everyone’s threat model (even for Netflix viewers), we also shouldn’t dismiss this. There are plenty of dangerous ways monitoring along these lines could be abused, whether the traffic is encrypted or not. Additionally, this is something most of us going about our business probably haven’t accounted for, much less know how to deal with.

What we do know is that it’s important for content providers affected by this research, such as gaming studios or streaming services, to account for it and look at ways of obfuscating their traffic still further.

After all, a world where your supposedly private choices are actually parseable feels very much like a Black Mirror episode waiting to happen.

ABOUT THE AUTHOR

Christopher Boyd

Former Director of Research at FaceTime Security Labs. He has a very particular set of skills. Skills that make him a nightmare for threats like you.