Facebook’s history betrays its privacy pivot

Facebook CEO Mark Zuckerberg proposed a radical pivot for his company this month: it would start caring—really—about privacy, building out a new version of the platform that turns Facebook less into a public, open “town square” and more into a private, intimate “living room.”

Zuckerberg promised end-to-end encryption across the company’s messaging platforms, interoperability, disappearing messages, posts, and photos for users, and a commitment to store less user data, while also refusing to put that data in countries with poor human rights records.

If carried out, these promises could bring user privacy front and center.

But Zuckerberg’s promises were met with exhaustion by users, privacy advocates, technologists, and industry experts, including those of us at Malwarebytes. Respecting user privacy makes for a better Internet, period. And Zuckerberg’s proposals are absolutely a step in the right direction. Unfortunately, there is a chasm between Zuckerberg’s privacy proposal and Facebook’s privacy success. Given Zuckerberg’s past performance, we doubt that he will actually deliver, and we blame no user who feels the same way.

The outside response to Zuckerberg’s announcement was swift and critical.

One early Facebook investor called the move a PR stunt. Veteran tech journalist Kara Swisher jabbed Facebook for a “shoplift” of a competitor’s better idea. Digital rights group Electronic Frontier Foundation said it would believe in a truly private Facebook when it sees it, and Austrian online privacy rights activist (and thorn in Facebook’s side) Max Schrems laughed at what he saw as hypocrisy: merging users’ metadata across WhatsApp, Facebook, and Instagram, and telling users it was for their own, private good.

The biggest obstacle to believing Zuckerberg’s words? For many, it’s Facebook’s history.

The very idea of a privacy-protective Facebook runs so counter to the public’s understanding of the company that Zuckerberg’s comments taste downright unpalatable. These promises come from a man whose crisis-management statements often lack the words “sorry” or “apology.” A man who, while his company was still struggling to understand a foreign intelligence disinformation campaign run on its platform, played would-be president, touring America on a so-called “listening tour.”

Users, understandably, expect better. They expect companies to protect their privacy. But can Facebook actually live up to that?

“The future of the Internet”

Zuckerberg opens his appeal with a shaky claim—that he has focused his attention in recent years on “understanding and addressing the biggest challenges facing Facebook.” According to Zuckerberg, “this means taking positions on important issues concerning the future of the Internet.”

Facebook’s vision of the future of the Internet has, at times, been largely positive. Facebook routinely supports net neutrality, and last year, the company opposed a dangerous, anti-encryption, anti-security law in Australia that could force companies around the world to comply with secret government orders to spy on users.

But Facebook’s lobbying record also reveals a future of the Internet that is, for some, less secure.

Last year, Facebook supported one half of a pair of sibling bills that eventually merged into one law. The law followed a convoluted, circuitous route, but its impact today is clear: Consensual sex workers have found their online communities wiped out, and are once again pushed into the streets, away from guidance and support, and potentially back into the hands of predators.

“The bill is killing us,” said one sex worker to The Huffington Post.

Though the law was ostensibly meant to protect sex trafficking victims, it has only made their lives worse, according to some sex worker advocates.

On March 21, 2018, the US Senate passed the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA). The bill combined an earlier version of its namesake with a separate, related bill, the Stop Enabling Sex Traffickers Act (SESTA). Despite clear warnings from digital rights groups and sex-positive advocates, Facebook supported SESTA in November 2017. According to The New York Times, Facebook made this calculated move to curry favor with some of its fiercest critics in US politics.

“[The] sex trafficking bill was championed by Senator John Thune, a Republican of South Dakota who had pummeled Facebook over accusations that it censored conservative content, and Senator Richard Blumenthal, a Connecticut Democrat and senior commerce committee member who was a frequent critic of Facebook,” the article said. “Facebook broke ranks with other tech companies, hoping the move would help repair relations on both sides of the aisle, said two congressional staffers and three tech industry officials.”

Last October, the bill came back to haunt the social media giant: a Jane Doe plaintiff in Texas sued Facebook for failing to protect her from sex traffickers.

Further into his essay, Zuckerberg promises that Facebook will continue to refuse to build data centers in countries with poor human rights records.

Zuckerberg’s concern is welcome and his cautions are well-placed. As the Internet has evolved, so has data storage. Users’ online profiles, photos, videos, and messages can travel across various servers located in countries around the world, away from a company’s headquarters. But this development poses a challenge. Placing people’s data in countries with fewer privacy protections—and potentially oppressive government regimes—puts everyone’s private, online lives at risk. As Zuckerberg said:

“[S]toring data in more countries also establishes a precedent that emboldens other governments to seek greater access to their citizens’ data and therefore weakens privacy and security protections for people around the world.”

But what Zuckerberg says and what Facebook supports are at odds.

Last year, Facebook supported the CLOUD Act, a law that lowered privacy protections around the world by allowing foreign governments to request their citizens’ online data directly from companies. It is a law that, according to the Electronic Frontier Foundation, could result in UK police inadvertently getting their hands on Slack messages written by an American, and then forwarding those messages to US police, who could then charge that American with a crime—all without a warrant.

The same day that the CLOUD Act was first introduced as a bill, it received immediate support from Facebook, Google, Microsoft, Apple, and Oath (formerly Yahoo). Digital rights groups, civil liberties advocates, and human rights organizations directly opposed the bill soon after. None of their efforts swayed the technology giants. The CLOUD Act became law just months after its introduction.

While Zuckerberg’s push to keep data out of human-rights-abusing countries is a step in the right direction for protecting global privacy, his company supported a law that could result in the opposite. The CLOUD Act does not meaningfully hinge on a country’s human rights record. Instead, it hinges on backroom negotiations between governments, away from public view.

The future of the Internet is already here, and Facebook is partially responsible for the way it looks.

Skepticism over Facebook’s origin story 2.0

For years, Zuckerberg told anyone who would listen—including US Senators hungry for answers—that he started Facebook in his Harvard dorm room. This innocent retelling involves a young, doe-eyed Zuckerberg who doesn’t care about starting a business, but rather, about connecting people.

Connection, Zuckerberg has repeated, was the ultimate mission. This singular vision was once employed by a company executive to hand-wave away human death for the “de facto good” of connecting people.

But Zuckerberg’s latest statement adds a new purpose, or wrinkle, to the Facebook mission: privacy.

“Privacy gives people the freedom to be themselves and connect more naturally, which is why we build social networks,” Zuckerberg said.

Several experts see ulterior motives.

Kara Swisher, the executive editor of Recode, said that Facebook’s re-steering is probably an attempt to remain relevant with younger users. Online privacy, data shows, is a top concern for that demographic. But caring about privacy, Swisher said, “was never part of [Facebook’s] DNA, except perhaps as a throwaway line in a news release.”

Ashkan Soltani, former chief technology officer of the Federal Trade Commission, said that Zuckerberg’s ideas were obvious attempts to leverage privacy as a competitive edge.

“I strongly support consumer privacy when communicating online but this move is entirely a strategic play to use privacy as a competitive advantage and further lock-in Facebook as the dominant messaging platform,” Soltani said on Twitter.

As to the commitment to staying out of countries that violate human rights, Riana Pfefferkorn, associate director of surveillance and cybersecurity at Stanford Law School’s Center for Internet and Society, pressed harder.

“I don’t know what standards they’re using to determine who are human rights abusers,” Pfefferkorn said in a phone interview. “If it’s the list of countries that the US has sanctioned, where they won’t allow exports, that’s a short list. But if you have every country that’s ever put dissidents in prison, then that starts some much harder questions.”

For instance, what will Facebook do if it wants to enter a country that, on paper, protects human rights, but in practice, utilizes oppressive laws against its citizens? Will Facebook preserve its new privacy model and forgo the market entirely? Or will it bend?

“We’ll see about that,” Pfefferkorn said in an earlier email. “[Zuckerberg] is answerable to shareholders and to the tyranny of the #1 rule: growth, growth, growth.”

Asked whether Facebook’s pivot will succeed, Pfefferkorn said the company has definitely made some important hires to help out. In the past year, Facebook brought aboard three critics and digital rights experts—one from EFF, one from New America’s Open Technology Institute, and another from Access Now—into lead policy roles. Further, Pfefferkorn said, Facebook has successfully pushed out enormous, privacy-forward projects before.

“They rolled out end-to-end encryption and made it happen for a billion people in WhatsApp,” Pfefferkorn said. “It’s not necessarily impossible.”

WhatsApp’s past is now Facebook’s future

In looking to the future, Zuckerberg first looks back.

To lend some authenticity to this new-and-improved private Facebook, Zuckerberg repeatedly invokes the reputation of a previously acquired company to bolster Facebook’s own.

WhatsApp, Zuckerberg said, should be the model for the all-new Facebook.

“We plan to build this [privacy-focused platform] the way we’ve developed WhatsApp: focus on the most fundamental and private use case—messaging—make it as secure as possible, and then build more ways for people to interact on top of that,” Zuckerberg said.

The secure messenger, which Facebook purchased in 2014 for $19 billion, is a privacy exemplar. It rolled out default end-to-end encryption for users in 2016 (under Facebook’s ownership), refuses to store keys that would grant access to users’ messages, and tries to limit user data collection as much as possible.

Still, several users believed that WhatsApp joining Facebook represented a death knell for user privacy. One month after the sale, WhatsApp co-founder Jan Koum tried to dispel any misinformation about WhatsApp’s compromised vision.

“If partnering with Facebook meant that we had to change our values, we wouldn’t have done it,” Koum wrote.

Four years after the sale, something changed.

Koum left Facebook in March 2018, reportedly troubled by Facebook’s approach to privacy and data collection. His departure followed that of his co-founder, Brian Acton, the year before.

In an exclusive interview with Forbes, Acton explained his decision to leave Facebook. It was, he said, very much about privacy.

“I sold my users’ privacy to a larger benefit,” Acton said. “I made a choice and a compromise. And I live with that every day.”

Strangely, in defending Facebook’s privacy record, Zuckerberg omits a recent pro-encryption episode. Last year, Facebook fought—and prevailed—against a reported US government request to “break the encryption” in its Facebook Messenger app. Zuckerberg also neglects to mention Facebook’s successful roll-out of optional end-to-end encryption in Messenger.

Further, relying so heavily on WhatsApp as a symbol of privacy is tricky. After all, Facebook didn’t purchase the company because of its philosophy. Facebook purchased WhatsApp because it was a threat. 

Facebook’s history of missed promises

Zuckerberg’s statement promises users an entirely new Facebook, complete with end-to-end encryption, ephemeral messages and posts, less intrusive and less permanent data collection, and no data storage in countries that have abused human rights.

These are strong ideas. End-to-end encryption is a crucial security measure for protecting people’s private lives, and Facebook’s promise to refuse to store encryption keys only further buttresses that security. Ephemeral messages, posts, photos, and videos give users the opportunity to share their lives on their own terms. And refusing to put data in known human-rights-abusing regimes could mean sacrificing significant market share, giving Facebook a chance to prove its commitment to user privacy.
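To make the first of those ideas concrete, here is a minimal sketch of what “end-to-end” means in practice. It uses the open-source PyNaCl library purely for illustration; it is not Facebook’s or WhatsApp’s actual implementation (WhatsApp is built on the Signal protocol), and the names and message are hypothetical. The structural point is that keys are generated and held on users’ devices, so the platform relaying messages holds nothing that could decrypt them.

```python
# A minimal end-to-end encryption sketch using PyNaCl (illustrative only;
# not Facebook's or WhatsApp's actual Signal-protocol implementation).
from nacl.public import PrivateKey, Box

# Each user generates a key pair on their own device. Private keys never
# leave the device, so the platform has no key it could store or hand over.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts directly to Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# The service in the middle only ever sees and relays opaque bytes.
relayed = bytes(ciphertext)

# Only Bob, holding his own private key, can recover the message.
assert Box(bob_key, alice_key.public_key).decrypt(relayed) == b"meet at noon"
```

That is what makes the promise not to store encryption keys meaningful: a server that never holds private keys has nothing to surrender.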

But Facebook’s promise-keeping record is far lighter than its promise-making record. In the past, whether Facebook promised a new product feature or better responsibility to its users, the company has repeatedly missed its own mark.

In April 2018, TechCrunch revealed that, as far back as 2010, Facebook deleted some of Zuckerberg’s private conversations and any record of his participation—retracting his sent messages from both his inbox and from the inboxes of his friends. The company also performed this deletion, which is unavailable to users, for other executives.

Following the news, Facebook announced a plan to give its users an “unsend” feature.

But nearly six months later, the company had failed to deliver on its promise. It wasn’t until February of this year that Facebook produced a half-measure: instead of giving users the ability to actually delete sent messages, as Facebook did for Zuckerberg, users could “unsend” an accidental message in the Messenger app within 10 minutes of sending it.

Gizmodo labeled it a “bait-and-switch.”

In October 2016, ProPublica purchased an advertisement in Facebook’s “housing categories” that excluded groups of users who were potentially African American, Asian American, or Hispanic. One civil rights lawyer called this exclusionary function “horrifying.”

Facebook quickly promised to improve its advertising platform by removing exclusionary options for housing, credit, and employment ads, and by rolling out better auto-detection technology to stop potentially discriminatory ads before they published.

One year later, in November 2017, ProPublica ran its experiment again. Discrimination, again, proved possible. The anti-discrimination tools Facebook had announced a year earlier caught nothing.

“Every single ad was approved within minutes,” the article said.

This time, Facebook shut the entire functionality down, according to a letter from Chief Operating Officer Sheryl Sandberg to the Congressional Black Caucus. (Facebook also announced the changes on its website.)

More recently, Facebook failed to deliver on a promise that users’ phone numbers would be protected from search. Today, through a strange workaround, users can still be “found” through the phone number that Facebook asked them to provide specifically for two-factor authentication.

Away from product changes, Facebook has repeatedly told users that it would commit itself to user safety, security, and privacy. The actual track record following those statements tells a different story, though.

In 2013, an Australian documentary filmmaker met with Facebook’s public policy and communications lead and warned him of the rising hate speech problem on Facebook’s platform in Myanmar. The country’s ultranationalist Buddhists were making false, inflammatory posts about the local Rohingya Muslim population, sometimes demanding violence against them. Riots had taken 80 people’s lives the year before, and thousands of Rohingya were forced into internment camps.

Facebook’s public policy and communications lead, Elliot Schrage, sent the Australian filmmaker, Aela Callan, down a dead end.

“He didn’t connect me to anyone inside Facebook who could deal with the actual problem,” Callan told Reuters.

By November 2017, the problem had exploded, with Myanmar torn and its government engaging in what the United States called “ethnic cleansing” against the Rohingya. In 2018, investigators from the United Nations placed blame on Facebook.

“I’m afraid that Facebook has now turned into a beast,” said one investigator.

In the intervening years, Facebook made no visible effort to fix the problem. By 2015, the company employed just two content moderators who spoke Burmese—the primary language of Myanmar. By mid-2018, the company’s content reporting tools had still not been translated into Burmese, handicapping the population’s ability to protect itself online. At that time, Facebook had not hired a single employee in Myanmar.

In April 2018, Zuckerberg promised to do better. Four months later, Reuters discovered that hate speech still ran rampant on the platform and that hateful posts as far back as six years had not been removed.

The international crises continued.

In March 2018, The Guardian revealed that a British data analytics company had harvested the Facebook profiles of tens of millions of users. This was the Cambridge Analytica scandal, and, for the first time, it directly implicated Facebook in an international campaign to sway the US presidential election.

Buffeted on all sides, Facebook released … an ad campaign. Drenched in sentimentality and barren of culpability, a campaign commercial vaguely said that “something happened” on Facebook: “spam, clickbait, fake news, and data misuse.”

“That’s going to change,” the commercial promised. “From now on, Facebook will do more to keep you safe and protect your privacy.”

Here’s what happened since that ad aired in April 2018.

The New York Times revealed that, over the past 10 years, Facebook had shared data with at least 60 device makers, including Apple, Samsung, Amazon, Microsoft, and BlackBerry. The Times also published an investigative bombshell on Facebook’s corporate culture, showing that, time and again, Zuckerberg and Sandberg responded to corporate crises with obfuscation, deflection, and, in the case of one transparency-focused project, outright anger.

A British parliamentary committee released documents that showed how Facebook gave some companies, including Airbnb and Netflix, access to its platform in exchange for favors. (More documents released this year showed prior attempts by Facebook to sell user data.) Facebook’s Onavo app was kicked off Apple’s App Store for gathering user data. Facebook also reportedly paid users as young as 13 to install a “Facebook Research” app on their own devices, distributed through a program Apple intended strictly for companies’ internal employee apps.

Oh, and Facebook suffered a data breach that potentially affected up to 50 million users.

While the substance of Zuckerberg’s promises could protect user privacy, the execution of those promises is still up in the air. It’s not that users don’t want what Zuckerberg is describing—it’s that they’re burnt out on him. How many times will they be forced to hear about another change of heart before Facebook actually changes for good?

Tomorrow’s Facebook

Changing the direction of a multibillion-dollar, international company is tough work. Several experts sound optimistic about Zuckerberg’s privacy roadmap, but just as many have depleted their faith in the company. If anything, public pressure on Facebook might be at its lowest point: detractors have removed themselves from the platform entirely, and supporters continue to dig deep into their own goodwill.

What Facebook does with this opportunity is entirely under its own control. Users around the world will be better off if the company decides that, this time, it’s serious about change. User privacy is worth the effort.
