Privacy is a human right, and online privacy should be no exception.
Yet, as the US considers new laws to protect individuals’ online data, at least two proposals—one statewide law that can still be amended and one federal draft bill that has yet to be introduced—include an unwelcome bargain: exchanging money for privacy.
This framework, sometimes called “pay-for-privacy,” is plainly wrong. It casts privacy as a commodity that individuals with the means can easily purchase, and a move in this direction would deepen the divide between socioeconomic classes: the “haves” could operate online free from prying eyes, while the “have-nots” would be forced to forfeit that right.
Though this framework has been used by at least one major telecommunications company before, and there are no laws preventing its practice today, those in cybersecurity and the broader technology industry must put a stop to it. Before pay-for-privacy becomes law, privacy as a right should become industry practice.
Data privacy laws prove popular, but flawed
Last year, the European Union put into effect one of the most sweeping set of data privacy laws in the world. The General Data Protection Regulation, or GDPR, regulates how companies collect, store, share, and use EU citizens’ data. The law has inspired countries everywhere to follow suit, with Italy (an EU member) issuing regulatory fines against Facebook, Brazil passing a new data-protective bill, and Chile amending its constitution to include data protection rights.
The US is no exception to this ripple effect.
In the past year, Senators Ron Wyden of Oregon, Marco Rubio of Florida, Amy Klobuchar of Minnesota, and Brian Schatz of Hawaii (the last joined by 14 other senators as co-sponsors) proposed separate federal bills to regulate how companies collect, use, and protect Americans’ data.
Sen. Rubio’s bill asks the Federal Trade Commission to write its own set of rules, which Congress would then vote on two years later. Sen. Klobuchar’s bill would require companies to write clear terms of service agreements and to send users notifications about privacy violations within 72 hours. Sen. Schatz’s bill introduces the idea that companies have a “duty to care” for consumers’ data by providing a “reasonable” level of security.
But it is Sen. Wyden’s bill, the Consumer Data Protection Act, that stands out, and not for good reason. Hidden among several privacy-forward provisions, like stronger enforcement authority for the FTC and mandatory privacy reports for companies of a certain size, is a dangerous pay-for-privacy stipulation.
According to the Consumer Data Protection Act, companies that require user consent for their services could charge users a fee if those users have opted out of online tracking.
If passed, here’s how the Consumer Data Protection Act would work:
Say a user, Alice, no longer feels comfortable having companies collect, share, and sell her personal information to third parties for the purpose of targeted ads and increased corporate revenue. First, Alice would register with the Federal Trade Commission’s “Do Not Track” website, where she would choose to opt out of online tracking. Then, online companies with which Alice interacts would be required to check Alice’s “Do Not Track” status.
If a company sees that Alice has opted out of online tracking, that company is barred from sharing her information with third parties and from following her online to build and sell a profile of her Internet activity. Companies that are run almost entirely on user data—including Facebook, Amazon, Google, Uber, Fitbit, Spotify, and Tinder—would need to heed users’ individual decisions. However, those same companies could present Alice with a difficult choice: She can continue to use their services, free of online tracking, so long as she pays a price.
This represents a literal price for privacy.
Electronic Frontier Foundation Senior Staff Attorney Adam Schwartz said his organization strongly opposes pay-for-privacy systems.
“People should be able to not just opt out, but not be opted in, to corporate surveillance,” Schwartz said. “Also, when they choose to maintain their privacy, they shouldn’t have to pay a higher price.”
Pay-for-privacy schemes can come in two varieties: individuals can be asked to pay more for more privacy, or they can pay a lower (discounted) amount and be given less privacy. Both options, Schwartz said, incentivize people not to exercise their privacy rights, either because the cost is too high or because the monetary gain is too appealing.
Both options also harm low-income communities, Schwartz said.
“Poor people are more likely to be coerced into giving up their privacy because they need the money,” Schwartz said. “We could be heading into a world of the ‘privacy-haves’ and ‘have-nots’ that conforms to current economic statuses. It’s hard enough for low-income individuals to live in California with its high cost-of-living. This would only further aggravate the quality of life.”
Unfortunately, a pay-for-privacy provision is also included in the California Consumer Privacy Act, which the state passed last year. Though the law includes a “non-discrimination” clause meant to prevent just this type of practice, it also includes an exemption that allows companies to provide users with “incentives” to still collect and sell personal information.
In a longer blog post about ways to improve the law, which was then still a bill, Schwartz and other EFF attorneys wrote:
“For example, if a service costs money, and a user of this service refuses to consent to collection and sale of their data, then the service may charge them more than it charges users that do consent.”
The alarm over pay-for-privacy isn’t theoretical—the scheme has been implemented before, and there is no law stopping companies from doing it again.
In 2015, AT&T offered a $30-a-month discount on broadband service to users who agreed to have their Internet activity tracked. According to AT&T’s own words, that Internet activity included the “webpages you visit, the time you spend on each, the links or ads you see and follow, and the search terms you enter.”
Paying for privacy isn’t always so obvious, though, with real dollars coming out of or going into a user’s wallet or checking account. More often, it happens behind the scenes, and it isn’t the user getting richer—it’s the companies.
Powered by mountains of user data for targeted ads, Google-parent Alphabet recorded $32.6 billion in advertising revenue in the last quarter of 2018 alone. In the same quarter, Twitter recorded $791 million in ad revenue. And, notable for its CEO’s insistence that the company does not sell user data, Facebook’s prior plans to do just that were revealed in documents posted this week. Signing up for these services may be “free,” but that’s only because the product isn’t the platform—it’s the user.
A handful of companies currently reject this approach, though, refusing to sell or monetize users’ private information.
As for Google’s very first product—online search—the clearest privacy alternative is DuckDuckGo. The privacy-focused service does not track users’ searches, and it does not build individualized profiles of its users to deliver unique results.
Even without monetizing users’ data, DuckDuckGo has been profitable since 2014, said community manager Daniel Davis.
“At DuckDuckGo, we’ve been able to do this with ads based on context (individual search queries) rather than personalization,” Davis said.
Davis said that DuckDuckGo’s decisions are steered by a long-held belief that privacy is a fundamental right. “When it comes to the online world,” Davis said, “things should be no different, and privacy by default should be the norm.”
It is time other companies follow suit, Davis said.
“Control of one’s own data should not come at a price, so it’s essential that [the] industry works harder to develop business models that don’t make privacy a luxury,” Davis said. “We’re proof this is possible.”
Hopefully, other companies are listening, because it shouldn’t matter whether pay-for-privacy is codified into law—it should never be accepted as an industry practice.