
24 March 2018

Facebook’s Surveillance Machine

Zeynep Tufekci

In 2014, Cambridge Analytica, a voter-profiling company that would later provide services for Donald Trump’s 2016 presidential campaign, reached out with a request on Amazon’s “Mechanical Turk” platform, an online marketplace where people around the world contract with others to perform various tasks. Cambridge Analytica was looking for people who were American Facebook users. It offered to pay them to download and use a personality quiz app on Facebook called thisisyourdigitallife.

About 270,000 people installed the app in return for $1 to $2 per download. The app “scraped” information from their Facebook profiles as well as detailed information from their friends’ profiles. Facebook then provided all this data to the makers of the app, who passed it on to Cambridge Analytica.

A few hundred thousand people may not seem like a lot, but because Facebook users have a few hundred friends each on average, the number of people whose data was harvested reached about 50 million. Most of those people had no idea that their data had been siphoned off (after all, they hadn’t installed the app themselves), let alone that the data would be used to shape voter targeting and messaging for Donald Trump’s presidential campaign.

This weekend, after this was all exposed by The New York Times and The Observer of London, Facebook hastily made a public announcement that it was suspending Cambridge Analytica (well over a year after the election) and vehemently denied that this was a “data breach.” Paul Grewal, a vice president and deputy general counsel at Facebook, wrote that “the claim that this is a data breach is completely false.” He contended that Facebook users “knowingly provided their information, no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked.” He also said that “everyone involved gave their consent.”

Mr. Grewal is right: This wasn’t a breach in the technical sense. It is something even more troubling: an all-too-natural consequence of Facebook’s business model, which involves having people go to the site for social interaction, only to be quietly subjected to an enormous level of surveillance. The results of that surveillance are used to fuel a sophisticated and opaque system for narrowly targeting advertisements and other wares to Facebook’s users.

Facebook makes money, in other words, by profiling us and then selling our attention to advertisers, political actors and others. These are Facebook’s true customers, whom it works hard to please.

Facebook doesn’t just record every click and “like” on the site. It also collects browsing histories. It also purchases “external” data like financial information about users (though European nations have some regulations that block some of this). Facebook recently announced its intent to merge “offline” data — things you do in the physical world, such as making purchases in a brick-and-mortar store — with its vast online databases.

Facebook even creates “shadow profiles” of nonusers. That is, even if you are not on Facebook, the company may well have compiled a profile of you, inferred from data provided by your friends or from other data. This is an involuntary dossier from which you cannot opt out in the United States.


Despite Facebook’s claims to the contrary, the people involved in the Cambridge Analytica data-siphoning incident did not give their “consent” — at least not in any meaningful sense of the word. It is true that if you found and read all the fine print on the site, you might have noticed that in 2014, your Facebook friends had the right to turn over all your data through such apps. (Facebook has since turned off this feature.) If you had managed to make your way through a bewildering array of options, you might even have discovered how to turn the feature off.

This wasn’t informed consent. This was the exploitation of user data and user trust.

Let’s assume, for the sake of argument, that you had explicitly consented to turn over your Facebook data to another company. Do you keep up with the latest academic research on computational inference? Did you know that algorithms now do a pretty good job of inferring a person’s personality traits, sexual orientation, political views, mental health status, substance abuse history and more just from his or her Facebook “likes” — and that there are new applications of this data being discovered every day?

Given this confusing and rapidly changing state of affairs about what the data may reveal and how it may be used, consent to ongoing and extensive data collection can be neither fully informed nor truly consensual — especially since it is practically irrevocable.

What did Cambridge Analytica do with all the data? With whom else might it have shared it? In 2015, Facebook sent a stern letter to Cambridge Analytica asking that the data be deleted. Cambridge Analytica employees have said that the company merely checked a box on a form indicating that the data had been deleted. Facebook took its word for it: It did not inform the 50 million affected users, did not make the issue public and did not sanction Cambridge Analytica at the time.

The New York Times and The Observer of London are reporting that the data was not deleted. And Cambridge Analytica employees are claiming that the data formed the backbone of the company’s operations in the 2016 presidential election.

If Facebook failed to understand that this data could be used in dangerous ways, that it shouldn’t have let anyone harvest data in this manner and that a third party ticking a box on a form wouldn’t free the company from responsibility, then it had no business collecting anyone’s data in the first place. But the vast infrastructure Facebook has built to obtain data, and its consequent half-a-trillion-dollar market capitalization, suggest that the company knows all too well the value of this kind of vast data surveillance.

Should we all just leave Facebook? That may sound attractive, but it is not a viable solution. In many countries, Facebook and its products simply are the internet. Some employers and landlords demand to see Facebook profiles, and there are increasingly vast swaths of public and civic life — from volunteer groups to political campaigns to marches and protests — that are accessible or organized only via Facebook.

The problem here goes beyond Cambridge Analytica and what it may have done. What other apps were allowed to siphon data from millions of Facebook users? What if one day Facebook decides to suspend from its site a presidential campaign or a politician whose platform calls for things like increased data privacy for individuals and limits on data retention and use? What if it decides to share data with one political campaign and not another? What if it gives better ad rates to candidates who align with its own interests?

A business model based on vast data surveillance and charging clients to opaquely target users based on this kind of extensive profiling will inevitably be misused. The real problem is that billions of dollars are being made at the expense of the health of our public sphere and our politics, and crucial decisions are being made unilaterally, and without recourse or accountability.
