21 July 2018

The New Economy’s Old Business Model Is Dead

BY HENRY FARRELL

The titans of the new economy are different from their predecessors in one very important way: They aren’t job creators — at least not on a scale to match their dizzying growth in value. General Motors, at its peak in 1979, had some 618,000 employees in the United States and 853,000 worldwide. Facebook had just over 25,000 employees in 2017, up from around 12,700 as recently as 2015. Google’s parent corporation, Alphabet, is the third-largest company in the world by market capitalization but has only about 75,000 employees.

But the yawning gap between technology companies’ revenues and their payrolls probably won’t last. The fact that they can have billions of users but only tens of thousands of employees is in part thanks to algorithms and machine learning, which have taken the place of many ordinary workers. It is also the result, however, of political decisions made back in the 1990s that freed these companies from regulation — and those political decisions probably won’t withstand increased scrutiny. As politicians and citizens get more worried about the behavior of technology giants, these companies are going to have to shoulder new regulatory burdens — and will then have no choice but to hire many more people to manage them. In other words, the new economy’s old business model might be about to come to an end.

Algorithms are the engine of online service companies. The business model of Silicon Valley companies is relatively straightforward. First, come up with a clever and compelling new service that people might plausibly want. Then, invest in the technology to deliver that service, combining commodity hardware (leased server space and computing power) with purpose-written software and algorithms to manage business processes and user interactions. Then, mortgage your future to promote the service, in the hope that it goes viral and starts being used by millions of people.

Under this model, new businesses must find real money upfront to design the software, straighten out the algorithms, and get the service up and running.

Venture capitalists provide this initial investment, spreading their bets across a large number of companies, nearly all of which fail. However, the few that succeed can make massive amounts of money because algorithms scale quickly, making it very, very cheap for online service companies to add a new customer.

That’s why giants dominate the new economy. If people like a company’s product, there is very little to stop it from growing further once it has gotten past the magic point at which its revenues start to exceed its costs. As Ben Thompson observes on his Stratechery blog, most online service companies face nearly flat marginal cost curves, unlike traditional companies that had to scale up capacity and employment as they added new customers. In the past, if you ran a traditional advertising-based publishing company, you had to keep hiring more employees to sell ads as the company grew bigger. If you are Facebook, you can turn your everyday sales over to automated algorithms, which make and recalibrate entire advertising marketplaces on the fly.
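To make the point concrete, here is a minimal sketch of the kind of automated market-clearing that replaces a human sales force: a toy second-price auction for a single ad impression. The advertiser names and bid values are invented, and Facebook’s real ad system is far more elaborate than this; the sketch only illustrates how software can price and allocate ads with no salespeople in the loop.

```python
# A toy second-price ad auction: the kind of automated market-clearing
# that stands in for a human ad-sales staff. This is a deliberately
# simplified sketch, not any company's actual auction, and the bids
# below are invented for illustration.

def run_auction(bids):
    """bids: dict of advertiser -> bid in dollars.
    Returns (winner, price): the highest bidder wins the impression
    but pays only the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda item: item[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

impression_bids = {"garden_supply_co": 1.20, "sneaker_brand": 0.95, "local_gym": 0.40}
winner, price = run_auction(impression_bids)
print(f"{winner} wins the impression and pays ${price:.2f}")
# -> garden_supply_co wins the impression and pays $0.95
```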

And so a few online service companies have become monstrously large in terms of users and revenue without having to hire many more employees, like those far-future mutants in old science fiction movies that had enormous heads supported by tiny bodies. Just like those mutants, however, many of these creatures only flourish in an environmentally sealed bubble.

Companies such as Facebook and Google draw energy from radioactive algorithms, but they have also grown because they were shielded from regulation by political decisions made in the 1990s. Back in that distant era, U.S. politicians were determined to prevent roadblocks on the so-called information superhighway. They fought regulation at both national and international levels, arguing that the new space of e-commerce should largely be ruled by self-regulation rather than government. Bill Clinton-era officials believed that regulators not only couldn’t catch up with the new business models that companies were coming up with but would probably only do harm if they did.

This is how social media companies started their life in a sterile environment that was hermetically sealed against the infectious threat of government intervention. Companies including Facebook built their entire business model around one such protection: the safe harbor from intermediary liability provided by legal instruments such as Section 230 of the 1996 Communications Decency Act. Traditional publishers are liable for whatever they publish. They can suffer serious legal penalties for publishing illegal content, such as child pornography. They can also be sued for libel or for distributing copyrighted material.

Section 230, however, made it clear that online service providers should be treated as mere intermediaries, like phone companies, rather than the publishers of any content that their users put up. Under normal circumstances, only the people who had actually created and uploaded the material would be legally liable. Online service providers had legal protection if they wanted to take objectionable material down, but they were not obligated to.

Without these legal protections, the cleverest algorithms in the world would not have allowed companies such as Facebook and YouTube to get away without hiring thousands of workers. Compliance is hard work, requiring the careful balancing of risks and often tricky management decisions.

Even under their current, relatively minimal obligations, social media companies find it hard to filter content, whether to enforce their own rules or to remove criminal material such as child pornography. Since the early days of the internet, when pranksters tried to trick innocents into clicking on shocking pictures, such as the notorious “goatse” (if you’re unfamiliar, consider yourself lucky), people have tried to game content moderation systems, creating a Red Queen’s race of outrage and counterresponse.

Basic moderation can be carried out by machine learning algorithms that can discover, for example, how to distinguish between pornography and ordinary photographs, albeit with many errors of categorization. Yet even this kind of moderation needs to be backed up by human judgment. Adrian Chen writes in Wired about how Facebook and Twitter “rely on an army of workers” in the Philippines to filter obscene photos, beheading videos, and animal torture porn so that they do not appear on people’s social media feeds. One of Chen’s sources estimates that some 100,000 people worldwide are employed to carry out this difficult and psychologically damaging work.
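As a rough illustration of how that division of labor between machine and human might work, here is a sketch of a moderation pipeline in which a classifier makes only the confident calls and escalates everything else to a human reviewer. The features, labels, and confidence threshold are all invented stand-ins; a real system would operate on raw images and text at vastly larger scale.

```python
# Sketch of automated moderation backed by human review: a classifier
# handles the easy calls, and anything it is unsure about goes to a person.
# The feature vectors and labels here are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Pretend these are feature vectors extracted from uploaded images.
benign = rng.normal(loc=-1.0, scale=1.0, size=(200, 5))
violating = rng.normal(loc=1.0, scale=1.0, size=(200, 5))
X = np.vstack([benign, violating])
y = np.array([0] * 200 + [1] * 200)

model = LogisticRegression().fit(X, y)

def moderate(item, threshold=0.9):
    """Auto-remove or auto-approve only when the model is confident;
    otherwise escalate to a human moderator."""
    p_violating = model.predict_proba(item.reshape(1, -1))[0, 1]
    if p_violating >= threshold:
        return "auto-remove"
    if p_violating <= 1 - threshold:
        return "auto-approve"
    return "send to human reviewer"

new_items = rng.normal(loc=0.0, scale=1.5, size=(5, 5))
for item in new_items:
    print(moderate(item))
```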

Under current laws, Facebook and YouTube are not legally liable for their failure to block most of this material, but they want to avoid offending customers. Furthermore, while there are borderline cases, most of this material is relatively straightforward to categorize. This is what allows these companies to outsource the dirty work to machine learning algorithms and badly paid third-party contractors.

If, instead, these companies faced real legal risks and repercussions, they would have to greatly increase their compliance efforts. Every time they wanted to expand into new kinds of content or attract new customers, they would have to scale up their compliance efforts, too. They would likely still want to farm out the grunt work as much as possible to machine learning processes or exploitative relationships with subcontractors. But they would almost certainly have to hire many new employees, too, to manage compliance efforts that were too complex or too risky to be contracted out. The marginal costs of attracting new customers would increase substantially, changing, and perhaps even fundamentally challenging, their existing business model.

For example, Facebook does employ human moderators to identify hate speech. These moderators have mere seconds to decide whether an item is hate speech or not. If regulators outside the United States started to impose harsh penalties for letting hate speech through, Facebook would have to hire experienced moderators who had the time to carefully consider each item and how to respond.
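The arithmetic behind that staffing problem is simple, even if the real numbers are not public. A back-of-the-envelope sketch, with entirely invented volumes, shows how head count scales with review time per item:

```python
# Back-of-the-envelope moderator staffing: all numbers are invented;
# the point is only how head count scales with review time per item.
flagged_items_per_day = 1_000_000          # hypothetical daily review queue
seconds_per_shift = 8 * 60 * 60            # one moderator's working day

def moderators_needed(seconds_per_item):
    return flagged_items_per_day * seconds_per_item / seconds_per_shift

print(f"{moderators_needed(10):,.0f} moderators at 10 seconds per item")
print(f"{moderators_needed(300):,.0f} moderators at 5 minutes per item")
# 10 seconds per item -> roughly 350 moderators
# 5 minutes per item  -> roughly 10,400 moderators
```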

These are not abstract worries for social media companies. The bubble that protected companies such as Facebook, YouTube, and Google is about to pop. European regulators have made it clear that they want to bring U.S. online service companies to heel, while U.S. lawmakers are chipping away at Section 230 and similar provisions.

The first real sign of trouble for these companies was a European Court of Justice decision that Google had to remove search results that unreasonably interfered with individuals’ privacy rights. This ruling became notorious as the “right to be forgotten,” prompting an extensive public relations campaign by Google, which enlisted European and American scholars and former policymakers to complain that the ruling was a gross example of judicial overreach. While Google was worried about the ruling itself, the way it was worded was perhaps even more alarming. As the legal researcher Julia Powles notes, the European court determined that Google was a “data processor and controller,” suggesting that it could not keep hiding behind the excuse that it was a simple intermediary and might in the future be held to have a wide series of obligations to governments and its users.

These fears were amplified by recent European regulatory proposals to move toward an intermediary responsibility model in areas such as copyright, hate speech, and paid content. In the United States, President Donald Trump has just signed legislation that limits Section 230’s protections for sites that fail to stop sex trafficking.

However, the real legislative push is probably only just getting started. A series of revelations has done serious damage to Facebook, Google, and other companies. Facebook has allowed advertisers to specifically target audiences of anti-Semites and racists. It also allowed Russian operatives who were interested in sowing division and confusion in the U.S. democratic system to use its services without any effort to block or thwart them until well after the fact. YouTube has used recommendation algorithms that seem to systematically lead people from ordinary political controversy to deranged conspiracy theories, as the writer Zeynep Tufekci, among others, has identified.

None of these outcomes was the consequence of deliberate choices made by Facebook or YouTube. That is exactly the problem. They are the inevitable byproduct of a business model that relies on algorithms to create markets and to serve up stimulating content to users. When machine learning algorithms are left on their own to build marketplaces by discovering audiences with identifiable shared characteristics, they will not know that there is any innate difference between marketing to anti-Semites and marketing to people with a passion for gardening. When algorithms are optimized for user engagement, they will serve up shocking and alarming videos.
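A toy example makes the point about audience-building. A clustering algorithm run on invented user interest vectors will happily find statistically coherent segments, but nothing in the math tells it which segments should never be sold to advertisers in the first place; that judgment has to come from somewhere else.

```python
# Toy audience segmentation: k-means groups users with similar interest
# vectors, but nothing in the math distinguishes a gardening audience
# from a hateful one. The data here is entirely synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Columns might stand for engagement with different kinds of content;
# the algorithm never sees (or cares) what the content actually is.
audience_a = rng.normal(loc=[5, 1, 0], scale=0.5, size=(100, 3))  # e.g. gardening pages
audience_b = rng.normal(loc=[0, 1, 5], scale=0.5, size=(100, 3))  # e.g. extremist pages
users = np.vstack([audience_a, audience_b])

segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(users)
print("segment sizes:", np.bincount(segments))
# The output is just "cluster 0" and "cluster 1" -- equally targetable,
# unless a human decides one of them should never be a market at all.
```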

The problem for Facebook and Google and other companies is that really solving these problems, as opposed to just pretending to, requires radical changes to their business models.

Specifically, they will not be able to use algorithms to manage their users’ behavior without employing more human judgment to ensure that the algorithms do not go awry and to correct them if they do. Yet if they have to hire more employees per user, they will not be able to scale up at nearly zero cost as they have done in the past. Indeed, they will either have to scale back or transform their business models to curtail the things that their users can do — and the ways in which they feed back their content to them.

This explains why Facebook CEO Mark Zuckerberg was so insistent in his recent congressional testimony that algorithms could do more or less everything that members of Congress or regulators might want. If Zuckerberg had to hire more workers instead, Facebook’s core business model would come under challenge.

Yet even the most sophisticated algorithms cannot substitute for human judgment over a multitude of complex questions. Machine learning algorithms are excellent at discovering hidden structure in apparently disorganized information and at categorizing data that falls into distinct classes. They are badly suited to making the kinds of complex political judgment calls that regulators are starting to demand, and to justifying those decisions once they are made.

Social media companies have pulled off the magical trick of providing services to billions of users without hiring millions of employees. They have been able to do this in part because regulators and lawmakers have left them alone. Now they are becoming a target for regulators, exactly because they have become so central to life and because they are obviously incapable of addressing, or unwilling to address, the problems that their business model has created. The gears of time do not turn backward. Even if social media companies are compelled to live up to their regulatory responsibilities, they are not going to become General Motors. Yet they are discovering that algorithms aren’t nearly as complete a substitute for human employees as they once imagined.
