
6 November 2019

Are Facebook and Google State Actors?

By Jed Rubenfeld 

It cannot be thought that any single person or group shall ever have the right to determine what communication may be made to the American people. ... We cannot allow any single person or group to place themselves in a position where they can censor the material which shall be broadcasted to the public.

What Secretary Herbert Hoover warned against has now come to pass. A handful of internet mega-platforms, unsurpassed in wealth and power, exercise a degree of control over the content of public discourse that is unprecedented in history. No governmental actor in this country, high or low, has the authority to excise from even a small corner of public discourse opinions deemed too dangerous or offensive. Yet Facebook and Google do that every day for hundreds of millions of people.

This is permitted as a constitutional matter because Facebook and Google are private companies, whereas the Constitution applies only against state actors. If Facebook and Google were state actors, their censorship policies would have provoked a constitutional firestorm.

But suppose Google and Facebook are in fact state actors when blocking speech they deem objectionable? Suppose existing doctrine already compels this result—not through any fancy reconceptualization, but through a straightforward application of precedent? Then the firestorm would be long overdue, and the world of social media would, as a constitutional matter, have to be turned upside down.

That’s the world I believe we’re living in. At least there’s a very powerful argument for it, deserving of careful consideration, that seems to have escaped litigants and judges alike. If Congress had done in almost any other setting what it’s done to online speech, the unconstitutionality would have been immediately apparent.

Let’s begin with a simple hypothetical. Say that a state wants to shut down its abortion clinics. Legislators can’t just ban those clinics; that would be unconstitutional. So instead the legislature passes the Childbirth Decency Act, immunizing against any legal liability individuals who barricade abortion clinics, blocking all access to them. No one is statutorily compelled to do anything. But individuals who barricade abortion clinics cannot be sued in tort; they can’t be prosecuted for trespass; they can’t be held liable in any way.

When the barricading begins, would courts find no state action and thus no occasion for any constitutional analysis? The problem is that if legislators can pass an immunity law of this kind, they can circumvent almost any constitutional right.

Congress wants to look through the email of every major corporate CEO? Easy. The newly enacted Email Decency Act immunizes from all legal liability any hackers who break into the CEOs’ email files and transfer them to public databases. Confiscate guns? The Firearms Decency Act authorizes nongovernmental organizations to hire private security contractors to break into people’s homes in order to seize and dispose of any guns they find there, immunizing all parties from liability.

The principle at stake in all these hypothetical Decency Acts is clear. Some kind of constitutional scrutiny has to be triggered if legislators, through an immunity statute, deliberately seek to induce private conduct that would violate constitutional rights if state actors engaged in that conduct themselves. Yet this is exactly what Congress has done for internet censorship in the nonhypothetical Communications Decency Act (CDA).

Section 230 of the CDA, the single most important statutory provision touching the internet, famously grants online platforms a broad immunity against liability for the third-party content they carry. If an article or advertisement in the New York Times libels you, you can sue the Times; if an article or advertisement on Facebook does the same thing, Facebook is immune. Important as this measure is, it’s not my focus here.

Section 230 also grants online platforms a second, “good Samaritan” immunity. Under this provision, Section 230 authorizes internet platforms to block content deemed “lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable, whether or not such material is constitutionally protected,” explicitly exempting websites from most civil and state criminal liability for any action they take in a “good faith effort” to exclude such “offensive” material. Through this immunity, as Professor Dawn Nunziato puts it, “Congress encouraged private Internet actors to do what it could not do itself—restrict harmful, offensive, and otherwise undesirable speech, the expression of which would nonetheless be protected by the First Amendment.”

In other words, Section 230 squarely violates the principle suggested above. Its immunity is not universal—for example, the section does not apply to federal criminal law—but for all practical purposes, Section 230 immunizes and induces private conduct that would be unconstitutional if governmental actors did it themselves. Indeed, that was its purpose.

Lower courts have repeatedly ruled that Facebook, Google and other private internet platforms are not state actors—but the holdings in these cases have been based primarily on public-forum and public-function doctrine. In other words, courts have found that internet platforms do not become state actors merely by virtue of providing a forum for public expression, a holding consistent with the Supreme Court’s recent decision in Manhattan Community Access Corp. v. Halleck (finding that a privately owned public access television channel was not a state actor). But these cases don’t address the very different question posed by Section 230: Does state action exist when Congress passes an immunity statute designed to induce private parties to take action that would trigger constitutional review if engaged in by state actors? No case dealing with Section 230 seems even to have recognized this issue. The Supreme Court, however, has confronted that question—and the court’s answer was yes.

In 1985, the Federal Railroad Administration enacted regulations designed to induce private railroads to test their workers for drugs and alcohol. Subpart D of those regulations immunized from state law liability railroads that chose to administer these tests in specified circumstances. Subpart D didn’t compel the tests; it merely permitted and immunized them. In Skinner v. Railway Labor Executives’ Ass’n, railway workers sued the federal government, claiming that these tests would violate the Fourth Amendment. Defendants replied that any tests implemented under Subpart D would not be compulsory but, rather, would be undertaken voluntarily by private parties, so there would be no state action.

The district court had refused to accept this argument. So had the Ninth Circuit. And so did all nine justices of the Supreme Court. Said the court:

The fact that the Government has not compelled a private party to perform a search does not, by itself, establish that the search is a private one. Here, specific features of the regulations combine to convince us that the Government did more than adopt a passive position toward the underlying private conduct.

In particular, the Supreme Court emphasized that the “regulations, including those in Subpart D, pre-empt state laws” and made illegal all contracts prohibiting the tests, thus immunizing employers who administered the tests from all state law liability, including for breach of contract. The justices further stressed that workers were not “free to decline” the tests; employees who refused could be removed from service. Finally, not only had “the Government removed all legal barriers to the testing authorized by Subpart D”; it also had “made plain” “its strong preference for testing.” Accordingly, “the Government’s encouragement, endorsement, and participation” “suffice to implicate the Fourth Amendment.”

Every piece of this reasoning applies to Section 230. In both cases, the government did far more “than adopt a passive position toward the underlying conduct.” Just as Subpart D immunized from state law liability railroads that administered specified tests, Section 230 immunizes from state law liability platforms that censor “lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable” material. Just as railway workers were not free to decline to submit to the tests, so too Facebook and YouTube users cannot decline to submit to censorship; in both cases, individuals who refuse to comply can be excluded from service. And just as the government in Skinner had made plain its “strong preference” for the testing, Section 230 and its legislative history make plain the government’s strong preference for the removal of “offensive” content.

Skinner is, therefore, a strikingly close precedent. Yet it appears that not a single court has addressed the implications of Skinner for Section 230.

Perhaps distinctions can be found between Skinner’s Subpart D and the CDA’s Section 230. Someone might say, for example, that Section 230 immunizes not only blocking third-party content but also leaving it up (although functionally the situation in Skinner was probably no different). Or that for many or most internet platforms, filtering out offensive content is something they actively want to do (although railroads that administered the Subpart D tests would presumably have wanted to as well). Another arguable distinction might be that, in Skinner, Congress had expressed an interest in the railroads’ sharing their drug test results with federal authorities. (Note, however, that we don’t know the extent to which the federal government has requested or even required Google and Facebook to reveal information about users attempting to post objectionable content. In the case of attempts to post criminal or extremist content, it is quite possible, perhaps even likely, that the government has expressed at least as much of an interest in information-sharing as it had in Skinner.) Or maybe it will be said that the implications of applying Skinner to Section 230 are simply so extreme—potentially reaching every website that blocks offensive third-party content—that Skinner just can’t be a good precedent for the internet.

But Skinner is not the only important precedent. Another line of cases provides powerful additional support for a finding of state action here, although the reach of these cases would be much more limited—applying not to every website but only to the mega-platforms like Facebook and Google.

Case law going back to the Supreme Court’s Bantam Books prior restraint decision of 1963 establishes that informal governmental pressure and threats can turn private-party conduct into state action. (In Bantam Books, a book distributor had stopped circulating certain books after receiving notices from a state commission listing those books as objectionable and suggesting that the distributor might be referred to local prosecutors if it continued selling them. The court found “state action.”) While governmental actors are free to denounce private parties for failing to restrain others’ speech, the line is crossed, as the U.S. Court of Appeals for the Second Circuit and other courts have put it, when “comments of a government official can reasonably be interpreted as intimating that some form of punishment or adverse regulatory action will follow the failure to accede to the official’s request.”

For years, members of Congress have been pressuring Facebook and Google to block (or block more) hate speech, extremist content, false news, white supremacism and so on, threatening these companies with death-sentence regulatory measures, including an antitrust break-up and public-utility-style regulation. These threats have not exactly been veiled. “Figure it out,” said Rep. Cedric Richmond in April 2019, to representatives of Facebook and Google at a hearing on the platforms’ hate speech policies. “Because you don’t want us to figure it out for you.” The threats have apparently been quite effective. Just last month, one day before another round of hearings was to begin, Facebook announced a series of new, more aggressive measures to block hateful and extremist content.

A number of cases indicate that this pressure campaign might on its own be sufficient to turn Facebook and Google into state actors, entirely apart from Section 230. One of these cases, called Writers Guild, is especially analogous. In 1974, the big (pre-cable) television networks adopted the Family Viewing Policy, barring content “inappropriate for viewing by a general family audience” during the first hour of prime time and the hour immediately preceding. Plaintiffs in Writers Guild sued the networks as well as the Federal Communications Commission (FCC), challenging the Family Viewing Policy on First Amendment grounds. Defendants responded that the policy, having been voluntarily adopted by private parties, was exempt from constitutional scrutiny.

In a lengthy, closely reasoned opinion, after a weeks-long trial, the district court rejected this argument. The court found that the Family Viewing Policy had been adopted due to “pressure” from the FCC, which was itself responding to pressure from congressional committees. The FCC had not mandated the Family Viewing Policy, but its chairman had “threatened the industry with regulatory action” and “with actions that would impose severe economic risks and burdens” on the networks if they did not move to block excessive “sex and violence” from prime-time programming. As a result, the court concluded, the state action requirement was satisfied.

Writers Guild is hardly controlling. Not only is it a mere district court opinion about a different medium in a different era, but that opinion was later vacated on jurisdictional grounds. Nevertheless, the case is an important example. It shows that the question of whether, or how greatly, governmental pressure has influenced Google’s or Facebook’s content-based censorship policies is ultimately a triable question of fact—with the state action determination hanging in the balance.

The bottom line, however, is this: When governmental pressure is combined with a statutory provision like Section 230, the result must be state action. Immunity plus pressure has to trigger the Constitution’s restraints.

Consider the hypotheticals given above but now add to them a campaign by governmental actors to pressure private parties into performing the conduct the government wants. Imagine a state not only immunizing from liability private parties who barricade abortion clinics but also threatening private parties with adverse legal consequences if they fail to do so. When these parties—freed from the fear of liability if they do barricade, and threatened with adverse legal action if they don’t—begin taking the very action that the legislature wants them to take, I don’t think a no-state-action argument could be made with a straight face.

Or imagine Congress immunizing employers that break into their employees’ homes to search for evidence of immigration law violations, and then going further and threatening large employers with potential adverse regulatory action if they fail to engage in such searches. Is there a court in the country that would fail to find state action in such circumstances? Absent a state action finding, the ability of Congress to circumvent constitutional rights would be obvious.

Thus even if litigants and judges have been slow to see it, and even if the implications are dramatic, there is a strong argument under existing doctrine that Google and Facebook are state actors when they block “objectionable” content. Immunity plus pressure has to equal state action. The real question is not whether the Constitution applies to these mega-platforms’ content moderation policies. It’s what rules the Constitution requires. In a second post, I’ll turn to that issue.

Disclosure: The author has advised attorneys representing parties who are litigating or may litigate against Google.
