14 December 2019

Are Facebook and Google State Actors? A Reply to Alan Rozenshtein

By Jed Rubenfeld 

I argued previously that Section 230 of the Communications Decency Act, in combination with congressional pressure, has turned internet mega-platforms like YouTube and Facebook into state actors when they censor “objectionable” content. Alan Rozenshtein has replied, thoughtfully and critically. This is my response.

There is a simple reason why Section 230—which grants broad immunity to websites that block “objectionable” but “constitutionally protected” speech, as the statute itself puts it—is constitutionally concerning. Through a grant of immunity, the statute deliberately seeks to induce private parties to take action that would violate constitutional rights if governmental actors did it directly. That’s a powerful formula for evading the Constitution. Imagine a statute immunizing private parties who barricade abortion clinics, hack into people’s email or confiscate people’s guns. Such immunity statutes have to trigger constitutional scrutiny; otherwise, all constitutional rights are in jeopardy.

Against this conclusion, Rozenshtein points to the Supreme Court’s 1978 decision in Flagg Bros., Inc. v. Brooks. It’s exactly the right case for him to cite. In Flagg Bros., the court found that a New York statute authorizing a warehouseman’s sale of bailed goods under certain conditions didn’t make such a sale state action. The case can be read to hold that a merely “permissive” statute never turns private conduct into state action. Because the statute “permits, but does not compel” the sale, said the Flagg Bros. court, there was no state action.


But in 1985, as I pointed out in my earlier piece, the Supreme Court in Skinner v. Railway Labor Executives’ Ass’n found state action where, precisely, a federal regulation permitted but did not compel specified private conduct (subjecting railroad workers to drug and alcohol tests). Moreover, like Section 230, the regulation in Skinner was an immunity provision. It knocked out all state law liability for the private parties that engaged in the desired conduct—conduct that would have triggered heightened constitutional scrutiny had governmental actors done it themselves.

So which case controls? To the extent that an earlier Supreme Court decision is inconsistent with a later one, the later decision is of course the law. And Flagg Bros. did not involve an immunity statute. The New York statute simply established that specified actions would satisfy a specified legal duty. (It was no more an immunity statute than a speed limit is.) Hence, not only is Skinner the later decision, but it is also far more closely on point for Section 230. So the notion that Flagg Bros. undercuts Skinner is very weak.

Nor is Skinner the only Supreme Court case holding that a permissive federal statute turned private conduct into state action. In 1956, in Railway Employees Dep’t v. Hanson, the Supreme Court found “governmental action” in union shop agreements—contracts between an employer and a union requiring every employee to join the union as a condition of continued employment—where a newly enacted federal statute permitted, but did not compel, such agreements. The key feature of the statute, the court emphasized, was that it knocked out any state law liability that might otherwise be triggered by such agreements. Thus Flagg Bros. is sandwiched between two cases finding state action where federal immunity statutes authorized but didn’t compel private parties to engage in conduct that would have triggered constitutional scrutiny had governmental actors performed it directly.

Rozenshtein attempts to distinguish Skinner on a couple of grounds, but I think he overstates. He says that Congress has expressed “no single preference” with respect to filtering offensive online content, whereas in Skinner the government had indicated a “strong preference” for drug testing. The idea here seems to be that because Section 230 not only immunizes websites for removing content but also immunizes them against liability for the third-party content they carry, the statute is in effect neutral between filtering or not filtering “offensive” material. This view ignores the legislative facts.

The Communications Decency Act was so named because it purported to criminalize “indecent” and “offensive” online content (and still does criminalize obscene online content)—or, in the words of the act’s chief sponsor, the “filth” that was threatening America’s youth. These provisions were quickly struck down as unconstitutional, but they leave no doubt that the legislation’s primary purpose was not in the least neutral. Congress’s express goal was to extirpate offensive speech from the internet—and, more specifically, “to encourage efforts by Internet service providers to eliminate such material by immunizing them from liability.” Section 230 cannot be extricated from these statutory purposes, of which it was originally a part. Its “Good Samaritan” immunity substantially helps effectuate Congress’s goal by ensuring that websites trying to block offensive content cannot be hit with liability for doing so. The pro-filtering “preference” could not have been clearer.

Rozenshtein also says that the regulations in Skinner “facilitated” information-sharing with the government. The same is probably true of the Communications Decency Act, but what the Skinner court actually said on this point is that the government had expressed a “desire” (emphasis added) to “share the fruits” of the railroads’ drug and alcohol tests—and the government has repeatedly expressed a similar “desire” to share the fruits of social media companies’ policing their sites for extremist and other content. Just last August, for example, President Trump called on social media companies to work “in partnership” with federal law enforcement to “detect mass shooters before they strike.”

Would Rozenshtein find no constitutional problem in a statute immunizing private parties who barricade abortion clinics or seize people’s guns? He doesn’t say. Instead, he argues that even if state action exists when lawmakers immunize “otherwise illegal” conduct, Section 230 does not do so because—he writes—a website’s censoring of third-party objectionable content is “perfectly legal, irrespective of Section 230.”

This claim is hard to understand. Absent Section 230, a website’s censoring of offensive content would by no means be “perfectly legal.” Section 230 was prompted by Stratton Oakmont, Inc. v. Prodigy Services Co., in which a New York court held that by monitoring and filtering third-party content, websites could become liable for the content they didn’t block. The primary purpose behind Section 230’s “Good Samaritan” immunity was to prevent that result. Thus, absent Section 230, a website’s content-based censorship decisions could well have subjected it to tort liability—or even criminal liability. Moreover, in states like California, where free speech constraints are imposed on private parties that open their property for public expression, a website that censored offensive content might (absent Section 230) be suable for state constitutional violations. Finally, courts have stated that Section 230 also bars breach of contract actions for a website’s content-removal decisions, meaning that Section 230 might even immunize a website that took down content in violation of its own terms of service. In all these ways, Section 230 immunizes conduct that would by no means be otherwise “perfectly legal.”

But I want to repeat: My thesis is not that Section 230 turns every website it governs into a state actor. My argument is much narrower: It is directed at the mega-platforms like Facebook and YouTube. The reason is twofold.

First, Congress has for years been applying intense pressure on Facebook and Google to ratchet up their censorship of hate speech, extremism, false news, and so on. “Let’s see what happens by just pressuring them first,” said House Judiciary Committee Chairman Jerrold Nadler in April, as members of Congress prepared to “grill Facebook and Google” once again about “hate speech” on their platforms. The implication was that if “pressuring” didn’t work, Congress would have to regulate, and the regulation Congress is considering for Facebook and Google includes highly adverse measures, from eliminating their immunity for third-party content to turning them into public utilities.

While politicians are, of course, free to denounce hate speech and private companies that fail to suppress it, courts have long held that state action can be found when private companies respond to “comments of a governmental official [that] can reasonably be interpreted as intimating that some form of punishment or adverse regulatory action will follow the failure to accede to the official’s request.” Rozenshtein dismisses the pressure on Google and Facebook as mere “jawboning,” but when members of Congress say those companies “better” stop carrying hate speech or else face regulation, and “we’re going to make [that regulation] swift, we’re going to make it strong, and we’re going to hold them very accountable,” it seems pretty “reasonable” to interpret such statements “as intimating that some form” of “adverse regulatory action will follow the failure to accede.”

Rozenshtein also falls here for the canard that largely Democratic congressional pressure to censor hate speech is somehow canceled out by largely Republican congressional pressure not to suppress conservative voices. These two vectors don’t add up to zero. On the contrary, they redouble each other, pressuring Facebook and Google to satisfy both demands by censoring content even more aggressively, against left-leaning and right-leaning speakers alike.

Second, and equally important, mega-platforms like Facebook and Google wield a degree of power over the content of public discourse unprecedented in world history. This phenomenon implicates constitutional values and changes the constitutional calculus. I don’t mean that Google and Facebook should be viewed as state actors merely because of their unprecedented, enormous power over public discourse. I mean that this power should be taken into account when courts decide, for example, under what circumstances an immunity statute turns private conduct into state action, or how much governmental pressure can be tolerated before such pressure turns private conduct into state action.

But to come to grips with the problem in its full magnitude, these three extraordinary elements must be added together: a handful of behemoth private companies exercising an unprecedented degree of control over public discourse; a deliberate congressional grant of immunity to these companies if they censor “objectionable” speech (which would clearly violate the First Amendment were Congress to do it directly); and sustained congressional pressure on these companies to censor more aggressively. If this doesn’t result in state action, lawmakers all over the country will know how to violate any constitutional right they like.
