Duke Law Journal Online
VOLUME 72 APRIL 2023
GONZALEZ V. GOOGLE: THE CASE FOR
PROTECTING “TARGETED
RECOMMENDATIONS”
TOMER KENNETH† & IRA RUBINSTEIN††
ABSTRACT
Does Section 230 of the Communications Decency Act protect
online platforms (e.g., Facebook, YouTube, and Twitter) when they
use recommendation algorithms? Lower courts upheld platforms’
immunity, notwithstanding notable dissenting opinions. The Supreme
Court considers this question in Gonzalez v. Google, LLC. Plaintiffs
invite the Court to analyze “targeted recommendations” generically
and to revoke Section 230 immunity for all recommended content. We
think this would be a mistake.
This Article contributes to existing scholarship about Section 230
and online speech governance by adding much needed clarity to the
desirable—and undesirable—regulation of recommendation
algorithms. Specifically, this Article explains the technology behind
algorithmic recommendations, the questions it raises for Section 230
immunity, and the stakes in Gonzalez. It opposes generically revoking
Section 230 immunity for all uses of recommendation algorithms.
Instead, it illustrates and defends a nuanced approach for the desired
outcome of Gonzalez and for future possible regulation of
recommendation algorithms.
Copyright © 2023 Tomer Kenneth & Ira Rubinstein.
† JSD Candidate; Fellow, Information Law Institute, New York University School of Law.
†† Senior Fellow, Information Law Institute, New York University School of Law. We
thank Neli Frost, Eugene Volokh, and the fellows at NYU’s Information Law Institute for helpful
discussions and comments.
INTRODUCTION
Are online platforms (e.g., Facebook, YouTube, and Twitter)
legally immune when they use recommendation algorithms to match
specific content to specific users? This is the question now before the
Supreme Court in Gonzalez v. Google, LLC.
1
Although lower courts
have upheld immunity,
2
a few notable dissenting opinions have
rejected this conclusion, arguing for limited application of Section 230
in those contexts.
3
Granting certiorari in Gonzalez and framing the
question at issue in very broad terms, the Supreme Court seems poised
to reach sweeping conclusions about the application of Section 230
immunities to recommendation algorithms.
4
This Article takes issue with the Supreme Court’s framing and
argues that a generic application of Section 230 to recommendation
algorithms is a mistake. The Article defends the majority opinion in
the lower courts and argues that there are better ways to address online
speech and regulate online platforms. The Article complements
existing scholarship by discussing the often-ignored relationship
between Section 230 and recommendation algorithms. This
contribution matters for Gonzalez but also for legislative efforts to
amend Section 230
5
and for other attempts to regulate
recommendation algorithms.
6
In Part I, we briefly present the relevant background: the
technology behind algorithmic recommendation, the context of
Section 230, and the question in Gonzalez. Then, in Part II, we criticize
the Supreme Court’s framing of the relationship between Section 230
and recommendation algorithms as too generic. This broad framing
invites an overinclusive analysis, misunderstands the relevant
technology, and forces a false dilemma: either undermining the
1. Gonzalez v. Google LLC, 2 F.4th 871 (9th Cir. 2021), cert. granted, No. 21-1333, 2022
WL 4651229 (U.S. Oct. 3, 2022), cert. granted sub nom. Twitter, Inc. v. Taamneh, No. 21-1496,
2022 WL 4651263 (U.S. Oct. 3, 2022).
2. Force v. Facebook, Inc., 934 F.3d 53, 71 (2d Cir. 2019); Gonzalez, 2 F.4th at 913.
3. See, e.g., Force, 934 F.3d at 76 (Katzmann, C.J., concurring); Gonzalez, 2 F.4th at 918
(Gould, J., concurring).
4. See infra Part II.
5. See, e.g., JASON A. GALLO & CLARE Y. CHO, CONG. RSCH. SERV., R46662, SOCIAL
MEDIA: MISINFORMATION AND CONTENT MODERATION ISSUES FOR CONGRESS 6–8 (2021).
6. Compare NetChoice, LLC v. Paxton, 573 F. Supp. 3d 1092, 1099 (W.D. Tex. 2022), rev’d,
49 F.4th 439 (5th Cir. 2022) (granting a preliminary injunction that barred enforcement of a Texas
social media law restricting content moderation), with NetChoice, LLC v. Att’y Gen., 34 F.4th
1196, 1232 (11th Cir. 2022) (upholding several provisions in a Florida social media law regulating
content moderation).
protections that helped make online platforms so desirable for users or
not regulating online platforms at all.
In Part III, we argue that the Supreme Court should uphold the
Ninth Circuit majority’s view in Gonzalez. According to this view,
platforms forfeit Section 230 protections only if they make material
contributions to the content that users upload.
7
We explain the
advantages of this application-specific approach for Section 230
generally and for recommendation algorithms more specifically. We
also consider the shortcomings of the dissenting opinions, which would
exclude algorithmic recommendations from Section 230 immunity.
One such view sees all uses of recommendation algorithms as
conveying a message; another excludes only recommending
connections to other users, groups, or pages.
8
Both should be avoided.
Admittedly, our position offers little recourse for many of the
perverse outcomes of the prior interpretations of Section 230.
9
We
share many of these concerns and believe that governments can and
should do more to rein in online platforms and to cultivate a better
online speech environment. However, we think that excluding
platforms’ use of recommendation algorithms from Section 230
immunities is the wrong approach. In the concluding section, we point
to more desirable solutions, such as carving out narrow exceptions to
Section 230 or amending the statute to ensure that firms engage in
Good Samaritan screening as a condition of immunity. We also briefly
consider requiring the use of technological friction to mitigate
algorithmic amplification or using soft regulation that provides
guidance to online platforms.
I. ON SECTION 230 AND RECOMMENDATION ALGORITHMS
There is an abundance of scholarly writing on the historical
background and genealogy of Section 230 of the Communications
Decency Act of 1996.
10
For our purposes, a brief introduction suffices.
Section 230 provides “interactive computer services” immunity from
7. See infra note 14; Part III.A.
8. See infra Part III.B–C.
9. See Danielle Keats Citron & Benjamin Wittes, The Internet Will Not Break: Denying
Bad Samaritans § 230 Immunity, 86 FORDHAM L. REV. 401, 401–03 (2017) (summarizing potential
harms to minors and young adults under broad Section 230 immunity).
10. For some examples, see generally Adam Candeub, Reading Section 230 As Written, 1 J.
FREE SPEECH L. 139 (2021); Eric Goldman, An Overview of the United States’ Section 230 Internet
Immunity, in OXFORD HANDBOOK OF ONLINE INTERMEDIARY LIABILITY 154 (Giancarlo Frosio
ed., 2020); Danielle Keats Citron & Benjamin Wittes, The Problem Isn’t Just Backpage: Revising
Section 230 Immunity, 2 GEO. L. TECH. REV. 453 (2018).
(1) liability as “publisher or speaker of any information” that a third party uploads and from (2) civil liability for the removal of content under certain circumstances.
11
Congress hoped that Section 230 would
promote the continued development of the Internet and online services
and preserve its vibrancy as an educational and informational resource
for all citizens while also encouraging the removal of offensive content
without exposing these services to publisher’s liability.
12
Courts
adopted a broad view of Section 230. This included, first, interpreting
“interactive computer services” as covering new social media platforms
like Facebook and Twitter,
13
emphasizing that statutory immunity
protected these services against liability for “any information” that
third parties published,
14
and, second, imposing liability only when
platforms make a “material contribution” to the content uploaded by
users.
15
Section 230 has succeeded in its main goal: facilitating the creation
of a vibrant social networking environment online, led and governed
by private companies.
16
However, in recent years, scholars,
17
11. 47 U.S.C. § 230(c); see also VALERIE C. BRANNON & ERIC N. HOLMES, CONG. RSCH.
SERV., R46751, SECTION 230: AN OVERVIEW 1 (2021) (“[Section 230] sought to allow users and
providers of ‘interactive computer services’ to make their own content moderation decisions,
while still permitting liability in certain limited contexts.”).
12. 47 U.S.C. § 230(a)–(b).
13. See, e.g., Klayman v. Zuckerberg, 753 F.3d 1354, 1355 (D.C. Cir. 2014) (classifying
Facebook as an interactive computer service); Fields v. Twitter, Inc., 217 F. Supp. 3d 1116, 1118
(N.D. Cal. 2016) (classifying Twitter as an interactive computer service).
14. See, e.g., Gonzalez v. Google LLC, 2 F.4th 871, 886–87, 896 (9th Cir. 2021), cert. granted,
No. 21-1333, 2022 WL 4651229 (U.S. Oct. 3, 2022), cert. granted sub nom. Twitter, Inc. v.
Taamneh, No. 21-1496, 2022 WL 4651263 (U.S. Oct. 3, 2022) (emphasizing that Congress made a
policy decision to provide broad protection under Section 230, protecting any information);
Carafano v. Metrosplash.com., Inc., 339 F.3d 1119, 1122–25 (9th Cir. 2003) (same); Doe v. Internet
Brands, Inc., 824 F.3d 846, 851–54 (9th Cir. 2016) (same).
15. See Fair Hous. Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157,
1169–71 (9th Cir. 2008) (distinguishing “providing neutral tools” from “materially contributing”
to the alleged unlawfulness); Marshall’s Locksmith Serv. v. Google, LLC, 925 F.3d 1263, 1271
(D.C. Cir. 2019) (holding that algorithms that convert third-party input of location into picture
form use neutral means and therefore enjoy Section 230 immunity).
16. 47 U.S.C. § 230(b)–(c); Danielle Keats Citron, How To Fix Section 230, B.U. L. REV.
(forthcoming) (manuscript at 3) (on file with authors) (“The absence of liability meant that search
engines could link to sites, blogs, and other online activity without fear that they would be liable
for defamatory comments.”).
17. See generally, e.g., Gautam Hans, Revisiting Roommates.com, 36 BERKELEY TECH. L.J.
1228 (2022) (evaluating potential reforms of Section 230 that would further the goals of civil rights
protections); Olivier Sylvain, Platform Realism, Informational Inequality, and Section 230 Reform,
131 YALE L.J.F. 475 (2021) (arguing in favor of Section 230 reform that would result in outcomes
more consistent with settled consumer-protection and civil-rights laws); Danielle Keats Citron &
Mary Anne Franks, The Internet as a Speech Machine and Other Myths Confounding Section 230
legislators,
18
and courts
19
have questioned the breadth of Section 230
immunity. The major concern is that Section 230 grants platforms too
much discretion and power to govern the ever-growing aspects of
online life.
In Gonzalez v. Google, LLC, petitioners seek to limit the scope of
Section 230 immunity.
20
The petitioners, plaintiffs in a Ninth Circuit
case, are relatives of victims of terrorist attacks for which ISIS claimed
responsibility.
21
They sought to establish the platforms’ liability under
the Anti-Terrorism Act (ATA) for content that circulated on those
platforms.
22
Recognizing that a broad interpretation of Section 230
immunity protects platforms from liability for terrorist content
uploaded by third-party users,
23
plaintiffs opted to distinguish their
claims by arguing that platforms’ use of recommendation algorithms is
not protected under Section 230.
24
Allegedly, platforms
“‘recommended ISIS videos to users’ and enabled users to ‘locate other
videos and accounts related to ISIS,’ thereby assisting ISIS in spreading
its message.”
25
Conversely, the platforms argued (among other things)
that Section 230 protects their use of algorithms to recommend specific
content to specific users.
26
Reform, 2020 U. CHI. LEGAL F. 45 (recommending changes to Section 230 that would condition
immunity on reasonable moderation practices).
18. GALLO & CHO, supra note 5, at tbl.B-1 (listing over two dozen Section 230 reform
proposals introduced in the 116th Congress).
19. See, e.g., Biden v. Knight First Amend. Inst. at Columbia Univ., 593 U.S. 1220, 1221
(2021) (Thomas, J., concurring) (criticizing the immense power that private platforms have over
online speech and the need to regulate them); Gonzalez, 2 F.4th at 912–13, 923 (“Whether social
media companies should continue to enjoy immunity for the third-party content they publish, and
whether their use of algorithms ought to be regulated, are pressing questions that Congress should
address.”).
20. Gonzalez, 2 F.4th at 886.
21. Id. at 880–85.
22. Id. at 880. The ATA allows U.S. nationals to recover damages for injuries suffered “by
reason of an act of international terrorism,” 18 U.S.C. § 2333(a), and extends liability to “any
person who aids and abets, by knowingly providing substantial assistance” to a person who
commits an act of international terrorism, 18 U.S.C. § 2333(d).
23. Force v. Facebook, Inc., 934 F.3d 53, 65–66 (2d Cir. 2019).
24. Gonzalez, 2 F.4th at 881, 894–95.
25. Id. at 881. Plaintiffs in the other two cases decided in Gonzalez—Taamneh and Clayborn—make roughly similar claims, alleging that YouTube, Facebook, and Twitter failed to do enough to stop ISIS from using their platforms to promote its messages and recruit terrorists.
Id. at 883–84.
26. Id. at 882, 894–95.
To assess these claims, we need to better understand the
technology of recommendation algorithms.
27
Social media platforms
process copious amounts of user-generated content. Given the scale
and variation of content involved, platforms rely on algorithmic
automation to manage content with the goals of making the platforms
interesting and maximizing user engagement.
28
The two main ways to
algorithmically manage content are content moderation and
algorithmic recommendation.
Content moderation means, roughly, (1) fitting content into
predefined categories based on published “community guidelines” and
(2) issuing warnings about, demoting, or removing content that violates
these guidelines.
29
For instance, a platform like Twitter relies on
content moderation algorithms to identify uploaded content as
“COVID-19 misinformation” and enforce (or cease to enforce) its
policy of removing “demonstrably false or potentially misleading
content that has the highest risk of causing harm.”
30
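To make this two-step structure concrete, the following is a minimal, purely illustrative sketch of the content-moderation logic just described: content is first fitted into a predefined policy category and then kept, demoted, or removed under the platform’s guidelines. The category names and the keyword “classifier” are our own hypothetical stand-ins, not any platform’s actual rules or code.

```python
# Illustrative sketch only: a toy content-moderation step of the kind
# described above. The categories and the keyword "classifier" are
# hypothetical stand-ins for a platform's real policies and ML models.

BANNED = {"terrorist_content", "covid_misinformation"}
DEMOTED = {"borderline_content"}

def classify(text: str) -> str:
    """Fit a post into a predefined policy category (stand-in for an ML classifier)."""
    if "miracle cure" in text.lower():
        return "covid_misinformation"
    return "allowed"

def moderate(text: str) -> str:
    """Enforce the community guidelines on the classified post."""
    category = classify(text)
    if category in BANNED:
        return "remove"
    if category in DEMOTED:
        return "demote"
    return "keep"

print(moderate("This miracle cure beats any vaccine!"))  # -> "remove"
```

The point of the sketch is that moderation turns on what the content is: the enforcement decision follows from the category assigned to the post.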
Conversely, algorithmic recommendation optimizes the use of
(permissible) content on the platform.
31
Platforms use
recommendation algorithms to rank content algorithmically and, based
on these rankings, to promote specific content to particular users or
distribute certain content more broadly. While content moderation
asks how best to classify content, recommendation algorithms ask how
best to use this content in order to maximize desired outcomes
(typically, user engagement). Social media firms collect and analyze
hundreds (or even thousands) of data points and feed this data to
recommendation algorithms designed to predict what specific content
will keep specific users most engaged. For example, Facebook’s
newsfeed algorithm relies on predictive models that learn what drives
users to interact with a piece of content “based on who posted it, what[]
27. See generally Ira S. Rubinstein & Tomer Kenneth, Taming Online Public Health
Misinformation, 60 HARV. J. ON LEGIS. (forthcoming 2023), https://ssrn.com/abstract_id=4192903
[https://perma.cc/F97T-VTM3] (discussing the technological background of algorithmic
recommendation and content moderation).
28. See generally, e.g., TARLETON GILLESPIE, CUSTODIANS OF THE INTERNET:
PLATFORMS, CONTENT MODERATION, AND THE HIDDEN DECISIONS THAT SHAPE SOCIAL
MEDIA (2018) (viewing content moderation as a fundamental aspect of social media platforms
and suggesting that algorithmic choice of content is what draws users in and keeps them on a given
platform).
29. Rubinstein & Kenneth, supra note 27, at 52–56.
30. Natasha Lomas, Twitter Says It’s No Longer Enforcing COVID-19 Misleading
Information Policy, TECHCRUNCH (Nov. 29, 2022), https://techcrunch.com/2022/11/29/twitter-co
vid-29-misleading-info-policy-change [https://perma.cc/ZAA3-4WCE].
31. Rubinstein & Kenneth, supra note 27, at 52–56.
it[’s] about, whether it contains an image, or a video, what’s in the
video, how recent it is, how many of our friends liked or shared it and
so on.”
32
YouTube and other platforms follow a similar approach.
33
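For illustration only, the following sketch shows the kind of engagement-based ranking described above: each candidate post receives a score predicted from signals about the user and the content, and the highest-scoring items are displayed first. The signal names and weights are hypothetical and are not drawn from Facebook’s or any other platform’s actual model.

```python
# Illustrative sketch only: a toy engagement-ranking step of the kind
# described above. Signal names and weights are hypothetical.

from typing import Dict, List

# Hypothetical signals about a (user, post) pair: who posted it, whether
# it has a video, how recent it is, how many friends engaged with it, etc.
Features = Dict[str, float]

# A stand-in for a learned model: a simple weighted sum of signals.
WEIGHTS = {
    "posted_by_close_friend": 2.0,
    "contains_video": 0.8,
    "recency": 1.5,
    "friends_who_liked": 0.6,
    "topic_matches_user_history": 1.2,
}

def predicted_engagement(features: Features) -> float:
    """Predict how likely this user is to engage with this post."""
    return sum(WEIGHTS.get(name, 0.0) * value for name, value in features.items())

def rank_feed(candidates: List[Features]) -> List[Features]:
    """Order candidate posts by predicted engagement, highest first."""
    return sorted(candidates, key=predicted_engagement, reverse=True)

# Example: the post a close friend shared recently outranks the older video.
feed = rank_feed([
    {"contains_video": 1.0, "recency": 0.2},
    {"posted_by_close_friend": 1.0, "recency": 0.9, "friends_who_liked": 3.0},
])
```

Unlike the moderation sketch, nothing here turns on what the post says; the ranking depends only on predicted engagement, which is the technical sense in which recommendation is agnostic to content.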
In sum, there are important technological differences between
content moderation and algorithmic recommendation. These
differences have legal implications. In previous writing, we have argued
that content moderation regulations are content-based and hence
subject to strict scrutiny under the First Amendment, while
recommendation algorithms are content-neutral and hence should
receive intermediate scrutiny.
34
In a similar vein, the plaintiffs in
Gonzalez seek to distinguish the treatment of recommendation
algorithms from content moderation for the purposes of Section
230(c)(2) protections. Should they prevail? Do the technological
distinctions between content moderation and algorithmic
recommendation warrant an exclusion of Section 230 protections for a
platform’s uses of recommendation algorithms? Are online platforms
(e.g., Facebook, YouTube, and Twitter) legally immune when they
recommend specific content to specific users?
Most courts have held that Section 230 protects platforms in using
recommendation algorithms.
35
We agree with this conclusion and find
32. See, e.g., SINAN ARAL, THE HYPE MACHINE 84 (2020); see also TAINA BUCHER, IF . . . THEN: ALGORITHMIC POWER AND POLITICS 78 (2018) (identifying similar factors Facebook considers in determining the “relevancy score” of posts in a user’s newsfeed); Akos Lada, Meihong Wang & Tak Yan, How Machine Learning Powers Facebook’s News Feed Ranking Algorithm, ENGINEERING AT META (Jan. 26, 2021), https://engineering.fb.com/2021/01/26/ml-applications/news-feed-ranking [https://perma.cc/VGL3-B6EV] (describing the technical aspects of Facebook’s ranking algorithm).
33. See, e.g., MOZILLA FOUNDATION, YOUTUBE REGRETS 13–14 (2019), https://assets.mof
oprod.net/network/documents/Mozilla_YouTube_Regrets_Report.pdf [https://perma.cc/L4HR-
UQBW].
34. See Rubinstein & Kenneth, supra note 27, at 56. For an opposing view, see generally
Daphne Keller, Amplification and Its Discontents: Why Regulating the Reach of Online Content
Is Hard, 1 J. FREE SPEECH L. 227 (2021). Recent court rulings diverge over how to analyze content
moderation for First Amendment purposes. Compare NetChoice, LLC v. Paxton, 573 F. Supp. 3d
1092, 1099 (W.D. Tex. 2022), rev’d, 49 F.4th 439 (5th Cir. 2022) (granting a preliminary injunction
that barred enforcement of a Texas social media law restricting content moderation), with
NetChoice, LLC v. Att’y Gen., 34 F.4th 1196, 1232 (11th Cir. 2022) (upholding several provisions
in a Florida social media law regulating content moderation).
35. See Gonzalez v. Google LLC, 2 F.4th 871, 894 (9th Cir. 2021), cert. granted, No. 21-1333,
2022 WL 4651229 (U.S. Oct. 3, 2022), cert. granted sub nom. Twitter, Inc. v. Taamneh, No. 21-
1496, 2022 WL 4651263 (U.S. Oct. 3, 2022) (“Though we accept . . . that Google’s algorithms
recommend ISIS content to users, the algorithms do not treat ISIS-created content differently
than any other third-party created content, and thus are entitled to § 230 immunity.”); Force v.
Facebook, Inc., 934 F.3d 53, 70 (2d Cir. 2019) (“The algorithms take the information provided by
Facebook users and ‘match’ it to other users . . . based on objective factors applicable to any
content . . . . [This use of recommendation algorithms] is not enough to hold Facebook
the dissenting opinions’ reasoning unpersuasive.
36
But before turning
to this discussion, the next Part takes up more pressing matters—the
Supreme Court’s pending decision about recommendation algorithms.
II. THE CHALLENGE OF GONZALEZ V. GOOGLE, LLC
The Supreme Court is set to address the application of Section 230
to uses of recommendation algorithms in Gonzalez v. Google, LLC.
37
This section raises questions about the Court’s decision to take up the
case and argues that the Court’s apparent rationale for considering it
is ill-advised. Granting certiorari in Gonzalez and commenting in other
cases, the Supreme Court has shown an inclination to exclude
“targeted recommendations” from Section 230 protection altogether.
38
We take issue with this view. We think it is a mistake to analyze the use
of recommendation algorithms in such broad strokes. And, we think
that, in most cases, the use of recommendation algorithms should be
protected.
A. Why the Court Took This Case
The Court’s decision to hear the case caught many by surprise
(including the authors of this paper). After all, the circuit courts are in
agreement about the application of Section 230 in this context,
dissenting opinions notwithstanding. And, the Supreme Court rarely
grants certiorari to interpret a federal statute in the absence of a circuit
split.
39
Additionally, the case raises challenging causation issues. The
petitioners argue that platforms are liable for recommending videos
responsible as the ‘develop[er]’ or ‘creat[or]’ of that content.”); see also Dyroff v. Ultimate
Software Grp., 934 F.3d 1093, 1096 (9th Cir. 2019) (“Ultimate Software, as the operator of
Experience Project, is immune from liability under the CDA because its functions, including
recommendations and notifications, were content-neutral tools used to facilitate
communications.”); Marshall’s Locksmith Serv. v. Google, LLC, 925 F.3d 1263, 1270–71 (D.C.
Cir. 2019) (stating that using neutral algorithms—“that do not distinguish between legitimate and
scam locksmiths”—to decide which information appears on a map is protected under Section
230); Carafano v. Metrosplash.com, Inc., 339 F.3d 1119, 1124–25 (9th Cir. 2003) (suggesting
Section 230 protects a platform’s “decision to structure the information provided by users . . . such
as ‘matching’ profiles with similar characteristics”).
36. See infra Part IV.
37. See Question Presented, Gonzalez v. Google LLC, No. 21-1333, 2022 WL 4651229
(2022).
38. See infra notes 40–45 and accompanying text. As we understand it, the term “targeted
recommendations” refers to platforms’ use of recommendation algorithms to personalize content
for their users.
39. See, e.g., Tejas N. Narechania, Certiorari in Important Cases, 122 COLUM. L. REV. 923,
927 (2022) (finding it “unusual” for the Court to review a case that presented no circuit split). Admittedly, the Court retains very broad discretion in its decisions to grant certiorari. Id. at 924.
that a third party posted, which allegedly contributed to inciting the
terrorist attack that led to the death of their relatives. This is a rather
convoluted chain of events, and it is not at all clear that plaintiffs can
successfully establish a causal link between the recommended video
and the attack or the ensuing deaths. More importantly, given the
particular facts of the case, it is questionable whether the Court’s
decision can illuminate the more mundane liability claims that Section
230 regularly shields against, such as garden-variety defamation
actions. Against this background, it is useful to consider why the Court
decided nevertheless to grant certiorari and what kind of changes it
may have in mind.
Reading between the lines, the Court appears to favor limiting
Section 230 immunity for uses of recommendation algorithms. In
granting certiorari in Gonzalez, the Court indicated its willingness to
make broad decisions regarding the application of Section 230 to
recommendation algorithms. The Court framed the question presented
as:
Does section 230(c)(1) immunize [platforms] when they make
targeted recommendations of information provided by third parties,
or only limit the liability of interactive computer services when they
engage in traditional editorial functions (such as deciding whether to
display or withdraw) with regard to such information?
40
By constructing the question in such an expansive manner, the
Court seems to invite a broad-brush “solution” to the interplay
between Section 230 and recommendation algorithms. This counters a
bottom-up approach that is attentive to different uses and applications
of recommendation algorithms. Taking on Gonzalez to broadly
reshape our understanding of Section 230 is also in line with the
Supreme Court’s recent maximalist tendencies.
41
Another indication that the Court is inclined to carve out broad
exclusions from Section 230 stems from the general discontent toward
existing regulation of platforms. The Court is no stranger to criticisms
of Section 230 and the power of social media platforms. For example,
Justice Thomas opined on the issue in three previous cases in which the
Court denied certiorari to challenges to Section 230. Against the broad
interpretation of Section 230 that courts have so far adopted, he argued
40. Question Presented, Gonzalez v. Google LLC, No. 21-1333, 2022 WL 4651229 (2022).
41. See generally Strict Scrutiny, This Maximalist Conservative Supermajority, CROOKED MEDIA (June 27, 2022), https://crooked.com/podcast/this-maximalist-conservative-supermajority
[https://perma.cc/4PD5-R9W9] (discussing the Supreme Court’s recent maximalist, rather than
incremental, tendencies in many of the cases it takes on).
that a proper textualist reading would limit Section 230 immunity.
42
He
emphasized that upholding immunity protects “unwelcome content”
and that platforms “can solicit thousands of potentially defamatory
statements” while avoiding “product-defect claims” that involve
content about terrorism or human trafficking.
43
Justice Thomas also
seemed to understand the political economy of Section 230, noting that
a broad reading “confer[s] sweeping immunity on some of the largest
companies in the world.”
44
Finally, he warned that since private
companies exert “enormous control” over speech, “[w]e will soon have
no choice but to address how our legal doctrines apply to highly
concentrated, privately owned information infrastructure such as
digital platforms.”
45
B. A Few Warnings
It would be a mistake for the Court to try and “fix” the
shortcomings of online platforms by excluding targeted
recommendations. For one, many (if not all) platforms—including
social media services like Facebook and Twitter and search engines
like Google and Bing—rely upon targeted recommendations to select
and organize content that users will find relevant and engaging.
46
(And,
of course, to drive advertising revenues.) Using algorithmic tools to
rank and favor content is all but necessary because of the scale and
volume of content uploaded to these platforms.
47
Indeed, in its early
days Facebook displayed content in reverse chronological order.
48
But,
as the amount of content it hosted grew gigantically, simple
chronological ordering did not allow users to easily find or process
relevant content. Thus, Facebook started to rely on its newsfeed
algorithm to rank content on users’ behalf, replacing the chronological
ranking with a more sophisticated ranking tool that considers
thousands of relevant factors. Nowadays, Facebook and its users are
42. Doe v. Facebook, Inc., 142 S. Ct. 1087, 1087 (2022) (Thomas, J., concurring);
Malwarebytes v. Enigma Software Grp. USA, 141 S. Ct. 13, 15–17 (2020) (Thomas, J.,
concurring); Biden v. Knight First Amend. Inst. at Columbia Univ., 593 U.S. 1220, 1221 (2021)
(Thomas, J., concurring).
43. Malwarebytes, 141 S. Ct. at 15–18.
44. Id. at 13.
45. Biden, 593 U.S. at 1221.
46. See Rubinstein & Kenneth, supra note 27, at 52–56.
47. Id.
48. ARAL, supra note 32, at 84.
utterly dependent on algorithmic recommendation to deliver relevant
content to specific users.
49
Supporting the use of recommendation algorithms is more than a
deferral to companies’ favorite modus operandi. The volume, variety,
and velocity at which online content is generated and processed on
major platforms like Facebook and Google makes it inevitable that
these services rely on recommendation algorithms. It is doubtful that
platforms could provide the benefits that Section 230 hoped to deliver
for users—rich and diverse informational, educational, cultural
resources provided by online speech services—without relying on
recommendation algorithms.
50
In addition, terminological ambiguity
complicates matters. Platforms have to rely on algorithms to manage
content because of scale, as noted. And, any type of content
management would “recommend” something—be it the
chronologically recent posts or the ones some algorithm deems most
desirable. Hence, as a practical matter, it is not clear what platforms
can do to manage content without using any recommendation
algorithms.
51
Furthermore, the most common uses of recommendation
algorithms—to favor content that the user is interested in and
connections that the user would like to engage with—are socially
desirable. They make those platforms interesting and engaging for
billions of users with different backgrounds and interests. Put simply,
without recommendation algorithms, large platforms would turn into
ugly assemblages of chaotic, irrelevant, and almost randomly presented
content, depriving users of the value of content recommendations
tailored to their interests.
Note, excluding targeted recommendations from Section 230
immunity will not make the use of recommendation algorithms illegal.
Instead, it would make platforms potentially liable for content that
they recommend. But, this is no small matter. What might happen,
concretely, if the Gonzalez Court ends Section 230 immunity for “targeted recommendations”?
Consider two examples: Facebook and Google. Assume that
Facebook internalizes this regulatory shift and decides to minimize
potential liability by shutting off the recommendation algorithms in its
49. Id.
50. 47 U.S.C. § 230(a)–(b).
51. See infra notes 93–101 and accompanying text; Rubinstein & Kenneth, supra note 27, at
57–61 (arguing recommendation algorithms enable platforms to perform efficient content
moderation and dissemination).
newsfeed algorithm. There are reasons to think that Facebook users
would be worse off. An internal report on the results of an experiment
to this effect found that turning off the newsfeed algorithm “led to a
worse experience almost across the board. People spent more time
scrolling through the News Feed searching for interesting
stuff . . . . They hid 50% more posts, indicating they weren’t thrilled
with what they were seeing.”
52
Moreover, removing “all ranked
sorting” would probably lead to users seeing even more “borderline”
content than they do with the current system.
53
As for Google’s search
engine, it too relies heavily on ranked search results.
54
Indeed, online
search results enjoy expansive legal protection beyond Section 230,
including constitutional safeguards.
55
However, if Google had to
terminate its use of algorithmic ranking, the quality of its search results
would be diminished beyond recognition.
Of course, this is not to say that recommendation algorithms are
trouble free. Platforms’ uses of recommendation algorithms lead to
many undesirable outcomes. Those include exacerbating body image
problems for teenage girls by promoting images of idealized bodies and
exposing users to undesirable violent and graphic content or
misinformation.
56
Moreover, online platforms seem very reluctant to
52. See Alex Kantrowitz, Facebook Removed the News Feed Algorithm in an Experiment. Then It Gave Up, BIG TECH. (Oct. 25, 2021), https://www.bigtechnology.com/p/facebook-remov
ed-the-news-feed-algorithm [https://perma.cc/AS62-97ZP].
53. See id. (“Wiping out all ranked sorting of the News Feed clearly led to other problems,
including . . . integrity issues.”); Keller, supra note 34, at 256 (pointing out that Facebook’s current
way of handling “borderline” content relies heavily on algorithmic ranking).
54. James Grimmelmann, The Structure of Search Engine Law, 93 IOWA L. REV. 1, 7–11 (2007); see also Danny Sullivan, FAQ: All About the Google RankBrain Algorithm, SEARCH ENGINE LAND (June 23, 2016), https://searchengineland.com/faq-all-about-the-new-google-rank
brain-algorithm-234440 [https://perma.cc/2DCA-5BV4] (discussing Google’s use of machine
learning algorithms to help deliver its search results).
55. Courts have recognized First Amendment protections of search results. See, e.g., Best
Carpet Values, Inc. v. Google LLC, No. 5:20-CV-04700-EJD, 2021 WL 4355337, at *10 (N.D. Cal. Sept. 24, 2021) (citing other cases as well). Compare EUGENE VOLOKH & DONALD M. FALK, FIRST AMENDMENT PROTECTION FOR SEARCH ENGINE RESULTS 6–10 (2012) (arguing search results are entirely protected by the First Amendment), with Oren Bracha & Frank Pasquale, Federal Search Commission? Access, Fairness, and Accountability in the Law of Search, 93 CORNELL L. REV. 1149, 1193–1201 (2008) (arguing that the First Amendment does not
encompass search engine results).
56. See Georgia Wells, Jeff Horwitz & Deepa Seetharaman, Facebook Knows Instagram Is
Toxic for Teen Girls, Company Documents Show, WALL ST. J. (Sept. 14, 2021), https://www.wsj.
com/articles/facebook-knows-instagram-is-toxic-for-teen-girls-company-documents-show-11631
620739 [https://perma.cc/5BFP-6VRE] (describing how Facebook promoted images of idealized
bodies to teenage girls despite knowing that this exacerbated body image problems for vulnerable
teens); MOZILLA FOUNDATION, supra note 33 (cataloguing accounts of YouTube’s algorithm
exposing users to undesirable content); see generally Neli Frost, The Global Political Voice Deficit
significantly modify their recommendation algorithms even in light of
such harms.
57
However, revoking Section 230 immunity for targeted
recommendations is too blunt of an instrument to remedy these failings
and will likely cause other problems. We suggest better approaches in
Part IV.
The previous paragraphs argued against a broad-stroke exclusion
of recommendation algorithms from Section 230 protection. Note,
however, that we make a principled argument against generic
application of Section 230 to recommendation algorithms. As such, we
are also hesitant about adopting a broad-stroke inclusion of
recommendation algorithms under Section 230 protections. Indeed, a
major problem with the Court’s articulation of the question in
Gonzalez is that it seems to force a false dilemma: either regulating
recommendation algorithms by excluding Section 230 or providing
unqualified protections. If required to choose between the two, we
favor retaining Section 230 protections. But, we think this is the wrong
question. As we explain in the next Parts, we favor a more nuanced
analysis—one that extends Section 230 immunities in most cases but
also allows courts to gradually develop exceptions and best practices
that would counter the undesirable effects of Section 230’s private
governance regime.
58
The bottom line is this: the Court is ill-advised to try and solve the
plethora of problems associated with online speech by rejecting Section
230 immunity for all uses of recommendation algorithms. Such an
outcome would reflect a misunderstanding of relevant technology, a
disregard of the important benefits associated with algorithmic
ranking, and a lack of faith in the ability of legislators and courts (and
platforms) to gradually devise better tailored solutions to the ever-
changing challenges of regulating online speech. This is not only a bad
outcome; it is also clearly misaligned with the purpose of Section 230.
Matrix (Mar. 26, 2022) (unpublished manuscript) (on file with authors) (arguing that amplification
hinders democratic deliberations and other speech-related political interests).
57. See Keach Hagey & Jeff Horwitz, Facebook Tried To Make Its Platform a Healthier
Place. It Got Angrier Instead, WALL ST. J. (Sept. 15, 2021), https://www.wsj.com/articles/facebook-
algorithm-change-zuckerberg-11631654215 [https://perma.cc/N8YR-3UYB] (describing how
Facebook’s leadership rejected suggestions to modify its algorithms to deemphasize outrage and
lies because the changes could undermine user engagement); Karen Hao, How Facebook Got
Addicted To Spreading Misinformation, MIT TECH. REVIEW (Mar. 11, 2021), https://www.technol
ogyreview.com/2021/03/11/1020600/facebook-responsible-ai-misinformation [https://perma.cc/96
6N-2AMB] (describing how Facebook rejected proposals to change its newsfeed algorithm to
reduce political polarization).
58. See infra Part V.
III. THREE ANALYSES OF RECOMMENDATION ALGORITHMS
In this section, we discuss several approaches that lower courts have considered for applying Section 230 to recommendation algorithms, each offering a possible nuanced resolution of Gonzalez v. Google, LLC. We support one—the majority’s view in the Ninth Circuit—and reject two others.
The analysis that follows culminates in the unsurprising claim that
the Court should uphold immunity for most uses of recommendation
algorithms, including those at issue here. In most respects, we endorse
thirty years of legal reasoning by lower courts about the interpretation
of Section 230. Given the Court’s apparent inclination to make
sweeping changes, we think it is necessary and valuable to highlight
what the status quo gets right and the negative consequences that
would ensue from any radical departures. We recognize that our
preferred solution for Gonzalez does little to address the many perils
of online speech. Later, in the final Part of this paper, we will point to
more adequate ways to address these problems.
A. The Preferred View: Recommendation Algorithms as Tools,
Material Contribution Test
To recap: Section 230 immunizes platforms from being held liable
as publishers or speakers of any information that third-parties publish
and shields platforms from civil liability for voluntarily and in good-
faith restricting access or availability to some materials.
59
One way to
analyze recommendation algorithms in the context of Section 230
amounts to “business as usual.” On this approach, recommendation
algorithms are “neutral tools” and using them is akin to any other
measure the platforms adopt to manage content. Hence, platforms
would enjoy Section 230 immunity as long as they do not make
“material contributions” to the content that users upload.
60
Thus
understood, analyzing the use of recommendation algorithms requires
courts to answer two simple questions: are these neutral tools? And,
does the particular use constitute a material contribution to the
content?
We think this approach is the best resolution for Gonzalez. Recall
the plaintiffs in Gonzalez argued that Section 230 does not apply
59. 47 U.S.C. § 230(c); supra Part II.
60. See Fair Hous. Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157,
1169–71 (9th Cir. 2008) (distinguishing “providing neutral tools” from “materially contributing”
to the alleged unlawfulness); Marshall’s Locksmith Serv. v. Google, LLC, 925 F.3d 1263, 1271
(D.C. Cir. 2019) (holding that algorithms that convert third-party input of location into picture
form use neutral means and therefore enjoy section 230 immunity).
because Google did more than merely publish content. Plaintiffs
argued that the company created and developed the ISIS content that
appears on YouTube.
61
While the plaintiffs recognized that the
platforms did not initially create the relevant ISIS videos, they still
argued that Google made a material contribution by using
recommendation algorithms to match these specific videos to specific
users in order to enhance engagement.
62
The Ninth Circuit majority
adopted this analysis but rejected the plaintiffs’ claims.
63
Drawing on
earlier cases, the court found—correctly in our view—that platforms
do not become content creators or developers simply by “supplying
‘neutral tools’ that deliver content in response to user inputs.”
64
That
is, Google’s recommendation algorithms neither specify nor urge users
to upload any specific content. Rather, as we explained above, these
algorithms analyze users’ behavior on the platform (including posts
and viewing history) and match users with new video recommendations
accordingly in order to enhance engagement.
65
Hence, the fact that
Google’s (YouTube’s) algorithms recommend ISIS content to users—
based on viewership history, actions, and other information about the
user—should not result in forfeiture of Section 230 immunities.
66
Similarly, the Second Circuit held in a similar case involving
Facebook’s recommendation algorithms that “[t]he algorithms take
the information provided by Facebook users and ‘match’ it to other
users—again, materially unaltered—based on objective factors
applicable to any content, whether it concerns soccer, Picasso, or
plumbers.”
67
Whenever platforms use recommendation algorithms only to
“match” information created by one user with some content
uploaded to the website by other users, platforms should be protected
under Section 230 immunity. Similarly, Section 230 should also protect
adjacent decisions, such as making some content more available than
61. Gonzalez v. Google LLC, 2 F.4th 871, 892 (9th Cir. 2021), cert. granted, No. 21-1333,
2022 WL 4651229 (U.S. Oct. 3, 2022), cert. granted sub nom. Twitter, Inc. v. Taamneh, No. 21-
1496, 2022 WL 4651263 (U.S. Oct. 3, 2022).
62. Id. at 891–93.
63. Id. at 893–97.
64. Id. at 893.
65. Id. at 894–95; see also Carafano v. Metrosplash.com, Inc., 339 F.3d 1119, 1124 (9th Cir.
2003) (“[T]he fact that Matchmaker classifies user characteristics into discrete categories and
collects responses to specific essay questions does not transform Matchmaker into a ‘developer’
of the ‘underlying misinformation.’”). Hence, the website’s decision to match profiles with similar
characteristics is consistent with Section 230 immunity.
66. Gonzalez, 2 F.4th at 894–95.
67. Force v. Facebook, Inc., 934 F.3d 53, 70 (2d Cir. 2019).
others, placing content on specific areas of the website, and deciding
which users will be shown some content based on data about that user.
Those actions should ordinarily be understood as platforms’
management of third-party content for the benefit of the specific users.
They should seldom be regarded as decisions for which platforms
should be held liable.
Using recommendation algorithms to rank content, to decide which content should be more visible to (particular) users, and so on is an
“essential part of traditional publishing.”
68
These actions do not pass
the line between publishing and speaking and are protected under
Section 230.
69
As the majority in Force
70
correctly explained: “Merely
arranging and displaying others’ content to users of Facebook through
such algorithms—even if the content is not actively sought by those
users—is not enough to hold Facebook responsible as the
‘develop[er]’ or ‘creat[or]’ of that content.”
71
Even Chief Judge
Katzmann, writing an influential partial dissent in Force, seemed to
agree that in performing these services, Facebook “acts solely as the
publisher.”
72
On this view, Section 230 immunity extends to using
recommendation algorithms to match content and users, regardless of
the outcomes. Dyroff
73
is a dire example of this reasoning.
74
An online
messaging board connected a user, who sought to buy heroin, with
another user who responded to that original message. A day later, the
buyer died because the drugs he bought were laced with fentanyl.
75
Despite the tragic outcome, the Ninth Circuit held that Section 230
immunities applied.
76
Just as in the objectionable terrorist content
cases discussed above, the court realized that revoking Section 230
immunity from platforms that use recommendation algorithms is out
of sync with the technology and the law. By using algorithms that
recommend or notify users about information posted on the website,
the Ninth Circuit held in Dyroff that platforms are acting as “publisher
68. Id.
69. Id. at 66–67, 70–71.
70. Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019).
71. Id. at 70.
72. Id. at 82–83, 85 (“Of course, the failure to remove terrorist content, while an important
policy concern, is immunized under § 230 as currently written.”).
73. Dyroff v. Ultimate Software Grp., 934 F.3d 1093 (9th Cir. 2019).
74. Id. at 1093.
75. Id. at 1094–96.
76. Id. at 1097–99.
of others’ content” and should therefore be immune under Section
230.
77
Lastly, we support extending Section 230 immunity to uses of
recommendation algorithms based on understanding them as “neutral
tools.” For instance, in Force, the Second Circuit emphasized that
Facebook’s recommendation algorithms are neutral tools that connect
specific users to specific content and as such are protected under
Section 230.
78
In our view, when platforms use “content-neutral
algorithms, without more” to match specific content to specific users,
they should retain their Section 230 immunity.
79
One possible objection to this stance urges that recommendation
algorithms are not really “neutral.” After all, recommendation
algorithms do favor some content: some posts, videos, or groups will
appear at the top of searches or newsfeeds and some at the bottom.
This criticism is untenable. Favoring some content by featuring it more
prominently than other content is unavoidable. Even the Yellow
Pages—which arranges businesses and organizations into groups and
lists their contact information in alphabetical order—makes “Ace
Plumbing” more prominent than “Zeke’s Plumbing.” So, too, for a
platform’s use (and indeed lack of use) of recommendation algorithms.
Any method to manage content would eventually make some content
more prominent. In analyzing those methods, we must look beyond
this feature and evaluate how platforms make those decisions. As
explained, recommendation algorithms are deemed “neutral” because
their curation is not based on the content’s meaning or subject matter.
Rather, they rely on objective factors applicable to any content
(“whether it concerns soccer, Picasso, or plumbers”
80
) to decide which
content to amplify.
81
For those reasons, we think recommendation algorithms are
usually content-neutral for First Amendment purposes.
82
And, for
similar reasons, courts have correctly recognized that using content
77. Id. at 1098 (“These functions—recommendations and notifications—are tools meant to
facilitate the communication and content of others.”).
78. Force, 934 F.3d at 66–67, 70–71.
79. Gonzalez v. Google LLC, 2 F.4th 871, 897 (9th Cir. 2021), cert. granted, No. 21-1333,
2022 WL 4651229 (U.S. Oct. 3, 2022), cert. granted sub nom. Twitter, Inc. v. Taamneh, No. 21-
1496, 2022 WL 4651263 (U.S. Oct. 3, 2022); Force, 934 F.3d at 69–70; Marshall’s Locksmith Serv.
v. Google, LLC, 925 F.3d 1263, 1270–71 (D.C. Cir. 2019); Fed. Trade Comm’n v. LeadClick
Media, LLC, 838 F.3d 158, 174–79 (2d Cir. 2016).
80. Force, 934 F.3d at 70.
81. Id.
82. Rubinstein & Kenneth, supra note 27, at 56.
recommendation algorithms for such purposes does not forfeit Section
230 immunity.
* * *
As noted, influential dissenting opinions have rejected the
“business as usual” approach.
83
Objecting to the extensive protection
that Section 230 provides to platforms under the material contribution
standard, they sought to limit such protections. Wisely, they realized
that it makes little sense to limit those protections by excluding
recommendation algorithms from Section 230 protections altogether.
Instead, they tried to single out specific, yet still too broad, applications
of recommendation algorithms by platforms and explain why those
should be excluded from Section 230 immunity. We disagree with both
dissenting views on the merits, as we explain below.
B. First Alternative View: Recommending Connections
One view favors excluding recommending “connections” from
Section 230 protections. Judge Berzon, concurring in
Gonzalez, argued that platforms forfeit their Section 230 immunity
when they amplify and direct content to specific users. That they use
“neutral” algorithms to do so matters little in her view.
84
“These types
of targeted recommendations and affirmative promotion of
connections and interactions among otherwise independent users,” she
opined, “are well outside the scope of traditional publication.”
85
According to this view, when recommendation algorithms are used to
facilitate connections and social interactions, they are not protected by
Section 230. Is there a persuasive explanation for this conclusion? We
think not.
One possible explanation is that connections on platforms are a
kind of content, an input that users upload to the platform. On this
view, recommending connections to users implies participating in the
creation of the content. That is, when recommendation algorithms are
applied to connections, they always amount to a material contribution
and thus are never protected under Section 230. On this account, when
users connect to users or groups, they upload content implicitly stating,
“I like this group/user and want to connect with them.” In turn, when
83. See, e.g., Force, 934 F.3d at 76 (Katzmann, C.J., concurring); Gonzalez, 2 F.4th at 918
(Gould, J., concurring).
84. Gonzalez, 2 F.4th at 914.
85. Id.
platforms use recommendation algorithms to suggest specific friends,
groups, or events, they implicitly tell users, “I think you will like X,”
and the user implicitly responds, “I’m following your advice; I do like
X.”
There are several problems with this account. To begin with, we
are hesitant to say that connections are themselves content. While they
are created by users’ input, connections seem to be more part of the
structure of the platform than something that users try to convey to
others. Moreover, this argument seems to suggest that
recommendation algorithms are also content. Allegedly, the use of
recommendation algorithms converts the content of “connection to X”
to “platform thinks you will like connection to X.” But, ascribing such
content to recommendation algorithms is mistaken. Such uses of
recommendation algorithms only help platforms decide which
connections they should recommend to which user. As the Ninth
Circuit held in Dyroff, “[T]hese functions—recommendations and
notifications—are tools meant to facilitate the communication and
content of others. They are not content in and of themselves.”
86
Thus,
we do not think that recommending connections should be understood as contributing to the creation of content.
Even if recommending connections is somehow contributing to
content, that would not suffice. To decide whether Section 230
protections apply, we must find that what platforms do with those
recommendations amounts to a material contribution. We are hesitant
to agree that recommending connections always amounts to material
contribution.
There are many ways to implement connection recommendations
on the platform, some more pervasive than others. Without
considering more details about the means platforms use to recommend
connections to specific users—how often do these recommendations
appear, how much screen space do they capture, how easy it is for users
to ignore, how often do users actually ignore those recommendations,
etc.—it is difficult to say whether these recommendations amount to a
material contribution.
87
We can imagine that some uses of
recommendation algorithms can make material contributions. Most
86. Dyroff v. Ultimate Software Grp., 934 F.3d 1093, 1098 (9th Cir. 2019) (holding that
Section 230 immunized social networking operator from liability for its alleged role in facilitating
the drug overdose death of a man who used the social network to identify a local drug dealer and
obtain heroin, which turned out to be laced with fentanyl).
87. Compare Fair Hous. Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d
1157, 1169–71 (9th Cir. 2008), with Marshall’s Locksmith Serv. v. Google, LLC, 925 F.3d 1263,
1271 (D.C. Cir. 2019); Dyroff, 934 F.3d at 14–17.
obviously, if a platform requires a user to connect to one group or
friend out of the recommended list in order to continue using the
platform, it would likely amount to material contribution excludable
from 230 immunity.
88
But, for similar reasons, we are doubtful that
merely using recommendation algorithms to recommend connections
on platforms, without additional information, suffices to forfeit Section
230 protections. As we argue throughout this Article, a more nuanced
analysis of the application of recommendation algorithms is necessary.
Finally, we do not argue that by using recommendation algorithms
platforms are conveying a message. We suggest that platforms would
lose protection under Section 230 when they materially contribute to
content that a user uploads. This view accepts the existing analysis of
Section 230, which focuses on the material contribution to the content
that users upload. Using recommendation algorithms to suggest
connections can sometimes help create or develop this content.
Whether such a contribution is sufficient to strip platforms of their 230
immunity, though, requires further details and a case-by-case
approach.
C. Second Alternative View: Conveying a Message
Another view of recommendation algorithms detaches them from
users’ content entirely. In Force, Chief Judge Katzmann seemed to
support this view. He argued that recommendation algorithms that
match different users with similar interests do more than “publish”
users’ content.
89
Rather, they “forge[] connections, [and] develop[] new
social networks.”
90
In his words, when “targeting and recommending
[profile, group, or event pages written by other users] to users,”
Facebook “uses the algorithms to create and communicate its own
message.”
91
So, the argument goes, using recommendation algorithms
to suggest friends and groups is not protected under Section 230
because these activities amount to conveying messages, not merely
publishing them.
92
88. Cf. Roommates.com, 521 F.3d at 1166 (“By requiring subscribers to provide the
information as a condition of accessing its service, and by providing a limited set of pre-populated
answers, Roommate becomes . . . the developer, at least in part, of that information.”). Because
Roommates.com made material contributions to the content, it was not protected under Section
230. Id.
89. Force v. Facebook, Inc., 934 F.3d 53, 76–77 (2d Cir. 2019).
90. Id.
91. Id. at 82.
92. Id. at 76–77, 82.
Following this analysis, the Supreme Court might find that
recommendation algorithms are not neutral tools but rather are tools
that actively and deliberately convey information. Drawing on the
colloquial language of “recommendation algorithms,” the Court might
say that these algorithms are in fact just that: a recommendation, a
message that platforms convey to users about some content. Granted,
if using recommendation algorithms conveys a message, then it is not
protected under Section 230. But, is this interpretation compelling? Is
the use of recommendation algorithms really conveying a message? We
are doubtful.
As we explained elsewhere, even if platforms have messages that
they wish to convey, it is unclear how the use of recommendation
algorithms to rank content gives a voice to these messages.
It is tempting to understand ranking and moderating content as
complementary activities since both involve the selection,
organization, and presentation of online content, or what many refer
to as curation. And, if ranking is a form of content curation, then it also
seems to involve the exercise of editorial discretion (“we recommend
this, not that”) and, therefore, convey a message. This reading suggests
that platforms should be treated as publishers (because both make
editorial decisions) and, thus, possibly liable for information uploaded
to their websites, despite Section 230(c)(1).
But, the dissimilarities between algorithmic ranking by platforms
and the editorial decisions of traditional media outlets (like
newspapers) are striking. Editors are responsible for the content and
style of a newspaper. They assign, review, edit, rewrite, and lay out all
copy, drawing on their communication and writing skills, their
familiarity with various issues, policies, and events, and their subject-
matter expertise while maintaining their independence. In designing
ranking algorithms, however, technical teams engage in none of these
tasks. Rather, they use complex mathematics and sophisticated
engineering techniques to determine in a computationally efficient
manner which of many personal characteristics are most relevant for
predicting engagement with available content.93
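To make this point concrete, consider the following deliberately
simplified sketch (written in Python, with invented signal names,
weights, and items that do not reflect any platform’s actual system).
It illustrates what “predicting engagement from signals” amounts to: a
statistical scoring exercise, not an editorial one.

    # A hypothetical, toy illustration of engagement-based ranking. Real systems
    # rely on thousands of learned signals; every name and number here is invented.
    import math

    # Invented per-user signals (e.g., past watch time by topic), scaled 0 to 1.
    user_signals = {"likes_dogs": 0.9, "likes_cats": 0.2, "reads_philosophy": 0.1}

    # Invented learned weights linking (signal, item topic) pairs to engagement.
    weights = {
        ("likes_dogs", "dog_video"): 2.5,
        ("likes_cats", "cat_video"): 2.5,
        ("reads_philosophy", "mill_essay"): 1.8,
        ("reads_philosophy", "burke_essay"): 1.8,
    }

    def predicted_engagement(item_topic: str) -> float:
        """Toy logistic model: probability that this user engages with the item."""
        score = sum(weights.get((signal, item_topic), 0.0) * value
                    for signal, value in user_signals.items())
        return 1.0 / (1.0 + math.exp(-score))  # squash the score to a probability

    candidates = ["dog_video", "cat_video", "mill_essay", "burke_essay"]

    # The "recommendation" is just the candidate list reordered by predicted engagement.
    print(sorted(candidates, key=predicted_engagement, reverse=True))

Nothing in this exercise calls for judgment about the merits of dog
videos or political philosophy; the output is simply whatever ordering
the model predicts will best hold the user’s attention.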
Thus, algorithmic recommendation, unlike content-moderation,
does not entail policy formation, decisions about values and
viewpoints, or human oversight of automated judgments to ensure
fidelity with these editorial standards. For example, Facebook’s
recommendation algorithms evaluate thousands of pieces of content
based on hundreds of signals to determine which content is most likely
to keep each of the platform’s billions of users most engaged. It is
doubtful that Facebook’s values and viewpoints are a good predictor
of what a particular user might find relevant. In a nutshell, why would
Facebook’s (or Nick Clegg’s) core values matter if we are trying to
predict whether a particular user is more engaged by dog photos or cat
photos or by John Stuart Mill or Edmund Burke?94 Hence, even if
platforms wanted to instill their values as part of the algorithmic
recommendation process, those would probably have little weight in
light of the algorithm’s major task: keeping the user engaged.95
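The same point can be illustrated with an equally stylized sketch
(again in Python, and again with invented numbers): even if a platform
added an explicit “values” term to its ranking, a modest boost of that
kind would be swamped by the engagement predictions that drive the
ordering.

    # Hypothetical illustration only: invented scores showing how a small "values"
    # boost is dominated by predicted engagement in the final ranking.
    predicted_engagement = {"dog_video": 0.91, "cat_video": 0.62, "civics_explainer": 0.30}
    values_boost = {"civics_explainer": 0.05}  # the platform's modest thumb on the scale

    def final_score(item: str) -> float:
        return predicted_engagement[item] + values_boost.get(item, 0.0)

    # Prints ['dog_video', 'cat_video', 'civics_explainer']: the boost changes nothing.
    print(sorted(predicted_engagement, key=final_score, reverse=True))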
Furthermore, given the sheer scale of the platforms in question
and the number and variety of possible topics they recommend to users
at any given time, it is doubtful that we can intelligibly identify “the
message” that platforms convey, let alone ascribe to platforms any
meaningful intention to convey it.96 The only way to derive a message
from all this recommended content would be to analyze all of it at a
very high level of generality and abstraction, which arguably brings it
back within the ambit of Section 230 immunity.
It follows that any attempt to extract a coherent message out of
the varied content that platforms recommend to billions of different
users is a fool’s errand. At most, one can say that recommendation
algorithms endorse a specific user’s engagement with some specific
content. But, this is a very limited form of endorsement, one that
cannot easily be traced to a platform’s attempt to convey a message
about any of the topics it recommends. Rather than conveying “the
platform’s message,” the underlying algorithms are designed to
amplify whatever content is likely to engage the user.
In Gonzalez, Judge Gould subscribed to the view that
recommendation algorithms convey a message but concluded that
platforms should only forfeit Section 230 immunity when the
93. See Rubinstein & Kenneth, supra note 27, at 57–61.
94. Id. Clegg is Meta’s President of Global Affairs. Nick Clegg, President, Global Affairs,
META, https://about.meta.com/media-gallery/executives/nick-clegg [https://perma.cc/C8J7-FKZ7].
95. Indeed, the platform’s values and the user’s interest might be connected. Arguably,
many people opt for Facebook over Gab (for instance) exactly because the former provides
content that they are interested in watching, while the latter provides content that they do not
want to see. So, the platform’s “values” and the user’s explicit interests (which might be what
actually keeps a user engaged) may be confounding factors. However, even under this account,
the recommendation algorithm recommends some content because it keeps the user engaged, not
because it serves the platform’s values.
96. Admittedly, a platform can convey a message, as when it overrides the ordinary operation
of its algorithms and inserts a message such as “go vote” or “get vaccinated.” However, these are
not the actions of recommendation algorithms. If anything, they show the platform overriding
them.
information they amplify is particularly problematic.97 With this in
mind, he argued that recommendation algorithms should not be
protected “because of the unique threat posed by terrorism
compounded by social media.”98 He adopted an explicitly content-based
approach, holding that courts should be able to hem in
recommendation algorithms that amplify bad messages.99 But, this
approach runs afoul of the ideas underpinning Section 230.100 Holding
that Section 230 immunity applies unless platforms amplify content
that is “very bad” gives platforms very little assurance about
what they can and cannot publish online without risking liability. The
whole point of legislating and interpreting Section 230 broadly was to
avoid this uncertainty and the expected chilling effects that would
likely follow.101 As we explain below, we are not necessarily opposed
to carving out specific topics from Section 230 protection, but this
approach must be limited and narrowly framed.
IV. CONCLUDING REMARKS: BETTER WAYS TO REGULATE ONLINE SPEECH
The foregoing discussion offers several valuable lessons for the
relationship between Section 230 and recommendation algorithms. First
and foremost, recommendation algorithms are best understood as a
method that platforms apply for a variety of purposes. As such, courts (and
legislators) should refrain from regulating all recommendation
algorithms generically. Instead, courts should opt for a more nuanced
97. Gonzalez v. Google LLC, 2 F.4th 871, 920–21 (9th Cir. 2021), cert. granted, No. 21-1333,
2022 WL 4651229 (U.S. Oct. 3, 2022), cert. granted sub nom. Twitter, Inc. v. Taamneh, No. 21-
1496, 2022 WL 4651263 (U.S. Oct. 3, 2022).
98. Id. at 923 (“I would hold that where the website (1) knowingly amplifies a message
designed to recruit individuals for a criminal purpose, and (2) the dissemination of that message
materially contributes to a centralized cause giving rise to a probability of grave harm, then the
tools can no longer be considered ‘neutral.’”).
99. Id. at 921 (“[T]he seemingly neutral algorithm instead operates as a force to intensify
and magnify a message . . . . But when it shows acts of the most brutal terrorism
imaginable . . . then the benign aspects of Google/YouTube, Facebook and Twitter have been
transformed into a chillingly effective propaganda device . . . .”).
100. As the court stated,
But this is not where Congress drew the line . . . . Congress did not differentiate
dangerous, criminal, or obscene content from innocuous content when it drafted §
230(c)(1). Instead, it broadly mandated that ‘[n]o provider . . . of an interactive
computer service shall be treated as the publisher or speaker of any information
provided by another information content provider.’
Id. at 896 (quoting 47 U.S.C. § 230(c)(1)); see also id. at 912; supra Part I.
101. See Goldman, supra note 10, at 155–57 (noting Congress sought to incentivize platforms
to moderate objectionable content within Section 230).
approach, one that considers the specific use and application of
recommendation algorithms in specific contexts. This argument is far
from obvious. Against the background of growing discontent over the
monopolistic power of platforms and the ways they manage content,
many think that Section 230 should be scaled back.102 This might be the
case. But, excluding recommendation algorithms from Section 230
immunity is not the way to go.
Second, lower courts’ focus on the material contribution standard
seems justified. When platforms do not interfere with users’ discretion
to decide which content to upload, platforms should be regarded as
publishers and enjoy Section 230 immunity.103 Conversely, when
platforms drive users to upload specific content, the platforms make a
material contribution to the content and thus lose such immunity. This
approach is harmonious with the broad text and interpretation of
Section 230. And, it allows courts to analyze the specific application of
recommendation algorithms—whether they were indeed used as
neutral tools in a specific context. This was also the dividing line
between cases like Dyroff and Carafano104 on the one hand and
Roommates.com105 on the other. Obviously, this legal standard is not
perfect, and the line between material contribution and editorial
judgment is blurry. But, these are ordinary questions of interpretation
that courts regularly address.106
Third, courts should reject views that treat recommendation
algorithms as inherently conveying a message. As explained, this view
misconstrues the technology behind recommendation algorithms and
misunderstands the scale and volume at which platforms manage
content today.
Finally, we recognize that our approach does not resolve the many
problems that recommendation algorithms and content amplification
pose on online platforms.107 We do not think it should. Gonzalez is not
102. See supra notes 17–19 and accompanying text.
103. See, e.g., Fair Hous. Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d
1157, 1166 (9th Cir. 2008); Force v. Facebook, Inc., 934 F.3d 53, 67 (2d Cir. 2019); Gonzalez, 2
F.4th at 892–93.
104. Carafano v. Metrosplash.com., Inc., 339 F.3d 1119 (9th Cir. 2003).
105. Fair Hous. Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157 (9th
Cir. 2008).
106. Arguably, the platforms always influence the kind of content that is uploaded—using
content moderation, banning specific words, cultivating a specific culture, etc. Additionally,
content moderation schemes that impose various sanctions on specific content influence users to
upload only compliant content ex ante. The question, therefore, is how extensive and how blunt
the platforms’ intervention is. This is a challenging question, but one that courts are used to facing.
107. See supra Part II.A.
the case to “solve” Section 230. If the Supreme Court in Gonzalez
decides that recommendation algorithms are immune under Section
230 in most circumstances, there are still viable—indeed desirable—
solutions to the perils of online speech. Discussing each of those would
require separate papers, but we present below a few options we find
appealing.
For one, nothing prevents Congress (politics aside) or the Court
from imposing new restrictions on the use of social media by ISIS and
other terrorist organizations. We are not opposed to carving out
exceptions to Section 230 immunity. But, we think those exceptions (to
the extent that they are desirable on the merits) should be grounded in
and limited to very clearly defined categories. This approach is
exemplified in Section 230’s exception for sex trafficking, which refers
to civil and criminal offenses under designated statutes.108 For instance,
given the circumstances of Gonzalez, it seems reasonable to fashion a
similarly narrow exception to Section 230 that would strip platforms of
protection for failing to adequately confront terrorism as defined
under the ATA.109
Moreover, we think that a carefully written law that
narrowly regulates the use of recommendation algorithms in those
specific contexts might survive First Amendment scrutiny.110 However,
a judicial decision stripping Section 230 protections for any use of
targeted recommendations (as the Supreme Court seems to
contemplate) or one limiting those protections when “harmful
content” is at play (along the lines of Judge Gould’s view) is simply too
broad and should be avoided.
Another viable alternative is to amend Section 230 by
conditioning immunity on the platform demonstrating that it has taken
“reasonable steps to prevent or address” unlawful uses of its services.
This approach permits the courts to decide whether the steps taken by
a service in a given case were reasonable or negligent under the
circumstances in question.111 In turn, this approach invites the kind of
nuanced analysis of the methods that a particular platform used with
108. See 47 U.S.C. § 230(e)(5).
109. Some legislative proposals have taken this approach. See, e.g., Protecting Americans
from Dangerous Algorithms Act, H.R. 2154, 117th Cong. (2021) (removing Section 230 immunity
from large social media companies that amplify or recommend content that is directly relevant to
a claim involving civil rights or acts of international terrorism under the ATA). Note that there
are many problems with the definition of “terrorism,” making it perhaps too flexible an exception
to Section 230.
110. See Rubinstein & Kenneth, supra note 27, at 61–62 (arguing that targeted regulation of
platform amplification mechanisms that pursue compelling government interests could survive
First Amendment scrutiny).
111. See Citron & Wittes, supra note 9, at 419.
regard to the particular content. As discussed, we think this nuanced
approach is desirable. Yet another option is to adopt a model that
European and other countries have embraced: soft-law mechanisms
that influence platforms to self-regulate and enforce their policies in a
manner responsive to the specific relevant harms.112 Also, we think
that both platforms and regulators should explore the use of innovative
solutions, specifically ones that challenge the engagement-based
business model and technological architecture of online platforms. In
this sense, introducing “friction”113 or “middleware”114 into the online
platform landscape seems worthwhile.
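To give a flavor of what “friction” might look like in practice, the
following minimal sketch (in Python, with invented field names and
prompt text) withholds amplification until the user confirms sharing
content they have not yet opened. It is a design hurdle, not a rule
about what may be said.

    # A hypothetical sketch of a "friction" measure; the data fields and prompt
    # text are invented for illustration and do not describe any real platform.
    def reshare(post: dict, user_confirmed: bool = False) -> str:
        # Friction: if the user has not opened the linked content, ask before amplifying.
        if not post.get("opened_by_user") and not user_confirmed:
            return "PROMPT: You haven't opened this link yet. Share anyway?"
        return f"Reshared: {post['title']}"

    print(reshare({"title": "Breaking news", "opened_by_user": False}))
    print(reshare({"title": "Breaking news", "opened_by_user": False}, user_confirmed=True))

Middleware proposals operate at a different layer, routing content
curation through competing third-party services, but they share the
same premise: changing the architecture of amplification rather than
policing particular messages.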
CONCLUSION
In closing, the analysis of recommendation algorithms in this
Article only applies to Section 230. Courts should be cautious and
deliberate about their use of the preceding analysis in other contexts,
such as First Amendment law. We have argued elsewhere that the
regulation of recommendation algorithms by Florida’s social media law
and certain proposed federal legislation is content-neutral for First
Amendment purposes.115 But, much depends on the wording and
precise motivation of these provisions. As always, the devil is in the
details.
112. See Rubinstein & Kenneth, supra note 27, at 35–49 (discussing the use of soft-law
measures to confront online public health misinformation).
113. See, e.g., ERIN SIMPSON & ADAM CONNER, CTR. FOR AM. PROGRESS, FIGHTING
CORONAVIRUS MISINFORMATION AND DISINFORMATION: PREVENTIVE PRODUCT
RECOMMENDATIONS FOR SOCIAL MEDIA PLATFORMS 10 (2020) (recommending that platforms
voluntarily adopt friction measures to hinder amplification of public health misinformation).
114. See, e.g., Francis Fukuyama, Making the Internet Safe for Democracy, 32 J. DEMOCRACY
37, 40 (2021) (outlining a proposal “to outsource content curation from the dominant platforms
to a competitive layer of ‘middleware companies’”).
115. See Rubinstein & Kenneth, supra note 27, at 51–52, 56–61 (analyzing amplification or
ranking of social media posts as a content-neutral task); see also NetChoice, LLC v. Att’y Gen.,
34 F.4th 1196, 1226 (11th Cir. 2022) (noting that a provision allowing users to opt-out of platform
recommendations of content “is pretty obviously content-neutral”).