
Opinion: The Supreme Court could end social media companies’ charade

Editor’s Note: Former Amb. Marc Ginsberg is the founder and president of the Coalition for a Safer Web, a nonprofit organization dedicated to developing technologies and policies that expedite the permanent de-platforming of hate and extremist incitement on social media platforms. The views expressed in this commentary are his own. View more opinion on CNN.



CNN —

The US Supreme Court heard oral arguments on Tuesday in Gonzalez v. Google — the first time the justices have taken up the fate of social media’s content immunity granted under Section 230 of the Communications Decency Act. At stake: whether Google is exempt from content liability after YouTube, its subsidiary, allegedly promoted terrorist videos through its algorithm.

The case was brought by the family of Nohemi Gonzalez, an American student who was killed in Paris in a 2015 ISIS attack. Google’s attorney, Lisa Blatt, argued that protecting platforms from legal liability is what allowed the internet to get off the ground.

And on Wednesday, the Court heard a companion case, Twitter v. Taamneh, which examined whether social media companies can be liable for aiding and abetting terrorism by hosting content that expresses support for a group behind an act of violence. Twitter’s attorney, Seth Waxman, argued that the company could have been liable had it been warned that specific accounts were planning an attack, but that in the absence of such warnings, it was not.

Social media companies must no longer be allowed to have their cake and eat it too. They reap a financial bonanza across their platforms, yet they have failed to quickly deploy the resources and technology needed to keep extremist content from inciting violence. They must be held to account for that failure.

Section 230 has shielded them from the need to make that investment. Passed at the dawn of the internet age in 1996, Section 230 was intended to immunize internet companies from liability for third-party content posted on their platforms. It absolves an “interactive computer service” from being treated as a “publisher or speaker.”

In other words, the platforms are treated as benign providers of digital space, with limited liability for whatever customers decide to upload onto that space. The thinking was that newly minted internet service companies would otherwise face financial ruin from a torrent of lawsuits over defamatory content uploaded by third parties.

But things have changed since those early days of the internet. Platforms have been weaponized by antisemitic extremists and far-right terrorists. They have been used to incite racially and religiously motivated attacks across the US. And the American people are paying the price in lives lost.

The crux of the problem is that social media companies earn their revenue from digital advertising. Corporate advertisers pay premiums for so-called amplified ads, digital ads that a platform’s algorithm places on accounts that draw the attention of large numbers of other users, targeting people who return time and again to the same online interests. And if those interests happen to center on extremist content, social media companies still profit from that repeated engagement.

Advertisers must rely either on the platforms’ assurances that offensive content will be deleted or on the hope that watchdog groups or community members will flag extremist content they surely would not want their brands associated with. Meanwhile, days or even months can pass before a platform deletes offending accounts. In other words, advertisers have no ironclad assurance from social media companies that their ads will not wind up sponsoring extremist accounts.

During Tuesday’s questioning, the justices devoted considerable attention to the extent to which “targeted recommendations” turn social media platforms from neutral public spaces into publishers of potentially harmful content. By developing and deploying these content-targeting algorithms, social media companies transformed themselves from passive hosts of third-party content into active aggregators and purveyors of new content designed to hold users’ attention.

During Wednesday’s oral arguments, the justices’ questions collectively suggested the Court was leaning toward Twitter’s defense: that even though ISIS used the platform, it does not follow that Twitter intentionally provided assistance that enabled ISIS to commit a specific act of terror.

The Court could decide in Gonzalez v. Google to punt the fate of Section 230 to Congress by rejecting the Gonzalez complaint. If it does, future bills to carve out liability for extremist content would likely follow the model of earlier legislation that imposes liability on social media companies for hosting content that, among other things, violates federal criminal law, infringes intellectual property rights, constitutes child pornography or promotes sex trafficking.

Issuing dire warnings that a ruling for Gonzalez would “ruin the internet,” proponents of preserving Section 230 assert that exposing social media companies to content liability would chill free speech and invite endless lawsuits. Google’s court filing went so far as to state that an adverse ruling would “threaten the internet’s core functions.”

Conversely, Section 230’s opponents assert that it was never intended to shield social media companies when they knowingly recommend extremist incitement and terrorist content in order to drive up digital ad revenue. Social media companies are failing to remove terrorist content flagged not only by their own in-house content moderators but also by third-party watchdogs. Why? Because they have an economic disincentive to do so: the content attracts users, which, in turn, generates more ad revenue.

By taking up Gonzalez v. Google, the Court has seized the opportunity to restore order to the judicial and legislative chaos surrounding Section 230. A ruling for the plaintiffs would go a long way toward ending the charade that social media companies are doing everything possible to protect the safety and security of their users.
