A Big Week for Social Media

by The Bright Team • Oct 28
This is a big week in social media, with new lawsuits filed against TikTok, Kanye West banned almost universally for his antisemitic tirades and, of course, Elon Musk finalising his purchase of Twitter. Declaring that “the bird is freed,” Musk moved first to fire the CEO, the CTO and -- in a move that worried many commentators -- Vijaya Gadde, the head of legal, policy, and trust, who was responsible for imposing the lifetime ban on Donald Trump’s account. It’s too soon to say what that means, but rumours and speculation are flying from all sides.

No matter what, our view is that content moderation is going to be the major issue, partially because Musk can’t help himself and partially because content moderation is always the issue. Where it gets tricky is that, when social networks start to get involved in picking what content gets more moderation than others (e.g., to “balance out” views), they start to look an awful lot like an editor of a publication, rather than a messaging space. And that brings us to the real crux of this entire controversy, Section 230.

The Twenty-Six Words that Created the Internet

“Section 230” refers to a portion of the Communications Decency Act (1996) which establishes the rule that online service providers can’t be held liable for what their users say on the service. The section itself is quite short, but it has had an outsized impact on online forums and expression -- as cybersecurity expert Jeff Kosseff puts it, its twenty-six words “created” the internet:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

Straightaway, you notice how dated the terminology is. “Interactive computer service?” “Publisher or speaker?” It may seem archaic now, but this was cutting-edge stuff in 1996, back when your Pentium computer could do (much) less than a new Apple Watch and getting online involved shouting across the house to make sure no one was on the phone. And, critically, Section 230 placed online content in the context of, and in continuity with, longstanding rules about liability for what is put into print.

The traditional rule (in US law) was that while a publisher might be held liable for unlawful content -- say, a libelous article in a magazine or an obscene image in a newspaper -- a distributor (like a bookstore or magazine stand) almost never had the same degree of control over, or knowledge of, what it was selling, and so couldn’t be held accountable.

At its core, Section 230 does something similar. If I host a website with a message board and a user writes something libelous about a third party, the user can be sued for libel. But unless I wrote the post along with the user, or specifically selected it for promotion because of its content, I am not liable for what the user did or said. In other words, Section 230 treats web platforms like bookstores and magazine stands: hosting content, but not necessarily agreeing with it.

Importantly, the protections of Section 230 apply even if the platform has rules about what content it will allow and what it won’t, as long as the rules are applied in good faith. In the days of print media, I wouldn’t have had to sell extremist or lewd books to stay protected. In fact, I could decide that I don’t want to sell books about anything other than Hawaiian pizza and refuse to carry anything irrelevant to that topic, and I’d still be protected. It’s the same for online platforms: as long as you are even-handedly enforcing the rules of your network and not publishing objectionable content yourself, Section 230 protects you.
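To make the shape of that rule concrete, here is a deliberately crude sketch in Python. It models the framing above -- not the statute itself -- and every name in it (Post, host_is_liable, and so on) is invented for illustration; real cases turn on facts that no boolean can capture.

```python
# Toy model of the liability rule described above. Illustration only --
# a sketch of the article's framing, not legal advice or the statute.
from dataclasses import dataclass


@dataclass
class Post:
    author: str                 # the user who wrote the content
    co_written_by_host: bool    # did the platform help create it?
    promoted_for_content: bool  # did the platform pick it for its message?


def host_is_liable(post: Post, moderates_in_good_faith: bool) -> bool:
    """True if, on this toy reading, the host could be treated as the
    'publisher or speaker' of the post."""
    # A host that helps write content, or selects it because of what it
    # says, starts to look like a publisher rather than a distributor.
    if post.co_written_by_host or post.promoted_for_content:
        return True
    # Even-handed, good-faith enforcement of house rules preserves the
    # shield; on this framing, bad-faith enforcement forfeits it.
    return not moderates_in_good_faith


# The user is always on the hook for their own libel; the host is not,
# so long as it behaved like a distributor rather than a publisher.
libel = Post(author="some_user", co_written_by_host=False,
             promoted_for_content=False)
assert not host_is_liable(libel, moderates_in_good_faith=True)
```

The point of the toy is the asymmetry: the platform’s exposure turns on its own conduct -- creation, selection, good faith -- not on how objectionable the user’s content is.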

Simple? No, not really. But, generally, the courts were consistent in enforcing the law, and people knew what to expect.

Race to the Bottom

Beginning in 2015, however, a massive wave of criticism of Section 230 emerged. Although much of it originated on the political right amid claims of bias and “de-platforming,” the criticism never really explained how or why Section 230 was bad. Instead, much of it focused on “free speech,” which, as we’ve outlined elsewhere, has literally nothing to do with private actors like social media companies.

In fact, requiring a company to host or promote content that violated its terms and conditions would itself be a violation of free speech rights: the social media company’s First Amendment right not to be compelled to promote views that it does not agree with. Did many of the critics of Section 230 know this? Yes they did. Did they make these arguments anyway? Yes they did.

So is Section 230 without flaws? Of course not. The protections it affords are very broad, but the corresponding obligation -- a “good faith” attempt at moderation and enforcement -- is basically a joke. No one seriously contends that Meta, Twitter, or Google consistently enforce their own moderation policies, largely because the economic incentive to do so is virtually nil.

That’s the gravamen of the current case before the U.S. Supreme Court, Gonzalez v. Google, which argues that Google’s YouTube recommendation algorithm promotes extremist content that spurs terrorism. It’s different to many of the more common arguments against the law, but the real question will be whether the Supreme Court goes beyond the terrorism context to roll back Section 230 more fully.
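For readers who want to see what “the algorithm promotes” means mechanically, here is a minimal caricature of an engagement-ranked feed. The field names and scoring rule are our own invention for illustration; YouTube’s actual systems are far more complex and not public.

```python
# A caricature of engagement ranking, invented for illustration.
# The point is only that ranking is content-blind selection.
from dataclasses import dataclass


@dataclass
class Video:
    title: str
    avg_watch_minutes: float  # assumed engagement signal
    shares: int               # assumed engagement signal


def rank_feed(videos: list[Video]) -> list[Video]:
    # Whatever keeps people watching and sharing rises to the top,
    # regardless of what the content actually says.
    return sorted(videos,
                  key=lambda v: v.avg_watch_minutes * (1 + v.shares),
                  reverse=True)
```

The plaintiffs’ argument, in essence, is that this kind of selection -- however content-blind -- is still selection, and selection is what publishers do.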

Whither Twitter

And so we come back to Elon’s new toy. The “bird” may be “freed,” but that’s a statement that comes with a lot of context. Section 230 will continue to apply... as long as editorialising and favourite-playing don’t become the norm. At that point, the protections go away and Twitter takes on liability for what its users post. And, lest we forget, Twitter is a global company that is subject to laws outside the US. Which is why European Commissioner Thierry Breton said, just hours ago, “In Europe, the bird will fly by our rules.” Plus ça change.

What do we think will happen? Again, it’s too soon, but our guess is that once the mechanics and day-to-day obligations of content moderation and disinformation management start to become real (and not just clickbait fodder), Musk is going to get bored, fast. What that means is another matter entirely, but devolution of control to the board, a flurry of attention-grabbing acts with little real significance, or a resale for a quick profit are all on the table.

But Section 230 will remain squarely in the spotlight in the coming months, as both supporters and critics represent (and misrepresent) what it means while we watch the Musk Era of social networking develop. One other important point: current calls to change Section 230 often don’t display an understanding of what that would, or could, involve, and they can feel knee-jerk. At the same time, when these calls for change are well-intentioned (and not in service of an ulterior motive), they reflect a deep and very real desire for change. In other words, although we would generally say that calls to bin Section 230 are misguided, their prevalence stems from the fact that so many people sense that something is wrong with social media: platforms need to be held accountable, and everyone is desperate for it to happen. Changing Section 230 may well be the way, but if a good thing is not done well, we’re right back where we started (or worse).

If you’re interested in reading more about our view of online expression and content moderation, have a look at our Membership Agreement and our promises, or feel free to ask us a question on the website or on our social channels. Content moderation is a core focus at Bright, and so you won’t be surprised to hear that we’ve thought about these issues deeply, and think we’ve developed creative, scalable solutions that will prove themselves when the app launches. Until then, like everyone else, we’ll be getting out the popcorn.

