The Western Framework for Free Expression
The Bright Team • Feb 07


Breaking the Feed, Social Media: Beyond the Headlines.

In this episode we explore the Western framework for protecting freedom of expression and its ability to address new challenges such as abuse, harassment, mis- and disinformation, and mental health threats. The conversation highlights the differences between the European and American approaches to hate speech regulation (and emphasises the importance of context and nuance in both systems), and delves into the complexities of managing freedom of expression at universities and the impact of social media on hate speech regulation.

Key Takeaways:
  • The Western framework for freedom of expression is not well-equipped to address new challenges such as abuse, harassment, misinformation, and mental health threats.
  • There are significant differences between the European and American approaches to hate speech regulation, with Europe having stricter regulations and the US providing more protection under the First Amendment.
  • Managing freedom of expression in universities is a complex issue, with the need to balance academic freedom and the prevention of hate speech.
  • Mis- and disinformation present challenges in determining the boundaries of free speech, with the need to distinguish between opinion and fact and the potential impact on defamation and false advertising laws.
  • Social media platforms do not have a legal obligation to uphold First Amendment rights, as they are private actors, and the existing statutory safe harbours protect them from liability.

TW.  Hi, I'm Taryn Ward,

SJ.  I'm Steven Jones,


TW.  and this is Breaking the Feed, Social Media: Beyond the Headlines. 

SJ.  We're taking a closer look at the core issues around social media, including freedom of expression, or free speech as it's sometimes known, to better understand the role that social media plays in our everyday lives and in society.

TW.  Last episode, we discussed whether there's a global framework for protecting the freedom of expression, and whether it would be possible, or even desirable, to build one. This episode, we'll look at the Western framework for protecting that freedom, discuss areas where there is conflict, or at least less than full agreement, and finally consider how well suited, or not, that framework is to address new and emerging challenges. That's really our core question for today:

TW.  How well equipped is the existing Western freedom of expression framework to address new and emerging challenges, like abuse and harassment, mis- and disinformation, and the larger mental health threat we're facing?

TW.  This is one of those times I wish I could say, and I imagine many people also wish I would say, "it depends". But actually, I can say with some confidence that it's not well equipped. Steve, before we get into it, what is your impression, broadly, of how well the Western framework for the freedom of expression is equipped to deal with some of these issues?

SJ.  Oh, gosh, Taryn. I am sad to say that I agree with you; I don't think that it is fit for purpose, and I think we see this play out in real time, in the press. As we're recording the day after the Iowa caucus, think about the ability of people to understand mis- and disinformation, and for there to be limits on how much, for example, candidates for possibly the most important political role in the world can lie, not only about their own history, but also about the last election. There are no such limits; it has been impossible to control those lies. And, as you talked about early on with the Socratic method and debate in ancient Greece, there were rules about lying in public and lying about people, and those rules, and let's face it, mostly this is happening on social media, but also on news sources like GB News and Fox News and others, those rules are not being applied. There is no fair and reasonable debate; there is no town square where rules are applied.

TW.  Partially, I think that's because these frameworks really weren't designed to consider these things, at least in the form they take today. There's no way our founding fathers could have had any idea that people would one day get their news from the television, let alone a social media network; the idea of Twitter or "X", I don't even know how you would have explained that concept to them. So, it was not in their minds as they were designing these protections or systems, and there's no way they could have imagined every citizen with a smartphone walking around taking photos and videos, or artificial intelligence and the ability to completely invent these images.

TW.  So, we've already looked a bit at the history of "free expression" and "free speech" in the West, and in that episode, we touched on some of the clear differences between the approach in Europe and the approach in the US, or the European and American approaches, as we'll refer to them going forward. It's worth listening to that episode to hear more about sources of law and how and why these approaches developed the way they have over time, if you haven't already. We've said previously, and I think it's worth saying again, these different approaches often lead to the same or similar results. But you can get some really different outcomes in certain cases, and that's what our discussion is about today. Hate speech and violent extremism is a really good example, and increasingly mis- and disinformation too.

TW.  So, let's get into it. Why don't we start with hate speech, because that's been a really hot topic, and although we'll have a separate episode that is just about hate speech, I think talking about the European versus the American approach in this context is really useful. In Europe, there are three categories of hate speech: severe forms, which member states are required to prohibit; other forms, which states may prohibit, and this includes things like bias-motivated threats or harassment; and finally, lawful hate speech, which must be protected from restriction even though it raises concerns in terms of intolerance and discrimination. In practice, a lot of hate speech falls into this middle category, which is one of the problems or challenges. So what you get is different European states restricting this type of speech in very different ways. Although it sounds like these three categories fall really neatly, the middle category is the largest, and that's where you see a lot of divergence in how it's regulated.

TW.  In the US, it's very, very different. Hate speech is fully protected by the First Amendment. That's true across the board. Hate speech is not a special category; it's not excluded. The only time it is excluded is if it falls under one of the exceptions we've already identified. So:

  • true threats, 
  • incitement to violence, 
  • defamation, things like this. 

TW.  So again, that speech would be treated the same way even if it weren't hate speech. It's not restricted because it's hate speech; it's restricted because it falls under one of those other exceptions. I also think it's worth saying that these are very, very narrow exceptions. So this is one area of the law where, again, you can see very different outcomes from similar fact patterns in the US and in the EU.

SJ.  Yeah, I think that is really interesting, and I think that Canada falls somewhere in between the two. Speech is generally protected there, according to, I think, section 2 of the Constitution, but section 1 basically says the government can put limits on that, and they have. And I think European governments are much more willing to put limits on freedom of expression than American ones, because, obviously, as you just articulated, they can. I think quite often this is one of the things which really shocks people from Europe, when they look at what's happening in the United States, the things that people are saying, the carrying of Nazi flags and all these other sorts of things, and think, how is this happening? But of course, it's perfectly permissible, and generally, I would say, unless people are genuinely advocating for violence against a particular person or group, or telling the sort of lies which amount to defamation, then you should be able to say what you want, and you should be vilified by the public for doing it, and there should be a counter-argument. The problem is, of course, that these days not everybody has the same size megaphone.

TW.  Yes, well, that's certainly true. One thing to say here, that is both a similarity and a difference, is the role that context plays. Whether you're looking at the EU list and determining which type of hate speech it is, or the US approach, where you're determining whether it falls under one of the exceptions, you have to look at the context in which it's happening. That's broadly true of anything to do with free speech and free expression, but it's especially true here, because to determine whether there's a real threat or a true threat, you have to look at the surrounding circumstances, and that's just true across the board. I think that's one of the things that makes this area of the law really complicated, and also really interesting. I also don't think there's any way around this, because if we started having blanket rules about these things, you would be restricting speech that you don't really want to restrict, and probably missing speech that ought to be restricted.

TW.  One example that has been very high on everyone's discussion list the past few months is the universities and anti-semitism issue. We're going to do two whole separate episodes on hate speech and the university controversy, so I don't want to get too into it here. But I think it's worth just saying that universities have been around for a long time, and we've been grappling with free speech, free expression, and hate speech in particular at universities for a long time, and we still haven't been able to figure it out. So imagine how complicated adding an online component makes all of this.

SJ.  I mean, it is a nightmare, and, you know, we both worked at universities. Particularly in North America, the right to freedom of expression or free speech, and academic freedom, the right to teach the things that you think are important in the way that you think you should teach them, is enshrined in most faculty agreements with universities; this is something the university can't interfere with. If a faculty member in my school was behaving badly, I still couldn't interfere in the way that they taught; I could only take away the responsibility to teach, and even then, without a whole bunch of documentation and justification, that could cause all sorts of problems, right? Universities are supposed to be able to say things without government control, because there are many cases where things being said at universities about the way government was applying control were absolutely true and valid, and the governments were in the wrong.

SJ.  So, it becomes a bit of a nightmare, and obviously, you should therefore err on the side of allowing people to speak, provided it doesn't meet one of those exceptions, it isn't disrupting the operation of the university in a way that becomes unmanageable, both sides are given the opportunity to speak peacefully, and so on and so forth. But what we've seen recently is it playing out very differently, with political interests on all sides piling on and inflaming a problem which was already very difficult for universities to manage. I'm not saying that they handled it well, we can get into this in another episode, but it's a big problem that they're dealing with.

TW.  That's one example, because it's an easy one, but you can see this across different jurisdictions too, and that's true broadly, not just for universities. You may have students who are saying something online that is very much allowed, and maybe even encouraged, in the US, because it's free speech and it's engaging with the issues, and then in Germany it's not allowed. So, it's tough. It might be fine if we were talking about two separate local newspapers, right? You have a local newspaper in Germany talking about this, and a local newspaper in the US talking about that; two separate sets of rules make sense. But when we're dealing with things online, and on social media, these things cross borders instantaneously, and for a long time the way we've dealt with this is to just give social media platforms something of a free pass, and to say, well, this is really complicated, we can't figure it out, so you don't have to either. But actually, the result of that has been something of a mess online.

TW.  And, as you rightly point out, it's a very widespread problem. It's not just these elite institutions and Ivy League schools that are grappling with this; it's every university in North America, every university in Europe, and I imagine universities around the world are facing similar challenges. And, again, adding that online component really complicates things. So, in terms of hate speech online, and specifically on social media, it means that the same speech that can be regulated in Germany, and in fact is regulated and restricted in Germany, is sometimes allowed in the United States.

SJ.  As you were speaking there, I was thinking about something you said earlier, that these are very narrow exceptions, and that's right, because with a wider blanket ban you'd restrict some speech you don't want to restrict and miss speech that should be restricted. But that requires an understanding of nuance and subtlety, and social media particularly rips that away; in most cases, there is no nuance, nobody is understanding that. And, you know, the polarising nature of social media means that the same people who were complaining about students silencing conservative voices, say, at university campuses, are now clamouring for student voices to be silenced at university campuses. I know there's a whole world of complication in there. But, you know, is free speech absolute, or is it not? And what really are those limits? In that dichotomy, there is a world of nuance which is being completely lost, and this is going to be a really interesting thing to come back to, I think.

TW.  Yeah, I agree. I think our two words for this, really for this whole season of episodes, are nuance and context, and I know that context in particular has become a bit of a dirty word. But, you know, without those two words, free expression or free speech as we know it doesn't exist, and so I think, if you're listening to this and you have one takeaway from this whole series, it's nuance and context; you can't have these freedoms without them. So, speaking of nuance,

SJ.  Now they don't have to listen, you've given the whole thing away

TW.  I've given the whole thing away! But speaking of nuance and context, I think mis- and disinformation is the other area that's worth talking about, and, you know, I should say from the beginning, freedom of expression is not, and has not traditionally been, limited to correct statements, of course it's not, but defamation and false or misleading advertising have traditionally been restricted.

TW.  So, I know this is one of those cases where we have to hold two different things in our minds at the same time. On the one hand, freedom of expression is designed to protect mistakes and misunderstandings and even bad information. But there are limits to this, again, context and nuance. If it's defamation, or false or misleading advertising, that's not traditionally been protected, and there are lots of reasons for this that we can get into another time, but hopefully it makes intuitive sense why. And where mis- and disinformation is hate speech, it may be restricted in some jurisdictions on those grounds, not in the United States, as we just said. But we have this sort of similarity to hate speech in that there's an uncomfortable in-between: we know that there should be some restrictions here, because mis- and disinformation is tiptoeing towards defamation and false or misleading advertising in many cases, but at the same time, if we go too far, we've stepped on a fundamental right.

SJ.  Yeah, and isn't this at the core of the problem? I mean, this was the core of the problem that played out with Fox News and Dominion Voting Systems, right? Fox claimed that they had the right to report this information, and Dominion was trying to prove that they knew all along that it was false, and therefore it wasn't based on mistake. This wasn't based on just, you know, erroneous information; their own team knew that these were lies and told them anyway. And we didn't get to see it play out in court because they settled, which was a shame, because I was actually really excited to see that play out in real time. Am I right? I mean, I'm obviously not a legal scholar, I'm not a lawyer in any way, but that is sort of the core of the issue in that case, right: that Fox has a right to report the news, but it can't outright lie about things like this.

TW.  Yeah, so often there's a distinction drawn between a statement of opinion, which generally cannot be restricted, and a statement of fact, which may be restricted in certain very narrow circumstances. So, nuance and context, always. But as a general rule, you can sort of distinguish between the two: if I'm presenting something as an opinion, I have a lot of room to say whatever I want, but if I'm presenting something as fact, there may be certain restrictions on that. Again, it's hard to overstate how narrow this really is, even when we're talking about presenting something as fact. Essentially, here, we're still talking about defamation and false advertising, and even then, the restrictions have to be really, really limited and narrow to pass the test and to not be considered, you know, violative of our First Amendment or free expression rights. And, you know, this has been justified for a long time by some of the arguments in favour of free expression and free speech that we've already looked at, something along the lines of false statements and bad information being a necessary part of open and true debate, and a firm belief that we'll be better informed, and generally better off, for having engaged with them and sort of determined for ourselves, or with support, that these are false or bad ideas.

TW.  So, you know, that's all great. But how do we square all of this with what we see happening on social media right now? Great question. Here, again, there's a really clear difference between the approach in Europe and the approach in the US. In Europe, the approach is generally that states have a duty to protect human rights and corporations have a duty to respect them. In the US, we have the state action principle. Essentially, this means that if there is no exercise of government power, the First Amendment does not apply, and consequently, no remedy can be sought in court in terms of protecting this fundamental right to free speech. I want to make sure this is really clear, because although I'm fairly certain people like Elon Musk understand exactly what this means, they often talk about it like they don't, and we can get into that another time. But people get spun up and start talking about their First Amendment rights on social media platforms.

TW.  So let me say this once and be really, really clear: you do not have First Amendment rights on social media platforms. Certainly, if you're a European living in Europe, you don't have First Amendment rights on social media platforms. But even if you're an American living in America, you do not have First Amendment rights on social media platforms. There's no exception to this; you just don't. Even if you live in the US, you're a US citizen, you vote, you pay taxes, you mow your lawn and shovel your sidewalks when it snows, you do not have First Amendment rights on social media platforms, because these companies are private actors.

SJ.  Awesome. I mean, you're right, people do act all the time as if they don't understand that, and whilst I don't have the deep respect for Elon Musk that many of his fans do, I don't believe that he is incapable of grasping the concept that he is not the federal government of the United States. I think he would be very unhappy if the federal government of the United States owned a social media network, wouldn't he? I mean, I assume so. And presumably, and tell me if I'm wrong, I really don't know, but presumably if there was a federal-government-owned social media network, then First Amendment rights would apply, because it would exist because some legislation somewhere had been passed to allow it to exist, or something. Right? I mean, is that right?

TW.  It is, right. It's interesting to think about, as a sort of thought experiment, how this would work, and we actually have, because in our work on Bright we've thought about the possibility of a government coming up with its own competitor or social media network, and how that would function, and the answer is: not very well, for all kinds of reasons, including the ones we got into when we looked at Truth Social. People don't really want their government to do this.

TW.  You know, I think the BBC in the UK is an exception. They sort of exist in this very unique space, where they're a government-affiliated news organisation that does really good work. But I think starting something like that now would be a tough sell, because we have different feelings about our governments than we did when the BBC started, and I think this idea that we're going to have a government-owned, government-backed social media network is tricky, in part, in the US, because of the First Amendment. It would be really interesting to get into this in more depth, and to think about what would have to happen so that it would actually be respectful of those rights. But right now, we don't have to, because none exists, and the private actors who own the existing social media networks don't really care about the First Amendment at all.

SJ.  Yeah, and this, of course, is why newspapers can ask you for comments but don't have to publish the things that you say on their websites, right? I mean, people get all bent out of shape when they get, you know, edited, or not shown. But you didn't meet their rules, and it's their thing, so they can set the rules. It's like, you can say whatever you want outside, but if you come into my house and tell me my granny was a leprechaun, I can ask you to leave, and I certainly don't have to give you tea.

TW.  Hey, yes, that's true. Well, I just want to be clear, I would never say that, so I hope you will give me tea. No, I think that's right, and it is complicated, all of this, you know, nuance and context. It's complicated. But we're still talking about the fundamentals and the basics. This isn't even the complicated bit. This is all before we even get to Section 230. So, you know, if we back up, in addition to state action, which is a uniquely American requirement, there are a number of statutory safe harbours that protect social media platforms from liability: the Communications Decency Act and the Digital Millennium Copyright Act in the US, and the E-Commerce Directive in the EU. Section 230 is one of the most talked about; it's part of the Communications Decency Act, or CDA, and essentially offers social media platforms unconditional immunity for the speech of others, and for the removal and filtering of the speech of others.

TW.  In Europe, by the way, the E-Commerce Directive isn't that different, but there are some important exceptions. This is all just to say, the First Amendment and these principles that protect free speech and free expression are the starting line for where this all starts to get really complicated. Then you throw in, on top of that, all these laws that are unique to each country, and then you start throwing treaties on top of that, as we've talked about, and it becomes really, really complicated. And I think maybe part of the issue is, we've plastered over all of these issues, because we don't want to talk about nuance and context, or because it's hard to talk about nuance and context, and we're not really getting at some of the core issues. So we're left with a landscape that is pretty confusing, or a framework, rather, that's not particularly well placed to deal with the new challenges we're facing.

SJ.  I mean, I think this is one of the frustrations for most people in the West, actually: the people on the extremes who want to take advantage of the system, and who do and say outrageous things, exasperate the relatively silent mass of people in the middle, because there's nothing we can do about it. We would love, I think, to talk about nuance and context, but all we hear is massive amounts of volume being shouted over our heads from either side. So, you know, I'm so excited about this series, because I think this is so important to the future of social media and society as a whole. This is really exciting.

TW.  It is exciting, and this episode is the last of our sort of groundwork-laying episodes. We got into it a little bit today, but I'm really looking forward to doing some deeper dives and talking about things we're seeing in our everyday lives that are related to this area. To that end, next time we are going to do a deeper dive on hate speech in particular. We will look at Europe and the US, but we'll also take some specific examples and talk through how courts are likely to deal with them in different jurisdictions, and why, and I think some of the results are going to be really surprising, but definitely interesting.

In the meantime, we'll post a transcript of this episode with references on our website.

SJ.  Until next time, I'm Steven Jones,

TW.  and I'm Taryn Ward, 

SJ.  Thank you for joining us for Breaking the Feed, Social Media: Beyond the Headlines.

Join the Conversation

Join the waitlist to share your thoughts and join the conversation.

The Bright Team

Two lawyers, two doctors, and an army officer walk into a Zoom meeting and make Bright the best digital social community in the world. The team’s education and diversity of experience have given us the tools to confront some of the toughest tech and social problems.

Join the Waitlist

Join the waitlist today and help us build something extraordinary.