What’s Next In Social, Part 2 Podcast Transcript
The Bright Team • Jan 10


Breaking the Feed, Social Media: Beyond the Headlines

We think we're likely to see some changes from some of the larger social media platforms, more copycat and niche social options emerge, and eventually, one or more challenger brands that offer something genuinely different. We'll discuss why and, for the first time, touch on the role AI is likely to play.

TW.  Hi, I'm Taryn Ward,

SJ.  and I'm Steven Jones,


TW.  and this is Breaking the Feed, Social Media: Beyond the Headlines

SJ.  We're taking a closer look at the core issues around social media, including the rise and fall of social media empires, to better understand the role social media plays in our everyday lives and society.

TW.  Last episode, we looked at some of the difficulties and barriers to entry that challenger social media networks face. In this episode, we'll think about what that means for the future of social media.

TW.  We'll start with the question: will we see a truly new social media offering in the next few years? Or will any new options inevitably repeat what's already out there? 

TW.  It's a fair question, I think, and not one that has an easy answer. We think we're likely to see some changes from some of the larger platforms, more copycat and niche options emerge, and eventually, one or more challenger brands that offer something genuinely different. Last episode, we talked about why it's difficult to start a new social media network. Aside from the financial considerations, there's the cold start problem, and then, of course, getting the timing right and competing with well-established and well-funded incumbent brands that are some of the best-known in the world: Facebook, TikTok, Instagram, Twitter.

SJ.  All of these networks, and many others, have demonstrated that in social media, surveillance capitalism means profit, or at least it can: collect data, sell data, use data to sell advertisements.

TW.  Over the past dozen or more episodes, we've seen social media grow and evolve: from America Online paid subscriptions to AOL Instant Messenger, to MySpace, to Facebook, to Twitter, to Instagram, to TikTok. We'll take a deeper look at the privacy implications of moving to an advertising revenue model in another series. But all of these networks run on data. In order to operate, these networks need users to create content, to scroll content, and to engage with content, and it's been hugely profitable.

SJ.  Yeah, that's right. In 2022, Meta had total annual revenue of over $116 billion, which is why, of course, Mark can blow $34 billion trying to develop the Metaverse and then stop. LinkedIn, our favourite love-to-hate-it network, brought in $13 billion, and Snapchat $4.6 billion, to name just three of the big ones. That's a lot of money flowing into a small number of hands there, Taryn.

TW.  Yeah, over the course of one year, it's staggering, actually, and it goes back to the last episode, where we talked about why this is continuing. This is all really to say that this model isn't going anywhere. Why would anyone who's part of these companies make a change when those are your revenue numbers for a single year? Governments may successfully start to impose some guardrails, and we certainly appreciate that they're working diligently to do so now. But until there are real alternatives available, people's options are limited, because connecting online is critical in so many ways. Part of what will drive change is that more and more people are dissatisfied with the existing model and way of engaging online. More and more, we hear people say not just that they're worried about what's out there, but that they really want something different.

SJ.  Yeah, I mean, that's been a constant feature of our conversation. Even where people don't say, “Well, I don't trust the social media companies. I don't trust Amazon or Google with my data”, and Amazon is still, let's face it, extraordinarily popular, people love online shopping on Amazon. It's annoying; even I have to resort to it sometimes. But, you know, they do want a different social experience, and I think it goes back to that idea we talked about very briefly last time: in the beginning, these networks served a purpose. Twitter was about sharing news events, not what it was designed for, but that's what people used it for. Facebook was about sharing with family members and colleagues who were presumably a long, long way away; not what it was designed for, but what it evolved into. Now it's all a machine to keep your attention on the screen. It's small snippets of video, which may or may not be informative, accurate, or entertaining, but they're designed and delivered to you in a way which will absolutely keep your eyes on the screen, so they can show you another advert, and so they can get a couple of extra data points about what interests you and refine their model of what you are likely to buy from the people who want to sell you things. That's it. There is no real engagement. We're not actually connected to people in any meaningful way. I think that's the tragedy. It was quite good, and it was ruined by the emergence of surveillance capitalism.

TW.  I think the point you made there was really important: even if we can't always say what we want to change, what we're dissatisfied with, or what we want the new thing to be, I think we all have a sense that there's something missing, something lacking. And, you know, this is why a lot of experts are saying you shouldn't use social media first thing in the morning, you shouldn't use it just before bed, and you should really set some boundaries in place in terms of when you're using it and how you're engaging, and monitor how it's affecting your mood.

TW.  That really would have been unthinkable when we were first starting, because it was so exciting, and so new, and so great, and we were all so full of hope. So, I think it is really interesting to step back, and you and I have obviously had a lot of time to do this because it's been our job for so long. But I think it's a great opportunity for people to just step back for a second and think, "Am I satisfied with the social media networks I'm using right now? And if not, why not? Trying to pinpoint exactly what that feeling is, what is it missing?"

TW.  When we talk about what that something else might be, not just in terms of what people want but what we're actually likely to see in terms of alternatives, we think that many of the alternatives offered are likely to continue down the path of the niche networks we've looked at during other episodes. These are really exclusive networks for specific communities of people, or networks that have very limited functionality but broader appeal. We think that over the next three years, we're likely to see more and more of these emerge. Why? Although concerns about privacy, freedom of expression, and mis- and disinformation are hugely important, most people feel abuse and harassment most acutely in their daily lives, and niche networks can, in some cases, solve for this, or at least appear to do so. It also means that you're more likely to see content from people you actually know, or friends, or people with whom you have a lot of overlap, rather than posts from random people you really don't care very much about. And as things become more and more polarised, it's understandable that people would seek comfort from others who see the world the same way they do, even if they understand the risks around echo chambers, because support and protection on larger and more generally available platforms is so lacking.

TW.  Similarly, niche function apps may provide a way for people to maintain a sense of connection with people they disagree with by focusing on only playing a game together or sharing a single photo together without the risk of a deeper and potentially more charged interaction.

SJ.  So, I mean, I think that is all true. But I do think there's a problem with these niche networks, and we talked about this when we talked about Tribel, for example: you're specifically building something for progressive, left-wing Democrats in the US and similar people globally, but all they hear is what they want to hear. People already go down these YouTube tunnels, so they're only hearing the perspectives that they want to hear. They're not hearing any counterarguments; there isn't even an internal debate in their own heads about, you know, whether their viewpoint is well founded. And as we know, there is absolutely no sense-checking or fact requirement to make a social media post; you can, in fact, perfectly legitimately say that purple foods on Monday and yellow foods on Tuesday will help you lose weight. It is obviously complete rubbish, but you can do it. You know, we've seen, was it Harvard or Yale, just last month, try to engage with ADHD influencers on TikTok so that they're actually spreading factual information rather than just stuff, because there's a bit of a problem.

SJ.  So, you know, these echo chambers, which tell people what they want to hear and confirm their biases, are just going to push polarisation, I think, in the same way that the polarised discussion on Twitter does, and the way the For You algorithm feeds into that. So, I'm a bit worried, to be honest. I think you're right; obviously, we think this is what's going to happen. But I think it's a problem.

SJ.  We also think that there are going to be more copycats, especially of Twitter and TikTok, partly because founders can smell blood in the water, because these platforms are likely to face increased regulatory scrutiny and public concern over the next 18 months, and because of the funding issues that we discussed in our previous episode. These copycat platforms are likely to carry over much of what appealed to people from the original platforms, issues and all, and I think that's going to be a bit of a problem. What do you think, Taryn?

TW. I think that's right. I think the reason we've spent so much time talking about copycat and niche networks is that that's sort of the obvious path, both in terms of funding, as we talked about, but also because it feels familiar, and there's a sense of, “Oh, I know how to do this. I know what this is.”

TW.  I'm not sure which is more worrying, to be honest. I don't know if it's niche networks or copycats; I think they're worrying in different ways. But I think, really, the problem is that we're going to repeat the same problems, as you said, with the copycat networks, and we're going to create new ones. Even if we solve one of the issues by creating a niche network, we're only going to exacerbate others or create new ones, and I think that's really worrying. On a happier note, we also think there's an opportunity for something truly different that moves away from the existing revenue model and towards a focus on the experience of the people actually using the network. We don't think we will be the only challenger to offer some form of subscription-based social media option, especially now that it's become largely accepted as something consumers are willing to pay for. We also think it's likely that we'll see some form of nonprofit platform emerge, think the Wikipedia model, but not Wikipedia, likely controlled by a foundation, and it's even possible that we'll see networks created and supported by government actors, openly or not. But those are some of our general predictions.

TW.  There's one more thing that I think is worth talking about, and this is really the first time we've spoken about AI, or artificial intelligence, which often really means machine learning. But we'll leave that for now, because right now the terms are often used interchangeably, and that's become the collective understanding. We expect there will be some new offerings centred around AI, and what that looks like will depend in part on how regulators respond to AI development. We've already seen a seriousness here that we haven't seen when it comes to social media more generally, and I would cautiously but optimistically suggest that this is because regulators have learned their lesson. That sounds a bit harsh when I say it out loud. But in other words, we've seen what can happen when big tech, and big social in particular, operates in a space without much oversight, and regulators are working overtime right now to close any knowledge gaps so there's not a repeat of this in the AI space. Often when this comes up, you only hear the extremes, right? Either AI is going to be our ruin, maybe immediately, overnight, but definitely sooner than we think, or AI is going to save us all, and any regulation is going to mean we can't compete with China, and they overtake us, and it will be our ruin.

TW.  Fear is at the heart of both of these approaches, as it often is when we're looking at such extremes, but it really muddies the water in terms of having a rational conversation about what the actual threats and opportunities are right now. Forget the future: if we can't talk about what the actual threats and opportunities are right now, it's impossible to accurately speculate about what could come next. So, I think we really need to start there. But that's a bit of an aside.

TW.  I would suggest that AI can play a role, and a positive role, in building a new kind of social network, in some cases largely to address problems that are created or made worse by new AI products. So, the first thing worth thinking about is what it will mean for content moderation. AI brings with it the potential for a lot of fake or bot accounts. So, this is a problem solved and, to some extent, created by AI, but bear with me. AI brings the potential for a lot more bot accounts and the ability to create really bad content quickly, and not just bad content in terms of harmful, but content that's just not good: words and images that fill feeds, brain space, and time without contributing anything or even really entertaining us in a meaningful way. AI can help with some of this, as long as we use a human-on-the-loop approach, where a real person is reviewing these AI decisions and there's a meaningful sense of oversight. AI is also really great at quickly detecting dangerous and abusive content and identifying potentially dangerous patterns of behaviour, which is especially important for networks with young people, and this is one area where we're not just talking about AI addressing problems presented by AI, but AI addressing problems that were there long before AI was part of our collective understanding.
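[Editor's note: a minimal illustrative sketch, in Python, of the human-on-the-loop flow described above, where a model only flags content and a person makes the final call. The class names, scores, and threshold are hypothetical and do not describe any platform's actual system.]

```python
# Sketch: human-on-the-loop moderation. An upstream model supplies a score;
# the system never removes content on its own, it only queues it for a person.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Post:
    post_id: str
    text: str
    abuse_score: float  # hypothetical model output: 0.0 (benign) to 1.0 (abusive)


@dataclass
class ReviewQueue:
    """Everything the model flags still waits for a human decision."""
    pending: List[Post] = field(default_factory=list)

    def triage(self, post: Post, flag_threshold: float = 0.8) -> str:
        if post.abuse_score >= flag_threshold:
            # The model only *flags*; a moderator makes the final call.
            self.pending.append(post)
            return "queued_for_human_review"
        return "published"

    def human_decision(self, post_id: str, remove: bool) -> str:
        # A moderator reviews the flagged item and records the outcome.
        self.pending = [p for p in self.pending if p.post_id != post_id]
        return "removed" if remove else "restored"


if __name__ == "__main__":
    queue = ReviewQueue()
    print(queue.triage(Post("p1", "have a nice day", 0.05)))          # published
    print(queue.triage(Post("p2", "targeted harassment...", 0.93)))   # queued_for_human_review
    print(queue.human_decision("p2", remove=True))                    # removed
```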

TW.  I have a good example of this, and you and I have spent some time talking about it. If an account claims to be 15 years old and is engaging with other people around the same age, AI tools are very good at predicting whether or not this person is actually within two years of the claimed age, and also at identifying and flagging behaviours that indicate grooming, in a way that would be really difficult for a human moderator without spending hours and hours, which is really difficult to do at scale. AI can flag these potential problems and then allow human moderators to investigate more efficiently. So, really, it lessens the burden on moderators, which we know is really heavy. It's also really great at detecting modified images. People are bad at this. No one wants to hear that, and we don't like to believe or admit it, but we are, we're really bad at it, and it's only becoming more difficult. Sometimes it doesn't matter, or at least the stakes are lower. But being able to know the difference between a genuine image and one that's been heavily modified is going to become incredibly important. Finally, and this is definitely a last but not least, AI offers some great opportunities in terms of inclusion. We can look at written-content-to-voice and voice-to-text in a way that's going to make it a lot easier for people to engage online where it would have been difficult before. That's not to say there aren't risks; there are so many risks. But there's also a huge opportunity, especially where social networks are willing to keep users, or consumers, or, in our case, members at the centre of their decision-making.
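[Editor's note: a minimal sketch of the kind of age-plausibility flagging mentioned above. The behavioural signals and hand-picked weights are purely hypothetical for illustration; a real system would use a trained model, and its flags would still go to human moderators to investigate.]

```python
# Sketch: flag accounts whose claimed age looks implausible, for human review.
from dataclasses import dataclass


@dataclass
class AccountSignals:
    claimed_age: int
    avg_message_length: float           # hypothetical engagement feature
    active_hours_overlap_school: float  # 0.0-1.0, share of activity during school hours
    vocabulary_grade_level: float       # estimated reading level of posts


def age_plausibility_score(sig: AccountSignals) -> float:
    """Crude illustrative score: higher means the claimed age looks less plausible."""
    score = 0.0
    # An account claiming to be a teenager but writing like an adult is suspicious.
    if sig.claimed_age <= 17 and sig.vocabulary_grade_level >= 13:
        score += 0.5
    # Teenagers are usually less active during school hours.
    if sig.claimed_age <= 17 and sig.active_hours_overlap_school > 0.6:
        score += 0.3
    # Unusually long, composed messages can be another weak signal.
    if sig.claimed_age <= 17 and sig.avg_message_length > 400:
        score += 0.2
    return min(score, 1.0)


if __name__ == "__main__":
    suspect = AccountSignals(claimed_age=15, avg_message_length=520,
                             active_hours_overlap_school=0.7,
                             vocabulary_grade_level=14)
    if age_plausibility_score(suspect) >= 0.7:
        print("flag for human moderator review")  # AI flags; people investigate
```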

SJ.  Wow, that was a lot of ground you just covered there, Taryn, and a really awesome summary. And, you know, I often think maybe it's that regulators and civil servants and lawyers have sort of learned from social media.

SJ.  But what I actually think is that most of those people, the ones who are in a position to make changes, are Gen X now. They grew up having watched “WarGames” when they were kids, so they intuitively know that AI can be a problem unless you teach it Tic-Tac-Toe, or Noughts and Crosses for those of us who are English. Then, of course, James Cameron subjected them to Skynet not that long afterwards, in relative terms, and to the potential problems of AI taking over the world, and so they were primed to be worried about AI, or, as you more accurately say in the present context, machine learning. They were not primed to worry about online conversations, because that was always considered to be awesome.

SJ.  I think the reason my version is probably more true is that politicians and policymakers are struggling to keep up with things at the cutting edge, and this was true, you know, with bioterrorism, which was a concern of mine previously. I went to meetings with the FBI, and people from DNA companies and universities and health agencies, and the FBI was trying to get people to sign up to voluntary codes of practice, because Congress in the United States wasn't able to pass legislation, and agencies weren't able to make regulations fast enough to keep up with the speed that tech evolved. The problem with voluntary codes of practice is in the name: they're voluntary and open to interpretation, and there are no teeth in them. I think, you know, we really do need decision-makers to make decisions based on what's best for people, and I think your last point there is right: all of this technology is great if it is focused on human-centred design. If it's designed to make the lives of the people who use it better, more engaging, more connected, happier, healthier, then that's good design, and the ways you can use AI, as you pointed out, are absolutely brilliant; there are incredible things you can do.

SJ.  But the reality is, I'm not sure about, you know, the UK Government, which is caught in this thing where you've got these voices who say there's so much money to be made, and look, it's going to be money into the Treasury coffers, and these other people who are like, everyone's going to die, and they get stuck in this sort of legislative paralysis. Look at how long the Online Safety Bill took to pass, how much revision it went through, and how much debate there was about doing anything at all. So, you know, I think there is going to be an opportunity for someone, and, you know, hopefully we're part of that picture, to develop these cool, member-centred networks which use AI in a way which enhances people's lives and drives connection and inclusion. But I think we're both probably worried about those ice-bath-loving, oat-milk-latte-swigging Stanford graduates who live in the Valley and worked for a startup and had an exit having control of this, because that's what white men who did those things do. Right? I mean, what do you think?

TW.  Yeah, I mean, I think I offered probably a very optimistic view of regulators, and I use that term really broadly, and I think yours is on the more pessimistic side. I don't think you're necessarily wrong. Obviously, I like my version for a lot of different reasons.

SJ.  So do I! 

TW.  I would offer maybe a third view that sits in the middle a little bit, and it's less of an answer and more of just a question and a way of thinking about things. So, you know, back in the day, years and years ago, when I was still in law school, one of the things I was really interested in was how courts make decisions, and there's a perception, especially in the US, of the Supreme Court making these big decisions that change the trajectory of the law and change how we understand the law, but that also make these huge cultural changes. There are different examples of this, but one that often comes up is marriage.

TW.  So, there's a famous case where the Court said, basically, that interracial marriage was a right; you had the right, if you were a Black person, to marry a white person, and vice versa, and the Court is often patted on the back for this. School integration is another one. But let's stick with marriage, because then we can talk about gay marriage, which came, you know, not so long ago, and which I think people probably remember more easily.

TW.  The reality is, when you look at the surrounding circumstances, the Court wasn't early; they weren't on the cutting edge. They were responding to what society already wanted and demanded. You know, what were the voters thinking? What was the general feeling around these issues? If you start to dig into that, it looks a little bit different. It stops looking like the Supreme Court is out there sticking its neck out and starts to look more like they sort of waited a bit. Not that they weren't applying the law the way that they should have, I'm not making any of those accusations. But they waited until public sentiment was well behind them and then chose their priorities and acted. And I know it's not the same in the UK, but I think about, you know, here in the UK, recently there was this big AI Summit, and Rishi Sunak sat down with Elon Musk, which I'm intentionally not commenting on in detail. But I will say this: for a sitting Prime Minister to sit down with the head of a social media company, who is also currently, I think, the wealthiest man alive, to talk about AI in this way, it was a really important moment. I'm not saying whether it's good or bad; I think, you know, my feelings are genuinely that it's more bad than good. But the bigger point is, this is a priority. It's been a priority in the UK Government, it's been a priority in the EU, and the White House last week issued a new executive order that is really important and that we're going to dive into. My point, largely, is that you and I may both be a little right. But I think it's the voters who are really leading the charge, and if people start to worry about this and care about this, we may see changes that are faster and better than what we saw in the social media context.

SJ.  Yeah, I mean, I'm going to fit in two Dr. Ian Malcolm quotes from Jurassic Park here, and I think that they fit, although they were about a slightly different topic. He was talking about genetic power:

“Genetic power is the most awesome force the planet has ever seen. But you wield it like a kid who's found his dad's gun”,

and I think that's the problem that we're both worried about, and society's worried about, with tech bros having their hands on this. And then there was another quote, which was,

“Your scientists were so preoccupied with whether or not they could, they didn't stop to think about whether they should.”

And that is absolutely true, and that's the role of government and the law and regulation. I think you're absolutely right that governments respond to what voters want, but that's predicated on voters having a reasonable understanding of what's going on, and I think that, you know, podcasts like this, and many more ways of discussing this in a reasonable, non-inflammatory way, are what's needed to help people do that.

TW.  Let's leave it with those quotes sort of hanging there, because I think they are really powerful things to take home and think about. Next time, we'll wrap up our series on the existing social media landscape with a look at how these changes provide consumers, brands, and influencers with new opportunities.

TW.  In the meantime, we'll post a transcript of this episode with references on our website. You can find this and more about us at TheBrightApp.com.

SJ.  Until next time, I'm Steven Jones,

TW.  and I'm Taryn Ward.

SJ.  Thank you for joining us for Breaking the Feed, Social Media: Beyond the Headlines.

The Bright Team

Two lawyers, two doctors, and an army officer walk into a Zoom meeting and make Bright the best digital social community in the world. The team’s education and diversity of experience have given us the tools to confront some of the toughest tech and social problems.
