#readwise
# Making Sense of Social Media

## Metadata
- Author: [[Sam Harris]], [[Tristan Harris]], [[Jonathan Haidt]], [[Cass Sunstein]]
- Full Title: Making Sense of Social Media
- URL: https://share.snipd.com/episode/ab00d329-578d-4dd5-b681-6f3b9982b469
## Highlights
### Asymmetric Attention Battle
- Smartphones are intimately woven into our lives, constantly routing our attention.
- When using social media, your brain is battling against supercomputers designed to capture your attention.
Transcript:
Tristan Harris:
On the attention economy, obviously we've always had it. We've had television competing for attention, and radio, and we've had evolutions of the attention economy before: competition between books, competition between newspapers, competition between television and more engaging television and more channels of television. So in many ways, this isn't new. But I think what we really need to look at is what was mediating where that attention went. Mediating is a big word. Smartphones, we check them a hundred times or something like that per day. They are intimately woven into the fabric of our daily lives, and ever more so because of this pre-established addiction, or just this addictive checking that we have: at any moment of anxiety, we turn to our phone to look at it. So it's intimately woven into where the attention starting place will come from. It's also taken over our fundamental infrastructure for our basic verbs. Like, if I want to talk to you or talk to someone else, my phone has become the primary vehicle for many, many verbs in my life, whether it's ordering food or speaking to someone or figuring out where to go on a map. We are increasingly reliant on the central node of our smartphone to be a router for where all of our attention goes. So that's the first part of this intimately woven nature, and the fact that it's part of the social infrastructure on which we rely. We can't avoid it. And part of what makes technology today inhumane is that we're reliant on infrastructure that's not safe, that's contaminated, for many reasons that we'll get into later.
Tristan Harris: **A second reason that's different is the degree of asymmetry between, let's say, that newspaper editor or journalist who is writing that enticing article to get you to turn to the next page versus the level of asymmetry of when you watch a YouTube video** and you think, yeah, this time I'm just going to watch one video and then I've got to go back to work, and you wake up from a trance, you know, two hours later and you say, man, what happened to me, I should have had more self-control. What that misses is there's literally Google's billions of dollars of supercomputing infrastructure on the other side of that slab of glass in your hand, pointed at your brain, doing predictive analytics on what would be the perfect next video to keep you here. And the same is true on Facebook. You think, okay, I've sort of been scrolling through the same feed for a while, but I'm just going to swipe up one more time and then I'm done. **Each time you swipe up with your finger, you know, you're activating a Twitter or a Facebook or a TikTok supercomputer that's doing predictive analytics, which has billions of data points on exactly the thing that'll keep you here.**^nn10k0
And I think it's important to expand this metaphor in a way that you've talked about, I think, on your show before, about just the increasing power and computational power of AI. When you think about a supercomputer pointed at your brain trying to figure out what's the perfect next thing to show you, that's on one side of the screen. On the other side of the screen is my prefrontal cortex, which evolved millions of years ago and is doing the best job it can to do goal articulation, goal retention and memory, and sort of staying on task, self-discipline, et cetera. So who's going to win in that battle?
([Time 0:15:28](https://share.snipd.com/snip/b59e6fd3-4e75-457e-a744-47de7d0b1fec))
---
### Social Media's Harmful Transformation
- Social media changed radically between 2009 and 2012 when Facebook added the like button and Twitter copied it; then Twitter added the retweet button and Facebook copied it; and then both algorithmized their news feeds.
- This shift meant conversations became public and were rated, leading to inauthenticity, dishonesty, and intimidation.
- Jonathan Haidt believes this evolution has created an 'outrage machine' that foreign actors exploited to interfere with democracy.
- Haidt says that if things keep going the way they're going, our country is going to fail catastrophically.
Transcript:
Jonathan Haidt:
I wrote an article in the Atlantic last November with Tobias Rose-Stockwell, where we show how, beginning in 2009, Facebook added the like button, then Twitter copied it, and then they both algorithmized their news feeds much more. So between 2009 and 2012, the nature of human connectivity changed radically in ways that I think are very, very bad for democracy. That is, it wasn't just that we could now talk to each other privately for free. It's that a lot more of our conversation was now in public being rated, which means it was inauthentic, often dishonest, and with a lot more intimidation. You know, I hate to talk to people about it. You know, I hate Twitter, I hate going on Twitter. I'm also fascinated by it. It's like opening a garbage can and watching rats and cockroaches fighting, and there's something fascinating about it. But things really changed after 2012, and the Russians noticed it. They've been trying to mess with our democracy for 50 years, and 2014 is when they realized, hey, there's this great outrage machine that the Americans have built for us. We don't have to go over there. We don't have to fly agents over to mess them up. We can just sit here in St. Petersburg and do it. So, you know, I think that, you know, I hear your incomprehension. I hear your, you know, your frustration. Things are terribly wrong.
Sam Harris:
So yeah, I agree that the style of communication has created an information space where it really is just total war all the time, in information terms.
Jonathan Haidt:
That's right. Yeah. And I don't think our democracy can survive that. If things keep going the way they're going, our country is going to fail catastrophically. I'm not predicting that it will, because I don't think things will keep on going the way they're going. But the trends are really bad, and they've been really bad for at least 10 years, more than that even.
([Time 1:13:54](https://share.snipd.com/snip/b728ea74-ccac-4e73-8690-2bc19bae8401))
---
### Group Polarization
- Like-minded people in groups tend to become more extreme in their views. This is because they primarily hear arguments supporting their views and few opposing arguments.
Transcript:
Sam Harris:
I want to talk about social media and how Twitter and Facebook have been behaving themselves. But before we get there, I think we should talk about the phenomenon of hyperpolarization in groups. This is a general phenomenon that you describe in the book, where like-minded people become more extreme once they begin associating with one another. It may sound paradoxical on its face, but it really functions by dynamics that are fairly easy to understand. Perhaps you should explain, maybe the Colorado study is the place to start here, but talk about what happens in groups among the like-minded.
Cass Sunstein: What we did in Colorado was to get a bunch of people in Boulder, which is a left-of-center city, together to talk about climate change, affirmative action, and same-sex unions. We asked them for their views privately and anonymously. Then we had them discuss the issues together and come to a verdict. And then we asked them to record their views privately and anonymously. And there was reason to expect that if you got a group of people together, they'd end up coming to the middle of what the group members privately thought, and that would be their verdict, and then they'd all be in the middle. But that's not what happened. They moved to the left on all three issues. They went way to the left on all three issues as a result of talking to each other. So **the left-of-center people in Boulder had some diversity on climate change and affirmative action before they talked to each other. After they talked to each other, they were more extreme.** They were more confident, and they were pretty well unified on all of those issues. **This isn't just a left-of-center phenomenon. We did the same thing in Colorado Springs, which is right-of-center. And as the people in Boulder went whoosh to the left, the people in Colorado Springs went whoosh to the right. And it's just because they were talking with like-minded others. So the basic rule is that usually people who are inclined in a certain direction end up, after talking to each other, thinking a more extreme version of what they thought before they started to talk.** ^m13598
And we can explain, I think, why sometimes in primaries both of our political parties go left and go right; it has something to do with this. Why within cults people sometimes end up getting extreme? That's often the phenomenon of group polarization, as it's called. Why terrorists often get radicalized? And also why people who do great things, like attacking extreme injustice, get radicalized? Because they're all talking to each other. And you say that the mechanisms are pretty intuitive. I think you're completely right. The leading one is: if you're a group of people who think, let's say, that the minimum wage should be raised, that's what they tend to think. Some of them aren't sure. They're talking with each other. They'll hear a lot of arguments about why the minimum wage should be raised, because that's what most of them think. And they won't hear a lot of the arguments the other way. And the arguments that they hear will be kind of tentative as well as few. And then, if they're listening to each other after they've heard all the arguments, they go, the minimum wage really should be raised a lot. And it's just because of the arguments they're hearing. And if you have a group of people who tend to think the minimum wage should not be raised, exactly the mirror image of what I've described will happen. And I'm smiling as I talk, because we actually taped our conversations in Colorado, and so I've seen them. And in real time, you can completely see the process where the people on the right are going more right, because they're talking to people who think conservative thoughts, and the conservative thoughts are going to look numerous and excellent. And the disagreement will seem rare and kind of stupid.
([Time 1:17:36](https://share.snipd.com/snip/ee79d436-323f-4d1d-86d2-85a7798116da))
---
### Deepfake Paradox
- Deepfakes are created using generative adversarial networks (GANs), where a generator creates fakes and a discriminator tries to detect them.
- This process results in an algorithm that cannot distinguish its own fakes, complicating third-party detection.
Transcript:
Narrator:
In our compilation on AI, there's a plea from the author Max Tegmark that we must adopt a new safety protocol, one that's different from what we've been familiar with. We're somewhat used to letting new innovations experiment on their own until their dangers and downsides become apparent and too painful, and only then do we react and adjust retroactively. He uses the example of designing a car and then later realizing it needed seatbelts. In that compilation, he stresses that we don't want to take that approach when it comes to technologies that potentially pose an existential threat, like artificial superintelligence or nuclear weapons. Playing catch-up with the dangers and damage that those technologies can deliver may be suicidal. When it comes to the threats of social media to democracies, or even to our own psychological health, the same plea may be relevant. Playing catch-up to the exploitability of the system may be too slow, too legislatively clunky, and always a labored step behind the aggressive and agile forces that continually discover new vulnerabilities. Deepfake technology presents this type of dilemma, as it raises the possibility of destroying access to a shared truth. A deepfake is a piece of synthetic media, a convincing video or piece of audio of someone doing or saying something they haven't actually done or said. This can be a video of someone delivering a speech with words they never uttered, appearing at a crime scene they were nowhere near, or cast in a movie without their knowledge. If you're thinking that the answer to this challenge will be to consistently stay ahead in a race between the ability to make convincing deepfakes and the ability to detect and zap them, there may be a major complication due to the technological process of creating deepfakes in the first place. The most current and most successful programming method uses what are called GANs, or generative adversarial networks. This won't be a comprehensive lesson, but they work something like this. Imagine you employ an art counterfeiter. This guy is pretty talented at making paintings in the style of famous artists, but he's not perfect. You set him up in one room of your house, and you call him the generator. Then you also employ a police officer who's trained at detecting counterfeit pieces of art. He inspects them closely and determines which ones come from the real master painters, and which ones are faked by thieves. You call him the discriminator. You also have a room full of real, authentic masterpieces that you only show to the discriminator, so he can learn the techniques and styles of Van Gogh, Picasso, and whoever else you like. Then, you start the process. Your counterfeiter is going to start making a bunch of fakes. Now, you have the choice to either bring one of his fakes over to the discriminator to inspect, or you can bring one of the real masterpieces that he has not yet seen. The discriminator will try to judge each piece presented to him. He can say fake or real. If he's right, meaning that he can tell when he's looking at something real versus something that's counterfeit, then you go back to the counterfeiter and break the bad news to him. You ask him to try something else to improve his technique. You keep playing this game over and over again until your discriminator is wrong, meaning he thinks that a fake piece of art is the real deal. And now here's the part that's important. You tell the discriminator that he's wrong, and you instruct him to improve his detection process.
He adjusts something about his method to better detect the fakes, until he starts getting them all correct again. This little internal competitive arms race starts to accelerate, until you're generating an impressive collection of counterfeit pieces of art. This technical process inherently creates an algorithm that can't even detect its own fakes. And it presents a paradox for any third-party agency, like a social network's administrator or moderation algorithm, which can't easily identify the fakes either. The importance of keeping certain powerful detection algorithms secret starts to sound just like familiar traditional warfare, where it's paramount to keep technical knowledge of weapon building away from your enemies. This whole deepfake thing may seem like science fiction and a long way off. But about 20% of the audio of my voice used in the last two minutes was synthetically produced. I never actually spoke those exact words. And we'd be surprised if you could tell which words were synthetic and which were authentic. There are also popular websites like ThisPersonDoesNotExist.com, where you can see the alarming success of a GAN algorithm on display.
([Time 1:44:56](https://share.snipd.com/snip/e4d1926d-562b-4a67-a5f1-96360a1f7a90))
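The counterfeiter-and-detective loop described in this highlight is essentially how a GAN is trained in code. Below is a minimal, hypothetical sketch in PyTorch (not from the episode); the toy data, network sizes, and training schedule are arbitrary assumptions chosen only to show the adversarial structure, where the discriminator learns to tell real samples from generated ones and the generator learns to fool it.

```python
# Illustrative GAN training loop (toy sketch, not production code).
# The generator plays the art counterfeiter; the discriminator plays the detective.
# "Real art" here is just a 1-D Gaussian so the example stays self-contained.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 8, 1, 64

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, data_dim),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(),
    nn.Linear(32, 1),  # one logit: how "real" the input looks
)

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(batch, data_dim) * 0.5 + 3.0   # the "masterpieces"
    fake = generator(torch.randn(batch, latent_dim))  # the "counterfeits"

    # 1) Train the detective: label real samples 1 and generated samples 0.
    d_loss = bce(discriminator(real), torch.ones(batch, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(batch, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the counterfeiter: push the detective to say "real" (label 1) on fakes.
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# If training succeeds, the discriminator ends up near 50/50 on generated samples:
# by construction it can no longer tell the generator's fakes from real data,
# which is the detection paradox the narrator describes.
```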
---