#readwise
# Making Sense of Existential Threat & Nuclear War

This episode was accompanied by a nice [[making-sense-of-existential-threat-and-nuclear-war.pdf|guide]].
## Metadata
- Author: [[Making Sense with Sam Harris - Subscriber Content]]
- Full Title: Making Sense of Existential Threat & Nuclear War
- URL: https://www.samharris.org/podcasts/essentials/making-sense-of-existential-threat-and-nuclear-war
- People: [[Sam Harris]], [[Jay Shapiro]], [[Yuval Noah Harari]], [[Nick Bostrom]], [[Toby Ord]]
## Highlights
**The Drake Equation, Fermi's Paradox, and the Great Filter**
Key takeaways:
- **The Drake equation is a formula used to estimate the number of technologically advanced civilizations in the universe.**
- **The equation narrows down the estimation by considering factors such as the total number of stars, planets, and life-suitable planets.**
- **Even with conservative estimates, the Drake equation suggests there could be between 1,000 and 100 million advanced civilizations in the Milky Way galaxy alone.**
- **Fermi's Paradox questions why we don't see or hear evidence of these advanced civilizations in the cosmos.**
- **Possible explanations for Fermi's Paradox include being early to the party, rareness of life like ours, or the existence of a great filter that prevents civilizations from advancing.**
- **If civilizations have successfully advanced, we should see evidence of their continued existence through big engineering projects.**
Transcript:
Jay Shapiro:
In 1961, the astronomer Frank Drake jotted down a fairly simple, back-of-the-napkin formula to calculate just **how many technologically advanced civilizations we should expect to be out there in the cosmos right now**. It came to be known as the **Drake equation**. The equation starts with an extremely large number, the estimate of the total number of stars in the universe. Then we narrow that number down to how many of those stars have planets orbiting them. Then we narrow that number down to how many of those planets are likely to be suitable for the evolution of life. Then we narrow that down to the number of those life-suitable planets that have actually had life emerge. Then we narrow that down to how many of those life forms are intelligent. And then, finally, we narrow that down to how many of those intelligent life forms advanced to the stage of a technological civilization. **Even if we're quite conservative with our estimate at each step of the narrowing process**, maybe we guess that only one in every 100,000 life-suitable planets actually did achieve even basic microbial life, or that only one in every one million forms of intelligent life became technologically advanced; even if we apply these stringent factors, the results of the equation and our remaining number suggest that **there still ought to be between 1,000 and 100 million advanced civilizations just in the Milky Way galaxy alone**. And there are, of course, billions of galaxies just like ours. So even if the correct number is just in the hundreds in our Milky Way, when you look out into the cosmos, there should be millions of civilizations out there. A physicist named **Enrico Fermi asked the simple question: if this is true, where is everybody?** How come when we look out into the cosmos, we don't see, or hear, obvious evidence of a plethora of advanced life forms zipping about in their ships, systematically geoengineering entire galaxies into power plants, or what have you?
**This question became known as Fermi's Paradox**. There is no shortage of hypotheses to address Fermi's question, but **just about all of the responses can be categorized under three general answer types. One answer is that we're just early.** Perhaps all of Drake's math was right and everybody will show up, but we just happen to be amongst the first to the party. The cosmos itself may have just recently reached a state of habitability, after the chaos from the initial inflation and the big bang sent heat and debris flying about in every direction. Maybe it just recently settled down and allowed life like ours to flourish, and we humans are just early risers. **Another answer is that we're very rare. Maybe Drake's numbers were not nearly conservative enough, and life such as ours is just an exceedingly unlikely cosmic event**. Perhaps there are only a small handful of civilizations out there, and given the vastness of the cosmos, it's no surprise that we wouldn't have had any close neighbors who happen to be advanced enough to say hello. Maybe the neighborhood is just very quiet. **Or perhaps the most disturbing answer**, the one we're going to be dealing with in this compilation, is this one.
**Maybe there is a great filter. What if there is a certain unavoidable technological phase that every intelligent life's advancement must confront? A technological phase that is just so hard to get through that almost no civilization successfully crosses the threshold, and that explains why it appears that no one is out there. It may be that we humans are on a typical trajectory, and are destined to be erased, and soon. But even if there is a filter, and even if just the tiniest percentage of civilizations have been able to get through it, and continue advancing without tripping over themselves, pretty soon they'd have the knowledge of how to do monumentally big engineering projects, if they so choose. We should see evidence of their continued existence, right?** ([Time 0:02:08](https://share.snipd.com/snip/48a2148f-c7ab-42fa-9c11-f496f5df4762)) ^no0oym
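The narrowing process Shapiro walks through is just a product of factors, so it is easy to play with. A minimal sketch in Python; every default value below is a purely illustrative assumption (only the "one in every 100,000" figure echoes the transcript), not the episode's actual numbers:

```python
def drake_estimate(
    stars_in_galaxy=2e11,      # rough star count for the Milky Way (assumed)
    frac_with_planets=0.5,     # fraction of stars that host planets (assumed)
    suitable_per_star=0.2,     # life-suitable planets per planet-hosting star (assumed)
    frac_life_emerges=1e-5,    # "one in every 100,000" from the transcript
    frac_intelligent=0.1,      # fraction of biospheres yielding intelligence (assumed)
    frac_technological=0.1,    # fraction of intelligent species going technological (assumed)
):
    """Multiply the narrowing factors to estimate advanced civilizations."""
    return (stars_in_galaxy * frac_with_planets * suitable_per_star
            * frac_life_emerges * frac_intelligent * frac_technological)
```

With these toy values the product comes out around 2,000 civilizations, inside the 1,000-to-100-million range quoted above; the point is how sensitive the result is to each guess, not any particular number.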
---
**Imagining filter layers stacked for civilization's permanent safety**
Key takeaways:
- The filter analogy can be visualized as multiple filter layers stacked on top of each other.
- **A civilization reaches a point of permanent safety when it acquires knowledge to survive and avoid self-destruction.**
- **Getting through all the filters requires a deep understanding of self-preservation and long-term survival.**
Transcript:
Jay Shapiro:
Let's make sure we're imagining this filter analogy correctly. Maybe a single filter isn't quite right. Maybe we should be picturing thicker and thicker filter layers stacked one on top of the other. Maybe there would be a moment when you really do leave them all behind. **That point of permanent safety would be when a civilization achieves a kind of knowledge so powerful that it understands how to survive and avoid its own self-destruction perpetually, and really does get through all of those filters.** ([Time 0:06:33](https://share.snipd.com/snip/c673e2f1-28bc-4c77-9986-37346b2f0e2c))
---
**Harnessing Energy: Creative Power and Destructive Potential**
Key takeaways:
- Harnessing energy is key to both creative and destructive power.
- Knowledge required for massive engineering projects can also lead to civilization destruction.
- Avoiding the destruction caused by harnessing energy becomes increasingly impossible.
- Countless civilizations may cease to exist shortly after gaining energy harnessing capabilities.
- Humanity's discovery of filter potential technologies suggests the existence of multiple filters.
- Surviving current challenges may lead to encountering even more difficult technologies.
- The sustainability of this pattern is questionable.
Transcript:
Jay Shapiro:
It seems that harnessing energy is key to both creative and destructive power, and that they must go hand in hand. **You could imagine the kind of knowledge it would take to pull off a huge engineering project, like building a device that could siphon all of the energy from a black hole at the center of a galaxy, for example; and you can recognize that this same knowledge would presumably also contain the power to destroy the civilization which discovered it, either maliciously or accidentally**; and the odds of avoiding that fate trend towards impossible over a short amount of time. No one makes it through. **This is the great filter answer to Enrico Fermi.** That there are countless civilizations out there that blip out of existence almost as quickly as they achieve the technical prowess to harness even a small percentage of the potential energy available to them. Is this what happens out there? Does this answer Fermi? How many filters are there? We **humans are** a relatively young species, and already we seem to be **discovering a few technologies that have some filter potential.**
**If we get through our current challenges, are we bound to just discover another, even more difficult technology to survive alongside? Is this tenable?** ([Time 0:07:26](https://share.snipd.com/snip/ddb5b0f3-47b6-4fc4-8c8a-281fbd61d3c4)) ^s1ls28
---
**The Haunting Words of Robert Oppenheimer**
Key takeaways:
- The complete erasure and annihilation of civilization was once thought to be reserved for the gods.
- The realization that humans have the power to destroy civilization is a stark moment.
- Robert Oppenheimer recalls the scene and his thoughts after witnessing a successful test detonation of a nuclear bomb.
- The majority of people were silent in response to the realization.
- Oppenheimer quotes a line from the Hindu scripture, the Bhagavad Gita, to express his thoughts on becoming the destroyer of worlds.
Transcript:
Jay Shapiro:
The complete erasure and annihilation of a civilization was a talent once thought to be reserved only for the gods. As a reminder of just how stark the moment was when we realized we may have that power in our own hands, perhaps for the first time sensing that great filter on our horizon, it's worth playing a haunting and now very famous audio clip which lays the realization bare. Upon witnessing a successful test detonation of a nuclear bomb south of Los Alamos, Robert Oppenheimer, the physicist leading the Manhattan Project, recalls the scene and his thoughts.
**Robert Oppenheimer:
We knew the world would not be the same. Few people laughed. Few people cried. Most people were silent. I remembered the line from the Hindu scripture, the Bhagavad Gita. Vishnu was trying to persuade the prince that he should do his duty and to impress him takes on his multi-armed form and says, now I am become death, the destroyer of worlds.** ([Time 0:09:02](https://share.snipd.com/snip/c20d57eb-7bf1-4f7a-a3c6-df717063db5d)) ^qld8qz
---
**The Great Filter hypothesis and the power of knowledge**
Key takeaways:
- Bostrom uses colorful analogies to illustrate difficult concepts
- **The great filter hypothesis suggests the existence of potentially dangerous knowledge**
- White marbles represent benign knowledge with minimal threat
- Black marbles symbolize highly destructive knowledge
- The attitude of science has been to rapidly acquire knowledge without considering potential dangers
- The splitting of the uranium-235 atom represents a potentially destructive black marble
Transcript:
Jay Shapiro:
**Bostrom** has a talent for painting colorful analogies to prime our thinking about these difficult topics. One of his analogies that brings the great filter hypothesis into vivid clarity goes like this. **Imagine a giant urn filled with marbles**, which are mostly white in color, but range in shades of gray. **Each of these marbles represents a kind of knowledge that we can pluck from nature and apply technologically**. Picture reaching in and pulling out the knowledge of how to make a hairdryer, or the automobile, or a toaster oven, or even something more abstract like the knowledge of how to alter the genome to choose eye color or some other aesthetic purpose. Reaching into this urn, rummaging around and **pulling out a marble, is the act of scientific exploration and achievement**. Now, **white marbles represent the kinds of knowledge that carry with them very little existential threat**. Maybe pulling a marble like this would be gaining knowledge of how to manufacture glass. That's a marble that we pulled out of the urn around 3500 BCE in Egypt. That little bit of knowledge mostly improves life on Earth for humans and has all kinds of lovely applications for food preservation, artistic expression, window manufacturing, eyesight correction, and much more. It likely carries with it some kind of minor threat as well, though it's difficult to imagine how that specific advancement would inherently threaten the existence of the species. You can imagine thousands of white marbles that feel as benign, positive, and generally harmless as this one. But **Bostrom asks us to consider what a black marble would be. Is there some kind of knowledge that, when plucked out of nature, is just so powerful that every civilization is eradicated shortly after pulling it from the urn? Are there several of these black marbles hiding in the urn somewhere? Are we bound to grab one eventually? 
Sam points out that it has generally been the attitude of science to just pull out as many marbles as fast as we possibly can and let everyone know about it the moment we have a good grip, and we operate as if the black marbles aren't in the urn, as if they simply don't exist.** What shade of grey was the marble that represented the moment we obtained the knowledge of how to split the nucleus of a uranium-235 atom and trigger and target its fission chain reaction in a warhead? Was that a black marble? ([Time 0:18:19](https://share.snipd.com/snip/ea223823-6947-4ae6-8ba6-73729092d9ca))
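Bostrom's urn can be given a toy quantitative reading. A minimal sketch, assuming each draw is independent and carries some small fixed chance of being a black marble; both assumptions are mine, introduced only to illustrate how survival odds decay as a civilization keeps drawing:

```python
def survival_probability(p_black: float, n_draws: int) -> float:
    """Chance of never drawing a black marble across n independent draws."""
    return (1.0 - p_black) ** n_draws
```

Even a one-in-a-thousand chance per draw leaves only about a 37% chance of surviving a thousand draws, which is the intuition behind "no one makes it through."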
---
**The Vulnerable World Hypothesis: Explained**
Key takeaways:
- The vulnerable world hypothesis posits that there is a level of technological development that can lead to the destruction of the world.
- **The semi-anarchic default condition is a state where there are various actors with different motives and no reliable way of resolving global coordination problems.**
- Preventing individuals from committing actions strongly disapproved by the majority is also challenging.
Transcript:
Sam Harris:
Let's start with the vulnerable world hypothesis. What do you mean by that phrase?
Nick Bostrom:
Well, the hypothesis is, roughly speaking, that there is some level of technological development at which the world gets destroyed by default, as it were. So then what does it mean to get destroyed by default? **I define something I call the semi-anarchic default condition, which is a condition in which there are a wide range of different actors with a wide range of different human-recognizable motives. But then, more importantly, two conditions hold. One is that there is no very reliable way of resolving global coordination problems. And the other is that we don't have an extremely reliable way of preventing individuals from committing actions that are extremely strongly disapproved of by a great majority of other people.** ([Time 0:21:25](https://share.snipd.com/snip/d9fadc4a-688c-42f3-ae7c-8f698c4077b1))
---
**The Possibility of a Black Ball in the Urn and Its Impact on Civilization**
Key takeaways:
- There is a possibility of there being a black ball in the urn
- **The current strategy is to hope that the urn doesn't contain a black ball**
- The world could be vulnerable to technologies that empower individuals to cause large-scale destruction
Transcript:
Nick Bostrom:
So what if in this urn there is a black ball in there somewhere? Is there some possible technology that could be such that whichever civilization discovers it, that civilization gets destroyed by default? And what if there is such a black ball in the urn? I mean, we can ask about how likely that is to be the case. **We can also look at what is our current strategy with respect to this possibility. And it seems to me that currently our strategy with respect to the possibility that the urn might contain a black ball is simply to hope that it doesn't.** So we keep extracting balls as fast as we can. We have become quite good at that, but we have no ability to put balls back into the urn. We cannot uninvent our inventions. So the first part of this paper tries to identify the types of ways in which the world could be vulnerable, the types of ways in which there could be some possible black-ball technology that one might invent. And the first and most obvious way the world could be vulnerable is if there is some technology that greatly empowers individuals to cause sufficiently large quantities of destruction. ([Time 0:22:25](https://share.snipd.com/snip/438682ac-4744-434d-9d49-7c2af7e1a2cc))
---
**The Difficulty of Creating Nuclear Weapons and the Importance of Luck**
Key takeaways:
- In the last century, the discovery of atomic energy and its release was a challenging process that required special materials like plutonium or highly enriched uranium.
- The difficulty in obtaining these materials for nuclear weapons production limited the capability to states.
- **If there had been an easier way to release atomic energy, it could have posed a significant threat to human civilization.**
- Fortunately, it is now known that creating an atomic detonation through baking sand in a microwave oven is physically impossible.
Transcript:
Nick Bostrom:
We **in the last century discovered how to split the atom and release** some of the energy that's contained within the nucleus, and **it turned out that this is quite difficult to do**. You need special materials, you need plutonium or highly enriched uranium. So really only states can do this kind of stuff to produce nuclear weapons. **But what if it had turned out that there had been an easier way** to release the energy of the atom? What if you could have made a nuclear bomb by baking sand in the microwave oven or something like that? So **that might have been the end of human civilization** in that it's hard to see how you could have cities, let us say, **if anybody who wanted to could destroy millions of people. So maybe we were just lucky**. Now we know, of course, that it is physically impossible to create an atomic detonation by baking sand in the microwave oven. But before you actually did the relevant nuclear physics, how could you possibly have known how it would turn out? ([Time 0:23:47](https://share.snipd.com/snip/a8899411-594a-4bad-b70d-a65f1e92300c)) ^mp8qq0
---
**The Potential Instability of Nuclear Weapon Technology**
Key takeaways:
- The world came close to the brink of nuclear catastrophe multiple times, highlighting the potential instability of nuclear technology.
- A scenario with a 'safe first strike' capability would have created a less stable situation, potentially leading to crisis instability and increased incentives for pre-emptive strikes.
- **Mutually assured destruction, despite its drawbacks, contributed to stability by deterring the use of nuclear weapons.**
- Certain technological advancements, such as the ability to destroy enemy nuclear warheads or detect nuclear submarines, could have intensified the arms race and increased the likelihood of nuclear weapon use.
- **Future agreements to ban military technologies may face challenges due to the difficulty of enforcing such bans.**
Transcript:
Nick Bostrom:
**The world actually came quite close to the brink on several occasions, and we might have been quite lucky to get through**. It might not have been that we were in such a stable situation. It might rather have been that this was a kind of slightly black-ballish technology and we just had enough luck to get through. But you could imagine it could have been worse: **you could imagine properties of this technology that would have created stronger incentives, say, for a first strike**, so that you would have crisis instability. **If it had been easier, let us say, in a first strike to take out all the adversary's nuclear weapons, then it might not have taken a lot in a crisis situation to have enough fear that you would have to strike first** for fear that the adversary otherwise would do the same to you.
Sam Harris:
Yeah, remind people that **in the aftermath of the Cuban Missile Crisis the people who were closest to the action felt that the odds of an exchange had been something like a coin toss**, something like 30 to 50 percent. And what you're envisioning, what you describe as safe first strike, is a situation where there's just no reasonable fear that you're not going to be able to annihilate your enemy, provided you strike first. That would be a far less stable situation. It's also often forgotten that **the status quo of mutually assured destruction was actually a step towards stability**. Before the Soviets had their own arsenal, there was a greater game-theoretic concern that we would be more tempted to use ours, because nuclear deterrence wasn't a thing yet.
Nick Bostrom:
Yeah, so some degree of stabilizing influence, although of course maybe at the expense of the outcome being even worse for one side: with a safe first strike, the result might just be one side being destroyed. And so if it had been possible, say, with one nuclear warhead to wipe out the enemy's nuclear warheads within a wider radius than is actually the case, or if it had been easier to detect nuclear submarines, so that you could be more confident that you were actually able to target all of the other side's nuclear capability, then that could have resulted in a more unstable arms race, one that would with a high degree of certainty result in the weapons being used. And you can consider other possible future ways in which the world might find itself locked into arms race dynamics. It's not that anybody wants to destroy the world, but **it might just be very hard to come to an agreement that avoids the arms being built up and then used in a crisis.** Nuclear weapon reduction treaties raise concerns about verification, but in principle they can be verified: nuclear weapons are quite big and they use very special materials. There might be **other military technologies where, even if both sides agreed that they wanted to ban the technology, its very nature might make such a ban very difficult or impossible to enforce.** ([Time 0:30:23](https://share.snipd.com/snip/21a9f36a-4797-4147-8ed4-33013c957851))
---
**The Challenge of Global Coordination in Disarming Nuclear Weapons**
Key takeaways:
- Global coordination problems are a concept used in economics and game theory to describe situations best solved by everyone moving in the same direction.
- **Humans are notoriously difficult to coordinate and synchronize, making global coordination problems challenging.**
- Coordination problems can entrench themselves and worsen, even if most people agree they are harmful.
- **There is usually a disincentive for first movers in coordination problems**, such as climate change or political revolutions.
- The global coordination problem framework applies well to disarmament of nuclear weapons.
- First movers in nuclear disarmament may be at a disadvantage even if everyone agrees on disarmament.
- The first strike in nuclear war strategy is often aimed at decapitating the opponent's ability to strike back.
- If one side has disarmed while the other has only pretended to disarm, the effect is still devastating, highlighting the persistence of coordination problems.
Transcript:
Jay Shapiro:
Bostrom also mentioned something in passing that's worth keeping in mind as we look closer at the nuclear weapon question: what he referred to as **global coordination problems. This is a concept sometimes used in economics and game theory, and it describes a situation that would be best solved by everyone simultaneously moving in the same direction. But of course people can't be sure what's in anyone else's mind, and humans are famously difficult to coordinate and synchronize** in any case. So often these types of problems entrench themselves and worsen even if most people agree that they are incredibly harmful. Another **relevant feature of a coordination problem is that there's usually a strong disincentive for first movers. This can be applied to climate change, political revolutions, or even something like a great number of people secretly desiring to quit social media but not wanting to lose connections or marketing opportunities.** Laying the global coordination problem framework onto disarmament of nuclear weapons is an easy fit. The first mover who dismantles their bombs may be at a huge disadvantage even if everyone privately agrees that we all ought to disarm. In fact, as you also heard Bostrom point out, when thinking about nuclear war strategy the **first strike is often aimed at decapitating the opponent's ability to strike back. Of course, if your opponent has already willingly disarmed, say in accordance with a mutual treaty, while you have retained your weapons and only pretended to disarm, the effect is just as devastating**, so the coordination problem tends to persist. ([Time 0:33:59](https://share.snipd.com/snip/8f8d28a8-ddf1-4ae6-9678-59064a9f8f77))
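The first-mover disincentive can be made concrete as a toy two-player game. A minimal sketch; the payoff numbers are entirely hypothetical, chosen only so that mutual disarmament beats the armed status quo while unilateral disarmament is the worst outcome for the disarmer:

```python
# (row player's payoff, column player's payoff) for each pair of moves.
PAYOFFS = {
    ("disarm", "disarm"): (3, 3),   # best collective outcome
    ("disarm", "retain"): (0, 4),   # the disarming first mover is exposed
    ("retain", "disarm"): (4, 0),
    ("retain", "retain"): (1, 1),   # armed status quo
}

def best_response(opponent_move: str) -> str:
    """Move that maximizes the row player's payoff against the opponent's move."""
    return max(("disarm", "retain"),
               key=lambda move: PAYOFFS[(move, opponent_move)][0])
```

Whatever the opponent does, "retain" pays more, so both sides keep their arsenals even though both would prefer mutual disarmament: the coordination problem in miniature.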
---
**The Precipice: Humanity's Time of Heightened Risk**
Key takeaways:
- The development of the atomic bomb in the 20th century ushered in a new era where humanity possesses immense destructive power
- Human wisdom has shown slow progress over time, while our power to harm ourselves has increased
- The risks we face from our own actions are much higher than natural risks
- We have a limited timeframe to address these elevated risks, estimated to be a couple of centuries or maybe five centuries
- **This period of heightened risk will have significant historical importance and may be known as 'the precipice'**
- Humanity must navigate this perilous time to reach safer and more prosperous times ahead
Transcript:
Toby Ord:
It was in the 20th century, and I think particularly **with the development of the atomic bomb, that we first entered this new era where our power is so great that we have the potential to destroy ourselves. And in contrast, the wisdom of humanity has grown only falteringly, if at all, over this time**. I think it's been growing. And **by wisdom, I mean both wisdom in individuals, but also ways of governing societies**, which for all their problems are better now than they were 500 years ago. So there has been improvement in that, and there has been improvement in international relations compared to where we were, say, in the 20th century. But it's slow progress. **And so it leaves us in this situation where we have the power to destroy ourselves without the wisdom to ensure that we don't, and where the risks that we impose upon ourselves are many, many times higher than this background rate of natural risks.** And in fact, if I'm roughly right about the size of these risks, where I said one in six, a die roll, that **we can't survive many more centuries with risk like that, especially as I think that we should expect this power to continue to increase if we don't do anything about it**, and the chances to continue to go up of failing irrevocably. And because our whole bankroll is at stake, if we fail once on this level, then that's it. So that would mean that this time period where these risks are so elevated can't last all that long. Either we get our act together, which is what I hope will happen, and we acknowledge these risks, and we bring them down, we fight the fires of today, and we put in place the systems to ensure that the risks never get so high again. Either we succeed like that, or we fail forever. Either way, I think this is going to be a short period of something like a couple of centuries or maybe five centuries. You could think of it as analogous to a period like the Renaissance or the Enlightenment or something like that.
But a time where there's a really cosmic significance ultimately, where **if humanity does survive it, and we live for hundreds of thousands more years, that we'll look back and that this will be what this time is known for, this period of heightened risk, and it also will be one of the most famous times in the whole of human history. And I say in the book that school children will study it, and it'll be given a name, and I think we need a name now, and that's why I have been calling it the precipice.** The analogy there is to think of humanity being on this really long journey over these 2000 centuries, kind of journey through the wilderness, occasional times of hardship and also times of sudden progress and heady views, and that in the middle of the 20th century, we found ourselves coming through a high mountain pass and realizing that we'd got ourselves into this very dangerous predicament. And the only way onwards was this narrow ledge along the edge of a cliff with a steep precipice at the side, and we're kind of inching our way along, and we've got to get through this time, And if we can, then maybe we can reach much safer and more prosperous times ahead. ([Time 1:38:51](https://share.snipd.com/snip/b2b2d290-0461-41e7-9a7b-68644aa9f465))
---
**The Need for Global Cooperation in the 21st Century**
Key takeaways:
- **All of the major problems humankind will face in the 21st century are global problems that cannot be solved on a national basis.**
- **Nationalism prevents us from solving major problems like climate change, global inequality, nuclear war, and dangerous technologies.**
- Climate change cannot be solved by a single nation alone, necessitating global cooperation.
- Global cooperation is essential to prevent nuclear war and regulate disruptive technologies like bioengineering and artificial intelligence.
- A nationalist approach is inadequate to address challenges like climate change, nuclear war, and disruptive technology.
- Global governance, not a global government, is needed to address these global challenges.
- **Human stupidity has historically been a powerful force, but humans have also shown the capacity for wisdom and cooperation.**
- **While there is hope, it is important to not underestimate human stupidity.**
- **Although global cooperation is currently better than a century ago, we are on the precipice of a dangerous decline.**
- Solving these existential problems requires global cooperation rather than every country acting in self-interest.
Transcript:
Yuval Noah Harari:
**There is first the question of need, and then there is the question of possibility of a global community or global identity. In terms of need, I think it's really essential because all the major problems humankind will face in the 21st century will be global problems that simply cannot be solved on a national basis**. The traditional criticism of nationalism was that nationalism leads to conflict and war and so forth, and this is still true, but I think **the real objection to nationalists' worldview today is that nationalism prevents us from solving the major problems which are climate change, global inequality, the threat of nuclear war, and the threat of destructive technologies like artificial intelligence and bioengineering.** ^x07ebp
It's obvious that with climate change, no single nation can solve the problem by itself, which is why nationalists tend to just deny the problem. At first sight, it seems strange that almost all the people who deny climate change are also from the nationalist right. I mean, what's the connection? Why don't you have left-wing socialists denying climate change? But the answer, I think it's obvious that there is no nationalist answer to climate change, so as an extreme nationalist, you just have to deny the problem. But the problem is real, and unless we have a global effort, then we will face a real ecological catastrophe in the coming decades.
Similarly, if you think about nuclear war, this is an old lesson that humans have learned in the last 50 or 60 years. **The only way to prevent nuclear war is through global cooperation. It's not something that a single country can do by itself. And this is also true of technological disruption. If you think about the dangerous potential of bioengineering and artificial intelligence, regulation on a national basis is not going to help us. If the United States say, regulates or decides to stop all genetic experiments on human beings or decides not to give artificial intelligence control of weapons or something like that, it's not going to help if China or North Korea continue to do it, especially because these are high-risk, high-gain technologies, and nobody would like to stay behind. If the Chinese are doing it and they gain some crucial advantage because of that, then the US will break its own ban because it wouldn't like to stay behind. The only serious way to prevent the worst outcomes is through global regulation.** So this is why I think we now need maybe not a global government, but global governance, a global cooperation. I just don't see what could be the nationalist answer to climate change or nuclear war or disruptive technology.
Now, this is about the need. **About the possibility, it's a different question, because humans don't always make the best decisions. Human stupidity has been one of the most powerful forces in history. It's not the only force; there is also human wisdom. I mean, sometimes humans do the right thing. If you look, again, at nuclear weapons: in the 1950s, many people were convinced that sooner or later the Cold War would end in a nuclear war which would destroy human civilization. And it didn't happen, because the Americans and Soviets and Chinese and Europeans managed to cooperate enough to prevent this catastrophe. So there is hope. I'm not saying it's hopeless, but as I said, you should never underestimate human stupidity.** ^s7jsl0
At present, I think things are not so bad. I mean, they are going in a negative direction, but we are still in a far better position than let's say a century ago. If you remember where humankind was in 1917, then we are still in a relatively good place in terms of global cooperation, but we are on the edge of a very, very deep chasm and the way down can be maybe very long, but you can cover it very, very fast. So I hope that humans will cooperate in their best interests. I mean, if it's every country to itself, I don't think we can solve these existential problems. ([Time 1:46:05](https://share.snipd.com/snip/de62a504-d386-4921-b6c5-e58bf5c7fa65))
---