15 ways the world could end

In day-to-day life, we all navigate a range of risks: each time we cross the road, for example, we face a relatively slight chance of serious injury or death. Some risks are more serious than others, so we devote more of our time and effort to mitigating them. For instance, other things being equal, it makes sense to devote more effort to reducing the risks of dying in a car accident than to avoiding contracting a rare and relatively harmless illness. The seriousness of a risk depends on three things: scope (the number of people it would affect), severity (how badly these people would be affected), and probability (how likely it is to occur).
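This three-factor view can be sketched as a crude expected-harm calculation. The function and the example numbers below are illustrative assumptions for this article, not figures from any risk literature:

```python
def risk_score(scope, severity, probability):
    """Crude expected-harm score: people affected * harm per person * chance of occurring."""
    return scope * severity * probability

# Illustrative, made-up numbers: a common, serious risk vs. a rare, mild one.
car_accident = risk_score(scope=1, severity=1.0, probability=0.005)
rare_illness = risk_score(scope=1, severity=0.1, probability=0.0001)

print(car_accident > rare_illness)  # True: the car-accident risk warrants more mitigation effort
```

On this simple model, a risk scores highly if it affects many people, harms them badly, or is likely to occur; an existential risk is extreme on the first two dimensions even when its probability is low.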

An existential risk is a risk that threatens the premature extinction of humanity, or the permanent and drastic destruction of its potential for desirable future development. A global catastrophic risk is a hypothetical future event with the potential to damage human well-being on a global scale. Any event that could cause human extinction (an extinction-level event) or permanently and drastically curtail humanity's potential is an existential risk. But an existential risk need not kill everyone: if a global catastrophe leaves survivors who are permanently unable to rebuild society, it still qualifies as an existential catastrophe.

Existential risks are especially worth focusing on because of their impact on the long-term future of humanity. For individuals, premature death is concerning because it would deprive them of a future that would otherwise last for decades. Similarly, premature extinction matters because it would deprive humanity of a future potentially lasting a million years or more. The sheer scale of the future at stake makes reducing existential risk hugely valuable.

Over the course of our species' roughly 200,000-year history, humanity has been at risk of extinction from natural catastrophes such as asteroid impacts and super-volcanic eruptions. Anthropogenic, or human-caused, risks are a much newer phenomenon. Technological progress can give us the tools to improve society and to reduce existential risks, for example by providing the means to deflect large asteroids. However, technologies can also create new risks: with the invention of nuclear weapons, humanity gained the practical capacity to bring about its own extinction for the first time.

A crucial political task for the international community will be to manage technological progress so that we enjoy the benefits while minimizing the risks of existential catastrophe. This also highlights the importance of focusing the attention of research communities and decision-makers on existential risk. Because many of these risks arise from emerging technologies, humanity should not necessarily expect to already have the knowledge and tools needed to manage them. As a result, research into existential risks needs to be a top priority.

Examples of Existential and Global Catastrophic Risks

The 2015 Paris Agreement represented a global effort to safeguard future generations from damaging climate change. But climate change is not the only serious risk to humanity. Our collective commitment to our children and future generations needs to extend to all existential risks—those with the potential to permanently curtail humanity's opportunity to flourish. These risks include nuclear war, engineered pandemics, and other catastrophes resulting from emerging technologies. Such disasters would cause immediate harm, but in their most extreme forms they have the potential to wipe out humanity entirely. These risks may seem unlikely and distant. Indeed, in any one year they are improbable. But small probabilities accumulate over time, and because disaster risk reduction is a global public good, individual nations will tend to underinvest in it.
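The point that small probabilities accumulate follows from elementary probability: a risk with a fixed, independent annual chance p has probability 1 - (1 - p)^n of striking at least once in n years. A minimal sketch, where the 0.1% figure is an assumed example rather than an estimate from the text:

```python
def cumulative_risk(annual_prob, years):
    """Probability that an event with a fixed, independent annual
    probability occurs at least once over the given number of years."""
    return 1 - (1 - annual_prob) ** years

# An "improbable" 0.1% annual risk compounds to roughly 9.5% over a century.
print(round(cumulative_risk(0.001, 100), 3))  # 0.095
```

This is why a risk that is negligible in any single year can still be a first-order concern on the timescale of generations.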

Nuclear weapons and climate change were themselves once unimaginable. Emerging technologies may introduce new risks that are even harder to manage.

Potential global catastrophic risks include anthropogenic risks (technology risks, governance risks), and natural or external risks.

Examples of anthropogenic risks involving technology include:

1. hostile artificial intelligence,

2. biotechnology threats,

3. nanotechnology weapons.

Examples of other anthropogenic risks include:

4. insufficient global governance, creating risks in the social and political domain that could lead to a global war, with or without a nuclear or other WMD (e.g. biochemical weapon) holocaust,

5. bioterrorism using genetically modified organisms,
6. cyberterrorism destroying critical infrastructures like the electrical grid.

Risks in the domain of Earth system governance include:

7. global warming,

8. environmental degradation, including extinction of species,

9. famine as a result of non-equitable resource distribution, human overpopulation, crop failures and non-sustainable agriculture.

Examples of non-anthropogenic risks include:

10. an asteroid impact event,
11. a supervolcanic eruption,
12. a lethal gamma-ray burst,
13. a geomagnetic storm severe enough to destroy electrical grids and electronic equipment,
14. natural long-term climate change, or
15. extraterrestrial life impacting life on Earth, e.g. a hostile alien civilization causing human extinction or colonizing the planet.


Some risks have had their probabilities estimated with considerable precision. An asteroid impact, for example, is thought to have about a one-in-a-million chance of causing humanity's extinction in the next century (although some scholars argue that the actual rate of large impacts could be much higher than originally calculated). Similarly, infrequent but cataclysmic volcanic eruptions of sufficient magnitude to cause catastrophic climate change can occur, like the supervolcanic eruption about 75,000 years ago at the site of present-day Lake Toba in Sumatra, Indonesia, which probably caused a global volcanic winter of six to ten years and possibly a 1,000-year cooling episode.

The relative danger posed by other threats is much more difficult to calculate. In 2008, a group of experts at the Global Catastrophic Risk Conference held at the University of Oxford suggested a 19% chance of human extinction over the next century. The 2016 annual report by the Global Challenges Foundation estimates that an average American is more than five times more likely to die during a human-extinction event than in a car crash.
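As a sanity check on how a comparison like the Foundation's can arise, the sketch below compounds an assumed annual extinction probability over a lifetime and compares it with an assumed lifetime risk of dying in a car crash. Both input figures are illustrative assumptions, not the report's actual inputs:

```python
# Assumed inputs, for illustration only:
ANNUAL_EXTINCTION_PROB = 0.001   # 0.1% chance of a human-extinction event per year
LIFETIME_YEARS = 80              # an assumed average lifespan
LIFETIME_CRASH_RISK = 0.01      # assumed ~1% lifetime risk of dying in a car crash

# Chance of a human-extinction event occurring at least once within a lifetime.
lifetime_extinction_risk = 1 - (1 - ANNUAL_EXTINCTION_PROB) ** LIFETIME_YEARS

print(round(lifetime_extinction_risk, 3))                        # 0.077
print(round(lifetime_extinction_risk / LIFETIME_CRASH_RISK, 1))  # 7.7
```

Under these assumed numbers, the per-person extinction risk comes out several times larger than the car-crash risk, which shows how a seemingly startling comparison can follow from a modest annual probability.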

Unfortunately, with the exception of some concerned scientists and scholars, and a few politicians, mostly in the UK and Northern Europe, we have largely ignored the threats that many leading risk scholars believe are the most serious: the potentially omnicidal threats posed by nuclear and other weapons of mass destruction (biological, chemical, and molecular nanotechnological); global warming; pandemics; and risks associated with emerging technologies such as biotechnology, synthetic biology, and artificial intelligence. These technologies are not only growing more powerful at an exponential rate but also becoming increasingly accessible to small groups and even 'lone wolves'. As a result, a growing number of individuals are being empowered to wreak unprecedented havoc on civilization.

What can be done?

Since all existential risks, known and unknown, present a fundamentally global political challenge, reducing them is likely to require extensive international coordination, cooperation, and action. Some approaches, however, can succeed at a purely national or regional level, or within particular research communities.

However, because most existential risks are essentially transnational, the international community has a large role to play. It could take several actions to reduce existential risk, some immediately (such as scenario planning for pandemics) and some in the near future (such as creating a governance framework for geo-engineering research). Such international action, however, typically requires a high level of agreement and buy-in from decision-makers in nation-states and international institutions. The history of international action to prevent climate change illustrates this: the current lack of global consensus, particularly among the U.S., India, and Russia, about the sources of the problem and how to address it may lead to catastrophe in the near future.

Hence, insufficient consensus and the lack of global governance create risks in the social and political domain that inhibit concerted action on humanity's existential challenges. Because governance mechanisms develop more slowly than technological and social change, these risks are compounded.

To address this, additional venues need to be created to provide training and educational opportunities to individuals and groups with the power to take action on existential risks. Providers of executive and online education on domestic and international security could include short courses on existential risks. Perhaps universities and institutes in Prague and elsewhere could collaborate on the design and implementation of such courses.

While there is frequently uncertainty about the sources of risk and the best responses to them, the scope, severity and probability of many emerging risks mean that research and action to help overcome them should be a high global priority.
