1.1 Overview of Catastrophic AI Risks

AI systems pose a range of risks, including both present-day risks and risks that may emerge in the near future. These include risks of catastrophic outcomes, which can be divided into four categories: malicious use, AI races, organizational risks, and rogue AIs.

Summary

We review a variety of AI risks that could lead to severe or catastrophic societal outcomes. These risks are organized into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI races, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty of controlling AI systems that may outperform humans at many tasks. For each category, we examine the specific hazards it encompasses and present stories illustrating how such risks might play out.

Further Reading

C. Nelson and S. Rose, "Understanding AI-Facilitated Biological Weapon Development," Centre for Long-Term Resilience, October 2023.

Y. Shavit et al., "Practices for Governing Agentic AI Systems," OpenAI, December 2023.

P. Scharre, Army of None. W.W. Norton, 2018.

P. Park et al., "AI Deception: A Survey of Examples, Risks, and Potential Solutions," arXiv preprint arXiv:2308.14752, 2023.
