There are many cases where intelligent agents can, despite acting rationally and in accordance with their own self-interest, collectively produce outcomes that none of them wants. Making individual AI systems reliable is not sufficient to avoid all risks from AI. Rather, we need to understand how these dynamics affect interactions between humans and AIs in order to prevent harmful, potentially catastrophic outcomes.

To design effective policies that capture AI's benefits and manage its risks, we need to consider fundamental variables such as the speed of progress in AI capabilities and the breadth of access to highly capable AI systems. We outline several possible scenarios for the speed of AI development and review arguments for and against a dramatic acceleration of economic growth driven by AI progress. We also explore how the concentration of power, both in the number of AI systems and in who has access to them, can alter the risks and benefits we face.

We then discuss potential governance approaches at different levels to ensure AI development is well-regulated and beneficial. We examine how corporations can be held to safety standards and how national and international bodies can implement effective legal oversight of AI activities. Some forms of international cooperation may be needed to mitigate competitive pressures.

Further reading

J. Sevilla, L. Heim, A. Ho, T. Besiroglu, M. Hobbhahn, and P. Villalobos, "Compute trends across three eras of machine learning," in 2022 International Joint Conference on Neural Networks (IJCNN), July 2022, pp. 1-8, IEEE.

E. Erdil and T. Besiroglu, "Explosive growth from AI automation: A review of the arguments," arXiv preprint arXiv:2309.11690, 2023.

S. M. Khan and A. Mann, "AI Chips: What They Are and Why They Matter," Center for Security and Emerging Technology, April 2020.

Y. Shavit, "What does it take to catch a Chinchilla? Verifying Rules on Large-Scale Neural Network Training via Compute Monitoring," arXiv preprint arXiv:2303.11341, 2023.

M. Anderljung et al., "Frontier AI regulation: Managing emerging risks to public safety," arXiv preprint arXiv:2307.03718, 2023.

M. Maas, "Literature Review of Transformative AI Governance," 2023 draft.

R. Trager et al., "International Governance of Civilian AI: A Jurisdictional Certification Approach," LPP Working Paper No. 3-2023, August 31, 2023.

L. Ho et al., "International Institutions for Advanced AI," arXiv preprint arXiv:2307.04699, 2023.

Discussion Questions

Review Questions