8.8 Conclusion

In the introduction, we laid out the purpose of this chapter: understanding the fundamentals of how to govern AI. In other words, we wanted to understand how to organize, manage, and steer the development and deployment of AI technologies using an array of tools including norms, policies, and institutions. To set the scene, we considered a set of actors and tools that governance needs to consider.

Growth. We explored how much AI might accelerate economic growth. AI has the potential to boost economic growth significantly by augmenting the workforce, improving labor efficiency, and accelerating technological progress. However, the extent of this impact is debated: some predict explosive growth, while others believe it will be tempered by social and economic factors. Semi-endogenous growth theory suggests that population growth, by expanding the labor force and fostering innovation, has historically driven economic acceleration. Similarly, AI could enhance economic output by substituting for human labor and improving itself, creating a positive feedback loop. Nonetheless, constraints such as limited physical resources, diminishing returns on research and development, gradual technology adoption, regulatory measures, and tasks that resist automation could moderate AI-induced growth. Therefore, while AI’s contribution to economic growth is likely to be significant, whether it will result in unprecedented expansion or face hard limits remains uncertain.
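The feedback loop described above can be sketched with a standard semi-endogenous idea production function (in the style of Jones-type growth models; the notation here is illustrative, not taken from this chapter):

```latex
\dot{A} = \delta \, A^{\phi} \, L_A^{\lambda}
```

Here $A$ is the stock of ideas (technology), $L_A$ is research effort, and $\phi < 1$ captures diminishing returns to existing knowledge. If AI systems can substitute for researchers, $L_A$ itself grows with $A$, strengthening the feedback loop and potentially producing accelerating growth; if $\phi$ remains well below one or research inputs are bottlenecked, growth stays tied to the growth of those inputs, consistent with the more tempered predictions.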

Distribution. We then explored three key dimensions of the distribution of advanced AI systems: the benefits and costs of AI, access to AI, and power among AI systems. Equitable distribution of AI’s economic and social impacts will be crucial to ensure that productivity gains are shared broadly rather than accruing only to a small group such as AI developers and investors. There are conflicting considerations with regard to access to AI, which make it unclear whether availability should be tightly restricted or widely open to the general public: limiting access risks misuse by powerful groups, whereas open access risks misuse by malicious actors. In terms of distributing power among AI systems, concentrating capabilities and control in a single AI or a small group of AIs poses risks such as permanently locking in certain values or goals. However, distributing power widely among a large, diverse ecosystem of AIs also has downsides, such as increasing the potential for misuse or making it harder to correct AI systems that begin behaving undesirably.

Corporate Governance. Turning to the various stakeholders that will shape AI governance, we discussed how aspects of corporate structure and governance like legal form, ownership models, policies, practices, and assurance mechanisms can help steer technology companies away from solely maximizing profits and shareholder value. Instead, they can guide corporate AI work in directions that prioritize broader societal interests like safety, fairness, privacy, and ethics. However, achieving this through corporate governance alone may prove challenging, making complementary approaches at other levels vital.

National Governance. For governance at the national level, we explored policy tools governments can use to align AI development with public interests, both in the public and private sectors. These included safety regulations, liability rules that make AI developers internalize potential damages, investments to improve societal resilience against AI risks, and measures for maintaining national competitiveness in AI while still ensuring domestic safety. Combinations of these policy mechanisms can help nations steer AI progress in their jurisdictions towards beneficial ends.
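The idea of liability rules making developers internalize potential damages can be illustrated with a textbook expected-liability model from law and economics (the symbols are illustrative, not from this chapter). A developer chooses a level of safety investment $x$ at cost $c(x)$, where harm of size $D$ occurs with probability $p(x)$, decreasing in $x$. Under strict liability the developer bears the expected harm and solves:

```latex
\min_{x} \; c(x) + p(x)\,D
```

The privately optimal care level $x^{*}$ then satisfies $c'(x^{*}) = -p'(x^{*})\,D$: the developer invests in safety up to the point where the marginal cost of additional care equals the marginal reduction in expected damages, aligning private incentives with expected social harm.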

International Governance. At the international level, governance of AI systems is made challenging by issues like verifying adherence to agreements. However, international cooperation is essential for managing risks from AI and distributing benefits globally. Approaches like international safety standards for civilian AI applications, agreements to limit military uses of AI, and proposals for concentrating advanced AI development within select transnational groups may all help promote global flourishing. A lack of any meaningful international governance could lead to a dangerous spiral of competitive dynamics and erosion of safety standards.

Compute Governance. Next, we explored how governing access to and use of the computing resources that enable AI development could provide an important lever for influencing the trajectory of AI progress. Compute is an indispensable input to developing advanced AI capabilities. It also has properties like physicality, excludability, and quantifiability that make governing it more feasible than governing other inputs like data and algorithms. Compute governance can determine who is granted access to what levels of computational capability, and thus who can create advanced AIs. It also facilitates setting and enforcing safety standards for how compute may be used, enabling the steering of AI development.

Conclusion. This chapter provides an overview of a diverse selection of governance solutions, spanning from policies within technology firms to agreements between nations in global institutions. The arrival of transformative AI systems will require thoughtful governance at multiple levels in order to steer uncertain technological trajectories in broadly beneficial directions aligned with humanity’s overarching interests. The deliberate implementation of policies, incentives, and oversight will be essential to realizing the potential of AI to improve human civilization rather than destroy it.

Review Questions