7.3 Cooperation

Cooperation between AI stakeholders is important for mitigating risks from AI. Similarly, AI systems will likely need to cooperate with one another to operate autonomously in the real world. However, making AIs cooperative is not an unalloyed good: it brings risks of its own that need to be understood and addressed.


Review Questions

What are two mechanisms that can promote cooperation? Briefly describe how each mechanism may generate undesirable AI behavior.

Answer:

Group selection occurs when agents within a group cooperate due to inter-group competition. This could generate undesirable AI behavior if AI systems favor other AIs over humans.

Institutional mechanisms enable agents to cooperate due to external coercion using an enforceable set of rules or norms. An "AI Leviathan" able to enforce rules may be a good way for humanity to combat malicious or power-seeking AIs. But humanity must ensure its relationship with the AI Leviathan is symbiotic and transparent, otherwise humans risk losing control.
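The idea that enforceable rules can make cooperation rational can be sketched with a toy payoff comparison (illustrative only; the function name and payoff values are hypothetical, not from the text): an institution that credibly fines defection can flip an agent's best response from defecting to cooperating.

```python
def best_response(coop_payoff: float, defect_payoff: float, fine: float) -> str:
    """Toy model: an institution imposes `fine` on defectors.
    An agent defects only if defection still pays after the fine."""
    return "cooperate" if coop_payoff >= defect_payoff - fine else "defect"

# Without enforcement, defection dominates cooperation:
print(best_response(coop_payoff=3.0, defect_payoff=5.0, fine=0.0))  # defect
# With a sufficiently large enforceable fine, cooperation becomes rational:
print(best_response(coop_payoff=3.0, defect_payoff=5.0, fine=4.0))  # cooperate
```

The sketch also hints at the control problem the answer raises: whatever entity sets and collects the fine holds the real power.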


How can direct reciprocity potentially backfire in promoting cooperation between AIs and humans?

Answer:

As AIs become more advanced, the cost-benefit ratio of cooperating with humans may become unfavorable from an AI's perspective. AIs may come to favor cooperating with other AIs, since humans may take far longer to reciprocate favors than AIs do.
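The intuition that slow reciprocation undermines direct reciprocity can be made concrete with a standard discounted-payoff check (a minimal sketch; the function and the specific numbers are illustrative assumptions, not from the text): helping now is worthwhile only if the discounted value of the returned favor exceeds the immediate cost.

```python
def cooperation_worthwhile(benefit: float, cost: float,
                           discount: float, delay: int) -> bool:
    """Toy model of direct reciprocity: paying `cost` now is rational
    only if the favor returned after `delay` rounds, discounted by
    `discount` per round (0 < discount < 1), still exceeds the cost."""
    return benefit * discount ** delay > cost

# A fast partner (e.g. another AI) reciprocates in the next round:
print(cooperation_worthwhile(benefit=3.0, cost=1.0, discount=0.9, delay=1))   # True
# A slow partner (e.g. a human) takes many rounds to reciprocate:
print(cooperation_worthwhile(benefit=3.0, cost=1.0, discount=0.9, delay=20))  # False
```

Under these assumed payoffs, the same favor that is worth extending to a fast reciprocator is not worth extending to a slow one, matching the answer's concern.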
