AI Safety, Ethics and Society Course

Take part in our free online course running from July-October 2024.

The past decade has seen swift progress in AI research. Today's state-of-the-art systems dramatically outperform humans in narrow areas such as predicting how proteins fold or playing chess and Go. They are closing in on expert performance in other areas: for example, they achieve scores on professional exams that surpass the average doctor or lawyer.

Advances in AI could transform society for the better, for example by accelerating scientific innovation. However, they also present significant risks to society if managed poorly, including large-scale accidents, misuse, or loss of control. Researchers, policy-makers and others will need to mobilize major efforts to successfully address these challenges.

This course aims to provide a comprehensive introduction to how current AI systems work, why many experts are concerned that continued advances in AI may pose severe societal-scale risks, and how society can manage and mitigate these risks. The course does not assume prior technical knowledge of AI.

Why take this course? 

By taking this course, you will be able to:

  • Explore a variety of risks from advanced AI systems. The course explores a range of potential societal impacts as AI systems become more powerful, from automation to weaponization. It also describes rigorous frameworks to analyze and evaluate AI risks, along with proposed mitigation strategies.
  • Broaden your knowledge of the core challenges to safe and beneficial AI deployment and the opportunities to address these. A full understanding of the risks posed by AI requires knowledge from a variety of disciplines, not just machine learning. This course provides a structured overview of the core challenges to safe deployment of advanced AI systems and demonstrates the relevance of concepts and frameworks from engineering, economics and other disciplines.
  • Build connections with others interested in these topics. You will be part of a diverse cohort of participants who bring a variety of expertise and viewpoints. The connections formed during the course can provide meaningful support in navigating and contributing to the field of AI safety.
  • Receive tailored guidance during the course and support with your next steps. Facilitators will help you to understand course material, develop your own perspectives on each subject, and encourage constructive discussions with your peers. We will support you in identifying your next steps, whether that involves building upon your end-of-course project, pursuing further research, or applying for relevant opportunities.

Course structure

The course consists of two parts. In the first part, which lasts 9 weeks, participants will go through the course content and take part in small-group discussions. In the second part, for 4 weeks, participants will work on a personal project to consolidate or extend what they have learnt during the course. The expected time commitment is around 5 hours per week, allowing participants to take the course alongside full-time work or study.

Taught content

During this 9-week section of the course, you will commit 2-3 hours per week to working through the assigned readings and other content. The first session will be an ice-breaker to get to know the other members of your cohort and to discuss your goals in taking the course. The following 8 sessions will cover the content in the course curriculum. Each week, you will also take part in a 2-hour group session with your cohort (via video call) led by an experienced facilitator. These group sessions provide an opportunity to raise questions, compare and debate different perspectives, and build connections with peers.

When you are accepted to the course, we will request your availability to ensure the group sessions are scheduled at a time that is convenient for you. 

Projects

You will have 4 weeks to pursue a personal project that builds on the knowledge acquired during the previous phase of the course. You can focus on any topic that is related to the course, and invest as much time as you prefer. For example, you could write a short report that dives into a specific question relating to AI's impacts that you find interesting, or a critique of claims about AI safety that you disagree with. We can provide suggestions on potentially valuable projects.

There will be weekly online sessions of 1-2 hours with your cohort to check in on your progress and receive feedback. At the end of this phase you will share your project with other course participants. 

Participants who attend the first phase of the course and submit an output from their project will be awarded a certificate of completion.

Cohorts and tracks

We will aim to group participants based on their level of experience. If there is sufficient interest, we may run an advanced track that skips weeks 1 (AI Fundamentals) and 2 (Overview of Catastrophic AI Risks). Applicants will be asked to register their interest in the advanced track.

Dates

The course is expected to run from around Monday July 8th to October 4th 2024. Exact dates and times for each cohort's weekly meetings will be finalized after participants have been accepted and have confirmed their availabilities.

The deadline for applications to take part is May 31st 2024, Anywhere on Earth.

Requirements

  • Participants commit to making themselves available for at least 5 hours per week for course readings and discussions
  • You will need a reliable internet connection and webcam to join video calls
  • The course is free of charge

How is this course different from other courses on AI safety?

While there is some overlap with other courses in terms of the topics covered, this course has several distinctive features:

  • The course has a relatively broad scope in terms of the societal impacts and risks from AI covered, discussing not only loss of control or misalignment, but also other risks such as malicious use, accidents and enfeeblement
  • The course focuses strongly on connecting AI safety to other well-established research fields, demonstrating the relevance of existing concepts and frameworks that have stood the test of time, such as:
    • The importance of structural and organizational sources of risk
    • Safety engineering and risk management
    • Complex systems theory
    • Different lenses to analyze competitive dynamics in AI development and deployment, including game theory, theories of bargaining and evolutionary theory

Questions

If you have any other questions, you can contact us at events@safe.ai.