AGI Safety Fundamentals

Course site.

Week 1

Visualising the deep learning revolution

Post. Lists a large number of breakthroughs in AI: vision, image generation, language modelling, game playing, mathematics & theorem proving, and protein folding. Many of these breakthroughs happened in 2022. The post claims that the forecasting model with the best historical track record for predicting AI progress is one focused on extrapolating compute scaling.
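The compute-scaling view can be made concrete with a toy extrapolation. This is a minimal sketch with illustrative numbers only (the starting FLOP figure and doubling time are assumptions, not figures from the post):

```python
# Toy compute-scaling extrapolation. All numbers are illustrative
# assumptions, not data from the post.

def extrapolate_compute(start_flop, start_year, doubling_time_years, target_year):
    """Project frontier training compute forward, assuming steady
    exponential growth with a fixed doubling time."""
    doublings = (target_year - start_year) / doubling_time_years
    return start_flop * 2 ** doublings

# Assumed: a ~1e24 FLOP frontier training run in 2022,
# compute doubling roughly every 6 months.
projected_2030 = extrapolate_compute(1e24, 2022, 0.5, 2030)
```

Under these assumptions the projection grows by 2^16 (about 65,000x) between 2022 and 2030, which is the basic mechanism behind compute-centric forecasts.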

On the Opportunities and Risks of Foundation Models

Paper, reading up to page 6.

Outlines two trends running through machine learning, deep learning, and foundation models: emergence (capabilities arise implicitly from training at scale rather than being explicitly designed) and homogenization (the same models and methods are reused across many tasks). Foundation models are now adapted to most downstream tasks (homogenization) because broad capabilities emerge from their large-scale training (emergence).

Four Background Claims

Post.

  1. Humans have a very general ability to solve problems and achieve goals across diverse domains.
  2. AI systems could become much more intelligent than humans.
  3. If we create highly intelligent AI systems, their decisions will shape the future.
  4. Highly intelligent AI systems won’t be beneficial by default.

AGI Safety From First Principles (Section 1, Section 2.1)

Paper. Argues that we are more likely to reach superintelligence via generally capable agents than via building a narrow “tool AI” for each domain, because some important domains (e.g. being a CEO) lack the large amounts of training data needed to train a narrow system directly, so capabilities will have to generalise from other domains.

Biological Anchors: A Trick That Might Or Might Not Work (Part 1)

Post.

The Biological Anchors report tries to estimate when we’ll see transformative AI by anchoring estimates of the required training compute to biological quantities (e.g. the compute performed by the human brain), then forecasting when that much compute will be available and affordable.

The report comes to a 50% chance of transformative AI by 2052.