AGI Safety Fundamentals
Week 1
Visualising the deep learning revolution
Post. Lists a large number of AI breakthroughs: vision, image generation, language modelling, game playing, mathematics & theorem proving, and protein folding. Many of these breakthroughs happened in 2022. The post claims that the forecasting model with the best historical track record is one focused on compute scaling.
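To give intuition for what a compute-scaling forecast looks like, here is a minimal Python sketch that extrapolates the compute of the largest training runs under an assumed doubling time. The roughly six-month doubling time is a commonly cited estimate for the large-scale deep learning era (e.g. Sevilla et al., 2022), and the 1e24 FLOPs baseline is an illustrative order of magnitude; neither figure is taken from the post.

```python
# Extrapolate largest-training-run compute under an assumed doubling time.
# BASE_COMPUTE and DOUBLING_TIME_YEARS are illustrative assumptions, not
# figures from the post.

BASE_YEAR = 2022
BASE_COMPUTE = 1e24        # rough order of a frontier 2022 training run (FLOPs)
DOUBLING_TIME_YEARS = 0.5  # commonly cited estimate for the large-scale era

def projected_compute(year: float) -> float:
    """Largest training-run compute projected by pure compute scaling."""
    return BASE_COMPUTE * 2 ** ((year - BASE_YEAR) / DOUBLING_TIME_YEARS)

for y in (2024, 2026, 2030):
    print(y, f"{projected_compute(y):.1e} FLOPs")
```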
On the Opportunities and Risks of Foundation Models
Paper, reading up to page 6.
Outlines the trends of emergence and homogenization across machine learning, deep learning, and foundation models. Foundation models are now adapted to most downstream tasks (homogenization) because broad capabilities emerge from training at scale (emergence).
Four Background Claims
Post.
- Humans have a very general ability to solve problems and achieve goals across diverse domains.
- AI systems could become much more intelligent than humans.
- If we create highly intelligent AI systems, their decisions will shape the future.
- Highly intelligent AI systems won’t be beneficial by default.
AGI Safety From First Principles (Section 1, Section 2.1)
Paper. Argues that we will reach superintelligence via generalization rather than by building a separate "tool AI" for each domain, because many important domains (e.g. being a CEO) lack the large amounts of task-specific training data such tools would need.
Biological Anchors: A Trick That Might Or Might Not Work (Part 1)
Post.
The Biological Anchors report tries to estimate when we’ll see transformative AI:
- Estimate how much compute is required to reach human-level intelligence, e.g.:
  - compare brain neuron FLOPs to neural network FLOPs across different training-horizon lengths;
  - estimate the total FLOPs expended over the course of evolution;
  - estimate FLOPs from the information content of the genome (the genome anchor).
- Adjust the estimate over time for improvements in algorithmic efficiency.
- Adjust for falling compute costs (FLOPs per dollar).
The report arrives at a 50% chance of transformative AI by 2052; a toy version of the calculation is sketched below.
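To make the structure of the estimate concrete, here is a minimal Python sketch of a biological-anchors-style calculation: pick a compute requirement from an anchor, then find the year when the largest affordable training run (growing with spending and hardware price-performance) crosses the algorithm-adjusted requirement. Every constant here is an illustrative placeholder, not a figure from the report.

```python
# Toy biological-anchors-style forecast. All constants are illustrative
# placeholders, NOT figures from the actual report.

ANCHOR_FLOPS = 1e30           # hypothetical training-compute requirement (FLOPs)
BUDGET_2020 = 1e8             # hypothetical largest 2020 training budget (USD)
FLOPS_PER_DOLLAR_2020 = 1e17  # hypothetical 2020 price-performance (FLOPs/$)

SPEND_GROWTH = 1.10     # assumed 10%/yr growth in willingness to spend
HARDWARE_GROWTH = 1.30  # assumed 30%/yr improvement in FLOPs per dollar
ALGO_GROWTH = 1.40      # assumed 40%/yr algorithmic-efficiency gain,
                        # modelled as shrinking the compute requirement

def crossover_year(anchor_flops: float) -> int:
    """Return the first year in which the largest affordable training run
    meets the (algorithm-adjusted) compute requirement."""
    for year in range(2020, 2101):
        t = year - 2020
        affordable = (BUDGET_2020 * SPEND_GROWTH**t
                      * FLOPS_PER_DOLLAR_2020 * HARDWARE_GROWTH**t)
        required = anchor_flops / ALGO_GROWTH**t
        if affordable >= required:
            return year
    return -1  # no crossover this century

print(crossover_year(ANCHOR_FLOPS))
```

The actual report runs this kind of calculation for each anchor, puts a probability distribution over the anchors and growth rates rather than point values, and reads off the year at which the cumulative probability reaches 50%.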