The Machine Learning Safety Scholars program is a paid 9-week summer program designed to skill up undergraduate students in machine learning so that they may effectively pursue empirical AI safety research in the future.
The course has three parts:
- Machine learning, with lectures and assignments from MIT
- Deep learning, with lectures and assignments from the University of Michigan, NYU, and Hugging Face
- ML safety, with lectures and assignments produced by Dan Hendrycks at UC Berkeley
The first two sections are based on public materials, and we plan to make the ML safety course publicly available soon as well. The purpose of the program is not to provide proprietary lessons but to facilitate learning:
- The program has a Slack, regular office hours, and active support available for all Scholars.
- The program has designated “work hours” where students will cowork and meet each other.
- We will pay Scholars a $4,500 stipend upon completion of the program, comparable to pay for undergraduate research roles.
MLSS is fully remote, so participants can work from wherever they're located. The program will last 9 weeks, beginning on Monday, June 20th, and ending on August 19th. We expect each week of the program to cover the equivalent of about 3 weeks of the university lectures our curriculum draws from. As a result, the program will likely require roughly 30–40 hours per week, depending on each Scholar's speed and prior knowledge.
Content & schedule
Machine learning (content from the MIT open course)
- Week 1 - Basics, Perceptrons, Features
- Week 2 - Features continued, Margin Maximization (logistic regression and gradient descent), Regression
Deep learning (content from a University of Michigan course and an NYU course)
- Week 3 - Introduction, Image Classification, Linear Classifiers, Optimization, Neural Networks. ML Assignments due.
- Week 4 - Backpropagation, CNNs, CNN Architectures, Hardware and Software, Training Neural Nets I & II. DL Assignment 1 due.
- Week 5 - RNNs, Attention, NLP (from NYU), Hugging Face tutorial (parts 1-3), RL overview. DL Assignment 2 due.
ML safety
- Week 6 - Risk Management Background (e.g., accident models), Robustness (e.g., optimization pressure). DL Assignment 3 due.
- Week 7 - Monitoring (e.g., emergent capabilities), Alignment (e.g., honesty). Project proposal due.
- Week 8 - Systemic Safety (e.g., improved epistemics), Additional X-Risk Discussion (e.g., deceptive alignment). All ML Safety assignments due.
- Week 9 - Final Project
Eligibility
The program is designed for motivated undergraduates interested in empirical AI safety research. We will accept Scholars who will be enrolled as undergraduate students after the conclusion of the program (this includes graduating or recently graduated high school students about to enroll in their first year of undergrad).
Prerequisites:
- Differential calculus
- At least one of linear algebra or introductory statistics (e.g., AP Statistics)
- Programming (ability to write code in Python, or quickly learn how to)
We don’t assume any ML knowledge, though we expect the course could be helpful even for people who already have some exposure to ML (e.g., through fast.ai or Andrew Ng’s Coursera course).
Questions
Please address questions to Thomas Woodside.
Application
The application deadline has passed.