We’re distributing $20K in total as prizes for submissions that make effective arguments for the importance of AI safety. The goal is to generate short-form content for outreach to policymakers, management at tech companies, and ML researchers.
Objectives of the arguments
To mitigate AI risk, it’s essential that we make the cruciality of safety legible to relevant stakeholders. To this end, we are initiating a pair of competitions to build effective arguments for a range of audiences. In particular, our audiences include policymakers, tech executives, and ML researchers.
- Policymakers may be unfamiliar with the latest advances in machine learning and may lack the technical background needed to follow many of the details. Instead, they may focus on the societal implications of AI and on which policies would be useful.
- Tech executives are likely aware of the latest technology but may lack a mechanistic understanding of it. They often come from technical backgrounds and are likely highly educated. They will likely read with an eye toward how these arguments concretely affect which projects they fund and whom they hire.
- Machine learning researchers can be assumed to be highly familiar with the state of the art in deep learning. They may have previously encountered talk of x-risk but were not compelled to act. They may want to know how the arguments would affect what they should be researching.

We’d like arguments to be written for at least one of the three audiences listed above. Some arguments could speak to multiple audiences, but we expect that trying to speak to all at once would be difficult. After the competition ends, we will test arguments with each audience and collect feedback. We’ll also compile top submissions into a public repository for the benefit of the x-risk community.
We aren’t soliciting arguments for very specific technical strategies towards safety. We are simply looking for sound arguments that AI risk is real and important.
Competition details
The present competition covers short arguments (paragraphs and one-liners), with a total prize pool of $20K split among roughly 20–40 winning submissions. The prize distribution will be determined by effectiveness and epistemic soundness, as judged by our competition team. Arguments must not be misleading.
The first competition will run until May 27th, 11:59 pm PT (the submission form is now closed). If you win an award, we will contact you by email.
Paragraphs
We are soliciting argumentative paragraphs (of any length) that build intuitive and compelling explanations of AI existential risk.
- Paragraphs could cover various hazards and failure modes, such as weaponized AI, loss of autonomy and enfeeblement, objective misspecification, value lock-in, emergent goals, power-seeking AI, and so on.
- Paragraphs could make points about the philosophical or moral nature of x-risk.
- Paragraphs could be counterarguments to common misconceptions.
- Paragraphs could use analogies, imagery, or inductive examples.
- Paragraphs could contain quotes from intellectuals: “If we continue to accumulate only power and not wisdom, we will surely destroy ourselves” (Carl Sagan), etc.
One-liners
Effective one-liners are statements (25 words or fewer) that make memorable, “resounding” points about safety. Here are some (unrefined) examples:
- Vladimir Putin said that whoever leads in AI development will become “the ruler of the world.” (source)
- Inventing machines that are smarter than us is playing with fire.
- Intelligence is power: we have total control of the fate of gorillas, not because we are stronger but because we are smarter. (based on Russell)
One-liners need not be full sentences; they might be evocative phrases or slogans. As with paragraphs, they can be arguments about the nature of x-risk or counterarguments to common misconceptions.
Conditions of the prizes
Acceptance of a prize constitutes consent to releasing one’s submission into the public domain. We expect that top paragraphs and one-liners will be compiled into executive summaries in the future. After some experimentation with target audiences, the arguments will be used for various outreach projects.
Submissions are no longer being accepted. We will distribute prizes in around a month.