SafeBench

$1.2 million in prizes for pioneering benchmarks in AI system safety.

Get started

Open Jul 5–Oct 5, 2022

Select a category

We've created a list of 20 concrete categories that we believe are key to the safety of advanced AI systems. Read our proposed categories and select an area that piques your interest.

Categories

Develop a benchmark

Formalize your idea in a writeup following our guidelines. You may propose any benchmark concept that would cost less than $300k to create.

Guidelines

Submit

Jul 5–Oct 5, 2022

Submit your benchmark idea to the competition. Early submissions will be eligible for potential feedback and resubmission.


Judging

Oct–Dec 2022

Judging will be done by a panel of experts.

Winners announced

Jan 2023

We will select a first-place recipient to receive $50k and a second-place recipient to receive $10k in each category. For promising ideas, we may also follow up with engineering expertise or funding to help make them a reality.

FAQ

Can I make more than one submission?
Yes. Submissions will be evaluated independently.
Do I need a team?
We welcome submissions from both teams and independent researchers. If you’re looking for a team, make sure to join our Slack channel. For team submissions, the prize will be split evenly among the authors.

Sponsored by the Future Fund Regranting Program