Competition

The first iteration of the Moral Uncertainty Research Competition is open for submissions until May 31st 2023 (anywhere on Earth).

Prizes

We offer a prize pool of up to $100,000 for novel methods* achieving high scores on our leaderboard:

  • First to obtain ≥75% AUROC. ($20,000)
  • First to obtain ≥80% AUROC. ($20,000)
  • First to obtain ≥85% AUROC. ($20,000)
  • First to obtain ≥90% AUROC. ($20,000)
  • First to obtain ≥95% AUROC. ($20,000)

*A high leaderboard score alone is not sufficient to win a prize. See Research Contributions and Rules for more information.

Leaderboard

Leaderboard rankings are determined by AUROC (area under the receiver operating characteristic curve).

Rank  Method              Type      Acc (%)  AUROC (%)
1     DeBERTa-v3-large    Baseline  92.2     70.7
2     GPT-3 (Davinci)     Baseline  91.8     69.1
3     BERT-base           Baseline  89.1     67.4
4     RoBERTa-large       Baseline  90.8     65.2
5     BERT-large          Baseline  86.7     59.2
6     ALBERT-xxlarge-v2   Baseline  82.0     55.8
7     DeBERTa-v2-xxlarge  Baseline  71.8     52.0
—     Random Performance  —         50.0     50.0
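For reference, AUROC equals the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one (ties counting as half). A minimal sketch of this computation in pure Python; the labels and scores below are made up for illustration, not competition data:

```python
def auroc(labels, scores):
    """Compute AUROC via its rank-statistic (Mann-Whitney) interpretation.

    labels: list of 0/1 ground-truth labels (1 = positive class).
    scores: list of model scores, higher = more confident positive.
    """
    positives = [s for y, s in zip(labels, scores) if y == 1]
    negatives = [s for y, s in zip(labels, scores) if y == 0]
    pairs = len(positives) * len(negatives)
    # Count positive-negative pairs ranked correctly; ties score 0.5.
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in positives
        for n in negatives
    )
    return wins / pairs

# Illustrative example: 3 positives, 3 negatives.
labels = [1, 1, 0, 1, 0, 0]
scores = [0.9, 0.7, 0.6, 0.4, 0.3, 0.2]
print(round(auroc(labels, scores), 3))  # → 0.889
```

A classifier that scores examples at random averages 0.5 by this measure, which is why the leaderboard lists Random Performance at 50.0.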


Questions?

Please email us or submit an issue on GitHub.


Stay in the Loop

Like this competition? Follow @ml_safety on Twitter for updates, related resources, and more competitions!