# Measuring Massive Multitask Language Understanding

This is the repository for Measuring Massive Multitask Language Understanding by Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt.

This repository contains OpenAI API evaluation code, and the test is available for download here.
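The evaluation follows the standard few-shot multiple-choice setup: each test question is preceded by a handful of solved examples from the same subject, and the model is asked to pick one of four lettered answers. The sketch below illustrates that prompt format and scoring loop; it is a minimal illustration, not the evaluation code shipped in this repository, and `query_model` is a hypothetical stand-in for a call to a completion API.

```python
# Minimal sketch of a few-shot multiple-choice evaluation loop.
# The prompt layout (question, lettered choices, "Answer:") mirrors the common
# MMLU setup; `query_model` is a hypothetical callable, NOT this repo's API.

CHOICE_LETTERS = ["A", "B", "C", "D"]

def format_example(question, choices, answer=None):
    """Render one question as a prompt block; include the answer for few-shot demos."""
    lines = [question]
    for letter, choice in zip(CHOICE_LETTERS, choices):
        lines.append(f"{letter}. {choice}")
    lines.append("Answer:" + (f" {answer}" if answer is not None else ""))
    return "\n".join(lines)

def build_prompt(subject, dev_examples, test_question, test_choices):
    """Prepend k few-shot demonstrations (from the dev split) to the test question."""
    header = f"The following are multiple choice questions (with answers) about {subject}.\n\n"
    demos = "\n\n".join(format_example(q, c, a) for q, c, a in dev_examples)
    return header + demos + "\n\n" + format_example(test_question, test_choices)

def evaluate(subject, dev_examples, test_examples, query_model):
    """Return accuracy of `query_model` (prompt -> predicted letter) on one subject."""
    correct = 0
    for question, choices, answer in test_examples:
        prompt = build_prompt(subject, dev_examples, question, choices)
        prediction = query_model(prompt)  # expected to return "A", "B", "C", or "D"
        correct += (prediction.strip() == answer)
    return correct / len(test_examples)
```

In practice, comparing the model's log-probabilities over the four answer letters tends to be more robust than parsing generated text, and API-based evaluations of this benchmark are commonly scored that way.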

## Test Leaderboard

If you want to have your model added to the leaderboard, please reach out to us or submit a pull request.

Results of the test (accuracy, %):

| Model | Authors | Humanities | Social Science | STEM | Other | Average |
|---|---|---|---|---|---|---|
| GPT-3 | Brown et al., 2020 | 40.8 | 50.4 | 36.7 | 48.8 | 43.9 |
| UnifiedQA | Khashabi et al., 2020 | 38.0 | 41.5 | 32.2 | 42.1 | 38.5 |
| Random Baseline | N/A | 25.0 | 25.0 | 25.0 | 25.0 | 25.0 |

## Citation

If you find this useful in your research, please consider citing the test and also the ETHICS dataset it draws from:

@article{hendryckstest2020,
  title={Measuring Massive Multitask Language Understanding},
  author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
  journal={arXiv preprint arXiv:2009.03300},
  year={2020}
}

@article{hendrycksethics2020,
  title={Aligning AI With Shared Human Values},
  author={Dan Hendrycks and Collin Burns and Steven Basart and Andrew Critch and Jerry Li and Dawn Song and Jacob Steinhardt},
  journal={arXiv preprint arXiv:2008.02275},
  year={2020}
}
