[New Task] Add Paloma benchmark #1928

Merged 7 commits on Jun 19, 2024

Changes from all commits
1 change: 1 addition & 0 deletions lm_eval/tasks/README.md
@@ -70,6 +70,7 @@
| okapi/mmlu_multilingual | Tasks that involve reading comprehension and information retrieval challenges. | Multiple (34 languages) |
| [okapi/truthfulqa_multilingual](okapi/truthfulqa_multilingual/README.md) | Tasks that involve reading comprehension and information retrieval challenges. | Multiple (31 languages) |
| [openbookqa](openbookqa/README.md) | Open-book question answering tasks that require external knowledge and reasoning. | English |
| [paloma](paloma/README.md) | Paloma is a comprehensive benchmark for evaluating open language models across a wide range of domains, from niche artist communities to mental health forums on Reddit. | English |
| [paws-x](paws-x/README.md) | Paraphrase Adversaries from Word Scrambling, focusing on cross-lingual capabilities. | English, French, Spanish, German, Chinese, Japanese, Korean |
| [pile](pile/README.md) | Open source language modelling data set that consists of 22 smaller, high-quality datasets. | English |
| [pile_10k](pile_10k/README.md) | The first 10K elements of The Pile, useful for debugging models trained on it. | English |
68 changes: 68 additions & 0 deletions lm_eval/tasks/paloma/README.md
@@ -0,0 +1,68 @@
# Paloma

### Paper
Title: Paloma: A Benchmark for Evaluating Language Model Fit

Abstract: https://arxiv.org/abs/2312.10523v1

Paloma is a comprehensive benchmark for evaluating open language models across a wide range of domains, from niche artist communities to mental health forums on Reddit. It assesses model performance across 585 distinct domains.

Homepage: https://allenai.org/olmo


### Note

If you are running the entire `paloma` benchmark (or just `paloma_dolma_100_programing_languages`) with a HuggingFace model, make sure to pass `logits_cache=False` to `--model_args`, for example:
```
lm_eval --model hf --model_args pretrained=EleutherAI/pythia-160m,logits_cache=False --tasks paloma
```
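
The same run can also be launched from Python. A minimal sketch, assuming `lm_eval` is importable and that its `simple_evaluate` entry point is used; the model and task arguments simply mirror the CLI example above:
```
import lm_eval

# Sketch: evaluate a HuggingFace model on the full `paloma` group with the
# logits cache disabled, mirroring the CLI invocation above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-160m,logits_cache=False",
    tasks=["paloma"],
)
print(results["results"])
```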


### Citation
```
@article{paloma,
title={{Paloma}: A Benchmark for Evaluating Language Model Fit},
author={Magnusson, Ian and Bhagia, Akshita and Hofmann, Valentin and Soldaini, Luca and Jha, Ananya Harsh and Tafjord, Oyvind and Schwenk, Dustin and Walsh, Evan Pete and Elazar, Yanai and Lo, Kyle and Groeneveld, Dirk and Beltagy, Iz and Hajishirzi, Hannaneh and Smith, Noah A. and Richardson, Kyle and Dodge, Jesse},
journal={technical report},
year={2023},
url={https://paloma.allen.ai/}
}
```

### Groups and Tasks

#### Groups

* `paloma`

#### Tasks

* `paloma_4chan_meta_sep`
* `paloma_c4_100_domains`
* `paloma_c4_en`
* `paloma_dolma_100_programing_languages`
* `paloma_dolma_100_subreddits`
* `paloma_dolma-v1_5`
* `paloma_falcon-refinedweb`
* `paloma_gab`
* `paloma_m2d2_s2orc_unsplit`
* `paloma_m2d2_wikipedia_unsplit`
* `paloma_manosphere_meta_sep`
* `paloma_mc4`
* `paloma_ptb`
* `paloma_redpajama`
* `paloma_twitterAAE_HELM_fixed`
* `paloma_wikitext_103`

### Checklist

For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?


If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
22 changes: 22 additions & 0 deletions lm_eval/tasks/paloma/paloma.yaml
@@ -0,0 +1,22 @@
group:
- paloma
dataset_path: allenai/paloma
output_type: loglikelihood_rolling
validation_split: val
test_split: test
doc_to_text: ""
doc_to_target: !function paloma_utils.doc_to_target
should_decontaminate: true
doc_to_decontamination_query: !function paloma_utils.doc_to_target
metric_list:
  - metric: word_perplexity
    aggregation: weighted_perplexity
    higher_is_better: false
  - metric: byte_perplexity
    aggregation: weighted_perplexity
    higher_is_better: false
  - metric: bits_per_byte
    aggregation: bits_per_byte
    higher_is_better: false
metadata:
  version: 1
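
For intuition, the three metrics in this config are corpus-level aggregations of the per-document rolling loglikelihoods. A hedged sketch of the arithmetic in Python (the function and variable names here are illustrative, not the harness's internal API):
```
import math

def aggregate_paloma_metrics(loglikelihoods, word_counts, byte_counts):
    # loglikelihoods: per-document log p(text) in nats;
    # word_counts / byte_counts: per-document weights for the weighted perplexities.
    total_ll = sum(loglikelihoods)
    word_perplexity = math.exp(-total_ll / sum(word_counts))
    byte_perplexity = math.exp(-total_ll / sum(byte_counts))
    bits_per_byte = -total_ll / (sum(byte_counts) * math.log(2))
    return word_perplexity, byte_perplexity, bits_per_byte
```
Lower is better for all three, which is why each metric sets `higher_is_better: false`.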
4 changes: 4 additions & 0 deletions lm_eval/tasks/paloma/paloma_4chan_meta_sep.yaml
@@ -0,0 +1,4 @@
include: paloma.yaml
task: paloma_4chan_meta_sep
task_alias: 4chan Corpus
dataset_name: 4chan_meta_sep
4 changes: 4 additions & 0 deletions lm_eval/tasks/paloma/paloma_c4_100_domains.yaml
@@ -0,0 +1,4 @@
include: paloma.yaml
task: paloma_c4_100_domains
task_alias: C4-100-domains
dataset_name: c4_100_domains
4 changes: 4 additions & 0 deletions lm_eval/tasks/paloma/paloma_c4_en.yaml
@@ -0,0 +1,4 @@
include: paloma.yaml
task: paloma_c4_en
task_alias: C4
dataset_name: c4_en
4 changes: 4 additions & 0 deletions lm_eval/tasks/paloma/paloma_dolma-v1_5.yaml
@@ -0,0 +1,4 @@
include: paloma.yaml
task: paloma_dolma-v1_5
task_alias: Dolma V1.5
dataset_name: dolma-v1_5
4 changes: 4 additions & 0 deletions lm_eval/tasks/paloma/paloma_dolma_100_programing_languages.yaml
@@ -0,0 +1,4 @@
include: _paloma_template
task: paloma_dolma_100_programing_languages
task_alias: 100 PLs
dataset_name: dolma_100_programing_languages
4 changes: 4 additions & 0 deletions lm_eval/tasks/paloma/paloma_dolma_100_subreddits.yaml
@@ -0,0 +1,4 @@
include: paloma.yaml
task: paloma_dolma_100_subreddits
task_alias: Dolma-100-subreddits
dataset_name: dolma_100_subreddits
4 changes: 4 additions & 0 deletions lm_eval/tasks/paloma/paloma_falcon-refinedweb.yaml
@@ -0,0 +1,4 @@
include: paloma.yaml
task: paloma_falcon-refinedweb
task_alias: Falcon Refinedweb
dataset_name: falcon-refinedweb
4 changes: 4 additions & 0 deletions lm_eval/tasks/paloma/paloma_gab.yaml
@@ -0,0 +1,4 @@
include: paloma.yaml
task: paloma_gab
task_alias: Gab Corpus
dataset_name: gab
4 changes: 4 additions & 0 deletions lm_eval/tasks/paloma/paloma_m2d2_s2orc_unsplit.yaml
@@ -0,0 +1,4 @@
include: paloma.yaml
task: paloma_m2d2_s2orc_unsplit
task_alias: M2D2 S2ORC
dataset_name: m2d2_s2orc_unsplit
4 changes: 4 additions & 0 deletions lm_eval/tasks/paloma/paloma_m2d2_wikipedia_unsplit.yaml
@@ -0,0 +1,4 @@
include: paloma.yaml
task: paloma_m2d2_wikipedia_unsplit
task_alias: M2D2 Wikipedia
dataset_name: m2d2_wikipedia_unsplit
4 changes: 4 additions & 0 deletions lm_eval/tasks/paloma/paloma_manosphere_meta_sep.yaml
@@ -0,0 +1,4 @@
include: paloma.yaml
task: paloma_manosphere_meta_sep
task_alias: Manosphere Corpus
dataset_name: manosphere_meta_sep
4 changes: 4 additions & 0 deletions lm_eval/tasks/paloma/paloma_mc4.yaml
@@ -0,0 +1,4 @@
include: paloma.yaml
task: paloma_mc4
task_alias: mC4-en
dataset_name: mc4
4 changes: 4 additions & 0 deletions lm_eval/tasks/paloma/paloma_ptb.yaml
@@ -0,0 +1,4 @@
include: paloma.yaml
task: paloma_ptb
task_alias: Penn Treebank
dataset_name: ptb
4 changes: 4 additions & 0 deletions lm_eval/tasks/paloma/paloma_redpajama.yaml
@@ -0,0 +1,4 @@
include: paloma.yaml
task: paloma_redpajama
task_alias: RedPajama
dataset_name: redpajama
4 changes: 4 additions & 0 deletions lm_eval/tasks/paloma/paloma_twitterAAE_HELM_fixed.yaml
@@ -0,0 +1,4 @@
include: paloma.yaml
task: paloma_twitterAAE_HELM_fixed
task_alias: Twitter AAE
dataset_name: twitterAAE_HELM_fixed
2 changes: 2 additions & 0 deletions lm_eval/tasks/paloma/paloma_utils.py
@@ -0,0 +1,2 @@
def doc_to_target(doc):
    return str(doc["text"])
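
A hedged usage illustration of the helper above (the example record is made up; real documents come from the `allenai/paloma` splits):
```
# For a loglikelihood_rolling task there is no prompt/continuation split:
# the entire raw text of each document is the scoring target.
example_doc = {"text": "Language models assign probabilities to text."}
assert doc_to_target(example_doc) == "Language models assign probabilities to text."
```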
4 changes: 4 additions & 0 deletions lm_eval/tasks/paloma/paloma_wikitext_103.yaml
@@ -0,0 +1,4 @@
include: paloma.yaml
task: paloma_wikitext_103
task_alias: Wikitext-103
dataset_name: wikitext_103