Add Portuguese templates for boolq, ag_news, imdb (EleutherAI#181)
* add boolq_pt template and christykoh as included user

* add templates.yaml for boolqpt

* pt yaml

* add ag_news template, translated to pt

* save eval runs to separate subfolders by target dataset

* change prompt answer choices to Portuguese

* add imdb_pt template

* fix pt answer choice

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: Reagan Lee <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
3 people committed Apr 12, 2023
1 parent ad6d619 commit 90282c4
Showing 5 changed files with 813 additions and 1 deletion.
2 changes: 1 addition & 1 deletion elk/promptsource/templates.py
@@ -24,7 +24,7 @@

 # These are users whose datasets should be included in the results returned by
 # filter_english_datasets (regardless of their metadata)
-INCLUDED_USERS = {"Zaid", "craffel", "lauritowal"}
+INCLUDED_USERS = {"Zaid", "craffel", "lauritowal", "christykoh"}


 def highlight(input):
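Adding "christykoh" to INCLUDED_USERS means datasets published under that Hugging Face namespace are kept by filter_english_datasets even when their metadata does not flag them as English. The snippet below is only a hedged sketch of the kind of namespace check this enables; it is not the actual filter_english_datasets implementation, and the dataset IDs used are hypothetical.

# Hedged illustration only: NOT the real filter_english_datasets from
# elk/promptsource/templates.py; the dataset IDs below are hypothetical.
INCLUDED_USERS = {"Zaid", "craffel", "lauritowal", "christykoh"}

def is_force_included(dataset_id: str) -> bool:
    """Return True when the Hub namespace of `dataset_id` is in INCLUDED_USERS."""
    namespace, _, _ = dataset_id.partition("/")
    return namespace in INCLUDED_USERS

print(is_force_included("christykoh/ag_news_pt"))  # True  (hypothetical dataset ID)
print(is_force_included("someuser/some_dataset"))  # False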
189 changes: 189 additions & 0 deletions elk/promptsource/templates/boolq_pt/templates.yaml
@@ -0,0 +1,189 @@
dataset: boolq_pt
templates:
  3e386463-1715-4578-9cba-07d11a0d3b61: !Template
    answer_choices: False ||| True
    id: 3e386463-1715-4578-9cba-07d11a0d3b61
    jinja: 'Passagem: {{passage}}

      Depois de ler esta passagem, tenho uma pergunta: {{question}}? Verdadeiro ou falso?

      |||

      {% if label != -1 %}

      {{answer_choices[label]}}

      {% endif %}'
    metadata: !TemplateMetadata
      choices_in_prompt: true
      languages:
      - pt
      metrics:
      - Accuracy
      original_task: true
    name: after_reading
    reference: ''
  492f0f88-4370-46cd-839b-1de37a55aeda: !Template
    answer_choices: No ||| Yes
    id: 492f0f88-4370-46cd-839b-1de37a55aeda
    jinja: "{{ passage }} \nPergunta: {{ question }}\nResposta: ||| \n{% if label !=\
      \ -1 %}\n{{ answer_choices[label] }}\n{% endif %}"
    metadata: !TemplateMetadata
      choices_in_prompt: false
      languages:
      - pt
      metrics:
      - Accuracy
      original_task: true
    name: GPT-3 Style
    reference: Same as Figure G29, p. 58 of the GPT-3 paper
  6cb6a026-c070-470a-b75d-bb8fdf424e35: !Template
    answer_choices: No ||| Yes
    id: 6cb6a026-c070-470a-b75d-bb8fdf424e35
    jinja: "{{ passage }}\n\nDepois de ler isso, eu me pergunto {{ question }}? |||\n{% if\
      \ label != -1 %}\n{{ answer_choices[label] }} \n{% endif %}"
    metadata: !TemplateMetadata
      choices_in_prompt: false
      languages:
      - pt
      metrics:
      - Accuracy
      original_task: true
    name: "I wonder\u2026"
    reference: ''
  7cf7acdf-e3a2-459f-a3e8-2e2d27dd6aa5: !Template
    answer_choices: No ||| Yes
    id: 7cf7acdf-e3a2-459f-a3e8-2e2d27dd6aa5
    jinja: 'Texto: {{passage}}

      Responda sim/não à seguinte pergunta: {{question}}? Sim ou não? |||

      {% if label != -1 %}

      {{answer_choices[label]}}

      {% endif %}'
    metadata: !TemplateMetadata
      choices_in_prompt: true
      languages:
      - pt
      metrics:
      - Accuracy
      original_task: true
    name: yes_no_question
    reference: ''
  7d21d974-0624-4d4f-9e8c-644e2d009cb5: !Template
    answer_choices: No ||| Yes
    id: 7d21d974-0624-4d4f-9e8c-644e2d009cb5
    jinja: "{{ passage }}\n\nDepois de ler isso, você poderia me dizer {{ question }}? \
      \ ||| {% if label != -1 %}{{ answer_choices[label] }}\n{% endif %}"
    metadata: !TemplateMetadata
      choices_in_prompt: false
      languages:
      - pt
      metrics:
      - Accuracy
      original_task: true
    name: "could you tell me\u2026"
    reference: ''
  922d3e87-ac58-4731-84d1-f0a40e47afb5: !Template
    answer_choices: No ||| Yes
    id: 922d3e87-ac58-4731-84d1-f0a40e47afb5
    jinja: "EXAME\n1. Responda sim ou não.\nDocumento: {{passage}}\nPergunta: {{question}}? \
      \ ||| \n{% if label != -1 %}\n{{answer_choices[label]}}\n{% endif %}"
    metadata: !TemplateMetadata
      choices_in_prompt: true
      languages:
      - pt
      metrics:
      - Accuracy
      original_task: true
    name: exam
    reference: ''
  9a1bf459-8047-437c-9def-f21e960429cc: !Template
    answer_choices: No ||| Yes
    id: 9a1bf459-8047-437c-9def-f21e960429cc
    jinja: 'Com base na seguinte passagem, {{ question }}? {{ passage }}

      |||

      {% if label != -1 %}

      {{ answer_choices[label] }}

      {% endif %}'
    metadata: !TemplateMetadata
      choices_in_prompt: false
      languages:
      - pt
      metrics:
      - Accuracy
      original_task: true
    name: based on the following passage
    reference: "Adapted from Perez et al. 2021 and Schick & Sch\xFCtz 2021."
  9f4c6b0a-437b-40c0-b467-db4b7218d38d: !Template
    answer_choices: False ||| True
    id: 9f4c6b0a-437b-40c0-b467-db4b7218d38d
    jinja: 'Exercício: leia o texto e responda à questão com Verdadeiro ou Falso.

      Texto: {{passage}}

      Pergunta: {{question}}? |||

      {% if label != -1 %}

      {{answer_choices[label]}}

      {% endif %}'
    metadata: !TemplateMetadata
      choices_in_prompt: true
      languages:
      - pt
      metrics:
      - Accuracy
      original_task: true
    name: exercise
    reference: ''
  b2b3cb60-d6e3-491c-a09a-8201e13e417e: !Template
    answer_choices: No ||| Yes
    id: b2b3cb60-d6e3-491c-a09a-8201e13e417e
    jinja: '{{ passage }}

      Com base na passagem anterior, {{ question }}? ||| {% if label != -1 %}{{ answer_choices[label]
      }}

      {% endif %}'
    metadata: !TemplateMetadata
      choices_in_prompt: false
      languages:
      - pt
      metrics:
      - Accuracy
      original_task: true
    name: based on the previous passage
    reference: "Adapted from Perez et al. 2021 and Schick & Sch\xFCtz 2021."
  eb78772c-e81e-4b8a-a77b-b75efd1c212a: !Template
    answer_choices: False ||| True
    id: eb78772c-e81e-4b8a-a77b-b75efd1c212a
    jinja: '{{passage}}

      P: {{question}}? Verdadeiro ou falso? |||

      {% if label != -1 %}

      {{answer_choices[label]}}

      {% endif %}'
    metadata: !TemplateMetadata
      choices_in_prompt: true
      languages:
      - pt
      metrics:
      - Accuracy
      original_task: true
    name: valid_binary
    reference: ''
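For orientation, here is a minimal sketch of how one of these boolq_pt prompts could be rendered. It assumes the promptsource API vendored in elk/promptsource matches upstream promptsource (DatasetTemplates lookup by template name, Template.apply splitting the rendered string on "|||"); the example record is invented for illustration and the real dataset's column names may differ.

# Minimal usage sketch; assumes the vendored promptsource API matches upstream.
from elk.promptsource.templates import DatasetTemplates

# Loads elk/promptsource/templates/boolq_pt/templates.yaml shown above.
boolq_pt = DatasetTemplates("boolq_pt")
template = boolq_pt["after_reading"]

# Hypothetical record; the real boolq_pt dataset's column names may differ.
example = {
    "passage": "O Brasil é o maior país da América do Sul.",
    "question": "o brasil fica na américa do sul",
    "label": 1,
}

prompt, target = template.apply(example)
print(prompt)  # Portuguese passage and question rendered by the jinja template
print(target)  # "True", i.e. answer_choices[1] for this template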