# lm_eval/tasks/eus_trivia/eus_trivia.yaml
dataset_path: HiTZ/EusTrivia
dataset_name: default
task: eus_trivia
doc_to_text: !function utils.doc_to_text
doc_to_choice: !function utils.doc_to_choice
validation_split: null
test_split: test
fewshot_split: test
output_type: multiple_choice
doc_to_target: answer
metric_list:
  - metric: acc
    aggregation: mean
    higher_is_better: true
metadata:
  version: 0.0
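# The doc_to_text / doc_to_choice keys above are resolved via !function from a
# sibling utils.py in this task directory. A minimal sketch of what those
# helpers typically look like (field names are assumed for illustration; the
# real HiTZ/EusTrivia schema may differ):
#
#   def doc_to_text(doc):
#       # Render the question plus lettered candidate answers as the prompt.
#       lines = [doc["question"]]
#       lines += [f"{letter}: {text}"
#                 for letter, text in zip("ABCD", doc["candidates"])]
#       lines.append("Erantzuna:")
#       return "\n".join(lines)
#
#   def doc_to_choice(doc):
#       # One answer letter per candidate; questions may have 3 or 4 options.
#       return ["A", "B", "C", "D"][: len(doc["candidates"])]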