# lmms_eval/tasks/olympiadbench/olympiadbench_test_cn.yaml

dataset_path: lmms-lab/OlympiadBench
dataset_kwargs:
  token: True
task: "olympiadbench_test_cn"
test_split: test_cn
output_type: generate_until
doc_to_visual: !function cn_utils.olympiadbench_doc_to_visual
doc_to_text: !function cn_utils.olympiadbench_doc_to_text
doc_to_target: "answer"
generation_kwargs:
  until:
    - "ASSISTANT:"
  max_new_tokens: 1024
  temperature: 0
  top_p: 0
  num_beams: 1
  do_sample: false
process_results: !function cn_utils.olympiadbench_process_results
metric_list:
  - metric: submission
    aggregation: !function cn_utils.olympiadbench_aggregate_results
    higher_is_better: true
  - metric: exact_match
    aggregation: mean
    higher_is_better: true