lmms_eval/tasks/mmmu/mmmu_val.yaml
dataset_path: lmms-lab/MMMU
task: "mmmu_val"
test_split: validation
output_type: generate_until
doc_to_visual: !function utils.mmmu_doc_to_visual
doc_to_text: !function utils.mmmu_doc_to_text
doc_to_target: "answer"
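# A minimal sketch (an assumption, not the actual utils.py) of the shape the
# three hooks above take; lmms-eval calls each one per dataset row:
#   def mmmu_doc_to_visual(doc):
#       # collect the row's images (MMMU rows carry image_1 .. image_7 slots)
#       return [doc[f"image_{i}"] for i in range(1, 8) if doc.get(f"image_{i}")]
#   def mmmu_doc_to_text(doc):
#       # build the prompt string from the question and its options
#       return doc["question"]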
# The return value of process_results will be used by metrics
process_results: !function utils.mmmu_process_results
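# A hedged sketch of that contract (the field names inside the inner dict are
# illustrative, not taken from the real utils.mmmu_process_results):
#   def mmmu_process_results(doc, results):
#       pred = results[0].strip()  # results holds the model's generations
#       return {"mmmu_acc": {"answer": doc["answer"], "parsed_pred": pred}}
# The outer key ("mmmu_acc") must match a metric name in metric_list below.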
# Note that the metric name can be either a registered metric function (as is the case for GQA) or a key name returned by process_results
generation_kwargs:
  max_new_tokens: 16
  image_aspect_ratio: original
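# These kwargs are forwarded to the model's generate call; 16 new tokens is
# enough here because MMMU answers are short option letters or phrases.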
metric_list:
  - metric: mmmu_acc
    aggregation: !function utils.mmmu_aggregate_results
    higher_is_better: true
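# A sketch of the aggregation hook (assumed, not the real implementation): it
# receives every per-doc dict stored under "mmmu_acc" and reduces them to one
# number, here plain accuracy:
#   def mmmu_aggregate_results(results):
#       correct = sum(r["parsed_pred"] == r["answer"] for r in results)
#       return correct / len(results)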
metadata:
  - version: 0.0
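# Example invocation (a hypothetical command following the lm-eval-harness
# style CLI that lmms-eval exposes; check your installed version's flags):
#   python -m lmms_eval --model llava --tasks mmmu_val --batch_size 1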