# lmms_eval/tasks/mmbench/mmbench_en_dev.yaml

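# Task config for the MMBench English dev split. Only split-specific
# settings live here; shared fields (dataset, prompt formatting, and the
# like) are presumably supplied by the included template that sits in the
# same directory.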
task: "mmbench_en_dev" test_split: dev include: _default_template_mmbench_en_yaml metric_list: - metric: gpt_eval_score aggregation: !function en_utils.mmbench_aggregate_dev_results_eval higher_is_better: true - metric: submission aggregation: !function en_utils.mmbench_aggregate_dev_results_submission higher_is_better: true