# lm_eval/tasks/super_glue/record/t5-prompt.yaml

tag:
  - super-glue-t5-prompt
task: super_glue-record-t5-prompt
dataset_path: super_glue
dataset_name: record
validation_split: validation
output_type: generate_until
process_docs: !function t5_utils.process_docs
doc_to_text: !function t5_utils.doc_to_text
doc_to_target: "{{idx.passage|string}}+{{idx.query}}_{{answers}}"
generation_kwargs:
  until:
    - "</s>"
metric_list:
  - metric: !function t5_utils.em
    aggregation: !function t5_utils.squad_em_agg
    higher_is_better: true
  - metric: !function t5_utils.f1
    aggregation: !function t5_utils.squad_f1_agg
    higher_is_better: true
metadata:
  version: 0.0