huggingface / lm-evaluation-harness
Duplication

Duplicates are places in the code where 6 or more consecutive lines are exactly the same.
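As an illustration of what this metric counts (a minimal sketch, not the analyzer's actual implementation), exact 6-line duplicates can be found by hashing every window of 6 consecutive lines and keeping the hashes that occur in more than one place; a real detector would additionally merge overlapping windows into maximal blocks:

```python
import hashlib
from collections import defaultdict
from pathlib import Path

MIN_BLOCK = 6  # the report's threshold: 6 or more identical lines

def line_windows(path: Path):
    """Yield (start_line, digest) for every MIN_BLOCK-line window in a file."""
    lines = [line.rstrip() for line in path.read_text(encoding="utf-8").splitlines()]
    for i in range(len(lines) - MIN_BLOCK + 1):
        window = "\n".join(lines[i : i + MIN_BLOCK])
        yield i + 1, hashlib.sha1(window.encode()).hexdigest()

def find_duplicates(paths):
    """Map each window digest to every (file, start_line) where it occurs."""
    seen = defaultdict(list)
    for path in paths:
        for start, digest in line_windows(path):
            seen[digest].append((path, start))
    # keep only windows that occur in more than one place
    return {d: locs for d, locs in seen.items() if len(locs) > 1}

if __name__ == "__main__":
    dupes = find_duplicates(Path("lm_eval/tasks").rglob("*.yaml"))
    print(f"{len(dupes)} duplicated {MIN_BLOCK}-line windows found")
```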

Duplication Overall
system: 29% (15,169 lines)
Duplication per Extension
py: 41% (9,027 lines)
yaml: 21% (6,142 lines)
Duplication per Component (primary)
lm_eval: 30% (15,155 lines)
scripts: 1% (14 lines)
templates: 0% (0 lines)
ROOT: 0% (0 lines)
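The breakdowns above are internally consistent: the per-extension and per-component figures both sum to the system-wide total, and the 29% rate implies an overall size of roughly 52,000 lines of main code. A quick cross-check (values copied from the figures above; the total is inferred, not reported):

```python
# Figures copied from the report above.
system_dup = 15_169                              # duplicated lines, system-wide (29%)
by_extension = {"py": 9_027, "yaml": 6_142}
by_component = {"lm_eval": 15_155, "scripts": 14, "templates": 0, "ROOT": 0}

assert sum(by_extension.values()) == system_dup  # 9,027 + 6,142 = 15,169
assert sum(by_component.values()) == system_dup  # 15,155 + 14   = 15,169

# A 29% duplication rate implies roughly this many lines overall:
print(round(system_dup / 0.29))                  # ~52,307
```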
Longest Duplicates
The 50 longest duplicates are listed below; 4,773 duplicates were found in total.
Each row gives the duplicate's size in lines, the number of occurrences, and the folder and line range of each occurrence.

| Size | # | First occurrence | Second occurrence |
|---:|:-:|:--|:--|
| 1630 | x 2 | lm_eval/tasks/ifeval, 25:1682 (100%) | lm_eval/tasks/leaderboard/ifeval, 25:1682 (100%) |
| 829 | x 2 | lm_eval/tasks/ifeval, 30:1612 (100%) | lm_eval/tasks/leaderboard/ifeval, 30:1612 (100%) |
| 155 | x 2 | lm_eval/tasks/bbh/cot_zeroshot, 9:224 (100%) | lm_eval/tasks/bbh/zeroshot, 9:224 (100%) |
| 110 | x 2 | lm_eval/tasks/ifeval, 20:168 (100%) | lm_eval/tasks/leaderboard/ifeval, 20:168 (100%) |
| 101 | x 2 | lm_eval/tasks/tinyBenchmarks, 9:176 (100%) | lm_eval/tasks/truthfulqa, 7:176 (100%) |
| 84 | x 2 | lm_eval/tasks/ifeval, 8:112 (79%) | lm_eval/tasks/leaderboard/ifeval, 7:111 (81%) |
| 75 | x 2 | lm_eval/filters, 75:184 (64%) | lm_eval/tasks/mmlu/flan_n_shot/generative, 8:112 (100%) |
| 75 | x 2 | lm_eval/filters, 75:184 (64%) | lm_eval/tasks/mmlu/flan_cot_zeroshot, 8:112 (100%) |
| 75 | x 2 | lm_eval/tasks/mmlu/flan_cot_zeroshot, 8:112 (100%) | lm_eval/tasks/mmlu/flan_n_shot/generative, 8:112 (100%) |
| 64 | x 2 | lm_eval/tasks/bbh/fewshot, 9:118 (91%) | lm_eval/tasks/leaderboard/bbh_mc, 6:115 (92%) |
| 60 | x 2 | lm_eval/tasks/bbh/cot_fewshot, 2:92 (96%) | lm_eval/tasks/bbh/cot_fewshot, 2:92 (96%) |
| 60 | x 2 | lm_eval/tasks/bbh/cot_fewshot, 2:92 (96%) | lm_eval/tasks/bbh/cot_fewshot, 2:92 (96%) |
| 60 | x 2 | lm_eval/tasks/mmlu, 13:74 (46%) | lm_eval/tasks/mmlusr, 13:74 (52%) |
| 60 | x 2 | lm_eval/tasks/bbh/cot_fewshot, 2:92 (96%) | lm_eval/tasks/bbh/cot_fewshot, 2:92 (96%) |
| 58 | x 2 | lm_eval/tasks/bbh/cot_fewshot, 2:93 (96%) | lm_eval/tasks/bbh/cot_fewshot, 2:93 (96%) |
| 58 | x 2 | lm_eval/tasks/bbh/cot_fewshot, 2:93 (96%) | lm_eval/tasks/bbh/cot_fewshot, 2:93 (96%) |
| 58 | x 2 | lm_eval/tasks/bbh/cot_fewshot, 2:93 (96%) | lm_eval/tasks/bbh/cot_fewshot, 2:93 (96%) |
| 53 | x 2 | lm_eval/tasks/bbh/fewshot, 10:103 (88%) | lm_eval/tasks/leaderboard/bbh_mc, 7:100 (89%) |
| 53 | x 2 | lm_eval/tasks/afrimgsm, 153:209 (27%) | lm_eval/tasks/mgsm, 162:218 (28%) |
| 47 | x 2 | lm_eval/tasks/bbh/fewshot, 9:88 (88%) | lm_eval/tasks/leaderboard/bbh_mc, 6:85 (90%) |
| 46 | x 2 | lm_eval/tasks/bbh/fewshot, 9:66 (88%) | lm_eval/tasks/leaderboard/bbh_mc, 6:63 (90%) |
| 42 | x 2 | lm_eval/tasks/bbh/fewshot, 9:70 (87%) | lm_eval/tasks/leaderboard/bbh_mc, 6:67 (89%) |
| 42 | x 2 | lm_eval/tasks/bbh/fewshot, 2:61 (95%) | lm_eval/tasks/bbh/fewshot, 2:61 (95%) |
| 42 | x 2 | lm_eval/tasks/bbh/fewshot, 2:61 (95%) | lm_eval/tasks/bbh/fewshot, 2:61 (95%) |
| 42 | x 2 | lm_eval/tasks/bbh/fewshot, 2:61 (95%) | lm_eval/tasks/bbh/fewshot, 2:61 (95%) |
| 41 | x 2 | lm_eval/tasks/squad_completion, 13:98 (93%) | lm_eval/tasks/swde, 13:98 (93%) |
| 41 | x 2 | lm_eval/tasks/fda, 13:98 (93%) | lm_eval/tasks/squad_completion, 13:98 (93%) |
| 41 | x 2 | lm_eval/tasks/fda, 13:98 (93%) | lm_eval/tasks/swde, 13:98 (93%) |
| 39 | x 2 | lm_eval/tasks/bbh/fewshot, 9:59 (86%) | lm_eval/tasks/leaderboard/bbh_mc, 6:56 (88%) |
| 38 | x 2 | lm_eval/tasks/bbh/fewshot, 2:54 (95%) | lm_eval/tasks/bbh/fewshot, 2:54 (95%) |
| 38 | x 2 | lm_eval/tasks/bbh/fewshot, 2:54 (95%) | lm_eval/tasks/bbh/fewshot, 2:54 (95%) |
| 38 | x 2 | lm_eval/tasks/bbh/fewshot, 2:54 (95%) | lm_eval/tasks/bbh/fewshot, 2:54 (95%) |
| 37 | x 2 | lm_eval/tasks/bbh/fewshot, 10:61 (84%) | lm_eval/tasks/leaderboard/bbh_mc, 7:58 (86%) |
| 37 | x 2 | lm_eval/tasks/bbh/fewshot, 10:61 (84%) | lm_eval/tasks/leaderboard/bbh_mc, 7:58 (86%) |
| 37 | x 2 | lm_eval/tasks/bbh/fewshot, 10:61 (84%) | lm_eval/tasks/leaderboard/bbh_mc, 7:58 (86%) |
| 37 | x 2 | lm_eval/tasks/leaderboard/bbh_mc, 7:58 (86%) | lm_eval/tasks/leaderboard/bbh_mc, 7:58 (86%) |
| 37 | x 2 | lm_eval/tasks/bbh/fewshot, 10:61 (84%) | lm_eval/tasks/leaderboard/bbh_mc, 7:58 (86%) |
| 37 | x 2 | lm_eval/tasks/bbh/fewshot, 10:61 (84%) | lm_eval/tasks/leaderboard/bbh_mc, 7:58 (86%) |
| 37 | x 2 | lm_eval/tasks/bbh/fewshot, 10:61 (84%) | lm_eval/tasks/leaderboard/bbh_mc, 7:58 (86%) |
| 37 | x 2 | lm_eval/tasks/leaderboard/bbh_mc, 7:58 (86%) | lm_eval/tasks/leaderboard/bbh_mc, 7:58 (86%) |
| 37 | x 2 | lm_eval/tasks/bbh/fewshot, 10:61 (84%) | lm_eval/tasks/leaderboard/bbh_mc, 7:58 (86%) |
| 37 | x 2 | lm_eval/tasks/leaderboard/bbh_mc, 7:58 (86%) | lm_eval/tasks/leaderboard/bbh_mc, 7:58 (86%) |
| 37 | x 2 | lm_eval/tasks/bbh/fewshot, 10:61 (84%) | lm_eval/tasks/leaderboard/bbh_mc, 7:58 (86%) |
| 37 | x 2 | lm_eval/tasks/afrimgsm, 72:118 (19%) | lm_eval/tasks/mgsm, 94:140 (20%) |
| 37 | x 2 | lm_eval/tasks/bbh/fewshot, 10:61 (84%) | lm_eval/tasks/leaderboard/bbh_mc, 7:58 (86%) |
| 35 | x 2 | lm_eval/tasks/bbh/cot_fewshot, 2:36 (97%) | lm_eval/tasks/bbh/cot_fewshot, 2:36 (97%) |
| 33 | x 2 | lm_eval/tasks/bbh/fewshot, 10:54 (82%) | lm_eval/tasks/leaderboard/bbh_mc, 7:51 (84%) |
| 33 | x 2 | lm_eval/tasks/bbh/fewshot, 10:54 (82%) | lm_eval/tasks/leaderboard/bbh_mc, 7:51 (84%) |
| 33 | x 2 | lm_eval/tasks/bbh/fewshot, 10:54 (82%) | lm_eval/tasks/leaderboard/bbh_mc, 7:51 (84%) |
| 33 | x 2 | lm_eval/tasks/leaderboard/bbh_mc, 7:51 (84%) | lm_eval/tasks/leaderboard/bbh_mc, 7:51 (84%) |
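The largest entries pair lm_eval/tasks/ifeval with lm_eval/tasks/leaderboard/ifeval at 100%, which suggests whole files were copied between the two folders. Before consolidating, a maintainer could confirm which files are byte-identical with a short check like the following (a hypothetical maintenance step, not something the report prescribes; the paths are taken from the table above):

```python
import filecmp
from pathlib import Path

a = Path("lm_eval/tasks/ifeval")
b = Path("lm_eval/tasks/leaderboard/ifeval")

# File names present in both folders, compared byte for byte.
common = sorted(p.name for p in a.iterdir() if p.is_file() and (b / p.name).is_file())
match, mismatch, errors = filecmp.cmpfiles(a, b, common, shallow=False)
print("identical:", match)
print("diverged: ", mismatch)
```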
Duplicated Units
All 12 duplicated units found are listed below.
Each row gives the unit's size in lines, the number of occurrences, and the folders involved; line ranges are not reported for unit-level duplicates.

| Size | # | Folders |
|---:|:-:|:--|
| 64 | x 3 | lm_eval/filters; lm_eval/tasks/mmlu/flan_n_shot/generative; lm_eval/tasks/mmlu/flan_cot_zeroshot |
| 62 | x 2 | lm_eval/tasks/tinyBenchmarks; lm_eval/tasks/truthfulqa |
| 45 | x 2 | lm_eval/tasks/leaderboard/ifeval; lm_eval/tasks/ifeval |
| 40 | x 2 | lm_eval/tasks/bbh/cot_zeroshot; lm_eval/tasks/bbh/zeroshot |
| 43 | x 2 | lm_eval/tasks/leaderboard/ifeval; lm_eval/tasks/ifeval |
| 18 | x 3 | lm_eval/tasks/gpqa/cot_zeroshot; lm_eval/tasks/gpqa/cot_n_shot; lm_eval/tasks/gpqa/generative |
| 17 | x 2 | lm_eval/tasks/leaderboard/gpqa; lm_eval/tasks/gpqa/zeroshot |
| 12 | x 3 | lm_eval/tasks/mmlusr/question_only; lm_eval/tasks/mmlusr/question_and_answer; lm_eval/tasks/mmlusr/answer_only |
| 18 | x 2 | lm_eval/tasks/afrimmlu/translate; lm_eval/tasks/afrimmlu |
| 17 | x 3 | lm_eval/tasks/squad_completion; lm_eval/tasks/swde; lm_eval/tasks/fda |
| 8 | x 3 | lm_eval/tasks/hellaswag; lm_eval/tasks/tinyBenchmarks; lm_eval/tasks/okapi/hellaswag_multilingual |
| 12 | x 2 | lm_eval/tasks/leaderboard/ifeval; lm_eval/tasks/ifeval |
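Unit-level duplication groups whole units of code (presumably functions or methods) that repeat; the report does not show the matching rule it uses. As a rough Python-side analogue (an illustrative sketch, not the tool's algorithm), one can hash normalized function bodies and group identical hashes:

```python
import ast
import hashlib
from collections import defaultdict
from pathlib import Path

def function_units(path: Path):
    """Yield (function_name, digest of normalized body) for each function in a file."""
    source = path.read_text(encoding="utf-8")
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            body = ast.get_source_segment(source, node) or ""
            # Strip indentation and blank lines so relocated copies still match.
            norm = "\n".join(l.strip() for l in body.splitlines() if l.strip())
            yield node.name, hashlib.sha1(norm.encode()).hexdigest()

units = defaultdict(list)
for path in Path("lm_eval").rglob("*.py"):
    try:
        for name, digest in function_units(path):
            units[digest].append((str(path), name))
    except (SyntaxError, UnicodeDecodeError):
        continue  # skip files that do not parse cleanly

for locations in units.values():
    if len(locations) > 1:
        print(locations)
```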