huggingface / lmms-eval
Duplication

Duplicates are places in the code where 6 or more consecutive lines are exactly the same.
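The detection idea behind this report can be sketched roughly as follows. This is an illustrative approximation only, not the actual analysis used to produce these numbers; the blank-line handling and the duplicate_blocks helper are assumptions made for the sketch.

from collections import defaultdict
from pathlib import Path

MIN_BLOCK = 6  # threshold used by this report: 6 or more identical lines

def duplicate_blocks(paths, min_block=MIN_BLOCK):
    """Return {block_text: [(file, start_line), ...]} for blocks seen in 2+ places.

    Minimal sketch: indexes every window of `min_block` consecutive non-empty
    lines; unlike a real duplication tool, overlapping windows are not merged
    into longer blocks.
    """
    index = defaultdict(list)
    for path in paths:
        stripped = [line.strip() for line in Path(path).read_text().splitlines()]
        kept = [(i + 1, line) for i, line in enumerate(stripped) if line]  # drop blank lines
        for start in range(len(kept) - min_block + 1):
            window = kept[start:start + min_block]
            key = "\n".join(line for _, line in window)
            index[key].append((str(path), window[0][0]))
    return {key: hits for key, hits in index.items() if len(hits) > 1}

# Usage sketch (hypothetical invocation):
# duplicates = duplicate_blocks(Path("lmms_eval").rglob("*.py"))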

Duplication Overall
system: 24% (2,507 lines)
Duplication per Extension
py: 23% (2,026 lines)
yaml: 36% (435 lines)
ipynb: 38% (46 lines)
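Assuming each percentage is computed as duplicated lines divided by that extension's total lines of code, the size of the code base behind these figures can be back-calculated as a rough sanity check:

# Back-of-envelope check, assuming: percentage = duplicated lines / total lines of code.
duplicated = {"py": 2026, "yaml": 435, "ipynb": 46}
ratio = {"py": 0.23, "yaml": 0.36, "ipynb": 0.38}
for ext, dup in duplicated.items():
    print(f"{ext}: ~{dup / ratio[ext]:,.0f} total lines of code")
# -> py: ~8,809   yaml: ~1,208   ipynb: ~121
# Roughly consistent with the ~10,400 lines implied by the system-wide figure
# above (2,507 duplicated lines at 24%), given percentage rounding.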
Duplication per Component (primary)
lmms_eval: 24% (2,461 lines)
tools: 38% (46 lines)
ROOT: 0% (0 lines)
Longest Duplicates
The 50 longest duplicates are listed below (273 duplicates were found in total).
Size | # | Folders | Lines (code %)
73 | 2 | lmms_eval/tasks/refcoco+, lmms_eval/tasks/refcoco | 6:135 (100%), 6:135 (100%)
73 | 2 | lmms_eval/tasks/refcoco+, lmms_eval/tasks/refcocog | 6:135 (100%), 6:135 (100%)
73 | 2 | lmms_eval/tasks/refcoco, lmms_eval/tasks/refcocog | 6:135 (100%), 6:135 (100%)
65 | 2 | lmms_eval/models, lmms_eval/models | 50:134 (43%), 47:131 (42%)
35 | 2 | lmms_eval/tasks/mmbench, lmms_eval/tasks/mmbench | 61:100 (37%), 60:99 (37%)
34 | 2 | lmms_eval/tasks/mmbench, lmms_eval/tasks/mmbench | 9:57 (36%), 8:56 (36%)
33 | 2 | lmms_eval/tasks/ferret, lmms_eval/tasks/llava-bench-coco | 30:69 (23%), 32:71 (24%)
32 | 2 | lmms_eval/models, lmms_eval/models | 84:121 (17%), 57:94 (15%)
32 | 2 | lmms_eval/tasks/coco_cap, lmms_eval/tasks/coco_cap | 10:45 (78%), 10:45 (78%)
30 | 2 | lmms_eval/tasks/scienceqa, lmms_eval/tasks/scienceqa | 4:35 (90%), 4:34 (90%)
30 | 2 | lmms_eval/tasks/llava-bench-coco, lmms_eval/tasks/llava-in-the-wild | 148:198 (22%), 145:195 (22%)
30 | 2 | lmms_eval/tasks/textcaps, lmms_eval/tasks/textcaps | 15:48 (68%), 13:46 (71%)
30 | 2 | lmms_eval/models, lmms_eval/models | 138:185 (20%), 135:182 (19%)
29 | 2 | lmms_eval/models, lmms_eval/models | 90:126 (15%), 67:103 (19%)
29 | 2 | lmms_eval/models, lmms_eval/models | 138:184 (19%), 242:288 (10%)
29 | 2 | lmms_eval/models, lmms_eval/models | 242:288 (10%), 135:181 (18%)
29 | 2 | lmms_eval/models, lmms_eval/models | 90:126 (15%), 64:100 (18%)
26 | 2 | lmms_eval/models, lmms_eval/models | 64:95 (16%), 63:94 (12%)
26 | 2 | lmms_eval/models, lmms_eval/models | 67:98 (17%), 63:94 (12%)
24 | 2 | lmms_eval/tasks/ok_vqa, lmms_eval/tasks/vizwiz_vqa | 21:48 (52%), 21:48 (52%)
24 | 2 | lmms_eval/tasks/llava-bench-coco, lmms_eval/tasks/llava-in-the-wild | 53:81 (17%), 51:79 (18%)
22 | 2 | lmms_eval/tasks/ferret, lmms_eval/tasks/llava-in-the-wild | 21:47 (15%), 21:47 (16%)
21 | 2 | lmms_eval/models, lmms_eval/models | 151:179 (7%), 103:131 (13%)
21 | 2 | lmms_eval/models, lmms_eval/models | 106:134 (14%), 151:179 (7%)
20 | 2 | lmms_eval/tasks/coco_cap, lmms_eval/tasks/refcoco | 64:90 (24%), 73:101 (27%)
20 | 2 | lmms_eval/tasks/coco_cap, lmms_eval/tasks/refcoco+ | 64:90 (24%), 73:101 (27%)
20 | 2 | lmms_eval/tasks/coco_cap, lmms_eval/tasks/refcocog | 64:90 (24%), 73:101 (27%)
19 | 2 | lmms_eval/tasks/mmbench, lmms_eval/tasks/mmbench | 8:37 (24%), 9:38 (20%)
19 | 2 | lmms_eval/tasks/mmbench, lmms_eval/tasks/mmbench | 8:37 (24%), 8:37 (20%)
18 | 2 | lmms_eval/models, lmms_eval/models | 62:82 (9%), 51:71 (11%)
18 | 2 | lmms_eval/models, lmms_eval/models | 80:104 (12%), 117:141 (6%)
18 | 2 | lmms_eval/models, lmms_eval/models | 117:141 (6%), 77:101 (11%)
18 | 2 | lmms_eval/models, lmms_eval/models | 62:82 (9%), 54:74 (12%)
18 | 2 | lmms_eval/tasks/ferret, lmms_eval/tasks/llava-in-the-wild | 82:101 (12%), 82:101 (13%)
17 | 2 | lmms_eval/models, lmms_eval/models | 86:106 (6%), 52:71 (11%)
17 | 2 | lmms_eval/models, lmms_eval/models | 103:126 (9%), 117:140 (6%)
17 | 2 | lmms_eval/models, lmms_eval/models | 63:82 (9%), 86:106 (6%)
17 | 2 | lmms_eval/models, lmms_eval/models | 55:74 (11%), 86:106 (6%)
16 | 2 | lmms_eval/tasks/cmmmu, lmms_eval/tasks/cmmmu | 331:350 (5%), 376:395 (5%)
16 | 2 | lmms_eval/tasks/ferret, lmms_eval/tasks/llava-in-the-wild | 51:69 (11%), 51:69 (12%)
16 | 2 | lmms_eval/tasks/llava-bench-coco, lmms_eval/tasks/llava-in-the-wild | 18:33 (42%), 19:34 (41%)
16 | 2 | lmms_eval/tasks/llava-bench-coco, lmms_eval/tasks/llava-in-the-wild | 32:49 (11%), 30:47 (12%)
15 | 2 | lmms_eval/models, lmms_eval/models | 153:177 (8%), 143:169 (10%)
15 | 2 | lmms_eval/models, lmms_eval/models | 153:177 (8%), 140:166 (9%)
15 | 2 | lmms_eval/models, lmms_eval/models | 99:117 (9%), 98:121 (7%)
15 | 2 | lmms_eval/tasks/ferret, lmms_eval/tasks/hallusion_bench | 32:47 (10%), 12:27 (6%)
15 | 2 | lmms_eval/tasks/hallusion_bench, lmms_eval/tasks/llava-bench-coco | 12:27 (6%), 34:49 (11%)
15 | 2 | lmms_eval/tasks/llava-bench-coco, lmms_eval/tasks/llava-in-the-wild | 130:145 (11%), 127:142 (11%)
15 | 2 | lmms_eval/models, lmms_eval/models | 153:177 (8%), 247:273 (5%)
15 | 2 | lmms_eval/tasks/hallusion_bench, lmms_eval/tasks/llava-in-the-wild | 12:27 (6%), 32:47 (11%)
Duplicated Units
The top 3 duplicated units are listed below (all 3 unit duplicates found).
Size | # | Folders | Lines (code %)
41 | 3 | lmms_eval/tasks/refcocog, lmms_eval/tasks/refcoco, lmms_eval/tasks/refcoco+ | 0:0, 0:0, 0:0
29 | 2 | lmms_eval/tasks/mmbench, lmms_eval/tasks/mmbench | 0:0, 0:0
23 | 2 | lmms_eval/tasks/vizwiz_vqa, lmms_eval/tasks/ok_vqa | 0:0, 0:0