amazon-research / transformer-gan
Duplication

Places in code with 6 or more lines that are exactly the same.

Intro
  • For duplication, we look at places in the code where 6 or more consecutive lines are exactly the same (see the sketch after this list).
  • Before duplication is calculated, the code is cleaned to remove empty lines, comments, and frequently duplicated constructs such as imports.
  • You should aim to keep duplication as low as possible (below 5%), since a high level of duplication can lead to maintenance difficulties, poor factoring, and logical contradictions.
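As a rough illustration of the rules above, here is a minimal sketch of how such duplicate detection can work: clean each file, then compare every window of 6 consecutive cleaned lines by hash. This is only an approximation of the report's methodology, not the analysis tool's actual implementation; it handles Python-style comments and imports only, and the helper names are made up for this sketch.

    # Simplified sketch of duplicate detection, not the analysis tool's real code:
    # clean each file, then compare every window of 6 consecutive cleaned lines.
    import hashlib
    import re

    WINDOW = 6  # minimum duplicate size counted by the report

    def clean(path):
        """Return cleaned lines: no blank lines, comments, or imports (Python-style)."""
        kept = []
        for raw in open(path, encoding="utf-8"):
            line = raw.strip()
            if not line or line.startswith("#") or re.match(r"(import|from)\b", line):
                continue
            kept.append(line)
        return kept

    def window_hashes(lines):
        """Map the hash of each 6-line window to the indices where it starts."""
        table = {}
        for i in range(len(lines) - WINDOW + 1):
            digest = hashlib.sha1("\n".join(lines[i:i + WINDOW]).encode()).hexdigest()
            table.setdefault(digest, []).append(i)
        return table

    def duplicate_windows(path_a, path_b):
        """Yield (start_a, start_b) pairs of identical 6-line windows."""
        hashes_a = window_hashes(clean(path_a))
        for digest, starts_b in window_hashes(clean(path_b)).items():
            for start_a in hashes_a.get(digest, []):
                for start_b in starts_b:
                    if path_a != path_b or start_a != start_b:
                        yield start_a, start_b

    # Example: the report flags model/lamb.py as duplicating parts of itself.
    # for a, b in duplicate_windows("model/lamb.py", "model/lamb.py"):
    #     print(f"identical 6-line window at cleaned-line offsets {a} and {b}")

A real analyzer would additionally merge overlapping matching windows into maximal blocks, which is how duplicates larger than 6 lines (such as the 19-line one listed further down) are reported.
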
Duplication Overall
  • 9% duplication:
    • 5,002 lines of cleaned code (without empty lines, comments, and frequently duplicated constructs such as imports)
    • 477 duplicated lines
  • 30 duplicates
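For reference, the headline percentage is simply the number of duplicated lines divided by the number of cleaned lines; a quick check of the figures above:

    # Recomputing the headline figure from the numbers above.
    duplicated_lines = 477
    cleaned_lines = 5002
    print(f"{duplicated_lines / cleaned_lines:.1%}")  # 9.5%, which the report shows as 9%
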
Duplication per Extension
  • py: 8% (396 lines)
  • yml: 38% (81 lines)
Duplication per Component (primary)
  • model: 9% (264 lines)
  • model/utils: 6% (56 lines)
  • model/training_config: 29% (49 lines)
  • model/inference_config: 69% (32 lines)
  • BERT: 4% (29 lines)
  • metrics: 20% (29 lines)
  • data: 5% (18 lines)
  • ROOT: 0% (0 lines)

Duplication Between Components (50+ lines)

  • BERT and metrics: 58 duplicated lines shared between these two components

Longest Duplicates
The 20 longest duplicates (out of 30 in total) are listed below.
Each entry shows the duplicate's size in cleaned lines x the number of instances, followed by each location as file, line range, and the share of that file's cleaned lines covered by the duplicate.
  • 19 x 2: model/lamb.py 52:83 (16%) and model/lamb.py 175:206 (16%)
  • 16 x 2: model/training_config/experiment_baseline.yml 3:18 (35%) and model/training_config/experiment_spanbert.yml 3:18 (21%)
  • 16 x 2: model/training_config/experiment_cnn.yml 4:19 (36%) and model/training_config/experiment_spanbert.yml 4:19 (21%)
  • 15 x 2: model/training_config/experiment_baseline.yml 4:18 (33%) and model/training_config/experiment_cnn.yml 4:18 (34%)
  • 15 x 2: BERT/main.py 499:515 (2%) and metrics/bert_score.py 185:202 (10%)
  • 14 x 2: model/lamb.py 20:49 (12%) and model/lamb.py 143:172 (12%)
  • 14 x 2: model/train.py 1221:1234 (1%) and model/train.py 1239:1252 (1%)
  • 13 x 2: model/utils/bleu.py 45:61 (14%) and model/utils/classifier.py 19:35 (8%)
  • 13 x 2: model/mem_transformer.py 580:597 (2%) and model/mem_transformer.py 630:646 (2%)
  • 12 x 2: model/data_utils.py 211:222 (2%) and model/data_utils.py 308:321 (2%)
  • 10 x 2: model/inference_config/inference_conditional.yml 1:10 (43%) and model/inference_config/inference_unconditional.yml 1:10 (43%)
  • 10 x 2: model/data_utils.py 224:234 (2%) and model/data_utils.py 324:334 (2%)
  • 9 x 2: model/data_utils.py 284:293 (2%) and model/data_utils.py 356:365 (2%)
  • 8 x 2: model/train.py 909:916 (<1%) and model/train.py 1080:1087 (<1%)
  • 8 x 2: BERT/main.py 50:61 (1%) and metrics/bert_score.py 49:60 (5%)
  • 8 x 2: model/train.py 1239:1246 (<1%) and model/train.py 1259:1266 (<1%)
  • 8 x 2: model/train.py 1221:1228 (<1%) and model/train.py 1259:1266 (<1%)
  • 8 x 2: model/utils/adaptive_softmax.py 56:66 (13%) and model/utils/proj_adaptive_softmax.py 117:127 (7%)
  • 7 x 2: model/data_utils.py 217:223 (1%) and model/data_utils.py 375:381 (1%)
  • 7 x 2: model/utils/classifier.py 153:161 (4%) and model/utils/classifier.py 190:198 (4%)