amazon-research / transformer-gan
File Size

The distribution of file sizes (measured in lines of code).

Intro
  • File size measurements show the distribution of file sizes across the codebase.
  • Files are classified into five categories based on their size in lines of code: 1-100 (very small files), 101-200 (small files), 201-500 (medium size files), 501-1000 (long files), and 1001+ (very long files); a minimal sketch of this bucketing follows this list.
  • It is good practice to keep files small. Long files may become "bloaters", code that has grown to such gargantuan proportions that it is hard to work with.
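As a rough illustration of how such a report can be produced, the sketch below buckets files by raw line count and computes each bucket's share of the total. It is not part of the transformer-gan repository; the thresholds mirror the categories above, and plain physical line counts stand in for true lines of code.

from collections import Counter
from pathlib import Path

# Size categories used in this report: (lower bound, upper bound, label).
BUCKETS = [
    (1, 100, "1-100"),
    (101, 200, "101-200"),
    (201, 500, "201-500"),
    (501, 1000, "501-1000"),
    (1001, float("inf"), "1001+"),
]

def bucket_for(line_count):
    """Return the size category label for a file with the given line count."""
    for low, high, label in BUCKETS:
        if low <= line_count <= high:
            return label
    return "empty"

def size_distribution(root=".", pattern="**/*.py"):
    """Percentage of total lines contributed by each size category under root."""
    lines_per_bucket = Counter()
    for path in Path(root).glob(pattern):
        n = sum(1 for _ in path.open(encoding="utf-8", errors="ignore"))
        lines_per_bucket[bucket_for(n)] += n
    total = sum(lines_per_bucket.values()) or 1
    return {label: round(100 * count / total) for label, count in lines_per_bucket.items()}

if __name__ == "__main__":
    print(size_distribution())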
File Size Overall
  • There are 36 files with 5,167 lines of code.
    • 0 very long files (0 lines of code)
    • 2 long files (1,590 lines of code)
    • 4 medium size files (1,550 lines of code)
    • 9 small files (1,241 lines of code)
    • 21 very small files (786 lines of code)
Distribution of lines of code by file size category: 1001+: 0% | 501-1000: 30% | 201-500: 29% | 101-200: 24% | 1-100: 15%


File Size per Extension
Extension | 1001+ | 501-1000 | 201-500 | 101-200 | 1-100
py | 0% | 32% | 31% | 25% | 10%
yml | 0% | 0% | 0% | 0% | 100%
cfg | 0% | 0% | 0% | 0% | 100%
in | 0% | 0% | 0% | 0% | 100%
File Size per Logical Decomposition (primary)
Component | 1001+ | 501-1000 | 201-500 | 101-200 | 1-100
model | 0% | 34% | 55% | 8% | 2%
BERT | 0% | 89% | 0% | 0% | 10%
model/utils | 0% | 0% | 0% | 59% | 40%
data | 0% | 0% | 0% | 99% | <1%
metrics | 0% | 0% | 0% | 100% | 0%
model/training_config | 0% | 0% | 0% | 0% | 100%
ROOT | 0% | 0% | 0% | 0% | 100%
model/inference_config | 0% | 0% | 0% | 0% | 100%
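The per-extension and per-component breakdowns above apply the same bucketing after grouping files by suffix or by top-level directory. A minimal sketch of that grouping, reusing the hypothetical bucket_for helper from the earlier sketch and again using raw line counts as a stand-in for lines of code:

from collections import defaultdict
from pathlib import Path

# bucket_for(line_count) is the helper defined in the earlier sketch; it is
# assumed to be in scope here.

def grouped_distribution(root, key):
    """Per-group percentage of lines in each size bucket, grouped by key(path)."""
    per_group = defaultdict(lambda: defaultdict(int))
    for path in Path(root).glob("**/*"):
        if not path.is_file():
            continue
        n = sum(1 for _ in path.open(encoding="utf-8", errors="ignore"))
        per_group[key(path)][bucket_for(n)] += n
    return {
        group: {b: round(100 * c / (sum(buckets.values()) or 1)) for b, c in buckets.items()}
        for group, buckets in per_group.items()
    }

# Group by extension ("py", "yml", ...) or by first path component ("model", "data", ...).
by_extension = grouped_distribution(".", key=lambda p: p.suffix.lstrip(".") or p.name)
by_component = grouped_distribution(".", key=lambda p: p.parts[0] if len(p.parts) > 1 else "ROOT")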
Longest Files (Top 36)
File | # lines | # units
train.py (in model) | 969 | 16
main.py (in BERT) | 621 | 11
mem_transformer.py (in model) | 452 | 24
transformer_gan.py (in model) | 420 | 8
data_utils.py (in model) | 411 | 27
generate.py (in model) | 267 | 4
performance_event_repo.py (in data) | 183 | 18
bert_score.py (in metrics) | 160 | 5
classifier.py (in model/utils) | 159 | 11
music_encoder.py (in data) | 146 | 8
config_helper.py (in model/utils) | 135 | 6
discriminator.py (in model) | 121 | 11
lamb.py (in model) | 117 | 5
helpers.py (in model/utils) | 115 | 8
proj_adaptive_softmax.py (in model/utils) | 105 | 3
bleu.py (in model/utils) | 95 | 13
experiment_spanbert.yml (in model/training_config) | 75 | -
tokenization_midi.py (in BERT) | 74 | 4
data_parallel.py (in model/utils) | 73 | 6
adaptive_softmax.py (in model/utils) | 62 | 2
batch_generate.py (in model) | 59 | 1
log_uniform_sampler.py (in model/utils) | 54 | 3
setup.cfg (in root) | 49 | -
experiment_baseline.yml (in model/training_config) | 45 | -
experiment_cnn.yml (in model/training_config) | 44 | -
exp_utils.py (in model/utils) | 36 | 2
config_inference.py (in model/utils) | 35 | 1
inference_conditional.yml (in model/inference_config) | 23 | -
inference_unconditional.yml (in model/inference_config) | 23 | -
setup.py (in root) | 20 | -
utils.py (in root) | 14 | 1
__init__.py (in root) | 1 | -
__init__.py (in BERT) | 1 | -
__init__.py (in model) | 1 | -
__init__.py (in data) | 1 | -
MANIFEST.in (in root) | 1 | -
Files With Most Units (Top 20)
File | # lines | # units
data_utils.py (in model) | 411 | 27
mem_transformer.py (in model) | 452 | 24
performance_event_repo.py (in data) | 183 | 18
train.py (in model) | 969 | 16
bleu.py (in model/utils) | 95 | 13
main.py (in BERT) | 621 | 11
classifier.py (in model/utils) | 159 | 11
discriminator.py (in model) | 121 | 11
helpers.py (in model/utils) | 115 | 8
transformer_gan.py (in model) | 420 | 8
music_encoder.py (in data) | 146 | 8
data_parallel.py (in model/utils) | 73 | 6
config_helper.py (in model/utils) | 135 | 6
lamb.py (in model) | 117 | 5
bert_score.py (in metrics) | 160 | 5
tokenization_midi.py (in BERT) | 74 | 4
generate.py (in model) | 267 | 4
log_uniform_sampler.py (in model/utils) | 54 | 3
proj_adaptive_softmax.py (in model/utils) | 105 | 3
exp_utils.py (in model/utils) | 36 | 2
Files With Long Lines (Top 4)

There are 4 files with lines longer than 120 characters. In total, there are 15 long lines.

File | # lines | # units | # long lines
transformer_gan.py (in model) | 420 | 8 | 9
bert_score.py (in metrics) | 160 | 5 | 3
batch_generate.py (in model) | 59 | 1 | 2
generate.py (in model) | 267 | 4 | 1
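A quick local check for such lines can be written in a few lines of Python. The snippet below is only an illustration, not part of the repository; the 120-character threshold matches the one used in this report, and physical lines are counted as-is.

from pathlib import Path

MAX_LEN = 120  # threshold used in the report above

def long_lines(root=".", pattern="**/*.py"):
    """Yield (file, line number, length) for every physical line longer than MAX_LEN."""
    for path in Path(root).glob(pattern):
        for lineno, line in enumerate(path.open(encoding="utf-8", errors="ignore"), start=1):
            length = len(line.rstrip("\n"))
            if length > MAX_LEN:
                yield str(path), lineno, length

if __name__ == "__main__":
    hits = list(long_lines())
    print(f"{len(hits)} long lines in {len({f for f, _, _ in hits})} files")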