tensorflow / tensor2tensor
Unit Size

The distribution of unit sizes, measured in lines of code.

Intro
  • Unit size measurements show the distribution of the sizes of units of code (methods, functions, etc.); a sketch of how such a measurement can be made follows this list.
  • Units are classified into five categories based on their size in lines of code: 1-10 (very small units), 11-20 (small units), 21-50 (medium size units), 51-100 (long units), and 101+ (very long units).
  • You should aim to keep units small (< 20 lines). Long units can become "bloaters": code that has grown to such gargantuan proportions that it is hard to work with.
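As a rough illustration of how per-unit line counts like these can be gathered for Python code, the sketch below parses a file with the standard ast module and buckets each function into the report's five categories. This is an approximation under assumptions: the tool behind this report may count lines differently (e.g., excluding blanks or comments), and the file path is just an example from this repository.

  import ast

  # Upper bounds for the report's five size categories (assumed inclusive;
  # the report's own counting rules may differ slightly).
  BUCKETS = [(10, "1-10"), (20, "11-20"), (50, "21-50"), (100, "51-100")]

  def bucket(loc):
      """Map a line count to the report's category label."""
      for upper, label in BUCKETS:
          if loc <= upper:
              return label
      return "101+"

  def unit_sizes(path):
      """Yield (name, lines_of_code) for each function/method in a Python file."""
      with open(path) as f:
          tree = ast.parse(f.read())
      for node in ast.walk(tree):
          if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
              # end_lineno requires Python 3.8+.
              yield node.name, node.end_lineno - node.lineno + 1

  if __name__ == "__main__":
      # Example file from this repository.
      for name, loc in unit_sizes("tensor2tensor/utils/beam_search.py"):
          print(f"{name}: {loc} lines ({bucket(loc)})")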
Unit Size Overall
  • There are 5,348 units containing 63,280 lines of code (85.9% of all code).
    • 44 very long units (6,229 lines of code)
    • 153 long units (10,311 lines of code)
    • 563 medium size units (17,415 lines of code)
    • 864 small units (12,571 lines of code)
    • 3,724 very small units (16,754 lines of code)
Distribution of lines of code in units by category: 101+: 9% | 51-100: 16% | 21-50: 27% | 11-20: 19% | 1-10: 26%
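Note that these percentages are shares of lines of code in units, not shares of unit counts: the 44 very long units are under 1% of all units but close to 10% of unit LOC. A minimal check using the counts reported above:

  # Recompute the category shares from the LOC counts reported above.
  loc_per_category = {
      "101+": 6_229, "51-100": 10_311, "21-50": 17_415,
      "11-20": 12_571, "1-10": 16_754,
  }
  total = sum(loc_per_category.values())  # 63,280 lines of code in units
  for label, loc in loc_per_category.items():
      print(f"{label}: {100 * loc / total:.1f}% of unit LOC")
  # "101+" prints ~9.8%, close to the 9% segment shown above
  # (the report appears to truncate to whole percents).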
Unit Size per Extension
Extension: 101+ | 51-100 | 21-50 | 11-20 | 1-10
py: 9% | 16% | 27% | 19% | 26%
cc: 43% | 0% | 24% | 28% | 3%
js: 8% | 20% | 28% | 19% | 23%
h: 0% | 0% | 0% | 0% | 100%
Unit Size per Logical Component
(primary logical decomposition)
Component: 101+ | 51-100 | 21-50 | 11-20 | 1-10
tensor2tensor/models: 12% | 17% | 28% | 18% | 23%
tensor2tensor/utils: 15% | 17% | 25% | 19% | 20%
tensor2tensor/layers: 11% | 20% | 37% | 16% | 13%
tensor2tensor/data_generators: 3% | 10% | 21% | 24% | 40%
tensor2tensor/insights: 15% | 18% | 23% | 17% | 24%
tensor2tensor/rl: 4% | 14% | 16% | 19% | 44%
tensor2tensor/visualization: 0% | 33% | 27% | 16% | 22%
tensor2tensor/envs: 0% | 7% | 25% | 30% | 36%
tensor2tensor/serving: 0% | 0% | 44% | 40% | 14%
tensor2tensor: 0% | 0% | 0% | 0% | 100%
tensor2tensor/metrics: 0% | 0% | 0% | 0% | 100%
Longest Units
Top 20 longest units
Unit | # lines | McCabe index | # params
def evolved_transformer_decoder() in tensor2tensor/models/evolved_transformer.py | 283 | 29 | 12
def multihead_attention() in tensor2tensor/layers/common_attention.py | 232 | 40 | 39
def discrete_bottleneck() in tensor2tensor/layers/discretization.py | 213 | 24 | 32
def body_sharded() in tensor2tensor/models/research/attention_lm_moe.py | 211 | 33 | 2
def ae_transformer_internal() in tensor2tensor/models/research/transformer_vae.py | 180 | 32 | 6
def _fast_decode() in tensor2tensor/models/transformer.py | 177 | 38 | 7
def input_fn() in tensor2tensor/utils/data_reader.py | 173 | 35 | 13
def grouped_attention_multihead() in tensor2tensor/layers/common_attention.py | 170 | 18 | 13
def body() in tensor2tensor/models/research/autoencoders.py | 165 | 28 | 2
def beam_search() in tensor2tensor/utils/beam_search.py | 158 | 17 | 11
def multihead_attention() in tensor2tensor/layers/vqa_layers.py | 157 | 27 | 29
def body_sharded() in tensor2tensor/models/research/aligned.py | 154 | 34 | 2
def _fast_decode_tpu() in tensor2tensor/models/transformer.py | 153 | 21 | 6
def _define_collect() in tensor2tensor/rl/ppo_learner.py | 152 | 12 | 8
def decode_from_file() in tensor2tensor/utils/decoding.py | 152 | 29 | 6
def masked_relative_local_attention_1d() in tensor2tensor/layers/common_attention.py | 146 | 21 | 9
def body() in tensor2tensor/models/research/transformer_symshard.py | 146 | 14 | 2
def _sample() in tensor2tensor/models/mtf_transformer.py | 145 | 16 | 3
def apply_nas_layers() in tensor2tensor/models/neural_architecture_search/nas_model.py | 143 | 23 | 24
def _layer_stack() in tensor2tensor/models/mtf_transformer.py | 142 | 15 | 10