tensorflow / neural-structured-learning
Unit Size

The distribution of unit sizes, measured in lines of code.

Intro
  • Unit size measurements show the distribution of the size of units of code (methods, functions, etc.).
  • Units are classified into five categories based on their size in lines of code: 1-10 (very small units), 11-20 (small units), 21-50 (medium size units), 51-100 (long units), and 101+ (very long units); see the sketch after this list.
  • Aim to keep units small (under 20 lines of code). Long units tend to become "bloaters": code that has grown to such proportions that it is hard to work with.
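Below is a minimal sketch of how such a distribution could be reproduced for the Python sources in this repository. The size boundaries are the ones used in this report; the directory name at the bottom and the use of raw line spans (rather than lines of code with blank lines and comments stripped) are illustrative assumptions, so exact numbers will differ from the report's.

  import ast
  import pathlib
  from collections import Counter

  def category(lines: int) -> str:
      """Map a unit's length to the report's five size categories."""
      if lines <= 10:
          return "1-10"
      if lines <= 20:
          return "11-20"
      if lines <= 50:
          return "21-50"
      if lines <= 100:
          return "51-100"
      return "101+"

  def unit_size_distribution(root: str):
      """Count units and lines of code per size category under `root`."""
      units, loc = Counter(), Counter()
      for path in pathlib.Path(root).rglob("*.py"):
          tree = ast.parse(path.read_text(encoding="utf-8"))
          for node in ast.walk(tree):
              if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                  # Raw line span of the def; a production analyzer would
                  # exclude blank lines and comments before counting.
                  size = node.end_lineno - node.lineno + 1
                  units[category(size)] += 1
                  loc[category(size)] += size
      return units, loc

  units, loc = unit_size_distribution("neural_structured_learning")
  print(units)  # number of units per category
  print(loc)    # lines of code per category, as in the charts below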
Unit Size Overall
  • There are 882 units, containing 13,827 lines of code (71.3% of all code).
    • 12 very long units (1,837 lines of code)
    • 29 long units (1,940 lines of code)
    • 153 medium size units (4,674 lines of code)
    • 199 small units (3,024 lines of code)
    • 489 very small units (2,352 lines of code)
  Share of lines of code in units per size category (weighted by lines of code, not by unit count):
    101+: 13% | 51-100: 14% | 21-50: 33% | 11-20: 21% | 1-10: 17%
Unit Size per Extension
  Extension | 101+ | 51-100 | 21-50 | 11-20 | 1-10
  py        |  17% |    13% |   31% |   20% |  17%
  cc        |   0% |    17% |   42% |   26% |  12%
  h         |   0% |     0% |   26% |   15% |  57%
Unit Size per Logical Component
  Component (primary logical decomposition)  | 101+ | 51-100 | 21-50 | 11-20 | 1-10
  research/gam                               |  32% |    16% |   27% |   13% |   9%
  research/a2n                               |  15% |    22% |   33% |   16% |  11%
  research/carls                             |   2% |    13% |   39% |   26% |  17%
  research/kg_hyp_emb                        |  13% |     0% |   14% |   27% |  43%
  research/multi_representation_adversary    |   0% |    23% |   28% |   17% |  29%
  neural_structured_learning/estimator       |   0% |   100% |    0% |    0% |   0%
  neural_structured_learning/lib             |   0% |     8% |   43% |   28% |  18%
  research/gnn-survey                        |   0% |     0% |   47% |   32% |  19%
  neural_structured_learning/keras           |   0% |     0% |   35% |   38% |  25%
  neural_structured_learning/tools           |   0% |     0% |   39% |   40% |  19%
  neural_structured_learning/experimental    |   0% |     0% |   62% |   25% |  12%
  research/neural_clustering                 |   0% |     0% |   37% |   31% |  31%
  neural_structured_learning/configs         |   0% |     0% |   80% |    0% |  19%
Longest Units
Top 20 longest units
  Unit | File | # lines | McCabe index | # params
  def __init__() | research/gam/gam/trainer/trainer_classification_gcn.py | 234 | 18 | 38
  def evaluate() | research/a2n/train.py | 217 | 28 | 0
  def __init__() | research/gam/gam/trainer/trainer_classification.py | 204 | 32 | 38
  def train() | research/gam/gam/trainer/trainer_cotrain.py | 190 | 29 | 3
  def __init__() | research/gam/gam/trainer/trainer_agreement.py | 146 | 22 | 33
  def __init__() | research/gam/gam/trainer/trainer_cotrain.py | 137 | 5 | 66
  def train() | research/gam/gam/trainer/trainer_agreement.py | 133 | 10 | 4
  def main() | research/gam/gam/experiments/run_train_gam_graph.py | 133 | 25 | 1
  def main() | research/gam/gam/experiments/run_train_gam.py | 132 | 22 | 1
  def train() | research/gam/gam/trainer/trainer_classification.py | 107 | 25 | 4
  def embed_single_feature() | research/carls/models/caml/sparse_features.py | 102 | 28 | 9
  def main() | research/kg_hyp_emb/train.py | 102 | 19 | 1
  def train() | research/gam/gam/trainer/trainer_classification_gcn.py | 95 | 21 | 4
  def featurize_each_example() | research/a2n/dataset.py | 91 | 23 | 2
  std::string DebugString() | research/carls/base/input_context_helper.cc | 88 | 28 | 1
  def _get_agreement_reg_loss() | research/gam/gam/trainer/trainer_classification.py | 85 | 7 | 4
  def train() | research/a2n/train.py | 81 | 10 | 0
  def _get_agreement_reg_loss() | research/gam/gam/trainer/trainer_classification_gcn.py | 80 | 10 | 3
  def train() | research/multi_representation_adversary/multi_representation_adversary/trainer.py | 80 | 6 | 9
  def _get_encoding() | research/gam/gam/models/wide_resnet.py | 79 | 9 | 5