tensorflow / benchmarks
Unit Size

The distribution of unit sizes (measured in lines of code).

Intro
  • Unit size measurements show the distribution of the size of units of code (methods, functions, etc.).
  • Units are classified into five categories based on their size in lines of code: 1-10 (very small units), 11-20 (small units), 21-50 (medium size units), 51-100 (long units), and 101+ (very long units); a classification sketch follows this list.
  • You should aim to keep units small (under 20 lines of code). Long units tend to become "bloaters": code that has grown to such gargantuan proportions that it is hard to work with.
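As a concrete illustration, here is a minimal Python sketch (not the report's actual tooling) that maps a unit's line count onto the five categories used throughout this report:

# Thresholds taken from the legend used in this report: 1-10, 11-20, 21-50, 51-100, 101+.
SIZE_CATEGORIES = [
    (10, "very small (1-10)"),
    (20, "small (11-20)"),
    (50, "medium (21-50)"),
    (100, "long (51-100)"),
]

def classify_unit(lines_of_code: int) -> str:
    """Return the size category for a unit with the given number of lines of code."""
    for upper_bound, label in SIZE_CATEGORIES:
        if lines_of_code <= upper_bound:
            return label
    return "very long (101+)"

if __name__ == "__main__":
    for loc in (7, 18, 42, 88, 337):
        print(loc, "->", classify_unit(loc))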
Unit Size Overall
  • There are 641 units, containing 9,244 lines of code (84.9% of all code):
    • 4 very long units (822 lines of code)
    • 23 long units (1,648 lines of code)
    • 95 medium size units (2,909 lines of code)
    • 141 small units (2,091 lines of code)
    • 378 very small units (1,774 lines of code)
Distribution of lines of code in units by unit size category:
101+: 8% | 51-100: 17% | 21-50: 31% | 11-20: 22% | 1-10: 19%
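The percentages above follow directly from the per-category line counts listed under Unit Size Overall; a small sketch of that arithmetic (the report appears to truncate rather than round):

# Recomputing the distribution shown above from the per-category
# lines-of-code figures reported in this section.
loc_per_category = {
    "101+": 822,
    "51-100": 1648,
    "21-50": 2909,
    "11-20": 2091,
    "1-10": 1774,
}

total = sum(loc_per_category.values())  # 9,244 lines of code in units

for category, loc in loc_per_category.items():
    # Each percentage is the category's share of all lines of code in units,
    # truncated to whole percent (which reproduces 8 / 17 / 31 / 22 / 19).
    print(f"{category}: {loc * 100 // total}%")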
Unit Size per Extension
Extension | 101+ | 51-100 | 21-50 | 11-20 | 1-10
py | 8% | 17% | 31% | 22% | 19%
Unit Size per Logical Component
primary logical decomposition:
Component | 101+ | 51-100 | 21-50 | 11-20 | 1-10
scripts/tf_cnn_benchmarks | 10% | 19% | 28% | 23% | 18%
perfzero/lib/perfzero | 25% | 11% | 24% | 24% | 13%
scripts/tf_cnn_benchmarks/models/tf1_only | 0% | 25% | 40% | 16% | 17%
perfzero/lib | 0% | 26% | 34% | 19% | 19%
scripts/tf_cnn_benchmarks/models | 0% | 7% | 36% | 24% | 30%
scripts/tf_cnn_benchmarks/models/experimental | 0% | 0% | 37% | 38% | 23%
perfzero/dockertest | 0% | 0% | 41% | 22% | 35%
perfzero/scripts | 0% | 0% | 100% | 0% | 0%
scripts/tf_cnn_benchmarks/platforms/default | 0% | 0% | 0% | 0% | 100%
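A hedged sketch of how a per-component breakdown like the table above can be derived: group each unit by a path prefix treated as its logical component, then compute each size bucket's share of that component's lines of code. The record format and the fixed path depth are illustrative assumptions, not the report generator's actual inputs:

from collections import defaultdict
from pathlib import PurePosixPath

def component_of(file_path: str, depth: int = 2) -> str:
    """Use the first path segments as the unit's logical component (depth is illustrative)."""
    return "/".join(PurePosixPath(file_path).parts[:depth])

def size_bucket(loc: int) -> str:
    for bound, label in ((10, "1-10"), (20, "11-20"), (50, "21-50"), (100, "51-100")):
        if loc <= bound:
            return label
    return "101+"

def per_component_distribution(unit_records):
    """unit_records: iterable of (file_path, unit_lines_of_code) pairs gathered elsewhere."""
    totals = defaultdict(lambda: defaultdict(int))
    for file_path, loc in unit_records:
        totals[component_of(file_path)][size_bucket(loc)] += loc
    result = {}
    for component, buckets in totals.items():
        total = sum(buckets.values())
        # Share of the component's lines of code in each size bucket, truncated to whole percent.
        result[component] = {bucket: 100 * loc // total for bucket, loc in buckets.items()}
    return result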
Longest Units
Top 20 longest units
Unit | Location | # lines | McCabe index | # params
def __init__() | scripts/tf_cnn_benchmarks/benchmark_cnn.py | 337 | 92 | 4
def benchmark_with_session() | scripts/tf_cnn_benchmarks/benchmark_cnn.py | 195 | 59 | 9
def add_benchmark_parser_arguments() | perfzero/lib/perfzero/perfzero_config.py | 178 | 1 | 1
def _run_internal() | perfzero/lib/perfzero/benchmark_method_runner.py | 112 | 12 | 7
def benchmark_one_step() | scripts/tf_cnn_benchmarks/benchmark_cnn.py | 98 | 36 | 15
def parse_arguments() | perfzero/lib/cloud_manager.py | 98 | 5 | 2
def conv() | scripts/tf_cnn_benchmarks/convnet_builder.py | 91 | 20 | 14
def _benchmark_graph() | scripts/tf_cnn_benchmarks/benchmark_cnn.py | 90 | 30 | 3
def expanded_conv() | scripts/tf_cnn_benchmarks/models/tf1_only/mobilenet_conv_blocks.py | 90 | 19 | 3
def ssd_decode_and_crop() | scripts/tf_cnn_benchmarks/ssd_dataloader.py | 88 | 5 | 4
def _build_nasnet_base() | scripts/tf_cnn_benchmarks/models/tf1_only/nasnet_model.py | 82 | 24 | 8
def postprocess() | scripts/tf_cnn_benchmarks/models/tf1_only/ssd_model.py | 77 | 18 | 2
def add_setup_parser_arguments() | perfzero/lib/perfzero/perfzero_config.py | 76 | 5 | 1
def _preprocess_graph() | scripts/tf_cnn_benchmarks/benchmark_cnn.py | 75 | 6 | 3
def train_image() | scripts/tf_cnn_benchmarks/preprocessing.py | 75 | 10 | 11
def _eval_once() | scripts/tf_cnn_benchmarks/benchmark_cnn.py | 69 | 18 | 7
def add_inference() | scripts/tf_cnn_benchmarks/models/inception_model.py | 66 | 3 | 2
def mobilenet_base() | scripts/tf_cnn_benchmarks/models/tf1_only/mobilenet.py | 66 | 12 | 8
def add_inference() | scripts/tf_cnn_benchmarks/models/tf1_only/ssd_model.py | 65 | 4 | 2
def create_config_proto() | scripts/tf_cnn_benchmarks/benchmark_cnn.py | 60 | 18 | 1
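For Python code like this repository's, the figures in a table like the one above can be approximated with the standard library's ast module. This is a rough sketch under simplified assumptions (the branch-node list only approximates the full McCabe definition, and only positional parameters are counted), not the tooling behind this report:

import ast
import sys

# Constructs counted as branch points for a rough cyclomatic-complexity estimate.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.And, ast.Or,
                ast.ExceptHandler, ast.comprehension, ast.IfExp)

def report_units(source_path: str) -> None:
    """Print unit name, location, line count, rough McCabe index, and parameter count."""
    with open(source_path, encoding="utf-8") as f:
        tree = ast.parse(f.read())
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # end_lineno requires Python 3.8+.
            lines = node.end_lineno - node.lineno + 1
            # Rough cyclomatic complexity: 1 plus the number of branching constructs.
            mccabe = 1 + sum(isinstance(n, BRANCH_NODES) for n in ast.walk(node))
            params = len(node.args.args)  # positional parameters only
            print(f"def {node.name}() | {source_path} | {lines} | {mccabe} | {params}")

if __name__ == "__main__":
    report_units(sys.argv[1])  # e.g. scripts/tf_cnn_benchmarks/benchmark_cnn.py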