awslabs / syne-tune
Unit Size

The distribution of unit sizes, measured in lines of code.

Intro
  • Unit size measurements show the distribution of the size of units of code (methods, functions...).
  • Units are classified into five categories based on their size in lines of code: 1-10 (very small units), 11-20 (small units), 21-50 (medium size units), 51-100 (long units), 101+ (very long units).
  • You should aim to keep units small (at most 20 lines). Long units may become "bloaters": code that has grown to such gargantuan proportions that it is hard to work with.
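The size classification described above can be reproduced for Python code with the standard library's ast module. The sketch below is a minimal illustration, not the report's actual tooling; the function names unit_sizes and bucket are invented here:

```python
import ast

# The five size buckets used in this report (lines of code per unit).
# The last bucket is open-ended (101+).
BUCKETS = [(1, 10), (11, 20), (21, 50), (51, 100), (101, None)]

def unit_sizes(source: str) -> dict:
    """Map each function/method name in a Python source string to its line count."""
    tree = ast.parse(source)
    return {
        node.name: node.end_lineno - node.lineno + 1  # end_lineno needs Python 3.8+
        for node in ast.walk(tree)
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
    }

def bucket(size: int) -> str:
    """Return the report's bucket label (e.g. '11-20' or '101+') for a unit size."""
    for lo, hi in BUCKETS:
        if hi is None or size <= hi:
            return f"{lo}+" if hi is None else f"{lo}-{hi}"
    raise ValueError(size)

src = "def tiny():\n    return 42\n"
sizes = unit_sizes(src)        # {'tiny': 2}
label = bucket(sizes["tiny"])  # '1-10'
```

Note that decorators and blank lines inside a function affect the count differently across tools; this sketch simply uses the AST's first and last line of each definition.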
Unit Size Overall
  • There are 1,660 units containing 15,855 lines of code (80.2% of all code).
    • 4 very long units (659 lines of code)
    • 22 long units (1,532 lines of code)
    • 166 medium size units (5,089 lines of code)
    • 245 small units (3,519 lines of code)
    • 1,223 very small units (5,056 lines of code)
Distribution by lines of code: 101+: 4% | 51-100: 9% | 21-50: 32% | 11-20: 22% | 1-10: 31%
Unit Size per Extension
Extension   101+   51-100   21-50   11-20   1-10
py           4%      9%      32%     22%    31%
Unit Size per Logical Component
primary logical decomposition
Component                          101+   51-100   21-50   11-20   1-10
benchmarking/cli                    67%     15%      6%      6%     4%
benchmarking/nursery                42%      0%      0%     32%    24%
syne_tune/optimizer                 <1%     10%     35%     20%    33%
benchmarking/blackbox_repository     0%     19%     40%     18%    22%
benchmarking/training_scripts        0%     23%     38%     29%     9%
syne_tune                            0%      5%      7%     40%    46%
syne_tune/backend                    0%      0%     37%     18%    43%
benchmarking/utils                   0%      0%     63%     23%    13%
benchmarking/definitions             0%      0%     24%     70%     4%
benchmarking/benchmark_loop          0%      0%     67%     16%    15%
syne_tune/remote                     0%      0%     30%     44%    24%
ROOT                                 0%      0%      0%      0%   100%
tst/backend                          0%      0%      0%      0%   100%
Longest Units
Top 20 longest units
Unit (location): # lines, McCabe index, # params
  • def parse_args() in benchmarking/cli/launch_utils.py: 281 lines, McCabe 10, 1 param
  • def scheduler_factory() in benchmarking/cli/scheduler_factory.py: 149 lines, McCabe 46, 3 params
  • def objective() in benchmarking/nursery/lstm_wikitext2/lstm_wikitext2.py: 123 lines, McCabe 13, 1 param
  • def issm_likelihood_computations() in syne_tune/optimizer/schedulers/searchers/bayesopt/gpautograd/learncurve/issm.py: 106 lines, McCabe 17, 6 params
  • def on_trial_result() in syne_tune/optimizer/schedulers/hyperband.py: 100 lines, McCabe 35, 3 params
  • def make_searcher_and_scheduler() in benchmarking/cli/launch_utils.py: 97 lines, McCabe 29, 1 param
  • def convert_dataset() in benchmarking/blackbox_repository/conversion_scripts/scripts/nasbench201_import.py: 91 lines, McCabe 21, 2 params
  • def sample_posterior_joint() in syne_tune/optimizer/schedulers/searchers/bayesopt/gpautograd/learncurve/issm.py: 90 lines, McCabe 13, 10 params
  • def issm_likelihood_slow_computations() in syne_tune/optimizer/schedulers/searchers/bayesopt/gpautograd/learncurve/issm.py: 86 lines, McCabe 15, 6 params
  • def _create_common_objects() in syne_tune/optimizer/schedulers/searchers/gp_searcher_factory.py: 72 lines, McCabe 16, 2 params
  • def prepare_data_with_pending() in syne_tune/optimizer/schedulers/searchers/bayesopt/gpautograd/learncurve/issm.py: 71 lines, McCabe 13, 4 params
  • def objective() in benchmarking/training_scripts/resnet_cifar10/resnet_cifar10.py: 70 lines, McCabe 9, 1 param
  • def _prepare_data_internal() in syne_tune/optimizer/schedulers/searchers/bayesopt/gpautograd/learncurve/issm.py: 69 lines, McCabe 21, 9 params
  • def convert_dataset() in benchmarking/blackbox_repository/conversion_scripts/scripts/fcnet_import.py: 67 lines, McCabe 12, 2 params
  • def resource_kernel_likelihood_computations() in syne_tune/optimizer/schedulers/searchers/bayesopt/gpautograd/learncurve/freeze_thaw.py: 67 lines, McCabe 12, 4 params
  • def _common_defaults() in syne_tune/optimizer/schedulers/searchers/gp_searcher_factory.py: 66 lines, McCabe 5, 2 params
  • def get_batch_configs() in syne_tune/optimizer/schedulers/searchers/gp_fifo_searcher.py: 65 lines, McCabe 15, 4 params
  • def __init__() in syne_tune/optimizer/schedulers/fifo.py: 64 lines, McCabe 12, 3 params
  • def run() in syne_tune/tuner.py: 62 lines, McCabe 22, 1 param
  • def _draw_fantasy_values() in syne_tune/optimizer/schedulers/searchers/bayesopt/models/gpiss_model.py: 60 lines, McCabe 12, 2 params
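The McCabe index reported alongside each unit counts linearly independent paths through the code. A rough approximation can be sketched with Python's ast module; this is a simplified illustration only (the function name approx_mccabe is invented, and production complexity checkers also count boolean operators, comprehensions, and other constructs):

```python
import ast

# Branch points that each add one independent path to the base count of 1.
BRANCH_NODES = (ast.If, ast.For, ast.AsyncFor, ast.While,
                ast.ExceptHandler, ast.IfExp, ast.BoolOp)

def approx_mccabe(source: str) -> int:
    """Return 1 + the number of branch points found in the source."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

src = (
    "def f(x):\n"
    "    while x > 1:\n"
    "        if x % 2:\n"
    "            x = 3 * x + 1\n"
    "        else:\n"
    "            x //= 2\n"
    "    return x\n"
)
print(approx_mccabe(src))  # 3: base 1 + one while + one if
```

High values in the table, such as McCabe 46 for scheduler_factory(), indicate many branches packed into a single unit; such units are natural refactoring candidates independently of their line count.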