amazon-research / gluonmm
Unit Size

The distribution of unit sizes (measured in lines of code).

Intro
  • Unit size measurements show the distribution of the size of units of code (methods, functions, etc.).
  • Units are classified into five categories based on their size in lines of code: 1-10 (very small units), 11-20 (small units), 21-50 (medium-size units), 51-100 (long units), and 101+ (very long units).
  • Aim to keep units small (fewer than 20 lines). Long units can become "bloaters": code that has grown to such gargantuan proportions that it is hard to work with.
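The categorization above is easy to reproduce. The following is a minimal sketch (not the tool used to generate this report) that measures Python unit sizes with the standard-library `ast` module and bins them into the same five categories; the sample functions are hypothetical:

```python
import ast
import textwrap

# Size buckets used in this report: upper bound (inclusive) -> label.
BINS = [(10, "1-10"), (20, "11-20"), (50, "21-50"), (100, "51-100")]

def unit_sizes(source):
    """Yield (name, line_count) for every function/method in `source`."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # end_lineno is available on Python 3.8+
            yield node.name, node.end_lineno - node.lineno + 1

def category(lines):
    """Map a line count to its size bucket."""
    for upper, label in BINS:
        if lines <= upper:
            return label
    return "101+"

# Hypothetical sample input, for illustration only.
code = textwrap.dedent("""
    def tiny():
        return 1

    def bigger(x):
        a = x + 1
        b = a * 2
        c = b - 3
        return a + b + c
""")

for name, n in unit_sizes(code):
    print(name, n, category(n))
```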
Unit Size Overall
  • There are 219 units totaling 3,341 lines of code (51.3% of all code).
    • 0 very long units (0 lines of code)
    • 7 long units (428 lines of code)
    • 50 medium size units (1,493 lines of code)
    • 68 small units (951 lines of code)
    • 94 very small units (469 lines of code)
Share of unit lines per category: 101+: 0% | 51-100: 12% | 21-50: 44% | 11-20: 28% | 1-10: 14%
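The percentage row appears to be each bucket's truncated share of the 3,341 unit lines reported above. A quick check, using the line counts from this report:

```python
# Line counts per size bucket, copied from the "Unit Size Overall" figures.
lines_per_bucket = {
    "101+": 0,
    "51-100": 428,
    "21-50": 1493,
    "11-20": 951,
    "1-10": 469,
}

total = sum(lines_per_bucket.values())  # 3341, matching the reported total

# Integer (floor) division reproduces the report's truncated percentages.
shares = {bucket: lines * 100 // total for bucket, lines in lines_per_bucket.items()}
print(shares)  # matches the 0 / 12 / 44 / 28 / 14 row above
```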
Unit Size per Extension

Extension | 101+ | 51-100 | 21-50 | 11-20 | 1-10
py        | 0%   | 12%    | 44%   | 28%   | 14%
Unit Size per Logical Component (primary logical decomposition)

Component                    | 101+ | 51-100 | 21-50 | 11-20 | 1-10
src/transformers/utils       | 0%   | 40%    | 36%   | 15%   | 6%
src/transformers/data        | 0%   | 22%    | 46%   | 15%   | 15%
scripts/image_classification | 0%   | 78%    | 0%    | 21%   | 0%
src/transformers/models      | 0%   | 0%     | 48%   | 35%   | 16%
scripts/action_recognition   | 0%   | 0%     | 71%   | 28%   | 0%
src/transformers/pipelines   | 0%   | 0%     | 0%    | 63%   | 36%
Longest Units
Top 20 longest units
Unit                          | File                                                     | # lines | McCabe index | # params
def train_classification()   | src/transformers/utils/video_action_recognition.py       | 77      | 10           | 10
def main_worker()             | scripts/image_classification/train.py                    | 67      | 25           | 1
def train_classification()   | src/transformers/utils/image_classification.py           | 66      | 11           | 10
def build_transform()         | src/transformers/data/datasets/img_cls_datasets.py       | 62      | 12           | 2
def test_classification()     | src/transformers/utils/video_action_recognition.py       | 53      | 5            | 6
def validate_classification() | src/transformers/utils/image_classification.py           | 52      | 8            | 5
def __init__()                | src/transformers/data/datasets/kinetics_datasets.py      | 51      | 4            | 12
def __init__()                | src/transformers/models/vit/vision_transformer.py        | 47      | 17           | 20
def main_worker()             | scripts/action_recognition/train.py                      | 47      | 15           | 1
def forward()                 | src/transformers/models/vidtr/multihead_attention.py     | 46      | 2            | 7
def validate_classification() | src/transformers/utils/video_action_recognition.py       | 45      | 6            | 6
def __init__()                | src/transformers/models/swin/swin_transformer.py         | 45      | 7            | 26
def __init__()                | src/transformers/models/swin/swin_transformer.py         | 43      | 8            | 14
def forward()                 | src/transformers/models/vidtr/multihead_attention.py     | 43      | 2            | 7
def __init__()                | src/transformers/models/cait/cait.py                     | 43      | 5            | 27
def forward()                 | src/transformers/models/vidtr/multihead_attention.py     | 41      | 2            | 7
def __init__()                | src/transformers/models/vidtr/vidtr_compact.py           | 39      | 4            | 11
def __init__()                | src/transformers/models/vidtr/vidtr_split.py             | 37      | 5            | 11
def resize_clip()             | src/transformers/data/datasets/transforms/functional.py  | 37      | 17           | 3
def forward_pre()             | src/transformers/models/vidtr/vidtr_compact.py           | 36      | 4            | 6
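The "McCabe index" column is the cyclomatic complexity of each unit: roughly one plus the number of branch points. As a rough sketch (an assumption, not the exact tool behind this report), it can be approximated for Python code with the `ast` module; the sample function is hypothetical:

```python
import ast

# AST node types that introduce a decision point.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def mccabe(func_source):
    """Approximate the McCabe index: 1 + number of decision points."""
    tree = ast.parse(func_source)
    branches = sum(isinstance(n, BRANCH_NODES) for n in ast.walk(tree))
    # Each `and`/`or` in a boolean expression adds one decision per
    # extra operand (a BoolOp with k operands contributes k - 1).
    bool_ops = sum(
        len(n.values) - 1 for n in ast.walk(tree) if isinstance(n, ast.BoolOp)
    )
    return 1 + branches + bool_ops

# Hypothetical example: one `if` and one `for` give an index of 3.
src = """
def forward(x, mask=None):
    if mask is not None:
        x = x + mask
    for _ in range(3):
        x = x * 2
    return x
"""
print(mccabe(src))  # 3
```

A straight-line unit with no branches scores 1, which is why several `forward()` methods above stay at 2 despite their length: size and complexity are related but distinct signals.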