facebookresearch / Mask2Former
Unit Size

The distribution of unit sizes, measured in lines of code.

Intro
  • Unit size measurements show the distribution of the sizes of units of code (methods, functions, etc.).
  • Units are classified in five categories based on their size in lines of code: 1-10 (very small units), 11-20 (small units), 21-50 (medium size units), 51-100 (long units), 101+ (very long units), as illustrated in the sketch below.
  • You should aim to keep units small (at most 20 lines). Long units may become "bloaters": code that has grown to such gargantuan proportions that it is hard to work with.
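As a concrete illustration of the classification above, here is a minimal Python sketch; the bucket boundaries come from this report, while the function name classify_unit is hypothetical and not part of the analysis tool:

    def classify_unit(lines_of_code: int) -> str:
        """Map a unit's size in lines of code to the report's size categories."""
        if lines_of_code <= 10:
            return "very small (1-10)"
        if lines_of_code <= 20:
            return "small (11-20)"
        if lines_of_code <= 50:
            return "medium size (21-50)"
        if lines_of_code <= 100:
            return "long (51-100)"
        return "very long (101+)"

    # Example: the 62-line add_maskformer2_config() listed under "Longest Units"
    # below falls into the long category.
    print(classify_unit(62))  # -> "long (51-100)"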
Unit Size Overall
  • There are 360 units comprising 5,486 lines of code (39.8% of all code).
    • 0 very long units (0 lines of code)
    • 17 long units (1,143 lines of code)
    • 59 medium size units (1,860 lines of code)
    • 92 small units (1,343 lines of code)
    • 192 very small units (1,140 lines of code)
Distribution by category (101+ | 51-100 | 21-50 | 11-20 | 1-10): 0% | 20% | 33% | 24% | 20%
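The percentage bars in this report follow from the per-category line counts; a short sketch of that arithmetic, using the figures listed above (the report appears to truncate to whole percentages rather than round):

    # Lines of code per unit-size category, from the "Unit Size Overall" figures.
    category_loc = {"101+": 0, "51-100": 1143, "21-50": 1860,
                    "11-20": 1343, "1-10": 1140}

    total = sum(category_loc.values())  # 5,486 lines of code in units
    for category, loc in category_loc.items():
        # Truncating to whole percentages reproduces 0% | 20% | 33% | 24% | 20%.
        print(f"{category}: {100 * loc // total}%")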
Unit Size per Extension
Extension | 101+ | 51-100 | 21-50 | 11-20 | 1-10
py | 0% | 20% | 33% | 24% | 20%
Unit Size per Logical Component
(primary logical decomposition)
Component | 101+ | 51-100 | 21-50 | 11-20 | 1-10
mask2former_video/data_video | 0% | 29% | 41% | 18% | 10%
ROOT | 0% | 57% | 10% | 21% | 9%
mask2former/data | 0% | 26% | 32% | 29% | 12%
mask2former | 0% | 61% | 24% | 11% | 2%
tools | 0% | 57% | 29% | 5% | 7%
mask2former_video/modeling | 0% | 10% | 23% | 26% | 38%
mask2former/modeling | 0% | 0% | 33% | 30% | 35%
mask2former_video | 0% | 0% | 85% | 10% | 4%
mask2former/evaluation | 0% | 0% | 100% | 0% | 0%
demo_video | 0% | 0% | 23% | 55% | 21%
datasets | 0% | 0% | 64% | 0% | 35%
mask2former_video/utils | 0% | 0% | 75% | 0% | 25%
mask2former/utils | 0% | 0% | 0% | 56% | 43%
Longest Units
Top 20 longest units
Unit | Location | # lines | McCabe index | # params
def accumulate() | mask2former_video/data_video/datasets/ytvis_api/ytvoseval.py | 90 | 35 | 2
def main() | tools/evaluate_pq_for_semantic_segmentation.py | 83 | 13 | 0
def load_ytvis_json() | mask2former_video/data_video/datasets/ytvis.py | 81 | 28 | 4
def pq_compute_single_image() | tools/evaluate_pq_for_semantic_segmentation.py | 76 | 29 | 4
def __call__() | mask2former/data/dataset_mappers/mask_former_panoptic_dataset_mapper.py | 73 | 16 | 2
def build_optimizer() | train_net_video.py | 68 | 17 | 3
def build_optimizer() | train_net.py | 68 | 17 | 3
def summarize() | mask2former_video/data_video/datasets/ytvis_api/ytvoseval.py | 68 | 16 | 1
def evaluateVid() | mask2former_video/data_video/datasets/ytvis_api/ytvoseval.py | 66 | 46 | 5
def build_evaluator() | train_net.py | 65 | 24 | 4
def __call__() | mask2former_video/data_video/dataset_mapper.py | 65 | 19 | 2
def add_maskformer2_config() | mask2former/config.py | 62 | 6 | 1
def __call__() | mask2former/data/dataset_mappers/mask_former_instance_dataset_mapper.py | 62 | 16 | 2
def __call__() | mask2former/data/dataset_mappers/mask_former_semantic_dataset_mapper.py | 57 | 14 | 2
def from_config() | mask2former/maskformer_model.py | 53 | 6 | 2
def forward() | mask2former/maskformer_model.py | 53 | 15 | 2
def forward() | mask2former_video/modeling/transformer_decoder/video_mask2former_transformer_decoder.py | 53 | 5 | 4
def __init__() | mask2former/modeling/backbone/swin.py | 50 | 1 | 3
def forward() | mask2former/modeling/transformer_decoder/mask2former_transformer_decoder.py | 49 | 4 | 4
def _eval_predictions() | mask2former/evaluation/instance_evaluation.py | 47 | 13 | 3
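For readers who want to reproduce figures like these, the sketch below approximates the three metrics for Python functions using the standard ast module (Python 3.8+ for end_lineno). It is not the implementation used to generate this report, and the complexity value is a rough McCabe approximation (1 plus the number of branch points found in the function body):

    import ast

    # Node types counted as branch points for the rough McCabe estimate.
    BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                    ast.BoolOp, ast.IfExp, ast.comprehension)

    def unit_metrics(source: str):
        """Yield (name, lines, approx. McCabe index, positional params) per function."""
        tree = ast.parse(source)
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                lines = node.end_lineno - node.lineno + 1
                complexity = 1 + sum(isinstance(n, BRANCH_NODES) for n in ast.walk(node))
                params = len(node.args.args)
                yield node.name, lines, complexity, params

    # Usage (assumes the working directory is the repository root):
    with open("mask2former/config.py") as f:
        for name, lines, cc, params in unit_metrics(f.read()):
            print(f"{name}: {lines} lines, McCabe ~{cc}, {params} params")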