facebookresearch / Opacus-lab
Unit Size

The distribution of unit sizes (measured in lines of code).

Intro
  • Unit size measurements show the distribution of the sizes of units of code (methods, functions, etc.).
  • Units are classified into five categories based on their size in lines of code: 1-10 (very small units), 11-20 (small units), 21-50 (medium size units), 51-100 (long units), 101+ (very long units); see the sketch after this list.
  • Aim to keep units small (20 lines or fewer). Long units risk becoming "bloaters": code that has grown to such gargantuan proportions that it is hard to work with.
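The bucketing above is easy to express in code. A minimal sketch of the classification, restating the thresholds from the intro (the function name is assumed; this is illustrative, not the report generator's implementation):

```python
# Illustrative bucketing of a unit by its line count, mirroring the
# five size categories used throughout this report.
def size_bucket(lines_of_code: int) -> str:
    if lines_of_code <= 10:
        return "very small (1-10)"
    if lines_of_code <= 20:
        return "small (11-20)"
    if lines_of_code <= 50:
        return "medium size (21-50)"
    if lines_of_code <= 100:
        return "long (51-100)"
    return "very long (101+)"

# e.g. the 17-line finetunable_GPT2_params() listed below:
print(size_bucket(17))  # -> small (11-20)
```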
Unit Size Overall
  • There are 52 units, totaling 335 lines of code (54.3% of all code).
    • 0 very long units (0 lines of code)
    • 0 long units (0 lines of code)
    • 0 medium size units (0 lines of code)
    • 10 small units (138 lines of code)
    • 42 very small units (197 lines of code)
Distribution of lines of code in units by size category:
101+: 0% | 51-100: 0% | 21-50: 0% | 11-20: 41% | 1-10: 58%
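The percentages follow directly from the bucket totals above; a quick check (counts copied from the summary; the report appears to truncate rather than round):

```python
small, very_small = 138, 197          # lines of code in the 11-20 and 1-10 buckets
total = small + very_small            # 335 lines of code in units
print(int(100 * small / total))       # 41 -> reported 41%
print(int(100 * very_small / total))  # 58 -> reported 58%
```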
Unit Size per Extension
Extension | 101+ | 51-100 | 21-50 | 11-20 | 1-10
py | 0% | 0% | 0% | 41% | 58%
Unit Size per Logical Component
(primary logical decomposition)
Component | 101+ | 51-100 | 21-50 | 11-20 | 1-10
opacus_lab/models/GPT2 | 0% | 0% | 0% | 50% | 49%
opacus_lab/models/GPT2/model | 0% | 0% | 0% | 33% | 66%
Longest Units
Top 20 longest units
Unit | Location | # lines | McCabe index | # params
def finetunable_GPT2_params() | opacus_lab/models/GPT2/train.py | 17 | 7 | 2
def train() | opacus_lab/models/GPT2/train.py | 16 | 1 | 0
def refactor_attention() | opacus_lab/models/GPT2/refactor.py | 15 | 1 | 1
def __init__() | opacus_lab/models/GPT2/model/transformer.py | 15 | 1 | 0
def set_up_optim() | opacus_lab/models/GPT2/train.py | 15 | 1 | 0
def factorize_linear_layer() | opacus_lab/models/GPT2/model/transformer.py | 14 | 2 | 2
def forward() | opacus_lab/models/GPT2/model/masking.py | 13 | 1 | 3
def __init__() | opacus_lab/models/GPT2/model/attention.py | 11 | 3 | 3
def gelu_new() | opacus_lab/models/GPT2/model/feedforward.py | 11 | 1 | 2
def test() | opacus_lab/models/GPT2/train.py | 11 | 1 | 0
def refactor_feedforward() | opacus_lab/models/GPT2/refactor.py | 9 | 1 | 1
def refactor_block() | opacus_lab/models/GPT2/refactor.py | 9 | 2 | 1
def test_refactor() | opacus_lab/models/GPT2/refactor.py | 8 | 1 | 2
def forward() | opacus_lab/models/GPT2/model/masking.py | 8 | 1 | 3
def lrp_linear_layer() | opacus_lab/models/GPT2/model/transformer.py | 8 | 2 | 2
def refactor_head() | opacus_lab/models/GPT2/refactor.py | 7 | 1 | 1
def __init__() | opacus_lab/models/GPT2/model/attention.py | 7 | 1 | 4
def forward() | opacus_lab/models/GPT2/model/attention.py | 7 | 1 | 0
def __init__() | opacus_lab/models/GPT2/model/feedforward.py | 7 | 3 | 2
def __init__() | opacus_lab/models/GPT2/model/transformer.py | 7 | 1 | 0