facebookresearch / xformers
Unit Size

The distribution of unit sizes (measured in lines of code).

Intro
  • Unit size measurements show the distribution of the size of code units (methods, functions, etc.).
  • Units are classified in four categories based on their size in lines of code: 1-20 (small units), 21-50 (medium size units), 51-100 (long units), 101+ (very long units). In the charts below, the small range is further split into very small (1-10) and small (11-20) units, as sketched after this list.
  • You should aim to keep units small (under 20 lines). Long units can become "bloaters": code that has grown to such gargantuan proportions that it is hard to work with.
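For illustration, a minimal sketch of this bucketing, assuming nothing more than a per-unit line count is available (the function name and wording below are ours, not part of any analysis tool):

    def classify_unit_size(lines_of_code: int) -> str:
        """Bucket a code unit by its size, using the five ranges used in this report."""
        if lines_of_code <= 10:
            return "very small (1-10)"
        if lines_of_code <= 20:
            return "small (11-20)"
        if lines_of_code <= 50:
            return "medium (21-50)"
        if lines_of_code <= 100:
            return "long (51-100)"
        return "very long (101+)"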
Unit Size Overall
  • There are 1,032 units, totaling 7,058 lines of code in units (47.4% of all code).
    • 2 very long units (314 lines of code)
    • 12 long units (768 lines of code)
    • 46 medium size units (1,470 lines of code)
    • 70 small units (1,005 lines of code)
    • 902 very small units (3,501 lines of code)
Distribution of lines of code by unit size:
101+: 4% | 51-100: 10% | 21-50: 20% | 11-20: 14% | 1-10: 49%
Unit Size per Extension
(share of lines of code in units per size category)

Extension   101+  51-100  21-50  11-20  1-10
py            6%     12%    26%    18%   35%
cpp           0%     37%    50%     6%    5%
pyi           0%      0%     0%     2%   97%
h             0%      0%     0%    30%   69%
Unit Size per Logical Component
(share of lines of code in units per size category)

Component (primary logical decomposition)   101+  51-100  21-50  11-20  1-10
xformers/benchmarks                          12%     32%    22%    16%   16%
xformers/sparse                              21%      0%    25%    28%   24%
xformers/components                           0%     12%    21%    16%   50%
xformers/triton                               0%      0%    45%    15%   39%
experimental/ragged_inference                 0%      0%    49%    23%   27%
xformers/factory                              0%      0%    26%    14%   59%
ROOT                                          0%      0%    65%     0%   34%
xformers                                      0%      0%    32%    36%   30%
stubs/torch                                   0%      0%     0%     2%   97%
xformers/helpers                              0%      0%     0%    68%   32%
stubs/numpy                                   0%      0%     0%     0%  100%
stubs                                         0%      0%     0%     0%  100%
stubs/triton                                  0%      0%     0%     0%  100%
stubs/fvcore                                  0%      0%     0%     0%  100%
stubs/sklearn                                 0%      0%     0%     0%  100%
stubs/matplotlib                              0%      0%     0%     0%  100%
stubs/recommonmark                            0%      0%     0%     0%  100%
Longest Units
Top 20 longest units
(# lines = lines of code in the unit; McCabe index = cyclomatic complexity; # params = number of parameters)

Unit                                          Location                                                    # lines  McCabe index  # params
def benchmark()                               xformers/benchmarks/LRA/run_tasks.py                           208            24         2
def __torch_function__()                      xformers/sparse/csr_tensor.py                                  106            22         4
def grid_search()                             xformers/benchmarks/LRA/run_grid_search.py                      92             9         1
def bench_matmul()                            xformers/benchmarks/benchmark_triton_blocksparse.py             87            14         2
def bench_linear()                            xformers/benchmarks/benchmark_triton_fused_linear.py            75            17         1
def get_arg_parser()                          xformers/benchmarks/LRA/run_tasks.py                            67             1         0
def bench_matmul_with_mask()                  xformers/benchmarks/benchmark_core.py                           65             4         0
def bench_dropout()                           xformers/benchmarks/benchmark_triton_dropout.py                 62            13         3
at::Tensor spmm_sputnik()                     xformers/components/attention/csrc/cpu/spmm.cpp                 56             2         6
def bench_sddmm()                             xformers/benchmarks/benchmark_core.py                           54             5         0
at::Tensor sddmm_sputnik()                    xformers/components/attention/csrc/cpu/sddmm.cpp                53             1         5
def _compute_orthogonal_landmarks()           xformers/components/attention/ortho.py                          53             8         2
at::Tensor sparse_softmax_backward_sputnik()  xformers/components/attention/csrc/cpu/sparse_softmax.cpp       52             1         7
def bench_revnet()                            xformers/benchmarks/benchmark_revnet.py                         52            10         1
def backward()                                xformers/triton/dropout.py                                      49            11         2
def get_all_configs()                         experimental/ragged_inference/triton_v2_qk_dotprod.py           48             2         0
def get_all_configs()                         experimental/ragged_inference/triton_v2_matmul.py               48             2         0
def backward()                                xformers/triton/k_layer_norm.py                                 47            12         2
def bench_bmm()                               xformers/benchmarks/benchmark_core.py                           47             4         0
def plot()                                    xformers/benchmarks/benchmark_encoder.py                        45             1         3