pytorch / pytorch
Unit Size

The distribution of unit sizes, measured in lines of code.

Intro
  • Unit size measurements show the distribution of the sizes of units of code (methods, functions, etc.).
  • Units are classified into five categories based on their size in lines of code: 1-10 (very small units), 11-20 (small units), 21-50 (medium size units), 51-100 (long units), and 101+ (very long units).
  • Aim to keep units small (< 20 lines). Long units can become "bloaters": code that has grown to such gargantuan proportions that it is hard to work with.
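The five-way bucketing described above can be expressed as a small classifier. This is an illustrative sketch, not the report tool's actual code; the function name and bucket labels are mine:

```python
def size_category(lines: int) -> str:
    """Map a unit's line count to the report's five size buckets."""
    if lines <= 10:
        return "1-10 (very small)"
    if lines <= 20:
        return "11-20 (small)"
    if lines <= 50:
        return "21-50 (medium)"
    if lines <= 100:
        return "51-100 (long)"
    return "101+ (very long)"

# Boundary values land in the lower bucket:
print(size_category(20))   # 11-20 (small)
print(size_category(101))  # 101+ (very long)
```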
Unit Size Overall
  • There are 46,530 units, comprising 640,723 lines of code (53.8% of all code).
    • 519 very long units (114,835 lines of code)
    • 1,489 long units (100,941 lines of code)
    • 5,484 medium size units (170,884 lines of code)
    • 7,692 small units (112,592 lines of code)
    • 31,346 very small units (141,471 lines of code)
Distribution of lines of code by unit size:
  101+: 17% | 51-100: 15% | 21-50: 26% | 11-20: 17% | 1-10: 22%
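The overall shares can be reproduced from the per-bucket line counts reported above. A small sketch (numbers copied from this report; percentages are truncated to whole numbers, as in the distribution bar):

```python
# Per-bucket lines of code, taken from the "Unit Size Overall" figures.
buckets = {
    "101+": 114_835,
    "51-100": 100_941,
    "21-50": 170_884,
    "11-20": 112_592,
    "1-10": 141_471,
}

total = sum(buckets.values())  # total lines of code in units
# Truncate rather than round, matching the report's displayed percentages.
shares = {k: int(100 * v / total) for k, v in buckets.items()}

print(total)   # 640723
print(shares)  # {'101+': 17, '51-100': 15, '21-50': 26, '11-20': 17, '1-10': 22}
```

Note that the bucket totals sum exactly to the reported 640,723 lines of code in units.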
Unit Size per Extension
Share of lines of code in each size category, per file extension:

Extension | 101+ | 51-100 | 21-50 | 11-20 | 1-10
cpp       |  15% |    19% |   29% |   18% |  16%
cc        |  31% |    19% |   28% |   12% |   8%
c         |  83% |     9% |    5% |   <1% |  <1%
h         |  13% |    13% |   20% |   16% |  37%
py        |   6% |    10% |   27% |   22% |  32%
mm        |   7% |    36% |   42% |    4% |   8%
hpp       |   0% |    17% |   44% |   18% |  19%
js        |   0% |    22% |   29% |   13% |  34%
java      |   0% |     0% |   22% |   17% |  60%
pyi       |   0% |     0% |    0% |   11% |  88%
m         |   0% |     0% |    0% |    0% | 100%
Unit Size per Logical Component
Share of lines of code in each size category, per primary logical decomposition:

Component    | 101+ | 51-100 | 21-50 | 11-20 | 1-10
aten         |  23% |    17% |   25% |   14% |  18%
caffe2       |  23% |    19% |   28% |   14% |  14%
torch        |  12% |    13% |   26% |   20% |  25%
c10          |   8% |     6% |   16% |   16% |  52%
tools        |  10% |    10% |   27% |   20% |  30%
android      |  24% |     3% |   17% |   15% |  38%
binaries     |   6% |    36% |   31% |   15% |   8%
ROOT         |  29% |    15% |   30% |   12% |  11%
benchmarks   |   0% |     8% |   40% |   23% |  27%
modules      |   0% |    12% |   33% |   10% |  42%
scripts      |   0% |     0% |   24% |   38% |  37%
ios          |   0% |     0% |    0% |   54% |  45%
mypy_plugins |   0% |     0% |    0% |    0% | 100%
Longest Units
Top 20 longest units:

Unit | Location | # lines | McCabe index | # params
void do_avg_pool_nhwc_on_AVX_n() | aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp | 1871 | 180 | 16
throw propagation_error() | torch/csrc/jit/passes/shape_analysis.cpp | 1660 | 362 | 0
static void registerJitOperator() | torch/csrc/jit/codegen/cuda/parser.cpp | 1489 | 101 | 0
int nnc_lowerings_lazy_registration() | torch/csrc/jit/tensorexpr/lowerings.cpp | 1425 | 35 | 0
void initJITBindings() | torch/csrc/jit/python/init.cpp | 1392 | 48 | 1
void initJitScriptBindings() | torch/csrc/jit/python/script_init.cpp | 1306 | 56 | 1
def AD_unsqueeze_multiple() | torch/csrc/jit/runtime/symbolic_script.cpp | 1210 | 49 | 3
void pytorch_q8gemm_ukernel_8x8__neon() | aten/src/ATen/native/quantized/cpu/qnnpack/src/q8gemm/8x8-neon.c | 1172 | 27 | 10
void pytorch_q8conv_ukernel_8x8__neon() | aten/src/ATen/native/quantized/cpu/qnnpack/src/q8conv/8x8-neon.c | 1172 | 21 | 10
void pytorch_q8dwconv_ukernel_mp8x25_per_channel__sse2() | aten/src/ATen/native/quantized/cpu/qnnpack/src/q8dwconv/mp8x25-sse2-per-channel.c | 1019 | 11 | 9
void pytorch_q8dwconv_ukernel_mp8x27__neon() | aten/src/ATen/native/quantized/cpu/qnnpack/src/q8dwconv/mp8x27-neon.c | 933 | 12 | 11
def get_testing_overrides() | torch/overrides.py | 929 | 15 | 0
void pytorch_q8dwconv_ukernel_mp8x25__sse2() | aten/src/ATen/native/quantized/cpu/qnnpack/src/q8dwconv/mp8x25-sse2.c | 899 | 11 | 9
void pytorch_q8dwconv_ukernel_up8x9_per_channel__neon() | aten/src/ATen/native/quantized/cpu/qnnpack/src/q8dwconv/up8x9-neon-per-channel.c | 891 | 15 | 8
void pytorch_q8dwconv_ukernel_up8x9__neon() | aten/src/ATen/native/quantized/cpu/qnnpack/src/q8dwconv/up8x9-neon.c | 871 | 15 | 8
void pytorch_q8dwconv_ukernel_mp8x25_per_channel__neon() | aten/src/ATen/native/quantized/cpu/qnnpack/src/q8dwconv/mp8x25-neon-per-channel.c | 857 | 11 | 9
void pytorch_q8dwconv_ukernel_mp8x25__neon() | aten/src/ATen/native/quantized/cpu/qnnpack/src/q8dwconv/mp8x25-neon.c | 833 | 11 | 9
void addGlobalMethods() | caffe2/python/pybind_state.cc | 829 | 31 | 1
void initPythonIRBindings() | torch/csrc/jit/python/python_ir.cpp | 827 | 31 | 1
void initTensorExprBindings() | torch/csrc/jit/tensorexpr/tensorexpr_init.cpp | 822 | 21 | 1