pytorch / functorch
Conditional Complexity

The distribution of unit complexity (measured with the McCabe index).

Intro
  • Conditional complexity (also called cyclomatic complexity) is a measure of software complexity: the number of possible execution paths through a function. A higher value often means higher maintenance and testing costs (infosecinstitute.com).
  • Conditional complexity is calculated by counting every construct that can branch the execution path (e.g. if statements, loops, switch cases, && and || operators, try/catch blocks).
  • Conditional complexity is measured at the unit level (methods, functions...).
  • Units are classified into five categories based on the measured McCabe index: 1-5 (very simple units), 6-10 (simple units), 11-25 (medium complex units), 26-50 (complex units), 51+ (very complex units).
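The counting rule above can be illustrated with a small script. This is a simplified sketch using Python's `ast` module, not the exact counter used by this report: it adds one path for each branching statement and one for each extra operand of a boolean operator.

```python
import ast

# Statement-level decision points that add an execution path
# (per the counting rule above: ifs, loops, exception handlers).
_DECISION_NODES = (ast.If, ast.IfExp, ast.For, ast.While, ast.ExceptHandler)

def mccabe_index(source: str) -> int:
    """Rough cyclomatic complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    complexity = 1  # a straight-line function has exactly one path
    for node in ast.walk(tree):
        if isinstance(node, _DECISION_NODES):
            complexity += 1
        elif isinstance(node, ast.BoolOp):
            # 'a or b or c' adds len(values) - 1 extra branch decisions
            complexity += len(node.values) - 1
    return complexity

sample = """
def clamp_demo(x, lo, hi):
    if x < lo or x > hi:       # if (+1), 'or' (+1)
        for _ in range(3):     # loop (+1)
            pass
    return x
"""
print(mccabe_index(sample))  # 1 base path + 3 decisions -> 4
```

Real analyzers also count constructs this sketch ignores (comprehension filters, `case` arms, ternaries inside expressions), so its numbers will not match the report exactly.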
Conditional Complexity Overall
  • There are 605 units totaling 6,887 lines of code (56.3% of all code).
    • 0 very complex units (0 lines of code)
    • 3 complex units (295 lines of code)
    • 18 medium complex units (956 lines of code)
    • 53 simple units (1,522 lines of code)
    • 531 very simple units (4,114 lines of code)
Distribution by McCabe index: 51+: 0% | 26-50: 4% | 11-25: 13% | 6-10: 22% | 1-5: 59%
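The report's buckets can be expressed as a small helper. A sketch only; the thresholds come from the legend above, and the category labels are inferred from the unit counts in the overall breakdown:

```python
def complexity_bucket(mccabe: int) -> str:
    """Map a McCabe index to this report's five complexity buckets."""
    if mccabe >= 51:
        return "51+ (very complex)"
    if mccabe >= 26:
        return "26-50 (complex)"
    if mccabe >= 11:
        return "11-25 (medium complex)"
    if mccabe >= 6:
        return "6-10 (simple)"
    return "1-5 (very simple)"

# The top unit below, minimizer() with McCabe index 31, lands in 26-50:
print(complexity_bucket(31))  # -> "26-50 (complex)"
```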
Conditional Complexity per Extension
  Extension   51+   26-50   11-25   6-10    1-5
  py           0%      7%     11%    26%    54%
  cpp          0%      2%     16%    17%    63%
  h            0%      0%      0%    33%    66%
Conditional Complexity per Logical Component (primary logical decomposition)
  Component                                 51+   26-50   11-25   6-10    1-5
  functorch/_src                             0%      6%     18%    27%    47%
  functorch/csrc                             0%      2%     15%    18%    63%
  op_analysis                                0%     64%      0%     0%    35%
  benchmarks                                 0%      0%      0%    37%    62%
  codegen                                    0%      0%      0%    41%    58%
  benchmarks/transformer_fusion_patterns     0%      0%      0%    22%    77%
  notebooks/_src                             0%      0%      0%     0%   100%
  ROOT                                       0%      0%      0%     0%   100%
  functorch                                  0%      0%      0%     0%   100%
Most Complex Units
Top 20 most complex units
  Unit                                              Location                                          # lines  McCabe index  # params
  def minimizer()                                   functorch/_src/fx_minifier.py                         129            31         3
  def gen_data()                                    op_analysis/gen_data.py                                82            27         2
  void boxed_reduction_batch_rule()                 functorch/csrc/BatchRulesReduceOps.cpp                 84            26         2
  def __torch_dispatch__()                          functorch/_src/python_key.py                           40            23         4
  def compute_code()                                functorch/_src/operator_authoring.py                   62            19         1
  std::tuple nll_loss_forward_decomposition()       functorch/csrc/BatchRulesLoss.cpp                      66            18         5
  def tensorexpr_compile()                          functorch/_src/compilers.py                            50            18         2
  void batchedTensorForLoopFallback()               functorch/csrc/BatchedFallback.cpp                    100            16         2
  void dynamicLayerBackFallback()                   functorch/csrc/DynamicLayer.cpp                        74            16         2
  def jacrev()                                      functorch/_src/eager_transforms.py                     43            16         5
  def partition_with_recompute_fwd_in_bwd()         functorch/_src/aot_autograd.py                         63            15         2
  def ts_compile()                                  functorch/_src/compilers.py                            27            13         2
  void batchedTensorInplaceForLoopFallback()        functorch/csrc/BatchedFallback.cpp                     78            11         2
  std::vector genDimFlags()                         functorch/csrc/CompileCache.cpp                        24            11         2
  static bool allTensors()                          functorch/csrc/DynamicLayer.cpp                        30            11         2
  void initDimflags()                               functorch/csrc/PointwiseOperatorCompileCache.cpp       25            11         3
  void call()                                       functorch/csrc/PointwiseOperatorCompileCache.cpp       60            11         1
  std::tuple batch_norm_backward_plumbing()         functorch/csrc/BatchRulesNorm.cpp                      86            11        10
  std::tuple native_layer_norm_backward_plumbing()  functorch/csrc/BatchRulesNorm.cpp                      78            11         8
  def _autograd_grad()                              functorch/_src/eager_transforms.py                     18            11         5