pytorch / xla
Conditional Complexity

The distribution of complexity of units (measured with McCabe index).

Intro
  • Conditional complexity (also called cyclomatic complexity) is a measure of software complexity: it counts the number of possible execution paths through a program unit. A higher value often means higher maintenance and testing costs (infosecinstitute.com).
  • Conditional complexity is calculated by counting all constructs that can affect the execution path (e.g. if statements, loops, switch cases, and/or operators, try/catch blocks); a minimal counting sketch follows this list.
  • Conditional complexity is measured at the unit level (methods, functions...).
  • Units are classified in five categories based on the measured McCabe index: 1-5 (very simple units), 6-10 (simple units), 11-25 (medium complex units), 26-50 (complex units), 51+ (very complex units).
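To make the counting rule concrete, here is a minimal sketch of a McCabe counter for Python units. It is not the tool used to generate this report; the mccabe() helper, the set of counted constructs, and the clamp() example are illustrative simplifications.

    import ast
    import textwrap

    # Simplified McCabe counter: start at 1 (the straight-line path) and add one
    # for every construct that can branch control flow. Real tools count a few
    # more constructs, so treat this as an approximation.
    BRANCHING_NODES = (ast.If, ast.For, ast.While, ast.IfExp, ast.ExceptHandler)

    def mccabe(source: str) -> int:
        complexity = 1
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, BRANCHING_NODES):
                complexity += 1
            elif isinstance(node, ast.BoolOp):
                # "and"/"or" add one extra path per additional operand
                complexity += len(node.values) - 1
        return complexity

    example = textwrap.dedent("""
        def clamp(x, lo, hi):
            if x < lo or x > hi:      # +1 for the if, +1 for the or
                return min(max(x, lo), hi)
            return x
    """)
    print(mccabe(example))  # 3: the base path, plus the if, plus the or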
Conditional Complexity Overall
  • There are 2,676 units containing 24,089 lines of code, which is 67.0% of all code.
    • 0 very complex units, McCabe 51+ (0 lines of code)
    • 0 complex units, McCabe 26-50 (0 lines of code)
    • 31 medium complex units, McCabe 11-25 (1,419 lines of code)
    • 92 simple units, McCabe 6-10 (2,591 lines of code)
    • 2,553 very simple units, McCabe 1-5 (20,079 lines of code)
Distribution of lines of code in units by McCabe index: 51+: 0% | 26-50: 0% | 11-25: 5% | 6-10: 10% | 1-5: 83%
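The distribution bar can be reproduced from the per-bucket line counts listed above, assuming each value is the bucket's share of the 24,089 lines of code in units, truncated to a whole percent (the truncation is an inference from the numbers, not documented behaviour).

    # Reproduce the distribution bar from the per-bucket line counts above,
    # assuming each value is the bucket's share of all unit lines of code,
    # truncated to a whole percent.
    lines_per_bucket = {"51+": 0, "26-50": 0, "11-25": 1419, "6-10": 2591, "1-5": 20079}
    total = sum(lines_per_bucket.values())  # 24,089 lines of code in units
    bar = " | ".join(f"{int(100 * lines / total)}%" for lines in lines_per_bucket.values())
    print(bar)  # 0% | 0% | 5% | 10% | 83%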
Conditional Complexity per Extension
Extension | 51+ | 26-50 | 11-25 | 6-10 | 1-5
cpp | 0% | 0% | 6% | 7% | 86%
py | 0% | 0% | 4% | 23% | 71%
h | 0% | 0% | 11% | 0% | 88%
Conditional Complexity per Logical Component
(primary logical decomposition)
Component | 51+ | 26-50 | 11-25 | 6-10 | 1-5
torch_xla/csrc | 0% | 0% | 6% | 7% | 86%
torch_xla/distributed | 0% | 0% | 6% | 30% | 63%
torch_xla/utils | 0% | 0% | 9% | 9% | 81%
scripts | 0% | 0% | 6% | 30% | 62%
torch_xla/core | 0% | 0% | 2% | 13% | 84%
torch_xla/debug | 0% | 0% | 0% | 29% | 70%
torch_xla/amp | 0% | 0% | 0% | 69% | 30%
torch_xla | 0% | 0% | 0% | 41% | 58%
contrib/scripts | 0% | 0% | 0% | 26% | 73%
ROOT | 0% | 0% | 0% | 13% | 86%
Most Complex Units
Top 20 most complex units
Unit | File | # lines | McCabe index | # params
at::ScalarType TensorTypeFromXlaType() | torch_xla/csrc/tensor_util.cpp | 36 | 22 | 1
xla::PrimitiveType GetDevicePrimitiveType() | torch_xla/csrc/tensor_util.cpp | 39 | 22 | 2
… | … | 42 | 17 | 2
XlaOpVector Scalar::Lower() | torch_xla/csrc/ops/scalar.cpp | 60 | 17 | 1
void TensorToBufferSType() | torch_xla/csrc/tensor_util.cpp | 68 | 16 | 5
at::Tensor MakeTensorFromXlaLiteral() | torch_xla/csrc/tensor_util.cpp | 40 | 16 | 2
xla::XlaOp CreateMatMul() | torch_xla/csrc/xla_lower_util.cpp | 41 | 16 | 2
NodePtr ARange() | torch_xla/csrc/ops/ops.cpp | 68 | 16 | 4
static xla::Literal ScalarLiteral() | torch_xla/csrc/helpers.h | 39 | 16 | 2
xla::XlaOp ConvertTo() | torch_xla/csrc/convert_ops.cpp | 33 | 15 | 4
def validate() | torch_xla/distributed/cluster.py | 35 | 15 | 1
… | … | 42 | 14 | 1
xla::PrimitiveType XlaTypeFromTensorType() | torch_xla/csrc/tensor_util.cpp | 32 | 14 | 2
def _for_each_instance_rewrite() | torch_xla/utils/utils.py | 39 | 14 | 4
def prase_graphs() | scripts/grab_graphs.py | 48 | 13 | 3
void PopulateTensorBuffer() | torch_xla/csrc/tensor_util.cpp | 56 | 13 | 5
at::Tensor XlaLiteralToTensorHelper() | torch_xla/csrc/tensor_util.cpp | 34 | 13 | 2
torch::lazy::hash_t TensorHash() | torch_xla/csrc/tensor_util.cpp | 34 | 13 | 1
xla::PrimitiveType TensorTypeToRawXlaType() | torch_xla/csrc/tensor_util.cpp | 30 | 13 | 1
xla::PrimitiveType MakeXlaPrimitiveType() | torch_xla/csrc/tensor_util.cpp | 31 | 13 | 2
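Most of the top units are type-conversion helpers in torch_xla/csrc/tensor_util.cpp, where a single dispatch over tensor element types accounts for almost the whole McCabe index. The sketch below is a hypothetical Python stand-in, not the actual pytorch/xla code; it only illustrates that each additional case adds one independent path, so a dispatch over N types scores roughly N + 1 even when every branch is trivial.

    # Hypothetical type-dispatch helper (not taken from pytorch/xla): each elif
    # adds one decision point, so the unit scores McCabe 6 (1 base path + 5
    # branches) even though every branch is a one-liner.
    def scalar_type_name(code: int) -> str:
        if code == 0:
            return "float32"
        elif code == 1:
            return "float64"
        elif code == 2:
            return "int32"
        elif code == 3:
            return "int64"
        elif code == 4:
            return "bool"
        raise ValueError(f"unsupported type code: {code}")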