facebookresearch / narwhal
Conditional Complexity

The distribution of the complexity of units (measured with the McCabe index).

Intro
  • Conditional complexity (also called cyclomatic complexity) is a measure of software complexity: it counts the number of possible execution paths through a program function. A higher value often means higher maintenance and testing costs (infosecinstitute.com).
  • Conditional complexity is calculated by counting all conditions in the program that can affect the execution path (e.g. if statements, loops, switches, and/or operators, try and catch blocks...); see the sketch after this list.
  • Conditional complexity is measured at the unit level (methods, functions...).
  • Units are classified in five categories based on the measured McCabe index: 1-5 (very simple units), 6-10 (simple units), 11-25 (medium complex units), 26-50 (complex units), 51+ (very complex units).
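A minimal sketch of this counting for Python, assuming the commonly used rule set of starting at 1 and adding 1 per decision point (the report does not name its analyzer or its exact rules):

  import ast

  def mccabe_index(source: str) -> int:
      """Approximate McCabe index: 1 + one per decision point."""
      complexity = 1  # a unit with no branches has exactly one path
      for node in ast.walk(ast.parse(source)):
          if isinstance(node, (ast.If, ast.IfExp, ast.For, ast.AsyncFor,
                               ast.While, ast.ExceptHandler)):
              complexity += 1
          elif isinstance(node, ast.BoolOp):
              # 'a and b and c' adds one extra path per extra operand
              complexity += len(node.values) - 1
      return complexity

  print(mccabe_index("""
  def f(x, xs):
      if x and xs:                       # +1 (if), +1 (and)
          for y in xs:                   # +1
              try:
                  x /= y
              except ZeroDivisionError:  # +1
                  pass
      return x
  """))  # -> 5: a 'very simple' unit (McCabe index 1-5)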
Conditional Complexity Overall
  • There are 201 units totaling 2,148 lines of code (37.5% of all code):
    • 0 very complex units (0 lines of code)
    • 0 complex units (0 lines of code)
    • 10 medium complex units (370 lines of code)
    • 14 simple units (382 lines of code)
    • 177 very simple units (1,396 lines of code)
Share of lines of code per McCabe index bucket (51+ | 26-50 | 11-25 | 6-10 | 1-5): 0% | 0% | 17% | 17% | 64%
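The percentages are shares of lines of code, not unit counts, truncated to whole percents (the truncation is an assumption inferred from the 64% figure, since 1,396 / 2,148 ≈ 65.0%). A minimal bucketing sketch:

  BUCKETS = [(51, 10**9, "very complex"), (26, 50, "complex"),
             (11, 25, "medium complex"), (6, 10, "simple"),
             (1, 5, "very simple")]

  def loc_distribution(units):
      """units: list of (lines_of_code, mccabe_index), one pair per unit."""
      units = list(units)
      total = sum(lines for lines, _ in units) or 1
      return {label: int(100 * sum(lines for lines, cc in units
                                   if lo <= cc <= hi) / total)
              for lo, hi, label in BUCKETS}

  # With the overall numbers above: 370/2148 -> 17%, 382/2148 -> 17%,
  # 1396/2148 -> 64%, reproducing the 0% | 0% | 17% | 17% | 64% bar.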
Conditional Complexity per Extension
Extension | 51+ | 26-50 | 11-25 | 6-10 | 1-5
py        | 0%  | 0%    | 21%   | 22%  | 55%
rs        | 0%  | 0%    | 0%    | 0%   | 100%
Conditional Complexity per Logical Component
primary logical decomposition
Component | 51+ | 26-50 | 11-25 | 6-10 | 1-5
benchmark | 0%  | 0%    | 21%   | 22%  | 55%
worker    | 0%  | 0%    | 0%    | 0%   | 100%
consensus | 0%  | 0%    | 0%    | 0%   | 100%
primary   | 0%  | 0%    | 0%    | 0%   | 100%
crypto    | 0%  | 0%    | 0%    | 0%   | 100%
config    | 0%  | 0%    | 0%    | 0%   | 100%
network   | 0%  | 0%    | 0%    | 0%   | 100%
Most Complex Units
Top 20 most complex units
Unit                      | Location                                 | # lines | McCabe index | # params
def __init__()            | benchmark/benchmark/config.py            | 33      | 18           | 2
def __init__()            | benchmark/benchmark/config.py            | 27      | 14           | 2
def run()                 | benchmark/benchmark/local.py             | 66      | 14           | 2
def _select_hosts()       | benchmark/benchmark/remote.py            | 22      | 13           | 2
def _print_tps()          | benchmark/benchmark/aggregate.py         | 25      | 12           | 2
def _config()             | benchmark/benchmark/remote.py            | 37      | 12           | 4
def run()                 | benchmark/benchmark/remote.py            | 51      | 12           | 4
def _print_tps()          | benchmark/data/paper-data/plot-script.py | 25      | 12           | 2
def plot()                | benchmark/benchmark/plot.py              | 36      | 11           | 2
def _run_single()         | benchmark/benchmark/remote.py            | 48      | 11           | 5
def __init__()            | benchmark/benchmark/config.py            | 39      | 10           | 3
def __init__()            | benchmark/benchmark/logs.py              | 42      | 10           | 5
def create_instances()    | benchmark/benchmark/instance.py          | 44      | 8            | 2
def terminate_instances() | benchmark/benchmark/instance.py          | 19      | 8            | 1
def plot_latency()        | benchmark/data/paper-data/plot-script.py | 38      | 8            | 6
def _get()                | benchmark/benchmark/instance.py          | 21      | 7            | 2
def _plot()               | benchmark/benchmark/plot.py              | 34      | 7            | 6
def __init__()            | benchmark/data/paper-data/plot-script.py | 17      | 7            | 4
def plot_tps()            | benchmark/data/paper-data/plot-script.py | 35      | 7            | 7
def aggregate()           | benchmark/benchmark/aggregate.py         | 8       | 6            | 2