facebookresearch / parcus
Conditional Complexity

The distribution of complexity of units (measured with McCabe index).

Intro
  • Conditional complexity (also called cyclomatic complexity) is a measure of software complexity. It counts the number of linearly independent paths through a unit of code; a higher value often means higher maintenance and testing costs (infosecinstitute.com).
  • Conditional complexity is calculated by counting all conditions in the program that can affect the execution path (e.g. if statements, loops, switches, logical and/or operators, try and catch blocks...); a small annotated example follows this list.
  • Conditional complexity is measured at the unit level (methods, functions...).
  • Units are classified in five categories based on the measured McCabe index: 1-5 (very simple units), 6-10 (simple units), 11-25 (medium complex units), 26-50 (complex units), 51+ (very complex units).
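
To make the counting rule concrete, the small Python unit below is a hypothetical example (not taken from parcus), annotated with the decision points that each add 1 on top of the base complexity of 1; exact counting rules can differ slightly between tools.

    def classify(scores, threshold=0.5):          # base complexity of any unit: 1
        results = []
        for s in scores:                           # +1 (loop)
            if s > threshold and s < 1.0:          # +1 (if), +1 (and)
                results.append("positive")
            elif s < -threshold or s <= -1.0:      # +1 (elif), +1 (or)
                results.append("negative")
            else:                                  # else adds no new condition
                results.append("neutral")
        try:
            ratio = len(results) / len(scores)
        except ZeroDivisionError:                  # +1 (catch block)
            ratio = 0.0
        return results, ratio
    # McCabe index: 1 + 6 = 7, which would fall in the 6-10 (simple units) bucket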
Conditional Complexity Overall
  • There are 187 units, containing 3,631 lines of code (88.0% of all code).
    • 3 very complex units (352 lines of code)
    • 3 complex units (426 lines of code)
    • 16 medium complex units (846 lines of code)
    • 25 simple units (867 lines of code)
    • 140 very simple units (1,140 lines of code)
Distribution of lines of code in units by McCabe index bucket: 51+ = 9%, 26-50 = 11%, 11-25 = 23%, 6-10 = 23%, 1-5 = 31%
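Each percentage in the distribution above appears to be the category's share of the 3,631 lines of code in units, truncated to a whole number: for example, the 352 lines in very complex (51+) units give 352 / 3,631 ≈ 9.7%, displayed as 9%.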
Conditional Complexity per Extension
Extension | 51+ | 26-50 | 11-25 | 6-10 | 1-5
py | 9% | 11% | 23% | 23% | 31%
Conditional Complexity per Logical Component
(primary logical decomposition)
Component | 51+ | 26-50 | 11-25 | 6-10 | 1-5
parsers/Spouse | 35% | 0% | 27% | 17% | 18%
parsers/MovieReview | 19% | 18% | 20% | 24% | 16%
training | 0% | 29% | 26% | 15% | 28%
parsers/Hatespeech | 0% | 0% | 41% | 42% | 15%
datasets | 0% | 0% | 0% | 36% | 63%
ROOT | 0% | 0% | 0% | 100% | 0%
utils | 0% | 0% | 0% | 55% | 44%
models | 0% | 0% | 0% | 0% | 100%
Most Complex Units
Top 20 most complex units
Unit | File | # lines | McCabe index | # params
def _convert_examples_to_features() | parsers/Spouse/Spouse_Finetune_Preprocess.py | 123 | 53 | 3
def _convert_examples_to_features() | parsers/Spouse/Spouse_Preprocess.py | 114 | 53 | 3
def _convert_examples_to_features() | parsers/MovieReview/MovieReview_Preprocess.py | 115 | 53 | 3
def _convert_examples_to_features() | parsers/MovieReview/MovieReview_Finetune_Preprocess.py | 111 | 50 | 3
def compute() | training/BertBaselineTraining.py | 147 | 38 | 15
def compute() | training/NeuralPatternMatchingTraining.py | 168 | 32 | 16
def process_embeddings() | parsers/Spouse/Spouse_Dataset_Builder.py | 63 | 22 | 2
def _convert_examples_to_features() | parsers/Hatespeech/Hatespeech_Fasttext_Preprocess.py | 75 | 19 | 2
def process_embeddings() | parsers/Hatespeech/Hatespeech_Dataset_Fasttext_Builder.py | 45 | 18 | 2
def process_embeddings() | parsers/Hatespeech/Hatespeech_Dataset_Builder.py | 42 | 17 | 2
def _convert_examples_to_features() | parsers/Hatespeech/Hatespeech_Preprocess_Ngrams.py | 41 | 16 | 3
def process_embeddings() | parsers/MovieReview/MovieReview_Dataset_Builder.py | 39 | 16 | 2
def process_embeddings() | parsers/Spouse/Spouse_Finetune_Dataset_Builder.py | 43 | 15 | 1
def compute() | training/BertFinetuneTraining.py | 122 | 15 | 13
def process_embeddings() | parsers/MovieReview/MovieReview_Finetune_Dataset_Builder.py | 40 | 14 | 2
def compute() | training/NGramLogRegTraining.py | 83 | 14 | 14
def get_data_splits() | training/utils.py | 40 | 14 | 4
def compute_spouse_embeddings() | parsers/Spouse/Spouse_Dataset_Builder.py | 45 | 13 | 3
def bagging() | training/NeuralPatternMatchingTraining.py | 44 | 13 | 13
def compute_hatespeech_embeddings() | parsers/Hatespeech/Hatespeech_Dataset_Builder.py | 44 | 12 | 3