azure / aoai-model-evaluation-testing
File Size

The distribution of file sizes, measured in lines of code.

File Size Overall (share of total lines of code per size bucket)

1001+ lines: 0%
501-1000 lines: 0%
201-500 lines: 77%
101-200 lines: 15%
1-100 lines: 7%


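A bucketed distribution like the one above can be reproduced with a short script. A minimal sketch, assuming "lines of code" is approximated as non-blank lines and the percentages are shares of total lines of code; the report tool may apply different counting rules:

```python
# Minimal sketch: bucket every file's line count and report each
# bucket's share of the repository's total lines of code.
from pathlib import Path

BUCKETS = [("1001+", 1001, float("inf")),
           ("501-1000", 501, 1000),
           ("201-500", 201, 500),
           ("101-200", 101, 200),
           ("1-100", 1, 100)]

def loc(path: Path) -> int:
    """Count non-blank lines in a file (a rough proxy for lines of code)."""
    return sum(1 for line in path.read_text(errors="ignore").splitlines()
               if line.strip())

def distribution(root: str, extensions=(".py", ".ipynb", ".jsonl")) -> dict:
    """Share of total LOC held by files in each size bucket."""
    sizes = [loc(p) for p in Path(root).rglob("*")
             if p.is_file() and p.suffix in extensions]
    total = sum(sizes) or 1
    return {label: 100 * sum(s for s in sizes if lo <= s <= hi) / total
            for label, lo, hi in BUCKETS}

for label, pct in distribution(".").items():
    print(f"{label:>9}: {pct:.0f}%")
```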
File Size per Extension (share of lines of code per size bucket)

Extension | 1001+ | 501-1000 | 201-500 | 101-200 | 1-100
ipynb | 0% | 0% | 82% | 17% | 0%
py | 0% | 0% | 0% | 0% | 100%
jsonl | 0% | 0% | 0% | 0% | 100%
File Size per Logical Decomposition (primary; share of lines of code per size bucket)

Component | 1001+ | 501-1000 | 201-500 | 101-200 | 1-100
Lab2_nlp_evaluation | 0% | 0% | 89% | 0% | 10%
Lab4_simulate_datasets | 0% | 0% | 100% | 0% | 0%
Lab1_ai_evaluation | 0% | 0% | 62% | 27% | 9%
Lab3_risk_safety_eval | 0% | 0% | 0% | 100% | 0%
Longest Files (Top 11)

File | # lines | # units
simulate_evaluate_groundedness.ipynb (Lab4_simulate_datasets) | 445 | -
evaluate_base_model_endpoint.ipynb (Lab1_ai_evaluation) | 421 | -
nlp_evaluators.ipynb (Lab2_nlp_evaluation) | 340 | -
nlp_base_model_evaluators.ipynb (Lab2_nlp_evaluation) | 257 | -
ai_evaluation.ipynb (Lab1_ai_evaluation) | 186 | -
safety_evaluation.ipynb (Lab3_risk_safety_eval) | 115 | -
target_ai_api.py (Lab1_ai_evaluation/target_ai_api) | 61 | 8
target_nlp_api.py (Lab2_nlp_evaluation/target_nlp_api) | 61 | 8
ai_data.jsonl (Lab1_ai_evaluation) | 4 | -
ai_data.jsonl (Lab2_nlp_evaluation) | 4 | -
nlp_data.jsonl (Lab2_nlp_evaluation) | 3 | -
Files With Most Units (Top 2)

File | # lines | # units
target_ai_api.py (Lab1_ai_evaluation/target_ai_api) | 61 | 8
target_nlp_api.py (Lab2_nlp_evaluation/target_nlp_api) | 61 | 8
Files With Long Lines (Top 7)

There are 7 files with lines longer than 120 characters; in total, there are 57 long lines. A sketch of the check follows the table.

File | # lines | # units | # long lines
nlp_evaluators.ipynb (Lab2_nlp_evaluation) | 340 | - | 18
evaluate_base_model_endpoint.ipynb (Lab1_ai_evaluation) | 421 | - | 14
simulate_evaluate_groundedness.ipynb (Lab4_simulate_datasets) | 445 | - | 13
ai_data.jsonl (Lab1_ai_evaluation) | 4 | - | 4
ai_data.jsonl (Lab2_nlp_evaluation) | 4 | - | 4
safety_evaluation.ipynb (Lab3_risk_safety_eval) | 115 | - | 2
nlp_base_model_evaluators.ipynb (Lab2_nlp_evaluation) | 257 | - | 2
Correlations

File Size vs. Commits (all time): 11 points (9 itemized below)

File | commits (all time) | lines of code
Lab1_ai_evaluation/ai_evaluation.ipynb | 3 | 186
Lab2_nlp_evaluation/nlp_base_model_evaluators.ipynb | 3 | 257
Lab3_risk_safety_eval/safety_evaluation.ipynb | 3 | 115
Lab1_ai_evaluation/ai_data.jsonl | 1 | 4
Lab1_ai_evaluation/evaluate_base_model_endpoint.ipynb | 1 | 421
Lab1_ai_evaluation/target_ai_api/target_ai_api.py | 1 | 61
Lab2_nlp_evaluation/nlp_data.jsonl | 1 | 3
Lab2_nlp_evaluation/nlp_evaluators.ipynb | 1 | 340
Lab4_simulate_datasets/simulate_evaluate_groundedness.ipynb | 1 | 445

lines of code: min 3.0 | 25th percentile 4.0 | median 115.0 | average 172.45 | 75th percentile 340.0 | max 445.0
commits (all time): min 1.0 | 25th percentile 1.0 | median 1.0 | average 1.55 | 75th percentile 3.0 | max 3.0
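The commit counts on the x-axis can be reproduced from git history. A hedged sketch, assuming the script runs inside a local clone; the tool's counting may differ in detail (e.g. rename tracking):

```python
# Count commits that touched a given path, optionally within a window.
import subprocess

def commit_count(path: str, since: str | None = None) -> int:
    """Number of commits touching `path` (all time, or since a date)."""
    cmd = ["git", "rev-list", "--count", "HEAD"]
    if since:
        cmd += [f"--since={since}"]  # e.g. "30 days ago", "90 days ago"
    cmd += ["--", path]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

print(commit_count("Lab1_ai_evaluation/ai_evaluation.ipynb"))                 # all time
print(commit_count("Lab1_ai_evaluation/ai_evaluation.ipynb", "30 days ago"))  # 30-day window
```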

File Size vs. Contributors (all time): 11 points

Every file has exactly one contributor, so file size shows no variation against contributor count; per-file sizes match the table above.

lines of code: min 3.0 | 25th percentile 4.0 | median 115.0 | average 172.45 | 75th percentile 340.0 | max 445.0
contributors (all time): min 1.0 | 25th percentile 1.0 | median 1.0 | average 1.0 | 75th percentile 1.0 | max 1.0

File Size vs. Commits (30 days): 11 points

Per-file values and summary statistics are identical to the all-time figures above, indicating that all commits fall within the last 30 days.

lines of code: min 3.0 | 25th percentile 4.0 | median 115.0 | average 172.45 | 75th percentile 340.0 | max 445.0
commits (30d): min 1.0 | 25th percentile 1.0 | median 1.0 | average 1.55 | 75th percentile 3.0 | max 3.0

File Size vs. Contributors (30 days): 11 points

Identical to the all-time contributor figures: one contributor per file.

lines of code: min 3.0 | 25th percentile 4.0 | median 115.0 | average 172.45 | 75th percentile 340.0 | max 445.0
contributors (30d): min 1.0 | 25th percentile 1.0 | median 1.0 | average 1.0 | 75th percentile 1.0 | max 1.0

File Size vs. Commits (90 days): 11 points

Identical to the all-time commit figures above.

lines of code: min 3.0 | 25th percentile 4.0 | median 115.0 | average 172.45 | 75th percentile 340.0 | max 445.0
commits (90d): min 1.0 | 25th percentile 1.0 | median 1.0 | average 1.55 | 75th percentile 3.0 | max 3.0

File Size vs. Contributors (90 days): 11 points

Identical to the all-time contributor figures: one contributor per file.

lines of code: min 3.0 | 25th percentile 4.0 | median 115.0 | average 172.45 | 75th percentile 340.0 | max 445.0
contributors (90d): min 1.0 | 25th percentile 1.0 | median 1.0 | average 1.0 | 75th percentile 1.0 | max 1.0