openai / preparedness
File Size

The distribution of file sizes, measured in lines of code.
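The bucketing can be recomputed from a checkout. A minimal sketch in Python, assuming "lines of code" means physical lines per file and that each bucket's percentage is its share of the total lines of code (the report's own counter may exclude blanks or comments):

    from pathlib import Path

    def bucket_label(n: int) -> str:
        # The report's size buckets, in lines of code.
        if n > 1000:
            return "1001+"
        for high, label in [(100, "1-100"), (200, "101-200"),
                            (500, "201-500"), (1000, "501-1000")]:
            if n <= high:
                return label

    def loc_share_by_bucket(root: str,
                            suffixes=(".py", ".js", ".toml", ".yaml", ".css", ".html")):
        """Share of all lines of code held by files in each size bucket."""
        totals: dict[str, int] = {}
        for path in Path(root).rglob("*"):
            if path.is_file() and path.suffix in suffixes:
                n = sum(1 for _ in path.open(errors="ignore"))
                totals[bucket_label(n)] = totals.get(bucket_label(n), 0) + n
        grand = sum(totals.values()) or 1
        return {label: round(100 * count / grand) for label, count in totals.items()}

    print(loc_share_by_bucket("."))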

File Size Overall

Size (lines of code) | % of total lines of code
1001+ | 12%
501-1000 | 14%
201-500 | 27%
101-200 | 24%
1-100 | 20%


File Size per Extension

Extension | 1001+ | 501-1000 | 201-500 | 101-200 | 1-100
py | 13% | 15% | 26% | 24% | 19%
js | 0% | 0% | 100% | 0% | 0%
toml | 0% | 0% | 0% | 54% | 45%
yaml | 0% | 0% | 0% | 69% | 30%
css | 0% | 0% | 0% | 0% | 100%
html | 0% | 0% | 0% | 0% | 100%
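Grouping the same computation by extension reproduces the table above. A sketch under the same assumptions (physical line counts; percentages are each bucket's share of that extension's lines of code):

    from collections import defaultdict
    from pathlib import Path

    def bucket_label(n: int) -> str:
        # Same buckets as the earlier sketch.
        if n > 1000:
            return "1001+"
        for high, label in [(100, "1-100"), (200, "101-200"),
                            (500, "201-500"), (1000, "501-1000")]:
            if n <= high:
                return label

    def loc_share_per_extension(root: str) -> dict[str, dict[str, int]]:
        """extension -> {bucket: % of that extension's lines of code}."""
        loc = defaultdict(lambda: defaultdict(int))
        for path in Path(root).rglob("*"):
            if path.is_file() and path.suffix:
                n = sum(1 for _ in path.open(errors="ignore"))
                loc[path.suffix.lstrip(".")][bucket_label(n)] += n
        return {ext: {b: round(100 * v / sum(buckets.values()))
                      for b, v in buckets.items()}
                for ext, buckets in loc.items()}

    for ext, dist in sorted(loc_share_per_extension(".").items()):
        print(ext, dist)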
File Size per Logical Decomposition (primary)

Component | 1001+ | 501-1000 | 201-500 | 101-200 | 1-100
project | 12% | 14% | 27% | 23% | 20%
ROOT | 0% | 0% | 0% | 100% | 0%
Longest Files (Top 50)
File | # lines | # units
project/alcatraz/alcatraz/clusters/local.py | 1608 | 14
project/paperbench/paperbench/nano/eval.py | 961 | 9
project/paperbench/paperbench/judge/judge.py | 886 | 34
project/paperbench/paperbench/agents/run.py | 442 | 4
project/paperbench/paperbench/gui/app.py | 420 | 14
project/paperbench/paperbench/agents/aisi-basic-agent/utils.py | 374 | 7
project/nanoeval/nanoeval/_executor_worker.py | 372 | 8
project/paperbench/paperbench/gui/static/script.js | 345 | 18
project/nanoeval/nanoeval/solvers/computer_tasks/solver.py | 289 | 10
project/nanoeval/nanoeval/evaluation.py | 282 | 1
project/paperbench/paperbench/scripts/alcatraz_services.py | 259 | -
project/paperbench/paperbench/rubric/tasks.py | 243 | 27
project/paperbench/paperbench/metrics.py | 223 | 10
project/paperbench/paperbench/agents/aisi-basic-agent/_basic_agent_plus.py | 209 | 5
project/paperbench/paperbench/scripts/run_judge_eval.py | 196 | -
project/paperbench/paperbench/scripts/run_judge.py | 192 | 3
project/paperbench/paperbench/agents/aisi-basic-agent/_basic_agent_iterative.py | 186 | 5
project/nanoeval/nanoeval/eval.py | 185 | 11
project/nanoeval/nanoeval/monitor.py | 179 | 5
project/nanoeval/nanoeval/library_config.py | 173 | 25
project/nanoeval/nanoeval/solvers/mcq.py | 170 | 3
project/paperbench/paperbench/scripts/run_monitor.py | 170 | 1
project/paperbench/experiments/judge_eval/judge_eval_perf_cost.py | 166 | 6
 | 154 | -
project/paperbench/paperbench/scripts/run_reproduce.py | 150 | 2
project/paperbench/paperbench/nano/utils.py | 146 | 10
project/paperbench/paperbench/agents/aisi-basic-agent/config.yaml | 132 | -
project/nanoeval/nanoeval/json_recorder.py | 124 | 15
project/paperbench/paperbench/utils.py | 122 | 15
project/paperbench/experiments/judge_eval/judge_eval_perf_tables.py | 113 | 5
project/paperbench/paperbench/monitor/monitor.py | 111 | 7
project/nanoeval/nanoeval/solvers/computer_tasks/code_execution_interface.py | 106 | 4
project/nanoeval/nanoeval/solvers/computer_tasks/task.py | 104 | 4
project/nanoeval/nanoeval/metrics/standard.py | 103 | 4
project/nanoeval_alcatraz/nanoeval_alcatraz/alcatraz_computer_interface.py | 102 | 2
project/paperbench/paperbench/infra/alcatraz.py | 100 | -
project/paperbench/paperbench/gui/static/styles.css | 100 | -
project/nanoeval/nanoeval/solvers/mcq_api.py | 96 | 3
project/nanoeval/nanoeval/solvers/computer_tasks/steps.py | 95 | 6
project/nanoeval/nanoeval/_persistent_db.py | 92 | 5
project/nanoeval/nanoeval/solvers/computer_tasks/pausable_timer.py | 89 | 7
project/nanoeval/nanoeval/asyncio_utils.py | 84 | 1
project/paperbench/paperbench/agents/registry.py | 81 | 4
project/paperbench/paperbench/agents/aisi-basic-agent/_file_reader.py | 81 | 2
project/paperbench/paperbench/judge/utils.py | 78 | 4
project/nanoeval/nanoeval/solvers/short_answer.py | 76 | 1
project/nanoeval/nanoeval/_db.py | 72 | 4
project/paperbench/paperbench/judge/judge_eval/evaluate.py | 72 | 4
project/paperbench/paperbench/paper_registry.py | 72 | 5
project/paperbench/paperbench/judge/judge_eval/download_data.py | 70 | 4
Files With Most Units (Top 50)
File | # lines | # units
project/paperbench/paperbench/judge/judge.py | 886 | 34
project/paperbench/paperbench/rubric/tasks.py | 243 | 27
project/nanoeval/nanoeval/library_config.py | 173 | 25
project/paperbench/paperbench/gui/static/script.js | 345 | 18
project/nanoeval/nanoeval/json_recorder.py | 124 | 15
project/paperbench/paperbench/utils.py | 122 | 15
project/alcatraz/alcatraz/clusters/local.py | 1608 | 14
project/nanoeval/nanoeval/recorder_protocol.py | 62 | 14
project/paperbench/paperbench/gui/app.py | 420 | 14
project/nanoeval/nanoeval/eval.py | 185 | 11
project/nanoeval/nanoeval/solvers/computer_tasks/solver.py | 289 | 10
project/paperbench/paperbench/nano/utils.py | 146 | 10
project/paperbench/paperbench/metrics.py | 223 | 10
project/paperbench/paperbench/nano/eval.py | 961 | 9
project/nanoeval/nanoeval/_executor_worker.py | 372 | 8
project/nanoeval/nanoeval/solvers/computer_tasks/pausable_timer.py | 89 | 7
project/paperbench/paperbench/judge/judge_eval/registry.py | 70 | 7
project/paperbench/paperbench/monitor/monitor.py | 111 | 7
project/paperbench/paperbench/agents/aisi-basic-agent/utils.py | 374 | 7
project/alcatraz/alcatraz/clusters/_container_proc.py | 38 | 6
project/nanoeval/nanoeval/solvers/computer_tasks/steps.py | 95 | 6
project/paperbench/experiments/judge_eval/judge_eval_perf_cost.py | 166 | 6
project/nanoeval/nanoeval/_persistent_db.py | 92 | 5
project/nanoeval/nanoeval/monitor.py | 179 | 5
project/nanoeval/nanoeval/_multiprocessing_utils.py | 54 | 5
project/paperbench/experiments/judge_eval/judge_eval_perf_tables.py | 113 | 5
project/paperbench/paperbench/judge/create_judge.py | 55 | 5
project/paperbench/paperbench/paper_registry.py | 72 | 5
project/paperbench/paperbench/agents/aisi-basic-agent/_basic_agent_iterative.py | 186 | 5
project/paperbench/paperbench/agents/aisi-basic-agent/_basic_agent_plus.py | 209 | 5
project/nanoeval/nanoeval/metrics/standard.py | 103 | 4
project/nanoeval/nanoeval/_db.py | 72 | 4
project/nanoeval/nanoeval/solvers/computer_tasks/code_execution_interface.py | 106 | 4
project/nanoeval/nanoeval/solvers/computer_tasks/task.py | 104 | 4
project/nanoeval/nanoeval/fs_paths.py | 19 | 4
project/paperbench/experiments/pbcd_correlation/plot.py | 64 | 4
project/paperbench/paperbench/judge/judge_eval/evaluate.py | 72 | 4
project/paperbench/paperbench/judge/judge_eval/download_data.py | 70 | 4
project/paperbench/paperbench/judge/utils.py | 78 | 4
project/paperbench/paperbench/agents/run.py | 442 | 4
project/paperbench/paperbench/agents/registry.py | 81 | 4
project/paperbench/paperbench/agents/utils.py | 36 | 4
project/nanoeval/nanoeval/recorder.py | 23 | 3
project/nanoeval/nanoeval/solvers/mcq.py | 170 | 3
project/nanoeval/nanoeval/solvers/mcq_api.py | 96 | 3
project/nanoeval/nanoeval/setup.py | 44 | 3
project/paperbench/experiments/judge_max_depth/plot.py | 56 | 3
project/paperbench/paperbench/scripts/run_judge.py | 192 | 3
project/paperbench/paperbench/agents/aisi-basic-agent/_execute.py | 40 | 3
project/paperbench/paperbench/rubric/utils.py | 26 | 3
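The report does not define "# units"; in similar static-analysis reports it typically means functions or methods. A rough approximation for the Python files, using the standard ast module (an assumed definition, not necessarily the report's exact rule):

    import ast
    from pathlib import Path

    def count_units(path: str) -> int:
        """Count function and method definitions as 'units'."""
        tree = ast.parse(Path(path).read_text(errors="ignore"))
        return sum(isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
                   for node in ast.walk(tree))

    # judge.py is reported above with 34 units.
    print(count_units("project/paperbench/paperbench/judge/judge.py"))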
Files With Long Lines (Top 17)

There are 17 files with lines longer than 120 characters, accounting for 63 long lines in total.
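
A quick way to reproduce these counts (assuming the 120-character threshold is applied to raw line length):

    from pathlib import Path

    LIMIT = 120  # the report's long-line threshold

    def long_lines(path: Path) -> int:
        return sum(len(line.rstrip("\n")) > LIMIT
                   for line in path.open(errors="ignore"))

    hits = [(p, long_lines(p)) for p in Path(".").rglob("*.py") if p.is_file()]
    for p, n in sorted(hits, key=lambda t: -t[1]):
        if n:
            print(f"{p}: {n} long lines")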

File | # lines | # units | # long lines
project/paperbench/paperbench/judge/judge.py | 886 | 34 | 12
project/alcatraz/alcatraz/clusters/local.py | 1608 | 14 | 11
project/nanoeval/nanoeval/eval.py | 185 | 11 | 10
project/paperbench/paperbench/nano/eval.py | 961 | 9 | 5
project/paperbench/paperbench/judge/constants.py | 15 | 2 | 4
project/paperbench/paperbench/gui/templates/macros.html | 33 | - | 4
project/paperbench/paperbench/agents/aisi-basic-agent/_basic_agent_plus.py | 209 | 5 | 3
project/nanoeval/nanoeval/_executor_worker.py | 372 | 8 | 2
project/nanoeval/nanoeval/_multiprocessing_utils.py | 54 | 5 | 2
project/paperbench/paperbench/agents/aisi-basic-agent/_basic_agent_iterative.py | 186 | 5 | 2
project/paperbench/paperbench/gui/static/script.js | 345 | 18 | 2
project/nanoeval/nanoeval/metrics/standard.py | 103 | 4 | 1
project/nanoeval/nanoeval/solvers/mcq_api.py | 96 | 3 | 1
project/nanoeval/nanoeval/solvers/computer_tasks/task.py | 104 | 4 | 1
project/nanoeval/nanoeval/evaluation.py | 282 | 1 | 1
project/paperbench/paperbench/gui/templates/base.html | 15 | - | 1
 | 154 | - | 1
Correlations

File Size vs. Commits (all time): 31 points

File | commits (all time) | lines of code
project/paperbench/paperbench/scripts/alcatraz_services.py | 2 | 259
project/paperbench/paperbench/agents/run.py | 1 | 442
project/paperbench/paperbench/infra/alcatraz.py | 1 | 100
project/paperbench/paperbench/metrics.py | 2 | 223
project/paperbench/paperbench/nano/eval.py | 2 | 961
project/paperbench/paperbench/nano/utils.py | 4 | 146
project/paperbench/paperbench/scripts/run_monitor.py | 1 | 170
project/paperbench/paperbench/utils.py | 1 | 122
project/paperbench/pyproject.toml | 1 | 38
project/paperbench/paperbench/nano/entrypoint.py | 2 | 14
project/paperbench/data/papers/stochastic-interpolants/config.yaml | 1 | 2
project/alcatraz/alcatraz/utils/cmds.py | 1 | 7
project/nanoeval/nanoeval/_db.py | 1 | 72
project/nanoeval/nanoeval/_executor_worker.py | 1 | 372
project/nanoeval/nanoeval/_multiprocessing_utils.py | 1 | 54
project/nanoeval/nanoeval/_persistent_db.py | 1 | 92
project/nanoeval/nanoeval/eval.py | 1 | 185
project/nanoeval/nanoeval/evaluation.py | 1 | 282
project/nanoeval/nanoeval/library_config.py | 1 | 173
project/nanoeval/nanoeval/metrics/agents.py | 1 | 66
project/nanoeval/nanoeval/metrics/standard.py | 1 | 103
project/nanoeval/nanoeval/monitor.py | 1 | 179
project/nanoeval/nanoeval/setup.py | 1 | 44
project/nanoeval/nanoeval/solvers/computer_tasks/code_execution_interface.py | 1 | 106
project/nanoeval/nanoeval/solvers/computer_tasks/pausable_timer.py | 1 | 89
project/nanoeval/nanoeval/solvers/computer_tasks/solver.py | 1 | 289
project/nanoeval/nanoeval/solvers/computer_tasks/steps.py | 1 | 95
project/nanoeval/nanoeval/solvers/short_answer.py | 1 | 76

lines of code: min 2.0 | average 165.52 | 25th percentile 72.0 | median 104.0 | 75th percentile 185.0 | max 961.0
commits (all time): min 1.0 | average 1.23 | 25th percentile 1.0 | median 1.0 | 75th percentile 1.0 | max 4.0
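
The commit and contributor counts on the x axes come from version-control history. One way to gather them, sketched with the git CLI (assumes a local clone; the report's own extraction may differ):

    import subprocess

    def git_lines(path: str, fmt: str, since: str | None = None) -> list[str]:
        cmd = ["git", "log", f"--format={fmt}", "--follow"]
        if since:
            cmd.append(f"--since={since}")
        out = subprocess.run(cmd + ["--", path],
                             capture_output=True, text=True, check=True).stdout
        return out.splitlines()

    def commit_count(path: str, since: str | None = None) -> int:
        return len(git_lines(path, "%H", since))

    def contributor_count(path: str, since: str | None = None) -> int:
        return len(set(git_lines(path, "%ae", since)))  # distinct author emails

    f = "project/paperbench/paperbench/nano/utils.py"
    print(commit_count(f), contributor_count(f))          # all time
    print(commit_count(f, "30 days ago"), contributor_count(f, "30 days ago"))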

File Size vs. Contributors (all time): 31 points

File | contributors (all time) | lines of code
project/paperbench/paperbench/scripts/alcatraz_services.py | 1 | 259
project/paperbench/paperbench/agents/run.py | 1 | 442
project/paperbench/paperbench/infra/alcatraz.py | 1 | 100
project/paperbench/paperbench/metrics.py | 2 | 223
project/paperbench/paperbench/nano/eval.py | 2 | 961
project/paperbench/paperbench/nano/utils.py | 2 | 146
project/paperbench/paperbench/scripts/run_monitor.py | 1 | 170
project/paperbench/paperbench/utils.py | 1 | 122
project/paperbench/pyproject.toml | 1 | 38
project/paperbench/paperbench/nano/entrypoint.py | 1 | 14
project/paperbench/data/papers/stochastic-interpolants/config.yaml | 1 | 2
project/alcatraz/alcatraz/utils/cmds.py | 1 | 7
project/nanoeval/nanoeval/_db.py | 1 | 72
project/nanoeval/nanoeval/_executor_worker.py | 1 | 372
project/nanoeval/nanoeval/_multiprocessing_utils.py | 1 | 54
project/nanoeval/nanoeval/_persistent_db.py | 1 | 92
project/nanoeval/nanoeval/eval.py | 1 | 185
project/nanoeval/nanoeval/evaluation.py | 1 | 282
project/nanoeval/nanoeval/library_config.py | 1 | 173
project/nanoeval/nanoeval/metrics/agents.py | 1 | 66
project/nanoeval/nanoeval/metrics/standard.py | 1 | 103
project/nanoeval/nanoeval/monitor.py | 1 | 179
project/nanoeval/nanoeval/setup.py | 1 | 44
project/nanoeval/nanoeval/solvers/computer_tasks/code_execution_interface.py | 1 | 106
project/nanoeval/nanoeval/solvers/computer_tasks/pausable_timer.py | 1 | 89
project/nanoeval/nanoeval/solvers/computer_tasks/solver.py | 1 | 289
project/nanoeval/nanoeval/solvers/computer_tasks/steps.py | 1 | 95
project/nanoeval/nanoeval/solvers/short_answer.py | 1 | 76

lines of code: min 2.0 | average 165.52 | 25th percentile 72.0 | median 104.0 | 75th percentile 185.0 | max 961.0
contributors (all time): min 1.0 | average 1.1 | 25th percentile 1.0 | median 1.0 | 75th percentile 1.0 | max 2.0

File Size vs. Commits (30 days): 31 points

File | commits (30d) | lines of code
project/paperbench/paperbench/scripts/alcatraz_services.py | 2 | 259
project/paperbench/paperbench/agents/run.py | 1 | 442
project/paperbench/paperbench/infra/alcatraz.py | 1 | 100
project/paperbench/paperbench/metrics.py | 2 | 223
project/paperbench/paperbench/nano/eval.py | 2 | 961
project/paperbench/paperbench/nano/utils.py | 4 | 146
project/paperbench/paperbench/scripts/run_monitor.py | 1 | 170
project/paperbench/paperbench/utils.py | 1 | 122
project/paperbench/pyproject.toml | 1 | 38
project/paperbench/paperbench/nano/entrypoint.py | 2 | 14
project/paperbench/data/papers/stochastic-interpolants/config.yaml | 1 | 2
project/alcatraz/alcatraz/utils/cmds.py | 1 | 7
project/nanoeval/nanoeval/_db.py | 1 | 72
project/nanoeval/nanoeval/_executor_worker.py | 1 | 372
project/nanoeval/nanoeval/_multiprocessing_utils.py | 1 | 54
project/nanoeval/nanoeval/_persistent_db.py | 1 | 92
project/nanoeval/nanoeval/eval.py | 1 | 185
project/nanoeval/nanoeval/evaluation.py | 1 | 282
project/nanoeval/nanoeval/library_config.py | 1 | 173
project/nanoeval/nanoeval/metrics/agents.py | 1 | 66
project/nanoeval/nanoeval/metrics/standard.py | 1 | 103
project/nanoeval/nanoeval/monitor.py | 1 | 179
project/nanoeval/nanoeval/setup.py | 1 | 44
project/nanoeval/nanoeval/solvers/computer_tasks/code_execution_interface.py | 1 | 106
project/nanoeval/nanoeval/solvers/computer_tasks/pausable_timer.py | 1 | 89
project/nanoeval/nanoeval/solvers/computer_tasks/solver.py | 1 | 289
project/nanoeval/nanoeval/solvers/computer_tasks/steps.py | 1 | 95
project/nanoeval/nanoeval/solvers/short_answer.py | 1 | 76

lines of code: min 2.0 | average 165.52 | 25th percentile 72.0 | median 104.0 | 75th percentile 185.0 | max 961.0
commits (30d): min 1.0 | average 1.23 | 25th percentile 1.0 | median 1.0 | 75th percentile 1.0 | max 4.0

File Size vs. Contributors (30 days): 31 points

File | contributors (30d) | lines of code
project/paperbench/paperbench/scripts/alcatraz_services.py | 1 | 259
project/paperbench/paperbench/agents/run.py | 1 | 442
project/paperbench/paperbench/infra/alcatraz.py | 1 | 100
project/paperbench/paperbench/metrics.py | 2 | 223
project/paperbench/paperbench/nano/eval.py | 2 | 961
project/paperbench/paperbench/nano/utils.py | 2 | 146
project/paperbench/paperbench/scripts/run_monitor.py | 1 | 170
project/paperbench/paperbench/utils.py | 1 | 122
project/paperbench/pyproject.toml | 1 | 38
project/paperbench/paperbench/nano/entrypoint.py | 1 | 14
project/paperbench/data/papers/stochastic-interpolants/config.yaml | 1 | 2
project/alcatraz/alcatraz/utils/cmds.py | 1 | 7
project/nanoeval/nanoeval/_db.py | 1 | 72
project/nanoeval/nanoeval/_executor_worker.py | 1 | 372
project/nanoeval/nanoeval/_multiprocessing_utils.py | 1 | 54
project/nanoeval/nanoeval/_persistent_db.py | 1 | 92
project/nanoeval/nanoeval/eval.py | 1 | 185
project/nanoeval/nanoeval/evaluation.py | 1 | 282
project/nanoeval/nanoeval/library_config.py | 1 | 173
project/nanoeval/nanoeval/metrics/agents.py | 1 | 66
project/nanoeval/nanoeval/metrics/standard.py | 1 | 103
project/nanoeval/nanoeval/monitor.py | 1 | 179
project/nanoeval/nanoeval/setup.py | 1 | 44
project/nanoeval/nanoeval/solvers/computer_tasks/code_execution_interface.py | 1 | 106
project/nanoeval/nanoeval/solvers/computer_tasks/pausable_timer.py | 1 | 89
project/nanoeval/nanoeval/solvers/computer_tasks/solver.py | 1 | 289
project/nanoeval/nanoeval/solvers/computer_tasks/steps.py | 1 | 95
project/nanoeval/nanoeval/solvers/short_answer.py | 1 | 76

lines of code: min 2.0 | average 165.52 | 25th percentile 72.0 | median 104.0 | 75th percentile 185.0 | max 961.0
contributors (30d): min 1.0 | average 1.1 | 25th percentile 1.0 | median 1.0 | 75th percentile 1.0 | max 2.0

File Size vs. Commits (90 days): 31 points

File | commits (90d) | lines of code
project/paperbench/paperbench/scripts/alcatraz_services.py | 2 | 259
project/paperbench/paperbench/agents/run.py | 1 | 442
project/paperbench/paperbench/infra/alcatraz.py | 1 | 100
project/paperbench/paperbench/metrics.py | 2 | 223
project/paperbench/paperbench/nano/eval.py | 2 | 961
project/paperbench/paperbench/nano/utils.py | 4 | 146
project/paperbench/paperbench/scripts/run_monitor.py | 1 | 170
project/paperbench/paperbench/utils.py | 1 | 122
project/paperbench/pyproject.toml | 1 | 38
project/paperbench/paperbench/nano/entrypoint.py | 2 | 14
project/paperbench/data/papers/stochastic-interpolants/config.yaml | 1 | 2
project/alcatraz/alcatraz/utils/cmds.py | 1 | 7
project/nanoeval/nanoeval/_db.py | 1 | 72
project/nanoeval/nanoeval/_executor_worker.py | 1 | 372
project/nanoeval/nanoeval/_multiprocessing_utils.py | 1 | 54
project/nanoeval/nanoeval/_persistent_db.py | 1 | 92
project/nanoeval/nanoeval/eval.py | 1 | 185
project/nanoeval/nanoeval/evaluation.py | 1 | 282
project/nanoeval/nanoeval/library_config.py | 1 | 173
project/nanoeval/nanoeval/metrics/agents.py | 1 | 66
project/nanoeval/nanoeval/metrics/standard.py | 1 | 103
project/nanoeval/nanoeval/monitor.py | 1 | 179
project/nanoeval/nanoeval/setup.py | 1 | 44
project/nanoeval/nanoeval/solvers/computer_tasks/code_execution_interface.py | 1 | 106
project/nanoeval/nanoeval/solvers/computer_tasks/pausable_timer.py | 1 | 89
project/nanoeval/nanoeval/solvers/computer_tasks/solver.py | 1 | 289
project/nanoeval/nanoeval/solvers/computer_tasks/steps.py | 1 | 95
project/nanoeval/nanoeval/solvers/short_answer.py | 1 | 76

lines of code: min 2.0 | average 165.52 | 25th percentile 72.0 | median 104.0 | 75th percentile 185.0 | max 961.0
commits (90d): min 1.0 | average 1.23 | 25th percentile 1.0 | median 1.0 | 75th percentile 1.0 | max 4.0

File Size vs. Contributors (90 days): 31 points

File | contributors (90d) | lines of code
project/paperbench/paperbench/scripts/alcatraz_services.py | 1 | 259
project/paperbench/paperbench/agents/run.py | 1 | 442
project/paperbench/paperbench/infra/alcatraz.py | 1 | 100
project/paperbench/paperbench/metrics.py | 2 | 223
project/paperbench/paperbench/nano/eval.py | 2 | 961
project/paperbench/paperbench/nano/utils.py | 2 | 146
project/paperbench/paperbench/scripts/run_monitor.py | 1 | 170
project/paperbench/paperbench/utils.py | 1 | 122
project/paperbench/pyproject.toml | 1 | 38
project/paperbench/paperbench/nano/entrypoint.py | 1 | 14
project/paperbench/data/papers/stochastic-interpolants/config.yaml | 1 | 2
project/alcatraz/alcatraz/utils/cmds.py | 1 | 7
project/nanoeval/nanoeval/_db.py | 1 | 72
project/nanoeval/nanoeval/_executor_worker.py | 1 | 372
project/nanoeval/nanoeval/_multiprocessing_utils.py | 1 | 54
project/nanoeval/nanoeval/_persistent_db.py | 1 | 92
project/nanoeval/nanoeval/eval.py | 1 | 185
project/nanoeval/nanoeval/evaluation.py | 1 | 282
project/nanoeval/nanoeval/library_config.py | 1 | 173
project/nanoeval/nanoeval/metrics/agents.py | 1 | 66
project/nanoeval/nanoeval/metrics/standard.py | 1 | 103
project/nanoeval/nanoeval/monitor.py | 1 | 179
project/nanoeval/nanoeval/setup.py | 1 | 44
project/nanoeval/nanoeval/solvers/computer_tasks/code_execution_interface.py | 1 | 106
project/nanoeval/nanoeval/solvers/computer_tasks/pausable_timer.py | 1 | 89
project/nanoeval/nanoeval/solvers/computer_tasks/solver.py | 1 | 289
project/nanoeval/nanoeval/solvers/computer_tasks/steps.py | 1 | 95
project/nanoeval/nanoeval/solvers/short_answer.py | 1 | 76

lines of code: min 2.0 | average 165.52 | 25th percentile 72.0 | median 104.0 | 75th percentile 185.0 | max 961.0
contributors (90d): min 1.0 | average 1.1 | 25th percentile 1.0 | median 1.0 | 75th percentile 1.0 | max 2.0
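
The plots list the points and the marginal statistics but no correlation coefficient. One can be computed directly from the pairs; a sketch using the standard library, with the (commits, lines of code) pairs taken from the first scatter plot above:

    from statistics import correlation  # Pearson r, Python 3.10+

    # (commits all time, lines of code) pairs, in the order listed above.
    commits = [2, 1, 1, 2, 2, 4, 1, 1, 1, 2, 1, 1,
               1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
    loc = [259, 442, 100, 223, 961, 146, 170, 122, 38, 14, 2, 7,
           72, 372, 54, 92, 185, 282, 173, 66, 103, 179, 44, 106, 89, 289, 95, 76]

    print(f"Pearson r = {correlation(commits, loc):.2f}")

With nearly all commit counts equal to 1, any coefficient here reflects only a handful of frequently-touched files, so it should be read cautiously.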