Leaderboards are widely used in NLP and push the field forward. While leaderboards are a straightforward ranking of NLP models, this simplicity can mask nuances in evaluation items (examples) and subjects (NLP models). Rather than replace leaderboards, we advocate a re-imagining so that they better highlight if and where progress is made. Building on educational testing, we create a Bayesian leaderboard where latent subject skill and latent item difficulty predict correct responses. Using this model, we analyze the reliability of leaderboards. We then show that the model can guide what to annotate, identify annotation errors, detect overfitting, and identify informative examples.
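The model described above links a subject's latent skill and an item's latent difficulty to the probability of a correct response, as in item response theory. Below is a minimal sketch of that link in Python, assuming a one-parameter logistic (Rasch-style) form; the function name and example values are illustrative and not taken from this repository.

```python
import numpy as np

def p_correct(skill: float, difficulty: float) -> float:
    """Probability of a correct response under a 1PL (Rasch-style) link:
    p = sigmoid(skill - difficulty)."""
    return float(1.0 / (1.0 + np.exp(-(skill - difficulty))))

# Illustrative values only: one subject's skill against an easy and a hard item.
print(p_correct(skill=1.0, difficulty=-1.0))  # ~0.88: easy item, likely answered correctly
print(p_correct(skill=1.0, difficulty=2.0))   # ~0.27: hard item, likely answered incorrectly
```

In the Bayesian setting the abstract describes, skill and difficulty are latent parameters inferred from the observed responses rather than fixed inputs.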
Main Code: 10,442 LOC (63 files) = PY (82%) + TSX (11%) + TOML (2%) + TS (1%) + YML (<1%) + CSS (<1%) + STAN (<1%) + HTML (<1%)
Secondary code: tests 88 LOC (4 files); generated 0 LOC (0 files); build & deploy 17 LOC (2 files); other 580 LOC (10 files)

Duplication: 18%
File Size: 12% long (> 1000 LOC), 27% short (<= 200 LOC)
Unit Size: 7% long (> 100 LOC), 35% short (<= 10 LOC)
Conditional Complexity: 1% complex (McCabe index > 50), 75% simple (McCabe index <= 5)
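The size and complexity figures above are bucketed against fixed thresholds. Here is a minimal sketch of that bucketing in Python, assuming a simple per-file and per-unit classification; the function names and the "medium"/"moderate" labels are illustrative rather than Sokrates' actual categories.

```python
def bucket_file_size(loc: int) -> str:
    # Thresholds from the report: long files exceed 1000 LOC,
    # short files are at most 200 LOC.
    if loc > 1000:
        return "long"
    if loc <= 200:
        return "short"
    return "medium"  # illustrative label for the in-between range

def bucket_conditional_complexity(mccabe: int) -> str:
    # Thresholds from the report: complex units have a McCabe index
    # above 50, simple units are at or below 5.
    if mccabe > 50:
        return "complex"
    if mccabe <= 5:
        return "simple"
    return "moderate"  # illustrative label for the in-between range

print(bucket_file_size(1542))            # long
print(bucket_conditional_complexity(3))  # simple
```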
Logical Component Decomposition: primary (11 components)

Age: 6 months old
Change frequency: 0% of code updated more than 50 times; see also temporal dependencies for files frequently changed in the same commits.
Goals: Keep the system simple and easy to change (4)

Features of interest: TODOs (5 files)
Latest commit date: 2021-10-25
Commits (30 days): 0
Contributors (30 days): 0
generated by sokrates.dev on 2022-01-25