This code implements Prioritized Level Replay (PLR), a method for sampling training levels for reinforcement learning agents. It exploits the fact that not all levels are equally useful to learn from at a given point in training, preferentially replaying levels with higher estimated learning potential.
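The core idea can be sketched as follows. This is a minimal, hypothetical simplification (not the repository's actual implementation): it assumes a rank-based prioritization of per-level scores mixed with a staleness term, and the class/parameter names (`LevelSampler`, `rho`, `beta`, `staleness_coef`) are illustrative.

```python
import numpy as np

class LevelSampler:
    """Minimal sketch of Prioritized Level Replay (simplified, assumptions noted above).

    Seen levels are replayed with probability proportional to a rank-based
    transformation of their learning-potential scores (e.g. mean absolute
    advantage), mixed with a staleness term so rarely sampled levels are
    eventually revisited.
    """

    def __init__(self, rho=0.5, beta=0.1, staleness_coef=0.1, seed=0):
        self.rho = rho                      # chance of sampling a fresh, unseen level
        self.beta = beta                    # temperature of the rank prioritization
        self.staleness_coef = staleness_coef
        self.rng = np.random.default_rng(seed)
        self.scores = {}                    # level -> learning-potential score
        self.last_sampled = {}              # level -> step it was last touched
        self.step = 0

    def update_score(self, level, score):
        """Record a level's score after an episode on it."""
        self.scores[level] = score
        self.last_sampled[level] = self.step

    def _replay_probs(self):
        levels = list(self.scores)
        scores = np.array([self.scores[l] for l in levels], dtype=float)
        # Rank-based prioritization: weight_i = (1 / rank_i)^(1 / beta),
        # where rank 1 is the highest-scoring level.
        ranks = np.empty(len(scores))
        ranks[np.argsort(-scores)] = np.arange(1, len(scores) + 1)
        weights = (1.0 / ranks) ** (1.0 / self.beta)
        p_score = weights / weights.sum()
        # Staleness term: favor levels that have not been sampled recently.
        stale = np.array([self.step - self.last_sampled[l] for l in levels], dtype=float)
        if stale.sum() > 0:
            p_stale = stale / stale.sum()
        else:
            p_stale = np.full(len(levels), 1.0 / len(levels))
        return levels, (1 - self.staleness_coef) * p_score + self.staleness_coef * p_stale

    def sample(self, new_level_fn):
        """Return a level: a fresh one with probability rho, else a prioritized replay."""
        self.step += 1
        if not self.scores or self.rng.random() < self.rho:
            return new_level_fn()
        levels, probs = self._replay_probs()
        level = self.rng.choice(levels, p=probs)
        self.last_sampled[level] = self.step
        return level
```

With `rho=0`, the sampler always replays, and levels with higher recorded scores are drawn far more often than low-scoring ones.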
Main code: 2,177 LOC across 15 files (100% Python)
Secondary code: tests 0 LOC (0 files); generated 0 LOC (0 files); build & deploy 0 LOC (0 files); other 124 LOC (3 files)
Duplication: 5%
File size: 0% long files (>1000 LOC), 43% short files (<=200 LOC)
Unit size: 11% long units (>100 LOC), 52% short units (<=10 LOC)
Conditional complexity: 11% complex units (McCabe index >50), 67% simple units (McCabe index <=5)
Logical component decomposition: primary (4 components)
Goals: keep the system simple and easy to change (4)
Generated by sokrates.dev on 2022-01-25.