Dynamic training with Apache MXNet reduces the cost and time of training deep neural networks by leveraging the elasticity and scale of the AWS cloud. It does so by dynamically resizing the training cluster while training is in progress, with minimal impact on model accuracy.
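As a rough illustration of the idea only (not the project's actual API), the sketch below shows an elastic training loop that re-checks the provisioned worker count between epochs and re-shards the data accordingly; `ClusterMonitor`, `shard`, and `train_epoch` are hypothetical stand-ins for the cluster manager, data partitioning, and the per-epoch training step.

```python
import random


class ClusterMonitor:
    """Hypothetical stand-in for whatever reports the provisioned worker count."""

    def __init__(self, initial_workers: int):
        self.num_workers = initial_workers

    def poll(self) -> int:
        # A real elastic setup would query the cluster manager (for example an
        # AWS Auto Scaling group); here we just simulate occasional resizes.
        if random.random() < 0.3:
            self.num_workers = max(1, self.num_workers + random.choice([-1, 1]))
        return self.num_workers


def shard(dataset, rank: int, num_workers: int):
    """Slice of the dataset owned by this worker under the current cluster size."""
    return dataset[rank::num_workers]


def train_epoch(samples) -> None:
    # Placeholder for one forward/backward/update pass over this worker's shard.
    pass


def elastic_training(dataset, epochs: int, rank: int = 0) -> None:
    monitor = ClusterMonitor(initial_workers=2)
    for epoch in range(epochs):
        workers = monitor.poll()  # cluster size may have changed between epochs
        my_shard = shard(dataset, rank % workers, workers)
        train_epoch(my_shard)
        print(f"epoch {epoch}: {workers} workers, shard size {len(my_shard)}")


if __name__ == "__main__":
    elastic_training(dataset=list(range(1000)), epochs=5)
```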
Main Code: 252,286 LOC (1,584 files) = H (25%) + PY (24%) + CC (18%) + PM (8%) + CU (5%) + SCALA (4%) + CUH (2%) + R (1%) + JL (1%) + I (1%) + PROTO (1%) + CLJ (<1%) + CPP (<1%) + JAVA (<1%) + HPP (<1%) + CMAKE (<1%) + ST (<1%) + GV (<1%) + M (<1%) + PYX (<1%) + PL (<1%) + YML (<1%) + CFG (<1%) + GROOVY (<1%) + PYI (<1%) + YAML (<1%) + G4 (<1%) + IN (<1%) + T (<1%) + PERL (<1%)
Secondary code: Test: 55,422 LOC (298 files); Generated: 0 LOC (0 files); Build & Deploy: 3,831 LOC (109 files); Other: 22,992 LOC (347 files)
File Size: 13% long (>1000 LOC), 34% short (<= 200 LOC)
Unit Size: 9% long (>100 LOC), 41% short (<= 10 LOC)
Conditional Complexity: 5% complex (McCabe index > 50), 57% simple (McCabe index <= 5)
Logical Component Decomposition: primary (19 components)
Age: 3 years, 2 months
0% of code updated more than 50 times. Also see temporal dependencies for files frequently changed in the same commits.
Goals: Keep the system simple and easy to change (4)
Features of interest: TODOs (48 files)
Latest commit date: 2021-04-28
Commits (30 days): 0
Contributors (30 days): 0
generated by sokrates.dev (configuration) on 2022-01-31