This is a sample solution for bringing your own ML models and inference code, and running them at scale using AWS serverless services.
Main code: 674 LOC (9 files) = TS (70%) + PY (28%) + JS (1%)
Secondary code: Tests: 19 LOC (1 file); Generated: 0 LOC (0 files); Build & Deploy: 101 LOC (4 files); Other: 129 LOC (5 files)
Duplication: 4%
File size: 0% long (> 1000 LOC), 69% short (<= 200 LOC)
Unit size: 52% long (> 100 LOC), 13% short (<= 10 LOC)
Conditional complexity: 0% complex (McCabe index > 50), 97% simple (McCabe index <= 5)
Logical component decomposition: primary (6 components)
Age: 2 months
0% of code updated more than 50 times. Also see temporal dependencies for files frequently changed in the same commits.
Goals: Keep the system simple and easy to change (4)
Latest commit date: 2021-11-12
Commits (30 days): 0
Contributors (30 days): 0
generated by sokrates.dev (configuration) on 2022-01-31