With the aim of building next-generation virtual assistants that can handle multimodal inputs and perform multimodal actions, we introduce two new datasets (both in the virtual shopping domain), the annotation schema, the core technical tasks, and the baseline models. The code for the baselines and the datasets will be open-sourced.
Main code: 5,587 LOC in 41 files (100% Python)
Secondary code: tests: 0 LOC (0 files); generated: 0 LOC (0 files); build & deploy: 383 LOC (8 files); other: 1,085 LOC (42 files)
Duplication: 8%
File size: 0% long files (>1000 LOC), 45% short files (<=200 LOC)
Unit size: 34% long units (>100 LOC), 20% short units (<=10 LOC)
Conditional complexity: 31% complex units (McCabe index >50), 26% simple units (McCabe index <=5)
Logical component decomposition: primary (8 components)
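The McCabe (cyclomatic complexity) index above counts independent paths through a unit: one plus the number of branch points. A minimal sketch of how such an index can be approximated for Python source using the standard ast module (the set of counted node types is an assumption for illustration; Sokrates' exact counting rules may differ):

```python
import ast

# Node types treated as decision points (an assumption; real tools may
# also count boolean operators per operand, comprehension ifs, etc.)
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def mccabe_index(source: str) -> int:
    """Approximate cyclomatic complexity: 1 + number of branch points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

src = """
def classify(x):
    if x > 0:
        return "pos"
    elif x < 0:
        return "neg"
    return "zero"
"""
print(mccabe_index(src))  # → 3 (the elif is a nested If in the AST)
```

Under a reporting scheme like the one above, a unit would be flagged as complex only past a threshold (here, McCabe index > 50).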
Age: 1 year, 7 months
Churn: 0% of code updated more than 50 times. Also see temporal dependencies for files frequently changed in the same commits.
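Temporal dependencies flag files that tend to change in the same commits even when no static dependency links them. A minimal sketch of how co-changed file pairs can be counted from `git log --name-only` output (the parsing and thresholds here are illustrative assumptions, not Sokrates' actual algorithm):

```python
import subprocess
from collections import Counter
from itertools import combinations

def co_change_counts(log_text: str) -> Counter:
    """Count how often each pair of files appears in the same commit.

    Expects `git log --name-only --pretty=format:` output: commits are
    separated by blank lines; each non-blank line is a file path.
    """
    pairs = Counter()
    for commit in log_text.split("\n\n"):
        files = sorted(set(commit.split()))
        pairs.update(combinations(files, 2))
    return pairs

def repo_co_changes(repo_path: str) -> Counter:
    """Run git log on a repository and count co-changed file pairs."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True).stdout
    return co_change_counts(log)

# a.py and b.py change together in both commits; c.py only in the second.
sample = "a.py\nb.py\n\na.py\nb.py\nc.py"
print(co_change_counts(sample).most_common(1))  # → [(('a.py', 'b.py'), 2)]
```

Pairs with a high count are candidates for hidden coupling worth reviewing, even when the duplication metric is low.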
Goals: keep the system simple and easy to change (4)
Features of interest: TODOs (3 files)
Latest commit date: 2021-08-25
Commits (30 days): 0
Contributors (30 days): 0
Generated by sokrates.dev on 2022-01-25