We provide a PyTorch implementation of the paper "Voice Separation with an Unknown Number of Multiple Speakers", in which we present a new method for separating a mixed audio sequence in which multiple voices speak simultaneously. The method employs gated neural networks that are trained to separate the voices over multiple processing steps, while keeping the speaker assigned to each output channel fixed. A separate model is trained for each possible number of speakers, and the model with the largest number of speakers is used to estimate the actual number of speakers in a given sample. Our method greatly outperforms the current state of the art, which, as we show, is not competitive for more than two speakers.
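The selection step described above (run the largest-speaker-count model, then decide how many speakers are actually present) can be approximated by checking which output channels carry non-negligible energy. The following is a minimal sketch only, not the repository's actual implementation; the function name and the energy threshold are our own assumptions:

```python
import torch

def count_active_speakers(outputs: torch.Tensor, energy_thresh: float = 1e-3) -> int:
    """Estimate the number of speakers from separated waveforms.

    outputs: tensor of shape (num_channels, num_samples) produced by the
    model trained for the largest number of speakers. A channel is counted
    as an active speaker when its mean energy exceeds `energy_thresh`
    (a hypothetical threshold; the paper's actual criterion may differ).
    """
    energies = outputs.pow(2).mean(dim=-1)  # per-channel mean energy
    return int((energies > energy_thresh).sum().item())
```

In practice, one would then route the mixture to the separation model trained for exactly that many speakers.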
Main code: 1,717 LOC (19 files) = PY (94%) + YAML (5%)
Secondary code: Test: 0 LOC (0 files); Generated: 0 LOC (0 files); Build & Deploy: 7 LOC (1 file); Other: 398 LOC (6 files)
Duplication: 6%
File size: 0% long (> 1000 LOC), 87% short (<= 200 LOC)
Unit size: 0% long (> 100 LOC), 51% short (<= 10 LOC)
Conditional complexity: 0% complex (McCabe index > 50), 61% simple (McCabe index <= 5)
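The McCabe index used in the complexity thresholds above is cyclomatic complexity: one plus the number of decision points in a unit. As a rough illustration (not the exact counting rules sokrates.dev applies), it can be approximated for Python source with the standard `ast` module:

```python
import ast

# Node types treated as decision points in this sketch; real tools
# may count a slightly different set (e.g. one point per boolean operand).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.BoolOp,
                ast.ExceptHandler, ast.IfExp)

def mccabe_index(source: str) -> int:
    """Approximate cyclomatic complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))
```

A unit with one `for` loop and one `if` scores 3, comfortably under the "simple" threshold of 5 reported above.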
Logical component decomposition: primary (7 components)
Age: 1 year, 2 months
0% of code updated more than 50 times. Also see temporal dependencies for files frequently changed in the same commits.
Goals: keep the system simple and easy to change (4)
Latest commit date: 2021-12-19
Commits (30 days): 0
Contributors (30 days): 0
Generated by sokrates.dev on 2022-01-25