# recipes/sota/2019/librivox/train_am_resnet_s2s.cfg
# Replace `[...]`, `[MODEL_DST]`, `[DATA_DST]`, and `[DATA_DST_librilight]` with appropriate paths
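# Run name, run/architecture directories, and ResNet seq2seq architecture file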
--runname=am_resnet_s2s_librivox
--rundir=[...]
--archdir=[...]
--arch=am_arch/am_resnet_s2s_librivox.arch
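# 10k-unigram word-piece token set and lexicon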
--tokensdir=[MODEL_DST]/am
--tokens=librispeech-train-all-unigram-10000.tokens
--lexicon=[MODEL_DST]/am/librispeech-train+dev-unigram-10000-nbest10.lexicon
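# Training lists (LibriSpeech train sets + LibriVox) and validation lists (dev-clean, dev-other)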
--train=[DATA_DST]/lists/train-clean-100.lst,[DATA_DST]/lists/train-clean-360.lst,[DATA_DST]/lists/train-other-500.lst,[DATA_DST_librilight]/lists/librivox.lst
--valid=dev-clean:[DATA_DST]/lists/dev-clean.lst,dev-other:[DATA_DST]/lists/dev-other.lst
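# Optimization: SGD with momentum and gradient norm clipping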
--batchsize=4
--lr=0.06
--lrcrit=0.06
--momentum=0.1
--maxgradnorm=15
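# Log-mel filterbank (MFSC) input features; 6 data-loading threads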
--mfsc=true
--nthread=6
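# Seq2seq criterion with label smoothing; decoder output capped at 120 tokens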
--criterion=seq2seq
--maxdecoderoutputlen=120
--labelsmooth=0.05
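# Data ordering for batching (samples grouped by output length)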
--dataorder=output_spiral
--inputbinsize=25
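# Soft (Gaussian) attention window, used for pretraining and kept during training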
--attnWindow=softPretrain
--softwstd=4
--trainWithWindow=true
--pretrainWindow=5
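# Key-value attention; encoder output dimension 1024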
--attention=keyvalue
--encoderdim=1024
--memstepsize=8338608
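# Append EOS to targets; compute train metrics on 1% of data; 99% teacher forcing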
--eostoken=true
--pcttraineval=1
--pctteacherforcing=99
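# Word-piece targets with '_' as the word separator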
--listdata=true
--usewordpiece=true
--wordseparator=_
--target=ltr
--filterbanks=80
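# Step learning-rate decay (multiply LR by --gamma every --stepsize)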
--stepsize=300
--gamma=0.5
--sampletarget=0.01
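# Multi-GPU distributed training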
--enable_distributed=true
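# 30 ms analysis window with 10 ms stride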
--framesizems=30
--framestridems=10
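# Decoder: dropout 0.2, 2 attention rounds, 2 RNN layers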
--decoderdropout=0.2
--decoderattnround=2
--decoderrnnlayer=2
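# Random seed; run validation and report metrics every 2000 updates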
--seed=2
--reportiters=2000
--nthread_decoder=0
--mintsz=1
--lr_decay=10000