Duplicated code blocks (one entry per line; size = cleaned lines of code; ranges are start:end line numbers within each file):

block 1 (116 cleaned lines): fairseq/models/speech_to_speech/s2s_transformer.py (248:365), fairseq/models/speech_to_text/s2t_transformer.py (123:241)
block 2 (72 cleaned lines): fairseq/models/speech_to_text/convtransformer.py (46:117), fairseq/models/speech_to_text/s2t_transformer.py (144:215)
block 3 (72 cleaned lines): fairseq/models/speech_to_speech/s2s_transformer.py (268:339), fairseq/models/speech_to_text/convtransformer.py (46:117)
block 4 (64 cleaned lines): fairseq/models/speech_to_speech/s2s_transformer.py (423:488), fairseq/models/speech_to_text/s2t_transformer.py (123:189)
block 5 (64 cleaned lines): fairseq/models/speech_to_speech/s2s_transformer.py (248:313), fairseq/models/speech_to_speech/s2s_transformer.py (423:488)
block 6 (59 cleaned lines): fairseq/clib/libnat_cuda/edit_dist.cu (101:166), fairseq/clib/libnat_cuda/edit_dist.cu (185:250)
block 7 (46 cleaned lines): fairseq/models/speech_to_speech/s2s_transformer.py (443:488), fairseq/models/speech_to_text/convtransformer.py (46:91)
block 8 (42 cleaned lines): fairseq/tasks/language_modeling.py (321:370), fairseq/tasks/multilingual_language_modeling.py (565:614)
block 9 (41 cleaned lines): fairseq/models/bart/model.py (170:214), fairseq/models/roberta/model.py (403:445)
block 10 (38 cleaned lines): fairseq/models/nat/iterative_nonautoregressive_transformer.py (174:214), fairseq/models/nat/nonautoregressive_transformer.py (408:448)
block 11 (36 cleaned lines): fairseq/modules/dynamic_convolution.py (187:236), fairseq/modules/dynamicconv_layer/dynamicconv_layer.py (143:192)
block 12 (35 cleaned lines): fairseq/config/model/transformer_lm/transformer_lm_baevski_wiki103.yaml (2:36), fairseq/config/model/transformer_lm/transformer_lm_wiki103.yaml (2:36)
block 13 (35 cleaned lines): fairseq/models/nat/insertion_transformer.py (242:277), fairseq/models/nat/iterative_nonautoregressive_transformer.py (174:209)
block 14 (35 cleaned lines): fairseq/models/nat/insertion_transformer.py (242:277), fairseq/models/nat/nonautoregressive_transformer.py (408:443)
block 15 (35 cleaned lines): fairseq/config/model/transformer_lm/transformer_lm_baevski_gbw.yaml (2:36), fairseq/config/model/transformer_lm/transformer_lm_gbw.yaml (2:36)
block 16 (34 cleaned lines): fairseq/models/nat/iterative_nonautoregressive_transformer.py (174:208), fairseq/models/nat/levenshtein_transformer.py (432:466)
block 17 (34 cleaned lines): fairseq/models/nat/insertion_transformer.py (242:276), fairseq/models/nat/levenshtein_transformer.py (432:466)
block 18 (34 cleaned lines): fairseq/models/nat/levenshtein_transformer.py (432:466), fairseq/models/nat/nonautoregressive_transformer.py (408:442)
block 19 (33 cleaned lines): fairseq/models/speech_to_text/s2t_transformer.py (385:421), fairseq/models/speech_to_text/xm_transformer.py (326:362)
block 20 (31 cleaned lines): fairseq/models/nat/nonautoregressive_transformer.py (407:437), fairseq/models/transformer/transformer_legacy.py (169:199)
block 21 (31 cleaned lines): fairseq/clib/libnat/edit_dist.cpp (58:98), fairseq/clib/libnat/edit_dist.cpp (130:170)
block 22 (30 cleaned lines): fairseq/models/nat/levenshtein_transformer.py (432:461), fairseq/models/transformer/transformer_legacy.py (170:199)
block 23 (30 cleaned lines): fairseq/models/nat/iterative_nonautoregressive_transformer.py (174:203), fairseq/models/transformer/transformer_legacy.py (170:199)
block 24 (30 cleaned lines): fairseq/models/nat/insertion_transformer.py (242:271), fairseq/models/transformer/transformer_legacy.py (170:199)
block 25 (29 cleaned lines): fairseq/models/text_to_speech/fastspeech2.py (341:373), fairseq/models/text_to_speech/tts_transformer.py (335:367)
block 26 (29 cleaned lines): fairseq/modules/dynamic_convolution.py (244:278), fairseq/modules/dynamicconv_layer/dynamicconv_layer.py (193:227)
block 27 (27 cleaned lines): fairseq/benchmark/dummy_lm.py (51:83), fairseq/benchmark/dummy_masked_lm.py (62:94)
block 28 (27 cleaned lines): fairseq/optim/fp16_optimizer.py (257:286), fairseq/optim/fp16_optimizer.py (491:520)
block 29 (26 cleaned lines): fairseq/model_parallel/modules/multihead_attention.py (123:153), fairseq/modules/multihead_attention.py (331:361)
block 30 (26 cleaned lines): fairseq/tasks/language_modeling.py (43:68), fairseq/tasks/multilingual_language_modeling.py (51:76)
block 31 (25 cleaned lines): fairseq/models/nat/cmlm_transformer.py (115:139), fairseq/models/nat/insertion_transformer.py (242:266)
block 32 (25 cleaned lines): fairseq/models/nat/cmlm_transformer.py (115:139), fairseq/models/nat/nonautoregressive_transformer.py (408:432)
block 33 (25 cleaned lines): fairseq/models/nat/cmlm_transformer.py (115:139), fairseq/models/transformer/transformer_legacy.py (170:194)
block 34 (25 cleaned lines): fairseq/models/nat/cmlm_transformer.py (115:139), fairseq/models/nat/iterative_nonautoregressive_transformer.py (174:198)
block 35 (25 cleaned lines): fairseq/models/nat/cmlm_transformer.py (115:139), fairseq/models/nat/levenshtein_transformer.py (432:456)
block 36 (25 cleaned lines): fairseq/config/model/transformer_lm/transformer_lm_big.yaml (12:36), fairseq/config/model/transformer_lm/transformer_lm_gpt2_small.yaml (12:36)
block 37 (24 cleaned lines): fairseq/config/model/transformer_lm/transformer_lm_gpt2_big.yaml (13:36), fairseq/config/model/transformer_lm/transformer_lm_gpt2_small.yaml (13:36)
block 38 (24 cleaned lines): fairseq/config/model/transformer_lm/transformer_lm_gpt.yaml (13:36), fairseq/config/model/transformer_lm/transformer_lm_gpt2_small.yaml (13:36)
block 39 (24 cleaned lines): fairseq/models/speech_to_speech/s2s_transformer.py (348:372), fairseq/models/speech_to_speech/s2s_transformer.py (487:511)
block 40 (24 cleaned lines): fairseq/config/model/transformer_lm/transformer_lm_big.yaml (13:36), fairseq/config/model/transformer_lm/transformer_lm_gpt2_big.yaml (13:36)
block 41 (24 cleaned lines): fairseq/config/model/transformer_lm/transformer_lm_gpt.yaml (13:36), fairseq/config/model/transformer_lm/transformer_lm_gpt2_big.yaml (13:36)
block 42 (24 cleaned lines): fairseq/config/model/transformer_lm/transformer_lm_gpt2_medium.yaml (13:36), fairseq/config/model/transformer_lm/transformer_lm_gpt2_small.yaml (13:36)
block 43 (24 cleaned lines): fairseq/benchmark/dummy_dataset.py (5:36), fairseq/benchmark/dummy_mt.py (88:119)
block 44 (24 cleaned lines): fairseq/config/model/transformer_lm/transformer_lm_big.yaml (13:36), fairseq/config/model/transformer_lm/transformer_lm_gpt2_medium.yaml (13:36)
block 45 (24 cleaned lines): fairseq/config/model/transformer_lm/transformer_lm_gpt.yaml (13:36), fairseq/config/model/transformer_lm/transformer_lm_gpt2_medium.yaml (13:36)
block 46 (24 cleaned lines): fairseq/config/model/transformer_lm/transformer_lm_big.yaml (13:36), fairseq/config/model/transformer_lm/transformer_lm_gpt.yaml (13:36)
block 47 (24 cleaned lines): fairseq/config/model/transformer_lm/transformer_lm_gpt2_big.yaml (13:36), fairseq/config/model/transformer_lm/transformer_lm_gpt2_medium.yaml (13:36)
block 48 (23 cleaned lines): fairseq/models/speech_to_speech/s2s_transformer.py (338:360), fairseq/models/speech_to_text/convtransformer.py (122:144)
block 49 (23 cleaned lines): fairseq/tasks/speech_to_speech.py (135:157), fairseq/tasks/speech_to_text.py (25:47)
block 50 (23 cleaned lines): fairseq/models/speech_to_text/convtransformer.py (122:144), fairseq/models/speech_to_text/s2t_transformer.py (214:236)
block 51 (22 cleaned lines): fairseq/config/model/transformer_lm/transformer_lm_gbw.yaml (15:36), fairseq/config/model/transformer_lm/transformer_lm_gpt2_big.yaml (15:36)
block 52 (22 cleaned lines): fairseq/config/model/transformer_lm/transformer_lm_big.yaml (15:36), fairseq/config/model/transformer_lm/transformer_lm_gbw.yaml (15:36)
block 53 (22 cleaned lines): fairseq/criterions/speech_to_speech_criterion.py (170:199), fairseq/criterions/speech_to_speech_criterion.py (282:310)
block 54 (22 cleaned lines): fairseq/config/model/transformer_lm/transformer_lm_baevski_gbw.yaml (15:36), fairseq/config/model/transformer_lm/transformer_lm_gpt2_big.yaml (15:36)
block 55 (22 cleaned lines): fairseq/criterions/cross_entropy.py (60:90), fairseq/model_parallel/criterions/vocab_parallel_cross_entropy.py (58:87)
block 56 (22 cleaned lines): fairseq/config/model/transformer_lm/transformer_lm_baevski_gbw.yaml (15:36), fairseq/config/model/transformer_lm/transformer_lm_gpt2_small.yaml (15:36)
block 57 (22 cleaned lines): fairseq/modules/quantization/scalar/modules/qemb.py (90:120), fairseq/modules/quantization/scalar/modules/qlinear.py (69:99)
block 58 (22 cleaned lines): fairseq/config/model/transformer_lm/transformer_lm_baevski_gbw.yaml (15:36), fairseq/config/model/transformer_lm/transformer_lm_big.yaml (15:36)
block 59 (22 cleaned lines): fairseq/models/speech_to_text/s2t_transformer.py (96:121), fairseq/models/speech_to_text/xm_transformer.py (459:484)
block 60 (22 cleaned lines): fairseq/config/model/transformer_lm/transformer_lm_gbw.yaml (15:36), fairseq/config/model/transformer_lm/transformer_lm_gpt2_small.yaml (15:36)
block 61 (22 cleaned lines): fairseq/models/speech_to_text/s2t_transformer.py (194:215), fairseq/models/speech_to_text/xm_transformer.py (390:411)
block 62 (22 cleaned lines): fairseq/modules/quantization/scalar/modules/qconv.py (96:126), fairseq/modules/quantization/scalar/modules/qemb.py (90:120)
block 63 (22 cleaned lines): fairseq/models/speech_to_speech/s2s_transformer.py (318:339), fairseq/models/speech_to_text/xm_transformer.py (390:411)
block 64 (22 cleaned lines): fairseq/modules/quantization/scalar/modules/qconv.py (96:126), fairseq/modules/quantization/scalar/modules/qlinear.py (69:99)
block 65 (22 cleaned lines): fairseq/config/model/transformer_lm/transformer_lm_baevski_gbw.yaml (15:36), fairseq/config/model/transformer_lm/transformer_lm_gpt.yaml (15:36)
block 66 (22 cleaned lines): fairseq/models/speech_to_text/convtransformer.py (96:117), fairseq/models/speech_to_text/xm_transformer.py (390:411)
block 67 (22 cleaned lines): fairseq/config/model/transformer_lm/transformer_lm_baevski_gbw.yaml (15:36), fairseq/config/model/transformer_lm/transformer_lm_gpt2_medium.yaml (15:36)
block 68 (22 cleaned lines): fairseq/config/model/transformer_lm/transformer_lm_gbw.yaml (15:36), fairseq/config/model/transformer_lm/transformer_lm_gpt.yaml (15:36)
block 69 (22 cleaned lines): fairseq/config/model/transformer_lm/transformer_lm_gbw.yaml (15:36), fairseq/config/model/transformer_lm/transformer_lm_gpt2_medium.yaml (15:36)
block 70 (21 cleaned lines): fairseq/models/speech_to_speech/s2s_transformer.py (88:128), fairseq/models/transformer/transformer_decoder.py (185:225)
block 71 (21 cleaned lines): fairseq/tasks/multilingual_language_modeling.py (468:494), fairseq/tasks/multilingual_masked_lm.py (276:302)
block 72 (21 cleaned lines): fairseq/models/nat/levenshtein_transformer.py (359:384), fairseq/models/nat/nonautoregressive_transformer.py (304:329)
block 73 (21 cleaned lines): fairseq/tasks/masked_lm.py (246:268), fairseq/tasks/multilingual_masked_lm.py (316:338)
block 74 (21 cleaned lines): fairseq/optim/amp_optimizer.py (79:106), fairseq/optim/fp16_optimizer.py (306:333)
block 75 (20 cleaned lines): fairseq/tasks/multilingual_denoising.py (232:254), fairseq/tasks/multilingual_language_modeling.py (469:494)
block 76 (20 cleaned lines): fairseq/models/roberta/model_camembert.py (28:50), fairseq/models/roberta/model_xlmr.py (24:46)
block 77 (20 cleaned lines): fairseq/models/text_to_speech/hifigan.py (61:80), fairseq/models/text_to_speech/hifigan.py (71:90)
block 78 (20 cleaned lines): fairseq/models/wav2vec/wav2vec2.py (174:195), fairseq/models/wav2vec/wav2vec2_asr.py (111:132)
block 79 (20 cleaned lines): fairseq/tasks/multilingual_denoising.py (232:254), fairseq/tasks/multilingual_masked_lm.py (277:302)
block 80 (20 cleaned lines): fairseq/criterions/hubert_criterion.py (143:165), fairseq/model_parallel/criterions/vocab_parallel_cross_entropy.py (56:78)
block 81 (19 cleaned lines): fairseq/criterions/cross_entropy.py (60:81), fairseq/criterions/hubert_criterion.py (145:165)
block 82 (19 cleaned lines): fairseq/tasks/fairseq_task.py (207:225), fairseq/tasks/translation_multi_simple_epoch.py (337:355)
block 83 (19 cleaned lines): fairseq/models/speech_to_text/s2t_transformer.py (263:287), fairseq/models/speech_to_text/xm_transformer.py (545:570)
block 84 (19 cleaned lines): fairseq/model_parallel/models/pipeline_parallel_transformer/model.py (754:775), fairseq/models/transformer/transformer_decoder.py (424:445)
block 85 (18 cleaned lines): fairseq/data/audio/speech_to_text_dataset.py (137:154), fairseq/data/audio/text_to_speech_dataset.py (37:54)
block 86 (18 cleaned lines): fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py (131:151), fairseq/models/lightconv.py (453:473)
block 87 (18 cleaned lines): fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py (526:543), fairseq/models/lightconv.py (838:855)
block 88 (18 cleaned lines): fairseq/models/speech_to_speech/s2s_transformer.py (487:504), fairseq/models/speech_to_text/s2t_transformer.py (224:241)
block 89 (18 cleaned lines): fairseq/models/speech_to_text/modules/emformer.py (562:579), fairseq/models/speech_to_text/modules/emformer.py (679:696)
block 90 (18 cleaned lines): fairseq/models/hubert/hubert_asr.py (175:192), fairseq/models/wav2vec/wav2vec2_asr.py (255:272)
block 91 (18 cleaned lines): fairseq/criterions/fastspeech2_loss.py (116:136), fairseq/criterions/tacotron2_loss.py (207:227)
block 92 (18 cleaned lines): fairseq/models/hubert/hubert.py (150:169), fairseq/models/hubert/hubert_asr.py (81:100)
block 93 (17 cleaned lines): fairseq/models/nat/levenshtein_transformer.py (440:456), fairseq/models/speech_to_text/convtransformer.py (408:424)
block 94 (17 cleaned lines): fairseq/models/lightconv.py (119:135), fairseq/models/speech_to_text/convtransformer.py (67:83)
block 95 (17 cleaned lines): fairseq/models/nat/iterative_nonautoregressive_transformer.py (182:198), fairseq/models/speech_to_text/convtransformer.py (408:424)
block 96 (17 cleaned lines): fairseq/models/lightconv.py (162:178), fairseq/models/speech_to_text/convtransformer.py (93:109)
block 97 (17 cleaned lines): fairseq/model_parallel/modules/multihead_attention.py (331:349), fairseq/modules/multihead_attention.py (585:603)
block 98 (17 cleaned lines): fairseq/models/lightconv.py (162:178), fairseq/models/speech_to_text/s2t_transformer.py (191:207)
block 99 (17 cleaned lines): fairseq/models/lightconv.py (119:135), fairseq/models/speech_to_speech/s2s_transformer.py (289:305)
block 100 (17 cleaned lines): fairseq/models/masked_lm.py (65:85), fairseq/models/speech_to_speech/s2s_transformer.py (292:308)
block 101 (17 cleaned lines): fairseq/models/hubert/hubert.py (330:346), fairseq/models/wav2vec/wav2vec2.py (422:438)
block 102 (17 cleaned lines): fairseq/models/hubert/hubert.py (46:62), fairseq/models/wav2vec/wav2vec2.py (49:65)
block 103 (17 cleaned lines): fairseq/data/huffman/huffman_mmap_indexed_dataset.py (45:69), fairseq/data/indexed_dataset.py (422:443)
block 104 (17 cleaned lines): fairseq/models/speech_to_text/convtransformer.py (408:424), fairseq/models/transformer/transformer_legacy.py (178:194)
block 105 (17 cleaned lines): fairseq/models/lightconv.py (119:135), fairseq/models/speech_to_speech/s2s_transformer.py (464:480)
block 106 (17 cleaned lines): fairseq/criterions/adaptive_loss.py (101:123), fairseq/criterions/cross_entropy.py (68:90)
block 107 (17 cleaned lines): fairseq/models/nat/cmlm_transformer.py (123:139), fairseq/models/speech_to_text/convtransformer.py (408:424)
block 108 (17 cleaned lines): fairseq/models/masked_lm.py (65:85), fairseq/models/speech_to_text/s2t_transformer.py (168:184)
block 109 (17 cleaned lines): fairseq/models/masked_lm.py (65:85), fairseq/models/speech_to_text/convtransformer.py (70:86)
block 110 (17 cleaned lines): fairseq/models/masked_lm.py (65:85), fairseq/models/speech_to_speech/s2s_transformer.py (467:483)
block 111 (17 cleaned lines): fairseq/models/lightconv.py (119:135), fairseq/models/speech_to_text/s2t_transformer.py (165:181)
block 112 (17 cleaned lines): fairseq/models/nat/nonautoregressive_transformer.py (416:432), fairseq/models/speech_to_text/convtransformer.py (408:424)
block 113 (17 cleaned lines): fairseq/models/lightconv.py (162:178), fairseq/models/speech_to_speech/s2s_transformer.py (315:331)
block 114 (17 cleaned lines): fairseq/models/lightconv.py (165:181), fairseq/models/lightconv_lm.py (65:81)
block 115 (17 cleaned lines): fairseq/models/lightconv.py (238:255), fairseq/models/lightconv_lm.py (185:202)
block 116 (17 cleaned lines): fairseq/models/nat/insertion_transformer.py (250:266), fairseq/models/speech_to_text/convtransformer.py (408:424)
block 117 (17 cleaned lines): fairseq/criterions/adaptive_loss.py (101:123), fairseq/model_parallel/criterions/vocab_parallel_cross_entropy.py (65:87)
block 118 (16 cleaned lines): fairseq/data/audio/frm_text_to_speech_dataset.py (54:69), fairseq/data/audio/frm_text_to_speech_dataset.py (182:197)
block 119 (16 cleaned lines): fairseq/tasks/translation.py (333:351), fairseq/tasks/translation_lev.py (48:66)
block 120 (16 cleaned lines): fairseq/models/hubert/hubert.py (330:345), fairseq/models/wav2vec/wav2vec2.py (462:477)
block 121 (16 cleaned lines): fairseq/models/wav2vec/wav2vec2.py (422:437), fairseq/models/wav2vec/wav2vec2.py (462:477)
block 122 (16 cleaned lines): fairseq/models/speech_to_text/convtransformer.py (420:436), fairseq/models/speech_to_text/s2t_transformer.py (474:489)
block 123 (16 cleaned lines): fairseq/data/audio/speech_to_text_dataset.py (418:433), fairseq/data/audio/text_to_speech_dataset.py (61:76)
block 124 (16 cleaned lines): fairseq/modules/quantization/pq/modules/qemb.py (46:61), fairseq/modules/quantization/scalar/modules/qemb.py (50:65)
block 125 (16 cleaned lines): fairseq/model_parallel/modules/multihead_attention.py (283:303), fairseq/modules/multihead_attention.py (522:542)
block 126 (16 cleaned lines): fairseq/data/multilingual/multilingual_data_manager.py (608:623), fairseq/tasks/translation.py (40:55)
block 127 (15 cleaned lines): fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py (561:575), fairseq/models/lightconv.py (599:613)
block 128 (15 cleaned lines): fairseq/modules/dynamic_convolution.py (276:294), fairseq/modules/lightweight_convolution.py (281:299)
block 129 (15 cleaned lines): fairseq/models/nat/nat_crf_transformer.py (45:64), fairseq/models/nat/nonautoregressive_transformer.py (82:101)
block 130 (15 cleaned lines): fairseq/models/speech_to_text/modules/emformer.py (544:558), fairseq/models/speech_to_text/modules/emformer.py (662:676)
block 131 (15 cleaned lines): fairseq/models/bart/hub_interface.py (143:160), fairseq/models/roberta/hub_interface.py (95:112)
block 132 (15 cleaned lines): fairseq/tasks/language_modeling.py (177:194), fairseq/tasks/multilingual_language_modeling.py (246:263)
block 133 (15 cleaned lines): fairseq/optim/lr_scheduler/cosine_lr_scheduler.py (19:33), fairseq/optim/lr_scheduler/step_lr_scheduler.py (18:32)
block 134 (15 cleaned lines): fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu (57:75), fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu (127:145)
block 135 (15 cleaned lines): fairseq/tasks/language_modeling.py (45:59), fairseq/tasks/masked_lm.py (46:60)
block 136 (15 cleaned lines): fairseq/tasks/masked_lm.py (46:60), fairseq/tasks/multilingual_language_modeling.py (53:67)
block 137 (14 cleaned lines): fairseq/criterions/adaptive_loss.py (101:114), fairseq/criterions/hubert_criterion.py (152:165)
block 138 (14 cleaned lines): fairseq/data/audio/speech_to_text_joint_dataset.py (103:116), fairseq/data/audio/text_to_speech_dataset.py (60:73)
block 139 (14 cleaned lines): fairseq/models/lightconv_lm.py (65:78), fairseq/models/speech_to_text/xm_transformer.py (390:403)
block 140 (14 cleaned lines): fairseq/models/speech_to_text/convtransformer.py (173:192), fairseq/models/speech_to_text/xm_transformer.py (543:563)
block 141 (14 cleaned lines): fairseq/models/speech_to_speech/s2s_transformer.py (640:653), fairseq/models/speech_to_text/convtransformer.py (420:434)
block 142 (14 cleaned lines): fairseq_cli/train.py (271:284), fairseq_cli/train.py (456:469)
block 143 (14 cleaned lines): fairseq/models/speech_to_speech/s2s_transformer.py (640:653), fairseq/models/speech_to_text/s2t_transformer.py (474:487)
block 144 (14 cleaned lines): fairseq/criterions/fastspeech2_loss.py (72:85), fairseq/criterions/tacotron2_loss.py (140:153)
block 145 (14 cleaned lines): fairseq/models/lightconv.py (656:670), fairseq/models/lightconv.py (764:778)
block 146 (14 cleaned lines): fairseq/optim/adam.py (151:173), fairseq/optim/adamax.py (99:121)
block 147 (14 cleaned lines): fairseq/tasks/masked_lm.py (88:101), fairseq/tasks/multilingual_language_modeling.py (97:111)
block 148 (14 cleaned lines): fairseq/models/speech_to_text/convtransformer.py (383:397), fairseq/models/speech_to_text/s2t_transformer.py (433:447)
block 149 (14 cleaned lines): fairseq/data/multilingual/multilingual_data_manager.py (156:169), fairseq/tasks/denoising.py (110:123)
block 150 (14 cleaned lines): fairseq/data/append_token_dataset.py (25:41), fairseq/data/prepend_token_dataset.py (25:41)
block 151 (14 cleaned lines): fairseq/models/lightconv_lm.py (65:78), fairseq/models/speech_to_text/s2t_transformer.py (194:207)
block 152 (14 cleaned lines): fairseq/models/lightconv_lm.py (65:78), fairseq/models/speech_to_text/convtransformer.py (96:109)
block 153 (14 cleaned lines): fairseq/tasks/language_modeling.py (73:86), fairseq/tasks/multilingual_language_modeling.py (97:111)
block 154 (14 cleaned lines): fairseq/models/speech_to_text/modules/augmented_memory_attention.py (198:211), fairseq/modules/multihead_attention.py (26:39)
block 155 (14 cleaned lines): fairseq/models/fconv_lm.py (44:57), fairseq/models/lightconv_lm.py (86:99)
block 156 (14 cleaned lines): fairseq/models/wav2vec/wav2vec2.py (197:210), fairseq/models/wav2vec/wav2vec2_asr.py (133:146)
block 157 (14 cleaned lines): fairseq/models/lightconv_lm.py (65:78), fairseq/models/speech_to_speech/s2s_transformer.py (318:331)
block 158 (14 cleaned lines): fairseq/models/lightconv.py (165:178), fairseq/models/speech_to_text/xm_transformer.py (390:403)
block 159 (14 cleaned lines): fairseq/models/lightconv.py (122:135), fairseq/models/masked_lm.py (65:80)
block 160 (14 cleaned lines): fairseq/data/denoising_dataset.py (46:60), fairseq/data/language_pair_dataset.py (77:91)
block 161 (14 cleaned lines): fairseq/models/nat/cmlm_transformer.py (141:157), fairseq/models/nat/iterative_nonautoregressive_transformer.py (200:216)
block 162 (14 cleaned lines): fairseq/data/huffman/huffman_mmap_indexed_dataset.py (83:97), fairseq/data/indexed_dataset.py (456:470)
block 163 (14 cleaned lines): fairseq/tasks/language_modeling.py (73:86), fairseq/tasks/masked_lm.py (88:101)
block 164 (14 cleaned lines): fairseq/models/speech_to_speech/s2s_transformer.py (675:690), fairseq/models/text_to_speech/tts_transformer.py (439:454)
block 165 (13 cleaned lines): fairseq/models/lightconv.py (537:554), fairseq/models/wav2vec/wav2vec2_asr.py (656:673)
block 166 (13 cleaned lines): fairseq/data/audio/frm_text_to_speech_dataset.py (31:43), fairseq/data/audio/text_to_speech_dataset.py (42:54)
block 167 (13 cleaned lines): fairseq/models/speech_to_text/berard.py (109:121), fairseq/models/speech_to_text/convtransformer.py (137:149)
block 168 (13 cleaned lines): fairseq/data/multilingual/multilingual_data_manager.py (608:620), fairseq/data/multilingual/multilingual_data_manager.py (806:818)
block 169 (13 cleaned lines): fairseq/model_parallel/models/pipeline_parallel_transformer/model.py (729:742), fairseq/models/wav2vec/wav2vec2_asr.py (723:736)
block 170 (13 cleaned lines): fairseq/models/lightconv.py (912:924), fairseq/models/transformer/transformer_legacy.py (174:186)
block 171 (13 cleaned lines): fairseq/models/speech_to_speech/s2s_transformer.py (487:499), fairseq/models/speech_to_text/convtransformer.py (132:144)
block 172 (13 cleaned lines): fairseq/models/hubert/hubert_asr.py (345:361), fairseq/models/wav2vec/wav2vec2_asr.py (736:752)
block 173 (13 cleaned lines): fairseq/models/lightconv.py (912:924), fairseq/models/nat/levenshtein_transformer.py (436:448)
block 174 (13 cleaned lines): fairseq/models/nat/levenshtein_transformer.py (198:211), fairseq/models/nat/nonautoregressive_ensembles.py (202:214)
block 175 (13 cleaned lines): fairseq/model_parallel/models/pipeline_parallel_transformer/model.py (269:281), fairseq/models/transformer/transformer_legacy.py (110:122)
block 176 (13 cleaned lines): fairseq/models/text_to_speech/tacotron2.py (47:59), fairseq/models/text_to_speech/tts_transformer.py (64:76)
block 177 (13 cleaned lines): fairseq/models/lightconv.py (912:924), fairseq/models/nat/nonautoregressive_transformer.py (412:424)
block 178 (13 cleaned lines): fairseq/models/lightconv.py (912:924), fairseq/models/nat/insertion_transformer.py (246:258)
block 179 (13 cleaned lines): fairseq/models/speech_to_text/modules/emformer.py (858:870), fairseq/models/speech_to_text/modules/emformer.py (900:916)
block 180 (13 cleaned lines): fairseq/data/audio/frm_text_to_speech_dataset.py (31:43), fairseq/data/audio/speech_to_text_dataset.py (142:154)
block 181 (13 cleaned lines): fairseq/criterions/sentence_prediction.py (122:141), fairseq/criterions/sentence_ranking.py (101:120)
block 182 (13 cleaned lines): fairseq/models/fairseq_decoder.py (59:77), fairseq/models/fairseq_model.py (63:81)
block 183 (13 cleaned lines): fairseq/logging/progress_bar.py (183:198), fairseq/logging/progress_bar.py (253:268)
block 184 (13 cleaned lines): fairseq/models/transformer/transformer_config.py (149:161), fairseq/models/transformer_lm.py (117:129)
block 185 (13 cleaned lines): fairseq/models/lightconv.py (912:924), fairseq/models/nat/cmlm_transformer.py (119:131)
block 186 (13 cleaned lines): fairseq/models/lightconv.py (912:924), fairseq/models/nat/iterative_nonautoregressive_transformer.py (178:190)
block 187 (13 cleaned lines): fairseq/data/multilingual/multilingual_data_manager.py (806:818), fairseq/tasks/translation.py (40:52)
block 188 (13 cleaned lines): fairseq/model_parallel/models/roberta/model.py (98:111), fairseq/models/roberta/model.py (329:342)
block 189 (13 cleaned lines): fairseq/models/speech_to_speech/s2s_transformer.py (161:174), fairseq/models/speech_to_text/s2t_transformer.py (246:259)
block 190 (13 cleaned lines): fairseq/models/speech_to_text/convtransformer.py (175:192), fairseq/models/speech_to_text/s2t_transformer.py (263:280)
block 191 (13 cleaned lines): fairseq/data/audio/speech_to_text_dataset.py (418:430), fairseq/data/audio/speech_to_text_joint_dataset.py (104:116)
block 192 (12 cleaned lines): fairseq/tasks/denoising.py (125:136), fairseq/tasks/sentence_ranking.py (57:68)
block 193 (12 cleaned lines): fairseq/optim/adafactor.py (168:186), fairseq/optim/adam.py (159:177)
block 194 (12 cleaned lines): fairseq/models/text_to_speech/tts_transformer.py (301:313), fairseq/models/transformer/transformer_decoder.py (386:398)
block 195 (12 cleaned lines): fairseq/models/nat/cmlm_transformer.py (141:155), fairseq/models/nat/nonautoregressive_transformer.py (434:448)
block 196 (12 cleaned lines): fairseq/data/iterators.py (479:494), fairseq/data/iterators.py (802:816)
block 197 (12 cleaned lines): fairseq/models/speech_to_text/convtransformer.py (363:375), fairseq/models/speech_to_text/s2t_transformer.py (409:421)
block 198 (12 cleaned lines): fairseq/modules/quantization/pq/modules/qemb.py (93:104), fairseq/modules/quantization/scalar/modules/qemb.py (134:145)
block 199 (12 cleaned lines): fairseq/data/audio/frm_text_to_speech_dataset.py (59:70), fairseq/data/audio/text_to_speech_dataset.py (65:76)
block 200 (12 cleaned lines): fairseq/models/roberta/model.py (96:107), fairseq/models/speech_to_text/convtransformer.py (48:59)
block 201 (12 cleaned lines): fairseq/models/lightconv.py (185:196), fairseq/models/speech_to_speech/s2s_transformer.py (333:344)
block 202 (12 cleaned lines): fairseq/models/hubert/hubert_asr.py (328:341), fairseq/models/lightconv.py (396:419)
block 203 (12 cleaned lines): fairseq/models/roberta/model.py (96:107), fairseq/models/speech_to_text/s2t_transformer.py (146:157)
block 204 (12 cleaned lines): fairseq/data/audio/speech_to_text_dataset.py (394:405), fairseq/data/audio/text_to_speech_dataset.py (187:198)
block 205 (12 cleaned lines): fairseq/optim/cpu_adam.py (195:210), fairseq/optim/fused_adam.py (236:251)
block 206 (12 cleaned lines): fairseq/modules/dynamic_convolution.py (175:187), fairseq/modules/dynamic_convolution.py (233:244)
block 207 (12 cleaned lines): fairseq/modules/dynamic_convolution.py (17:28), fairseq/modules/dynamic_convolution.py (97:108)
block 208 (12 cleaned lines): fairseq/tasks/fairseq_task.py (469:480), fairseq/tasks/translation_from_pretrained_bart.py (104:115)
block 209 (12 cleaned lines): fairseq/models/roberta/model.py (96:107), fairseq/models/speech_to_speech/s2s_transformer.py (270:281)
block 210 (12 cleaned lines): -
fairseq/models/fconv.py (449:460) - fairseq/models/fconv_self_att.py (380:392) duplicated block id: 211 size: 12 cleaned lines of code in 2 files: - fairseq/data/audio/frm_text_to_speech_dataset.py (59:70) - fairseq/data/audio/speech_to_text_dataset.py (422:433) duplicated block id: 212 size: 12 cleaned lines of code in 2 files: - fairseq/model_parallel/models/roberta/model.py (127:143) - fairseq/models/roberta/model.py (473:489) duplicated block id: 213 size: 12 cleaned lines of code in 2 files: - fairseq/modules/dynamic_convolution.py (278:292) - fairseq/modules/dynamicconv_layer/dynamicconv_layer.py (119:133) duplicated block id: 214 size: 12 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/convtransformer.py (363:375) - fairseq/models/speech_to_text/xm_transformer.py (350:362) duplicated block id: 215 size: 12 cleaned lines of code in 2 files: - fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py (176:188) - fairseq/models/lightconv.py (533:545) duplicated block id: 216 size: 12 cleaned lines of code in 2 files: - fairseq/models/hubert/hubert.py (80:92) - fairseq/models/wav2vec/wav2vec2.py (81:93) duplicated block id: 217 size: 12 cleaned lines of code in 2 files: - fairseq/data/audio/speech_to_text_dataset.py (539:553) - fairseq/data/audio/speech_to_text_joint_dataset.py (345:359) duplicated block id: 218 size: 12 cleaned lines of code in 2 files: - fairseq/criterions/ctc.py (225:240) - fairseq/criterions/wav2vec_criterion.py (164:178) duplicated block id: 219 size: 12 cleaned lines of code in 2 files: - fairseq/models/nat/cmlm_transformer.py (39:53) - fairseq/models/nat/nat_crf_transformer.py (49:64) duplicated block id: 220 size: 12 cleaned lines of code in 2 files: - fairseq/models/speech_to_speech/s2s_transformer.py (515:528) - fairseq/models/text_to_speech/tts_transformer.py (382:395) duplicated block id: 221 size: 12 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/modules/augmented_memory_attention.py 
(217:228) - fairseq/modules/sparse_multihead_attention.py (39:51) duplicated block id: 222 size: 12 cleaned lines of code in 2 files: - fairseq/modules/multihead_attention.py (26:37) - fairseq/modules/sparse_multihead_attention.py (24:35) duplicated block id: 223 size: 12 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (185:196) - fairseq/models/speech_to_text/s2t_transformer.py (209:220) duplicated block id: 224 size: 12 cleaned lines of code in 2 files: - fairseq/modules/sparse_transformer_sentence_encoder.py (30:41) - fairseq/modules/transformer_sentence_encoder.py (90:101) duplicated block id: 225 size: 12 cleaned lines of code in 2 files: - fairseq/models/roberta/model.py (96:107) - fairseq/models/speech_to_speech/s2s_transformer.py (445:456) duplicated block id: 226 size: 12 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (190:201) - fairseq/models/roberta/enc_dec.py (35:46) duplicated block id: 227 size: 12 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/modules/augmented_memory_attention.py (198:209) - fairseq/modules/sparse_multihead_attention.py (24:35) duplicated block id: 228 size: 12 cleaned lines of code in 2 files: - fairseq/tasks/denoising.py (113:125) - fairseq/tasks/speech_to_speech.py (147:158) duplicated block id: 229 size: 12 cleaned lines of code in 2 files: - fairseq/data/audio/speech_to_text_dataset.py (417:428) - fairseq/data/audio/speech_to_text_joint_dataset.py (259:270) duplicated block id: 230 size: 12 cleaned lines of code in 2 files: - fairseq/models/nat/cmlm_transformer.py (39:53) - fairseq/models/nat/nonautoregressive_transformer.py (86:101) duplicated block id: 231 size: 12 cleaned lines of code in 2 files: - fairseq/data/audio/raw_audio_dataset.py (261:272) - fairseq/data/audio/raw_audio_dataset.py (342:353) duplicated block id: 232 size: 12 cleaned lines of code in 2 files: - fairseq/modules/dynamicconv_layer/dynamicconv_layer.py (119:133) - 
fairseq/modules/lightweight_convolution.py (283:297) duplicated block id: 233 size: 11 cleaned lines of code in 2 files: - fairseq/tasks/language_modeling.py (275:286) - fairseq/tasks/multilingual_language_modeling.py (504:515) duplicated block id: 234 size: 11 cleaned lines of code in 2 files: - fairseq/criterions/label_smoothed_cross_entropy.py (81:91) - fairseq/criterions/speech_to_speech_criterion.py (155:165) duplicated block id: 235 size: 11 cleaned lines of code in 2 files: - fairseq/tasks/multilingual_denoising.py (217:228) - fairseq/tasks/multilingual_masked_lm.py (264:275) duplicated block id: 236 size: 11 cleaned lines of code in 2 files: - fairseq/modules/dynamicconv_layer/dynamicconv_layer.py (121:133) - fairseq/modules/lightconv_layer/lightconv_layer.py (122:134) duplicated block id: 237 size: 11 cleaned lines of code in 2 files: - fairseq/data/multilingual/multilingual_data_manager.py (159:169) - fairseq/tasks/speech_to_text.py (37:47) duplicated block id: 238 size: 11 cleaned lines of code in 2 files: - fairseq/modules/sparse_transformer_sentence_encoder.py (19:29) - fairseq/modules/transformer_sentence_encoder.py (78:88) duplicated block id: 239 size: 11 cleaned lines of code in 2 files: - fairseq/data/audio/speech_to_text_joint_dataset.py (104:114) - fairseq/data/audio/speech_to_text_joint_dataset.py (260:270) duplicated block id: 240 size: 11 cleaned lines of code in 2 files: - fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu (22:35) - fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu (87:100) duplicated block id: 241 size: 11 cleaned lines of code in 2 files: - fairseq/models/wav2vec/wav2vec2.py (162:172) - fairseq/models/wav2vec/wav2vec2_asr.py (99:109) duplicated block id: 242 size: 11 cleaned lines of code in 2 files: - fairseq/tasks/cross_lingual_lm.py (88:100) - fairseq/tasks/legacy_masked_lm.py (64:76) duplicated block id: 243 size: 11 cleaned lines of code in 2 files: - fairseq/modules/dynamicconv_layer/dynamicconv_cuda.cuh 
(39:49) - fairseq/modules/dynamicconv_layer/dynamicconv_cuda_kernel.cu (74:84) duplicated block id: 244 size: 11 cleaned lines of code in 2 files: - fairseq/tasks/denoising.py (181:193) - fairseq/tasks/language_modeling.py (221:231) duplicated block id: 245 size: 11 cleaned lines of code in 2 files: - fairseq/models/lstm.py (56:66) - fairseq/models/lstm_lm.py (33:43) duplicated block id: 246 size: 11 cleaned lines of code in 2 files: - fairseq/modules/dynamic_convolution.py (280:292) - fairseq/modules/lightconv_layer/lightconv_layer.py (122:134) duplicated block id: 247 size: 11 cleaned lines of code in 2 files: - fairseq/optim/adam.py (151:170) - fairseq/optim/nag.py (54:73) duplicated block id: 248 size: 11 cleaned lines of code in 2 files: - fairseq/criterions/model_criterion.py (110:122) - fairseq/criterions/wav2vec_criterion.py (164:176) duplicated block id: 249 size: 11 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (95:105) - fairseq/models/lightconv_lm.py (41:51) duplicated block id: 250 size: 11 cleaned lines of code in 2 files: - fairseq/tasks/multilingual_denoising.py (135:146) - fairseq/tasks/multilingual_masked_lm.py (175:186) duplicated block id: 251 size: 11 cleaned lines of code in 2 files: - fairseq/modules/dynamic_convolution.py (34:44) - fairseq/modules/dynamic_convolution.py (48:58) duplicated block id: 252 size: 11 cleaned lines of code in 2 files: - fairseq/criterions/ctc.py (225:238) - fairseq/criterions/model_criterion.py (110:122) duplicated block id: 253 size: 11 cleaned lines of code in 2 files: - fairseq/models/speech_to_speech/s2s_transformer.py (397:407) - fairseq/models/speech_to_speech/s2s_transformer.py (543:553) duplicated block id: 254 size: 11 cleaned lines of code in 2 files: - fairseq/data/audio/frm_text_to_speech_dataset.py (187:197) - fairseq/data/audio/speech_to_text_dataset.py (422:432) duplicated block id: 255 size: 11 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/convtransformer.py 
(294:304) - fairseq/models/speech_to_text/modules/augmented_memory_attention.py (61:71) duplicated block id: 256 size: 11 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (891:903) - fairseq/models/wav2vec/wav2vec2_asr.py (740:752) duplicated block id: 257 size: 11 cleaned lines of code in 2 files: - fairseq/models/nat/cmlm_transformer.py (61:73) - fairseq/models/nat/nonautoregressive_transformer.py (108:119) duplicated block id: 258 size: 11 cleaned lines of code in 2 files: - fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py (588:600) - fairseq/models/wav2vec/wav2vec2_asr.py (740:752) duplicated block id: 259 size: 11 cleaned lines of code in 2 files: - fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py (588:600) - fairseq/models/lightconv.py (891:903) duplicated block id: 260 size: 11 cleaned lines of code in 2 files: - fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py (46:56) - fairseq/models/lightconv.py (333:343) duplicated block id: 261 size: 11 cleaned lines of code in 2 files: - fairseq/data/multilingual/multilingual_data_manager.py (159:169) - fairseq/tasks/speech_to_speech.py (147:157) duplicated block id: 262 size: 11 cleaned lines of code in 2 files: - fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py (588:600) - fairseq/models/hubert/hubert_asr.py (349:361) duplicated block id: 263 size: 11 cleaned lines of code in 2 files: - fairseq/optim/adamax.py (99:118) - fairseq/optim/nag.py (54:73) duplicated block id: 264 size: 11 cleaned lines of code in 2 files: - fairseq/modules/dynamicconv_layer/dynamicconv_layer.py (148:158) - fairseq/modules/lightweight_convolution.py (217:227) duplicated block id: 265 size: 11 cleaned lines of code in 2 files: - fairseq/data/audio/frm_text_to_speech_dataset.py (187:197) - fairseq/data/audio/text_to_speech_dataset.py (65:75) duplicated block id: 266 size: 11 cleaned lines of code in 2 files: - 
fairseq/models/bart/hub_interface.py (68:78) - fairseq/models/roberta/hub_interface.py (70:80) duplicated block id: 267 size: 11 cleaned lines of code in 2 files: - fairseq/models/text_to_speech/tacotron2.py (190:202) - fairseq/models/text_to_speech/tts_transformer.py (175:187) duplicated block id: 268 size: 11 cleaned lines of code in 2 files: - fairseq/modules/lightconv_layer/lightconv_layer.py (122:134) - fairseq/modules/lightweight_convolution.py (285:297) duplicated block id: 269 size: 11 cleaned lines of code in 2 files: - fairseq/tasks/speech_to_speech.py (265:279) - fairseq/tasks/speech_to_text.py (105:119) duplicated block id: 270 size: 11 cleaned lines of code in 2 files: - fairseq/models/nat/levenshtein_transformer.py (258:269) - fairseq/models/nat/nonautoregressive_transformer.py (164:175) duplicated block id: 271 size: 11 cleaned lines of code in 2 files: - fairseq/tasks/denoising.py (113:123) - fairseq/tasks/speech_to_text.py (37:47) duplicated block id: 272 size: 11 cleaned lines of code in 2 files: - fairseq/data/audio/speech_to_text_joint_dataset.py (260:270) - fairseq/data/audio/text_to_speech_dataset.py (61:71) duplicated block id: 273 size: 11 cleaned lines of code in 2 files: - fairseq/modules/dynamic_convolution.py (192:202) - fairseq/modules/lightweight_convolution.py (217:227) duplicated block id: 274 size: 11 cleaned lines of code in 2 files: - fairseq/tasks/cross_lingual_lm.py (34:45) - fairseq/tasks/legacy_masked_lm.py (30:41) duplicated block id: 275 size: 11 cleaned lines of code in 2 files: - fairseq/models/hubert/hubert_asr.py (349:361) - fairseq/models/lightconv.py (891:903) duplicated block id: 276 size: 10 cleaned lines of code in 2 files: - fairseq/models/bart/hub_interface.py (187:196) - fairseq/models/roberta/hub_interface.py (164:173) duplicated block id: 277 size: 10 cleaned lines of code in 2 files: - fairseq/modules/downsampled_multihead_attention.py (255:264) - fairseq/modules/downsampled_multihead_attention.py (269:278) 
duplicated block id: 278 size: 10 cleaned lines of code in 2 files: - fairseq/modules/sparse_transformer_sentence_encoder_layer.py (15:24) - fairseq/modules/transformer_sentence_encoder_layer.py (22:31) duplicated block id: 279 size: 10 cleaned lines of code in 2 files: - fairseq/models/hubert/hubert_asr.py (52:63) - fairseq/models/wav2vec/wav2vec2_asr.py (82:93) duplicated block id: 280 size: 10 cleaned lines of code in 2 files: - fairseq/model_parallel/models/pipeline_parallel_transformer/model.py (393:402) - fairseq/model_parallel/models/pipeline_parallel_transformer/model.py (411:420) duplicated block id: 281 size: 10 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (84:93) - fairseq/models/speech_to_text/s2t_transformer.py (147:156) duplicated block id: 282 size: 10 cleaned lines of code in 2 files: - fairseq/data/multilingual/sampled_multi_dataset.py (74:83) - fairseq/data/multilingual/sampled_multi_epoch_dataset.py (49:58) duplicated block id: 283 size: 10 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (926:935) - fairseq/models/nat/nonautoregressive_transformer.py (427:436) duplicated block id: 284 size: 10 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/s2t_conformer.py (23:32) - fairseq/models/speech_to_text/s2t_transformer.py (319:329) duplicated block id: 285 size: 10 cleaned lines of code in 2 files: - fairseq/models/nat/levenshtein_transformer.py (360:369) - fairseq/models/text_to_speech/tts_transformer.py (229:238) duplicated block id: 286 size: 10 cleaned lines of code in 2 files: - fairseq/models/text_to_speech/tts_transformer.py (239:249) - fairseq/models/transformer/transformer_decoder.py (344:354) duplicated block id: 287 size: 10 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (84:93) - fairseq/models/speech_to_speech/s2s_transformer.py (446:455) duplicated block id: 288 size: 10 cleaned lines of code in 2 files: - fairseq/models/nat/nonautoregressive_transformer.py (305:314) - 
fairseq/models/text_to_speech/tts_transformer.py (229:238) duplicated block id: 289 size: 10 cleaned lines of code in 2 files: - fairseq/models/wav2vec/wav2vec2.py (350:359) - fairseq/models/wav2vec/wav2vec2.py (371:380) duplicated block id: 290 size: 10 cleaned lines of code in 2 files: - fairseq/models/hubert/hubert.py (313:322) - fairseq/models/wav2vec/wav2vec2.py (442:451) duplicated block id: 291 size: 10 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/s2t_conformer.py (102:111) - fairseq/models/speech_to_text/s2t_transformer.py (362:371) duplicated block id: 292 size: 10 cleaned lines of code in 2 files: - fairseq/models/nat/insertion_transformer.py (69:79) - fairseq/models/nat/levenshtein_utils.py (61:71) duplicated block id: 293 size: 10 cleaned lines of code in 2 files: - fairseq/optim/lr_scheduler/cosine_lr_scheduler.py (19:28) - fairseq/optim/lr_scheduler/reduce_lr_on_plateau.py (39:48) duplicated block id: 294 size: 10 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (84:93) - fairseq/models/roberta/model.py (97:106) duplicated block id: 295 size: 10 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (84:93) - fairseq/models/speech_to_text/convtransformer.py (49:58) duplicated block id: 296 size: 10 cleaned lines of code in 2 files: - fairseq/models/transformer/transformer_config.py (203:212) - fairseq/models/transformer_lm.py (187:196) duplicated block id: 297 size: 10 cleaned lines of code in 2 files: - fairseq/models/hubert/hubert_asr.py (28:37) - fairseq/models/wav2vec/wav2vec2_asr.py (46:55) duplicated block id: 298 size: 10 cleaned lines of code in 2 files: - fairseq/models/roberta/model.py (366:377) - fairseq/models/roberta/model_camembert.py (38:49) duplicated block id: 299 size: 10 cleaned lines of code in 2 files: - fairseq/data/append_token_dataset.py (13:23) - fairseq/data/prepend_token_dataset.py (13:23) duplicated block id: 300 size: 10 cleaned lines of code in 2 files: - 
fairseq/optim/lr_scheduler/reduce_lr_on_plateau.py (39:48) - fairseq/optim/lr_scheduler/step_lr_scheduler.py (18:27) duplicated block id: 301 size: 10 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (84:93) - fairseq/models/speech_to_speech/s2s_transformer.py (271:280) duplicated block id: 302 size: 10 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (593:604) - fairseq/models/wav2vec/wav2vec2_asr.py (717:728) duplicated block id: 303 size: 10 cleaned lines of code in 2 files: - fairseq/models/lightconv_lm.py (233:243) - fairseq/models/transformer_lm.py (293:303) duplicated block id: 304 size: 10 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (926:935) - fairseq/models/nat/insertion_transformer.py (261:270) duplicated block id: 305 size: 10 cleaned lines of code in 2 files: - fairseq/models/multilingual_transformer.py (105:114) - fairseq/models/transformer/transformer_legacy.py (113:122) duplicated block id: 306 size: 10 cleaned lines of code in 2 files: - fairseq/model_parallel/modules/transformer_layer.py (27:38) - fairseq/model_parallel/modules/transformer_layer.py (52:63) duplicated block id: 307 size: 10 cleaned lines of code in 2 files: - fairseq/models/nat/nat_crf_transformer.py (101:111) - fairseq/models/nat/nonautoregressive_transformer.py (133:143) duplicated block id: 308 size: 10 cleaned lines of code in 2 files: - fairseq/model_parallel/modules/multihead_attention.py (67:77) - fairseq/modules/multihead_attention.py (48:59) duplicated block id: 309 size: 10 cleaned lines of code in 2 files: - fairseq/model_parallel/models/pipeline_parallel_transformer/model.py (272:281) - fairseq/models/multilingual_transformer.py (105:114) duplicated block id: 310 size: 10 cleaned lines of code in 2 files: - fairseq/tasks/denoising.py (109:118) - fairseq/tasks/text_to_speech.py (48:57) duplicated block id: 311 size: 10 cleaned lines of code in 2 files: - fairseq/models/text_to_speech/hifigan.py (61:70) - 
fairseq/models/text_to_speech/hifigan.py (81:90) duplicated block id: 312 size: 10 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/modules/emformer.py (600:610) - fairseq/models/speech_to_text/modules/emformer.py (708:719) duplicated block id: 313 size: 10 cleaned lines of code in 2 files: - fairseq/tasks/multilingual_language_modeling.py (385:394) - fairseq/tasks/multilingual_masked_lm.py (238:247) duplicated block id: 314 size: 10 cleaned lines of code in 2 files: - fairseq_cli/generate.py (263:272) - fairseq_cli/interactive.py (267:276) duplicated block id: 315 size: 10 cleaned lines of code in 2 files: - fairseq/criterions/speech_to_speech_criterion.py (270:279) - fairseq/criterions/tacotron2_loss.py (156:165) duplicated block id: 316 size: 10 cleaned lines of code in 2 files: - fairseq/models/roberta/model.py (366:377) - fairseq/models/roberta/model_xlmr.py (34:45) duplicated block id: 317 size: 10 cleaned lines of code in 2 files: - fairseq/data/multilingual/sampled_multi_dataset.py (401:410) - fairseq/data/multilingual/sampled_multi_epoch_dataset.py (150:159) duplicated block id: 318 size: 10 cleaned lines of code in 2 files: - fairseq/models/nat/cmlm_transformer.py (60:69) - fairseq/models/nat/iterative_nonautoregressive_transformer.py (158:167) duplicated block id: 319 size: 10 cleaned lines of code in 2 files: - fairseq/models/bart/model.py (145:154) - fairseq/models/roberta/model.py (333:342) duplicated block id: 320 size: 10 cleaned lines of code in 2 files: - fairseq/models/fairseq_model.py (108:128) - fairseq/models/fairseq_model.py (450:470) duplicated block id: 321 size: 10 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (926:935) - fairseq/models/nat/levenshtein_transformer.py (451:460) duplicated block id: 322 size: 10 cleaned lines of code in 2 files: - fairseq/model_parallel/models/roberta/model.py (102:111) - fairseq/models/bart/model.py (145:154) duplicated block id: 323 size: 10 cleaned lines of code in 2 
files: - fairseq/data/audio/frm_text_to_speech_dataset.py (159:168) - fairseq/data/audio/speech_to_text_dataset.py (472:481) duplicated block id: 324 size: 10 cleaned lines of code in 2 files: - fairseq/data/multilingual/multilingual_data_manager.py (520:529) - fairseq/data/multilingual/multilingual_data_manager.py (607:616) duplicated block id: 325 size: 10 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (926:935) - fairseq/models/nat/iterative_nonautoregressive_transformer.py (193:202) duplicated block id: 326 size: 10 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (926:935) - fairseq/models/transformer/transformer_legacy.py (189:198) duplicated block id: 327 size: 10 cleaned lines of code in 2 files: - fairseq/search.py (567:576) - fairseq/search.py (675:684) duplicated block id: 328 size: 10 cleaned lines of code in 2 files: - fairseq/tasks/language_modeling.py (156:171) - fairseq/tasks/multilingual_language_modeling.py (225:240) duplicated block id: 329 size: 9 cleaned lines of code in 2 files: - fairseq/modules/dynamic_convolution.py (193:201) - fairseq/modules/lightconv_layer/lightconv_layer.py (82:90) duplicated block id: 330 size: 9 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/convtransformer.py (363:372) - fairseq/models/transformer/transformer_encoder.py (302:311) duplicated block id: 331 size: 9 cleaned lines of code in 2 files: - fairseq/data/audio/speech_to_speech_dataset.py (177:186) - fairseq/data/audio/speech_to_text_dataset.py (295:304) duplicated block id: 332 size: 9 cleaned lines of code in 2 files: - fairseq/optim/lr_scheduler/inverse_square_root_schedule.py (20:28) - fairseq/optim/lr_scheduler/reduce_lr_on_plateau.py (41:49) duplicated block id: 333 size: 9 cleaned lines of code in 2 files: - fairseq/data/iterators.py (323:331) - fairseq/data/iterators.py (747:755) duplicated block id: 334 size: 9 cleaned lines of code in 2 files: - fairseq/data/audio/frm_text_to_speech_dataset.py (59:67) 
- fairseq/data/audio/speech_to_text_joint_dataset.py (108:116) duplicated block id: 335 size: 9 cleaned lines of code in 2 files: - fairseq/data/multilingual/multilingual_data_manager.py (521:529) - fairseq/data/multilingual/multilingual_data_manager.py (653:661) duplicated block id: 336 size: 9 cleaned lines of code in 2 files: - fairseq/model_parallel/models/__init__.py (11:19) - fairseq/models/huggingface/__init__.py (11:19) duplicated block id: 337 size: 9 cleaned lines of code in 2 files: - fairseq/data/multilingual/multilingual_data_manager.py (559:567) - fairseq/tasks/translation.py (88:96) duplicated block id: 338 size: 9 cleaned lines of code in 2 files: - fairseq/data/multilingual/multilingual_data_manager.py (653:661) - fairseq/tasks/translation.py (40:48) duplicated block id: 339 size: 9 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/s2t_transformer.py (96:105) - fairseq/models/text_to_speech/fastspeech2.py (341:350) duplicated block id: 340 size: 9 cleaned lines of code in 2 files: - fairseq/models/roberta/model_gottbert.py (33:43) - fairseq/models/roberta/model_xlmr.py (33:43) duplicated block id: 341 size: 9 cleaned lines of code in 2 files: - fairseq/criterions/label_smoothed_cross_entropy.py (81:89) - fairseq/criterions/label_smoothed_cross_entropy_latency_augmented.py (114:123) duplicated block id: 342 size: 9 cleaned lines of code in 2 files: - fairseq/data/audio/speech_to_text_dataset.py (142:150) - fairseq/data/audio/speech_to_text_joint_dataset.py (84:92) duplicated block id: 343 size: 9 cleaned lines of code in 2 files: - fairseq/data/multilingual/multilingual_data_manager.py (608:616) - fairseq/data/multilingual/multilingual_data_manager.py (653:661) duplicated block id: 344 size: 9 cleaned lines of code in 2 files: - fairseq/optim/lr_scheduler/cosine_lr_scheduler.py (107:120) - fairseq/optim/lr_scheduler/inverse_square_root_schedule.py (69:82) duplicated block id: 345 size: 9 cleaned lines of code in 2 files: - 
fairseq/modules/dynamicconv_layer/dynamicconv_layer.py (149:157) - fairseq/modules/lightconv_layer/lightconv_layer.py (82:90) duplicated block id: 346 size: 9 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/convtransformer.py (379:387) - fairseq/models/transformer/transformer_decoder.py (229:237) duplicated block id: 347 size: 9 cleaned lines of code in 2 files: - fairseq/models/text_to_speech/tacotron2.py (31:39) - fairseq/models/text_to_speech/tts_transformer.py (46:54) duplicated block id: 348 size: 9 cleaned lines of code in 2 files: - fairseq/models/roberta/model_camembert.py (37:47) - fairseq/models/roberta/model_gottbert.py (33:43) duplicated block id: 349 size: 9 cleaned lines of code in 2 files: - fairseq/search.py (568:576) - fairseq/search.py (765:773) duplicated block id: 350 size: 9 cleaned lines of code in 2 files: - fairseq/tasks/sentence_prediction.py (245:256) - fairseq/tasks/sentence_ranking.py (185:196) duplicated block id: 351 size: 9 cleaned lines of code in 2 files: - fairseq/models/masked_lm.py (131:141) - fairseq/models/roberta/model.py (76:84) duplicated block id: 352 size: 9 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/xm_transformer.py (459:468) - fairseq/models/text_to_speech/fastspeech2.py (341:350) duplicated block id: 353 size: 9 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/xm_transformer.py (459:468) - fairseq/models/text_to_speech/tts_transformer.py (335:344) duplicated block id: 354 size: 9 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (548:561) - fairseq/models/nat/levenshtein_transformer.py (341:352) duplicated block id: 355 size: 9 cleaned lines of code in 2 files: - fairseq/models/nat/iterative_nonautoregressive_transformer.py (159:167) - fairseq/models/nat/nonautoregressive_transformer.py (108:116) duplicated block id: 356 size: 9 cleaned lines of code in 2 files: - fairseq/criterions/speech_to_speech_criterion.py (213:221) - 
fairseq/criterions/tacotron2_loss.py (86:94) duplicated block id: 357 size: 9 cleaned lines of code in 2 files: - fairseq/modules/dynamicconv_layer/dynamicconv_cuda.cpp (15:28) - fairseq/modules/lightconv_layer/lightconv_cuda.cpp (15:28) duplicated block id: 358 size: 9 cleaned lines of code in 2 files: - fairseq/optim/fused_adam.py (357:365) - fairseq/optim/fused_adam.py (373:381) duplicated block id: 359 size: 9 cleaned lines of code in 2 files: - fairseq/modules/dynamicconv_layer/dynamicconv_cuda.cuh (28:36) - fairseq/modules/dynamicconv_layer/dynamicconv_cuda_kernel.cu (14:22) duplicated block id: 360 size: 9 cleaned lines of code in 2 files: - fairseq/model_parallel/modules/multihead_attention.py (250:258) - fairseq/modules/multihead_attention.py (483:491) duplicated block id: 361 size: 9 cleaned lines of code in 2 files: - fairseq/data/audio/speech_to_text_joint_dataset.py (84:92) - fairseq/data/audio/text_to_speech_dataset.py (42:50) duplicated block id: 362 size: 9 cleaned lines of code in 2 files: - fairseq/modules/lightconv_layer/lightconv_layer.py (82:90) - fairseq/modules/lightweight_convolution.py (218:226) duplicated block id: 363 size: 9 cleaned lines of code in 2 files: - fairseq/tasks/language_modeling.py (78:86) - fairseq/tasks/sentence_prediction.py (62:70) duplicated block id: 364 size: 9 cleaned lines of code in 2 files: - fairseq/models/hubert/hubert.py (110:118) - fairseq/models/wav2vec/wav2vec2.py (105:113) duplicated block id: 365 size: 9 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/s2t_transformer.py (96:105) - fairseq/models/text_to_speech/tts_transformer.py (335:344) duplicated block id: 366 size: 9 cleaned lines of code in 2 files: - fairseq/data/multilingual/multilingual_data_manager.py (760:769) - fairseq/tasks/multilingual_translation.py (203:212) duplicated block id: 367 size: 9 cleaned lines of code in 2 files: - fairseq/data/multilingual/multilingual_data_manager.py (156:164) - fairseq/tasks/text_to_speech.py 
(49:57)
duplicated block id: 368 size: 9 cleaned lines of code in 2 files: - fairseq/data/multilingual/multilingual_data_manager.py (521:529) - fairseq/tasks/translation.py (40:48)
duplicated block id: 369 size: 9 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/modules/emformer.py (145:153) - fairseq/models/speech_to_text/modules/emformer.py (1776:1784)
duplicated block id: 370 size: 9 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (717:725) - fairseq/models/lightconv.py (880:888)
duplicated block id: 371 size: 9 cleaned lines of code in 2 files: - fairseq/data/legacy/masked_lm_dataset.py (293:303) - fairseq/data/monolingual_dataset.py (243:253)
duplicated block id: 372 size: 9 cleaned lines of code in 2 files: - fairseq/tasks/language_modeling.py (295:303) - fairseq/tasks/multilingual_language_modeling.py (528:536)
duplicated block id: 373 size: 9 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (916:924) - fairseq/models/speech_to_text/convtransformer.py (408:416)
duplicated block id: 374 size: 9 cleaned lines of code in 2 files: - fairseq/modules/dynamic_convolution.py (161:170) - fairseq/modules/dynamicconv_layer/dynamicconv_layer.py (91:101)
duplicated block id: 375 size: 9 cleaned lines of code in 2 files: - fairseq/data/audio/speech_to_text_joint_dataset.py (246:254) - fairseq/data/audio/text_to_speech_dataset.py (200:208)
duplicated block id: 376 size: 9 cleaned lines of code in 2 files: - fairseq/modules/downsampled_multihead_attention.py (74:82) - fairseq/modules/downsampled_multihead_attention.py (230:238)
duplicated block id: 377 size: 9 cleaned lines of code in 2 files: - fairseq/models/hubert/hubert.py (180:190) - fairseq/models/wav2vec/wav2vec2.py (230:240)
duplicated block id: 378 size: 9 cleaned lines of code in 2 files: - fairseq/data/multilingual/multilingual_data_manager.py (653:661) - fairseq/data/multilingual/multilingual_data_manager.py (806:814)
duplicated block id: 379 size: 9 cleaned lines of code in 2 files: - fairseq/models/fconv.py (167:176) - fairseq/models/fconv_self_att.py (185:194)
duplicated block id: 380 size: 9 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/modules/emformer.py (145:153) - fairseq/models/speech_to_text/modules/emformer.py (733:741)
duplicated block id: 381 size: 9 cleaned lines of code in 2 files: - fairseq/data/audio/speech_to_text_dataset.py (407:415) - fairseq/data/audio/speech_to_text_joint_dataset.py (246:254)
duplicated block id: 382 size: 9 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (997:1005) - fairseq/models/transformer/transformer_legacy.py (245:253)
duplicated block id: 383 size: 9 cleaned lines of code in 2 files: - fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py (133:143) - fairseq/models/wav2vec/wav2vec2_asr.py (566:576)
duplicated block id: 384 size: 9 cleaned lines of code in 2 files: - fairseq/models/bart/model.py (124:134) - fairseq/models/roberta/model.py (365:375)
duplicated block id: 385 size: 9 cleaned lines of code in 2 files: - fairseq/model_parallel/models/pipeline_parallel_transformer/model.py (740:750) - fairseq/models/transformer/transformer_decoder.py (398:408)
duplicated block id: 386 size: 9 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (455:465) - fairseq/models/wav2vec/wav2vec2_asr.py (566:576)
duplicated block id: 387 size: 9 cleaned lines of code in 2 files: - fairseq/model_parallel/modules/multihead_attention.py (192:200) - fairseq/modules/multihead_attention.py (414:422)
duplicated block id: 388 size: 9 cleaned lines of code in 2 files: - fairseq/tasks/speech_to_speech.py (137:145) - fairseq/tasks/text_to_speech.py (42:50)
duplicated block id: 389 size: 9 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/modules/emformer.py (733:741) - fairseq/models/speech_to_text/modules/emformer.py (1776:1784)
duplicated block id: 390 size: 9 cleaned lines of code in 2 files: - fairseq/modules/dynamicconv_layer/dynamicconv_cuda.cuh (1:11) - fairseq/modules/lightconv_layer/lightconv_cuda.cuh (1:11)
duplicated block id: 391 size: 9 cleaned lines of code in 2 files: - fairseq/tasks/multilingual_language_modeling.py (448:456) - fairseq/tasks/multilingual_masked_lm.py (262:271)
duplicated block id: 392 size: 9 cleaned lines of code in 2 files: - fairseq/criterions/hubert_criterion.py (27:35) - fairseq/criterions/wav2vec_criterion.py (26:34)
duplicated block id: 393 size: 9 cleaned lines of code in 2 files: - fairseq/search.py (676:684) - fairseq/search.py (765:773)
duplicated block id: 394 size: 9 cleaned lines of code in 2 files: - fairseq/criterions/label_smoothed_cross_entropy_latency_augmented.py (114:123) - fairseq/criterions/speech_to_speech_criterion.py (155:163)
duplicated block id: 395 size: 9 cleaned lines of code in 2 files: - fairseq/data/multi_corpus_sampled_dataset.py (56:69) - fairseq/data/round_robin_zip_datasets.py (101:113)
duplicated block id: 396 size: 9 cleaned lines of code in 2 files: - fairseq/modules/dynamicconv_layer/dynamicconv_layer.py (37:45) - fairseq/modules/lightconv_layer/lightconv_layer.py (36:44)
duplicated block id: 397 size: 9 cleaned lines of code in 2 files: - fairseq/tasks/language_modeling.py (299:307) - fairseq/tasks/multilingual_masked_lm.py (217:225)
duplicated block id: 398 size: 9 cleaned lines of code in 2 files: - fairseq/modules/lightconv_layer/lightconv_cuda.cuh (46:54) - fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu (149:157)
duplicated block id: 399 size: 9 cleaned lines of code in 2 files: - fairseq/tasks/masked_lm.py (93:101) - fairseq/tasks/sentence_prediction.py (62:70)
duplicated block id: 400 size: 9 cleaned lines of code in 2 files: - fairseq/models/transformer/transformer_config.py (188:196) - fairseq/models/wav2vec/wav2vec2_asr.py (173:181)
duplicated block id: 401 size: 9 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/s2t_transformer.py (409:418) - fairseq/models/transformer/transformer_encoder.py (302:311)
duplicated block id: 402 size: 9 cleaned lines of code in 2 files: - fairseq/data/audio/frm_text_to_speech_dataset.py (31:39) - fairseq/data/audio/speech_to_text_joint_dataset.py (84:92)
duplicated block id: 403 size: 9 cleaned lines of code in 2 files: - fairseq/models/nat/cmlm_transformer.py (141:150) - fairseq/models/nat/insertion_transformer.py (268:277)
duplicated block id: 404 size: 9 cleaned lines of code in 2 files: - fairseq/data/audio/frm_text_to_speech_dataset.py (187:195) - fairseq/data/audio/speech_to_text_joint_dataset.py (108:116)
duplicated block id: 405 size: 9 cleaned lines of code in 2 files: - fairseq/tasks/speech_to_text.py (27:35) - fairseq/tasks/text_to_speech.py (42:50)
duplicated block id: 406 size: 9 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/xm_transformer.py (350:359) - fairseq/models/transformer/transformer_encoder.py (302:311)
duplicated block id: 407 size: 9 cleaned lines of code in 2 files: - fairseq/data/audio/speech_to_text_dataset.py (407:415) - fairseq/data/audio/text_to_speech_dataset.py (200:208)
duplicated block id: 408 size: 9 cleaned lines of code in 2 files: - fairseq/data/audio/multi_modality_dataset.py (211:222) - fairseq/data/base_wrapper_dataset.py (32:43)
duplicated block id: 409 size: 9 cleaned lines of code in 2 files: - fairseq/models/text_to_speech/fastspeech2.py (227:236) - fairseq/models/text_to_speech/tts_transformer.py (49:58)
duplicated block id: 410 size: 9 cleaned lines of code in 2 files: - fairseq/modules/dynamicconv_layer/dynamicconv_cuda_kernel.cu (45:53) - fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu (322:331)
duplicated block id: 411 size: 9 cleaned lines of code in 2 files: - fairseq/tasks/multilingual_language_modeling.py (103:111) - fairseq/tasks/sentence_prediction.py (62:70)
duplicated block id: 412 size: 9 cleaned lines of code in 2 files: - fairseq/models/fconv.py (197:205) - fairseq/models/fconv.py
(442:450)
duplicated block id: 413 size: 9 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (81:90) - fairseq/models/masked_lm.py (47:57)
duplicated block id: 414 size: 9 cleaned lines of code in 2 files: - fairseq/models/hubert/hubert.py (142:150) - fairseq/models/wav2vec/wav2vec2.py (168:176)
duplicated block id: 415 size: 9 cleaned lines of code in 2 files: - fairseq/data/multilingual/multilingual_data_manager.py (521:529) - fairseq/data/multilingual/multilingual_data_manager.py (806:814)
duplicated block id: 416 size: 8 cleaned lines of code in 2 files: - fairseq/models/fconv_lm.py (26:33) - fairseq/models/lightconv.py (153:160)
duplicated block id: 417 size: 8 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (110:117) - fairseq/models/speech_to_speech/s2s_transformer.py (286:293)
duplicated block id: 418 size: 8 cleaned lines of code in 2 files: - fairseq/models/masked_lm.py (82:91) - fairseq/models/speech_to_speech/s2s_transformer.py (286:293)
duplicated block id: 419 size: 8 cleaned lines of code in 2 files: - fairseq/tasks/language_modeling.py (246:254) - fairseq/tasks/multilingual_language_modeling.py (318:326)
duplicated block id: 420 size: 8 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/modules/emformer.py (629:636) - fairseq/models/speech_to_text/modules/emformer.py (831:838)
duplicated block id: 421 size: 8 cleaned lines of code in 2 files: - fairseq/modules/lightconv_layer/lightconv_cuda.cuh (26:33) - fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu (13:20)
duplicated block id: 422 size: 8 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (51:60) - fairseq/models/transformer/transformer_legacy.py (29:38)
duplicated block id: 423 size: 8 cleaned lines of code in 2 files: - fairseq/config/model/transformer_lm/transformer_lm_baevski_wiki103.yaml (29:36) - fairseq/config/model/transformer_lm/transformer_lm_gbw.yaml (29:36)
duplicated block id: 424 size: 8 cleaned lines of code in 2 files: - fairseq/models/speech_to_speech/s2s_transformer.py (492:499) - fairseq/models/speech_to_text/berard.py (109:116)
duplicated block id: 425 size: 8 cleaned lines of code in 2 files: - fairseq/models/lstm.py (180:187) - fairseq/models/lstm_lm.py (109:116)
duplicated block id: 426 size: 8 cleaned lines of code in 2 files: - fairseq/criterions/hubert_criterion.py (150:158) - fairseq/criterions/sentence_ranking.py (97:105)
duplicated block id: 427 size: 8 cleaned lines of code in 2 files: - fairseq/data/multi_corpus_sampled_dataset.py (134:142) - fairseq/data/round_robin_zip_datasets.py (151:159)
duplicated block id: 428 size: 8 cleaned lines of code in 2 files: - fairseq/config/model/transformer_lm/transformer_lm_gpt2_big.yaml (29:36) - fairseq/config/model/transformer_lm/transformer_lm_wiki103.yaml (29:36)
duplicated block id: 429 size: 8 cleaned lines of code in 2 files: - fairseq/models/speech_to_speech/s2s_transformer.py (353:360) - fairseq/models/speech_to_text/berard.py (109:116)
duplicated block id: 430 size: 8 cleaned lines of code in 2 files: - fairseq/modules/sparse_transformer_sentence_encoder_layer.py (31:38) - fairseq/modules/transformer_sentence_encoder.py (203:210)
duplicated block id: 431 size: 8 cleaned lines of code in 2 files: - fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py (577:585) - fairseq/models/lightconv.py (870:878)
duplicated block id: 432 size: 8 cleaned lines of code in 2 files: - fairseq/models/transformer/transformer_decoder.py (230:237) - fairseq/models/transformer/transformer_decoder.py (254:261)
duplicated block id: 433 size: 8 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (153:160) - fairseq/models/speech_to_text/s2t_transformer.py (188:195)
duplicated block id: 434 size: 8 cleaned lines of code in 2 files: - fairseq/tasks/multilingual_denoising.py (186:194) - fairseq/tasks/multilingual_masked_lm.py (235:243)
duplicated block id: 435 size: 8 cleaned lines of code in 2 files: - fairseq/models/nat/levenshtein_transformer.py (273:280) - fairseq/models/nat/nonautoregressive_transformer.py (208:215)
duplicated block id: 436 size: 8 cleaned lines of code in 2 files: - fairseq/models/multilingual_transformer.py (221:228) - fairseq/models/transformer/transformer_legacy.py (226:233)
duplicated block id: 437 size: 8 cleaned lines of code in 2 files: - fairseq/tasks/multilingual_translation.py (101:116) - fairseq/tasks/translation_multi_simple_epoch.py (83:98)
duplicated block id: 438 size: 8 cleaned lines of code in 2 files: - fairseq/tasks/cross_lingual_lm.py (119:126) - fairseq/tasks/legacy_masked_lm.py (106:114)
duplicated block id: 439 size: 8 cleaned lines of code in 2 files: - fairseq/models/speech_to_speech/s2s_transformer.py (65:72) - fairseq/models/transformer/transformer_decoder.py (457:464)
duplicated block id: 440 size: 8 cleaned lines of code in 2 files: - fairseq/model_parallel/megatron_trainer.py (58:65) - fairseq/trainer.py (446:453)
duplicated block id: 441 size: 8 cleaned lines of code in 2 files: - fairseq/modules/dynamic_convolution.py (95:102) - fairseq/modules/lightweight_convolution.py (153:160)
duplicated block id: 442 size: 8 cleaned lines of code in 2 files: - fairseq/config/model/transformer_lm/transformer_lm_baevski_wiki103.yaml (29:36) - fairseq/config/model/transformer_lm/transformer_lm_gpt2_big.yaml (29:36)
duplicated block id: 443 size: 8 cleaned lines of code in 2 files: - fairseq/tasks/cross_lingual_lm.py (34:42) - fairseq/tasks/multilingual_masked_lm.py (39:47)
duplicated block id: 444 size: 8 cleaned lines of code in 2 files: - fairseq/data/audio/speech_to_speech_dataset.py (210:217) - fairseq/data/audio/speech_to_text_dataset.py (348:355)
duplicated block id: 445 size: 8 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/modules/augmented_memory_attention.py (157:166) - fairseq/modules/transformer_layer.py (207:215)
duplicated block id: 446 size: 8 cleaned lines of code in 2 files: - fairseq/models/lightconv_lm.py (50:57) - fairseq/models/speech_to_text/s2t_transformer.py (188:195)
duplicated block id: 447 size: 8 cleaned lines of code in 2 files: - fairseq/modules/lightweight_convolution.py (29:36) - fairseq/modules/lightweight_convolution.py (40:47)
duplicated block id: 448 size: 8 cleaned lines of code in 2 files: - fairseq/models/hubert/hubert_asr.py (228:236) - fairseq/models/wav2vec/wav2vec2_asr.py (353:361)
duplicated block id: 449 size: 8 cleaned lines of code in 2 files: - fairseq/models/masked_lm.py (82:91) - fairseq/models/speech_to_speech/s2s_transformer.py (461:468)
duplicated block id: 450 size: 8 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/s2t_transformer.py (106:115) - fairseq/models/text_to_speech/tts_transformer.py (347:356)
duplicated block id: 451 size: 8 cleaned lines of code in 2 files: - fairseq/config/model/transformer_lm/transformer_lm_baevski_wiki103.yaml (29:36) - fairseq/config/model/transformer_lm/transformer_lm_gpt2_medium.yaml (29:36)
duplicated block id: 452 size: 8 cleaned lines of code in 2 files: - fairseq/models/speech_to_speech/s2s_transformer.py (604:614) - fairseq/models/speech_to_text/s2t_transformer.py (452:461)
duplicated block id: 453 size: 8 cleaned lines of code in 2 files: - fairseq/models/masked_lm.py (82:91) - fairseq/models/speech_to_text/s2t_transformer.py (162:169)
duplicated block id: 454 size: 8 cleaned lines of code in 2 files: - fairseq/model_parallel/modules/multihead_attention.py (37:44) - fairseq/models/speech_to_text/modules/augmented_memory_attention.py (198:205)
duplicated block id: 455 size: 8 cleaned lines of code in 2 files: - fairseq/modules/transformer_layer.py (46:54) - fairseq/modules/transformer_layer.py (289:298)
duplicated block id: 456 size: 8 cleaned lines of code in 2 files: - fairseq/tasks/language_modeling.py (148:155) - fairseq/tasks/multilingual_language_modeling.py (209:216)
duplicated block id: 457 size: 8 cleaned lines of code in 2 files: -
fairseq/data/denoising_dataset.py (66:73) - fairseq/data/language_pair_dataset.py (102:109)
duplicated block id: 458 size: 8 cleaned lines of code in 2 files: - fairseq/tasks/language_modeling.py (24:38) - fairseq/tasks/multilingual_language_modeling.py (29:41)
duplicated block id: 459 size: 8 cleaned lines of code in 2 files: - fairseq/models/nat/cmlm_transformer.py (102:110) - fairseq/models/nat/insertion_transformer.py (198:206)
duplicated block id: 460 size: 8 cleaned lines of code in 2 files: - fairseq/models/bart/model.py (125:134) - fairseq/models/roberta/model_camembert.py (38:47)
duplicated block id: 461 size: 8 cleaned lines of code in 2 files: - fairseq/models/bart/model.py (96:103) - fairseq/models/transformer/transformer_base.py (151:158)
duplicated block id: 462 size: 8 cleaned lines of code in 2 files: - fairseq/config/model/transformer_lm/transformer_lm_baevski_wiki103.yaml (29:36) - fairseq/config/model/transformer_lm/transformer_lm_gpt.yaml (29:36)
duplicated block id: 463 size: 8 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (110:117) - fairseq/models/speech_to_speech/s2s_transformer.py (461:468)
duplicated block id: 464 size: 8 cleaned lines of code in 2 files: - fairseq/config/model/transformer_lm/transformer_lm_gpt.yaml (29:36) - fairseq/config/model/transformer_lm/transformer_lm_wiki103.yaml (29:36)
duplicated block id: 465 size: 8 cleaned lines of code in 2 files: - fairseq/models/nat/cmlm_transformer.py (102:110) - fairseq/models/nat/nat_crf_transformer.py (103:111)
duplicated block id: 466 size: 8 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (846:853) - fairseq/modules/transformer_layer.py (483:491)
duplicated block id: 467 size: 8 cleaned lines of code in 2 files: - fairseq/models/roberta/model.py (366:375) - fairseq/models/roberta/model_gottbert.py (34:43)
duplicated block id: 468 size: 8 cleaned lines of code in 2 files: - fairseq/modules/lightconv_layer/lightconv_cuda.cuh (64:71) - fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu (281:288)
duplicated block id: 469 size: 8 cleaned lines of code in 2 files: - fairseq/models/__init__.py (209:216) - fairseq/models/huggingface/__init__.py (12:19)
duplicated block id: 470 size: 8 cleaned lines of code in 2 files: - fairseq/modules/quantization/pq/modules/qemb.py (34:41) - fairseq/modules/quantization/scalar/modules/qemb.py (36:43)
duplicated block id: 471 size: 8 cleaned lines of code in 2 files: - fairseq/models/nat/levenshtein_transformer.py (175:182) - fairseq/models/nat/nonautoregressive_ensembles.py (169:176)
duplicated block id: 472 size: 8 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (153:160) - fairseq/models/speech_to_speech/s2s_transformer.py (312:319)
duplicated block id: 473 size: 8 cleaned lines of code in 2 files: - fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py (534:541) - fairseq/modules/transformer_layer.py (483:491)
duplicated block id: 474 size: 8 cleaned lines of code in 2 files: - fairseq/tasks/legacy_masked_lm.py (30:38) - fairseq/tasks/multilingual_masked_lm.py (39:47)
duplicated block id: 475 size: 8 cleaned lines of code in 2 files: - fairseq/optim/lr_scheduler/inverse_square_root_schedule.py (20:27) - fairseq/optim/lr_scheduler/step_lr_scheduler.py (20:27)
duplicated block id: 476 size: 8 cleaned lines of code in 2 files: - fairseq/model_parallel/models/__init__.py (12:19) - fairseq/models/__init__.py (209:216)
duplicated block id: 477 size: 8 cleaned lines of code in 2 files: - fairseq/criterions/label_smoothed_cross_entropy.py (57:64) - fairseq/criterions/speech_to_speech_criterion.py (132:139)
duplicated block id: 478 size: 8 cleaned lines of code in 2 files: - fairseq/models/masked_lm.py (82:91) - fairseq/models/speech_to_text/convtransformer.py (64:71)
duplicated block id: 479 size: 8 cleaned lines of code in 2 files: - fairseq/model_parallel/modules/multihead_attention.py (37:44) - fairseq/modules/multihead_attention.py (26:33)
duplicated block id: 480 size: 8 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (153:160) - fairseq/models/speech_to_text/convtransformer.py (90:97)
duplicated block id: 481 size: 8 cleaned lines of code in 2 files: - fairseq/model_parallel/modules/multihead_attention.py (37:44) - fairseq/modules/sparse_multihead_attention.py (24:31)
duplicated block id: 482 size: 8 cleaned lines of code in 2 files: - fairseq/models/fconv.py (410:418) - fairseq/models/fconv_self_att.py (354:362)
duplicated block id: 483 size: 8 cleaned lines of code in 2 files: - fairseq/data/audio/speech_to_text_joint_dataset.py (237:244) - fairseq/data/audio/speech_to_text_joint_dataset.py (303:310)
duplicated block id: 484 size: 8 cleaned lines of code in 2 files: - fairseq/models/wav2vec/wav2vec2.py (931:938) - fairseq/models/wav2vec/wav2vec2.py (1088:1095)
duplicated block id: 485 size: 8 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (110:117) - fairseq/models/speech_to_text/s2t_transformer.py (162:169)
duplicated block id: 486 size: 8 cleaned lines of code in 2 files: - fairseq/criterions/speech_to_speech_criterion.py (298:305) - fairseq/criterions/tacotron2_loss.py (216:223)
duplicated block id: 487 size: 8 cleaned lines of code in 2 files: - fairseq/tasks/translation.py (338:345) - fairseq/tasks/translation_from_pretrained_bart.py (73:80)
duplicated block id: 488 size: 8 cleaned lines of code in 2 files: - fairseq/models/wav2vec/wav2vec2.py (686:694) - fairseq/models/wav2vec/wav2vec2.py (702:711)
duplicated block id: 489 size: 8 cleaned lines of code in 2 files: - fairseq/models/fconv_lm.py (26:33) - fairseq/models/speech_to_speech/s2s_transformer.py (312:319)
duplicated block id: 490 size: 8 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (110:117) - fairseq/models/masked_lm.py (82:91)
duplicated block id: 491 size: 8 cleaned lines of code in 2 files: - fairseq/config/model/transformer_lm/transformer_lm_gpt2_medium.yaml (29:36) - fairseq/config/model/transformer_lm/transformer_lm_wiki103.yaml (29:36)
duplicated block id: 492 size: 8 cleaned lines of code in 2 files: - fairseq/modules/lightconv_layer/lightconv_cuda.cuh (28:36) - fairseq/modules/lightconv_layer/lightconv_cuda.cuh (38:46)
duplicated block id: 493 size: 8 cleaned lines of code in 2 files: - fairseq/models/fconv_lm.py (26:33) - fairseq/models/lightconv_lm.py (50:57)
duplicated block id: 494 size: 8 cleaned lines of code in 2 files: - fairseq/config/model/transformer_lm/transformer_lm_baevski_wiki103.yaml (29:36) - fairseq/config/model/transformer_lm/transformer_lm_gpt2_small.yaml (29:36)
duplicated block id: 495 size: 8 cleaned lines of code in 2 files: - fairseq/models/nat/insertion_transformer.py (198:206) - fairseq/models/nat/nonautoregressive_transformer.py (135:143)
duplicated block id: 496 size: 8 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/convtransformer.py (420:427) - fairseq/models/speech_to_text/xm_transformer.py (648:655)
duplicated block id: 497 size: 8 cleaned lines of code in 2 files: - fairseq/criterions/speech_to_speech_criterion.py (187:194) - fairseq/criterions/tacotron2_loss.py (216:223)
duplicated block id: 498 size: 8 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/xm_transformer.py (469:478) - fairseq/models/text_to_speech/tts_transformer.py (347:356)
duplicated block id: 499 size: 8 cleaned lines of code in 2 files: - fairseq/config/model/transformer_lm/transformer_lm_gpt2_small.yaml (29:36) - fairseq/config/model/transformer_lm/transformer_lm_wiki103.yaml (29:36)
duplicated block id: 500 size: 8 cleaned lines of code in 2 files: - fairseq/models/speech_to_speech/s2s_transformer.py (632:639) - fairseq/models/speech_to_text/s2t_transformer.py (462:469)
duplicated block id: 501 size: 8 cleaned lines of code in 2 files: - fairseq/models/nat/levenshtein_utils.py (63:71) -
fairseq/models/nat/levenshtein_utils.py (131:139)
duplicated block id: 502 size: 8 cleaned lines of code in 2 files: - fairseq/tasks/multilingual_denoising.py (62:69) - fairseq/tasks/multilingual_denoising.py (113:120)
duplicated block id: 503 size: 8 cleaned lines of code in 2 files: - fairseq/checkpoint_utils.py (344:351) - fairseq/checkpoint_utils.py (391:398)
duplicated block id: 504 size: 8 cleaned lines of code in 2 files: - fairseq/optim/lr_scheduler/cosine_lr_scheduler.py (21:28) - fairseq/optim/lr_scheduler/inverse_square_root_schedule.py (20:27)
duplicated block id: 505 size: 8 cleaned lines of code in 2 files: - fairseq/criterions/fastspeech2_loss.py (125:132) - fairseq/criterions/speech_to_speech_criterion.py (298:305)
duplicated block id: 506 size: 8 cleaned lines of code in 2 files: - fairseq/models/bart/model.py (125:134) - fairseq/models/roberta/model_xlmr.py (34:43)
duplicated block id: 507 size: 8 cleaned lines of code in 2 files: - fairseq/models/nat/nonautoregressive_transformer.py (157:165) - fairseq/models/nat/nonautoregressive_transformer.py (191:199)
duplicated block id: 508 size: 8 cleaned lines of code in 2 files: - fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py (334:346) - fairseq/modules/transformer_layer.py (149:161)
duplicated block id: 509 size: 8 cleaned lines of code in 2 files: - fairseq/tasks/masked_lm.py (149:157) - fairseq/tasks/multilingual_language_modeling.py (333:342)
duplicated block id: 510 size: 8 cleaned lines of code in 2 files: - fairseq/models/nat/cmlm_transformer.py (141:149) - fairseq/models/nat/levenshtein_transformer.py (458:466)
duplicated block id: 511 size: 8 cleaned lines of code in 2 files: - fairseq/tasks/language_modeling.py (16:23) - fairseq/tasks/multilingual_language_modeling.py (19:26)
duplicated block id: 512 size: 8 cleaned lines of code in 2 files: - fairseq/models/hubert/hubert_asr.py (66:73) - fairseq/models/wav2vec/wav2vec2_asr.py (95:102)
duplicated block id: 513 size: 8 cleaned lines of code in 2 files: - fairseq/data/audio/multi_modality_dataset.py (215:225) - fairseq/data/transform_eos_dataset.py (108:120)
duplicated block id: 514 size: 8 cleaned lines of code in 2 files: - fairseq/models/fconv.py (75:84) - fairseq/models/lstm.py (32:41)
duplicated block id: 515 size: 8 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (997:1004) - fairseq/models/nat/levenshtein_transformer.py (490:497)
duplicated block id: 516 size: 8 cleaned lines of code in 2 files: - fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu (268:277) - fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu (391:400)
duplicated block id: 517 size: 8 cleaned lines of code in 2 files: - fairseq/models/nat/levenshtein_transformer.py (490:497) - fairseq/models/transformer/transformer_legacy.py (245:252)
duplicated block id: 518 size: 8 cleaned lines of code in 2 files: - fairseq/models/lightconv_lm.py (50:57) - fairseq/models/speech_to_text/convtransformer.py (90:97)
duplicated block id: 519 size: 8 cleaned lines of code in 2 files: - fairseq/modules/adaptive_input.py (25:33) - fairseq/modules/adaptive_softmax.py (76:84)
duplicated block id: 520 size: 8 cleaned lines of code in 2 files: - fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py (180:188) - fairseq/models/wav2vec/wav2vec2_asr.py (656:664)
duplicated block id: 521 size: 8 cleaned lines of code in 2 files: - fairseq/models/transformer/transformer_decoder.py (82:89) - fairseq/models/transformer/transformer_encoder.py (83:90)
duplicated block id: 522 size: 8 cleaned lines of code in 2 files: - fairseq/models/hubert/hubert_asr.py (91:98) - fairseq/models/wav2vec/wav2vec2_asr.py (132:139)
duplicated block id: 523 size: 8 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/berard.py (109:116) - fairseq/models/speech_to_text/s2t_transformer.py (229:236)
duplicated block id: 524 size: 8 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/xm_transformer.py (469:478) - fairseq/models/text_to_speech/fastspeech2.py (353:362)
duplicated block id: 525 size: 8 cleaned lines of code in 2 files: - fairseq/models/fconv_self_att.py (214:221) - fairseq/models/fconv_self_att.py (375:382)
duplicated block id: 526 size: 8 cleaned lines of code in 2 files: - fairseq/config/model/transformer_lm/transformer_lm_baevski_gbw.yaml (29:36) - fairseq/config/model/transformer_lm/transformer_lm_baevski_wiki103.yaml (29:36)
duplicated block id: 527 size: 8 cleaned lines of code in 2 files: - fairseq/models/lstm.py (99:107) - fairseq/models/lstm_lm.py (70:78)
duplicated block id: 528 size: 8 cleaned lines of code in 2 files: - fairseq/models/bart/model.py (125:134) - fairseq/models/roberta/model_gottbert.py (34:43)
duplicated block id: 529 size: 8 cleaned lines of code in 2 files: - fairseq/config/model/transformer_lm/transformer_lm_big.yaml (29:36) - fairseq/config/model/transformer_lm/transformer_lm_wiki103.yaml (29:36)
duplicated block id: 530 size: 8 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (268:276) - fairseq/models/multilingual_transformer.py (92:100)
duplicated block id: 531 size: 8 cleaned lines of code in 2 files: - fairseq/models/nat/insertion_transformer.py (71:79) - fairseq/models/nat/levenshtein_utils.py (131:139)
duplicated block id: 532 size: 8 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/s2t_transformer.py (474:481) - fairseq/models/speech_to_text/xm_transformer.py (648:655)
duplicated block id: 533 size: 8 cleaned lines of code in 2 files: - fairseq/data/transform_eos_concat_langpair_dataset.py (113:120) - fairseq/data/transform_eos_lang_pair_dataset.py (64:71)
duplicated block id: 534 size: 8 cleaned lines of code in 2 files: - fairseq/search.py (134:144) - fairseq/search.py (200:207)
duplicated block id: 535 size: 8 cleaned lines of code in 2 files: - fairseq/models/nat/insertion_transformer.py (133:144) - fairseq/models/nat/levenshtein_transformer.py (67:78)
duplicated block id: 536 size: 8 cleaned lines of code in 2 files: - fairseq/criterions/cross_entropy.py (65:74) - fairseq/criterions/sentence_ranking.py (97:105)
duplicated block id: 537 size: 8 cleaned lines of code in 2 files: - fairseq/modules/lightconv_layer/lightconv_cuda.cuh (36:43) - fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu (77:84)
duplicated block id: 538 size: 8 cleaned lines of code in 2 files: - fairseq/modules/dynamicconv_layer/dynamicconv_cuda.cuh (13:22) - fairseq/modules/lightconv_layer/lightconv_cuda.cuh (12:21)
duplicated block id: 539 size: 8 cleaned lines of code in 2 files: - fairseq/models/transformer/transformer_decoder.py (102:109) - fairseq/models/transformer/transformer_encoder.py (74:81)
duplicated block id: 540 size: 8 cleaned lines of code in 2 files: - fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py (376:383) - fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py (548:555)
duplicated block id: 541 size: 8 cleaned lines of code in 2 files: - fairseq/models/nat/levenshtein_transformer.py (241:249) - fairseq/models/nat/nonautoregressive_ensembles.py (132:139)
duplicated block id: 542 size: 8 cleaned lines of code in 2 files: - fairseq/search.py (536:543) - fairseq/search.py (568:575)
duplicated block id: 543 size: 8 cleaned lines of code in 2 files: - fairseq/model_parallel/models/roberta/model.py (165:172) - fairseq/models/roberta/model.py (523:530)
duplicated block id: 544 size: 8 cleaned lines of code in 2 files: - fairseq/modules/dynamic_crf_layer.py (112:119) - fairseq/modules/dynamic_crf_layer.py (140:147)
duplicated block id: 545 size: 8 cleaned lines of code in 2 files: - fairseq/tasks/translation_from_pretrained_bart.py (73:80) - fairseq/tasks/translation_lev.py (53:60)
duplicated block id: 546 size: 8 cleaned lines of code in 2 files: - fairseq/modules/espnet_multihead_attention.py (98:106) -
fairseq/modules/espnet_multihead_attention.py (246:254)
duplicated block id: 547 size: 8 cleaned lines of code in 2 files: - fairseq/modules/lightweight_convolution.py (16:23) - fairseq/modules/lightweight_convolution.py (155:162)
duplicated block id: 548 size: 8 cleaned lines of code in 2 files: - fairseq/data/huffman/huffman_mmap_indexed_dataset.py (107:116) - fairseq/data/indexed_dataset.py (476:485)
duplicated block id: 549 size: 8 cleaned lines of code in 2 files: - fairseq/models/lightconv_lm.py (50:57) - fairseq/models/speech_to_speech/s2s_transformer.py (312:319)
duplicated block id: 550 size: 8 cleaned lines of code in 2 files: - fairseq/data/audio/raw_audio_dataset.py (252:259) - fairseq/data/audio/raw_audio_dataset.py (334:341)
duplicated block id: 551 size: 8 cleaned lines of code in 2 files: - fairseq/search.py (119:130) - fairseq/search.py (185:196)
duplicated block id: 552 size: 8 cleaned lines of code in 2 files: - fairseq/models/hubert/hubert_asr.py (200:207) - fairseq/models/wav2vec/wav2vec2_asr.py (279:286)
duplicated block id: 553 size: 8 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (110:117) - fairseq/models/speech_to_text/convtransformer.py (64:71)
duplicated block id: 554 size: 8 cleaned lines of code in 2 files: - fairseq/models/fconv_lm.py (26:33) - fairseq/models/speech_to_text/convtransformer.py (90:97)
duplicated block id: 555 size: 8 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/s2t_transformer.py (140:147) - fairseq/models/speech_to_text/xm_transformer.py (366:373)
duplicated block id: 556 size: 8 cleaned lines of code in 2 files: - fairseq/models/fconv_lm.py (26:33) - fairseq/models/speech_to_text/s2t_transformer.py (188:195)
duplicated block id: 557 size: 8 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/convtransformer.py (143:150) - fairseq/models/speech_to_text/xm_transformer.py (421:428)
duplicated block id: 558 size: 8 cleaned lines of code in 2 files: - fairseq/models/speech_to_speech/s2s_transformer.py (439:446) - fairseq/models/speech_to_text/xm_transformer.py (366:373)
duplicated block id: 559 size: 8 cleaned lines of code in 2 files: - fairseq/models/hubert/hubert.py (250:259) - fairseq/models/wav2vec/wav2vec2.py (324:333)
duplicated block id: 560 size: 8 cleaned lines of code in 2 files: - fairseq/search.py (536:543) - fairseq/search.py (765:772)
duplicated block id: 561 size: 8 cleaned lines of code in 2 files: - fairseq/models/hubert/hubert.py (224:232) - fairseq/models/wav2vec/wav2vec2.py (297:305)
duplicated block id: 562 size: 8 cleaned lines of code in 2 files: - fairseq/config/model/transformer_lm/transformer_lm_gbw.yaml (29:36) - fairseq/config/model/transformer_lm/transformer_lm_wiki103.yaml (29:36)
duplicated block id: 563 size: 8 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/convtransformer.py (380:387) - fairseq/models/transformer/transformer_decoder.py (254:261)
duplicated block id: 564 size: 8 cleaned lines of code in 2 files: - fairseq/models/text_to_speech/hifigan.py (43:50) - fairseq/models/text_to_speech/hifigan.py (69:76)
duplicated block id: 565 size: 8 cleaned lines of code in 2 files: - fairseq/models/speech_to_speech/s2s_transformer.py (640:647) - fairseq/models/speech_to_text/xm_transformer.py (648:655)
duplicated block id: 566 size: 8 cleaned lines of code in 2 files: - fairseq/models/text_to_speech/hifigan.py (33:40) - fairseq/models/text_to_speech/hifigan.py (79:86)
duplicated block id: 567 size: 8 cleaned lines of code in 2 files: - fairseq/modules/multihead_attention.py (367:374) - fairseq/modules/multihead_attention.py (454:461)
duplicated block id: 568 size: 8 cleaned lines of code in 2 files: - fairseq/models/nat/cmlm_transformer.py (102:110) - fairseq/models/nat/nonautoregressive_transformer.py (135:143)
duplicated block id: 569 size: 8 cleaned lines of code in 2 files: - fairseq/models/speech_to_speech/s2s_transformer.py (264:271) - fairseq/models/speech_to_text/xm_transformer.py (366:373)
duplicated block id: 570 size: 8 cleaned lines of code in 2 files: - fairseq/models/text_to_speech/hifigan.py (43:50) - fairseq/models/text_to_speech/hifigan.py (79:86)
duplicated block id: 571 size: 8 cleaned lines of code in 2 files: - fairseq/models/text_to_speech/hifigan.py (33:40) - fairseq/models/text_to_speech/hifigan.py (69:76)
duplicated block id: 572 size: 8 cleaned lines of code in 2 files: - fairseq/models/text_to_speech/hifigan.py (33:40) - fairseq/models/text_to_speech/hifigan.py (43:50)
duplicated block id: 573 size: 8 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/s2t_transformer.py (106:115) - fairseq/models/text_to_speech/fastspeech2.py (353:362)
duplicated block id: 574 size: 8 cleaned lines of code in 2 files: - fairseq/config/model/transformer_lm/transformer_lm_baevski_wiki103.yaml (29:36) - fairseq/config/model/transformer_lm/transformer_lm_big.yaml (29:36)
duplicated block id: 575 size: 8 cleaned lines of code in 2 files: - fairseq/criterions/sentence_ranking.py (97:105) - fairseq/model_parallel/criterions/vocab_parallel_cross_entropy.py (63:71)
duplicated block id: 576 size: 8 cleaned lines of code in 2 files: - fairseq/criterions/fastspeech2_loss.py (125:132) - fairseq/criterions/speech_to_speech_criterion.py (187:194)
duplicated block id: 577 size: 8 cleaned lines of code in 2 files: - fairseq/models/hubert/hubert.py (160:167) - fairseq/models/wav2vec/wav2vec2_asr.py (132:139)
duplicated block id: 578 size: 8 cleaned lines of code in 2 files: - fairseq/optim/adafactor.py (168:182) - fairseq/optim/adamax.py (107:121)
duplicated block id: 579 size: 8 cleaned lines of code in 2 files: - fairseq/models/nat/insertion_transformer.py (198:206) - fairseq/models/nat/nat_crf_transformer.py (103:111)
duplicated block id: 580 size: 8 cleaned lines of code in 2 files: - fairseq/config/model/transformer_lm/transformer_lm_baevski_gbw.yaml (29:36) - fairseq/config/model/transformer_lm/transformer_lm_wiki103.yaml (29:36)
duplicated block id: 581 size: 8 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (153:160) - fairseq/models/lightconv_lm.py (50:57)
duplicated block id: 582 size: 8 cleaned lines of code in 2 files: - fairseq/search.py (536:543) - fairseq/search.py (676:683)
duplicated block id: 583 size: 7 cleaned lines of code in 2 files: - fairseq/models/fairseq_model.py (242:271) - fairseq/models/roberta/model.py (365:373)
duplicated block id: 584 size: 7 cleaned lines of code in 2 files: - fairseq/config/model/transformer_lm/transformer_lm_gpt.yaml (17:23) - fairseq/config/model/transformer_lm/transformer_lm_wiki103.yaml (17:23)
duplicated block id: 585 size: 7 cleaned lines of code in 2 files: - fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py (321:328) - fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py (424:431)
duplicated block id: 586 size: 7 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/berard.py (152:159) - fairseq/models/speech_to_text/convtransformer.py (169:176)
duplicated block id: 587 size: 7 cleaned lines of code in 2 files: - fairseq/modules/downsampled_multihead_attention.py (203:209) - fairseq/modules/downsampled_multihead_attention.py (222:228)
duplicated block id: 588 size: 7 cleaned lines of code in 2 files: - fairseq/models/speech_to_speech/s2s_transformer.py (408:414) - fairseq/models/speech_to_speech/s2s_transformer.py (557:563)
duplicated block id: 589 size: 7 cleaned lines of code in 2 files: - fairseq/models/transformer/transformer_decoder.py (175:183) - fairseq/models/transformer/transformer_encoder.py (110:118)
duplicated block id: 590 size: 7 cleaned lines of code in 2 files: - fairseq/models/roberta/model.py (86:92) - fairseq/models/speech_to_speech/s2s_transformer.py (307:313)
duplicated block id: 591 size: 7 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (142:148) -
fairseq/models/masked_lm.py (95:101) duplicated block id: 592 size: 7 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (190:196) - fairseq/models/speech_to_text/convtransformer.py (122:128) duplicated block id: 593 size: 7 cleaned lines of code in 2 files: - fairseq/config/model/transformer_lm/transformer_lm_gbw.yaml (17:23) - fairseq/config/model/transformer_lm/transformer_lm_wiki103.yaml (17:23) duplicated block id: 594 size: 7 cleaned lines of code in 2 files: - fairseq/models/masked_lm.py (347:353) - fairseq/models/masked_lm.py (384:390) duplicated block id: 595 size: 7 cleaned lines of code in 2 files: - fairseq/data/base_wrapper_dataset.py (54:60) - fairseq/data/fairseq_dataset.py (104:110) duplicated block id: 596 size: 7 cleaned lines of code in 2 files: - fairseq/models/model_utils.py (86:92) - fairseq/models/nat/levenshtein_utils.py (287:293) duplicated block id: 597 size: 7 cleaned lines of code in 2 files: - fairseq/models/fairseq_model.py (242:271) - fairseq/models/text_to_speech/tts_transformer.py (347:355) duplicated block id: 598 size: 7 cleaned lines of code in 2 files: - fairseq/data/encoders/bytes.py (18:26) - fairseq/data/encoders/characters.py (16:24) duplicated block id: 599 size: 7 cleaned lines of code in 2 files: - fairseq/data/transform_eos_concat_langpair_dataset.py (128:134) - fairseq/data/transform_eos_lang_pair_dataset.py (83:89) duplicated block id: 600 size: 7 cleaned lines of code in 2 files: - fairseq/models/bart/model.py (58:64) - fairseq/models/roberta/model.py (81:87) duplicated block id: 601 size: 7 cleaned lines of code in 2 files: - fairseq/models/hubert/hubert_asr.py (304:312) - fairseq/models/wav2vec/wav2vec2_asr.py (479:487) duplicated block id: 602 size: 7 cleaned lines of code in 2 files: - fairseq/models/roberta/enc_dec.py (35:41) - fairseq/models/speech_to_speech/s2s_transformer.py (338:344) duplicated block id: 603 size: 7 cleaned lines of code in 2 files: - fairseq/models/hubert/hubert.py (137:143) - 
fairseq/models/hubert/hubert.py (166:172) duplicated block id: 604 size: 7 cleaned lines of code in 2 files: - fairseq/models/hubert/hubert.py (132:138) - fairseq/models/wav2vec/wav2vec2.py (159:165) duplicated block id: 605 size: 7 cleaned lines of code in 2 files: - fairseq/model_parallel/modules/multihead_attention.py (96:102) - fairseq/modules/multihead_attention.py (243:249) duplicated block id: 606 size: 7 cleaned lines of code in 2 files: - fairseq/models/hubert/hubert_asr.py (92:98) - fairseq/models/wav2vec/wav2vec2.py (197:203) duplicated block id: 607 size: 7 cleaned lines of code in 2 files: - fairseq/models/wav2vec/wav2vec2.py (1220:1226) - fairseq/modules/conformer_layer.py (290:296) duplicated block id: 608 size: 7 cleaned lines of code in 2 files: - fairseq/models/bart/model.py (124:132) - fairseq/models/text_to_speech/fastspeech2.py (353:361) duplicated block id: 609 size: 7 cleaned lines of code in 2 files: - fairseq/models/fconv.py (313:320) - fairseq/models/hubert/hubert_asr.py (334:341) duplicated block id: 610 size: 7 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/hub_interface.py (63:69) - fairseq/models/text_to_speech/hub_interface.py (118:124) duplicated block id: 611 size: 7 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/convtransformer.py (390:396) - fairseq/models/transformer/transformer_decoder.py (239:245) duplicated block id: 612 size: 7 cleaned lines of code in 2 files: - fairseq/models/roberta/enc_dec.py (35:41) - fairseq/models/speech_to_text/convtransformer.py (122:128) duplicated block id: 613 size: 7 cleaned lines of code in 2 files: - fairseq/criterions/speech_to_speech_criterion.py (247:254) - fairseq/criterions/tacotron2_loss.py (122:129) duplicated block id: 614 size: 7 cleaned lines of code in 2 files: - fairseq/models/roberta/model.py (76:82) - fairseq/models/speech_to_text/convtransformer.py (43:49) duplicated block id: 615 size: 7 cleaned lines of code in 2 files: - 
fairseq/models/fairseq_model.py (242:271) - fairseq/models/speech_to_text/s2t_transformer.py (106:114) duplicated block id: 616 size: 7 cleaned lines of code in 2 files: - fairseq/tasks/multilingual_denoising.py (217:224) - fairseq/tasks/multilingual_language_modeling.py (450:456) duplicated block id: 617 size: 7 cleaned lines of code in 2 files: - fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu (15:21) - fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu (79:85) duplicated block id: 618 size: 7 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/s2t_transformer.py (440:446) - fairseq/models/transformer/transformer_decoder.py (239:245) duplicated block id: 619 size: 7 cleaned lines of code in 2 files: - fairseq/models/lightconv_lm.py (216:222) - fairseq/models/transformer_lm.py (274:280) duplicated block id: 620 size: 7 cleaned lines of code in 2 files: - fairseq/models/roberta/enc_dec.py (35:41) - fairseq/models/speech_to_text/s2t_transformer.py (214:220) duplicated block id: 621 size: 7 cleaned lines of code in 2 files: - fairseq/data/multilingual/multilingual_data_manager.py (734:740) - fairseq/tasks/multilingual_translation.py (178:184) duplicated block id: 622 size: 7 cleaned lines of code in 2 files: - fairseq/models/transformer/transformer_decoder.py (91:97) - fairseq/models/wav2vec/wav2vec2_asr.py (569:576) duplicated block id: 623 size: 7 cleaned lines of code in 2 files: - fairseq/models/multilingual_transformer.py (93:100) - fairseq/models/transformer/transformer_base.py (106:114) duplicated block id: 624 size: 7 cleaned lines of code in 2 files: - fairseq/models/fairseq_model.py (63:70) - fairseq/models/transformer/transformer_base.py (165:172) duplicated block id: 625 size: 7 cleaned lines of code in 2 files: - fairseq/config/model/transformer_lm/transformer_lm_baevski_wiki103.yaml (17:23) - fairseq/config/model/transformer_lm/transformer_lm_gpt2_medium.yaml (17:23) duplicated block id: 626 size: 7 cleaned lines of code in 
2 files: - fairseq/modules/dynamicconv_layer/dynamicconv_cuda_kernel.cu (1:8) - fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu (1:8) duplicated block id: 627 size: 7 cleaned lines of code in 2 files: - fairseq/models/roberta/model.py (86:92) - fairseq/models/speech_to_text/s2t_transformer.py (183:189) duplicated block id: 628 size: 7 cleaned lines of code in 2 files: - fairseq/optim/cpu_adam.py (89:95) - fairseq/optim/fused_adam.py (72:78) duplicated block id: 629 size: 7 cleaned lines of code in 2 files: - fairseq/model_parallel/models/roberta/model.py (79:86) - fairseq/models/roberta/model.py (242:249) duplicated block id: 630 size: 7 cleaned lines of code in 2 files: - fairseq/data/audio/frm_text_to_speech_dataset.py (187:193) - fairseq/data/audio/speech_to_text_joint_dataset.py (264:270) duplicated block id: 631 size: 7 cleaned lines of code in 2 files: - fairseq/models/bart/model.py (124:132) - fairseq/models/fairseq_model.py (242:271) duplicated block id: 632 size: 7 cleaned lines of code in 2 files: - fairseq/tasks/speech_to_text.py (124:130) - fairseq/tasks/translation_multi_simple_epoch.py (199:205) duplicated block id: 633 size: 7 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (642:648) - fairseq/models/lightconv.py (651:657) duplicated block id: 634 size: 7 cleaned lines of code in 2 files: - fairseq/modules/dynamicconv_layer/dynamicconv_cuda.cpp (37:43) - fairseq/modules/lightconv_layer/lightconv_cuda.cpp (37:43) duplicated block id: 635 size: 7 cleaned lines of code in 2 files: - fairseq/tasks/multilingual_language_modeling.py (194:201) - fairseq/tasks/multilingual_masked_lm.py (158:165) duplicated block id: 636 size: 7 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (593:601) - fairseq/models/transformer/transformer_decoder.py (380:389) duplicated block id: 637 size: 7 cleaned lines of code in 2 files: - fairseq/tasks/multilingual_language_modeling.py (538:544) - fairseq/tasks/multilingual_masked_lm.py 
(222:228) duplicated block id: 638 size: 7 cleaned lines of code in 2 files: - fairseq/criterions/adaptive_loss.py (86:93) - fairseq/criterions/legacy_masked_lm.py (129:136) duplicated block id: 639 size: 7 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (137:143) - fairseq/models/speech_to_speech/s2s_transformer.py (482:488) duplicated block id: 640 size: 7 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (180:186) - fairseq/models/lightconv_lm.py (170:178) duplicated block id: 641 size: 7 cleaned lines of code in 2 files: - fairseq/tasks/speech_to_speech.py (301:307) - fairseq/tasks/speech_to_text.py (124:130) duplicated block id: 642 size: 7 cleaned lines of code in 2 files: - fairseq/models/bart/model.py (53:59) - fairseq/models/roberta/model.py (112:118) duplicated block id: 643 size: 7 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (750:756) - fairseq/models/lightconv.py (759:765) duplicated block id: 644 size: 7 cleaned lines of code in 2 files: - fairseq/models/roberta/model_gottbert.py (21:28) - fairseq/models/roberta/model_xlmr.py (24:31) duplicated block id: 645 size: 7 cleaned lines of code in 2 files: - fairseq/models/masked_lm.py (51:57) - fairseq/models/roberta/model.py (97:103) duplicated block id: 646 size: 7 cleaned lines of code in 2 files: - fairseq/modules/dynamic_convolution.py (112:118) - fairseq/modules/lightweight_convolution.py (165:171) duplicated block id: 647 size: 7 cleaned lines of code in 2 files: - fairseq/models/roberta/model.py (365:373) - fairseq/models/speech_to_text/xm_transformer.py (469:477) duplicated block id: 648 size: 7 cleaned lines of code in 2 files: - fairseq/tasks/denoising.py (171:177) - fairseq/tasks/multilingual_denoising.py (140:146) duplicated block id: 649 size: 7 cleaned lines of code in 2 files: - fairseq/optim/adam.py (183:193) - fairseq/optim/adamax.py (126:135) duplicated block id: 650 size: 7 cleaned lines of code in 2 files: - fairseq/models/lightconv.py 
(220:226) - fairseq/models/lightconv_lm.py (175:183) duplicated block id: 651 size: 7 cleaned lines of code in 2 files: - fairseq/models/roberta/model.py (365:373) - fairseq/models/speech_to_text/s2t_transformer.py (106:114) duplicated block id: 652 size: 7 cleaned lines of code in 2 files: - fairseq/tasks/language_modeling.py (257:263) - fairseq/tasks/multilingual_language_modeling.py (370:376) duplicated block id: 653 size: 7 cleaned lines of code in 2 files: - fairseq/modules/quantization/pq/modules/qconv.py (70:76) - fairseq/modules/quantization/pq/modules/qlinear.py (42:48) duplicated block id: 654 size: 7 cleaned lines of code in 2 files: - fairseq/file_io.py (45:51) - fairseq/file_io.py (158:164) duplicated block id: 655 size: 7 cleaned lines of code in 2 files: - fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py (199:209) - fairseq/models/wav2vec/wav2vec2_asr.py (667:677) duplicated block id: 656 size: 7 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (269:276) - fairseq/models/transformer/transformer_base.py (106:114) duplicated block id: 657 size: 7 cleaned lines of code in 2 files: - fairseq/models/hubert/hubert.py (242:249) - fairseq/models/wav2vec/wav2vec2.py (315:322) duplicated block id: 658 size: 7 cleaned lines of code in 2 files: - fairseq/data/audio/speech_to_speech_dataset.py (144:150) - fairseq/data/audio/speech_to_text_dataset.py (310:316) duplicated block id: 659 size: 7 cleaned lines of code in 2 files: - fairseq/config/model/transformer_lm/transformer_lm_gpt2_medium.yaml (17:23) - fairseq/config/model/transformer_lm/transformer_lm_wiki103.yaml (17:23) duplicated block id: 660 size: 7 cleaned lines of code in 2 files: - fairseq/config/model/transformer_lm/transformer_lm_big.yaml (17:23) - fairseq/config/model/transformer_lm/transformer_lm_wiki103.yaml (17:23) duplicated block id: 661 size: 7 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/convtransformer.py (199:206) - 
fairseq/models/speech_to_text/s2t_transformer.py (282:289) duplicated block id: 662 size: 7 cleaned lines of code in 2 files: - fairseq/benchmark/dummy_masked_lm.py (86:94) - fairseq/benchmark/dummy_mt.py (77:85) duplicated block id: 663 size: 7 cleaned lines of code in 2 files: - fairseq/data/text_compressor.py (28:37) - fairseq/data/text_compressor.py (48:55) duplicated block id: 664 size: 7 cleaned lines of code in 2 files: - fairseq/models/text_to_speech/hifigan.py (24:30) - fairseq/models/text_to_speech/hifigan.py (60:66) duplicated block id: 665 size: 7 cleaned lines of code in 2 files: - fairseq/data/denoising_dataset.py (323:331) - fairseq/data/denoising_dataset.py (338:346) duplicated block id: 666 size: 7 cleaned lines of code in 2 files: - fairseq/tasks/speech_to_speech.py (301:307) - fairseq/tasks/translation_multi_simple_epoch.py (199:205) duplicated block id: 667 size: 7 cleaned lines of code in 2 files: - fairseq/benchmark/dummy_lm.py (75:83) - fairseq/benchmark/dummy_mt.py (77:85) duplicated block id: 668 size: 7 cleaned lines of code in 2 files: - fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu (48:55) - fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu (322:329) duplicated block id: 669 size: 7 cleaned lines of code in 2 files: - fairseq/criterions/adaptive_loss.py (101:107) - fairseq/criterions/sentence_ranking.py (99:105) duplicated block id: 670 size: 7 cleaned lines of code in 2 files: - fairseq/tasks/denoising.py (171:177) - fairseq/tasks/multilingual_masked_lm.py (180:186) duplicated block id: 671 size: 7 cleaned lines of code in 2 files: - fairseq_cli/generate.py (128:134) - fairseq_cli/interactive.py (160:166) duplicated block id: 672 size: 7 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (137:143) - fairseq/models/speech_to_speech/s2s_transformer.py (307:313) duplicated block id: 673 size: 7 cleaned lines of code in 2 files: - fairseq/models/fairseq_decoder.py (59:66) - 
fairseq/models/transformer/transformer_base.py (165:172) duplicated block id: 674 size: 7 cleaned lines of code in 2 files: - fairseq/config/model/transformer_lm/transformer_lm_baevski_wiki103.yaml (17:23) - fairseq/config/model/transformer_lm/transformer_lm_gpt2_small.yaml (17:23) duplicated block id: 675 size: 7 cleaned lines of code in 2 files: - fairseq/models/roberta/model.py (86:92) - fairseq/models/speech_to_speech/s2s_transformer.py (482:488) duplicated block id: 676 size: 7 cleaned lines of code in 2 files: - fairseq/config/model/transformer_lm/transformer_lm_baevski_wiki103.yaml (17:23) - fairseq/config/model/transformer_lm/transformer_lm_gpt2_big.yaml (17:23) duplicated block id: 677 size: 7 cleaned lines of code in 2 files: - fairseq/models/fairseq_decoder.py (59:66) - fairseq/models/speech_to_text/xm_transformer.py (565:571) duplicated block id: 678 size: 7 cleaned lines of code in 2 files: - fairseq/models/nat/levenshtein_utils.py (44:50) - fairseq/models/nat/levenshtein_utils.py (113:119) duplicated block id: 679 size: 7 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/xm_transformer.py (565:571) - fairseq/models/transformer/transformer_base.py (165:172) duplicated block id: 680 size: 7 cleaned lines of code in 2 files: - fairseq/modules/transformer_sentence_encoder.py (169:175) - fairseq/modules/transformer_sentence_encoder.py (219:225) duplicated block id: 681 size: 7 cleaned lines of code in 2 files: - fairseq/modules/sparse_transformer_sentence_encoder.py (75:81) - fairseq/modules/transformer_sentence_encoder.py (216:222) duplicated block id: 682 size: 7 cleaned lines of code in 2 files: - fairseq/modules/dynamicconv_layer/dynamicconv_layer.py (54:60) - fairseq/modules/lightconv_layer/lightconv_layer.py (48:54) duplicated block id: 683 size: 7 cleaned lines of code in 2 files: - fairseq/models/wav2vec/wav2vec2.py (991:998) - fairseq/models/wav2vec/wav2vec2.py (1121:1128) duplicated block id: 684 size: 7 cleaned lines of code in 2 
files: - fairseq/models/speech_to_text/modules/emformer.py (1576:1584) - fairseq/models/speech_to_text/modules/emformer.py (1710:1718) duplicated block id: 685 size: 7 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (137:143) - fairseq/models/speech_to_text/s2t_transformer.py (183:189) duplicated block id: 686 size: 7 cleaned lines of code in 2 files: - fairseq/models/nat/cmlm_transformer.py (39:46) - fairseq/models/nat/iterative_nonautoregressive_transformer.py (94:102) duplicated block id: 687 size: 7 cleaned lines of code in 2 files: - fairseq/models/fairseq_model.py (63:70) - fairseq/models/speech_to_text/xm_transformer.py (565:571) duplicated block id: 688 size: 7 cleaned lines of code in 2 files: - fairseq/data/monolingual_dataset.py (238:246) - fairseq/data/subsample_dataset.py (61:69) duplicated block id: 689 size: 7 cleaned lines of code in 2 files: - fairseq/models/transformer/transformer_decoder.py (380:389) - fairseq/models/wav2vec/wav2vec2_asr.py (717:725) duplicated block id: 690 size: 7 cleaned lines of code in 2 files: - fairseq/model_parallel/models/roberta/model.py (88:96) - fairseq/models/roberta/model.py (251:259) duplicated block id: 691 size: 7 cleaned lines of code in 2 files: - fairseq/models/fconv.py (313:320) - fairseq/models/lightconv.py (412:419) duplicated block id: 692 size: 7 cleaned lines of code in 2 files: - fairseq/config/model/transformer_lm/transformer_lm_baevski_gbw.yaml (17:23) - fairseq/config/model/transformer_lm/transformer_lm_baevski_wiki103.yaml (17:23) duplicated block id: 693 size: 7 cleaned lines of code in 2 files: - fairseq/models/nat/iterative_nonautoregressive_transformer.py (94:102) - fairseq/models/nat/nat_crf_transformer.py (49:57) duplicated block id: 694 size: 7 cleaned lines of code in 2 files: - fairseq/models/fconv.py (13:20) - fairseq/models/lightconv.py (13:20) duplicated block id: 695 size: 7 cleaned lines of code in 2 files: - fairseq/models/masked_lm.py (51:57) - 
fairseq/models/speech_to_text/s2t_transformer.py (147:153) duplicated block id: 696 size: 7 cleaned lines of code in 2 files: - fairseq/models/bart/model.py (287:293) - fairseq/models/roberta/model.py (498:504) duplicated block id: 697 size: 7 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (137:143) - fairseq/models/speech_to_text/convtransformer.py (85:91) duplicated block id: 698 size: 7 cleaned lines of code in 2 files: - fairseq/models/roberta/model.py (365:373) - fairseq/models/text_to_speech/fastspeech2.py (353:361) duplicated block id: 699 size: 7 cleaned lines of code in 2 files: - fairseq/models/roberta/model.py (86:92) - fairseq/models/speech_to_text/convtransformer.py (85:91) duplicated block id: 700 size: 7 cleaned lines of code in 2 files: - fairseq/models/roberta/model_camembert.py (28:35) - fairseq/models/roberta/model_gottbert.py (21:28) duplicated block id: 701 size: 7 cleaned lines of code in 2 files: - fairseq/tasks/speech_to_speech.py (420:426) - fairseq/tasks/text_to_speech.py (212:218) duplicated block id: 702 size: 7 cleaned lines of code in 2 files: - fairseq/modules/dynamicconv_layer/dynamicconv_cuda_kernel.cu (45:51) - fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu (48:55) duplicated block id: 703 size: 7 cleaned lines of code in 2 files: - fairseq/data/audio/speech_to_text_dataset.py (500:506) - fairseq/data/audio/speech_to_text_joint_dataset.py (297:303) duplicated block id: 704 size: 7 cleaned lines of code in 2 files: - fairseq/models/masked_lm.py (131:139) - fairseq/models/speech_to_text/convtransformer.py (43:49) duplicated block id: 705 size: 7 cleaned lines of code in 2 files: - fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py (136:143) - fairseq/models/transformer/transformer_decoder.py (91:97) duplicated block id: 706 size: 7 cleaned lines of code in 2 files: - fairseq/binarizer.py (174:180) - fairseq/binarizer.py (206:212) duplicated block id: 707 size: 7 cleaned lines of code in 2 
files: - fairseq/models/lightconv.py (137:143) - fairseq/models/roberta/model.py (86:92) duplicated block id: 708 size: 7 cleaned lines of code in 2 files: - fairseq/models/masked_lm.py (51:57) - fairseq/models/speech_to_speech/s2s_transformer.py (446:452) duplicated block id: 709 size: 7 cleaned lines of code in 2 files: - fairseq/config/model/transformer_lm/transformer_lm_baevski_wiki103.yaml (17:23) - fairseq/config/model/transformer_lm/transformer_lm_gpt.yaml (17:23) duplicated block id: 710 size: 7 cleaned lines of code in 2 files: - fairseq/model_parallel/models/pipeline_parallel_transformer/model.py (403:409) - fairseq/model_parallel/models/pipeline_parallel_transformer/model.py (431:437) duplicated block id: 711 size: 7 cleaned lines of code in 2 files: - fairseq/models/hubert/hubert_asr.py (72:78) - fairseq/models/hubert/hubert_asr.py (97:103) duplicated block id: 712 size: 7 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (208:216) - fairseq/models/lightconv_lm.py (93:99) duplicated block id: 713 size: 7 cleaned lines of code in 2 files: - fairseq/data/transform_eos_dataset.py (110:120) - fairseq/data/transform_eos_lang_pair_dataset.py (105:113) duplicated block id: 714 size: 7 cleaned lines of code in 2 files: - fairseq/models/nat/cmlm_transformer.py (64:71) - fairseq/models/nat/nat_crf_transformer.py (81:88) duplicated block id: 715 size: 7 cleaned lines of code in 2 files: - fairseq/models/masked_lm.py (51:57) - fairseq/models/speech_to_text/convtransformer.py (49:55) duplicated block id: 716 size: 7 cleaned lines of code in 2 files: - fairseq/data/fairseq_dataset.py (104:110) - fairseq/data/multi_corpus_dataset.py (222:228) duplicated block id: 717 size: 7 cleaned lines of code in 2 files: - fairseq/models/roberta/model.py (91:97) - fairseq/models/speech_to_speech/s2s_transformer.py (343:349) duplicated block id: 718 size: 7 cleaned lines of code in 2 files: - fairseq/optim/cpu_adam.py (121:129) - fairseq/optim/fused_adam.py (273:281) 
duplicated block id: 719 size: 7 cleaned lines of code in 2 files: - fairseq/binarizer.py (334:340) - fairseq/binarizer.py (352:358) duplicated block id: 720 size: 7 cleaned lines of code in 2 files: - fairseq/data/audio/frm_text_to_speech_dataset.py (59:65) - fairseq/data/audio/speech_to_text_joint_dataset.py (264:270) duplicated block id: 721 size: 7 cleaned lines of code in 2 files: - fairseq/models/nat/iterative_nonautoregressive_transformer.py (154:160) - fairseq/models/nat/nat_crf_transformer.py (71:77) duplicated block id: 722 size: 7 cleaned lines of code in 2 files: - fairseq/models/hubert/hubert.py (363:369) - fairseq/models/wav2vec/wav2vec2.py (591:597) duplicated block id: 723 size: 7 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/berard.py (115:121) - fairseq/models/speech_to_text/xm_transformer.py (421:427) duplicated block id: 724 size: 7 cleaned lines of code in 2 files: - fairseq/models/nat/nat_crf_transformer.py (81:88) - fairseq/models/nat/nonautoregressive_transformer.py (111:118) duplicated block id: 725 size: 7 cleaned lines of code in 2 files: - fairseq/data/encoders/utils.py (19:26) - fairseq/tasks/multilingual_masked_lm.py (127:134) duplicated block id: 726 size: 7 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (458:465) - fairseq/models/transformer/transformer_decoder.py (91:97) duplicated block id: 727 size: 7 cleaned lines of code in 2 files: - fairseq/models/fairseq_model.py (242:271) - fairseq/models/text_to_speech/fastspeech2.py (353:361) duplicated block id: 728 size: 7 cleaned lines of code in 2 files: - fairseq/models/nat/iterative_nonautoregressive_transformer.py (94:102) - fairseq/models/nat/nonautoregressive_transformer.py (86:94) duplicated block id: 729 size: 7 cleaned lines of code in 2 files: - fairseq/config/model/transformer_lm/transformer_lm_gpt2_small.yaml (17:23) - fairseq/config/model/transformer_lm/transformer_lm_wiki103.yaml (17:23) duplicated block id: 730 size: 7 cleaned lines of 
code in 2 files: - fairseq/models/bart/model.py (124:132) - fairseq/models/speech_to_text/s2t_transformer.py (106:114) duplicated block id: 731 size: 7 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (340:347) - fairseq/models/lightconv.py (470:477) duplicated block id: 732 size: 7 cleaned lines of code in 2 files: - fairseq/models/fairseq_model.py (242:271) - fairseq/models/speech_to_text/xm_transformer.py (469:477) duplicated block id: 733 size: 7 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (185:191) - fairseq/models/speech_to_text/xm_transformer.py (405:411) duplicated block id: 734 size: 7 cleaned lines of code in 2 files: - fairseq/data/audio/multi_modality_dataset.py (217:225) - fairseq/data/transform_eos_lang_pair_dataset.py (105:113) duplicated block id: 735 size: 7 cleaned lines of code in 2 files: - fairseq/config/model/transformer_lm/transformer_lm_baevski_wiki103.yaml (17:23) - fairseq/config/model/transformer_lm/transformer_lm_gbw.yaml (17:23) duplicated block id: 736 size: 7 cleaned lines of code in 2 files: - fairseq/file_io.py (54:60) - fairseq/file_io.py (178:184) duplicated block id: 737 size: 7 cleaned lines of code in 2 files: - fairseq/data/multilingual/multilingual_data_manager.py (1099:1105) - fairseq/data/multilingual/multilingual_data_manager.py (1124:1130) duplicated block id: 738 size: 7 cleaned lines of code in 2 files: - fairseq/criterions/label_smoothed_cross_entropy.py (71:84) - fairseq/criterions/label_smoothed_cross_entropy_with_alignment.py (39:52) duplicated block id: 739 size: 7 cleaned lines of code in 2 files: - fairseq/models/masked_lm.py (51:57) - fairseq/models/speech_to_speech/s2s_transformer.py (271:277) duplicated block id: 740 size: 7 cleaned lines of code in 2 files: - fairseq/models/text_to_speech/fastspeech2.py (277:283) - fairseq/models/text_to_speech/tts_transformer.py (175:181) duplicated block id: 741 size: 7 cleaned lines of code in 2 files: - fairseq/models/fconv_lm.py (51:57) 
- fairseq/models/lightconv.py (208:216) duplicated block id: 742 size: 7 cleaned lines of code in 2 files: - fairseq/models/roberta/model.py (365:373) - fairseq/models/text_to_speech/tts_transformer.py (347:355) duplicated block id: 743 size: 7 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (185:191) - fairseq/models/speech_to_text/convtransformer.py (111:117) duplicated block id: 744 size: 7 cleaned lines of code in 2 files: - fairseq/data/multilingual/multilingual_data_manager.py (570:577) - fairseq/tasks/translation.py (104:111) duplicated block id: 745 size: 7 cleaned lines of code in 2 files: - fairseq/models/text_to_speech/tacotron2.py (323:329) - fairseq/models/text_to_speech/tts_transformer.py (382:389) duplicated block id: 746 size: 7 cleaned lines of code in 2 files: - fairseq/criterions/adaptive_loss.py (96:103) - fairseq/criterions/label_smoothed_cross_entropy_with_alignment.py (102:109) duplicated block id: 747 size: 7 cleaned lines of code in 2 files: - fairseq/speech_generator.py (120:126) - fairseq/speech_generator.py (174:180) duplicated block id: 748 size: 7 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (933:940) - fairseq/models/speech_to_text/xm_transformer.py (653:660) duplicated block id: 749 size: 7 cleaned lines of code in 2 files: - fairseq/data/denoising_dataset.py (404:419) - fairseq/data/monolingual_dataset.py (226:241) duplicated block id: 750 size: 7 cleaned lines of code in 2 files: - fairseq/data/mask_tokens_dataset.py (101:109) - fairseq/data/shorten_dataset.py (42:50) duplicated block id: 751 size: 7 cleaned lines of code in 2 files: - fairseq/models/roberta/model.py (91:97) - fairseq/models/speech_to_text/s2t_transformer.py (219:225) duplicated block id: 752 size: 7 cleaned lines of code in 2 files: - fairseq/tasks/language_modeling.py (139:146) - fairseq/tasks/multilingual_language_modeling.py (182:189) duplicated block id: 753 size: 7 cleaned lines of code in 2 files: - 
fairseq/config/model/transformer_lm/transformer_lm_baevski_gbw.yaml (17:23) - fairseq/config/model/transformer_lm/transformer_lm_wiki103.yaml (17:23) duplicated block id: 754 size: 7 cleaned lines of code in 2 files: - fairseq/modules/learned_positional_embedding.py (55:61) - fairseq/modules/quantization/pq/modules/qemb.py (85:91) duplicated block id: 755 size: 7 cleaned lines of code in 2 files: - fairseq/file_io.py (63:70) - fairseq/file_io.py (179:186) duplicated block id: 756 size: 7 cleaned lines of code in 2 files: - fairseq/models/bart/model.py (124:132) - fairseq/models/speech_to_text/xm_transformer.py (469:477) duplicated block id: 757 size: 7 cleaned lines of code in 2 files: - fairseq/modules/quantization/pq/modules/qconv.py (40:46) - fairseq/modules/quantization/scalar/modules/qconv.py (36:42) duplicated block id: 758 size: 7 cleaned lines of code in 2 files: - fairseq/config/model/transformer_lm/transformer_lm_baevski_wiki103.yaml (17:23) - fairseq/config/model/transformer_lm/transformer_lm_big.yaml (17:23) duplicated block id: 759 size: 7 cleaned lines of code in 2 files: - fairseq/models/bart/model.py (124:132) - fairseq/models/text_to_speech/tts_transformer.py (347:355) duplicated block id: 760 size: 7 cleaned lines of code in 2 files: - fairseq/models/text_to_speech/tacotron2.py (368:374) - fairseq/models/text_to_speech/tts_transformer.py (439:446) duplicated block id: 761 size: 7 cleaned lines of code in 2 files: - fairseq/models/bart/model.py (116:122) - fairseq/models/roberta/model.py (358:364) duplicated block id: 762 size: 7 cleaned lines of code in 2 files: - fairseq/models/text_to_speech/fastspeech2.py (277:283) - fairseq/models/text_to_speech/tacotron2.py (190:196) duplicated block id: 763 size: 7 cleaned lines of code in 2 files: - fairseq_cli/generate.py (153:159) - fairseq_cli/validate.py (106:112) duplicated block id: 764 size: 7 cleaned lines of code in 2 files: - fairseq/criterions/speech_to_speech_criterion.py (232:238) - 
fairseq/criterions/tacotron2_loss.py (104:110) duplicated block id: 765 size: 7 cleaned lines of code in 2 files: - fairseq/models/speech_to_speech/s2s_transformer.py (515:522) - fairseq/models/text_to_speech/tacotron2.py (323:329) duplicated block id: 766 size: 7 cleaned lines of code in 2 files: - fairseq/models/hubert/hubert.py (161:167) - fairseq/models/wav2vec/wav2vec2.py (197:203) duplicated block id: 767 size: 7 cleaned lines of code in 2 files: - fairseq/tasks/audio_pretraining.py (152:158) - fairseq/tasks/audio_pretraining.py (166:172) duplicated block id: 768 size: 7 cleaned lines of code in 2 files: - fairseq/models/wav2vec/wav2vec.py (545:552) - fairseq/models/wav2vec/wav2vec2.py (512:519) duplicated block id: 769 size: 7 cleaned lines of code in 2 files: - fairseq/tasks/multilingual_translation.py (81:88) - fairseq/tasks/online_backtranslation.py (114:121) duplicated block id: 770 size: 7 cleaned lines of code in 2 files: - fairseq/models/transformer_lm.py (81:87) - fairseq/models/wav2vec/wav2vec2_asr.py (274:280) duplicated block id: 771 size: 7 cleaned lines of code in 2 files: - fairseq/models/speech_to_speech/s2s_transformer.py (675:682) - fairseq/models/text_to_speech/tacotron2.py (368:374) duplicated block id: 772 size: 7 cleaned lines of code in 2 files: - fairseq/data/audio/speech_to_text_dataset.py (331:337) - fairseq/data/audio/text_to_speech_dataset.py (134:140) duplicated block id: 773 size: 7 cleaned lines of code in 2 files: - fairseq/models/roberta/model.py (91:97) - fairseq/models/speech_to_text/convtransformer.py (127:133) duplicated block id: 774 size: 7 cleaned lines of code in 2 files: - fairseq/tasks/frm_text_to_speech.py (39:45) - fairseq/tasks/text_to_speech.py (85:91) duplicated block id: 775 size: 7 cleaned lines of code in 2 files: - fairseq/data/base_wrapper_dataset.py (54:60) - fairseq/data/multi_corpus_dataset.py (222:228) duplicated block id: 776 size: 7 cleaned lines of code in 2 files: - 
fairseq/criterions/sentence_prediction.py (94:102) - fairseq/criterions/sentence_ranking.py (89:97) duplicated block id: 777 size: 7 cleaned lines of code in 2 files: - fairseq/tasks/audio_pretraining.py (66:72) - fairseq/tasks/hubert_pretraining.py (67:73) duplicated block id: 778 size: 7 cleaned lines of code in 2 files: - fairseq/ngram_repeat_block.py (52:58) - fairseq/ngram_repeat_block.py (64:70) duplicated block id: 779 size: 7 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/modules/emformer.py (586:592) - fairseq/models/speech_to_text/modules/emformer.py (698:704) duplicated block id: 780 size: 7 cleaned lines of code in 2 files: - fairseq/model_parallel/models/pipeline_parallel_transformer/model.py (281:287) - fairseq/model_parallel/models/pipeline_parallel_transformer/model.py (294:300) duplicated block id: 781 size: 7 cleaned lines of code in 2 files: - fairseq_cli/generate.py (97:103) - fairseq_cli/interactive.py (146:152) duplicated block id: 782 size: 7 cleaned lines of code in 2 files: - fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py (440:446) - fairseq/models/lightconv.py (783:789) duplicated block id: 783 size: 7 cleaned lines of code in 2 files: - fairseq/config/model/transformer_lm/transformer_lm_gpt2_big.yaml (17:23) - fairseq/config/model/transformer_lm/transformer_lm_wiki103.yaml (17:23) duplicated block id: 784 size: 6 cleaned lines of code in 2 files: - fairseq/models/hubert/hubert_asr.py (356:361) - fairseq/models/transformer/transformer_decoder.py (448:453) duplicated block id: 785 size: 6 cleaned lines of code in 2 files: - fairseq/data/denoising_dataset.py (25:31) - fairseq/data/language_pair_dataset.py (26:32) duplicated block id: 786 size: 6 cleaned lines of code in 2 files: - fairseq/model_parallel/criterions/vocab_parallel_cross_entropy.py (27:32) - fairseq/model_parallel/models/transformer.py (44:49) duplicated block id: 787 size: 6 cleaned lines of code in 2 files: - 
fairseq/model_parallel/models/roberta/model.py (31:38) - fairseq/model_parallel/models/transformer.py (25:33) duplicated block id: 788 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/denoising.py (268:276) - fairseq/tasks/sentence_ranking.py (213:219) duplicated block id: 789 size: 6 cleaned lines of code in 2 files: - fairseq/optim/adamax.py (99:105) - fairseq/optim/fused_adam.py (275:281) duplicated block id: 790 size: 6 cleaned lines of code in 2 files: - fairseq/modules/sparse_transformer_sentence_encoder.py (24:29) - fairseq/modules/transformer_sentence_encoder_layer.py (24:29) duplicated block id: 791 size: 6 cleaned lines of code in 2 files: - fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py (561:566) - fairseq/models/wav2vec/wav2vec2_asr.py (723:728) duplicated block id: 792 size: 6 cleaned lines of code in 2 files: - fairseq/models/roberta/model_xlmr.py (34:41) - fairseq/models/speech_to_text/xm_transformer.py (470:477) duplicated block id: 793 size: 6 cleaned lines of code in 2 files: - fairseq/data/audio/speech_to_speech_dataset.py (325:330) - fairseq/data/audio/speech_to_text_dataset.py (394:399) duplicated block id: 794 size: 6 cleaned lines of code in 2 files: - fairseq/models/roberta/model_camembert.py (38:45) - fairseq/models/text_to_speech/fastspeech2.py (354:361) duplicated block id: 795 size: 6 cleaned lines of code in 2 files: - fairseq/models/roberta/model_gottbert.py (23:28) - fairseq/models/text_to_speech/tts_transformer.py (338:343) duplicated block id: 796 size: 6 cleaned lines of code in 2 files: - fairseq/data/multilingual/multilingual_data_manager.py (360:365) - fairseq/tasks/multilingual_translation.py (129:134) duplicated block id: 797 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/cross_lingual_lm.py (121:126) - fairseq/tasks/translation.py (78:83) duplicated block id: 798 size: 6 cleaned lines of code in 2 files: - fairseq/models/roberta/model_gottbert.py (34:41) - 
fairseq/models/text_to_speech/tts_transformer.py (348:355) duplicated block id: 799 size: 6 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/berard.py (134:140) - fairseq/models/speech_to_text/convtransformer.py (160:166) duplicated block id: 800 size: 6 cleaned lines of code in 2 files: - fairseq/optim/lr_scheduler/fixed_schedule.py (44:51) - fairseq/optim/lr_scheduler/manual_lr_scheduler.py (75:82) duplicated block id: 801 size: 6 cleaned lines of code in 2 files: - fairseq/models/speech_to_speech/s2s_transformer.py (632:637) - fairseq/models/transformer/transformer_legacy.py (178:183) duplicated block id: 802 size: 6 cleaned lines of code in 2 files: - fairseq/modules/dynamicconv_layer/dynamicconv_cuda_kernel.cu (17:22) - fairseq/modules/dynamicconv_layer/dynamicconv_cuda_kernel.cu (78:83) duplicated block id: 803 size: 6 cleaned lines of code in 2 files: - fairseq/optim/cpu_adam.py (66:77) - fairseq/optim/nag.py (32:43) duplicated block id: 804 size: 6 cleaned lines of code in 2 files: - fairseq/models/nat/levenshtein_transformer.py (440:445) - fairseq/models/speech_to_speech/s2s_transformer.py (632:637) duplicated block id: 805 size: 6 cleaned lines of code in 2 files: - fairseq/models/roberta/model.py (358:363) - fairseq/models/roberta/model_gottbert.py (23:28) duplicated block id: 806 size: 6 cleaned lines of code in 2 files: - fairseq/modules/dynamicconv_layer/dynamicconv_cuda.cuh (31:36) - fairseq/modules/dynamicconv_layer/dynamicconv_cuda_kernel.cu (78:83) duplicated block id: 807 size: 6 cleaned lines of code in 2 files: - fairseq_cli/train.py (262:267) - fairseq_cli/train.py (447:452) duplicated block id: 808 size: 6 cleaned lines of code in 2 files: - fairseq/modules/lightconv_layer/cuda_function_gen.py (98:103) - fairseq/modules/lightconv_layer/cuda_function_gen.py (107:112) duplicated block id: 809 size: 6 cleaned lines of code in 2 files: - fairseq/modules/quantization/pq/modules/qconv.py (95:100) - 
fairseq/modules/quantization/scalar/modules/qconv.py (89:94) duplicated block id: 810 size: 6 cleaned lines of code in 2 files: - fairseq/benchmark/dummy_mt.py (79:85) - fairseq/tasks/denoising.py (268:276) duplicated block id: 811 size: 6 cleaned lines of code in 2 files: - fairseq/data/audio/speech_to_text_joint_dataset.py (303:308) - fairseq/data/audio/speech_to_text_joint_dataset.py (319:324) duplicated block id: 812 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/masked_lm.py (148:153) - fairseq/tasks/multilingual_masked_lm.py (181:186) duplicated block id: 813 size: 6 cleaned lines of code in 2 files: - fairseq/models/speech_to_speech/s2s_transformer.py (391:396) - fairseq/models/speech_to_speech/s2s_transformer.py (534:539) duplicated block id: 814 size: 6 cleaned lines of code in 2 files: - fairseq/models/masked_lm.py (143:148) - fairseq/models/roberta/model.py (86:91) duplicated block id: 815 size: 6 cleaned lines of code in 2 files: - fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py (386:391) - fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py (577:582) duplicated block id: 816 size: 6 cleaned lines of code in 2 files: - fairseq/models/wav2vec/wav2vec2.py (1249:1256) - fairseq/models/wav2vec/wav2vec2.py (1272:1279) duplicated block id: 817 size: 6 cleaned lines of code in 2 files: - fairseq/models/roberta/model_gottbert.py (23:28) - fairseq/models/speech_to_text/xm_transformer.py (462:467) duplicated block id: 818 size: 6 cleaned lines of code in 2 files: - fairseq/modules/gumbel_vector_quantizer.py (158:163) - fairseq/modules/kmeans_vector_quantizer.py (106:111) duplicated block id: 819 size: 6 cleaned lines of code in 2 files: - fairseq/modules/dynamicconv_layer/dynamicconv_cuda.cuh (1:6) - fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu (1:6) duplicated block id: 820 size: 6 cleaned lines of code in 2 files: - fairseq/model_parallel/models/transformer.py (44:49) - 
fairseq/model_parallel/modules/multihead_attention.py (49:54) duplicated block id: 821 size: 6 cleaned lines of code in 2 files: - fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py (595:600) - fairseq/models/transformer/transformer_decoder.py (448:453) duplicated block id: 822 size: 6 cleaned lines of code in 2 files: - fairseq/models/fairseq_model.py (243:271) - fairseq/models/roberta/model_camembert.py (38:45) duplicated block id: 823 size: 6 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (13:18) - fairseq/models/speech_to_text/berard.py (12:17) duplicated block id: 824 size: 6 cleaned lines of code in 2 files: - fairseq/models/transformer/transformer_decoder.py (300:308) - fairseq/models/wav2vec/wav2vec2_asr.py (659:667) duplicated block id: 825 size: 6 cleaned lines of code in 2 files: - fairseq/models/nat/iterative_nonautoregressive_transformer.py (182:187) - fairseq/models/speech_to_speech/s2s_transformer.py (632:637) duplicated block id: 826 size: 6 cleaned lines of code in 2 files: - fairseq/criterions/fastspeech2_loss.py (90:95) - fairseq/criterions/tacotron2_loss.py (157:162) duplicated block id: 827 size: 6 cleaned lines of code in 2 files: - fairseq_cli/generate.py (73:83) - fairseq_cli/interactive.py (130:140) duplicated block id: 828 size: 6 cleaned lines of code in 2 files: - fairseq/models/roberta/model_camembert.py (30:35) - fairseq/models/speech_to_text/s2t_transformer.py (99:104) duplicated block id: 829 size: 6 cleaned lines of code in 2 files: - fairseq/modules/quantization/pq/modules/qconv.py (37:42) - fairseq/modules/quantization/pq/utils.py (165:170) duplicated block id: 830 size: 6 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (540:548) - fairseq/models/transformer/transformer_decoder.py (300:308) duplicated block id: 831 size: 6 cleaned lines of code in 2 files: - fairseq/models/roberta/model_gottbert.py (34:41) - fairseq/models/text_to_speech/fastspeech2.py (354:361) duplicated block 
id: 832 size: 6 cleaned lines of code in 2 files: - fairseq/models/fairseq_model.py (243:271) - fairseq/models/roberta/model_gottbert.py (34:41) duplicated block id: 833 size: 6 cleaned lines of code in 2 files: - fairseq/criterions/speech_to_speech_criterion.py (58:63) - fairseq/criterions/speech_to_speech_criterion.py (66:71) duplicated block id: 834 size: 6 cleaned lines of code in 2 files: - fairseq/data/multilingual/multilingual_data_manager.py (92:97) - fairseq/tasks/cross_lingual_lm.py (34:40) duplicated block id: 835 size: 6 cleaned lines of code in 2 files: - fairseq/models/wav2vec/wav2vec2.py (1258:1263) - fairseq/modules/transformer_sentence_encoder_layer.py (120:125) duplicated block id: 836 size: 6 cleaned lines of code in 2 files: - fairseq/models/fconv.py (13:18) - fairseq/models/speech_to_text/berard.py (12:17) duplicated block id: 837 size: 6 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/s2t_transformer.py (17:25) - fairseq/models/text_to_speech/tts_transformer.py (18:26) duplicated block id: 838 size: 6 cleaned lines of code in 2 files: - fairseq/models/fairseq_model.py (236:241) - fairseq/models/text_to_speech/tts_transformer.py (338:343) duplicated block id: 839 size: 6 cleaned lines of code in 2 files: - fairseq/model_parallel/models/pipeline_parallel_transformer/model.py (729:734) - fairseq/models/lightconv.py (599:604) duplicated block id: 840 size: 6 cleaned lines of code in 2 files: - fairseq/models/wav2vec/wav2vec2_asr.py (747:752) - fairseq/modules/dynamic_convolution.py (61:66) duplicated block id: 841 size: 6 cleaned lines of code in 2 files: - fairseq/benchmark/dummy_masked_lm.py (88:94) - fairseq/tasks/denoising.py (268:276) duplicated block id: 842 size: 6 cleaned lines of code in 2 files: - fairseq/optim/adagrad.py (21:34) - fairseq/optim/sgd.py (23:36) duplicated block id: 843 size: 6 cleaned lines of code in 2 files: - fairseq/modules/sparse_transformer_sentence_encoder.py (51:56) - 
fairseq/modules/sparse_transformer_sentence_encoder_layer.py (31:36) duplicated block id: 844 size: 6 cleaned lines of code in 2 files: - fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py (386:391) - fairseq/models/lightconv.py (870:875) duplicated block id: 845 size: 6 cleaned lines of code in 2 files: - fairseq/models/nat/insertion_transformer.py (213:219) - fairseq/models/nat/nonautoregressive_transformer.py (210:215) duplicated block id: 846 size: 6 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (916:921) - fairseq/models/speech_to_text/s2t_transformer.py (462:467) duplicated block id: 847 size: 6 cleaned lines of code in 2 files: - fairseq/optim/adam.py (151:157) - fairseq/optim/fused_adam.py (275:281) duplicated block id: 848 size: 6 cleaned lines of code in 2 files: - fairseq/optim/lr_scheduler/fixed_schedule.py (61:69) - fairseq/optim/lr_scheduler/polynomial_decay_schedule.py (66:74) duplicated block id: 849 size: 6 cleaned lines of code in 2 files: - fairseq/models/fairseq_model.py (63:68) - fairseq/models/speech_to_text/convtransformer.py (199:204) duplicated block id: 850 size: 6 cleaned lines of code in 2 files: - fairseq/criterions/ctc.py (236:242) - fairseq/criterions/label_smoothed_cross_entropy_with_alignment.py (103:109) duplicated block id: 851 size: 6 cleaned lines of code in 2 files: - fairseq/models/lstm.py (13:18) - fairseq/models/text_to_speech/tacotron2.py (13:18) duplicated block id: 852 size: 6 cleaned lines of code in 2 files: - fairseq/clib/libnat_cuda/edit_dist.cu (87:92) - fairseq/clib/libnat_cuda/edit_dist.cu (170:175) duplicated block id: 853 size: 6 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (89:94) - fairseq/models/lightconv_lm.py (34:39) duplicated block id: 854 size: 6 cleaned lines of code in 2 files: - fairseq/models/__init__.py (137:143) - fairseq/tasks/__init__.py (85:91) duplicated block id: 855 size: 6 cleaned lines of code in 2 files: - 
fairseq/models/speech_to_text/s2t_transformer.py (484:489) - fairseq/models/speech_to_text/xm_transformer.py (657:663) duplicated block id: 856 size: 6 cleaned lines of code in 2 files: - fairseq/benchmark/dummy_lm.py (77:83) - fairseq/tasks/sentence_ranking.py (213:219) duplicated block id: 857 size: 6 cleaned lines of code in 2 files: - fairseq/models/roberta/model_gottbert.py (34:41) - fairseq/models/speech_to_text/xm_transformer.py (470:477) duplicated block id: 858 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/sentence_prediction.py (276:282) - fairseq/tasks/sentence_ranking.py (213:219) duplicated block id: 859 size: 6 cleaned lines of code in 2 files: - fairseq_cli/eval_lm.py (28:33) - fairseq_cli/interactive.py (29:34) duplicated block id: 860 size: 6 cleaned lines of code in 2 files: - fairseq/models/speech_to_speech/s2s_transformer.py (454:459) - fairseq/models/speech_to_text/xm_transformer.py (147:152) duplicated block id: 861 size: 6 cleaned lines of code in 2 files: - fairseq/models/text_to_speech/tacotron2.py (337:343) - fairseq/models/text_to_speech/tts_transformer.py (397:403) duplicated block id: 862 size: 6 cleaned lines of code in 2 files: - fairseq/models/bart/model.py (116:121) - fairseq/models/roberta/model_camembert.py (30:35) duplicated block id: 863 size: 6 cleaned lines of code in 2 files: - fairseq/modules/learned_positional_embedding.py (56:61) - fairseq/modules/quantization/scalar/modules/qemb.py (126:131) duplicated block id: 864 size: 6 cleaned lines of code in 2 files: - fairseq/modules/sparse_transformer_sentence_encoder.py (24:29) - fairseq/modules/sparse_transformer_sentence_encoder_layer.py (17:22) duplicated block id: 865 size: 6 cleaned lines of code in 2 files: - fairseq/model_parallel/models/pipeline_parallel_transformer/model.py (619:624) - fairseq/models/speech_to_speech/s2s_transformer.py (65:70) duplicated block id: 866 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/masked_lm.py (15:20) - 
fairseq/tasks/multilingual_masked_lm.py (14:19) duplicated block id: 867 size: 6 cleaned lines of code in 2 files: - fairseq/models/bart/model.py (116:121) - fairseq/models/fairseq_model.py (236:241) duplicated block id: 868 size: 6 cleaned lines of code in 2 files: - fairseq/models/roberta/model.py (358:363) - fairseq/models/roberta/model_xlmr.py (26:31) duplicated block id: 869 size: 6 cleaned lines of code in 2 files: - fairseq/models/roberta/model_xlmr.py (26:31) - fairseq/models/text_to_speech/fastspeech2.py (344:349) duplicated block id: 870 size: 6 cleaned lines of code in 2 files: - fairseq/model_parallel/models/transformer_lm.py (150:155) - fairseq/model_parallel/models/transformer_lm.py (164:169) duplicated block id: 871 size: 6 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (13:18) - fairseq/models/text_to_speech/tacotron2.py (13:18) duplicated block id: 872 size: 6 cleaned lines of code in 2 files: - fairseq/data/multi_corpus_dataset.py (215:220) - fairseq/data/multi_corpus_sampled_dataset.py (147:152) duplicated block id: 873 size: 6 cleaned lines of code in 2 files: - fairseq/data/legacy/masked_lm_dictionary.py (15:20) - fairseq/data/legacy/masked_lm_dictionary.py (38:43) duplicated block id: 874 size: 6 cleaned lines of code in 2 files: - fairseq/models/hubert/hubert_asr.py (76:81) - fairseq/models/wav2vec/wav2vec2_asr.py (104:109) duplicated block id: 875 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/masked_lm.py (262:268) - fairseq/tasks/sentence_ranking.py (213:219) duplicated block id: 876 size: 6 cleaned lines of code in 2 files: - fairseq/modules/cuda_utils.cu (1:6) - fairseq/modules/dynamicconv_layer/dynamicconv_cuda.cuh (1:6) duplicated block id: 877 size: 6 cleaned lines of code in 2 files: - fairseq/benchmark/dummy_mt.py (79:85) - fairseq/tasks/sentence_ranking.py (213:219) duplicated block id: 878 size: 6 cleaned lines of code in 2 files: - fairseq/criterions/label_smoothed_cross_entropy_latency_augmented.py 
(67:72) - fairseq/criterions/speech_to_speech_criterion.py (131:136) duplicated block id: 879 size: 6 cleaned lines of code in 2 files: - fairseq/models/text_to_speech/fastspeech2.py (227:232) - fairseq/models/text_to_speech/tacotron2.py (34:39) duplicated block id: 880 size: 6 cleaned lines of code in 2 files: - fairseq/criterions/masked_lm.py (74:81) - fairseq/model_parallel/criterions/vocab_parallel_cross_entropy.py (54:61) duplicated block id: 881 size: 6 cleaned lines of code in 2 files: - fairseq/models/fairseq_model.py (236:241) - fairseq/models/roberta/model_gottbert.py (23:28) duplicated block id: 882 size: 6 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/modules/emformer.py (306:311) - fairseq/models/speech_to_text/modules/emformer.py (985:990) duplicated block id: 883 size: 6 cleaned lines of code in 2 files: - fairseq/optim/fp16_optimizer.py (80:93) - fairseq/optim/fp16_optimizer.py (349:362) duplicated block id: 884 size: 6 cleaned lines of code in 2 files: - fairseq/models/nat/insertion_transformer.py (213:219) - fairseq/models/nat/levenshtein_transformer.py (275:280) duplicated block id: 885 size: 6 cleaned lines of code in 2 files: - fairseq/benchmark/dummy_lm.py (77:83) - fairseq/tasks/masked_lm.py (262:268) duplicated block id: 886 size: 6 cleaned lines of code in 2 files: - fairseq/models/nat/insertion_transformer.py (171:178) - fairseq/models/nat/levenshtein_transformer.py (138:145) duplicated block id: 887 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/speech_to_speech.py (374:379) - fairseq/tasks/text_to_speech.py (181:186) duplicated block id: 888 size: 6 cleaned lines of code in 2 files: - fairseq_cli/interactive.py (29:34) - fairseq_cli/train.py (18:23) duplicated block id: 889 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/denoising.py (268:276) - fairseq/tasks/sentence_prediction.py (276:282) duplicated block id: 890 size: 6 cleaned lines of code in 2 files: - 
fairseq/tasks/multilingual_masked_lm.py (332:338) - fairseq/tasks/sentence_prediction.py (276:282) duplicated block id: 891 size: 6 cleaned lines of code in 2 files: - fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu (121:126) - fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu (325:330) duplicated block id: 892 size: 6 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/utils.py (315:320) - fairseq/models/speech_to_text/utils.py (324:329) duplicated block id: 893 size: 6 cleaned lines of code in 2 files: - fairseq/models/text_to_speech/hifigan.py (25:30) - fairseq/models/text_to_speech/hifigan.py (71:76) duplicated block id: 894 size: 6 cleaned lines of code in 2 files: - fairseq/data/audio/speech_to_text_joint_dataset.py (237:242) - fairseq/data/audio/speech_to_text_joint_dataset.py (319:324) duplicated block id: 895 size: 6 cleaned lines of code in 2 files: - fairseq/data/multilingual/multilingual_data_manager.py (92:97) - fairseq/tasks/multilingual_masked_lm.py (39:45) duplicated block id: 896 size: 6 cleaned lines of code in 2 files: - fairseq/optim/adam.py (79:90) - fairseq/optim/nag.py (32:43) duplicated block id: 897 size: 6 cleaned lines of code in 2 files: - fairseq/criterions/cross_entropy.py (79:90) - fairseq/criterions/masked_lm.py (87:98) duplicated block id: 898 size: 6 cleaned lines of code in 2 files: - fairseq/models/roberta/model.py (358:363) - fairseq/models/roberta/model_camembert.py (30:35) duplicated block id: 899 size: 6 cleaned lines of code in 2 files: - fairseq/models/text_to_speech/hifigan.py (25:30) - fairseq/models/text_to_speech/hifigan.py (81:86) duplicated block id: 900 size: 6 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (13:18) - fairseq/models/lstm.py (13:18) duplicated block id: 901 size: 6 cleaned lines of code in 2 files: - fairseq/optim/cpu_adam.py (99:104) - fairseq/optim/fused_adam.py (92:97) duplicated block id: 902 size: 6 cleaned lines of code in 2 files: - 
fairseq/data/audio/speech_to_text_dataset.py (394:399) - fairseq/data/audio/speech_to_text_joint_dataset.py (230:235) duplicated block id: 903 size: 6 cleaned lines of code in 2 files: - fairseq/model_parallel/megatron_trainer.py (31:36) - fairseq/model_parallel/models/transformer_lm.py (32:37) duplicated block id: 904 size: 6 cleaned lines of code in 2 files: - fairseq/criterions/adaptive_loss.py (112:123) - fairseq/criterions/masked_lm.py (87:98) duplicated block id: 905 size: 6 cleaned lines of code in 2 files: - fairseq/models/nat/insertion_transformer.py (133:139) - fairseq/models/nat/nonautoregressive_transformer.py (78:84) duplicated block id: 906 size: 6 cleaned lines of code in 2 files: - fairseq/optim/fused_lamb.py (30:43) - fairseq/optim/sgd.py (23:36) duplicated block id: 907 size: 6 cleaned lines of code in 2 files: - fairseq/models/hubert/hubert.py (135:140) - fairseq/models/hubert/hubert_asr.py (70:75) duplicated block id: 908 size: 6 cleaned lines of code in 2 files: - fairseq/models/bart/model.py (116:121) - fairseq/models/speech_to_text/s2t_transformer.py (99:104) duplicated block id: 909 size: 6 cleaned lines of code in 2 files: - fairseq/data/audio/raw_audio_dataset.py (32:37) - fairseq/data/audio/raw_audio_dataset.py (252:257) duplicated block id: 910 size: 6 cleaned lines of code in 2 files: - fairseq/models/text_to_speech/tacotron2.py (13:18) - fairseq/models/text_to_speech/tts_transformer.py (15:20) duplicated block id: 911 size: 6 cleaned lines of code in 2 files: - fairseq/data/denoising_dataset.py (304:310) - fairseq/data/denoising_dataset.py (340:346) duplicated block id: 912 size: 6 cleaned lines of code in 2 files: - fairseq/benchmark/dummy_model.py (86:91) - fairseq/models/roberta/model.py (321:327) duplicated block id: 913 size: 6 cleaned lines of code in 2 files: - fairseq/data/denoising_dataset.py (304:310) - fairseq/data/denoising_dataset.py (325:331) duplicated block id: 914 size: 6 cleaned lines of code in 2 files: - 
fairseq/criterions/fastspeech2_loss.py (37:42) - fairseq/criterions/tacotron2_loss.py (116:121) duplicated block id: 915 size: 6 cleaned lines of code in 2 files: - fairseq/data/audio/hubert_dataset.py (324:330) - fairseq/data/monolingual_dataset.py (238:245) duplicated block id: 916 size: 6 cleaned lines of code in 2 files: - fairseq/models/text_to_speech/hifigan.py (25:30) - fairseq/models/text_to_speech/hifigan.py (35:40) duplicated block id: 917 size: 6 cleaned lines of code in 2 files: - fairseq/models/roberta/model.py (358:363) - fairseq/models/text_to_speech/fastspeech2.py (344:349) duplicated block id: 918 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/fairseq_task.py (354:359) - fairseq/tasks/speech_to_speech.py (301:306) duplicated block id: 919 size: 6 cleaned lines of code in 2 files: - fairseq/data/indexed_dataset.py (202:209) - fairseq/data/indexed_dataset.py (312:319) duplicated block id: 920 size: 6 cleaned lines of code in 2 files: - fairseq/data/audio/speech_to_text_joint_dataset.py (230:235) - fairseq/data/audio/text_to_speech_dataset.py (187:192) duplicated block id: 921 size: 6 cleaned lines of code in 2 files: - fairseq/model_parallel/models/transformer.py (44:49) - fairseq/model_parallel/models/transformer_lm.py (32:37) duplicated block id: 922 size: 6 cleaned lines of code in 2 files: - fairseq/models/text_to_speech/hifigan.py (45:50) - fairseq/models/text_to_speech/hifigan.py (61:66) duplicated block id: 923 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/multilingual_denoising.py (189:194) - fairseq/tasks/multilingual_language_modeling.py (385:390) duplicated block id: 924 size: 6 cleaned lines of code in 2 files: - fairseq/models/text_to_speech/hifigan.py (25:30) - fairseq/models/text_to_speech/hifigan.py (45:50) duplicated block id: 925 size: 6 cleaned lines of code in 2 files: - fairseq/optim/cpu_adam.py (123:129) - fairseq/optim/fused_adam.py (106:112) duplicated block id: 926 size: 6 cleaned lines of code in 2 
files: - fairseq/modules/cuda_utils.cu (1:6) - fairseq/modules/lightconv_layer/lightconv_cuda.cuh (1:6) duplicated block id: 927 size: 6 cleaned lines of code in 2 files: - fairseq/modules/dynamicconv_layer/dynamicconv_cuda.cuh (43:48) - fairseq/modules/dynamicconv_layer/dynamicconv_cuda_kernel.cu (17:22) duplicated block id: 928 size: 6 cleaned lines of code in 2 files: - fairseq/models/fairseq_model.py (236:241) - fairseq/models/roberta/model.py (358:363) duplicated block id: 929 size: 6 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (13:18) - fairseq/models/text_to_speech/tts_transformer.py (15:20) duplicated block id: 930 size: 6 cleaned lines of code in 2 files: - fairseq/search.py (185:194) - fairseq/search.py (323:332) duplicated block id: 931 size: 6 cleaned lines of code in 2 files: - fairseq/distributed/fully_sharded_data_parallel.py (81:87) - fairseq/models/distributed_fairseq_model.py (126:132) duplicated block id: 932 size: 6 cleaned lines of code in 2 files: - fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py (235:240) - fairseq/models/lightconv.py (498:503) duplicated block id: 933 size: 6 cleaned lines of code in 2 files: - fairseq/model_parallel/megatron_trainer.py (31:36) - fairseq/model_parallel/modules/multihead_attention.py (49:54) duplicated block id: 934 size: 6 cleaned lines of code in 2 files: - fairseq/models/lstm.py (227:232) - fairseq/models/lstm.py (406:411) duplicated block id: 935 size: 6 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/berard.py (258:264) - fairseq/models/speech_to_text/convtransformer.py (294:299) duplicated block id: 936 size: 6 cleaned lines of code in 2 files: - fairseq_cli/preprocess.py (26:31) - fairseq_cli/validate.py (21:26) duplicated block id: 937 size: 6 cleaned lines of code in 2 files: - fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu (151:156) - fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu (283:288) duplicated block id: 938 size: 
6 cleaned lines of code in 2 files: - fairseq/logging/meters.py (102:107) - fairseq/logging/meters.py (187:192) duplicated block id: 939 size: 6 cleaned lines of code in 2 files: - fairseq/models/masked_lm.py (143:148) - fairseq/models/speech_to_text/convtransformer.py (85:90) duplicated block id: 940 size: 6 cleaned lines of code in 2 files: - fairseq/benchmark/dummy_masked_lm.py (88:94) - fairseq/tasks/sentence_prediction.py (276:282) duplicated block id: 941 size: 6 cleaned lines of code in 2 files: - fairseq/models/bart/model.py (116:121) - fairseq/models/roberta/model_xlmr.py (26:31) duplicated block id: 942 size: 6 cleaned lines of code in 2 files: - fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py (595:600) - fairseq/modules/dynamic_convolution.py (61:66) duplicated block id: 943 size: 6 cleaned lines of code in 2 files: - fairseq/model_parallel/criterions/vocab_parallel_cross_entropy.py (27:32) - fairseq/model_parallel/modules/multihead_attention.py (49:54) duplicated block id: 944 size: 6 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (898:903) - fairseq/models/transformer/transformer_decoder.py (448:453) duplicated block id: 945 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/multilingual_denoising.py (137:142) - fairseq/tasks/sentence_ranking.py (112:117) duplicated block id: 946 size: 6 cleaned lines of code in 2 files: - fairseq/optim/adam.py (151:157) - fairseq/optim/cpu_adam.py (123:129) duplicated block id: 947 size: 6 cleaned lines of code in 2 files: - fairseq/checkpoint_utils.py (451:456) - fairseq/checkpoint_utils.py (471:476) duplicated block id: 948 size: 6 cleaned lines of code in 2 files: - fairseq/models/bart/model.py (258:263) - fairseq/models/bart/model.py (267:272) duplicated block id: 949 size: 6 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/hub_interface.py (27:32) - fairseq/models/text_to_speech/hub_interface.py (18:23) duplicated block id: 950 size: 6 cleaned lines 
of code in 2 files: - fairseq/models/speech_to_text/s2t_transformer.py (282:287) - fairseq/models/transformer/transformer_base.py (165:170) duplicated block id: 951 size: 6 cleaned lines of code in 2 files: - fairseq/models/masked_lm.py (143:148) - fairseq/models/speech_to_text/s2t_transformer.py (183:188) duplicated block id: 952 size: 6 cleaned lines of code in 2 files: - fairseq/models/fconv.py (735:740) - fairseq/models/fconv.py (751:756) duplicated block id: 953 size: 6 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/convtransformer.py (331:336) - fairseq/models/text_to_speech/tts_transformer.py (126:131) duplicated block id: 954 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/masked_lm.py (262:268) - fairseq/tasks/sentence_prediction.py (276:282) duplicated block id: 955 size: 6 cleaned lines of code in 2 files: - fairseq_cli/generate.py (175:180) - fairseq_cli/interactive.py (182:187) duplicated block id: 956 size: 6 cleaned lines of code in 2 files: - fairseq/models/hubert/hubert_asr.py (280:288) - fairseq/models/wav2vec/wav2vec2_asr.py (418:426) duplicated block id: 957 size: 6 cleaned lines of code in 2 files: - fairseq/models/hubert/hubert_asr.py (247:253) - fairseq/models/wav2vec/wav2vec2_asr.py (378:384) duplicated block id: 958 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/denoising.py (146:152) - fairseq/tasks/multilingual_masked_lm.py (108:113) duplicated block id: 959 size: 6 cleaned lines of code in 2 files: - fairseq/models/wav2vec/wav2vec2.py (1052:1060) - fairseq/models/wav2vec/wav2vec2.py (1160:1168) duplicated block id: 960 size: 6 cleaned lines of code in 2 files: - fairseq/models/bart/model.py (116:121) - fairseq/models/text_to_speech/tts_transformer.py (338:343) duplicated block id: 961 size: 6 cleaned lines of code in 2 files: - fairseq/modules/lightweight_convolution.py (88:94) - fairseq/modules/lightweight_convolution.py (171:177) duplicated block id: 962 size: 6 cleaned lines of code in 2 files: 
- fairseq/models/speech_to_speech/s2s_transformer.py (632:637) - fairseq/models/speech_to_text/convtransformer.py (408:413)
duplicated block id: 963 size: 6 cleaned lines of code in 2 files: - fairseq/modules/dynamicconv_layer/dynamicconv_cuda.cuh (1:6) - fairseq/modules/dynamicconv_layer/dynamicconv_cuda_kernel.cu (1:6)
duplicated block id: 964 size: 6 cleaned lines of code in 2 files: - fairseq/optim/adam.py (151:157) - fairseq/optim/fused_adam.py (106:112)
duplicated block id: 965 size: 6 cleaned lines of code in 2 files: - fairseq/modules/dynamicconv_layer/dynamicconv_cuda_kernel.cu (1:6) - fairseq/modules/lightconv_layer/lightconv_cuda.cuh (1:6)
duplicated block id: 966 size: 6 cleaned lines of code in 2 files: - fairseq/data/denoising_dataset.py (80:85) - fairseq/data/language_pair_dataset.py (116:121)
duplicated block id: 967 size: 6 cleaned lines of code in 2 files: - fairseq/model_parallel/criterions/vocab_parallel_cross_entropy.py (27:32) - fairseq/model_parallel/megatron_trainer.py (31:36)
duplicated block id: 968 size: 6 cleaned lines of code in 2 files: - fairseq/benchmark/dummy_masked_lm.py (88:94) - fairseq/tasks/masked_lm.py (262:268)
duplicated block id: 969 size: 6 cleaned lines of code in 2 files: - fairseq/model_parallel/criterions/vocab_parallel_cross_entropy.py (27:32) - fairseq/model_parallel/models/transformer_lm.py (32:37)
duplicated block id: 970 size: 6 cleaned lines of code in 2 files: - fairseq/modules/quantization/pq/modules/qemb.py (86:91) - fairseq/modules/quantization/scalar/modules/qemb.py (126:131)
duplicated block id: 971 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/multilingual_translation.py (250:255) - fairseq/tasks/semisupervised_translation.py (349:354)
duplicated block id: 972 size: 6 cleaned lines of code in 2 files: - fairseq/models/roberta/model_xlmr.py (34:41) - fairseq/models/text_to_speech/tts_transformer.py (348:355)
duplicated block id: 973 size: 6 cleaned lines of code in 2 files: -
fairseq/modules/dynamic_convolution.py (34:39) - fairseq/modules/lightweight_convolution.py (29:34)
duplicated block id: 974 size: 6 cleaned lines of code in 2 files: - fairseq/models/hubert/hubert.py (271:277) - fairseq/models/wav2vec/wav2vec2.py (391:397)
duplicated block id: 975 size: 6 cleaned lines of code in 2 files: - fairseq/modules/dynamic_convolution.py (34:39) - fairseq/modules/lightweight_convolution.py (40:45)
duplicated block id: 976 size: 6 cleaned lines of code in 2 files: - fairseq/models/text_to_speech/hifigan.py (35:40) - fairseq/models/text_to_speech/hifigan.py (61:66)
duplicated block id: 977 size: 6 cleaned lines of code in 2 files: - fairseq/models/fconv.py (75:82) - fairseq/models/fconv_self_att.py (71:78)
duplicated block id: 978 size: 6 cleaned lines of code in 2 files: - fairseq/models/roberta/model_gottbert.py (23:28) - fairseq/models/text_to_speech/fastspeech2.py (344:349)
duplicated block id: 979 size: 6 cleaned lines of code in 2 files: - fairseq/data/denoising_dataset.py (38:43) - fairseq/data/language_pair_dataset.py (67:72)
duplicated block id: 980 size: 6 cleaned lines of code in 2 files: - fairseq/models/roberta/model.py (358:363) - fairseq/models/speech_to_text/s2t_transformer.py (99:104)
duplicated block id: 981 size: 6 cleaned lines of code in 2 files: - fairseq/models/masked_lm.py (143:148) - fairseq/models/speech_to_speech/s2s_transformer.py (307:312)
duplicated block id: 982 size: 6 cleaned lines of code in 2 files: - fairseq/benchmark/dummy_masked_lm.py (88:94) - fairseq/tasks/multilingual_masked_lm.py (332:338)
duplicated block id: 983 size: 6 cleaned lines of code in 2 files: - fairseq_cli/interactive.py (29:34) - fairseq_cli/preprocess.py (26:31)
duplicated block id: 984 size: 6 cleaned lines of code in 2 files: - fairseq/models/bart/model.py (276:281) - fairseq/models/roberta/model.py (449:454)
duplicated block id: 985 size: 6 cleaned lines of code in 2 files: -
fairseq/modules/sparse_transformer_sentence_encoder_layer.py (17:22) - fairseq/modules/transformer_sentence_encoder.py (83:88)
duplicated block id: 986 size: 6 cleaned lines of code in 2 files: - fairseq/models/roberta/model_camembert.py (38:45) - fairseq/models/speech_to_text/xm_transformer.py (470:477)
duplicated block id: 987 size: 6 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (201:206) - fairseq/models/lightconv_lm.py (86:91)
duplicated block id: 988 size: 6 cleaned lines of code in 2 files: - fairseq/optim/lr_scheduler/inverse_square_root_schedule.py (70:80) - fairseq/optim/lr_scheduler/step_lr_scheduler.py (67:77)
duplicated block id: 989 size: 6 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/convtransformer.py (431:436) - fairseq/models/speech_to_text/xm_transformer.py (657:663)
duplicated block id: 990 size: 6 cleaned lines of code in 2 files: - fairseq/models/fairseq_model.py (236:241) - fairseq/models/roberta/model_xlmr.py (26:31)
duplicated block id: 991 size: 6 cleaned lines of code in 2 files: - fairseq/criterions/adaptive_loss.py (97:103) - fairseq/criterions/ctc.py (236:242)
duplicated block id: 992 size: 6 cleaned lines of code in 2 files: - fairseq/model_parallel/megatron_trainer.py (31:36) - fairseq/model_parallel/models/transformer.py (44:49)
duplicated block id: 993 size: 6 cleaned lines of code in 2 files: - fairseq/data/base_wrapper_dataset.py (36:43) - fairseq/data/transform_eos_dataset.py (108:117)
duplicated block id: 994 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/sentence_prediction.py (16:21) - fairseq/tasks/sentence_ranking.py (12:17)
duplicated block id: 995 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/cross_lingual_lm.py (88:93) - fairseq/tasks/fairseq_task.py (113:118)
duplicated block id: 996 size: 6 cleaned lines of code in 2 files: - fairseq/models/fairseq_model.py (236:241) - fairseq/models/text_to_speech/fastspeech2.py (344:349)
duplicated block id: 997 size: 6
cleaned lines of code in 2 files: - fairseq/benchmark/dummy_mt.py (77:84) - fairseq/tasks/multilingual_language_modeling.py (615:624)
duplicated block id: 998 size: 6 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/berard.py (258:264) - fairseq/models/speech_to_text/modules/augmented_memory_attention.py (61:66)
duplicated block id: 999 size: 6 cleaned lines of code in 2 files: - fairseq/models/transformer/transformer_decoder.py (448:453) - fairseq/modules/dynamic_convolution.py (61:66)
duplicated block id: 1000 size: 6 cleaned lines of code in 2 files: - fairseq/optim/fused_adam.py (106:112) - fairseq/optim/fused_adam.py (275:281)
duplicated block id: 1001 size: 6 cleaned lines of code in 2 files: - fairseq/speech_generator.py (97:104) - fairseq/speech_generator.py (206:211)
duplicated block id: 1002 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/frm_text_to_speech.py (13:18) - fairseq/tasks/text_to_speech.py (23:28)
duplicated block id: 1003 size: 6 cleaned lines of code in 2 files: - fairseq/modules/lightconv_layer/lightconv_cuda.cuh (48:53) - fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu (283:288)
duplicated block id: 1004 size: 6 cleaned lines of code in 2 files: - fairseq/benchmark/dummy_lm.py (77:83) - fairseq/tasks/sentence_prediction.py (276:282)
duplicated block id: 1005 size: 6 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (926:931) - fairseq/models/speech_to_text/convtransformer.py (419:424)
duplicated block id: 1006 size: 6 cleaned lines of code in 2 files: - fairseq/model_parallel/models/transformer_lm.py (32:37) - fairseq/model_parallel/modules/multihead_attention.py (49:54)
duplicated block id: 1007 size: 6 cleaned lines of code in 2 files: - fairseq/model_parallel/modules/multihead_attention.py (203:208) - fairseq/modules/multihead_attention.py (424:429)
duplicated block id: 1008 size: 6 cleaned lines of code in 2 files: - fairseq/criterions/label_smoothed_cross_entropy.py (134:139) -
fairseq/criterions/label_smoothed_cross_entropy_with_alignment.py (107:112)
duplicated block id: 1009 size: 6 cleaned lines of code in 2 files: - fairseq/models/roberta/model_xlmr.py (34:41) - fairseq/models/speech_to_text/s2t_transformer.py (107:114)
duplicated block id: 1010 size: 6 cleaned lines of code in 2 files: - fairseq/modules/quantization/pq/pq.py (53:58) - fairseq/modules/quantization/pq/utils.py (91:96)
duplicated block id: 1011 size: 6 cleaned lines of code in 2 files: - fairseq/models/roberta/model_gottbert.py (23:28) - fairseq/models/speech_to_text/s2t_transformer.py (99:104)
duplicated block id: 1012 size: 6 cleaned lines of code in 2 files: - fairseq/models/fconv.py (13:18) - fairseq/models/text_to_speech/tacotron2.py (13:18)
duplicated block id: 1013 size: 6 cleaned lines of code in 2 files: - fairseq/models/bart/model.py (116:121) - fairseq/models/text_to_speech/fastspeech2.py (344:349)
duplicated block id: 1014 size: 6 cleaned lines of code in 2 files: - fairseq/models/fconv_lm.py (21:27) - fairseq/models/lightconv.py (81:87)
duplicated block id: 1015 size: 6 cleaned lines of code in 2 files: - fairseq/models/roberta/model_xlmr.py (34:41) - fairseq/models/text_to_speech/fastspeech2.py (354:361)
duplicated block id: 1016 size: 6 cleaned lines of code in 2 files: - fairseq/models/fairseq_model.py (63:68) - fairseq/models/speech_to_text/s2t_transformer.py (282:287)
duplicated block id: 1017 size: 6 cleaned lines of code in 2 files: - fairseq/models/roberta/model.py (193:198) - fairseq/models/roberta/model.py (200:205)
duplicated block id: 1018 size: 6 cleaned lines of code in 2 files: - fairseq/models/nat/insertion_transformer.py (250:255) - fairseq/models/speech_to_text/s2t_transformer.py (462:467)
duplicated block id: 1019 size: 6 cleaned lines of code in 2 files: - fairseq_cli/eval_lm.py (28:33) - fairseq_cli/train.py (18:23)
duplicated block id: 1020 size: 6 cleaned lines of code in 2 files: - fairseq/models/fairseq_model.py (236:241) -
fairseq/models/roberta/model_camembert.py (30:35)
duplicated block id: 1021 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/multilingual_masked_lm.py (177:182) - fairseq/tasks/sentence_ranking.py (112:117)
duplicated block id: 1022 size: 6 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/convtransformer.py (199:204) - fairseq/models/transformer/transformer_base.py (165:170)
duplicated block id: 1023 size: 6 cleaned lines of code in 2 files: - fairseq/modules/dynamic_convolution.py (97:102) - fairseq/modules/lightweight_convolution.py (16:21)
duplicated block id: 1024 size: 6 cleaned lines of code in 2 files: - fairseq/models/fconv.py (13:18) - fairseq/models/lstm.py (13:18)
duplicated block id: 1025 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/multilingual_denoising.py (91:101) - fairseq/tasks/multilingual_masked_lm.py (141:151)
duplicated block id: 1026 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/cross_lingual_lm.py (145:150) - fairseq/tasks/legacy_masked_lm.py (135:140)
duplicated block id: 1027 size: 6 cleaned lines of code in 2 files: - fairseq/data/multilingual/multilingual_data_manager.py (92:97) - fairseq/tasks/legacy_masked_lm.py (30:36)
duplicated block id: 1028 size: 6 cleaned lines of code in 2 files: - fairseq/modules/dynamicconv_layer/dynamicconv_cuda.cuh (31:36) - fairseq/modules/dynamicconv_layer/dynamicconv_cuda.cuh (43:48)
duplicated block id: 1029 size: 6 cleaned lines of code in 2 files: - fairseq/models/roberta/model_camembert.py (30:35) - fairseq/models/text_to_speech/tts_transformer.py (338:343)
duplicated block id: 1030 size: 6 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/convtransformer.py (13:22) - fairseq/models/speech_to_text/xm_transformer.py (18:28)
duplicated block id: 1031 size: 6 cleaned lines of code in 2 files: - fairseq/models/nat/cmlm_transformer.py (123:128) - fairseq/models/speech_to_text/s2t_transformer.py (462:467)
duplicated block id: 1032 size: 6
cleaned lines of code in 2 files: - fairseq/tasks/fairseq_task.py (295:300) - fairseq/tasks/translation_multi_simple_epoch.py (317:322)
duplicated block id: 1033 size: 6 cleaned lines of code in 2 files: - fairseq/modules/lightconv_layer/lightconv_cuda.cuh (66:71) - fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu (151:156)
duplicated block id: 1034 size: 6 cleaned lines of code in 2 files: - fairseq/optim/adam.py (79:90) - fairseq/optim/cpu_adam.py (66:77)
duplicated block id: 1035 size: 6 cleaned lines of code in 2 files: - fairseq/data/huffman/huffman_mmap_indexed_dataset.py (201:207) - fairseq/data/indexed_dataset.py (531:537)
duplicated block id: 1036 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/online_backtranslation.py (130:135) - fairseq/tasks/semisupervised_translation.py (115:120)
duplicated block id: 1037 size: 6 cleaned lines of code in 2 files: - fairseq_cli/preprocess.py (26:31) - fairseq_cli/train.py (18:23)
duplicated block id: 1038 size: 6 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/modules/emformer.py (1287:1292) - fairseq/models/speech_to_text/modules/emformer.py (1294:1299)
duplicated block id: 1039 size: 6 cleaned lines of code in 2 files: - fairseq/modules/sparse_transformer_sentence_encoder.py (51:56) - fairseq/modules/transformer_sentence_encoder.py (203:208)
duplicated block id: 1040 size: 6 cleaned lines of code in 2 files: - fairseq/benchmark/dummy_mt.py (79:85) - fairseq/tasks/multilingual_masked_lm.py (332:338)
duplicated block id: 1041 size: 6 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/convtransformer.py (57:62) - fairseq/models/speech_to_text/xm_transformer.py (147:152)
duplicated block id: 1042 size: 6 cleaned lines of code in 2 files: - fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu (240:246) - fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu (364:370)
duplicated block id: 1043 size: 6 cleaned lines of code in 2 files: -
fairseq/modules/lightconv_layer/lightconv_cuda.cuh (38:43) - fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu (15:20)
duplicated block id: 1044 size: 6 cleaned lines of code in 2 files: - fairseq/data/audio/speech_to_speech_dataset.py (325:330) - fairseq/data/audio/text_to_speech_dataset.py (187:192)
duplicated block id: 1045 size: 6 cleaned lines of code in 2 files: - fairseq/data/monolingual_dataset.py (23:28) - fairseq/data/monolingual_dataset.py (34:39)
duplicated block id: 1046 size: 6 cleaned lines of code in 2 files: - fairseq/benchmark/dummy_masked_lm.py (71:76) - fairseq/benchmark/dummy_mt.py (61:66)
duplicated block id: 1047 size: 6 cleaned lines of code in 2 files: - fairseq/data/encoders/gpt2_bpe.py (39:45) - fairseq/data/encoders/hf_byte_bpe.py (44:50)
duplicated block id: 1048 size: 6 cleaned lines of code in 2 files: - fairseq/models/transformer/transformer_encoder.py (137:142) - fairseq/models/transformer/transformer_encoder.py (175:180)
duplicated block id: 1049 size: 6 cleaned lines of code in 2 files: - fairseq/models/fconv.py (91:96) - fairseq/models/fconv_self_att.py (83:88)
duplicated block id: 1050 size: 6 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/convtransformer.py (199:204) - fairseq/models/speech_to_text/xm_transformer.py (565:570)
duplicated block id: 1051 size: 6 cleaned lines of code in 2 files: - fairseq/benchmark/dummy_lm.py (77:83) - fairseq/tasks/denoising.py (268:276)
duplicated block id: 1052 size: 6 cleaned lines of code in 2 files: - fairseq/search.py (114:122) - fairseq/search.py (681:689)
duplicated block id: 1053 size: 6 cleaned lines of code in 2 files: - fairseq/models/hubert/hubert_asr.py (317:325) - fairseq/models/wav2vec/wav2vec2_asr.py (494:502)
duplicated block id: 1054 size: 6 cleaned lines of code in 2 files: - fairseq/criterions/masked_lm.py (87:98) - fairseq/model_parallel/criterions/vocab_parallel_cross_entropy.py (76:87)
duplicated block id: 1055 size: 6 cleaned lines of code in 2
files: - fairseq/models/nat/iterative_nonautoregressive_transformer.py (162:167) - fairseq/models/nat/nat_crf_transformer.py (81:86)
duplicated block id: 1056 size: 6 cleaned lines of code in 2 files: - fairseq/models/roberta/model_camembert.py (30:35) - fairseq/models/speech_to_text/xm_transformer.py (462:467)
duplicated block id: 1057 size: 6 cleaned lines of code in 2 files: - fairseq/models/fairseq_decoder.py (59:64) - fairseq/models/speech_to_text/s2t_transformer.py (282:287)
duplicated block id: 1058 size: 6 cleaned lines of code in 2 files: - fairseq/optim/lr_scheduler/cosine_lr_scheduler.py (108:118) - fairseq/optim/lr_scheduler/step_lr_scheduler.py (67:77)
duplicated block id: 1059 size: 6 cleaned lines of code in 2 files: - fairseq/models/transformer/transformer_config.py (94:99) - fairseq/models/transformer_lm.py (34:39)
duplicated block id: 1060 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/denoising.py (172:177) - fairseq/tasks/masked_lm.py (148:153)
duplicated block id: 1061 size: 6 cleaned lines of code in 2 files: - fairseq/benchmark/dummy_mt.py (79:85) - fairseq/tasks/sentence_prediction.py (276:282)
duplicated block id: 1062 size: 6 cleaned lines of code in 2 files: - fairseq/criterions/fastspeech2_loss.py (90:95) - fairseq/criterions/speech_to_speech_criterion.py (271:276)
duplicated block id: 1063 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/fairseq_task.py (113:118) - fairseq/tasks/legacy_masked_lm.py (64:69)
duplicated block id: 1064 size: 6 cleaned lines of code in 2 files: - fairseq/optim/adagrad.py (21:34) - fairseq/optim/fused_lamb.py (30:43)
duplicated block id: 1065 size: 6 cleaned lines of code in 2 files: - fairseq/models/bart/model.py (116:121) - fairseq/models/speech_to_text/xm_transformer.py (462:467)
duplicated block id: 1066 size: 6 cleaned lines of code in 2 files: - fairseq/modules/quantization/scalar/modules/qemb.py (78:84) - fairseq/modules/quantization/scalar/modules/qlinear.py (57:63)
duplicated
block id: 1067 size: 6 cleaned lines of code in 2 files: - fairseq/models/fairseq_model.py (243:271) - fairseq/models/roberta/model_xlmr.py (34:41)
duplicated block id: 1068 size: 6 cleaned lines of code in 2 files: - fairseq/modules/lightconv_layer/lightconv_cuda.cuh (1:6) - fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu (1:6)
duplicated block id: 1069 size: 6 cleaned lines of code in 2 files: - fairseq/benchmark/dummy_masked_lm.py (86:93) - fairseq/tasks/multilingual_language_modeling.py (615:624)
duplicated block id: 1070 size: 6 cleaned lines of code in 2 files: - fairseq/data/audio/speech_to_speech_dataset.py (325:330) - fairseq/data/audio/speech_to_text_joint_dataset.py (230:235)
duplicated block id: 1071 size: 6 cleaned lines of code in 2 files: - fairseq/model_parallel/modules/multihead_attention.py (170:177) - fairseq/modules/multihead_attention.py (395:402)
duplicated block id: 1072 size: 6 cleaned lines of code in 2 files: - fairseq_cli/eval_lm.py (28:33) - fairseq_cli/preprocess.py (26:31)
duplicated block id: 1073 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/legacy_masked_lm.py (109:114) - fairseq/tasks/translation.py (78:83)
duplicated block id: 1074 size: 6 cleaned lines of code in 2 files: - fairseq/models/fconv.py (13:18) - fairseq/models/text_to_speech/tts_transformer.py (15:20)
duplicated block id: 1075 size: 6 cleaned lines of code in 2 files: - fairseq/file_io.py (55:60) - fairseq/file_io.py (63:68)
duplicated block id: 1076 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/fairseq_task.py (354:359) - fairseq/tasks/speech_to_text.py (124:129)
duplicated block id: 1077 size: 6 cleaned lines of code in 2 files: - fairseq/data/data_utils.py (230:235) - fairseq/tasks/fairseq_task.py (186:191)
duplicated block id: 1078 size: 6 cleaned lines of code in 2 files: - fairseq/models/text_to_speech/tacotron2.py (245:250) - fairseq/models/text_to_speech/tts_transformer.py (262:267)
duplicated block id: 1079 size: 6 cleaned
lines of code in 2 files: - fairseq/data/audio/raw_audio_dataset.py (32:37) - fairseq/data/audio/raw_audio_dataset.py (334:339)
duplicated block id: 1080 size: 6 cleaned lines of code in 2 files: - fairseq/models/fairseq_model.py (236:241) - fairseq/models/speech_to_text/xm_transformer.py (462:467)
duplicated block id: 1081 size: 6 cleaned lines of code in 2 files: - fairseq/models/lstm.py (13:18) - fairseq/models/speech_to_text/berard.py (12:17)
duplicated block id: 1082 size: 6 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (954:959) - fairseq/models/lightconv_lm.py (286:291)
duplicated block id: 1083 size: 6 cleaned lines of code in 2 files: - fairseq/models/nat/nonautoregressive_transformer.py (416:421) - fairseq/models/speech_to_speech/s2s_transformer.py (632:637)
duplicated block id: 1084 size: 6 cleaned lines of code in 2 files: - fairseq/models/roberta/model_camembert.py (30:35) - fairseq/models/text_to_speech/fastspeech2.py (344:349)
duplicated block id: 1085 size: 6 cleaned lines of code in 2 files: - fairseq/modules/lightconv_layer/lightconv_cuda.cuh (28:33) - fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu (79:84)
duplicated block id: 1086 size: 6 cleaned lines of code in 2 files: - fairseq/models/fconv_self_att.py (71:78) - fairseq/models/lstm.py (32:39)
duplicated block id: 1087 size: 6 cleaned lines of code in 2 files: - fairseq/models/transformer/transformer_decoder.py (448:453) - fairseq/models/wav2vec/wav2vec2_asr.py (747:752)
duplicated block id: 1088 size: 6 cleaned lines of code in 2 files: - fairseq/config/model/transformer_lm/transformer_lm_big.yaml (5:10) - fairseq/config/model/transformer_lm/transformer_lm_gpt2_small.yaml (5:10)
duplicated block id: 1089 size: 6 cleaned lines of code in 2 files: - fairseq/model_parallel/models/pipeline_parallel_transformer/model.py (354:359) - fairseq/models/fairseq_decoder.py (80:85)
duplicated block id: 1090 size: 6 cleaned lines of code in 2 files: -
fairseq/criterions/label_smoothed_cross_entropy.py (90:95) - fairseq/criterions/label_smoothed_cross_entropy_latency_augmented.py (127:133)
duplicated block id: 1091 size: 6 cleaned lines of code in 2 files: - fairseq/optim/adamax.py (99:105) - fairseq/optim/cpu_adam.py (123:129)
duplicated block id: 1092 size: 6 cleaned lines of code in 2 files: - fairseq/benchmark/dummy_lm.py (77:83) - fairseq/tasks/multilingual_masked_lm.py (332:338)
duplicated block id: 1093 size: 6 cleaned lines of code in 2 files: - fairseq/models/nat/insertion_transformer.py (250:255) - fairseq/models/speech_to_speech/s2s_transformer.py (632:637)
duplicated block id: 1094 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/denoising.py (15:20) - fairseq/tasks/language_modeling.py (20:25)
duplicated block id: 1095 size: 6 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/s2t_transformer.py (462:467) - fairseq/models/transformer/transformer_legacy.py (178:183)
duplicated block id: 1096 size: 6 cleaned lines of code in 2 files: - fairseq/benchmark/dummy_mt.py (79:85) - fairseq/tasks/masked_lm.py (262:268)
duplicated block id: 1097 size: 6 cleaned lines of code in 2 files: - fairseq/models/speech_to_speech/s2s_transformer.py (488:493) - fairseq/models/speech_to_text/xm_transformer.py (417:422)
duplicated block id: 1098 size: 6 cleaned lines of code in 2 files: - fairseq/modules/dynamic_convolution.py (48:53) - fairseq/modules/lightweight_convolution.py (40:45)
duplicated block id: 1099 size: 6 cleaned lines of code in 2 files: - fairseq/models/roberta/model.py (358:363) - fairseq/models/text_to_speech/tts_transformer.py (338:343)
duplicated block id: 1100 size: 6 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/berard.py (12:17) - fairseq/models/text_to_speech/tacotron2.py (13:18)
duplicated block id: 1101 size: 6 cleaned lines of code in 2 files: - fairseq/modules/quantization/pq/modules/qlinear.py (52:57) - fairseq/modules/quantization/pq/pq.py (114:119)
duplicated block id: 1102 size: 6 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/convtransformer.py (133:138) - fairseq/models/speech_to_text/xm_transformer.py (417:422)
duplicated block id: 1103 size: 6 cleaned lines of code in 2 files: - fairseq/models/fconv.py (262:267) - fairseq/models/fconv.py (528:533)
duplicated block id: 1104 size: 6 cleaned lines of code in 2 files: - fairseq/modules/dynamicconv_layer/dynamicconv_cuda_kernel.cu (47:52) - fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu (121:126)
duplicated block id: 1105 size: 6 cleaned lines of code in 2 files: - fairseq/models/roberta/model_gottbert.py (34:41) - fairseq/models/speech_to_text/s2t_transformer.py (107:114)
duplicated block id: 1106 size: 6 cleaned lines of code in 2 files: - fairseq/optim/fused_adam.py (275:281) - fairseq/optim/nag.py (54:60)
duplicated block id: 1107 size: 6 cleaned lines of code in 2 files: - fairseq/data/audio/hubert_dataset.py (324:330) - fairseq/data/subsample_dataset.py (61:68)
duplicated block id: 1108 size: 6 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (137:142) - fairseq/models/masked_lm.py (143:148)
duplicated block id: 1109 size: 6 cleaned lines of code in 2 files: - fairseq/models/roberta/model.py (180:185) - fairseq/models/transformer_lm.py (177:182)
duplicated block id: 1110 size: 6 cleaned lines of code in 2 files: - fairseq/modules/lightconv_layer/lightconv_cuda.cuh (48:53) - fairseq/modules/lightconv_layer/lightconv_cuda.cuh (66:71)
duplicated block id: 1111 size: 6 cleaned lines of code in 2 files: - fairseq/models/fairseq_decoder.py (59:64) - fairseq/models/speech_to_text/convtransformer.py (199:204)
duplicated block id: 1112 size: 6 cleaned lines of code in 2 files: - fairseq_cli/train.py (18:23) - fairseq_cli/validate.py (21:26)
duplicated block id: 1113 size: 6 cleaned lines of code in 2 files: - fairseq/models/bart/model.py (307:312) - fairseq/models/roberta/model.py (525:530)
duplicated block id: 1114
size: 6 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (926:931) - fairseq/models/nat/cmlm_transformer.py (134:139)
duplicated block id: 1115 size: 6 cleaned lines of code in 2 files: - fairseq/modules/dynamic_convolution.py (17:22) - fairseq/modules/lightweight_convolution.py (155:160)
duplicated block id: 1116 size: 6 cleaned lines of code in 2 files: - fairseq/criterions/speech_to_speech_criterion.py (223:228) - fairseq/criterions/tacotron2_loss.py (88:93)
duplicated block id: 1117 size: 6 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (916:921) - fairseq/models/speech_to_speech/s2s_transformer.py (632:637)
duplicated block id: 1118 size: 6 cleaned lines of code in 2 files: - fairseq/models/lstm.py (13:18) - fairseq/models/text_to_speech/tts_transformer.py (15:20)
duplicated block id: 1119 size: 6 cleaned lines of code in 2 files: - fairseq/models/fairseq_model.py (236:241) - fairseq/models/speech_to_text/s2t_transformer.py (99:104)
duplicated block id: 1120 size: 6 cleaned lines of code in 2 files: - fairseq/models/roberta/model_camembert.py (38:45) - fairseq/models/text_to_speech/tts_transformer.py (348:355)
duplicated block id: 1121 size: 6 cleaned lines of code in 2 files: - fairseq_cli/eval_lm.py (28:33) - fairseq_cli/validate.py (21:26)
duplicated block id: 1122 size: 6 cleaned lines of code in 2 files: - fairseq/models/transformer/transformer_decoder.py (18:23) - fairseq/models/transformer/transformer_encoder.py (15:20)
duplicated block id: 1123 size: 6 cleaned lines of code in 2 files: - fairseq/modules/dynamic_convolution.py (48:53) - fairseq/modules/lightweight_convolution.py (29:34)
duplicated block id: 1124 size: 6 cleaned lines of code in 2 files: - fairseq/modules/sparse_transformer_sentence_encoder.py (41:47) - fairseq/modules/sparse_transformer_sentence_encoder_layer.py (24:30)
duplicated block id: 1125 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/language_modeling.py (373:383) -
fairseq/tasks/multilingual_language_modeling.py (617:627)
duplicated block id: 1126 size: 6 cleaned lines of code in 2 files: - fairseq/models/masked_lm.py (143:148) - fairseq/models/speech_to_speech/s2s_transformer.py (482:487)
duplicated block id: 1127 size: 6 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/convtransformer.py (408:413) - fairseq/models/speech_to_text/s2t_transformer.py (462:467)
duplicated block id: 1128 size: 6 cleaned lines of code in 2 files: - fairseq/modules/cuda_utils.cu (1:6) - fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu (1:6)
duplicated block id: 1129 size: 6 cleaned lines of code in 2 files: - fairseq/modules/transformer_sentence_encoder.py (83:88) - fairseq/modules/transformer_sentence_encoder_layer.py (24:29)
duplicated block id: 1130 size: 6 cleaned lines of code in 2 files: - fairseq/models/roberta/model_xlmr.py (26:31) - fairseq/models/text_to_speech/tts_transformer.py (338:343)
duplicated block id: 1131 size: 6 cleaned lines of code in 2 files: - fairseq/models/fconv_lm.py (21:27) - fairseq/models/masked_lm.py (47:54)
duplicated block id: 1132 size: 6 cleaned lines of code in 2 files: - fairseq/models/hubert/hubert.py (171:176) - fairseq/models/hubert/hubert_asr.py (102:107)
duplicated block id: 1133 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/masked_lm.py (148:153) - fairseq/tasks/multilingual_denoising.py (141:146)
duplicated block id: 1134 size: 6 cleaned lines of code in 2 files: - fairseq/models/nat/iterative_nonautoregressive_transformer.py (182:187) - fairseq/models/speech_to_text/s2t_transformer.py (462:467)
duplicated block id: 1135 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/denoising.py (268:276) - fairseq/tasks/masked_lm.py (262:268)
duplicated block id: 1136 size: 6 cleaned lines of code in 2 files: - fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu (248:254) - fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu (372:377)
duplicated block id: 1137
size: 6 cleaned lines of code in 2 files: - fairseq/tasks/multilingual_translation.py (255:261) - fairseq/tasks/translation_multi_simple_epoch.py (169:175)
duplicated block id: 1138 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/multilingual_translation.py (230:235) - fairseq/tasks/online_backtranslation.py (337:342)
duplicated block id: 1139 size: 6 cleaned lines of code in 2 files: - fairseq/models/bart/model.py (294:299) - fairseq/models/roberta/model.py (507:512)
duplicated block id: 1140 size: 6 cleaned lines of code in 2 files: - fairseq/models/roberta/model_xlmr.py (26:31) - fairseq/models/speech_to_text/xm_transformer.py (462:467)
duplicated block id: 1141 size: 6 cleaned lines of code in 2 files: - fairseq/models/fconv_lm.py (44:49) - fairseq/models/lightconv.py (201:206)
duplicated block id: 1142 size: 6 cleaned lines of code in 2 files: - fairseq/models/nat/levenshtein_transformer.py (440:445) - fairseq/models/speech_to_text/s2t_transformer.py (462:467)
duplicated block id: 1143 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/denoising.py (268:276) - fairseq/tasks/multilingual_masked_lm.py (332:338)
duplicated block id: 1144 size: 6 cleaned lines of code in 2 files: - fairseq/models/lightconv.py (898:903) - fairseq/modules/dynamic_convolution.py (61:66)
duplicated block id: 1145 size: 6 cleaned lines of code in 2 files: - fairseq/models/bart/model.py (116:121) - fairseq/models/roberta/model_gottbert.py (23:28)
duplicated block id: 1146 size: 6 cleaned lines of code in 2 files: - fairseq/data/audio/speech_to_text_dataset.py (528:533) - fairseq/data/audio/speech_to_text_joint_dataset.py (331:336)
duplicated block id: 1147 size: 6 cleaned lines of code in 2 files: - fairseq/optim/adamax.py (99:105) - fairseq/optim/fused_adam.py (106:112)
duplicated block id: 1148 size: 6 cleaned lines of code in 2 files: - fairseq/modules/cuda_utils.cu (1:6) - fairseq/modules/dynamicconv_layer/dynamicconv_cuda_kernel.cu (1:6)
duplicated block id: 1149
size: 6 cleaned lines of code in 2 files: - fairseq/search.py (119:128) - fairseq/search.py (323:332)
duplicated block id: 1150 size: 6 cleaned lines of code in 2 files: - fairseq/models/lstm.py (172:177) - fairseq/models/lstm_lm.py (101:106)
duplicated block id: 1151 size: 6 cleaned lines of code in 2 files: - fairseq/models/nat/levenshtein_transformer.py (67:73) - fairseq/models/nat/nonautoregressive_transformer.py (78:84)
duplicated block id: 1152 size: 6 cleaned lines of code in 2 files: - fairseq/modules/gumbel_vector_quantizer.py (24:45) - fairseq/modules/kmeans_vector_quantizer.py (14:31)
duplicated block id: 1153 size: 6 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/convtransformer.py (220:225) - fairseq/models/speech_to_text/s2t_transformer.py (293:303)
duplicated block id: 1154 size: 6 cleaned lines of code in 2 files: - fairseq/models/nat/nonautoregressive_transformer.py (416:421) - fairseq/models/speech_to_text/s2t_transformer.py (462:467)
duplicated block id: 1155 size: 6 cleaned lines of code in 2 files: - fairseq/benchmark/dummy_masked_lm.py (88:94) - fairseq/tasks/sentence_ranking.py (213:219)
duplicated block id: 1156 size: 6 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/berard.py (12:17) - fairseq/models/text_to_speech/tts_transformer.py (15:20)
duplicated block id: 1157 size: 6 cleaned lines of code in 2 files: - fairseq/benchmark/dummy_lm.py (75:82) - fairseq/tasks/multilingual_language_modeling.py (615:624)
duplicated block id: 1158 size: 6 cleaned lines of code in 2 files: - fairseq/models/fconv.py (288:298) - fairseq/models/fconv_self_att.py (270:280)
duplicated block id: 1159 size: 6 cleaned lines of code in 2 files: - fairseq/model_parallel/models/pipeline_parallel_transformer/model.py (509:514) - fairseq/model_parallel/models/pipeline_parallel_transformer/model.py (659:664)
duplicated block id: 1160 size: 6 cleaned lines of code in 2 files: - fairseq/models/lstm.py (369:374) -
fairseq/models/speech_to_text/berard.py (361:366) duplicated block id: 1161 size: 6 cleaned lines of code in 2 files: - fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py (561:566) - fairseq/model_parallel/models/pipeline_parallel_transformer/model.py (729:734) duplicated block id: 1162 size: 6 cleaned lines of code in 2 files: - fairseq/optim/cpu_adam.py (123:129) - fairseq/optim/nag.py (54:60) duplicated block id: 1163 size: 6 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/s2t_transformer.py (155:160) - fairseq/models/speech_to_text/xm_transformer.py (147:152) duplicated block id: 1164 size: 6 cleaned lines of code in 2 files: - fairseq/models/hubert/hubert_asr.py (76:81) - fairseq/models/wav2vec/wav2vec2.py (167:172) duplicated block id: 1165 size: 6 cleaned lines of code in 2 files: - fairseq/models/roberta/model_xlmr.py (26:31) - fairseq/models/speech_to_text/s2t_transformer.py (99:104) duplicated block id: 1166 size: 6 cleaned lines of code in 2 files: - fairseq/models/speech_to_text/s2t_transformer.py (225:230) - fairseq/models/speech_to_text/xm_transformer.py (417:422) duplicated block id: 1167 size: 6 cleaned lines of code in 2 files: - fairseq/models/roberta/model_camembert.py (38:45) - fairseq/models/speech_to_text/s2t_transformer.py (107:114) duplicated block id: 1168 size: 6 cleaned lines of code in 2 files: - fairseq/models/hubert/hubert_asr.py (215:220) - fairseq/models/wav2vec/wav2vec2_asr.py (294:299) duplicated block id: 1169 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/multilingual_masked_lm.py (332:338) - fairseq/tasks/sentence_ranking.py (213:219) duplicated block id: 1170 size: 6 cleaned lines of code in 2 files: - fairseq/benchmark/dummy_lm.py (60:65) - fairseq/benchmark/dummy_mt.py (61:66) duplicated block id: 1171 size: 6 cleaned lines of code in 2 files: - fairseq/models/transformer_from_pretrained_xlm.py (122:127) - fairseq/models/transformer_from_pretrained_xlm.py (139:145) duplicated 
block id: 1172 size: 6 cleaned lines of code in 2 files: - fairseq/data/iterators.py (176:182) - fairseq/data/iterators.py (351:357) duplicated block id: 1173 size: 6 cleaned lines of code in 2 files: - fairseq/models/speech_to_speech/s2s_transformer.py (349:354) - fairseq/models/speech_to_text/xm_transformer.py (417:422) duplicated block id: 1174 size: 6 cleaned lines of code in 2 files: - fairseq/models/speech_to_speech/s2s_transformer.py (279:284) - fairseq/models/speech_to_text/xm_transformer.py (147:152) duplicated block id: 1175 size: 6 cleaned lines of code in 2 files: - fairseq/models/speech_to_speech/s2s_transformer.py (389:395) - fairseq/models/transformer/transformer_base.py (127:135) duplicated block id: 1176 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/fairseq_task.py (354:359) - fairseq/tasks/translation_multi_simple_epoch.py (199:204) duplicated block id: 1177 size: 6 cleaned lines of code in 2 files: - fairseq/models/hubert/hubert_asr.py (356:361) - fairseq/modules/dynamic_convolution.py (61:66) duplicated block id: 1178 size: 6 cleaned lines of code in 2 files: - fairseq/modules/dynamic_convolution.py (17:22) - fairseq/modules/lightweight_convolution.py (16:21) duplicated block id: 1179 size: 6 cleaned lines of code in 2 files: - fairseq/models/nat/cmlm_transformer.py (123:128) - fairseq/models/speech_to_speech/s2s_transformer.py (632:637) duplicated block id: 1180 size: 6 cleaned lines of code in 2 files: - fairseq/model_parallel/modules/multihead_attention.py (20:26) - fairseq/model_parallel/modules/transformer_layer.py (12:18) duplicated block id: 1181 size: 6 cleaned lines of code in 2 files: - fairseq/models/roberta/model.py (358:363) - fairseq/models/speech_to_text/xm_transformer.py (462:467) duplicated block id: 1182 size: 6 cleaned lines of code in 2 files: - fairseq/optim/fused_adam.py (106:112) - fairseq/optim/nag.py (54:60) duplicated block id: 1183 size: 6 cleaned lines of code in 2 files: - 
fairseq/model_parallel/models/roberta/model.py (167:172) - fairseq/models/bart/model.py (307:312) duplicated block id: 1184 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/speech_to_speech.py (147:152) - fairseq/tasks/text_to_speech.py (52:57) duplicated block id: 1185 size: 6 cleaned lines of code in 2 files: - fairseq/models/nat/nonautoregressive_ensembles.py (112:117) - fairseq/models/nat/nonautoregressive_ensembles.py (181:186) duplicated block id: 1186 size: 6 cleaned lines of code in 2 files: - fairseq_cli/interactive.py (29:34) - fairseq_cli/validate.py (21:26) duplicated block id: 1187 size: 6 cleaned lines of code in 2 files: - fairseq/tasks/speech_to_text.py (37:42) - fairseq/tasks/text_to_speech.py (52:57) duplicated block id: 1188 size: 6 cleaned lines of code in 2 files: - fairseq/models/text_to_speech/tts_transformer.py (192:197) - fairseq/models/text_to_speech/tts_transformer.py (265:270) duplicated block id: 1189 size: 6 cleaned lines of code in 2 files: - fairseq/model_parallel/models/pipeline_parallel_transformer/model.py (619:624) - fairseq/models/transformer/transformer_decoder.py (457:462) duplicated block id: 1190 size: 6 cleaned lines of code in 2 files: - fairseq/criterions/speech_to_speech_criterion.py (215:220) - fairseq/criterions/speech_to_speech_criterion.py (223:228)