26 duplicated lines in:
- fairseq/model_parallel/modules/multihead_attention.py (123:153, 9%)
- fairseq/modules/multihead_attention.py (331:361, 5%)

17 duplicated lines in:
- fairseq/model_parallel/modules/multihead_attention.py (331:349, 6%)
- fairseq/modules/multihead_attention.py (585:603, 3%)

16 duplicated lines in:
- fairseq/model_parallel/modules/multihead_attention.py (283:303, 5%)
- fairseq/modules/multihead_attention.py (522:542, 3%)

10 duplicated lines in:
- fairseq/model_parallel/modules/multihead_attention.py (67:77, 3%)
- fairseq/modules/multihead_attention.py (48:59, 1%)

9 duplicated lines in:
- fairseq/model_parallel/modules/multihead_attention.py (250:258, 3%)
- fairseq/modules/multihead_attention.py (483:491, 1%)

9 duplicated lines in:
- fairseq/model_parallel/modules/multihead_attention.py (192:200, 3%)
- fairseq/modules/multihead_attention.py (414:422, 1%)

8 duplicated lines in:
- fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py (534:541, 1%)
- fairseq/modules/transformer_layer.py (483:491, 1%)

8 duplicated lines in:
- fairseq/model_parallel/modules/multihead_attention.py (37:44, 2%)
- fairseq/modules/multihead_attention.py (26:33, 1%)

8 duplicated lines in:
- fairseq/model_parallel/modules/multihead_attention.py (37:44, 2%)
- fairseq/modules/sparse_multihead_attention.py (24:31, 8%)

8 duplicated lines in:
- fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py (334:346, 1%)
- fairseq/modules/transformer_layer.py (149:161, 1%)

7 duplicated lines in:
- fairseq/model_parallel/modules/multihead_attention.py (96:102, 2%)
- fairseq/modules/multihead_attention.py (243:249, 1%)

6 duplicated lines in:
- fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py (595:600, 1%)
- fairseq/modules/dynamic_convolution.py (61:66, 2%)

6 duplicated lines in:
- fairseq/model_parallel/modules/multihead_attention.py (203:208, 2%)
- fairseq/modules/multihead_attention.py (424:429, 1%)

6 duplicated lines in:
- fairseq/model_parallel/modules/multihead_attention.py (170:177, 2%)
- fairseq/modules/multihead_attention.py (395:402, 1%)
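The tool that produced the clone ranges above is not named here, but a comparable report can be regenerated with pylint's duplicate-code checker (message R0801). The sketch below is an assumption about tooling, not the original command; the threshold of 6 lines matches the smallest clone listed above, and the exact spans and percentages it reports may differ from those shown.

```python
# Minimal sketch: re-detect cross-module clones in fairseq with pylint.
# Assumption: pylint is installed and this is run from the repo root;
# the original report may have come from a different clone detector.
from pylint.lint import Run

Run(
    [
        "--disable=all",            # silence every other checker
        "--enable=duplicate-code",  # only report similar-lines clones (R0801)
        "--min-similarity-lines=6", # smallest clone in the list above is 6 lines
        "fairseq/modules",
        "fairseq/model_parallel",
    ],
    exit=False,  # keep the interpreter alive after linting
)
```

Most of the duplication concentrates in fairseq/model_parallel/modules/multihead_attention.py mirroring fairseq/modules/multihead_attention.py, so extracting the shared logic into a common base class or helper module would eliminate the bulk of these clones at once.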