fairseq/tasks/multilingual_translation.py (2 lines):
- line 276: # TODO make summing of the sample sizes configurable
- line 293: # TODO make summing of the sample sizes configurable

fairseq/modules/transformer_layer.py (2 lines):
- line 94: # TODO: to formally solve this problem, we need to change fairseq's
- line 164: # TODO remove this once we update apex with the fix

fairseq/iterative_refinement_generator.py (2 lines):
- line 110: # TODO: iterative refinement generator does not support ensemble for now.
- line 127: # TODO: better encoder inputs?

fairseq/optim/adafactor.py (1 line):
- line 226: # TODO: remove check once pyTorch avoids a copy for this case

fairseq/data/round_robin_zip_datasets.py (1 line):
- line 76: # TODO make it configurable whether to use max() or sum() here

fairseq/data/transform_eos_lang_pair_dataset.py (1 line):
- line 65: # TODO: support different padding direction on target side

fairseq/optim/nag.py (1 line):
- line 96: # TODO: remove check once pyTorch avoids a copy for this case

fairseq/models/nat/nonautoregressive_transformer.py (1 line):
- line 362: # TODO: implementing length-beam

fairseq/data/legacy/block_pair_dataset.py (1 line):
- line 219: TODO: ids in skip_ids should be consecutive, we can extend it to more generic version later

fairseq/utils.py (1 line):
- line 160: # TODO: Very rare cases where the replacement is '' should be handled gracefully

fairseq/data/legacy/masked_lm_dataset.py (1 line):
- line 208: # TODO: Can we add deteminism without this constraint?

fairseq/data/noising.py (1 line):
- line 113: # TODO: speed up the following loop

fairseq/optim/adam.py (1 line):
- line 198: # TODO: remove check once pyTorch avoids a copy for this case

fairseq/models/nat/insertion_transformer.py (1 line):
- line 179: # TODO: decoding for InsertionTransformer

fairseq/tasks/semisupervised_translation.py (1 line):
- line 343: # TODO make summing of the sample sizes configurable

fairseq/modules/positional_embedding.py (1 line):
- line 20: # TODO: The right place for this offset would be inside

fairseq/criterions/legacy_masked_lm.py (1 line):
- line 96: # TODO: Remove this after refactor of BERTModel

fairseq/models/lstm.py (1 line):
- line 359: # TODO make bias configurable
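Several of these TODOs describe the same change: replacing a hard-coded reduction over per-dataset sample sizes with a configurable one (sum() in multilingual_translation.py and semisupervised_translation.py, max() vs. sum() in round_robin_zip_datasets.py). A minimal Python sketch of one way such a knob could look; the aggregate_sample_sizes helper and its reduce_fn parameter are hypothetical illustrations, not fairseq API:

    from typing import Callable, Iterable

    # Hypothetical helper, not part of fairseq: exposes the reduction that the
    # TODOs above hard-code as sum() or max() as a caller-supplied parameter.
    def aggregate_sample_sizes(
        sizes: Iterable[int],
        reduce_fn: Callable[[Iterable[int]], int] = sum,
    ) -> int:
        """Reduce per-dataset sample sizes with a caller-chosen function."""
        return reduce_fn(list(sizes))

    # Usage sketch: a RoundRobinZipDatasets-style num_tokens() could forward a
    # constructor-supplied reduce_fn instead of hard-coding the reduction:
    #   aggregate_sample_sizes((d.num_tokens(index) for d in datasets), reduce_fn=max)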