fairseq/trainer.py (3 lines):
- line 236: # TODO: later merge it into the model
- line 284: # TODO: print should really go to logger, this print goes
- line 396: # TODO: later merge it into the model

fairseq/criterions/label_smoothed_length_cross_entropy.py (2 lines):
- line 41: sample_size = ntokens #TODO why not merge ntokens and sample_size? what is the difference?
- line 60: length_target = sample['net_input']['prev_output_tokens'].ne(self.padding_idx).sum(-1).unsqueeze(-1) #TODO doesn't work for dynamic length. change to eos-based method.

fairseq/models/disco_transformer.py (1 line):
- line 42: # # TODO: Completely move masking to the model for general purposes.

fairseq/utils.py (1 line):
- line 148: # TODO: Very rare cases where the replacement is '' should be handled gracefully

fairseq/fb_hub.py (1 line):
- line 23: # TODO: fix it after Python2 EOL

fairseq/models/transformer.py (1 line):
- line 632: # TODO remove this once we update apex with the fix

fairseq/modules/positional_embedding.py (1 line):
- line 23: # TODO: The right place for this offset would be inside
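The most substantive item above is the length-target TODO at line 60 of fairseq/criterions/label_smoothed_length_cross_entropy.py: counting non-padding tokens in prev_output_tokens as the length target, and the comment asks for an eos-based method instead. Below is a minimal sketch of what such a method could look like, assuming targets are right-padded LongTensors containing a single EOS per sequence; the helper name `eos_based_length_target` and the fallback to the padding-based count are illustrative assumptions, not code from the repository.

```python
import torch


def eos_based_length_target(target: torch.Tensor, eos_idx: int, pad_idx: int) -> torch.Tensor:
    """Length target derived from the position of the first EOS token.

    Assumptions (not taken from the repo): `target` is a (batch, tgt_len)
    LongTensor of token ids, right-padded with `pad_idx`, with at most one
    EOS per sequence. Returns a (batch, 1) tensor of lengths counting tokens
    up to and including EOS, matching the shape of the padding-count version.
    """
    is_eos = target.eq(eos_idx)                  # (batch, tgt_len) bool mask
    # argmax over the long-cast mask gives the index of the first EOS per row
    eos_pos = is_eos.long().argmax(dim=-1)       # (batch,)
    length = eos_pos + 1                         # include the EOS token itself
    # assumed fallback: rows with no EOS use the padding-based count
    has_eos = is_eos.any(dim=-1)
    pad_count = target.ne(pad_idx).sum(dim=-1)
    length = torch.where(has_eos, length, pad_count)
    return length.unsqueeze(-1)
```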