pytext/models/model.py (2 lines):
    - line 153: # TODO: add back after migration
    - line 301: # TODO (geoffreygoh): Using config for such a purpose is really a hack,
pytext/metric_reporters/metric_reporter.py (2 lines):
    - line 215: # TODO this function can be merged with batch_context once data migration is done
    - line 222: # TODO this method can be removed by moving Channel construction to Task
pytext/legacy/datasets/translation.py (2 lines):
    - line 135: # TODO: This is a _HORRIBLE_ patch related to #208
    - line 288: # TODO: This is a _HORRIBLE_ patch related to #208
pytext/optimizer/fp16_optimizer.py (2 lines):
    - line 31: # TODO: temporary fix fairseq dependency, remove after fairseq new release.
    - line 34: # TODO: remove this try block after the new release by fairseq that
pytext/models/seq_models/conv_decoder.py (2 lines):
    - line 355: # TODO : Verify incremental generation for AR mode
    - line 412: # TODO : Positional embeddings needs to be tested in AR mode
pytext/config/pytext_config.py (1 line):
    - line 187: # TODO these two configs are only kept only to be backward comptible with
pytext/metric_reporters/intent_slot_detection_metric_reporter.py (1 line):
    - line 94: # TODO this part should be handled more elegantly
pytext/torchscript/tensorizer/normalizer.py (1 line):
    - line 51: # TODO: this is only to satisfy the TorchScript compiler.
pytext/metric_reporters/classification_metric_reporter.py (1 line):
    - line 90: # TODO: refactor metric reporting and remove this hack
pytext/utils/file_io.py (1 line):
    - line 10: # TODO: @stevenliu use PathManagerFactory after it's released to PyPI
pytext/data/sources/dense_retrieval.py (1 line):
    - line 19: # TODO: Remove assumption that only 1 +ve passage is sample per question.
pytext/trainers/trainer.py (1 line):
    - line 756: # TODO merge this step into add_batch_stats once all data
pytext/models/representations/biseqcnn.py (1 line):
    - line 41: TODO: Current implementation has a single layer conv-maxpool operation.
pytext/metrics/__init__.py (1 line):
    - line 584: TODO: This is too slow, improve the performance
pytext/models/representations/transformer/luna_attention.py (1 line):
    - line 277: # TODO save prev_pcontext for causal attention
pytext/legacy/vocab.py (1 line):
    - line 28: # TODO (@mttk): Populate classs with default values of special symbols
pytext/models/representations/transformer/sentence_encoder.py (1 line):
    - line 169: # TODO: segment_embeddings?
pytext/models/word_model.py (1 line):
    - line 57: BiLSTMSlotAttention.Config,  # TODO: make default when sorting solved
pytext/data/featurizer/simple_featurizer.py (1 line):
    - line 60: # TODO: support remaining features (see OutputRecord)
pytext/models/embeddings/dict_embedding.py (1 line):
    - line 72: # TODO: clean this up once fully migrated to new data handler design
pytext/data/tokenizers/tokenizer.py (1 line):
    - line 241: # TODO: T57433776 remove once FairSeq support PathManager
pytext/data/tensorizers.py (1 line):
    - line 740: TODO: Even though very similar, 'FloatListTensorizer' currently does not support this vanilla case for tensorization of List[float].
pytext/task/new_task.py (1 line):
    - line 133: # TODO: deprecate this
pytext/metric_reporters/pairwise_ranking_metric_reporter.py (1 line):
    - line 14: # TODO: add file channel
pytext/task/serialize.py (1 line):
    - line 104: # TODO: T53664090 @stevenliu save & load state_dict() of optimizer and scheduler
pytext/metric_reporters/channel.py (1 line):
    - line 105: # TODO change print_metrics function to __str__ T33522209
pytext/metric_reporters/language_model_metric_reporter.py (1 line):
    - line 265: # TODO: remove GPU0 report
pytext/models/seq_models/light_conv.py (1 line):
    - line 28: # ARBABU TODO : convert this to a enum