Summary: 33 instances, 32 unique

| Text | Count |
| --- | --- |
| `# TODO: add file channel` | 1 |
| `# TODO: clean this up once fully migrated to new data handler design` | 1 |
| `# TODO: remove this try block after the new release by fairseq that` | 1 |
| `# ARBABU TODO : convert this to a enum` | 1 |
| `# TODO: This is a _HORRIBLE_ patch related to #208` | 2 |
| `TODO: This is too slow, improve the performance` | 1 |
| `# TODO: segment_embeddings?` | 1 |
| `# TODO these two configs are only kept only to be backward comptible with` | 1 |
| `# TODO: this is only to satisfy the TorchScript compiler.` | 1 |
| `# TODO: add back after migration` | 1 |
| `TODO: Even though very similar, 'FloatListTensorizer' currently does not support this vanilla case for tensorization of List[float].` | 1 |
| `# TODO merge this step into add_batch_stats once all data` | 1 |
| `# TODO (@mttk): Populate classs with default values of special symbols` | 1 |
| `# TODO : Positional embeddings needs to be tested in AR mode` | 1 |
| `# TODO: temporary fix fairseq dependency, remove after fairseq new release.` | 1 |
| `# TODO (geoffreygoh): Using config for such a purpose is really a hack,` | 1 |
| `# TODO: deprecate this` | 1 |
| `# TODO : Verify incremental generation for AR mode` | 1 |
| `# TODO change print_metrics function to __str__ T33522209` | 1 |
| `# TODO: @stevenliu use PathManagerFactory after it's released to PyPI` | 1 |
| `# TODO this function can be merged with batch_context once data migration is done` | 1 |
| `# TODO this method can be removed by moving Channel construction to Task` | 1 |
| `BiLSTMSlotAttention.Config,  # TODO: make default when sorting solved` | 1 |
| `# TODO: refactor metric reporting and remove this hack` | 1 |
| `# TODO: T53664090 @stevenliu save & load state_dict() of optimizer and scheduler` | 1 |
| `# TODO: remove GPU0 report` | 1 |
| `# TODO: Remove assumption that only 1 +ve passage is sample per question.` | 1 |
| `# TODO: T57433776 remove once FairSeq support PathManager` | 1 |
| `# TODO this part should be handled more elegantly` | 1 |
| `# TODO: support remaining features (see OutputRecord)` | 1 |
| `# TODO save prev_pcontext for causal attention` | 1 |
| `TODO: Current implementation has a single layer conv-maxpool operation.` | 1 |