torchbiggraph/rpc.py (3 lines):
- line 17: # FIXME: is it efficient to torch.save into a buf? It's going to have to copy
- line 23: # FIXME: how to properly copy bytes to ByteTensor?
- line 44: # FIXME: we've got to get rid of this two-pass nonsense for dynamically sized

torchbiggraph/config.py (3 lines):
- line 501: # TODO Check that all partitioned entity types have the same number of
- line 503: # TODO Check that the batch size is a multiple of the batch negative number
- line 518: # TODO make this a non-inplace operation

torchbiggraph/losses.py (3 lines):
- line 37: # FIXME: This is not ideal. Perhaps we should pass in the config
- line 101: # FIXME Workaround for https://github.com/pytorch/pytorch/issues/15223.
- line 139: # FIXME Workaround for https://github.com/pytorch/pytorch/issues/15870

torchbiggraph/bucket_scheduling.py (3 lines):
- line 127: # TODO Change this function to use the same cost model as the LockServer
- line 434: # TODO If init_tree is enabled, during the first pass we might want to
- line 469: # TODO It may be interesting to get a random bucket among the acquirable

torchbiggraph/train_cpu.py (2 lines):
- line 727: # FIXME join distributed workers (not really necessary)
- line 855: # FIXME should we only delay if iteration_idx == 0?

torchbiggraph/schema.py (2 lines):
- line 21: # TODO Remove noqa when flake8 will understand kw_only added in attrs-18.2.0.
- line 211: # TODO Remove noqa when flake8 will understand kw_only added in attrs-18.2.0.

torchbiggraph/util.cpp (2 lines):
- line 246: * the left- and right-hand side of relation type i. (TODO: remove these
- line 317: // TODO check type and sizes first;

torchbiggraph/operators.py (2 lines):
- line 121: # FIXME This adapts from the pre-D14024710 format; remove eventually.
- line 300: # FIXME This adapts from the pre-D14024710 format; remove eventually.

torchbiggraph/batching.py (2 lines):
- line 28: # FIXME Is PyTorch's sort stable? Won't this risk messing up the random shuffle?
- line 109: # FIXME: it's not really safe to do partial batches if num_batch_negs != 0

torchbiggraph/parameter_sharing.py (2 lines):
- line 68: FIXME: torchbiggraph.rpc should be fixed to not require torch.serialization,
- line 70: FIXME: torch.distributed.recv should not require you to provide the

torchbiggraph/entitylist.py (1 line):
- line 67: # TODO We could check that, for all i, we have either tensor[i] < 0 or

torchbiggraph/checkpoint_storage.py (1 line):
- line 293: # FIXME: there's a slight danger here, say that a multi-machine job fails

torchbiggraph/train_gpu.py (1 line):
- line 232: # TODO have two permanent storages on GPU and move stuff in and out

torchbiggraph/tensorlist.py (1 line):
- line 137: # FIXME: this is a terrible API

torchbiggraph/util.py (1 line):
- line 287: # FIXME Add the rank to the name of each process.

torchbiggraph/eval.py (1 line):
- line 148: # FIXME This order assumes higher affinity on the left-hand side, as it's
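
The FIXMEs at torchbiggraph/rpc.py lines 17 and 23 concern serializing arbitrary objects into a tensor so they can be shipped over torch.distributed. A minimal sketch of that round-trip, assuming plain torch.save/torch.load plus a uint8 copy; the helper names below are illustrative, not the module's actual functions:

```python
import io
import torch


def _serialize(obj) -> torch.Tensor:
    # torch.save writes into an in-memory buffer; this copies the data once
    # (the inefficiency the FIXME on line 17 is asking about).
    buf = io.BytesIO()
    torch.save(obj, buf)
    # Copy the bytes into a uint8 tensor (the question on line 23).
    return torch.tensor(bytearray(buf.getvalue()), dtype=torch.uint8)


def _deserialize(t: torch.Tensor):
    # Round-trip back to a Python object.
    buf = io.BytesIO(t.numpy().tobytes())
    return torch.load(buf)
```

On newer PyTorch versions (1.10+), torch.frombuffer can build the uint8 tensor without the extra copy, at the cost of aliasing the buffer's memory.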
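
The TODO at torchbiggraph/config.py line 503 (and the related FIXME about partial batches at torchbiggraph/batching.py line 109) could be addressed by a simple validation step. A hedged sketch of what such a check might look like; the function name and message are assumptions, not existing code:

```python
def check_batch_sizes(batch_size: int, num_batch_negs: int) -> None:
    # If batch negatives are used, each batch is split into chunks of
    # num_batch_negs positives; a batch size that is not a multiple of it
    # leaves a smaller trailing chunk (the "partial batch" the FIXME in
    # batching.py warns about).
    if num_batch_negs > 0 and batch_size % num_batch_negs != 0:
        raise ValueError(
            f"batch_size ({batch_size}) must be a multiple of "
            f"num_batch_negs ({num_batch_negs})"
        )
```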
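
The FIXME at torchbiggraph/batching.py line 28 asks whether PyTorch's sort is stable; by default it makes no such guarantee, but recent versions (1.9+) accept a stable=True flag. A small sketch, assuming the sort keys are relation-type indices of an already shuffled batch:

```python
import torch

torch.manual_seed(0)
rel = torch.randint(0, 4, (10,))  # relation type of each edge, already shuffled
# A stable sort groups edges by relation type while preserving the shuffled
# relative order within each group, so the random shuffle is not "messed up".
perm = torch.sort(rel, stable=True).indices
```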