Summary: 30 instances, 29 unique

| Text | Count |
| --- | --- |
| `# TODO If init_tree is enabled, during the first pass we might want to` | 1 |
| `# TODO It may be interesting to get a random bucket among the acquirable` | 1 |
| `# TODO We could check that, for all i, we have either tensor[i] < 0 or` | 1 |
| `# TODO Change this function to use the same cost model as the LockServer` | 1 |
| `* the left- and right-hand side of relation type i. (TODO: remove these` | 1 |
| `# TODO Check that the batch size is a multiple of the batch negative number` | 1 |
| `# FIXME join distributed workers (not really necessary)` | 1 |
| `FIXME: torchbiggraph.rpc should be fixed to not require torch.serialization,` | 1 |
| `FIXME: torch.distributed.recv should not require you to provide the` | 1 |
| `# TODO have two permanent storages on GPU and move stuff in and out` | 1 |
| `# FIXME: how to properly copy bytes to ByteTensor?` | 1 |
| `# FIXME: there's a slight danger here, say that a multi-machine job fails` | 1 |
| `# FIXME Workaround for https://github.com/pytorch/pytorch/issues/15870` | 1 |
| `# TODO Remove noqa when flake8 will understand kw_only added in attrs-18.2.0.` | 1 |
| `# TODO Remove noqa when flake8 will understand kw_only added in attrs-18.2.0.` | 1 |
| `# FIXME This adapts from the pre-D14024710 format; remove eventually.` | 2 |
| `# FIXME Add the rank to the name of each process.` | 1 |
| `# FIXME should we only delay if iteration_idx == 0?` | 1 |
| `# FIXME: this is a terrible API` | 1 |
| `# FIXME Workaround for https://github.com/pytorch/pytorch/issues/15223.` | 1 |
| `# FIXME: we've got to get rid of this two-pass nonsense for dynamically sized` | 1 |
| `# TODO Check that all partitioned entity types have the same number of` | 1 |
| `# FIXME: This is not ideal. Perhaps we should pass in the config` | 1 |
| `// TODO check type and sizes first;` | 1 |
| `# FIXME: is it efficient to torch.save into a buf? It's going to have to copy` | 1 |
| `# FIXME Is PyTorch's sort stable? Won't this risk messing up the random shuffle?` | 1 |
| `# TODO make this a non-inplace operation` | 1 |
| `# FIXME This order assumes higher affinity on the left-hand side, as it's` | 1 |
| `# FIXME: it's not really safe to do partial batches if num_batch_negs != 0` | 1 |