Summary: 33 instances, 27 unique

| Text | Count |
| --- | --- |
| `# FIXME: Models will use context "with torch.no_grad():", so the lifetime of no_grad will end after the eval().` | 1 |
| `# TODO: This is a _HORRIBLE_ patch related to #208` | 2 |
| `` # TODO: deduplicate with `torchbenchmark.util.model.no_grad` `` | 1 |
| `# TODO: perform subsamping` | 1 |
| `# TODO: currently only support 1 GPU device` | 1 |
| `TODO: this is a little HACK now, put batch_size here now.` | 1 |
| `# TODO (@mttk): Populate classs with default values of special symbols` | 1 |
| `# TODO: this should also be done with the ProgressMeter` | 1 |
| `# TODO: implement state passing for lstms` | 1 |
| `# FIXME: Must incorporate this "torch.is_grad_enabled()" inside of actual eval() func.` | 1 |
| `# TODO: Modify and update the model to apply metadata changes by the user.` | 1 |
| `# TODO: Translate bpe encoded files` | 1 |
| `# TODO: in torch 1.0, torch.mean() support dim list` | 1 |
| `# TODO: make it with torch instead of numpy` | 1 |
| `# TODO: expand to batch operation.` | 1 |
| `# TODO - a lot of this was copied from pytorch/jit/scripts/log_extract.py,` | 1 |
| `# assert 'weight' not in benchmarks[benchmark], "TODO implement manual benchmark weights"` | 2 |
| `# TODO: Batch translation` | 1 |
| `# TODO - currently load_model assumes cuda` | 1 |
| `# TODO: Use nn.LayerNorm to impl cLN to speed up` | 1 |
| `assert 'weight' not in category_spec, "TODO implement manual category weights"` | 2 |
| `assert benchmarks[benchmark] is None, "TODO handle benchmark as dict of config specs"` | 2 |
| `assert 'weight' not in benchmarks, "TODO implement manual task weights"` | 2 |
| `# TODO: when P = 3 here works fine, but when P = 2 maybe need to pad?` | 1 |
| `assert 'weight' not in tasks, "TODO implement manual domain weights"` | 2 |
| `# TODO: Try different terminate conditions.` | 1 |
| `` # TODO: Also update the `freq`, although it is not likely to be used. `` | 1 |
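
The two grad-mode FIXMEs above describe the same pitfall: callers wrap `eval()` in `with torch.no_grad():`, but that context ends as soon as the block exits, so nothing guarantees the model's own `eval()` actually ran without autograd. A minimal sketch of the pattern the FIXMEs suggest, using a hypothetical `Model` class (not the actual torchbenchmark code), is to put the guard and the `torch.is_grad_enabled()` check inside `eval()` itself:

```python
import torch

class Model:
    """Hypothetical model wrapper illustrating the FIXME, not torchbenchmark's API."""

    def __init__(self):
        self.net = torch.nn.Linear(8, 8)
        self.example_input = torch.randn(4, 8)

    def eval(self):
        # Guard inside eval(): gradient tracking is disabled for this
        # forward pass regardless of the caller's grad mode, and the
        # caller's mode is restored when the block exits.
        with torch.no_grad():
            assert not torch.is_grad_enabled()
            return self.net(self.example_input)

model = Model()
out = model.eval()
assert not out.requires_grad    # no autograd graph was recorded
assert torch.is_grad_enabled()  # caller's grad mode is untouched
```

With the guard inside `eval()`, the output tensor carries `requires_grad=False` while the caller's global grad mode is left untouched, which is presumably what the deduplication TODO referencing `torchbenchmark.util.model.no_grad` is after as well.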