Summary: 37 instances, 31 unique

| Text | Count |
| --- | --- |
| `// TODO remove the dependency on input and use instead its sizes -> save memory` | 2 |
| `# TODO use size instead of scale to make it robust to different sizes` | 1 |
| `# TODO Ideally, just remove this and let me model handle arbitrary` | 1 |
| `# TODO think about a representation of batch of boxes` | 1 |
| `# TODO maybe add an API for querying the ws / hs` | 1 |
| `# TODO resolve this difference and make it consistent. It should be per image,` | 1 |
| `// TODO raise error if not compiled with CUDA` | 1 |
| `# TODO maybe push this to nn?` | 1 |
| `TO_REMOVE = 1 # TODO remove` | 1 |
| `# FIXME ideally this would be achieved with a CombinedLRScheduler,` | 1 |
| `THCState *state = at::globalContext().lazyInitCUDA(); // TODO replace with getTHCState` | 1 |
| `# TODO make it pretty` | 1 |
| `# TODO add squeeze?` | 1 |
| `# TODO might be better to add an extra field` | 1 |
| `// TODO add more checks` | 1 |
| `# TODO need to make the use_07_metric format available` | 1 |
| `# TODO: specify init for the above` | 2 |
| `//TODO: larger threads-per-block might be better here, because each CTA uses 32 KB of shmem,` | 1 |
| `# TODO check if want to return a single BoxList or a composite` | 1 |
| `# TODO should I filter empty boxes here?` | 1 |
| `TO_REMOVE = 1 # TODO remove` | 1 |
| `# TODO rename x to roi_box_features, if it doesn't increase memory consumption` | 1 |
| `# TODO redundant, remove` | 1 |
| `# TODO chck if necessary` | 1 |
| `# TODO: Is this JIT compatible?` | 1 |
| `// TODO make it in a common file` | 2 |
| `# TODO replace with get_img_info?` | 3 |
| `torch.cuda.empty_cache() # TODO check if it helps` | 2 |
| `# TODO maybe remove this and make it explicit in the documentation` | 1 |
| `# FIXME remove this once c10d fixes the bug it has` | 1 |
| `// TODO improve this part` | 1 |
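
The table is a flat tally: each row is the literal text of a TODO/FIXME comment line and the number of times it occurs in the source tree. As a rough illustration only, the sketch below shows one way such a tally could be reproduced with a short Python script; the scanned file extensions, the repository root, and the `collect_markers` helper are assumptions for illustration, not the tool that actually produced the numbers above.

```python
# Minimal sketch (assumed, not the original tooling): scan a source tree for
# lines containing TODO or FIXME, count how often each stripped line occurs,
# and print a summary in the same "instances / unique" shape as the table.
import re
from collections import Counter
from pathlib import Path

MARKER = re.compile(r"\b(TODO|FIXME)\b")
EXTENSIONS = {".py", ".cpp", ".cu", ".h", ".cuh"}  # assumed file types

def collect_markers(root: str) -> Counter:
    counts = Counter()
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in EXTENSIONS:
            continue
        for line in path.read_text(errors="ignore").splitlines():
            if MARKER.search(line):
                counts[line.strip()] += 1
    return counts

if __name__ == "__main__":
    counts = collect_markers(".")  # repository root is an assumption
    print(f"Summary: {sum(counts.values())} instances, {len(counts)} unique")
    for text, n in counts.most_common():
        print(f"{n:>3}  {text}")
```

Note that a tally keyed on the stripped line text treats comments that differ only in surrounding code or whitespace as distinct entries, which is why the same marker text (e.g. `TO_REMOVE = 1 # TODO remove`) can appear as more than one row.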