Summary: 90 instances, 67 unique

| Text | Count |
| --- | ---: |
| # TODO: (eellison) T54974082 https://github.com/pytorch/pytorch/issues/26744/pytorch/issues/26744 | 1 |
| # TODO add some warnings in this case | 1 |
| # FIXME this is kind of a hack, but we will jump to the previous keyframe | 1 |
| // TODO: reset params.formats | 1 |
| # TODO: should this handle containers? | 1 |
| // TODO: consider supporting PNG_INFO_tRNS | 1 |
| # TODO check if stream needs to always be the video stream here or not | 1 |
| // TODO: There should be an easier way to do this | 5 |
| // FIXME: change this when batches of size > 1 are allowed | 1 |
| # TODO format that into engineering format | 1 |
| # TODO change this | 1 |
| # TODO: replace this method with torch.iinfo when it gets torchscript support. | 1 |
| # TODO https://github.com/pytorch/vision/pull/4232#pullrequestreview-730461659 | 11 |
| # TODO: simplify when indexing without rank will be supported by ONNX | 1 |
| # TODO: https://github.com/pytorch/pytorch/issues/26792 | 1 |
| # TODO: deprecate eventually | 3 |
| // TODO here torch::relu_ and torch::adaptive_avg_pool2d wrapped in | 1 |
| # FIXME: Is this needed? SqueezeNet should only be called from the | 1 |
| # FIXME: Building torchvision with ffmpeg on MacOS or with Python 3.9 | 1 |
| # TODO: Revert it once the bug is fixed. | 1 |
| # TODO: Also save the optimizer and scheduler | 1 |
| // FIXME: This is private, API might change: | 1 |
| # TODO: we should expect bincount to always be faster than histc, but this | 1 |
| // TODO: maybe we can specify dynamic shared memory size before calling the | 1 |
| # TODO raise a warning? | 1 |
| # FIXME remove this and make paste_masks_in_image run on the GPU | 1 |
| # TODO: support test | 1 |
| # FIXME: causes crash. See the following GitHub issues for more details. | 1 |
| # TODO: Do we want a list or a dict? | 1 |
| // TODO: Once torch::from_file handles UTF-8 paths correctly, we should move | 1 |
| # TODO: replace with dtype.is_floating_point when torchscript supports it | 2 |
| # TODO: make this more precise | 1 |
| # TODO: refactor with utils.verify_str_arg | 1 |
| # TODO: There might be a way to vectorize this | 1 |
| // TODO: maybe we can specify dynamic shared memory size before calling the | 1 |
| # TODO use pretrained as a string to specify the backend | 5 |
| # TODO: resume, pretrained, and weights should be in an exclusive arg group | 1 |
| # TODO: https://github.com/pytorch/pytorch/issues/26731 | 1 |
| # FIXME We don't know if we should expect this to happen | 1 |
| # TODO: Ideally, this should be part of the eval transforms preset, instead | 1 |
| # See FIXME above | 1 |
| # TODO: create a segmentation feature | 1 |
| // FIXME: Remove this section once we can use at::native for android xplat | 1 |
| # FIXME: https://github.com/pytorch/vision/issues/3367 | 1 |
| # FIXME: This checking is not done for the other models | 1 |
| # TODO: Benchmark this against the approach described at https://github.com/pytorch/vision/pull/5197#discussion_r786251298 | 1 |
| # TODO: Move this to a function | 2 |
| # TODO: might be weird to not take `num_workers` into account | 1 |
| # TODO: make this more informative | 1 |
| # FIXME need to take into account that the datasets | 2 |
| # TODO add a warning | 1 |
| # FIXME: https://github.com/pytorch/pytorch/issues/65000 | 1 |
| # TODO format that into engineering format | 1 |
| # TODO: this needs to be more elaborate | 1 |
| # FIXME: squeezenet1_x() functions | 1 |
| # FIXME: Eliminate copy-pasted code for fill standardization and _augmentation_space() by moving stuff on a base class | 1 |
| # FIXME assume for now that testing uses the largest scale | 1 |
| # TODO: specify the return type | 1 |
| # TODO: this is weird as it drops more elements than it should | 1 |
| # FIXME: pad input in case it is smaller than crop_box | 1 |
| // TODO: add read from memory option | 1 |
| # TODO: vectorize this instead of using a for loop | 1 |
| # TODO: this needs to be revisited to allow subclassing of custom transforms | 1 |
| # TODO: Once torchscript supports Enums with staticmethod | 1 |
| # TODO : replace below with a dynamic padding when support is added in ONNX | 1 |
| # TODO: Remove this warning after ONNX opset 16 is supported. | 1 |
| # TODO add back the assert | 1 |
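
Two recurring entries above point at the same TorchScript limitation: torch.iinfo and the dtype.is_floating_point property were not scriptable when those comments were written, so the code carries hand-rolled equivalents. Below is a minimal sketch of that kind of workaround; the helper names `_max_value` and `_is_floating_point` are hypothetical, and torchvision's actual implementations may differ.

```python
import torch


def _max_value(dtype: torch.dtype) -> int:
    # Eager-mode code could simply return torch.iinfo(dtype).max, but
    # torch.iinfo was not TorchScript-compatible when these TODOs were
    # written, so the integer bounds are spelled out by hand.
    if dtype == torch.uint8:
        return 255
    elif dtype == torch.int8:
        return 127
    elif dtype == torch.int16:
        return 32767
    elif dtype == torch.int32:
        return 2147483647
    elif dtype == torch.int64:
        return 9223372036854775807
    else:
        raise TypeError(f"Unsupported integer dtype: {dtype}")


def _is_floating_point(dtype: torch.dtype) -> bool:
    # Stand-in for the dtype.is_floating_point property named in the TODO.
    return dtype in (torch.float16, torch.float32, torch.float64)


# Sanity checks against the eager-mode APIs the TODOs want to adopt.
assert _max_value(torch.uint8) == torch.iinfo(torch.uint8).max
assert _is_floating_point(torch.float32) == torch.float32.is_floating_point
```

Once TorchScript supports those APIs, each helper body collapses to the one-liner its TODO names, which is why both comments are phrased as "replace ... when it gets torchscript support".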