torchvision/models/quantization/inception.py (7 lines):
 - line 43: # TODO https://github.com/pytorch/vision/pull/4232#pullrequestreview-730461659
 - line 54: # TODO https://github.com/pytorch/vision/pull/4232#pullrequestreview-730461659
 - line 65: # TODO https://github.com/pytorch/vision/pull/4232#pullrequestreview-730461659
 - line 76: # TODO https://github.com/pytorch/vision/pull/4232#pullrequestreview-730461659
 - line 87: # TODO https://github.com/pytorch/vision/pull/4232#pullrequestreview-730461659
 - line 121: # TODO https://github.com/pytorch/vision/pull/4232#pullrequestreview-730461659
 - line 215: # TODO use pretrained as a string to specify the backend

torchvision/io/video.py (5 lines):
 - line 188: # FIXME this is kind of a hack, but we will jump to the previous keyframe
 - line 192: # TODO check if stream needs to always be the video stream here or not
 - line 195: # TODO add some warnings in this case
 - line 208: # TODO add a warning
 - line 323: # TODO raise a warning?

setup.py (4 lines):
 - line 344: # FIXME: Building torchvision with ffmpeg on MacOS or with Python 3.9
 - line 345: # FIXME: causes crash. See the following GitHub issues for more details.
 - line 346: # FIXME: https://github.com/pytorch/pytorch/issues/65000
 - line 347: # FIXME: https://github.com/pytorch/vision/issues/3367

torchvision/ops/poolers.py (4 lines):
 - line 37: # TODO: (eellison) T54974082 https://github.com/pytorch/pytorch/issues/26744
 - line 291: # TODO: deprecate eventually
 - line 296: # TODO: deprecate eventually
 - line 305: # TODO: deprecate eventually

torchvision/transforms/functional_tensor.py (4 lines):
 - line 41: # TODO: replace this method with torch.iinfo when it gets torchscript support.
 - line 70: # TODO: replace with dtype.is_floating_point when torchscript supports it
 - line 94: # TODO: replace with dtype.is_floating_point when torchscript supports it
 - line 957: # TODO: we should expect bincount to always be faster than histc, but this

torchvision/models/detection/roi_heads.py (3 lines):
 - line 203: # TODO: simplify when indexing without rank will be supported by ONNX
 - line 449: # TODO : replace below with a dynamic padding when support is added in ONNX
 - line 737: # TODO: https://github.com/pytorch/pytorch/issues/26731

torchvision/csrc/ops/quantized/cpu/qroi_align_kernel.cpp (3 lines):
 - line 17: // FIXME: Remove this section once we can use at::native for android xplat
 - line 126: // FIXME: change this when batches of size > 1 are allowed
 - line 252: // FIXME: This is private, API might change:

torchvision/models/quantization/googlenet.py (3 lines):
 - line 48: # TODO https://github.com/pytorch/vision/pull/4232#pullrequestreview-730461659
 - line 72: # TODO https://github.com/pytorch/vision/pull/4232#pullrequestreview-730461659
 - line 145: # TODO use pretrained as a string to specify the backend

torchvision/prototype/datasets/utils/_internal.py (3 lines):
 - line 178: # TODO: this needs to be more elaborate
 - line 241: # TODO: this is weird as it drops more elements than it should
 - line 255: # TODO: might be weird to not take `num_workers` into account

torchvision/models/squeezenet.py (3 lines):
 - line 74: # FIXME: Is this needed? SqueezeNet should only be called from the
 - line 75: # FIXME: squeezenet1_x() functions
 - line 76: # FIXME: This checking is not done for the other models

torchvision/prototype/transforms/_transform.py (2 lines):
 - line 219: # TODO: this needs to be revisited to allow subclassing of custom transforms
 - line 305: # TODO: should this handle containers?

torchvision/prototype/datasets/benchmark.py (2 lines):
 - line 349: # TODO format that into engineering format
 - line 353: # TODO format that into engineering format

torchvision/prototype/datasets/utils/_resource.py (2 lines):
 - line 149: # TODO: make this more precise
 - line 155: # TODO: make this more informative

references/classification/train.py (2 lines):
 - line 81: # FIXME need to take into account that the datasets
 - line 96: # See FIXME above

torchvision/models/quantization/shufflenetv2.py (2 lines):
 - line 41: # TODO https://github.com/pytorch/vision/pull/4232#pullrequestreview-730461659
 - line 88: # TODO use pretrained as a string to specify the backend

torchvision/csrc/io/video/video.cpp (2 lines):
 - line 141: // TODO: reset params.formats
 - line 178: // TODO: add read from memory option

torchvision/csrc/ops/cuda/interpolate_aa_kernels.cu (2 lines):
 - line 157: // TODO: maybe we can specify dynamic shared memory size before calling the
 - line 322: // TODO: maybe we can specify dynamic shared memory size before calling the

references/optical_flow/train.py (2 lines):
 - line 273: # TODO: Also save the optimizer and scheduler
 - line 324: # TODO: resume, pretrained, and weights should be in an exclusive arg group

torchvision/models/detection/retinanet.py (2 lines):
 - line 511: # TODO: Move this to a function
 - line 530: # TODO: Do we want a list or a dict?

torchvision/models/detection/anchor_utils.py (2 lines):
 - line 43: # TODO change this
 - line 56: # TODO: https://github.com/pytorch/pytorch/issues/26792

torchvision/csrc/models/modelsimpl.h (1 line):
 - line 9: // TODO here torch::relu_ and torch::adaptive_avg_pool2d wrapped in

references/optical_flow/utils.py (1 line):
 - line 203: # TODO: Ideally, this should be part of the eval transforms preset, instead

torchvision/models/detection/fcos.py (1 line):
 - line 79: # TODO: vectorize this instead of using a for loop

torchvision/ops/_utils.py (1 line):
 - line 11: # TODO add back the assert

torchvision/models/feature_extraction.py (1 line):
 - line 478: # FIXME We don't know if we should expect this to happen

torchvision/models/quantization/mobilenetv2.py (1 line):
 - line 87: # TODO use pretrained as a string to specify the backend

torchvision/utils.py (1 line):
 - line 305: # TODO: There might be a way to vectorize this

torchvision/csrc/ops/autograd/deform_conv2d_kernel.cpp (1 line):
 - line 124: // TODO: There should be an easier way to do this

torchvision/csrc/io/image/cpu/read_write_file.cpp (1 line):
 - line 58: // TODO: Once torch::from_file handles UTF-8 paths correctly, we should move

torchvision/models/detection/transform.py (1 line):
 - line 172: # FIXME assume for now that testing uses the largest scale

torchvision/datasets/kinetics.py (1 line):
 - line 109: # TODO: support test

torchvision/csrc/ops/autograd/roi_align_kernel.cpp (1 line):
 - line 71: // TODO: There should be an easier way to do this

torchvision/prototype/transforms/_geometry.py (1 line):
 - line 100: # FIXME: pad input in case it is smaller than crop_box

torchvision/models/quantization/utils.py (1 line):
 - line 36: # TODO https://github.com/pytorch/vision/pull/4232#pullrequestreview-730461659

torchvision/datasets/folder.py (1 line):
 - line 252: # TODO: specify the return type

torchvision/datasets/celeba.py (1 line):
 - line 168: # TODO: refactor with utils.verify_str_arg

torchvision/models/quantization/resnet.py (1 line):
 - line 124: # TODO use pretrained as a string to specify the backend

torchvision/prototype/models/convnext.py (1 line):
 - line 27: # TODO: Benchmark this against the approach described at https://github.com/pytorch/vision/pull/5197#discussion_r786251298

torchvision/csrc/ops/autograd/roi_pool_kernel.cpp (1 line):
 - line 65: // TODO: There should be an easier way to do this

torchvision/models/detection/generalized_rcnn.py (1 line):
 - line 81: # TODO: Move this to a function

torchvision/csrc/io/image/cpu/decode_png.cpp (1 line):
 - line 108: // TODO: consider supporting PNG_INFO_tRNS

torchvision/csrc/ops/autograd/ps_roi_align_kernel.cpp (1 line):
 - line 75: // TODO: There should be an easier way to do this

torchvision/ops/_register_onnx_ops.py (1 line):
 - line 29: # TODO: Remove this warning after ONNX opset 16 is supported.

torchvision/transforms/autoaugment.py (1 line):
 - line 93: # FIXME: Eliminate copy-pasted code for fill standardization and _augmentation_space() by moving stuff on a base class

references/detection/engine.py (1 line):
 - line 78: # FIXME remove this and make paste_masks_in_image run on the GPU

torchvision/models/quantization/mobilenetv3.py (1 line):
 - line 74: # TODO https://github.com/pytorch/vision/pull/4232#pullrequestreview-730461659

torchvision/transforms/functional.py (1 line):
 - line 36: # TODO: Once torchscript supports Enums with staticmethod

torchvision/datasets/video_utils.py (1 line):
 - line 381: # TODO: Revert it once the bug is fixed.

torchvision/csrc/ops/autograd/ps_roi_pool_kernel.cpp (1 line):
 - line 65: // TODO: There should be an easier way to do this

torchvision/prototype/datasets/_builtin/coco.py (1 line):
 - line 101: # TODO: create a segmentation feature

references/video_classification/train.py (1 line):
 - line 66: # FIXME need to take into account that the datasets