lib/Optimizer/GraphOptimizer/GraphOptimizer.cpp (12 lines):
- line 2206: // TODO: Fix bad assumption? See issue 3499, for now workaround it.
- line 2493: // TODO: This limitation can be lifted, but that is for simplicity.
- line 2497: // TODO: We can also support quantized constants if there is a need in it.
- line 2558: // TODO: We can also support Splat here if needed.
- line 2616: // TODO: Support quantized constants if needed.
- line 2687: // TODO: Support quantized constants if needed.
- line 3608: // TODO: Erase N during CSE? If we don't do it here,
- line 3709: // TODO: Change Visitors to return whether they modified the Function they
- line 4261: /// TODO: Currently, for all supported types wider mantissa also means wider
- line 6150: // TODO: Uncomment this once #5729 gets fixed.
- line 6229: // TODO: Uncomment this once #5729 gets fixed.
- line 6884: // TODO: need to remove the Clip ops once the FC inputs can be put on SRAM
torch_glow/src/PyTorchModelLoader.cpp (12 lines):
- line 1970: // TODO reverse map of jit-glow node should resolve this problem.
- line 2116: // TODO we visited many redundent nodes during this process,
- line 2930: // TODO: Remove the quantization step to potentially improve performance.
- line 3577: // TODO: implement 3rd argument rounding_mode option
- line 3681: // TODO: extend this to allow non-constant scalars.
- line 3704: // TODO: extend this to allow non-constant scalars.
- line 3727: // TODO: extend this to allow non-constant scalars.
- line 4958: // TODO: allow correct type mapping from double to float
- line 7236: /// TODO: check Dtype is float (optional value).
- line 8418: // TODO: Use a proper type based on the JIT's output type.
- line 9014: // TODO: fix UINT_MAX
- line 9853: // TODO: Change Glow Type to use sdim_t to be consistent
lib/Backends/Habana/HabanaDeviceManager.cpp (4 lines):
- line 118: // TODO: Use synGetMemInfo once implemented.
- line 175: // TODO: Unload functions that were loaded successfully.
- line 191: // TODO: Unload functions that were loaded successfully.
- line 319: // FIXME: This can starve inactive topos.
lib/Importer/TFLiteModelLoader.cpp (4 lines):
- line 2160: // TODO: Move this into the GraphOptimizer once Glow supports reduce
- line 2180: // TODO: When Glow supports reduce operators with multiple axes remove this!
- line 2458: // TODO: Add support for strides different than 1 (positive or negative) once
- line 2987: // TODO: Verify model integrity using flatbuffers::Verifier class.
lib/Importer/Caffe2ModelLoader.cpp (4 lines):
- line 57: // FIXME: this is a temporary solution for the case when NonZero returns
- line 175: // TODO: should we check is_multiparam?
- line 1796: // TODO: add checks for number of inputs and argument values
- line 2255: // TODO: check maybe we can support more dimensions to be reduced
lib/LLVMIRCodeGen/GlowJIT.cpp (3 lines):
- line 156: // FIXME: looking for symbols external to libjit in the process is
- line 235: // FIXME: looking for symbols external to libjit in the process is
- line 599: // FIXME: looking for symbols external to libjit in the process is
lib/Importer/ONNXModelLoader.cpp (3 lines):
- line 1259: // TODO: intend to find a way to reuse the following function later
- line 1310: // TODO: ONNX spec disallows using "pads" and "auto_pad" together. However,
- line 4534: // TODO: add axis/dim support
lib/Optimizer/IROptimizer/IROptimizer.cpp (3 lines):
- line 610: // FIXME: Remove InOut!
- line 953: // TODO: If it would introduce a last write into an observable, do not
- line 1055: // TODO: May be disallow usage of dest interval for src?
lib/Backends/OpenCL/OpenCL.cpp (3 lines):
- line 964: // TODO: Handle other dimensions.
- line 985: // TODO: Handle other dimensions.
- line 1444: // TODO: support any dimensional transposes.
lib/Graph/Nodes.cpp (3 lines):
- line 275: // TODO: any kernel size check in respect to input ? In contrast to Conv,
- line 1265: // TODO: We could come up with a mechanism to lazy compute that
- line 2287: // TODO: Do we need support for different quant params of copy?
torch_glow/src/GlowFuser.cpp (2 lines):
- line 115: // TODO: delete this once this is fixed by
- line 457: // TODO: this should be done only on Glow subgraphs to avoid modifying parts
tools/loader/Loader.cpp (2 lines):
- line 540: } // FIXME: else make sure networkName does not have any sequence of
- line 593: // TODO - registered once to avoid error:
lib/Backends/Habana/Habana.cpp (2 lines):
- line 62: // TODO: This backend does not have a 64-bit type, but Glow uses
- line 1071: // TODO: add a TRACE_EVENT entry
torch_glow/src/CachingGraphRunner.cpp (2 lines):
- line 324: /// TODO: Multi-dimension slicing will be supported later.
- line 1149: // TODO Add support for other output types, e.g., tensor[]
lib/Runtime/Executor/NetworkExecutionState.cpp (2 lines):
- line 186: // TODO: for intermediate placeholders in DRT/P2P cases, we don't need
- line 198: // TODO: Only add to externalPlaceholders_ of PH is external placeholder
torch_glow/src/PyTorchCommon.cpp (2 lines):
- line 56: // TODO: Handle this case with FloorDiv
- line 770: // TODO Add support for other input types, e.g., tensor[]
lib/Backends/Interpreter/InterpreterNodes.cpp (2 lines):
- line 545: // Initialize bias (TODO take out to a separate function when quant is in).
- line 4612: // TODO Currently we only support symmetric quantization.
torch_glow/src/ShapeInferenceEngine.cpp (2 lines):
- line 627: // TODO Add support for other input types, e.g., tensor list
- line 1162: // TODO: @hwwang T80910607 Only support None dtype (4th argument)
lib/Partitioner/Partitioner.cpp (2 lines):
- line 299: // TODO: the logic here need to be improved.
- line 392: // TODO : will improve the algorithm for different memory size.
lib/Backends/NNPI/NNPIDeviceManager.cpp (2 lines):
- line 183: usedMemoryBytes_ += functionCost_; // TODO:: static moduleSize.
- line 210: usedMemoryBytes_ -= functionCost_; // TODO: static moduleSize.
lib/Graph/Graph.cpp (2 lines):
- line 444: // TODO: consider refactoring boilerplate code to new trait: DottyPrintable
- line 1410: // TODO: remove the shuffle and replace it with layout.
lib/Onnxifi/Base.cpp (2 lines):
- line 99: // TODO: Use a more specific ONNXIFI error code here to denote what about
- line 144: // TODO: Use a more specific ONNXIFI error code here to denote what
lib/Partitioner/PartitionerUtils.cpp (2 lines):
- line 202: // TODO: think about whether this is better off computed inside a Node.
- line 310: // TODO: think about whether this is better off computed inside a Node.
lib/Backends/OpenCL/OpenCLTensorLayout.cpp (2 lines):
- line 106: // TODO: Remove ->getLayout() enum and take a string like transpose. Refactor
- line 122: // TODO: Remove ->getLayout() enum and take a string like transpose. Refactor
include/glow/Graph/Nodes.h (2 lines):
- line 332: /// FIXME: This is a workaround, because defining the hash_code
- line 337: /// FIXME: This is a workaround, because defining the hash_code
lib/Backends/NNPI/FXIRImporter.cpp (2 lines):
- line 50: // TODO: broadcast inputs if input is not a node.
- line 256: // TODO: replace users of ReLU input after ReLU with ReLU output.
tools/loader/ExecutorCore.cpp (1 line):
- line 140: // TODO: Loader should provide function to register callbacks.
lib/Onnxifi/GlowOnnxifiManager.h (1 line):
- line 112: /// TODO: can use one mutex per set if performance becomes an issue.
lib/Backends/CPU/CPULLVMIRGen.cpp (1 line):
- line 38: // TODO: Add here any backend specific logic.
lib/Backends/NNPI/NNPIResource.cpp (1 line):
- line 413: // TODO: add AVX implementation.
lib/Optimizer/Lower/Lower.cpp (1 line):
- line 1073: // TODO: consider adding this functionality to the main operator set.
lib/Backends/CPU/CPUDeviceManager.cpp (1 line):
- line 55: // TODO: these may need to be tweaked depending on specific CPU.
lib/LLVMIRCodeGen/libjit/libjit_matmul.cpp (1 line):
- line 51: /// literature). TODO: Generalize these parameters for other cache sizes.
tools/ClassGen/InstrGen.cpp (1 line):
- line 1193: // TODO: Rename "BatchDims" member to "Axis". This was attempted in #5565 but
torch_glow/src/CustomPyTorchOpLoader.cpp (1 line):
- line 32: // TODO: use a rw mutex here for efficiency.
lib/LLVMIRCodeGen/LLVMBackend.cpp (1 line):
- line 305: // TODO - not quantized support yet in libjit.
lib/Onnxifi/GlowOnnxifiManager.cpp (1 line):
- line 117: // TODO: fix this so that a HostManager is deleted when all backends
tools/ClassGen/NodeGen.cpp (1 line):
- line 1241: // TODO: Rename "BatchDims" member to "Axis". This was attempted in #5565 but
torch_glow/src/binding.cpp (1 line):
- line 93: /// TODO: Handle this case with FloorDiv
include/glow/Runtime/RuntimeTypes.h (1 line):
- line 69: /// TODO: distinguish between data types with different peak flops.
tools/ClassGen/MemberType.h (1 line):
- line 104: /// TODO: Remove after modifying InstrGen to use MemberTypeInfo as well?
lib/Runtime/HostManager/HostManager.cpp (1 line):
- line 105: // TODO: move all initialization out of constructor.
lib/Backends/NNPI/CustomKernels/IAInjectors/IAInjectors.cpp (1 line):
- line 228: // TODO: pass dim as a second tensor
tools/Debugger/NetworkComparator.cpp (1 line):
- line 36: // TODO: Need to add different flavours of dumping.
lib/Backends/NNPI/InferencePool.cpp (1 line):
- line 319: // TODO: verify with garret we don't need to lock here - i.e. host manager
lib/Quantization/Quantization.cpp (1 line):
- line 348: // FIXME: Right now, the TensorQuantizationParams only tracks one
lib/Backends/OpenCL/OpenCLDeviceManager.cpp (1 line):
- line 661: // TODO: synchronize clocks better, this can be off the thread was yielded
lib/Backends/Interpreter/Interpreter.cpp (1 line):
- line 378: // TODO - support other types.
lib/LLVMIRCodeGen/LLVMIRGen.cpp (1 line):
- line 1046: // TODO investigate if removing "noalias" can be used to create bigger
lib/Optimizer/IROptimizerPipeline/IRFunctionPassPipeline.cpp (1 line):
- line 87: // TODO: Only if enabled.
lib/LLVMIRCodeGen/DebugInfo.cpp (1 line):
- line 141: // TODO: Try to produce semantically meaningful parameter names, e.g. by
cmake/modules/SanitizerSupport.cmake (1 line):
- line 19: # TODO: ensure that the compiler supports these options before adding
lib/Backends/Interpreter/InterpreterDeviceManager.cpp (1 line):
- line 59: // TODO: these may need to be tweaked depending on interpreter overheads.
lib/Optimizer/GraphOptimizer/ConstantFolding.cpp (1 line):
- line 120: // TODO: Add only constants used by F to the compiled function. This should
torch_glow/src/PyTorchCommon.h (1 line):
- line 97: /// TODO: Handle this case with FloorDiv
lib/Onnxifi/onnxifiGlow.cpp (1 line):
- line 202: // TODO: support more info type values. Here is the minimal required
lib/Backend/BackendUtils.cpp (1 line):
- line 432: // TODO: this function is largely duplicated with allowsPartialInput()
torch_glow/src/TorchGlowBackend.cpp (1 line):
- line 574: // TODO:
lib/LLVMIRCodeGen/BundleSaver.cpp (1 line):
- line 595: // TODO: Only run the appropriate passes as needed.
include/glow/Graph/Graph.h (1 line):
- line 1588: // TODO: add description
lib/Backends/NNPI/FXNNPICompiledFunction.cpp (1 line):
- line 100: // TODO: look through the nodes in FXIR to determine if a custom DSP op is
include/glow/Base/TensorSerialization.h (1 line):
- line 79: /// TODO: Default tensor loader could be extended to support various data types
lib/Backends/OpenCL/Transforms.cpp (1 line):
- line 45: // TODO: OpenCL fast convolution kernel itself has some issue with group >
lib/CodeGen/MemoryAllocator.cpp (1 line):
- line 204: // TODO: Check that ptr is an allocated address.