libraries/nodes/src/BinaryConvolutionalLayerNode.cpp (10 lines):
- line 116: // TODO: block-vectorize this:
- line 331: // TODO: make new function that emits the contents of the task function (which is also the stuff emitted in the serial case), and call this from
- line 366: // TODO: interleave load/compress more tightly to eliminate need for a scratch variable to hold a whole row
- line 388: // TODO: Fix this to deal with convParams.stride != 1
- line 429: // TODO: interleave load/compress more tightly to eliminate need for a scratch variable to hold the whole row
- line 569: // const auto& outputLayout = this->GetOutputMemoryLayout().ReorderedCopy({2,0,1}); // TODO: reorder from r,c,d -> d,r,c
- line 582: // TODO: Put this back once we're using the transposed output layout
- line 670: // const auto& outputLayout = this->GetOutputMemoryLayout().ReorderedCopy({2,0,1}); // TODO: reorder from r,c,d -> d,r,c
- line 683: // TODO: Put this back once we're using the transposed output layout
- line 705: // TODO: get types in a way that doesn't require emitting these variables

libraries/emitters/src/IRModuleEmitter.cpp (10 lines):
- line 528: // TODO: add to _functions list
- line 550: // TODO: add this function to the _functions list??
- line 556: // TODO: put the above IREmitter call in the IRFunctionEmitter constructor
- line 563: // TODO: add this function to the _functions list??
- line 569: // TODO: put the above IREmitter call in the IRFunctionEmitter constructor
- line 576: // TODO: add this function to the _functions list??
- line 582: // TODO: put the above IREmitter call in the IRFunctionEmitter constructor
- line 589: // TODO: add this function to the _functions list??
- line 600: // TODO: add this function to the _functions list??
- line 611: // TODO: add this function to the _functions list??

tools/utilities/finetune/src/ModelUtils.cpp (9 lines):
- line 307: // TODO: log error
- line 357: // TODO: log error
- line 401: // TODO: reverse order of args?
- line 433: // TODO: combine this and IsCompleteAncestor so it's not O(N^2) (via set-intersect?)
- line 668: // TODO: rename 'output' param here --- it's the input to the convolutional layer
- line 669: // TODO: add implicit padding
- line 684: // TODO: pass in original layer's output shape, for verification? (or, compare output of this function with original layer output shape)
- line 738: // TODO: rename 'output' parameter -- it's where we graft the new node
- line 749: // TODO: rename 'output' parameter -- it's where we graft the new node
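The line-433 entry above suggests replacing the pairwise ancestor walk with a set intersection. A minimal sketch of that idea, assuming each node's ancestor set can be collected once up front (`NodeId` and `CommonAncestors` are illustrative names, not ELL's actual API):

```cpp
#include <algorithm>
#include <iterator>
#include <set>

using NodeId = int; // placeholder for ELL's actual node id type

// Collect each node's ancestor set once, then answer "shared ancestor"
// queries by intersecting the sorted sets instead of re-walking the
// graph for every pair of nodes.
std::set<NodeId> CommonAncestors(const std::set<NodeId>& ancestorsA,
                                 const std::set<NodeId>& ancestorsB)
{
    std::set<NodeId> result;
    std::set_intersection(ancestorsA.begin(), ancestorsA.end(),
                          ancestorsB.begin(), ancestorsB.end(),
                          std::inserter(result, result.begin()));
    return result;
}
```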
libraries/nodes/src/WinogradConvolutionNode.cpp (9 lines):
- line 654: auto channelIndex = inputRange.channels.begin; // #### TODO: verify we don't really want inputRange.channels.index here
- line 684: // TODO: fix TransformInputBlock to take ranges instead of tileSize/filterSize/blockSize
- line 777: // TODO: I think now the outputTile is always contiguous, so we can just iterate over it continuously
- line 910: // TODO: if we're not separable, we want to reduce first and accumulate a single output channel (== filter index)
- line 1372: // TODO: eventually, pass this in to ComputeTransformedOutput, instead of just assuming a layout there
- line 1437: // TODO: allocate a set of temporaries per thread and parallelize the big outer loop
- line 1478: // TODO: See about removing the "% numChannels" if we can know that it's unnecessary at compile-time (e.g., if numFilterChannels == N*numChannels for N > 0)
- line 1479: // TODO: assert((channelStart + filterRange.size * numFilterChannels) < numChannels) --- make sure it doesn't wrap around while processing a block
- line 1480: // TODO: add a comment describing the logic behind setting channelIndex (it allows us to unify depthwise-separable and non-separable logic)

libraries/value/src/loopnests/LoopNestVisitor.cpp (8 lines):
- line 192: // TODO: set initial value of index variable (at least in loop-nest-printing case)
- line 363: // TODO: deal with eventually not having an emit-time-constant range here
- line 404: // TODO: need to know if we're going to invoke any kernels after the inner loops, and remove them from the valid kernel groups
- line 430: // TODO: restore state of variables
- line 664: // TODO: restore state of variables
- line 732: // TODO: put this in a function that preprocesses the kernel predicates when adding the kernels to the schedule
- line 930: // TODO: We want to only fire on a loop involving a leaf child of the index
- line 997: // TODO: need to allow using non-"dimension" indices as well (for non-innermost kernels)

tools/utilities/finetune/src/FineTuneModel.cpp (7 lines):
- line 417: if (IsFullyConnectedLayerNode(&node)) // TODO: replace with submodel-matcher
- line 421: else if (IsConvolutionalLayerNode(&node)) // TODO: replace with submodel-matcher
- line 447: // TODO: get input of submodel to retrain
- line 465: // TODO: rename this function to imply we're running an optimization
- line 502: // TODO: record info about both phases (sparsify and reoptimize)
- line 622: // TODO: Rename this function to make it clear we're adding nodes to the model (if normalizeInputs is true)
- line 633: // TODO: fix this to work with depthwise-separable convolutions

libraries/value/src/CachingStrategies.cpp (5 lines):
- line 45: // TODO : Generalize to machine characteristics and move out of CachingStrategies
- line 161: // TODO move to Array slice code and generalize
- line 178: // TODO : replace memory offsets with absolute offset support
- line 1029: // TODO : Support buffer alignment in CppEmitterContext
- line 1292: // TODO : determine if a vectorized approach is worthwhile here

libraries/model/src/IRModelProfiler.cpp (4 lines):
- line 345: // TODO: return nullptr if out of bounds (this is device-side code, and we may not be able to throw exceptions)
- line 362: // TODO: return nullptr if out of bounds (this is device-side code, and we may not be able to throw exceptions)
- line 379: // TODO: return nullptr if out of bounds (this is device-side code, and we may not be able to throw exceptions)
- line 396: // TODO: return nullptr if out of bounds (this is device-side code, and we may not be able to throw exceptions)
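The four IRModelProfiler.cpp entries above describe the same fix. A sketch of the intended shape, with hypothetical names (`NodeInfo`, `GetNodeInfo`) standing in for the profiler's actual types:

```cpp
#include <cstddef>

struct NodeInfo // hypothetical shape of the profiler's per-node record
{
    int callCount;
    double totalTime;
};

// Return nullptr for an out-of-range index instead of throwing, since
// this is device-side code where C++ exceptions may be unavailable.
NodeInfo* GetNodeInfo(NodeInfo* table, size_t tableSize, size_t index)
{
    if (index >= tableSize)
    {
        return nullptr; // caller must check before dereferencing
    }
    return &table[index];
}
```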
tools/utilities/profile/make_profiler.py (4 lines):
- line 95: config[option[i][2:]] = True # TODO - assuming a flag without an argument sets it to True
- line 190: # TODO - change this order requirement to make it more flexible
- line 196: # TODO - drivetest.py does not support "orangepi0"
- line 197: with DriveTest(model=model, # TODO - no labels, no expected

libraries/nodes/src/FFTNode.cpp (4 lines):
- line 446: // TODO: assert(bitcount(length) == 1) (i.e., length is a power of 2)
- line 471: auto twiddleFactorsUnwrappedVar = module.ConstantArray(std::string("twiddles_") + std::to_string(halfN), twiddleFactorsUnwrapped); // TODO: encode type name in variable name
- line 517: // TODO: assert(bitcount(length) == 1) (i.e., length is a power of 2)
- line 534: // TODO: assert(bitcount(length) == 1) (i.e., length is a power of 2)
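The three assert TODOs in FFTNode.cpp all want the same power-of-two check; the standard bit trick makes the intent concrete (`AssertLengthIsPowerOfTwo` is an illustrative name):

```cpp
#include <cassert>
#include <cstddef>

// "bitcount(length) == 1" check: a power of two has exactly one set bit,
// so clearing the lowest set bit (length & (length - 1)) leaves zero.
inline void AssertLengthIsPowerOfTwo(size_t length)
{
    assert(length != 0 && (length & (length - 1)) == 0);
}
```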
tools/utilities/finetune/src/DataUtils.cpp (4 lines):
- line 34: // TODO: remove this eventually
- line 473: // TODO: add order (row-maj / channel-maj) parameter
- line 474: // TODO: add options for normalizing and/or mean-subtracting
- line 618: // TODO: deal with padding

libraries/value/src/loopnests/LoopNest.cpp (4 lines):
- line 446: // TODO: convert first/last into inequality check (<=, >=), so they can work with boundaries
- line 614: // TODO: Move this out to the API surface
- line 627: // TODO: Move this out to the API surface
- line 695: // TODO: later we may normalize the loops, in which case indexScale here will be the loop increment

libraries/value/src/loopnests/CodeGenerator.cpp (4 lines):
- line 87: // TODO: figure out what to do with the "where" parameter
- line 88: // TODO: get rid of const_cast
- line 149: // TODO: figure out what to do with the "where" parameter
- line 150: // TODO: get rid of const_cast

libraries/emitters/src/IRThreadPool.cpp (3 lines):
- line 208: // TODO: assert we're idle (until we can handle multiple task arrays to be active)
- line 300: llvm::StructType* IRThreadPoolTaskQueue::GetTaskQueueDataType(IRModuleEmitter& module) const // TODO: come up with a naming convention for "class" structs like this
- line 472: llvm::StructType* IRThreadPoolTaskArray::GetTaskArrayDataType(IRModuleEmitter& module) // TODO: come up with a naming convention for "class" structs like this

tools/utilities/optimizer/json_profile_optimizer.py (3 lines):
- line 49: # TODO - here we ignore the global compiler options.
- line 110: # TODO - only optimize based on convolution method at this point
- line 111: # TODO - only choose the convolution method based on perf at this point. Need another strategy.

tools/utilities/finetune/src/OptimizationUtils.cpp (3 lines):
- line 92: // TODO: rename this function to something without "train" or "predictor" in its name
- line 131: // TODO: in both the "independent channel" and "spatial filter" cases, we really are optimizing to find a scalar result.
- line 158: // TODO: in the "trainFiltersIndependently" case, we should

tools/importers/CNTK/custom_functions.py (2 lines):
- line 229: bit_map = 1 # TODO: need a way to get this from the model
- line 313: bit_map = 1 # TODO: need a way to get this from the model

libraries/value/src/LLVMContext.cpp (2 lines):
- line 173: // TODO: Make this the basis of an iterator for MemoryLayout
- line 481: // TODO: fix this so that GetNonPointerType call isn't needed

libraries/nodes/include/ReorderDataNode.h (2 lines):
- line 390: // TODO: for each dimension, loop over minimum of input and output interval. Then we don't have to check if the value is out-of-bounds
- line 445: // TODO: for each dimension, loop over minimum of input and output interval. Then we don't have to check if the value is out-of-bounds

libraries/nodes/src/FullyConnectedLayerNode.cpp (2 lines):
- line 59: // TODO: add a reorder node here that makes the input be a contiguous vector, if necessary
- line 69: // TODO: add a reorder node here that adds padding to the output, if necessary

libraries/nodes/src/SimpleConvolutionNode.cpp (2 lines):
- line 344: archiver["inputLayout"] << _inputMemoryLayout; // TODO: get rid of this
- line 346: archiver["filterSize"] << _filterSize; // TODO: get this from weights layout

libraries/nodes/src/IRNode.cpp (2 lines):
- line 64: // TODO: inputTypes, outputTypes, extraArgs
- line 74: // TODO: inputTypes, outputTypes, extraArgs

libraries/dsp/py/symbolic.py (2 lines):
- line 344: # TODO: combine MatrixLiteral with MatrixExpr
- line 437: ## TODO:

libraries/nodes/src/DiagonalConvolutionNode.cpp (2 lines):
- line 249: // TODO: check this carefully to make sure it's valid for stackSize != all and stackSize != 1
- line 293: // TODO: this is really paddedHeight * filterSize * batchSize * stackSize - inputPadding * filterSize * batchSize

libraries/nodes/src/MatrixMatrixMultiplyNode.cpp (2 lines):
- line 191: // TODO: reset output layout (incl. transpose info)
- line 284: // TODO: check version number and read this format if in back-compat mode

tools/utilities/finetune/include/OptimizationUtils.h (2 lines):
- line 123: // TODO: rename these to something without "train" and "predictor" in the name
- line 127: // TODO: find a better (more general) way to indicate what the solution is, rather than with an "isSpatialConvolution" flag

libraries/value/src/CppEmitterContext.cpp (2 lines):
- line 253: // TODO: add alignment directive
- line 1514: // TODO: fix

tools/utilities/debugCompiler/src/ModelComparison.cpp (2 lines):
- line 434: // TODO: write out as a table?
- line 466: std::vector<std::string> categoryNames; // TODO: read this in from a file

libraries/dsp/py/winograd.py (2 lines):
- line 127: # TODO: get rid of this, but keep the filter-transform part
- line 166: ## TODO: emit just a single loop for 1D convolutions

libraries/value/src/loopnests/KernelPredicate.cpp (2 lines):
- line 104: // TODO: move `GetLoopRage` somewhere else
- line 166: // TODO: add index, testVal to AND list, later return a conjunction of equality predicates

libraries/value/src/ComputeContext.cpp (2 lines):
- line 63: // TODO: Make this the basis of an iterator for MemoryLayout
- line 1529: throw 0; // TODO: throw a real exception (of type value::Exception::DebugTrapException, perhaps)

libraries/model/src/IRMapCompiler.cpp (2 lines):
- line 660: if (currentFunction.GetCurrentRegion() == nullptr) // TODO: put this check in GetCurrentFunction()
- line 884: bool needsDereference = valType->isPointerTy(); // TODO: Maybe this should be `isPtrOrPtrVectorTy()` or even `isPtrOrPtrVectorTy() || isArrayTy()`

libraries/emitters/src/IRProfiler.cpp (1 line):
- line 355: // TODO: bounds-checking

tools/importers/CNTK/lib/cntk_layers.py (1 line):
- line 906: # TODO: This logic is very fragile, we may want to have a model

libraries/predictors/neural/include/BinaryConvolutionalLayer.h (1 line):
- line 180: // TODO: let's make a popcount function that does the right thing
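For the popcount entry above, a portable fallback is short; this sketch uses Kernighan's bit-clearing loop (compilers expose faster intrinsics such as GCC/Clang's `__builtin_popcountll`, and C++20 has `std::popcount`):

```cpp
#include <cstdint>

// Portable population count: repeatedly clear the lowest set bit and
// count the iterations. This is the generic fallback; a real
// implementation would dispatch to a hardware intrinsic when available.
int PopCount(uint64_t bits)
{
    int count = 0;
    while (bits != 0)
    {
        bits &= bits - 1; // clear lowest set bit (Kernighan's trick)
        ++count;
    }
    return count;
}
```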
libraries/utilities/src/PPMImageParser.cpp (1 line):
- line 32: // TODO: throw if we read bad values or hit EOF too soon

libraries/nodes/include/SimpleConvolutionNode.h (1 line):
- line 207: model::PortMemoryLayout _inputMemoryLayout; // TODO: get rid of this by using a ReinterpretLayoutNode if necessary

libraries/emitters/src/IREmitter.cpp (1 line):
- line 975: // TODO: rename this to avoid clashes with other PointerOffset()

libraries/model/src/IRCompiledMap.cpp (1 line):
- line 366: auto fn = reinterpret_cast(jitter.GetFunctionAddress(_moduleName + "_PrintNodeProfilingInfo")); // TODO: hide this reinterpret_cast in a templated method of IRExecutionEngine

tools/utilities/pythonlibs/remoterunner.py (1 line):
- line 164: # TODO: make this recursive to deal with multi-level directories

libraries/emitters/src/IRParallelLoopEmitter.cpp (1 line):
- line 45: // TODO: explicitly check for empty loop?

libraries/model/src/ModelTransformer.cpp (1 line):
- line 157: // TODO: find a way to tell the difference between a port that doesn't have a mapping because

libraries/value/src/loopnests/SplitIndexRange.cpp (1 line):
- line 102: // TODO: assert index1 and index2 are both in this dimension

tools/utilities/profile/include/ProfileArguments.h (1 line):
- line 33: // TODO: something about regions

tools/utilities/pythonlibs/audio/training/train_classifier.py (1 line):
- line 47: raise RuntimeError("FIXME: Traced RNNs don't support backward")

libraries/value/include/loopnests/CodePositionConstraints.h (1 line):
- line 126: // TODO: each boundary index needs its own "placement" value (e.g., you could have a kernel that runs when j==0 and k==N-1)

libraries/emitters/include/ModuleEmitter.h (1 line):
- line 108: // TODO: What does "runtime" variable mean? Stack/heap? This is only called in 1 place, from MapCompiler::AllocatePortFunctionArgument
CMake/CommonInterfaces.cmake (1 line):
- line 156: # TODO: only set this property (and omit ${LANGUAGE_LIBRARIES} from the swig_link_libraries call) if we're

libraries/utilities/include/Archiver.h (1 line):
- line 857: // TODO: assert back of _objectInfo == objInfo

interfaces/common/src/ModelBuilderInterface.cpp (1 line):
- line 974: // TODO: fix MatrixVectorMultiplyNode so it exposes the transpose options supported by BLAS GEMV functions.

libraries/nodes/include/ReceptiveFieldMatrixNode.h (1 line):
- line 360: // TODO: use the entries of dataOrder to compute the indices

libraries/model/src/CompilableNode.cpp (1 line):
- line 67: // TODO: combine precompiled-IR case with use-own-function case

libraries/nodes/include/BroadcastOperationNodes.h (1 line):
- line 447: // TODO: if FunctionType was a function that took a vector of inputs, then we could dispense with this `if constexpr` block

libraries/value/src/LoopNests.cpp (1 line):
- line 108: // TODO: this might not be needed in the high level api

libraries/value/include/loopnests/Kernel.h (1 line):
- line 53: // TODO : make this a template specialization of Define(), currently lambdas and std::functions aren't

libraries/model/src/MapCompiler.cpp (1 line):
- line 195: // TODO: can we use an array type here?

libraries/predictors/src/ProtoNNPredictor.cpp (1 line):
- line 72: math::MultiplyScaleAddUpdate(1.0, GetLabelEmbeddings(), similarityToPrototypes, 0.0, labels); // TODO due to the zero, there is a more appropriate operation

libraries/nodes/include/ConcatenationNode.h (1 line):
- line 124: // TODO: re-enable this branch when scalar port bug is fixed

libraries/nodes/include/SourceNode.h (1 line):
- line 278: // TODO: Interpolate if there is a sample, and currentTime > sampleTime

libraries/passes/src/SetConvolutionMethodTransformation.cpp (1 line):
- line 113: // TODO: just copy the node and modify its layer

libraries/emitters/include/IRModuleEmitter.h (1 line):
- line 896: // TODO: have a more specific check to see if the variable is mapped to a port, rather than if it's a function input/output

tools/utilities/finetune/src/DataStatistics.cpp (1 line):
- line 172: Sparsity GetWeightsSparsity(const WeightsType& weights) // TODO: add layout

libraries/emitters/src/IRFunctionEmitter.cpp (1 line):
- line 174: // TODO: set a flag indicating that this function is done

tools/utilities/finetune/src/ModelOutputDataCache.cpp (1 line):
- line 100: // TODO: deal with merge points -- need to skip parallel area and return something before path split.

libraries/nodes/src/NeuralNetworkPredictorNode.cpp (1 line):
- line 83: // TODO:

libraries/emitters/src/IRPosixRuntime.cpp (1 line):
- line 30: // TODO: should this be 64 bits for 64-bit systems?

tools/utilities/pythonlibs/dependency_installer.py (1 line):
- line 80: # TODO: check package versions (currently we just check names)

tools/utilities/finetune/include/TransformData.h (1 line):
- line 23: // TODO: make these be TransformDataWithSubmodel

libraries/value/src/loopnests/Kernel.cpp (1 line):
- line 63: // TODO : make this a template specialization of Define(), currently lambdas and std::functions aren't

libraries/utilities/include/Variant.h (1 line):
- line 55: // TODO: also add std::array of archivable variant types

libraries/nodes/include/BroadcastFunctionNode.h (1 line):
- line 989: // TODO: fix up logic for deciding how many tasks to use.
libraries/emittable_functions/src/IIRFilter.cpp (1 line):
- line 22: // TODO: turn this into a real For() loop

libraries/value/src/loopnests/Index.cpp (1 line):
- line 33: // TODO: Change this so that IDs are the responsibility of the EmitterContext

interfaces/javascript/swigToTypescript/templates/generate-d-ts.xslt (1 line):
- line 29:

libraries/model/src/Map.cpp (1 line):
- line 93: // TODO (kerha): _computeContext isn't copied right now. Not sure if it should be. [2019-08-23]