src/operator/tensor/dot-inl.cuh (6 lines):
  - line 183: * TODO: write a faster kernel optimized for GPU
  - line 223: // TODO: remove sequential search, this is a bottleneck
  - line 410: // TODO: remove kernel dependency on warpSize=32
  - line 735: * TODO: Optimize for GPU; this is a baseline implementation providing
  - line 778: // TODO: remove kernel dependency on warpSize=32
  - line 913: // TODO: Consider implementing a vector kernel for SpMV (similar to DotCsrDnsDns)
julia/src/autograd.jl (3 lines):
  - line 328: # TODO: support storage type (stype in Python)
  - line 329: # TODO: make sure it works with gpu array
  - line 403: # TODO: User-defined differentiable function
scala-package/macros/src/main/scala/org/apache/mxnet/SymbolMacro.scala (3 lines):
  - line 95: // TODO: Put Symbol.api.foo --> Stable APIs
  - line 145: // TODO: Seq() here allows user to place Symbols rather than normal arguments to run, need to fix if old API deprecated
  - line 200: // TODO: Add '_linalg_', '_sparse_', '_image_' support
scala-package/macros/src/main/scala/org/apache/mxnet/APIDocGenerator.scala (3 lines):
  - line 60: // TODO: Add Filter to the same location in case of refactor
  - line 166: // TODO: Add '_linalg_', '_sparse_', '_image_' support
  - line 167: // TODO: Add Filter to the same location in case of refactor
tools/coreml/converter/_layers.py (2 lines):
  - line 66: # TODO These operators still need to be converted (listing in order of priority):
  - line 208: #TODO add SCALED_TANH, SOFTPLUS, SOFTSIGN, SIGMOID_HARD, LEAKYRELU, PRELU, ELU, PARAMETRICSOFTPLUS, THRESHOLDEDRELU, LINEAR
src/operator/tensor/cast_storage-inl.cuh (2 lines):
  - line 92: // TODO: remove kernel dependency on warpSize=32
  - line 486: // TODO: remove kernel dependency on warpSize=32
example/bi-lstm-sort/sort_io.py (2 lines):
  - line 82: for l, n in len_dict.items(): # TODO: There are better heuristic ways to do this
  - line 102: self.index = None # TODO: what is index?
scala-package/spark/src/main/scala/org/apache/mxnet/spark/io/LabeledPointIter.scala (2 lines):
  - line 130: // TODO: need to allow user to specify DType and Layout
  - line 135: // TODO: need to allow user to specify DType and Layout
scala-package/core/src/main/scala/org/apache/mxnet/Serializer.scala (2 lines):
  - line 42: // TODO: dynamically get from mxnet env to support other serializers like Kyro
  - line 47: // TODO: dynamically get from mxnet env to support other serializers like Kyro
example/rnn/old/bucket_io.py (2 lines):
  - line 78: for l, n in len_dict.items(): # TODO: There are better heuristic ways to do this
  - line 103: self.index = None # TODO: what is index?
example/reinforcement-learning/dqn/dqn_demo.py (2 lines):
  - line 159: # TODO Here we can in fact play multiple gaming instances simultaneously and make actions for each
  - line 162: # TODO Profiling the speed of this part!
scala-package/core/src/main/scala/org/apache/mxnet/Optimizer.scala (2 lines):
  - line 128: // TODO: make state a ClassTag
  - line 132: // TODO: make returned state a ClassTag
src/kvstore/kvstore_dist_server.h (2 lines):
  - line 358: // TODO is it possible to do in place???
  - line 359: // TODO should we average over number of GPUs
scala-package/core/src/main/scala/org/apache/mxnet/NDArray.scala (2 lines):
  - line 553: // TODO: imdecode
  - line 1034: // TODO: naive implementation
example/reinforcement-learning/dqn/base.py (2 lines):
  - line 104: #TODO Optimize the reshaping functionality!
  - line 301: # TODO `wait_to_read()` here seems unnecessary, remove it in the future!
example/rnn-time-major/bucket_io.py (2 lines):
  - line 77: for l, n in len_dict.items(): # TODO: There are better heuristic ways to do this
  - line 98: self.index = None # TODO: what is index?
scala-package/spark/src/main/scala/org/apache/mxnet/spark/MXNet.scala (2 lines):
  - line 152: // TODO: check ip & port available
  - line 215: // TODO: more nature way to get the # of examples?
julia/src/kvstore.jl (2 lines):
  - line 250: # TODO: Currently Julia does not support closure in c-callbacks, so we are making use of the
  - line 353: # TODO: sparse support?
julia/src/base.jl (2 lines):
  - line 65: # TODO: bug in nnvm, if do not call this, call get handle "_copyto" will fail
  - line 178: # TODO: find a better solution in case this cause issues in the future.
example/reinforcement-learning/dqn/replay_memory.py (2 lines):
  - line 83: # TODO Test the copy function
  - line 202: #TODO Possibly states + inds for less memory access
scala-package/core/src/main/scala/org/apache/mxnet/ExecutorManager.scala (2 lines):
  - line 276: // TODO: more precise error message should be provided by backend
  - line 333: // TODO: shall we dispose the replaced array here?
scala-package/core/src/main/scala/org/apache/mxnet/Base.scala (1 line):
  - line 81: // TODO: shutdown hook won't work on Windows
scala-package/spark/src/main/scala/org/apache/mxnet/spark/io/PointIter.scala (1 line):
  - line 129: // TODO: Make DType, Layout configurable
scala-package/core/src/main/scala/org/apache/mxnet/io/MXDataIter.scala (1 line):
  - line 56: // TODO: need to allow user to specify DType and Layout
julia/src/ndarray.jl (1 line):
  - line 1767: # TODO the explicit exclusion of take will no longer be necessary when it is removed from Base
example/reinforcement-learning/dqn/utils.py (1 line):
  - line 77: #TODO Update logging patterns in other files
example/reinforcement-learning/dqn/operators.py (1 line):
  - line 33: # TODO Backward using NDArray will cause some troubles see `https://github.com/dmlc/mxnet/issues/1720'
example/ssd/symbol/common.py (1 line):
  - line 253: # TODO: better way to shape the anchors??
julia/src/io.jl (1 line):
  - line 276: TODO: remove `data_padding` and `label_padding`, and implement rollover that copies
matlab/+mxnet/model.m (1 line):
  - line 233: % TODO convert from c order to matlab order...
src/kvstore/gpu_topology.h (1 line):
  - line 985: // -use 0 for testing (TODO: remove this)
julia/src/model.jl (1 line):
  - line 665: # TODO: is there better way to compare two symbols
python/mxnet/module/base_module.py (1 line):
  - line 618: #TODO: pull this into default
julia/src/optimizers/rmsprop.jl (1 line):
  - line 83: @inplace W .+= -η .* ∇ ./ sqrt(s .+ ϵ) # FIXME: sqrt should be dot-call (see the sketch after this list)
scala-package/core/src/main/scala/org/apache/mxnet/Model.scala (1 line):
  - line 342: // TODO: make DataIter implement Iterator
scala-package/macros/src/main/scala/org/apache/mxnet/NDArrayMacro.scala (1 line):
  - line 122: // TODO: Currently we do not add place holder for NDArray
benchmark/python/sparse/dot.py (1 line):
  - line 39: # TODO: Use logging later
julia/src/optimizers/adamax.jl (1 line):
  - line 79: s.uₜ = _maximum(β2 * s.uₜ, abs(∇)) # FIXME abs dot-call
scala-package/core/src/main/scala/org/apache/mxnet/util/NativeLibraryLoader.scala (1 line):
  - line 38: * TODO: shutdown hook won't work on Windows
julia/src/optimizers/adagrad.jl (1 line):
  - line 76: @inplace W .+= -η .* ∇ ./ sqrt(x .+ ϵ) # FIXME: sqrt dot-call
scala-package/core/src/main/scala/org/apache/mxnet/NDArrayAPI.scala (1 line):
  - line 24: // TODO: Implement CustomOp for NDArray
python/mxnet/notebook/callback.py (1 line):
  - line 284: super(LiveTimeSeries, self).__init__(None, None) # TODO: clean up this class hierarchy
scala-package/core/src/main/scala/org/apache/mxnet/IO.scala (1 line):
  - line 148: // TODO: change the data/label type into IndexedSeq[(NDArray, DataDesc)]
example/ctc/ocr_iter.py (1 line):
  - line 33: self.index = None # TODO: what is index?
perl-package/AI-MXNet/lib/AI/MXNet/KVStoreServer.pm (1 line):
  - line 54: ## TODO write logging
scala-package/core/src/main/scala/org/apache/mxnet/Executor.scala (1 line):
  - line 89: // TODO: more precise error message should be provided by backend
julia/src/optimizers/adadelta.jl (1 line):
  - line 98: Δxₜ = ∇ .* sqrt(Δx .+ ϵ) ./ sqrt(x .+ ϵ) # FIXME: sqrt dot-call
perl-package/AI-MXNet/lib/AI/MXNet/Module/Base.pm (1 line):
  - line 623: #TODO: pull this into default
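The four Julia optimizer FIXMEs listed above (rmsprop.jl, adamax.jl, adagrad.jl, adadelta.jl) all ask for the same change: `sqrt`, `abs`, and `maximum` applied to whole arrays should use Julia's broadcast ("dot-call") syntax. The sketch below is illustrative only; the variables (η, ϵ, β2, W, ∇, s, uₜ) are stand-ins for the optimizer's actual state fields, not the module's real code.

```julia
# Hypothetical stand-ins for the optimizer state, to show the dot-call form.
η, ϵ, β2 = 0.001, 1e-8, 0.999
W  = randn(3, 3)      # parameters
∇  = randn(3, 3)      # gradient
s  = abs2.(∇)         # running second-moment state (placeholder)
uₜ = zeros(3, 3)      # AdaMax infinity-norm state (placeholder)

W .+= -η .* ∇ ./ sqrt.(s .+ ϵ)   # sqrt.() broadcasts element-wise over the array
uₜ  = max.(β2 .* uₜ, abs.(∇))    # abs.() and max.() likewise replace the scalar calls
```

In the real optimizers the state lives in mutable structs and is updated via `@inplace`, so the actual fix would only swap `sqrt(...)`/`abs(...)` for their dotted counterparts while keeping the surrounding code unchanged.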