Summary: 121 instances, 109 unique

Count  Text
-----  ----
    1  // TODO Add support for other input types, e.g., tensor[]
    2  // TODO: think about whether this is better off computed inside a Node.
    1  // TODO: any kernel size check in respect to input ? In contrast to Conv,
    1  // TODO we visited many redundent nodes during this process,
    1  // TODO: We could come up with a mechanism to lazy compute that
    1  // TODO: intend to find a way to reuse the following function later
    1  # TODO: ensure that the compiler supports these options before adding
    1  // TODO: add checks for number of inputs and argument values
    1  // TODO: Use a more specific ONNXIFI error code here to denote what about
    1  // TODO: replace users of ReLU input after ReLU with ReLU output.
    1  // TODO - not quantized support yet in libjit.
    1  // TODO Currently we only support symmetric quantization.
    1  // TODO: Use a proper type based on the JIT's output type.
    2  // TODO: Remove ->getLayout() enum and take a string like transpose. Refactor
    1  // TODO: use a rw mutex here for efficiency.
    1  // TODO: consider refactoring boilerplate code to new trait: DottyPrintable
    2  // TODO: Uncomment this once #5729 gets fixed.
    1  // TODO: fix this so that a HostManager is deleted when all backends
    1  // TODO: support any dimensional transposes.
    1  // TODO Add support for other output types, e.g., tensor[]
    1  // TODO: implement 3rd argument rounding_mode option
    1  // TODO - registered once to avoid error:
    1  // TODO: Use a more specific ONNXIFI error code here to denote what
    1  // TODO: Only add to externalPlaceholders_ of PH is external placeholder
    1  // TODO: This backend does not have a 64-bit type, but Glow uses
    1  // FIXME: Right now, the TensorQuantizationParams only tracks one
    1  // TODO: Remove the quantization step to potentially improve performance.
    1  // Initialize bias (TODO take out to a separate function when quant is in).
    1  // TODO: Change Visitors to return whether they modified the Function they
    1  // FIXME: Remove InOut!
    1  // TODO: Erase N during CSE? If we don't do it here,
    2  // FIXME: looking for symbols external to libjit in the process is
    1  // TODO: verify with garret we don't need to lock here - i.e. host manager
    1  // TODO: This limitation can be lifted, but that is for simplicity.
    1  // TODO: these may need to be tweaked depending on specific CPU.
    1  // FIXME: this is a temporary solution for the case when NonZero returns
    1  /// TODO: Currently, for all supported types wider mantissa also means wider
    1  // TODO: Move this into the GraphOptimizer once Glow supports reduce
    1  // TODO: We can also support Splat here if needed.
    1  // FIXME: This can starve inactive topos.
    1  // TODO: should we check is_multiparam?
    1  // TODO: OpenCL fast convolution kernel itself has some issue with group >
    1  // TODO: May be disallow usage of dest interval for src?
    1  // TODO: allow correct type mapping from double to float
    1  // TODO: Only if enabled.
    1  // TODO: Need to add different flavours of dumping.
    1  // TODO: these may need to be tweaked depending on interpreter overheads.
    1  // TODO: add AVX implementation.
    1  // TODO: Fix bad assumption? See issue 3499, for now workaround it.
    1  usedMemoryBytes_ += functionCost_; // TODO:: static moduleSize.
    2  /// FIXME: This is a workaround, because defining the hash_code
    1  // TODO: We can also support quantized constants if there is a need in it.
    1  // TODO: for intermediate placeholders in DRT/P2P cases, we don't need
    3  // TODO: extend this to allow non-constant scalars.
    1  // TODO: need to remove the Clip ops once the FC inputs can be put on SRAM
    2  // TODO: Unload functions that were loaded successfully.
    1  // TODO: Try to produce semantically meaningful parameter names, e.g. by
    1  usedMemoryBytes_ -= functionCost_; // TODO: static moduleSize.
    1  // FIXME: looking for symbols external to libjit in the process is
    1  } // FIXME: else make sure networkName does not have any sequence of
    1  // TODO: Do we need support for different quant params of copy?
    2  // TODO: Handle other dimensions.
    1  // TODO reverse map of jit-glow node should resolve this problem.
    1  // TODO: add a TRACE_EVENT entry
    1  // TODO: Verify model integrity using flatbuffers::Verifier class.
    1  /// TODO: can use one mutex per set if performance becomes an issue.
    1  /// TODO: Remove after modifying InstrGen to use MemberTypeInfo as well?
    1  // TODO: move all initialization out of constructor.
    1  /// TODO: check Dtype is float (optional value).
    1  // TODO: add axis/dim support
    2  // TODO: Rename "BatchDims" member to "Axis". This was attempted in #5565 but
    1  // TODO: If it would introduce a last write into an observable, do not
    1  // TODO: add description
    1  // TODO: Add here any backend specific logic.
    1  // TODO: remove the shuffle and replace it with layout.
    1  // TODO: this function is largely duplicated with allowsPartialInput()
    2  // TODO: Support quantized constants if needed.
    1  // TODO: synchronize clocks better, this can be off the thread was yielded
    1  // TODO: delete this once this is fixed by
    1  /// TODO: distinguish between data types with different peak flops.
    1  // TODO - support other types.
    1  // TODO: Loader should provide function to register callbacks.
    1  // TODO:
    1  // TODO: this should be done only on Glow subgraphs to avoid modifying parts
    1  // TODO: consider adding this functionality to the main operator set.
    1  // TODO: support more info type values. Here is the minimal required
    1  // TODO investigate if removing "noalias" can be used to create bigger
    1  // TODO: look through the nodes in FXIR to determine if a custom DSP op is
    1  // TODO: Add only constants used by F to the compiled function. This should
    1  // TODO: ONNX spec disallows using "pads" and "auto_pad" together. However,
    1  /// literature). TODO: Generalize these parameters for other cache sizes.
    1  // TODO: Check that ptr is an allocated address.
    1  // TODO: When Glow supports reduce operators with multiple axes remove this!
    1  // TODO: check maybe we can support more dimensions to be reduced
    1  // TODO: Handle this case with FloorDiv
    1  // TODO: Use synGetMemInfo once implemented.
    1  // TODO Add support for other input types, e.g., tensor list
    1  // TODO: Add support for strides different than 1 (positive or negative) once
    1  /// TODO: Default tensor loader could be extended to support various data types
    1  // TODO: the logic here need to be improved.
    1  // TODO: broadcast inputs if input is not a node.
    1  // TODO : will improve the algorithm for different memory size.
    1  // TODO: Only run the appropriate passes as needed.
    2  /// TODO: Handle this case with FloorDiv
    1  // TODO: fix UINT_MAX
    1  // TODO: Change Glow Type to use sdim_t to be consistent
    1  /// TODO: Multi-dimension slicing will be supported later.
    1  // TODO: @hwwang T80910607 Only support None dtype (4th argument)
    1  // TODO: pass dim as a second tensor