Summary: 60 instances, 57 unique

| Text | Count |
| --- | --- |
| AttributeValue = Any # TODO Union[Sequence[float], Sequence[int], Sequence[Text], Sequence[TensorProto], Sequence[GraphProto]] | 1 |
| // TODO: check that the index is valid given the length of the input array. | 1 |
| # TODO: There's no MIL op to extract a value from a symbolic tensor, | 1 |
| # TODO: remove InternalVar check | 1 |
| // TODO -- This seems like a giant hack. This is leaking abstractions. | 1 |
| // TODO: this should do something. | 1 |
| # TODO: If input is not rank 3 or 4, then accordingly handle | 1 |
| # TODO: The calculations to convert adaptive_pool to standard pool, | 2 |
| # TODO: Skip or CopyProp + DeadCode elimination | 1 |
| # TODO: Add one expand instead of adding one after another for input, h and c | 1 |
| # TODO: instead of a hard-coded set, use op-traits | 1 |
| TODO (rdar://59165842): Use expand_dims, squeeze etc to use 0 instead | 1 |
| # TODO: Add one expand instead of adding one after another for input, h | 1 |
| // TODO @znation: commenting out the below, because it breaks pipeline validator. | 1 |
| // TODO: Compare sizes | 1 |
| # TODO: All outputs that depend on only invariants are invariant. We | 1 |
| # TODO 3D convolution doesn't support dynamic weights: | 1 |
| // TODO: make this check all of the blob names, and make it compare to the input and output shapes | 1 |
| ## TODO: Add un-supported ops | 1 |
| # TODO: if it's an embedding, this should be integer | 1 |
| # TODO: Dynamic rank: Use GetShape and select indices dynamically | 1 |
| // TODO -- is there a better way to do this than switch/case? | 1 |
| # TODO: Right now, "const elimination" pass CANNOT be done after the "homogenize_input_dtypes" pass. | 1 |
| # TODO: gather doesn't work when the shape is known size. | 1 |
| // TODO @znation: validate array length below | 1 |
| // TODO: Check parameters match | 1 |
| // TODO: there is some cross shape constraint here -- all 3 or 4 input ranges might be unconstrained, but now they are constrained to a combination whose product is | 1 |
| # TODO Should we support this case? | 1 |
| ## TODO: Test coverage when B and C are non-constant | 1 |
| // TODO -- validate classifier interface | 1 |
| // TODO: add check that this size is possible from the input size | 1 |
| # TODO: ONNX should be able to infer the shape | 1 |
| // TODO put try/catch around this? | 1 |
| // TODO: Check to make sure the particular option is compatible with a classifier | 1 |
| # TODO: Replace If -> ElIf with more general transformation block | 1 |
| // TODO: add a catch with an error message that mentions the name | 1 |
| ## TODO: rdar://73851694 (Update einsum op translation to support generic cases) | 1 |
| * TODO: rdar://76017556 | 1 |
| WRAP_SIMPLE_TYPE(Image, MLFeatureTypeType_imageType) /* TODO image is not simple type */ | 1 |
| // TODO: need to forward the other constraints -- sequence and batch (although those can't exist for this layer) | 1 |
| # TODO : If tf.split output is returned, there's no | 1 |
| // TODO Probability outputs are always dictionaries! | 1 |
| # TODO: extract nodes that depends on the "const part" of placeholders. | 1 |
| // TODO -- validate regressor interface | 1 |
| # TODO: Adding Linear layer will change to | 1 |
| // TODO: make enumgen generate this when compiling .proto files | 1 |
| // TODO: Error checking routines -- make sure this node hasn't been requested before. | 3 |
| # TODO: rdar://65575826 (PyTorch converter: output_padding mapping to slice | 1 |
| # TODO: To be removed once, auto-downgrading of spec version is enabled | 1 |
| ## TODO: Add support for conversion from STRING TO FLOAT | 1 |
| # TODO: Skip or CopyProp + DeadCode elimination | 1 |
| # TODO (sberardi): 3rd param to aten::sub is a scale factor, need to handle that. | 1 |
| TODO: | 1 |
| {Specification::FeatureType::TypeCase::kMultiArrayType, // TODO ARRAY TYPE IS INVALID, REMOVE | 1 |
| // TODO -- not implemented! | 1 |
| # TODO (sberardi): 3rd param to aten::add is a scale factor, need to handle that. | 1 |
| # TODO: Move error check under VERBOSE / DEBUG Mode | 1 |
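For reference, "instances" is the total number of TODO matches and "unique" is the number of distinct comment texts (here 55 texts appearing once, one appearing twice, and one appearing three times, giving 60 instances across 57 unique texts). The following is a minimal sketch of how such a tally could be produced over a local checkout; the `collect_todos` helper, the regex, and the file-extension filter are illustrative assumptions, not the tool that generated this report.

```python
# Hypothetical sketch: tally TODO comments by distinct (trimmed) line text.
from collections import Counter
from pathlib import Path
import re

TODO_RE = re.compile(r"TODO")  # any line containing "TODO" counts as one instance

def collect_todos(root: str) -> Counter:
    counts: Counter = Counter()
    for path in Path(root).rglob("*"):
        # Assumed source extensions; adjust for the repository being scanned.
        if path.suffix not in {".py", ".cpp", ".hpp", ".h", ".mm"}:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for line in text.splitlines():
            if TODO_RE.search(line):
                counts[line.strip()] += 1  # key on the full trimmed line, as in the table above
    return counts

if __name__ == "__main__":
    todos = collect_todos(".")
    print(f"Summary: {sum(todos.values())} instances, {len(todos)} unique")
    for text, count in todos.most_common():
        print(f"{count:>5}  {text}")
```

Keying on the full trimmed line (rather than only the text after "TODO") matches the table above, where two otherwise identical comments can appear as separate unique entries if their surrounding line text differs.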