Summary: 160 instances, 152 unique

| Text | Count |
| --- | --- |
| //TODO : replace any | 1 |
| # TODO: this is ugly, we put all the initialization work in this method, because initialization relies | 1 |
| # TODO: check the label of module and warn if it's auto-generated | 1 |
| if i == self.cur_ind - 1: # TODO: add other member in the set | 1 |
| TODO: support searchspace file. | 1 |
| // TODO: this should be private | 1 |
| # TODO: consider other functions here | 1 |
| //TODO : pass in request timeout param? | 1 |
| # FIXME: It does not stop on failure. Tried "ErrorActionPreference" with no luck. | 1 |
| //TODO: change the logic here when we want to support multiple tensorboard | 1 |
| # TODO: check whether there could be multiple output nodes??? | 1 |
| // TODO support specify database dir | 1 |
| # TODO suppose to fix the conflict after the sparsity propagation | 1 |
| # TODO: find out all the modules that have the same requirement as LSTM | 1 |
| # TODO: Delete this after upgrading to PyTorch 1.11. | 1 |
| //TODO: use HDFS working folder instead | 1 |
| // TODO: how to format NaN? | 1 |
| new_model.evaluator = self.evaluator # TODO this needs a clever copy (not deepcopy) if we need mutation | 1 |
| // TODO: use search space and metrics space from TRIALS will cause update issues. | 1 |
| # TODO In current L1/L2 Filter Pruner, the 'op_types' is still necessary | 1 |
| # TODO - catch exact Exception | 1 |
| # FIXME: | 1 |
| # TODO: use node copy instead | 1 |
| # FIXME: merge this rename with non-root graph, only do once. | 1 |
| # TODO: it does not have duplicated edges if only supporting dedup input | 1 |
| # TODO: use a passed-in RNG here | 1 |
| # TODO: refactor this part, maybe we can remove the code gen of prim::Constant | 1 |
| 'Module', 'Sequential', 'ModuleList', # TODO: 'ModuleDict', 'ParameterList', 'ParameterDict', | 1 |
| # TODO: | 2 |
| # TODO: Calculate loss | 1 |
| // FIXME: this should not be handled in web UI side | 1 |
| //TODO: remove this constraint when supporting multi-host placement | 1 |
| # TODO: handle imports | 1 |
| # FIXME: See cache-dependencies-template.yml on why it needs rebuild. | 1 |
| # TODO: serialize complex data type, and output proper error message | 1 |
| // TODO: change this hard coded deployment name after demo | 1 |
| # TODO what if last output is tuple/list of tensor | 1 |
| // TODO: use local storage | 1 |
| # TODO: Feed the exact input tensor if user provides input, | 1 |
| # TODO: post, put, etc | 1 |
| // TODO: move 100 and 1000 into constants class | 1 |
| # FIXME: I don't know where "utils" should be | 1 |
| # TODO: docstring | 1 |
| # TODO: deal with this argument | 1 |
| # TODO: The current design of init interface of Retiarii experiment needs to be reviewed. | 1 |
| // TODO: fix expressJoi | 1 |
| # TODO: the experiment should be completed, when strategy exits and there is no running job | 1 |
| ) # TODO: emulate tee behaviors, not necessary tho. | 1 |
| search_space: Any = '' # TODO: remove | 1 |
| # TODO: I'm not sure of the availble datasets in this benchmark. and the docs are missing. | 1 |
| # TODO: status | 1 |
| TODO: move this logic to NNI manager | 1 |
| # TODO if we can find the corresponding relation between the value node | 1 |
| assert merge_op in ['all'] # TODO: loose_end | 1 |
| # TODO: directly use weight_mask is not good | 1 |
| # FIXME: should be a warning message here | 1 |
| // TODO: use Message.txt to tooltip | 1 |
| // FIXME: Use global handler. It should be aware of shutting down event and swallow errors in this stage. | 1 |
| # TODO: may relax this limitation? | 1 |
| ], # TODO: support remaining argument, uncomment the lines in nnictl.py | 1 |
| # TODO: support InputChoice and ValueChoice | 1 |
| startTime: (trial as Trial).info.startTime, // FIXME: why do we need info here? | 1 |
| # TODO: deal with conditional ops | 1 |
| # TODO: split metric of multiple models? | 1 |
| if self.args.model_type == 'mobilenetv2': # TODO: to be tested! Share index for residual connection | 1 |
| // TODO: implement put, post, delete methods | 1 |
| # TODO support the other batch dimension in the future | 1 |
| // FIXME: We should have a global handler for critical errors. | 1 |
| sequenceId: -1, // FIXME: multi-phase tuner should use sequence ID instead of trial job ID | 1 |
| TODO: support more types in the type_as, need to figure out | 1 |
| # FIXME: return mask, to be consistent with other algorithms | 2 |
| // TODO: add blacklist | 1 |
| /* TODO: here is 32 rather than $pageMargin is because experiment page `content` position */ | 1 |
| # FIXME: prior is designed but not supported yet | 2 |
| // FIXME: unit test | 1 |
| # TODO: refactor later | 1 |
| # FIXME: not tested yet | 2 |
| # TODO: ugly, think about how to refactor this part | 1 |
| content = `${filename} is empty.`; // FIXME: this should be handled in front-end | 1 |
| # TODO: Packing multiple model in one GPU | 1 |
| //TODO: | 1 |
| // TODO: test | 1 |
| # TODO: | 1 |
| # FIXME: risk, candidates might also have None | 1 |
| // TODO: handle nested entries | 1 |
| // TODO: move this to our ESLint formatter? | 1 |
| # TODO: scope name could be empty | 1 |
| // TODO: handle more than number and object | 1 |
| this.log.error(`TrialDispatcher: TODO: not implement to handle direct REPORT_METRIC_DATA command yet.`); | 1 |
| // FIXME: redundant update | 1 |
| # FIXME: to avoid circular import, copy this function in this place | 1 |
| // TODO: remove the following constraint when supporting distributed trial | 1 |
| # FIXME This is a hack to make choice align with the previous format | 1 |
| # TODO: designed to replace `patch_optimizer` | 1 |
| # TODO: what if this input is a constant tensor | 1 |
| * FIXME: This should be a router, not a separate REST server. | 1 |
| # FIXME: some tuners raise NoMoreTrialError when they are waiting for more trial results | 1 |
| # FIXME: need further check for each algorithm which types are actually supported | 1 |
| // TODO: support non shared storage | 2 |
| // TODO: should handle more types of metric keys | 1 |
| // FIXME: this can take a long time | 1 |
| //TODO: remove this constraint when supporting other training services | 1 |
| # FIXME: this check might be wrong | 1 |
| # TODO support more kinds of value node | 1 |
| # TODO: maybe it should be able to calc on weight-granularity, beside from layer-granularity | 1 |
| // FIXME: can this be undefined? | 1 |
| # TODO: currently, only support single input slot and output slot. | 1 |
| # TODO: graph | 1 |
| # TODO: this logic might need to be refactored into execution engine | 1 |
| // TODO: netron might need prefix. | 1 |
| #TODO:finish webui function | 1 |
| // FIXME: This file is copied from react-dev-utils master branch. Waiting for its next release. | 1 |
| # TODO: It is not a good idea to directly modify global settings. A better choice is | 1 |
| # TODO: find out a proper way to show no more trial message on WebUI | 1 |
| #TODO replace this flops counter with nni.compression.torch.utils.counter.count_flops_params | 1 |
| # TODO should use a more general way to get the input | 1 |
| // TODO: this should be refactored to the common modules | 1 |
| # TODO: change to sleep time configurable via arguments | 1 |
| // TODO: NNI manager should not peek tuner's internal protocol, let's refactor this later | 1 |
| '/experiment/cluster-metadata', //TODO: Fix validation expressJoi(ValidationSchemas.SETCLUSTERMETADATA), | 1 |
| # TODO: the operation is likely to be considered editable by end-user and it will be hard to debug | 1 |
| # TODO: prim::TupleIndex is not supported yet | 1 |
| TODO: parameters of subgraph (see `Node` class) | 1 |
| # TODO: support non member functions | 1 |
| # FIXME: this is a workaround as full tensor is not supported in configs | 1 |
| * TODO: | 1 |
| displayName: Sphinx # TODO: rstcheck | 1 |
| # FIXME This is a hack to make choice align with the previous format | 1 |
| # TODO: when copying one node to multiple devices, broadcast is more efficient than P2P communication | 1 |
| # TODO: replace with validation here | 2 |
| # TODO: deal with all the types | 1 |
| # TODO: set these, for each class? | 1 |
| orig_type = weight.type() # TODO: user layer | 1 |
| # TODO: port shangning's work here, and use it in Experiment.start()/.stop() | 1 |
| # FIXME: because Tuner is designed as interface, this API should not be here | 1 |
| return null; // TODO: render a loading page | 1 |
| // TODO: change the name based on operator's type | 2 |
| // FIXME: default metric is hacked as latestAccuracy currently | 1 |
| # TODO: could make this a user given parameter | 1 |
| // FIXME: this is ad-hoc | 1 |
| # TODO: support all platforms | 1 |
| // FIXME: Use global handler. The event can be emitted after listening. | 1 |
| # TODO: need refactor | 1 |
| # TODO: add scope name | 1 |
| # TODO: match with arg's type. manually choose for now | 2 |
| TODO: this is pytorch cell | 1 |
| TODO: If the compact model has been speeduped, the auto infer masks maybe also need. | 1 |
| {/* TODO: fix bug */} | 1 |
| // TODO add validators for request params, query, body | 1 |
| #if spec.q is None: # TODO: comment out because of edge case UT uniform(99.9, 99.9) | 1 |
| #TODO: 1. get this url by api? 2. change this url in private dlc mode. | 1 |
| TODO: parameter of subgraph (cell) | 1 |
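The summary counts every matching comment line as an "instance" and deduplicates identical line text to get the "unique" figure. The report does not say which tool produced the table; the sketch below is a minimal, assumed reproduction in Python, where the file extensions and the TODO/FIXME regex are illustrative choices rather than the actual configuration.

```python
# Minimal sketch (assumptions: file extensions, marker regex, and keying on
# the stripped line text). Counts TODO/FIXME comment lines in a source tree
# and reports total instances vs. unique line texts.
import re
from collections import Counter
from pathlib import Path

MARKER = re.compile(r'\b(TODO|FIXME)\b')
EXTENSIONS = {'.py', '.ts', '.tsx', '.js', '.scss', '.yml'}  # assumed set

def collect_markers(root: str) -> Counter:
    counts: Counter = Counter()
    for path in Path(root).rglob('*'):
        if not path.is_file() or path.suffix not in EXTENSIONS:
            continue
        for line in path.read_text(errors='ignore').splitlines():
            if MARKER.search(line):
                counts[line.strip()] += 1  # identical texts collapse to one row
    return counts

if __name__ == '__main__':
    counts = collect_markers('.')
    print(f'Summary: {sum(counts.values())} instances, {len(counts)} unique')
    for text, count in counts.most_common():
        print(f'{text}\t{count}')
```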