Summary: 312 instances, 247 unique

Count  Text
-----  ------------------------------------------------------------------------
    2  // TODO: gl_sframe_range::end() should be a const method.
   18  // TODO: List of todo's for this file
    1  // TODO: Consider converting float_array values to flex_nd_vec, but we must
    1  // TODO: Should the metric implementations themselves (or
    1  // TODO: What feedback can we give if the user requests a batch size that
    1  // TODO: Returns 4GB as that makes sure default batch size is used.
    1  // TODO distinguish dict from SGraph
    1  // TODO: Define Optional. (Or require C++17 for std::optional?)
    1  // TODO: Accessors describing name inputs and expected shapes.
    1  // TODO: Check that k exists in eval_map.
    1  // TODO: Also verify that activation shape here is [1024, 13, 13]?
    1  # TODO: decorrelate reducer_override (which is tied to CPython's
    1  // TODO implement other alignments
    1  // TODO: fill this out. This should get much more sophisticated.
    1  // TODO - allow other font properties
    1  // TODO -- pick the limited set of categories intelligently (n most popular
    1  * \TODO: We *could* templatize this around the column type, allowing this to
    1  // TODO: This implementation would ideally just post-process the return value of
    1  // TODO apply font weights in non-macOS
    1  // TODO: write function to check valid data spec
    1  // TODO: better format
    1  // TODO: refactor for multiple indicies
    1  // TODO: Clean up the relationship between the save/load interface defined here
    1  // TODO: Style transfer backends should support both training and inference.
    1  # @TODO: Should be able to automatically choose number of iterations
    1  // TODO: Move this model-spec generation code into a separate file, ideally
    3  /* TODO: const float_array_map& config if needed */
    1  # TODO: https://github.com/apple/turicreate/issues/2672
    1  // @TODO: think about rating data later
    1  // TODO: implement `get_similar_items` used with Image Saliency for OD
    1  // -- TODO: Add object detection network
    1  // TODO: This is broken on Windows, but is only used by RPC, so putting it off
    1  //TODO: Figure out how to get error string on Windows
    1  # FIXME: add tests
    1  // TODO: Have MPS path use these parameters, instead
    1  // TODO: Automatically infer the image column name, or throw error if you
    1  // @TODO: think about side data later
    1  // TODO: Expose seed parameter publicly.
    1  # TODO: automatically tune this.
    1  // TODO: Call object_detector::train from here.
    1  * TODO: Fill in details
    1  /* TODO: Render Labels on Images */
    1  // TODO: The semantics below are adopted from Core Image.
    1  // TODO: Estimate the size of the sframe so that we could decide number of
    1  // TODO: set node_fwd.flag to control training vs. inference mode once we have graphs for both
    2  // TODO: refactor to share code with the non-browser case,
    1  * TODO:
    1  * TODO This can be more intelligent as required. For now, it is kinda dumb.
    1  // TODO: Adopt NCHW.
    1  // TODO exception handling
    1  // TODO: Operations such as reshape, slice, etc.?
    1  // TODO: This hack should not be required.
    1  // TODO - for now, skip values with nan/inf
    4  size_t bin = hash64(seeds[j] ^ i) % num_bins; // TODO: bit mask
    1  /* @TODO: Split all backgrounds into as many chunks as there are cores
    1  // TODO: Move these helper functions to st_model_trainer.cpp
    1  // TODO: previously, we had a check for if (endltype(f) == endltype(std::endl))
    1  // TODO: fix all warnings and make these errors.
    1  //TODO: for now
    1  // TODO: Investigate the number of copies a tensor endures end-to-end.
    1  // TODO: Add a mutable float_array interface so we can validate size.
    1  TODO:
    1  # Now, set up the environment. TODO: move these in to actual
    1  // TODO: Investigate higher-quality downsampling? The original Python toolkit
    2  // TODO: Someday, this will all be an implementation detail of each
    1  // TODO: can it be set by the available RAM??
   14  // TODO: Make less ugly!
    1  # TODO: replace all these with in-build macros based on standard macros in Availability.h
    1  //TODO: Make specific errors available
    1  std::string output_type); // TODO: This should be const
    1  * TODO: Add proper arguments to create_drawing_classifier
    1  // TODO: Should these also be smoothed?
    1  // TODO: below are not available from NSCursor, images are needed
    1  * \TODO Factor out the shared structure for data iterators
    1  * TODO: Add some comments hare about what the base class is supposed to be.
    1  // TODO: Recycle these allocations.
    1  // TODO: A struct instead of a map would be nice, too.
    1  // TODO: Eventually this should also support computing loss from labels, when
    1  // -- TODO: Add activity classifier network
    1  // TODO: Change for testing
    1  // TODO: Once we adopt an asynchronous API, we can let this "double buffering"
    1  // TODO: Migrate to neural_net::float_array_map
    1  // Helper class for extracting rules // TODO REFACTOR
    1  // TODO: Replace model_spec with a Checkpoint class that encapsulates
    1  // TODO: yes 20 is a magic number.
    1  // TODO: Make some of these parameters to the model
    1  # TODO: Refactor Instance Norm
    1  // TODO: Remove resize_only by passing all the augmentation options
    1  * \TODO: This should be a flex_list to accomodate integer labels!
    1  /** TODO: Figure out a better solution to having `num_samples` be a
    1  // TODO: write function to check valid image spec
    1  // TODO: memory pool implementations, etc.
    1  // TODO: When MLFoundation supports training across multiple GPUs, ensure
    1  * TODO: Refactor this and perplexity to share code.
    3  gl_sframe training_data_; // TODO: Avoid storing gl_sframe AND data_iterator.
    1  // TODO: The original Python code path allowed users to specify no validation
    1  // TODO: Remove this vestigial macro invocation once the dependency on cppipc
    1  // TODO: Dispatch augmentation to a separate thread/queue.
    1  // TODO: Change if vega tooltips have log level options
    2  // TODO: Someday we will remove this brittle dependency on names by
    1  // TODO: Redo this function!!! There are tons of in-memory sections.
    1  // - there should only be a single column if this is the case. TODO: Why?
    1  // TODO: Parse from an input stream, without loading entire file into memory
    1  // TODO: Should this ultimately use std::random_device instead?
    5  // TODO: The names should somehow be a parameter of this class.
    1  // TODO: Temporary row_number column to do a stable sort, better solution!
    1  // TODO - what if it's string? how to tell? can it be anything else?
    2  // TODO: convert interface above to use the extensions methods here
    1  # TODO: Recycle the ndarray instances we're allocating below with
    1  // TODO: Parameters such as graph mode should be more explicit than just a
    1  // TODO: Iterator needs to support resuming from an offset.
    1  // TODO: Move this into DarknetYOLOModelTrainer, since these heuristics are
    1  // TODO: Streamline this to prevent all the copies from the various types.
    1  // TODO: Remainder of interface: predict, etc.
    1  // TODO: resolve these issues at the source level
    1  // TODO: add proxy support
    1  // TODO this is bad -- we need a non-const Aggregation in order to call
    1  // TODO: Figure out the right failure behavior for when denominator is 0.
    1  // TODO: These should be exposed in a way that facilitates experimentation.
    6  // TODO: convert interface above to use the extensions methods here
    1  // TODO: This copy is inefficient. This should be a wrapper around NSData:
    1  // TODO: Merge all logical_filter transforms that have identical masks.
    1  // TODO: make train asynchronous
    1  // TODO: This function should be const.
    1  /* TODO: const float_array_map& config if needed */
    2  // TODO: This heavyweight shuffle operation introduces spikes into the
    2  // TODO: In C++17, we can use std::map::merge to move the table entries
    1  // TODO: No, really, replace this legacy interface with one that just accepts
    1  # TODO: take care of batch size
    1  // TODO: Add estimated disk and memory size to SFrames.
    1  // TODO do we need to reset the CTM on each call here? (And reapply our CG->Canvas transformations?)
    1  //TODO: Why don't I have to take the address of cancel_handler?
    1  // TODO: Get this to take an arbitrary function
    1  {"content_feature", content_feature}, // TODO: refactor to take content name and style name
    2  // TODO - error handling?
    1  // TODO send kill signal to threads
    1  // TODO: think about templating to make this simpler.
    1  // TODO: optimize this, as some of the range is already sorted
    1  // TODO: Potentially plumb `::google::protobuf::MessageLite` to variant
    1  * TODO: write what needs to be done for batching
    1  // TODO: The original Python implementation also exposed "anchors",
    1  TODO: Switch to gl_sarray and the new SDK implementation.
    1  // TODO - what if it has values > 32 bit? MLMultiArray only supports 32-bit ints.
    1  // TODO: Codify priority levels
    1  // TODO: Support additional layers (and further parameterize the above) as
    1  // TODO: Should accept model_backend as an optional argument to avoid
    1  # TODO: Evaluate method to update lr in set_learning_rate()
    1  * TODO: this should
    1  // TODO: Expose via forthcoming C-API checkpointing mechanism?
    1  // TODO -- what should we assert here instead, to make sure we have enough
    1  * TODO: make it a general function of setting up a bipartite graph
    1  // TODO: eventually we should probably have an "undefined" bin
    2  # TODO: allow autodetection of light/dark mode.
    1  // TODO: Ideally this would not require copying. But we should move away
    1  // TODO: Replace these with an instance of TCModelTrainerBackendGraphs?
    1  * TODO: Implement batching for multiple images
    1  // TODO not implemented yet.
    1  // TODO: Expose these from DarknetYOLOCheckpoint.
    1  //TODO: Perhaps combine the sframe and the join positions into a struct?
    1  // TODO: This next section could benefit from an iterator over floats,
    1  // TODO: MANY MANY HEURISTICS
    2  // TODO: raise exception
    2  // TODO: force has key
    1  // TODO: Clean the choice of data_type up
    1  // TODO: Only sort the largest num_words
    1  // TODO: Encapsulate these network architecture parameters better, to support
    1  // TODO: Remove the inheritance from ipc_object_base once the cppipc code has
    1  # TODO: early stopping
    1  // TODO: Clean up logic surrounding batch size and memory budget.
    1  #pragma clang diagnostic ignored "-Woverloaded-virtual" // TODO: fix these issues below
    1  /* TODO: Design Better Error Message Compornent */
    1  // TODO: filter annotation array use `x` as index into the array
    2  // TODO Test
    1  // TODO: This can be optimized by precomputing this for all zones outsize
    1  // TODO: This copy is inefficient. We can construct an NSData that wraps a
    1  // TODO: When we support alternative base models, we will have to generalize.
    1  // TODO: Move this into object_detection::ModelTrainer once the
    1  // TODO -- actually show this with the axes the user asked for
    1  // TODO: augment -> train
    1  # TODO: Include self.labels: feed_dict['label'] to handle labels from validation set
    1  // TODO: Remove this model-specific code once the inference path no longer
    1  // TODO BUG? Custom model doesn't seem to allow optional outputs
    1  // TODO: write function to check valid vega spec
    1  // TODO: Remove this type alias once this class stops inheriting from
    1  // TODO: Check that statement
    1  # TODO: Converge to NCHW everywhere.
    1  x[i] = DenseVector(1); // TODO: Why doesn't 0 work?
    1  // TODO: If we standardize on NCHW in the toolkit code, then we can avoid the
    1  * TODO Refactor this into subclasses + subfunctions
    1  // TODO: find a better way of initializing vertex_data
    2  // TODO: Codify priority levels?
    1  // TODO: Use an Optional type instead.
    1  // TODO: Recycle non-temporary MPSImage instances using image allocators
    1  // TODO: Performance, instead of performing a string comparison everywhere,
    1  // @TODO: When we make a dylib, we'll need some sophisticated
    1  // TODO: add free all matrices and array. Mostly copy_weight_matrices_
    1  // TODO: Validate the dictionary keys in compute_properties. Let downstream
    1  # TODO we can link the requirements from here when target_link_libraries
    1  * TODO: will slowly move away all users of this function to get_lazy_sarray
    1  // TODO: This class should be responsible for producing the augmenter itself.
    1  // TODO should we be updating _font to reflect the system font we've fallen back to?
    1  // TODO - not sure what to do here.
    1  // TODO: Remove this method. Just let subclasses define the entire training
    1  // TODO: add options.
    1  // TODO: Raise exception
    1  // TODO: Unify with the logic in supervised_learning_model_base::api_evaluate,
    1  //TODO: What happens if a join key (or part of one) is NULL?
    1  // TODO: The original Python implementation only sampled the training data for
    1  // TODO distinguish list and vector
    1  // TODO: revisit the code when we actually have vertex groups
    1  // TODO: Images intended for clients to (optionally) read should be
    1  // TODO: Remove this eager construction once we stop putting weak pointers in
    1  // TODO make add_element take a sframe_row::row_reference instead
    1  // TODO: optimize this.
    1  //TODO: Implement this functionality in Windows.
    1  std::string output_type); // TODO: This should be const
    1  // TODO: Remove this proxy subclass once the dependency on cppipc has been
    3  // TODO: This function should be const
    1  //TODO: This is a HUGE hack. From the Python side, a list of ints is turned
    1  // TODO - what should this do on other platforms?
    2  // TODO: If gl_sframe_range::end() were a const method, we wouldn't need to
    1  /* TODO: Indicator to Show how many Images were Annotated */
    1  * TODO: This code is *very* similar to the code in supervised_learning.cpp.
    2  // TODO: add multiple batch sizes
    1  // TODO: Where is the right place for this? Probably not here...
    1  std::vector hwc_data(chw_array.size()); // TODO: Avoid initializing
    1  // TODO: Adopt EncodedBatch instead.
    1  # @TODO: Think about how to bypass the md5 checksum if the user wants to
    1  const static size_t NUM_THREADS = 6; // handle requests on 6 threads - TODO optimize this number
    1  // TODO: Factor out more shared code from FillLossLabelsBatch and the below?
    1  //TODO: Break this into its own function
    1  // TODO: Just save this value before writing it to disk.
    1  // TODO: modifications to the cursor attribute should actually be applied
    1  << std::endl; // TODO: finish
    1  // TODO Implement + Test
    1  // TODO: implement `background_work` used with Image Saliency for OD
    2  // TODO: refactor code to be more readable with loops
    1  // TODO: The original Python code path only runs the model selector on at most
    1  #pragma clang diagnostic ignored "-Woverloaded-virtual" // TODO: fix this issue
    1  //TODO: This is what will run out of memory first when scaling up
    1  // TODO: implement `cast_annotations` used with Image Saliency for OD
    1  // TODO: remove 'is_dense' once the ball tree is updated to use distance components
    2  // TODO - do we need to do anything here? perhaps not.
    1  // TODO: Investigate parallelizing this file I/O.
    1  // TODO: Implement "report_by_class" using the standard evaluation framework.
    1  //TODO: This has a race condition that I'm ignoring as I don't think it will
    1  // TODO: Remove this method. It is only called by the base class implementation
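
For context on how the figures relate: each "instance" is one source line containing a TODO/FIXME marker, and "unique" counts distinct line texts, which is also where the per-row counts come from. The following is a minimal sketch of how such an inventory could be generated; it is not the tool that produced this report, and the marker regex, file-extension filter, and output layout are assumptions for illustration only.

import os
import re
from collections import Counter

# Illustrative assumptions: which markers to match and which files to scan.
MARKER = re.compile(r"\b(?:TODO|FIXME)\b")
EXTENSIONS = {".h", ".hpp", ".c", ".cpp", ".mm", ".py"}

def collect_todos(root="."):
    """Count every source line containing a marker, keyed by its stripped text."""
    counts = Counter()
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1] not in EXTENSIONS:
                continue
            with open(os.path.join(dirpath, name), errors="replace") as f:
                for line in f:
                    if MARKER.search(line):
                        counts[line.strip()] += 1
    return counts

if __name__ == "__main__":
    counts = collect_todos()
    # "Instances" = total marker lines; "unique" = number of distinct texts.
    print(f"Summary: {sum(counts.values())} instances, {len(counts)} unique")
    for text, count in counts.most_common():
        print(f"{count:5d}  {text}")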