adanet/core/iteration.py (12 lines):
  - line 100: # TODO: Consider making these messages be some kind of Enum.
  - line 145: # TODO: Re-enable once we know that evaluation won't
  - line 315: # TODO: Replace candidates with ensemble_specs.
  - line 567: # TODO: Consider moving builder mode logic to ensemble_builder.py.
  - line 660: # TODO: Eliminate need for candidates.
  - line 717: # TODO: Eliminate need for candidates.
  - line 726: # TODO: Move adanet_loss from subnetwork report to a new
  - line 821: TODO: Make this code compatible with TPU estimators.
  - line 1017: TODO: Best ensemble index should always be static during EVAL
  - line 1192: # TODO: Handle hook created variables.
  - line 1193: # TODO: Handle TPU embedding variables.
  - line 1210: # TODO: Currently, TPUEstimator has no global_step set when

adanet/core/estimator.py (5 lines):
  - line 145: # TODO: Run writer.flush() at Session end.
  - line 695: # TODO: Consider using `RunConfig.replace` with the new device_fn,
  - line 733: # TODO: Merge CandidateBuilder into SubnetworkManager.
  - line 960: # TODO: Support steps parameter.
  - line 1784: # TODO: Refactor architecture building logic to its own module.

adanet/core/estimator_distributed_test_runner.py (5 lines):
  - line 50: # TODO: Switch back to TF 2.0 once the distribution bug is fixed.
  - line 216: # TODO: Prevent checkpoints that are currently being
  - line 254: # TODO: Switch optimizers to tf.keras.optimizers.Adam once the
  - line 313: # TODO: Switch optimizers to tf.keras.optimizers.Adam once the
  - line 372: # TODO: Replace with adanet.Estimator. Currently this just verifies

adanet/autoensemble/common.py (4 lines):
  - line 76: TODO: Figure out how to handle single-batch datasets.
  - line 152: # TODO: Consider tensorflow_estimator/python/estimator/util.py.
  - line 187: # TODO: Replace with variance complexity measure.
  - line 262: # TODO: Make the "config" argument optional using introspection.

adanet/core/summary.py (4 lines):
  - line 372: # TODO: _ScopedSummary and _ScopedSummaryV2 share a lot of the same
  - line 554: # TODO: Figure out a cleaner way to handle this.
  - line 558: # TODO: Do summaries need to be reduced before writing?
  - line 779: # TODO: Add support for `bad_color` arg.

adanet/core/tpu_estimator.py (3 lines):
  - line 41: TODO: Provide the missing functionality detailed below.
  - line 206: # TODO: Consider extracting a common function to use here and in
  - line 416: # TODO: Magic number. Investigate whether there is a

adanet/distributed/placement.py (2 lines):
  - line 178: # TODO: Allow user to disable ensemble workers. For example, when there
  - line 184: # TODO: Optional code organization suggestion:

adanet/core/architecture.py (2 lines):
  - line 125: # TODO: Remove setters and getters.
  - line 135: # TODO: Confirm that it makes sense to have global step of 0.

adanet/core/report_accessor.py (2 lines):
  - line 33: # TODO: Encapsulate conversion and serialization of a
  - line 107: TODO: Remove iteration_number from the argument of this method.

adanet/experimental/controllers/sequential_controller.py (1 line):
  - line 28: # TODO: Add checks to make sure phases are valid.

adanet/ensemble/strategy.py (1 line):
  - line 74: # TODO: Pruning the previous subnetwork may require more metadata

adanet/ensemble/__init__.py (1 line):
  - line 17: # TODO: Add more detailed documentation.

adanet/experimental/work_units/keras_trainer_work_unit.py (1 line):
  - line 40: # TODO: Allow better customization of TensorBoard log_dir.

adanet/experimental/keras/ensemble_model.py (1 line):
  - line 69: # TODO: Extract output shapes from submodels instead of passing in

adanet/experimental/work_units/keras_tuner_work_unit.py (1 line):
  - line 33: # TODO: Allow better customization of TensorBoard log_dir.

adanet/experimental/phases/autoensemble_phase.py (1 line):
  - line 172: # TODO: Add some way to check that work_units has to be called

adanet/core/eval_metrics.py (1 line):
  - line 233: # TODO: Should architecture.subnetworks be sorted by iteration

adanet/core/evaluator.py (1 line):
  - line 30: # TODO: Remove uses of Evaluator once AdaNet Ranker is implemented.

adanet/experimental/keras/testing_utils.py (1 line):
  - line 27: # TODO: Add ability to choose the problem type: regression,

adanet/experimental/phases/phase.py (1 line):
  - line 32: # TODO: Find a better way to ensure work_units only gets called

adanet/replay/__init__.py (1 line):
  - line 16: # TODO: Add more detailed documentation.

adanet/experimental/phases/keras_tuner_phase.py (1 line):
  - line 59: # TODO: Find a better way to get all models than to pass in a

adanet/distributed/__init__.py (1 line):
  - line 20: # TODO: Add more details documentation.

adanet/subnetwork/generator.py (1 line):
  - line 180: # TODO: Validate name matches ^[A-Za-z0-9_.\\-/]*$ (see the sketch after this list)

adanet/experimental/phases/keras_trainer_phase.py (1 line):
  - line 41: # TODO: Consume arbitary fit inputs.

adanet/ensemble/weighted.py (1 line):
  - line 434: # TODO: Add Unit tests for the ndims == 3 path.

adanet/experimental/storages/storage.py (1 line):
  - line 44: # TODO: How do we enforce that save_model is called only once per
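
As an illustration of what resolving one of these items might look like, the name-validation TODO in adanet/subnetwork/generator.py (line 180 above) quotes the pattern `^[A-Za-z0-9_.\-/]*$`. The snippet below is a minimal, hypothetical sketch of such a check; `_VALID_NAME_RE` and `check_name` are names invented here for illustration and are not part of the AdaNet API.

```python
import re

# Pattern quoted from the TODO in adanet/subnetwork/generator.py. The helper
# around it is a hypothetical sketch, not an existing AdaNet function.
_VALID_NAME_RE = re.compile(r"^[A-Za-z0-9_.\-/]*$")


def check_name(name):
  """Returns `name` if it matches the allowed pattern, else raises."""
  if not isinstance(name, str):
    raise ValueError("name must be a string, got: %r" % (name,))
  if not _VALID_NAME_RE.match(name):
    raise ValueError(
        r"name must match ^[A-Za-z0-9_.\-/]*$, got: %r" % (name,))
  return name
```

For example, `check_name("dnn/subnetwork_0")` would pass, while a name containing spaces or colons would raise `ValueError`.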