econml/solutions/causal_analysis/_causal_analysis.py (11 lines):
    - line 30: # TODO: this utility is documented but internal; reimplement?
    - line 32: # TODO: this utility is even less public...
    - line 360: # TODO: we can't currently handle unseen values of the feature column when getting the effect;
    - line 370: # TODO: consider addding an API to DML that allows for better understanding of how the nuisance inputs are
    - line 659: # TODO: check compatibility of X and Y lengths
    - line 709: # TODO: bail out also if categorical columns, classification, random_state changed?
    - line 713: # TODO: should we also train a new model_y under any circumstances when warm_start is True?
    - line 1002: # TODO: enrich outcome logic for multi-class classification when that is supported
    - line 1066: # TODO: Note that there's no column metadata for the sample number - should there be?
    - line 1443: # TODO: it seems like it would be better to just return the tree itself rather than plot it;
    - line 1535: # TODO: it seems like it would be better to just return the tree itself rather than plot it;

econml/iv/nnet/_deepiv.py (8 lines):
    - line 19: # TODO: make sure to use random seeds wherever necessary
    - line 20: # TODO: make sure that the public API consistently uses "T" instead of "P" for the treatment
    - line 96: # TODO: does the numeric stability actually make any difference?
    - line 340: # TODO: is there a more robust way to do this?
    - line 350: # TODO: do we need to give the user more control over other arguments to fit?
    - line 365: # TODO: do we need to give the user more control over other arguments to fit?
    - line 370: # TODO: it seems like we need to sum over the batch because we can only apply gradient to a scalar,
    - line 436: # TODO: any way to get this to work on batches of arbitrary size?

econml/utilities.py (4 lines):
    - line 167: # TODO: any way to avoid creating a copy if the array was already dense?
    - line 752: # TODO: might be faster to break into connected components first
    - line 757: # TODO: Consider investigating other performance ideas for these cases
    - line 822: # TODO: would using einsum's paths to optimize the order of merging help?

econml/iv/dr/_dr.py (3 lines):
    - line 95: # TODO: prel_model_effect could allow sample_var and freq_weight?
    - line 1539: # TODO: support freq_weight and sample_var in debiased lasso
    - line 2511: # TODO: do correct adjustment for sample_var

econml/dml/dml.py (3 lines):
    - line 419: # TODO: consider whether we need more care around stateful featurizers,
    - line 857: # TODO: support freq_weight and sample_var in debiased lasso
    - line 1116: # TODO: consider whether we need more care around stateful featurizers,

econml/sklearn_extensions/linear_model.py (2 lines):
    - line 27: # TODO: consider working around relying on sklearn implementation details
    - line 1771: # TODO: Add other types of covariance estimation (e.g. Newey-West (HAC), HC2, HC3)

econml/_cate_estimator.py (2 lines):
    - line 588: # TODO: what if input is sparse? - there's no equivalent to einsum,
    - line 1140: # TODO Share some logic with non-discrete version

econml/orf/_ortho_forest.py (2 lines):
    - line 46: # TODO: consider working around relying on sklearn implementation details
    - line 318: # TODO: Check performance

econml/inference/_bootstrap.py (2 lines):
    - line 70: # TODO: Add a __dir__ implementation?
    - line 177: # TODO: studentized bootstrap? this would be more accurate in most cases but can we avoid

econml/sklearn_extensions/model_selection.py (1 line):
    - line 16: # TODO: conisder working around relying on sklearn implementation details

prototypes/orthogonal_forests/causal_tree.py (1 line):
    - line 65: rho = - moment / grad # TODO: Watch out for division by zero!

econml/grf/_base_grf.py (1 line):
    - line 216: # TODO: support freq_weight and sample_var

econml/iv/sieve/_tsls.py (1 line):
    - line 256: # TODO: is it right that the effective number of intruments is the

econml/dr/_drlearner.py (1 line):
    - line 1175: # TODO: support freq_weight and sample_var in debiased lasso

econml/dynamic/dml/_dml.py (1 line):
    - line 151: # TODO: update docs

econml/_ortho_learner.py (1 line):
    - line 747: # TODO: ideally, we'd also infer whether we need a GroupKFold (if groups are passed)

econml/dml/causal_forest.py (1 line):
    - line 515: # TODO: consider whether we need more care around stateful featurizers,