Summary: 45 instances, 38 unique

Count  Text
    3  # TODO: consider whether we need more care around stateful featurizers,
    1  # TODO: Add other types of covariance estimation (e.g. Newey-West (HAC), HC2, HC3)
    2  # TODO: it seems like it would be better to just return the tree itself rather than plot it;
    2  # TODO: do we need to give the user more control over other arguments to fit?
    1  # TODO: it seems like we need to sum over the batch because we can only apply gradient to a scalar,
    1  # TODO: might be faster to break into connected components first
    1  # TODO: make sure to use random seeds wherever necessary
    1  rho = - moment / grad # TODO: Watch out for division by zero!
    1  # TODO: prel_model_effect could allow sample_var and freq_weight?
    1  # TODO: is it right that the effective number of intruments is the
    2  # TODO: consider working around relying on sklearn implementation details
    3  # TODO: support freq_weight and sample_var in debiased lasso
    1  # TODO: consider addding an API to DML that allows for better understanding of how the nuisance inputs are
    1  # TODO: what if input is sparse? - there's no equivalent to einsum,
    1  # TODO Share some logic with non-discrete version
    1  # TODO: Note that there's no column metadata for the sample number - should there be?
    1  # TODO: do correct adjustment for sample_var
    1  # TODO: check compatibility of X and Y lengths
    1  # TODO: would using einsum's paths to optimize the order of merging help?
    1  # TODO: studentized bootstrap? this would be more accurate in most cases but can we avoid
    1  # TODO: make sure that the public API consistently uses "T" instead of "P" for the treatment
    1  # TODO: Consider investigating other performance ideas for these cases
    1  # TODO: ideally, we'd also infer whether we need a GroupKFold (if groups are passed)
    1  # TODO: does the numeric stability actually make any difference?
    1  # TODO: any way to get this to work on batches of arbitrary size?
    1  # TODO: bail out also if categorical columns, classification, random_state changed?
    1  # TODO: Add a __dir__ implementation?
    1  # TODO: this utility is documented but internal; reimplement?
    1  # TODO: conisder working around relying on sklearn implementation details
    1  # TODO: we can't currently handle unseen values of the feature column when getting the effect;
    1  # TODO: should we also train a new model_y under any circumstances when warm_start is True?
    1  # TODO: Check performance
    1  # TODO: enrich outcome logic for multi-class classification when that is supported
    1  # TODO: update docs
    1  # TODO: this utility is even less public...
    1  # TODO: any way to avoid creating a copy if the array was already dense?
    1  # TODO: support freq_weight and sample_var
    1  # TODO: is there a more robust way to do this?
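A summary of this shape (total instances vs. unique comment texts) can be reproduced with a short script. The `summarize_todos` helper below is a hypothetical sketch, not part of the codebase being surveyed; it counts each distinct `# TODO` comment across an iterable of source lines.

```python
import re
from collections import Counter

def summarize_todos(lines):
    """Count each distinct TODO comment (text from '# TODO' to end of line)."""
    pattern = re.compile(r"#\s*TODO\b.*$")
    counts = Counter()
    for line in lines:
        match = pattern.search(line)
        if match:
            counts[match.group(0).strip()] += 1
    return counts

# Small demonstration with made-up source lines:
sample = [
    "x = 1  # TODO: update docs",
    "# TODO: update docs",
    "rho = - moment / grad  # TODO: Watch out for division by zero!",
]
counts = summarize_todos(sample)
total = sum(counts.values())
print(f"Summary: {total} instances, {len(counts)} unique")
for text, n in counts.most_common():
    print(f"{n:>5}  {text}")
```

Feeding every `.py` file in the repository through `summarize_todos` would yield the table above.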
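One entry above flags a potential division by zero in `rho = - moment / grad`. A common guard (a sketch only; `moment` and `grad` stand in for whatever arrays the original code uses, and this is not the library's actual fix) clamps tiny denominators away from zero while preserving their sign:

```python
import numpy as np

def safe_rho(moment, grad, eps=1e-12):
    """Compute rho = -moment / grad, guarding against near-zero denominators."""
    moment = np.asarray(moment, dtype=float)
    grad = np.asarray(grad, dtype=float)
    # Replace denominators smaller than eps in magnitude with +/-eps.
    safe_grad = np.where(np.abs(grad) < eps, np.copysign(eps, grad), grad)
    return -moment / safe_grad
```

The result is always finite, at the cost of capping `|rho|` at `|moment| / eps` when `grad` vanishes.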