Summary: 111 instances, 96 unique

| Text | Count |
| --- | --- |
| # TODO: technically we should make sure that we add a | 1 |
| # TODO: Figure out attribute namming weirdness here | 1 |
| # TODO: Consider returning an MVN here instead | 1 |
| # TODO: improve efficiency for multi-task models | 1 |
| # TODO: filter out points that are worse than the reference point first here | 1 |
| # TODO (T52818288): Properly use lazy tensors in scalarize_posterior | 1 |
| TODO: implement support using independent warping functions | 1 |
| # TODO: Allow qMC sampling | 2 |
| # TODO: make this be the default KPMatmulLT diagonal method in gpytorch | 2 |
| # Use the mean of the previous noise values (TODO: be smarter here). | 1 |
| # TODO: Make sure observation noise is transformed correctly | 1 |
| # TODO swap out scatter_ so that comparisons could be int instead of long | 1 |
| # TODO: add support for outcome transforms. | 1 |
| # TODO: Refactor these as PosteriorTransform as well. | 1 |
| # TODO: Don't instantiate a generator | 1 |
| # TODO: Allow passing in inner samplers directly | 1 |
| # TODO: Avoid unnecessary computation by not generating all candidates | 1 |
| # TODO: give the Posterior API an add_observation_noise function to avoid | 1 |
| # TODO: ensure this works in batch mode, which it does not currently. | 1 |
| # TODO: support splitting outcome transforms. | 1 |
| # TODO: return a HigherOrderGPPosterior once rescaling constant | 1 |
| # TODO: use gpytorch's pivoted cholesky instead once that gets an exposed list | 1 |
| # TODO: enforce the diagonalization to return a KPLT for all shapes in | 1 |
| # TODO: cleanup the reshaping here | 1 |
| # TODO: Consider different aggregation (i.e. max) across q-batch | 1 |
| # TODO: Dedup handling of this here and in the constructor (maybe via a | 2 |
| # TODO: Investigate differentiability of MVaR. | 1 |
| TODO: Support `m > 2` objectives. | 1 |
| TODO: write this in C++ for faster looping. | 1 |
| # TODO: Add support for outcome transforms. | 1 |
| performs the decomposition under minimization. TODO: use maximization | 1 |
| TODO: Use the observed values to identify the fantasy sub-tree that is closest to | 1 |
| # TODO: expand options for the mean module via batch shaping? | 1 |
| # TODO: ensure that this still works for structured noise solves. | 1 |
| # @TODO make sure fidelity_dims align in project, expand & cost_aware_utility | 1 |
| TODO: Add support for outcome constraints. | 1 |
| # TODO: Implement best point computation from training data | 1 |
| TODO: Investigate further. | 1 |
| # TODO: Find a general way to do this efficiently. | 1 |
| TODO: add support for utilizing gradient information | 1 |
| # TODO: clean this up after removing AcquisitionObjective. | 2 |
| # TODO: remove these variables from `state_dict()` so that when calling | 1 |
| # TODO: improve efficiency by not recomputing baseline-baseline | 1 |
| # TODO: Add support for HeteroskedasticSingleTaskGP. | 3 |
| # TODO: we could use batches to compute (q choose i) and (q choose q-i) | 1 |
| # GPyTorchPosterior (TODO: Should we Lazy-evaluate the mean here as well?) | 1 |
| # TODO: Allow subsetting of other covar modules | 2 |
| # TODO: make this a flag? | 1 |
| # TODO: Deduplicate repeated evaluations / deal with numerical degeneracies | 1 |
| # Use the mean of the previous noise values (TODO: be smarter here). | 1 |
| # TODO: Can we enable backprop through the latent covariances? | 1 |
| # TODO (T41270962): Support task-specific noise levels in likelihood | 1 |
| # TODO: Clean up once ScalarizedObjective is removed. | 1 |
| # TODO: Add prune_baseline functionality as for qNEI | 1 |
| TODO: make this method return the sampler object, to avoid doing burn-in | 1 |
| TODO: remove this when MultiTask models support outcome transforms. | 2 |
| # TODO: Implement more efficient way to compute posterior over both training and | 1 |
| # TODO: speed computation of covariance matrix | 1 |
| For now uses the same q' as in `full_optimizer`. TODO: allow different `q`. | 1 |
| # TODO: Add support for custom likelihoods. | 2 |
| # TODO: Make sure this doesn't change base samples in-place | 1 |
| # TODO: Delta method, possibly issue warning | 4 |
| # TODO: update to follow new gpytorch convention resulting from | 1 |
| # TODO: Use sparse representation (not sure if scipy optim supports that) | 1 |
| # TODO: when batched kernels are supported in RandomFourierFeatures, | 1 |
| TODO: similar to qNEHVI, when we are using sequential greedy candidate | 1 |
| # TODO: we could refactor this __init__ logic into a | 1 |
| # TODO: support batched inputs (req. dealing with ragged tensors) | 2 |
| # TODO: Can we get the model batch shape property from the model? | 1 |
| # TODO: Handle approx. zero losses (normalize by min/max loss range) | 1 |
| # TODO: remove this once torch.cholesky issue is resolved | 1 |
| # TODO: Validate noise shape | 1 |
| # TODO: make D a sparse matrix once pytorch has better support for | 1 |
| # TODO: get rid of this once cholesky_inverse supports batch mode | 1 |
| # TODO: support multiple batch dimensions here | 1 |
| # TODO: remove the exception handling, when the pytorch | 1 |
| # TODO: enable vectorization/parallelization here | 1 |
| # TODO: Allow broadcasting of model batch shapes | 1 |
| # TODO: make sure there are not deduplicate constraints | 1 |
| # TODO: Do not instantiate full covariance for lazy tensors (ideally we simplify | 1 |
| # TODO: use the exposed joint covariances from the prediction strategy | 1 |
| # TODO: Find a better way to do this | 1 |
| # ensure scalars agree (TODO: Allow different priors for different outputs) | 1 |
| TODO: Use DKLV17 and LKF17 for the box decomposition as in [Yang2019]_ for | 1 |
| # TODO: Update the exp moving average efficiently | 1 |
| # TODO: be smart about how we can update covar matrix here | 1 |
| TODO: refactor some/all of this into the MCSampler. | 2 |
| # TODO: support batched inputs (req. dealing with ragged tensors) | 2 |
| # TODO: Ideally support RFFs for multi-outputs instead of having to | 1 |
| # TODO: We can support fixed features, see Max's comment on D33551393. We can | 1 |
| "composite_mtbo.ipynb", # TODO: very slow, figure out if we can make it faster | 1 |
| new_weights = getattr(sampler, "base_weights", None) # TODO: generalize this | 1 |
| TODO: Support t-batches of initial conditions. | 1 |
| # TODO: use batched `torch.randperm` when available: | 1 |
| # TODO: make minimum value dtype-dependent | 1 |
| TODO: replace this with a more efficient decomposition. E.g. | 1 |
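The header distinguishes total occurrences (111) from distinct comment texts (96): identical TODO lines appearing in several places are grouped and counted once in the table, with the Count column giving the number of occurrences. As a rough illustration only, the sketch below shows one way such a summary could be regenerated by scanning the Python sources for lines containing `TODO`; the root path `botorch` and the exact matching rule are assumptions, not the tooling that actually produced this report.

```python
# Minimal sketch (not the actual tooling): scan a source tree for TODO lines,
# group identical comment texts, and report total vs. unique counts.
from collections import Counter
from pathlib import Path


def summarize_todos(root: str) -> Counter:
    counts: Counter = Counter()
    for path in Path(root).rglob("*.py"):
        for line in path.read_text(errors="ignore").splitlines():
            if "TODO" in line:
                # Group by the stripped line text; the Count column is the
                # number of times that exact text occurs across the tree.
                counts[line.strip()] += 1
    return counts


if __name__ == "__main__":
    counts = summarize_todos("botorch")  # hypothetical path to the source tree
    total = sum(counts.values())
    print(f"Summary: {total} instances, {len(counts)} unique")
    for text, n in counts.most_common():
        print(f"{text}\t{n}")
```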