{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# SVI Part II: Conditional Independence, Subsampling, and Amortization\n",
"\n",
"## The Goal: Scaling SVI to Large Datasets\n",
"\n",
"For a model with $N$ observations, running the `model` and `guide` and constructing the ELBO involves evaluating log pdf's whose complexity scales badly with $N$. This is a problem if we want to scale to large datasets. Luckily, the ELBO objective naturally supports subsampling provided that our model/guide have some conditional independence structure that we can take advantage of. For example, in the case that the observations are conditionally independent given the latents, the log likelihood term in the ELBO can be approximated with\n",
"\n",
"$$ \\sum_{i=1}^N \\log p({\\bf x}_i | {\\bf z}) \\approx \\frac{N}{M}\n",
"\\sum_{i\\in{\\mathcal{I}_M}} \\log p({\\bf x}_i | {\\bf z}) $$\n",
"\n",
"where $\\mathcal{I}_M$ is a mini-batch of indices of size $M$ with $M<N$ (for a discussion please see references [1,2]). Great, problem solved! But how do we do this in Pyro?\n",
"\n",
"## Marking Conditional Independence in Pyro\n",
"\n",
"If a user wants to do this sort of thing in Pyro, he or she first needs to make sure that the model and guide are written in such a way that Pyro can leverage the relevant conditional independencies. Let's see how this is done. Pyro provides two language primitives for marking conditional independencies: `plate` and `markov`. Let's start with the simpler of the two.\n",
"\n",
"### Sequential `plate`\n",
"\n",
"Let's return to the example we used in the [previous tutorial](svi_part_i.ipynb). For convenience let's replicate the main logic of `model` here:\n",
"\n",
"```python\n",
"def model(data):\n",
" # sample f from the beta prior\n",
" f = pyro.sample(\"latent_fairness\", dist.Beta(alpha0, beta0))\n",
" # loop over the observed data using pyro.sample with the obs keyword argument\n",
" for i in range(len(data)):\n",
" # observe datapoint i using the bernoulli likelihood\n",
" pyro.sample(\"obs_{}\".format(i), dist.Bernoulli(f), obs=data[i])\n",
"``` \n",
"\n",
"For this model the observations are conditionally independent given the latent random variable `latent_fairness`. To explicitly mark this in Pyro we basically just need to replace the Python builtin `range` with the Pyro construct `plate`:\n",
"\n",
"```python\n",
"def model(data):\n",
" # sample f from the beta prior\n",
" f = pyro.sample(\"latent_fairness\", dist.Beta(alpha0, beta0))\n",
" # loop over the observed data [WE ONLY CHANGE THE NEXT LINE]\n",
" for i in pyro.plate(\"data_loop\", len(data)): \n",
" # observe datapoint i using the bernoulli likelihood\n",
" pyro.sample(\"obs_{}\".format(i), dist.Bernoulli(f), obs=data[i])\n",
"```\n",
"\n",
"We see that `pyro.plate` is very similar to `range` with one main difference: each invocation of `plate` requires the user to provide a unique name. The second argument is an integer just like for `range`. \n",
"\n",
"So far so good. Pyro can now leverage the conditional independency of the observations given the latent random variable. But how does this actually work? Basically `pyro.plate` is implemented using a context manager. At every execution of the body of the `for` loop we enter a new (conditional) independence context which is then exited at the end of the `for` loop body. Let's be very explicit about this: \n",
"\n",
"- because each observed `pyro.sample` statement occurs within a different execution of the body of the `for` loop, Pyro marks each observation as independent\n",
"- this independence is properly a _conditional_ independence _given_ `latent_fairness` because `latent_fairness` is sampled _outside_ of the context of `data_loop`.\n",
"\n",
"Before moving on, let's mention some gotchas to be avoided when using sequential `plate`. Consider the following variant of the above code snippet:\n",
"\n",
"```python\n",
"# WARNING do not do this!\n",
"my_reified_list = list(pyro.plate(\"data_loop\", len(data)))\n",
"for i in my_reified_list: \n",
" pyro.sample(\"obs_{}\".format(i), dist.Bernoulli(f), obs=data[i])\n",
"```\n",
"\n",
"This will _not_ achieve the desired behavior, since `list()` will enter and exit the `data_loop` context completely before a single `pyro.sample` statement is called. Similarly, the user needs to take care not to leak mutable computations across the boundary of the context manager, as this may lead to subtle bugs. For example, `pyro.plate` is not appropriate for temporal models where each iteration of a loop depends on the previous iteration; in this case a `range` or `pyro.markov` should be used instead.\n",
"\n",
"### Vectorized `plate`\n",
"\n",
"Conceptually vectorized `plate` is the same as sequential `plate` except that it is a vectorized operation (as `torch.arange` is to `range`). As such it potentially enables large speed-ups compared to the explicit `for` loop that appears with sequential `plate`. Let's see how this looks for our running example. First we need `data` to be in the form of a tensor:\n",
"\n",
"```python\n",
"data = torch.zeros(10)\n",
"data[0:6] = torch.ones(6) # 6 heads and 4 tails\n",
"```\n",
"\n",
"Then we have:\n",
"\n",
"```python\n",
"with plate('observe_data'):\n",
" pyro.sample('obs', dist.Bernoulli(f), obs=data)\n",
"```\n",
"\n",
"Let's compare this to the analogous sequential `plate` usage point-by-point:\n",
"- both patterns requires the user to specify a unique name.\n",
"- note that this code snippet only introduces a single (observed) random variable (namely `obs`), since the entire tensor is considered at once. \n",
"- since there is no need for an iterator in this case, there is no need to specify the length of the tensor(s) involved in the `plate` context\n",
"\n",
"Note that the gotchas mentioned in the case of sequential `plate` also apply to vectorized `plate`."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Subsampling\n",
"\n",
"We now know how to mark conditional independence in Pyro. This is useful in and of itself (see the [dependency tracking section](svi_part_iii.ipynb) in SVI Part III), but we'd also like to do subsampling so that we can do SVI on large datasets. Depending on the structure of the model and guide, Pyro supports several ways of doing subsampling. Let's go through these one by one.\n",
"\n",
"### Automatic subsampling with `plate`\n",
"\n",
"Let's look at the simplest case first, in which we get subsampling for free with one or two additional arguments to `plate`:\n",
"\n",
"```python\n",
"for i in pyro.plate(\"data_loop\", len(data), subsample_size=5):\n",
" pyro.sample(\"obs_{}\".format(i), dist.Bernoulli(f), obs=data[i])\n",
"``` \n",
"\n",
"That's all there is to it: we just use the argument `subsample_size`. Whenever we run `model()` we now only evaluate the log likelihood for 5 randomly chosen datapoints in `data`; in addition, the log likelihood will be automatically scaled by the appropriate factor of $\\tfrac{10}{5} = 2$. What about vectorized `plate`? The incantantion is entirely analogous:\n",
"\n",
"```python\n",
"with plate('observe_data', size=10, subsample_size=5) as ind:\n",
" pyro.sample('obs', dist.Bernoulli(f), \n",
" obs=data.index_select(0, ind))\n",
"```\n",
"\n",
"Importantly, `plate` now returns a tensor of indices `ind`, which, in this case will be of length 5. Note that in addition to the argument `subsample_size` we also pass the argument `size` so that `plate` is aware of the full size of the tensor `data` so that it can compute the correct scaling factor. Just like for sequential `plate`, the user is responsible for selecting the correct datapoints using the indices provided by `plate`. \n",
"\n",
"Finally, note that the user must pass a `device` argument to `plate` if `data` is on the GPU.\n",
"\n",
"### Custom subsampling strategies with `plate`\n",
"\n",
"Every time the above `model()` is run `plate` will sample new subsample indices. Since this subsampling is stateless, this can lead to some problems: basically for a sufficiently large dataset even after a large number of iterations there's a nonnegligible probability that some of the datapoints will have never been selected. To avoid this the user can take control of subsampling by making use of the `subsample` argument to `plate`. See [the docs](http://docs.pyro.ai/en/dev/primitives.html#pyro.plate) for details.\n",
"\n",
"### Subsampling when there are only local random variables \n",
"\n",
"We have in mind a model with a joint probability density given by\n",
"\n",
"$$ p({\\bf x}, {\\bf z}) = \\prod_{i=1}^N p({\\bf x}_i | {\\bf z}_i) p({\\bf z}_i) $$\n",
"\n",
"For a model with this dependency structure the scale factor introduced by subsampling scales all the terms in the ELBO by the same amount. This is the case, for example, for a vanilla VAE. This explains why for the VAE it's permissible for the user to take complete control over subsampling and pass mini-batches directly to the model and guide; `plate` is still used, but `subsample_size` and `subsample` are not. To see how this looks in detail, see the [VAE tutorial](vae.ipynb).\n",
"\n",
"\n",
"### Subsampling when there are both global and local random variables\n",
"\n",
"In the coin flip examples above `plate` appeared in the model but not in the guide, since the only thing being subsampled was the observations. Let's look at a more complicated example where subsampling appears in both the model and guide. To make things simple let's keep the discussion somewhat abstract and avoid writing a complete model and guide. \n",
"\n",
"Consider the model specified by the following joint distribution:\n",
"\n",
"$$ p({\\bf x}, {\\bf z}, \\beta) = p(\\beta) \n",
"\\prod_{i=1}^N p({\\bf x}_i | {\\bf z}_i) p({\\bf z}_i | \\beta) $$\n",
"\n",
"There are $N$ observations $\\{ {\\bf x}_i \\}$ and $N$ local latent random variables \n",
"$\\{ {\\bf z}_i \\}$. There is also a global latent random variable $\\beta$. Our guide will be factorized as\n",
"\n",
"$$ q({\\bf z}, \\beta) = q(\\beta) \\prod_{i=1}^N q({\\bf z}_i | \\beta, \\lambda_i) $$\n",
"\n",
"Here we've been explicit about introducing $N$ local variational parameters \n",
"$\\{\\lambda_i \\}$, while the other variational parameters are left implicit. Both the model and guide have conditional independencies. In particular, on the model side, given the $\\{ {\\bf z}_i \\}$ the observations $\\{ {\\bf x}_i \\}$ are independent. In addition, given $\\beta$ the latent random variables $\\{\\bf {z}_i \\}$ are independent. On the guide side, given the variational parameters $\\{\\lambda_i \\}$ and $\\beta$ the latent random variables $\\{\\bf {z}_i \\}$ are independent. To mark these conditional independencies in Pyro and do subsampling we need to make use of `plate` in _both_ the model _and_ the guide. Let's sketch out the basic logic using sequential `plate` (a more complete piece of code would include `pyro.param` statements, etc.). First, the model:\n",
"\n",
"```python\n",
"def model(data):\n",
" beta = pyro.sample(\"beta\", ...) # sample the global RV\n",
" for i in pyro.plate(\"locals\", len(data)):\n",
" z_i = pyro.sample(\"z_{}\".format(i), ...)\n",
" # compute the parameter used to define the observation \n",
" # likelihood using the local random variable\n",
" theta_i = compute_something(z_i) \n",
" pyro.sample(\"obs_{}\".format(i), dist.MyDist(theta_i), obs=data[i])\n",
"```\n",
"\n",
"Note that in contrast to our running coin flip example, here we have `pyro.sample` statements both inside and outside of the `plate` loop. Next the guide:\n",
"\n",
"```python\n",
"def guide(data):\n",
" beta = pyro.sample(\"beta\", ...) # sample the global RV\n",
" for i in pyro.plate(\"locals\", len(data), subsample_size=5):\n",
" # sample the local RVs\n",
" pyro.sample(\"z_{}\".format(i), ..., lambda_i)\n",
"```\n",
"\n",
"Note that crucially the indices will only be subsampled once in the guide; the Pyro backend makes sure that the same set of indices are used during execution of the model. For this reason `subsample_size` only needs to be specified in the guide."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Amortization\n",
"\n",
"Let's again consider a model with global and local latent random variables and local variational parameters:\n",
"\n",
"$$ p({\\bf x}, {\\bf z}, \\beta) = p(\\beta) \n",
"\\prod_{i=1}^N p({\\bf x}_i | {\\bf z}_i) p({\\bf z}_i | \\beta) \\qquad \\qquad\n",
"q({\\bf z}, \\beta) = q(\\beta) \\prod_{i=1}^N q({\\bf z}_i | \\beta, \\lambda_i) $$\n",
"\n",
"For small to medium-sized $N$ using local variational parameters like this can be a good approach. If $N$ is large, however, the fact that the space we're doing optimization over grows with $N$ can be a real problem. One way to avoid this nasty growth with the size of the dataset is *amortization*.\n",
"\n",
"This works as follows. Instead of introducing local variational parameters, we're going to learn a single parametric function $f(\\cdot)$ and work with a variational distribution that has the form \n",
"\n",
"$$q(\\beta) \\prod_{n=1}^N q({\\bf z}_i | f({\\bf x}_i))$$\n",
"\n",
"The function $f(\\cdot)$—which basically maps a given observation to a set of variational parameters tailored to that datapoint—will need to be sufficiently rich to capture the posterior accurately, but now we can handle large datasets without having to introduce an obscene number of variational parameters. \n",
"This approach has other benefits too: for example, during learning $f(\\cdot)$ effectively allows us to share statistical power among different datapoints. Note that this is precisely the approach used in the [VAE](vae.ipynb).\n",
"\n",
"## Tensor shapes and vectorized `plate`\n",
"\n",
"The usage of `pyro.plate` in this tutorial was limited to relatively simple cases. For example, none of the `plate`s were nested inside of other `plate`s. In order to make full use of `plate`, the user must be careful to use Pyro's tensor shape semantics. For a discussion see the [tensor shapes tutorial](tensor_shapes.ipynb)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## References\n",
"\n",
"[1] `Stochastic Variational Inference`,\n",
"<br/> \n",
"Matthew D. Hoffman, David M. Blei, Chong Wang, John Paisley\n",
"\n",
"[2] `Auto-Encoding Variational Bayes`,<br/> \n",
"Diederik P Kingma, Max Welling"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.10"
}
},
"nbformat": 4,
"nbformat_minor": 2
}