Brief Survey of Generative Models

\[\newcommand{\argmin}{\mathop{\mathrm{argmin}}} \newcommand{\argmax}{\mathop{\mathrm{argmax}}}\]

Overview

This page is a high-level summary of various generative models, with only brief explanations. The models covered are as follows:

Variational Autoencoders (VAE)

  • Kingma, Diederik P., and Max Welling. “Auto-encoding variational bayes.” arXiv preprint arXiv:1312.6114 (2013).

Adversarial Variational Bayes (AVB)

Extension of VAE that allows non-Gaussian encoders

  • Mescheder, Lars, Sebastian Nowozin, and Andreas Geiger. “Adversarial variational bayes: Unifying variational autoencoders and generative adversarial networks.” arXiv preprint arXiv:1701.04722 (2017).

Generative Adversarial Networks (GAN)

  • Goodfellow, Ian, et al. “Generative adversarial nets.” Advances in neural information processing systems. 2014.

Generalized divergence minimization GAN (\(f\)-GAN)

  • Nowozin, Sebastian, Botond Cseke, and Ryota Tomioka. “f-gan: Training generative neural samplers using variational divergence minimization.” Advances in Neural Information Processing Systems. 2016.

Wasserstein GAN (WGAN)

  • Arjovsky, Martin, Soumith Chintala, and Léon Bottou. “Wasserstein gan.” arXiv preprint arXiv:1701.07875 (2017).

Adversarial Autoencoders (AAE)

  • Makhzani, Alireza, et al. “Adversarial autoencoders.” arXiv preprint arXiv:1511.05644 (2015).

Wasserstein Auto-Encoder (WAE)

  • Tolstikhin, Ilya, et al. “Wasserstein Auto-Encoders.” arXiv preprint arXiv:1711.01558 (2017).

Cramer GAN

  • Bellemare, Marc G., et al. “The Cramer Distance as a Solution to Biased Wasserstein Gradients.” arXiv preprint arXiv:1705.10743 (2017).

VAE

Model setup:

  • Recognition model: \(q_\phi(z \vert x) = \mathcal N(\mu=h_1(x), \sigma^2 \mathbf I=h_2(x)\mathbf I)\)
  • Assumed fixed prior: \(p(z) = \mathcal N(0,\mathbf I)\)
  • Generation model: \(p_\theta(x \vert z) = \mathcal N(\mu=g_1(z), \sigma^2 \mathbf I=g_2(z)\mathbf I)\)
    • Implied (but intractable) posterior: \(p_\theta(z\vert x)\)
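
The setup above translates almost directly into code. Below is a minimal sketch, assuming PyTorch; the module name `GaussianMLP`, the hidden width, and the MNIST-like dimensions are illustrative choices, not from the paper.

```python
import torch
import torch.nn as nn

class GaussianMLP(nn.Module):
    """Map an input to the mean and log-variance of a diagonal Gaussian."""
    def __init__(self, in_dim, out_dim, hidden=256):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh())
        self.mu = nn.Linear(hidden, out_dim)       # plays the role of h1 / g1
        self.log_var = nn.Linear(hidden, out_dim)  # plays the role of h2 / g2

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.log_var(h)

encoder = GaussianMLP(784, 20)   # q_phi(z | x), recognition model
decoder = GaussianMLP(20, 784)   # p_theta(x | z), generation model
```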

Key equations:

\[\begin{equation} \begin{split} \log p_\theta (x^i) = D_{KL}(q_\phi(z\vert x^i)\| p_\theta(z\vert x^i)) + \mathcal L(\theta, \phi, x^i) \end{split} \end{equation}\] \[\begin{equation} \begin{split} \mathcal L(\theta, \phi, x^i) &= \mathbb{E}_{z\sim q_\phi(z\vert x^i)}[\log p_\theta(x^i,z) - \log q_\phi(z\vert x^i)]\\\\ &= \mathbb{E}_{z\sim q_\phi(z\vert x^i)}[\log p_\theta(x^i,z)] + H[q_\phi(z\vert x^i)]\\\\ &= \mathbb{E}_{z\sim q_\phi(z\vert x^i)}[\log p_\theta(x^i \vert z)] - D_{KL}[q_\phi(z\vert x^i) \| p(z)]\\\\ \end{split} \label{vae_elbo} \end{equation}\]

Optimization objective:

\[\hat\theta, \hat\phi = \argmax_{\theta, \phi} \sum_i \mathcal L(\theta, \phi, x^i)\]

Gradient-friendly Monte Carlo:

Difficulties in calculating \(\mathcal L(\theta, \phi, x^i)\):

  • Due to the generality of \(q\) and \(p\) (typically neural networks), the expectation in \(\ref{vae_elbo}\) has no analytical form, so we must resort to Monte Carlo estimation.
  • Furthermore, directly sampling \(z\) from \(q\) makes it difficult to take derivatives with respect to the parameters \(\phi\) that parameterize the distribution \(q\).

Solution: Reparameterization Trick

Find a smooth, invertible transformation \(z=g_\phi(\epsilon)\) such that, with \(\epsilon\) drawn from a fixed (parameter-free) distribution \(p(\epsilon)\), we have \(z \sim q(z; \phi)\), so

\[\mathbb{E}_{z\sim q(z;\phi)}[f(z)] = \mathbb{E}_{\epsilon\sim p(\epsilon)}[f(g_\phi(\epsilon))]\]

For the Normal distribution used here (\(q_\phi(z\vert x)\)), it is convenient to use location-scale transformation, \(z=\mu+\sigma * \epsilon\) with \(\epsilon \sim \mathcal N(0,\mathbf I)\).

\[\begin{equation} \widetilde{\mathcal{L}}(\theta, \phi, x^i) = \frac{1}{L} \sum_{l=1}^L \log p_\theta(x^i \vert z^{i,l}) - D_{KL}[q_\phi(z\vert x^i) \| p(z)] \end{equation}\] \[z^{i,l} = \mu_{x^i} + \sigma_{x^i} * \epsilon ^{i,l} ~~\text{and}~~ \epsilon^{i,l} \sim \mathcal N(0,\mathbf I)\]
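
A minimal sketch of this estimator with \(L=1\) (vectorized over a batch), assuming the `GaussianMLP` encoder/decoder from the sketch above; the KL term uses the standard closed form for \(\mathcal N(\mu, \sigma^2 \mathbf I)\) against \(\mathcal N(0, \mathbf I)\).

```python
import math
import torch

def elbo_estimate(x, encoder, decoder):
    mu, log_var = encoder(x)                    # parameters of q_phi(z | x)
    eps = torch.randn_like(mu)                  # eps ~ N(0, I)
    z = mu + torch.exp(0.5 * log_var) * eps     # reparameterization trick
    x_mu, x_log_var = decoder(z)                # parameters of p_theta(x | z)
    # log N(x; x_mu, exp(x_log_var) I), summed over data dimensions
    log_px = -0.5 * (x_log_var + (x - x_mu) ** 2 / x_log_var.exp()
                     + math.log(2 * math.pi)).sum(dim=1)
    # D_KL[N(mu, sigma^2 I) || N(0, I)] = -1/2 sum(1 + log sigma^2 - mu^2 - sigma^2)
    kl = -0.5 * (1 + log_var - mu ** 2 - log_var.exp()).sum(dim=1)
    return (log_px - kl).mean()                 # maximize w.r.t. theta and phi
```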

For a total of \(N\) data points with mini-batch size \(M\):

\[\begin{equation} \begin{split} {\mathcal L}(\theta, \phi; X) = \sum_{i=1}^N \mathcal L(\theta, \phi, x^i) \approx \widetilde {\mathcal L^M}(\theta, \phi; X) = \frac{N}{M} \sum_{i=1}^M \widetilde {\mathcal L}(\theta, \phi, x^i) \end{split} \end{equation}\]

For sufficiently large mini-batch size \(M\), the inner-loop sample size \(L\) can be set to 1. Because both the mini-batch selection and the expectation estimate are stochastic, this is also called doubly stochastic estimation.

Using non-Gaussian encoders

Todo: discuss AVB paper

Gumbel trick for discrete latent variables

References for this section:

  1. The Gumbel-max trick: https://hips.seas.harvard.edu/blog/2013/04/06/the-gumbel-max-trick-for-discrete-distributions/
  2. Balog, Matej, et al. “Lost Relatives of the Gumbel Trick.” arXiv preprint arXiv:1706.04161 (2017).
  3. Jang, Eric, Shixiang Gu, and Ben Poole. “Categorical reparameterization with gumbel-softmax.” arXiv preprint arXiv:1611.01144 (2016).

Gumbel distribution: the standard \(\text{Gumbel}(0,1)\) has CDF \(F(g) = \exp(-\exp(-g))\), so a draw can be generated as \(g = -\log(-\log u)\) with \(u \sim \text{Uniform}(0,1)\).
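
A sketch of both the Gumbel-max trick and its Gumbel-softmax relaxation (refs 1 and 3 above), assuming PyTorch; `logits` are unnormalized log-probabilities of the categorical distribution.

```python
import torch
import torch.nn.functional as F

def sample_gumbel(shape, eps=1e-20):
    u = torch.rand(shape)
    return -torch.log(-torch.log(u + eps) + eps)   # g ~ Gumbel(0, 1)

def gumbel_max(logits):
    # argmax(log p + g) is an exact categorical sample, but blocks gradients.
    return torch.argmax(logits + sample_gumbel(logits.shape), dim=-1)

def gumbel_softmax(logits, tau=1.0):
    # Differentiable relaxation; approaches one-hot samples as tau -> 0.
    return F.softmax((logits + sample_gumbel(logits.shape)) / tau, dim=-1)
```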


\(f\)-GAN and GAN

Prelude on \(f\)-divergence and its variational lower bound

The f-divergence family

\[\begin{equation} D_f(P \| Q) = \int_{\mathcal X} q(x) ~ f\left( \frac{p(x)}{q(x)}\right) dx \label{f_div} \end{equation}\]

where the generator function \(f: \mathbb{R}_{+} \rightarrow \mathbb{R}\) is a convex, lower-semicontinuous function satisfying \(f(1) = 0\).

Every convex, lower-semicontinuous function \(f\) has a convex conjugate function \(f^c\), also known as the Fenchel conjugate, defined as

\[\begin{equation} f^c(t) = \underset {u \in \text{dom}_f}{\text{sup}} \{ut - f(u)\} \end{equation}\]
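
As a worked example (not in the original notes), take the forward-KL generator \(f(u) = u\log u\). Setting the derivative of \(ut - u\log u\) with respect to \(u\) to zero gives \(t - \log u - 1 = 0\), i.e. \(u = e^{t-1}\), so

\[f^c(t) = ut - u\log u \,\Big\vert_{u=e^{t-1}} = te^{t-1} - (t-1)e^{t-1} = e^{t-1},\]

which matches the Forward KL row in the tables further below.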

The function \(f^c\) is again convex and lower-semicontinuous, and the pair \((f,f^c)\) is dual, i.e. \(\left(f^{c}\right)^c=f\). So we can represent \(f\) as

\[\begin{equation} f(u) = \underset {t \in \text{dom}_{f^c}}{\text{sup}} \{tu - f^c(t)\} \end{equation}\]

With this we can establish a general lower bound for estimating the \(f\)-divergence:

\[\begin{equation} \begin{split} D_f(P \| Q) &= \int_{\mathcal X} q(x) \underset {t \in \text{dom}_{f^c}}{\text{sup}} \{t \frac{p(x)}{q(x)} - f^c(t)\} dx \\\\ & \ge \underset {T \in {\mathcal T}} {\text{sup}} \int_{\mathcal X} \left( p(x)T(x) - q(x)f^c(T(x)) \right) dx \\\\ & = \underset {T \in {\mathcal T}} {\text{sup}} \left( \mathbb{E}_{x\sim P}[T(x)] - \mathbb{E}_{x\sim Q}[f^c(T(x))] \right) \end{split} \label{f_lowerbound} \end{equation}\]

where \(\mathcal T\) is an arbitrary class of functions \(T: \mathcal X \rightarrow \mathbb R\). The inequality is due to Jensen's inequality and the constraints imposed by \(\mathcal T\).

The bound is tight for

\[\begin{equation} T^*(x) = f' \left(\frac{p(x)}{q(x)} \right) \end{equation}\]

Generative adversarial training

Suppose our goal is to come up with a distribution \(Q\) (the model) that is close to \(P\) (the data distribution), where closeness (the loss) is measured by \(D_f(P \| Q)\). However, direct calculation of \(\ref{f_div}\) is intractable, e.g. when the functional form of \(P\) is unknown and \(Q\) is a complex model parameterized by a neural network.

To be specific:

  • Evaluating \(q(x)\) at any given \(x\) is easy, but integrating it is hard because it lacks a simple closed form.
  • For \(p(x)\), we do not know how to evaluate it at all.
  • Sampling from both \(P\) and \(Q\) is easy: drawing from the data set approximates \(x \sim P\), and the model \(Q\) can take random vectors as input, which are easy to produce.

Since sampling from both is easy, \(\ref{f_lowerbound}\) offers a way to estimate a lower bound of the divergence: we maximize the lower bound over \(T\) so that it approaches the true divergence, then minimize it over \(Q\). This is stated formally as follows.

\[\begin{equation} F(\theta, \omega) = \mathbb{E}_{x\sim P}[T_\omega(x)] + \mathbb{E}_{x\sim Q_\theta}[-f^c(T_\omega(x))] \end{equation}\] \[\begin{equation} \hat \theta = \argmin_\theta \max_\omega F(\theta, \omega) \end{equation}\]

To ensure that the output of \(T_\omega\) respects the domain of \(f^c\), we define \(T_\omega(x) = g_f(V_\omega(x))\), where \(V_\omega: \mathcal X \rightarrow \mathbb R\) has no constraint on its output range and \(g_f: \mathbb R \rightarrow \text{dom}_{f^c}\) is an output activation function with a suitable range, chosen specifically for the \(f\)-divergence used.
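
As a sketch (assuming PyTorch), the saddle-point objective can be written generically, with the critic network \(V_\omega\), the activation \(g_f\), and the conjugate \(f^c\) passed in as callables; see the tables further below for concrete choices per divergence.

```python
import torch

def f_gan_objective(x_real, x_fake, V, g_f, f_conj):
    t_real = g_f(V(x_real))   # T_omega(x) = g_f(V_omega(x))
    t_fake = g_f(V(x_fake))
    # F(theta, omega) = E_P[T(x)] - E_Q[f^c(T(x))]:
    # maximize over omega (critic), minimize over theta (generator)
    return t_real.mean() - f_conj(t_fake).mean()
```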

GAN

For the original GAN, the objective corresponds to a divergence similar to Jensen-Shannon:

\[\begin{equation} F(\theta, \omega) = \mathbb{E}_{x\sim P}[\log D_\omega(x)] + \mathbb{E}_{x\sim Q_\theta}[\log(1-D_\omega(x))] \end{equation}\]

with \(D_\omega(x) = 1/(1+e^{-V_\omega(x)})\), which corresponds to the following choices:

  • \(g_f(\nu)= \log(1/(1+e^{-\nu}))\)
  • \(T_\omega(x) = \log (D_\omega(x)) = g_f(V_\omega(x))\)
  • \(f^c(t) = -\log (1-\exp(t))\)
  • \(\log (1-D_\omega(x)) = -f^c(T_\omega(x))\)

Practical considerations in adversarial training

Todo: log trick, DCGAN heuristics

| Name | \(D_f(P \Vert Q)\) | Generator \(f(u)\) | \(T^*(x)\) |
| --- | --- | --- | --- |
| Forward KL | \(\int p(x) \log \frac{p(x)}{q(x)} dx\) | \(u\log u\) | \(1 +\log \frac{p(x)}{q(x)}\) |
| Reverse KL | \(\int q(x) \log \frac{q(x)}{p(x)} dx\) | \(-\log u\) | \(- \frac{q(x)}{p(x)}\) |
| Jensen-Shannon | \(\frac{1}{2} \int \left( p(x) \log \frac{2p(x)}{p(x)+q(x)} + q(x) \log \frac{2q(x)}{p(x)+q(x)} \right) dx\) | \(u\log u - (u+1) \log \frac{u+1}{2}\) | \(\log \frac{2p(x)}{p(x)+q(x)}\) |
| GAN | \(\int \left( p(x) \log \frac{2p(x)}{p(x)+q(x)} + q(x) \log \frac{2q(x)}{p(x)+q(x)} \right) dx -\log(4)\) | \(u\log u - (u+1) \log (u+1)\) | \(\log \frac{p(x)}{p(x)+q(x)}\) |

| Name | Conjugate \(f^c(t)\) | \(\text{dom}_{f^c}\) | Output activation \(g_f\) | \(f'(1)\) |
| --- | --- | --- | --- | --- |
| Forward KL | \(\exp(t-1)\) | \(\mathbb R\) | \(\nu\) | \(1\) |
| Reverse KL | \(-1-\log(-t)\) | \(\mathbb R_{-}\) | \(-\exp(\nu)\) | \(-1\) |
| Jensen-Shannon | \(-\log(2-\exp(t))\) | \(t < \log(2)\) | \(\log(2) - \log(1+\exp(-\nu))\) | \(0\) |
| GAN | \(-\log(1-\exp(t))\) | \(\mathbb R_{-}\) | \(- \log(1+\exp(-\nu))\) | \(-\log(2)\) |
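
The second table can be transcribed into Python callables that plug into the `f_gan_objective` sketch above; `F.softplus(-v)` stands in for \(\log(1+e^{-\nu})\).

```python
import math
import torch
import torch.nn.functional as F

# name -> (output activation g_f(nu), conjugate f^c(t))
DIVERGENCES = {
    "forward_kl": (lambda v: v,
                   lambda t: torch.exp(t - 1)),
    "reverse_kl": (lambda v: -torch.exp(v),
                   lambda t: -1 - torch.log(-t)),
    "jensen_shannon": (lambda v: math.log(2) - F.softplus(-v),
                       lambda t: -torch.log(2 - torch.exp(t))),
    "gan": (lambda v: -F.softplus(-v),
            lambda t: -torch.log(1 - torch.exp(t))),
}
```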

WGAN and WAE

Optimal transport (OT)

Kantorovich formulated the optimization target in optimal transport problems as follows

\[\begin{equation} W_c(P_X, P_G) = \underset{\Gamma \in \mathcal P(X \sim P_X, Y \sim P_G)}{\text{inf}} \mathbb{E}_{(x,y) \sim \Gamma}[c(x,y)] \end{equation}\]

where \(\mathcal P(X\sim P_X, Y\sim P_G)\) is the set of all joint distributions of \((X,Y)\) whose marginals are \(P_X\) and \(P_G\), respectively.

Wasserstein distance

When \(c(x,y) = \| x-y \| ^p\) for \(p \ge 1\), \(W_c^{1/p}\) is called the \(p\)-Wasserstein distance.

\[\begin{equation} W_p(P_X, P_G) = \left( \underset{\Gamma \in \mathcal P(X \sim P_X, Y \sim P_G)}{\text{inf}} \mathbb{E}_{(x,y) \sim \Gamma}[\|x - y\|^p] \right)^{1/p} \end{equation}\]

This optimization problem is highly intractable in general, due to the constraint on the couplings. However, when \(p=1\), the Kantorovich-Rubinstein duality holds:

\[\begin{equation} W_1(P_X, P_G) = \underset{f \in \text{\{1-Lipschitz\}}}{\text{sup}} \mathbb{E}_{x\sim P_X}[f(x)] - \mathbb{E}_{y\sim P_G}[f(y)] \end{equation}\]
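
For one-dimensional distributions with equal-size samples, the infimum has a closed form: the optimal coupling matches samples in sorted order. This gives a quick sanity check (a standard fact, not from the papers above):

```python
import numpy as np

def wasserstein_1d(x, y, p=1):
    # The optimal 1-D coupling pairs order statistics, so
    # W_p^p = mean |x_(i) - y_(i)|^p for equal-size samples.
    x, y = np.sort(x), np.sort(y)
    return np.mean(np.abs(x - y) ** p) ** (1 / p)

x = np.random.normal(0.0, 1.0, size=10_000)
y = np.random.normal(1.0, 1.0, size=10_000)
print(wasserstein_1d(x, y))   # ~1.0: W_1 between N(0,1) and N(1,1)
```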

The \(f\)-divergence family only considers relative probability (the ratio of the two probability density functions) and does not measure the closeness of the underlying outcomes. When the two supports are disjoint, or overlap only on a set of measure zero, the divergence between a target distribution and a \(\theta\)-parameterized distribution may fail to be continuous in \(\theta\). The Wasserstein distance, on the other hand, takes the underlying topology of the outcome space into account, is continuous and differentiable in \(\theta\) almost everywhere, and thus almost always provides a useful gradient for optimization.

Wasserstein GAN (WGAN)

Following the dual form of \(W_1\), we can form a generative-adversarial model for a data distribution \(P_D\) and a model \(Q_\theta\), with an auxiliary function \(f\) that is 1-Lipschitz continuous.

\[\begin{equation} \hat \theta = \argmin_\theta \underset{f \in \text{\{1-Lipschitz\}}}{\text{sup}} \mathbb{E}_{x\sim P_D}[f(x)] - \mathbb{E}_{x\sim Q_\theta}[f(x)] \end{equation}\]

Practical considerations for WGAN

Todo: weight clipping to enforce a \(K\)-Lipschitz constraint on \(f\) (a sketch follows below); soft gradient penalty (WGAN-GP)
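
A sketch of one critic update with the original paper's weight clipping, assuming PyTorch; `critic`, `generator`, and `opt_critic` are assumed to be defined elsewhere, and the clip value 0.01 is the paper's default.

```python
import torch

def critic_step(x_real, z, critic, generator, opt_critic, clip=0.01):
    opt_critic.zero_grad()
    x_fake = generator(z).detach()           # don't update the generator here
    # Negate E_P[f(x)] - E_Q[f(x)] because optimizers minimize.
    loss = -(critic(x_real).mean() - critic(x_fake).mean())
    loss.backward()
    opt_critic.step()
    for w in critic.parameters():            # keep f (approximately) Lipschitz
        w.data.clamp_(-clip, clip)
    return -loss.item()                      # current estimate of W_1
```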

Wasserstein Auto-encoder (WAE)

Rather than working with the dual form of the Wasserstein distance, which holds only for \(W_1\), we can also work with the primal form directly. As shown in Tolstikhin, Ilya, et al., "Wasserstein Auto-Encoders," the following holds when we have a deterministic decoder that maps the latent variable \(Z\) to \(Y\) through \(y=G(z)\):

\[\begin{equation} \begin{split} W_c(P_X, P_G) = W_c^\dagger(P_X, P_G) &= \underset{P \in \mathcal P(X \sim P_X, Z \sim P_Z)}{\text{inf}} \mathbb{E}_{(x,z) \sim P}[c(x, G(z))]\\\\ &= \underset{Q: Q_Z = P_Z}{\text{inf}} \mathbb{E}_{x\sim P_X} \mathbb{E}_{z \sim Q(Z\vert X)}[c(x, G(z))] \end{split} \end{equation}\]

The constraint on \(Q(Z\vert X)\) is that its \(Z\)-marginal must equal \(P_Z\). To obtain a feasible optimization problem, we relax this constraint into a penalty that assesses the closeness between \(Q_Z\) and \(P_Z\) via any reasonable divergence, giving the following unconstrained objective, named penalized optimal transport (POT):

\[\begin{equation} D_{POT/WAE}(P_X, P_G) := \underset{Q \in \mathcal Q}{\text{inf}} \mathbb{E}_{x\sim P_X} \mathbb{E}_{z \sim Q(Z\vert X)}[c(x, G(z))] + \lambda \cdot D_{Z} (Q_Z, P_Z) \end{equation}\]

If the divergence between \(P_Z\) and \(Q_Z\) is intractable to calculate directly, we can use generative-adversarial training to approximate it (see \(f\)-GAN). A tractable alternative is sketched below.
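
A sketch of the relaxed objective with an MMD penalty for \(D_Z\) (the paper's WAE-MMD variant), assuming PyTorch, a deterministic encoder/decoder, and a squared-error cost \(c\); the RBF kernel and the biased MMD estimator are simplifications for brevity (the paper favors an inverse multiquadratic kernel).

```python
import torch

def rbf_mmd(z_q, z_p, sigma=1.0):
    # Biased MMD^2 estimate between samples of Q_Z and P_Z.
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(z_q, z_q).mean() + k(z_p, z_p).mean() - 2 * k(z_q, z_p).mean()

def wae_loss(x, encoder, decoder, lam=10.0):
    z_q = encoder(x)                          # z ~ Q(Z | X), deterministic here
    z_p = torch.randn_like(z_q)               # z ~ P_Z = N(0, I)
    recon = ((x - decoder(z_q)) ** 2).sum(dim=1).mean()   # c(x, G(z))
    return recon + lam * rbf_mmd(z_q, z_p)    # minimize over encoder/decoder
```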

Note: if the decoder is probabilistic instead of deterministic, we only have \(W_c(P_X, P_G) \le W_c^\dagger(P_X, P_G)\), so we would be minimizing an upper bound of the true OT cost.

Thought: the original paper used the JS divergence for \(D_Z\); how about using the Wasserstein distance for \(D_Z\)?

Todo: discuss connections to AAE paper
