Brief Survey of Generative Models


Overview

This page is a high-level summary of various generative models with only brief explanations. The models covered are as follows:

Variational Autoencoders (VAE)

  • Kingma, Diederik P., and Max Welling. “Auto-encoding variational bayes.” arXiv preprint arXiv:1312.6114 (2013).

Adversarial Variational Bayes (AVB)

Extension of the VAE that allows non-Gaussian encoders

  • Mescheder, Lars, Sebastian Nowozin, and Andreas Geiger. “Adversarial variational bayes: Unifying variational autoencoders and generative adversarial networks.” arXiv preprint arXiv:1701.04722 (2017).

Generative Adversarial Networks (GAN)

  • Goodfellow, Ian, et al. “Generative adversarial nets.” Advances in neural information processing systems. 2014.

Generalized divergence minimization GAN (f-GAN)

  • Nowozin, Sebastian, Botond Cseke, and Ryota Tomioka. “f-GAN: Training generative neural samplers using variational divergence minimization.” Advances in Neural Information Processing Systems. 2016.

Wasserstein GAN (WGAN)

  • Arjovsky, Martin, Soumith Chintala, and Léon Bottou. “Wasserstein GAN.” arXiv preprint arXiv:1701.07875 (2017).

Adversarial Autoencoders (AAE)

  • Makhzani, Alireza, et al. “Adversarial autoencoders.” arXiv preprint arXiv:1511.05644 (2015).

Wasserstein Auto-Encoder (WAE)

  • Tolstikhin, Ilya, et al. “Wasserstein Auto-Encoders.” arXiv preprint arXiv:1711.01558 (2017).

Cramer GAN

  • Bellemare, Marc G., et al. “The Cramer Distance as a Solution to Biased Wasserstein Gradients.” arXiv preprint arXiv:1705.10743 (2017).

VAE

Model setup:

  • Recognition model (encoder): $q_\phi(z|x)$
  • Assumed fixed prior: $p(z) = \mathcal{N}(z; 0, I)$
  • Generation model (decoder): $p_\theta(x|z)$
    • Implied (but intractable) posterior: $p_\theta(z|x) \propto p_\theta(x|z)\, p(z)$

Key equations:

The per-data-point variational (evidence) lower bound is

$$\log p_\theta(x) \ge \mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z|x)}\left[\log p_\theta(x|z)\right] - D_{KL}\!\left(q_\phi(z|x) \,\|\, p(z)\right)$$

Optimization objective: maximize the lower bound jointly over the generative and recognition parameters,

$$\max_{\theta, \phi} \; \sum_{i=1}^{N} \mathcal{L}(\theta, \phi; x^{(i)})$$

Gradient-friendly Monte Carlo estimate (using the reparameterization trick explained below):

$$\tilde{\mathcal{L}}(\theta, \phi; x) = \frac{1}{L} \sum_{l=1}^{L} \log p_\theta\!\left(x \mid g_\phi(\epsilon^{(l)}, x)\right) - D_{KL}\!\left(q_\phi(z|x) \,\|\, p(z)\right), \quad \epsilon^{(l)} \sim p(\epsilon)$$

Difficulties in calculating $\mathcal{L}(\theta, \phi; x)$:

  • Due to the generality of $p_\theta(x|z)$ and $q_\phi(z|x)$ (typically parameterized by neural networks), the expectation in $\mathcal{L}$ does not have an analytical form, so we need to resort to Monte Carlo estimation.
  • Furthermore, direct sampling according to $q_\phi(z|x)$ makes it difficult to take derivatives with respect to the parameters $\phi$ that parameterize the distribution $q_\phi(z|x)$.

Solution: Reparameterization Trick

Find a smooth and invertible transformation $g_\phi$ such that, with $\epsilon$ drawn from a fixed (non-parameterized) distribution $p(\epsilon)$, we have $z = g_\phi(\epsilon, x) \sim q_\phi(z|x)$, so

$$\mathbb{E}_{q_\phi(z|x)}\left[h(z)\right] = \mathbb{E}_{p(\epsilon)}\left[h\!\left(g_\phi(\epsilon, x)\right)\right] \approx \frac{1}{L} \sum_{l=1}^{L} h\!\left(g_\phi(\epsilon^{(l)}, x)\right), \quad \epsilon^{(l)} \sim p(\epsilon)$$

For the Normal distribution used here ($q_\phi(z|x) = \mathcal{N}(z;\, \mu_\phi(x),\, \sigma_\phi^2(x) I)$), it is convenient to use the location-scale transformation $z = \mu_\phi(x) + \sigma_\phi(x) \odot \epsilon$, with $\epsilon \sim \mathcal{N}(0, I)$.

For $N$ total data points with mini-batch size $M$, the full-dataset objective can be estimated as

$$\mathcal{L}(\theta, \phi; X) \approx \frac{N}{M} \sum_{i=1}^{M} \tilde{\mathcal{L}}(\theta, \phi; x^{(i)})$$

For a sufficiently large batch size $M$, the inner-loop sample size $L$ can be set to 1. Because both the mini-batch gradient descent and the expectation estimation are stochastic, this is also called doubly stochastic estimation.
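As a concrete illustration (my own sketch, not the original paper's code), here is a minimal PyTorch-style VAE showing the reparameterized ELBO estimate with $L = 1$; the layer sizes, Bernoulli likelihood, and names are placeholder assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyVAE(nn.Module):
    """Minimal Gaussian-encoder VAE to illustrate the reparameterization trick."""
    def __init__(self, x_dim=784, z_dim=20, h_dim=200):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.enc_mu = nn.Linear(h_dim, z_dim)
        self.enc_logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim))

    def elbo(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.enc_mu(h), self.enc_logvar(h)
        # Reparameterization: z = mu + sigma * eps with eps ~ N(0, I),
        # so gradients flow through mu and logvar.
        eps = torch.randn_like(mu)
        z = mu + torch.exp(0.5 * logvar) * eps
        logits = self.dec(z)
        # Monte Carlo estimate (L = 1) of E_q[log p(x|z)], Bernoulli likelihood.
        rec = -F.binary_cross_entropy_with_logits(logits, x, reduction="none").sum(dim=1)
        # Analytical KL(q(z|x) || N(0, I)) for a diagonal Gaussian encoder.
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(dim=1)
        return (rec - kl).mean()   # average ELBO over the mini-batch

x = torch.rand(100, 784)           # stand-in mini-batch (M = 100)
model = ToyVAE()
loss = -model.elbo(x)              # maximize the ELBO == minimize its negative
loss.backward()
```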

Using non-Gaussian encoders

Todo: discuss AVB paper

Gumbel trick for discrete latent variables

Ref for this section:

  1. Gumbel max trick https://hips.seas.harvard.edu/blog/2013/04/06/the-gumbel-max-trick-for-discrete-distributions/
  2. Balog, Matej, et al. “Lost Relatives of the Gumbel Trick.” arXiv preprint arXiv:1706.04161 (2017).
  3. Jang, Eric, Shixiang Gu, and Ben Poole. “Categorical reparameterization with gumbel-softmax.” arXiv preprint arXiv:1611.01144 (2016).

Gumbel distribution: the standard Gumbel(0, 1) distribution has CDF $F(g) = \exp(-\exp(-g))$; a sample can be drawn as $g = -\log(-\log(u))$ with $u \sim \text{Uniform}(0, 1)$.

Gumbel-max trick: for a categorical distribution with (possibly unnormalized) log-probabilities $\log \pi_i$, the index

$$k = \arg\max_i \left(\log \pi_i + g_i\right), \quad g_i \sim \text{Gumbel}(0, 1)$$

is an exact categorical sample. The Gumbel-softmax (Concrete) relaxation replaces the non-differentiable $\arg\max$ with a temperature-$\tau$ softmax,

$$y_i = \frac{\exp\!\left((\log \pi_i + g_i)/\tau\right)}{\sum_j \exp\!\left((\log \pi_j + g_j)/\tau\right)},$$

which makes reparameterized gradients available for discrete latent variables.

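A small NumPy sketch of both tricks (my own illustration; the category probabilities and temperature are arbitrary placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
log_pi = np.log(np.array([0.1, 0.6, 0.3]))   # categorical log-probabilities

def gumbel_max_sample(log_pi, rng):
    """Exact categorical sample via the Gumbel-max trick."""
    g = -np.log(-np.log(rng.uniform(size=log_pi.shape)))  # standard Gumbel noise
    return np.argmax(log_pi + g)

def gumbel_softmax_sample(log_pi, tau, rng):
    """Differentiable relaxation: a point on the simplex, sharper as tau -> 0."""
    g = -np.log(-np.log(rng.uniform(size=log_pi.shape)))
    y = np.exp((log_pi + g) / tau)
    return y / y.sum()

# Empirically the Gumbel-max samples follow the target categorical distribution.
counts = np.bincount([gumbel_max_sample(log_pi, rng) for _ in range(10000)], minlength=3)
print(counts / counts.sum())                             # roughly [0.1, 0.6, 0.3]
print(gumbel_softmax_sample(log_pi, tau=0.5, rng=rng))   # soft one-hot sample
```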

f-GAN and GAN

Prelude on f-divergence and its variational lower bound

The f-divergence family is defined as

$$D_f(P \| Q) = \int_{\mathcal{X}} q(x)\, f\!\left(\frac{p(x)}{q(x)}\right) dx$$

where the generator function $f: \mathbb{R}_+ \to \mathbb{R}$ is a convex, lower-semicontinuous function satisfying $f(1) = 0$.

Every convex, lower-semicontinuous function $f$ has a convex conjugate function $f^*$, also known as the Fenchel conjugate. This function is defined as

$$f^*(t) = \sup_{u \in \mathrm{dom}_f} \left\{ u t - f(u) \right\}$$

The function $f^*$ is again convex and lower-semicontinuous, and the pair $(f, f^*)$ is dual to each other, i.e. $f^{**} = f$. So we can represent $f$ as

$$f(u) = \sup_{t \in \mathrm{dom}_{f^*}} \left\{ t u - f^*(t) \right\}$$

With this we can establish a lower bound for estimating the f-divergence in general:

$$D_f(P \| Q) = \int_{\mathcal{X}} q(x) \sup_{t \in \mathrm{dom}_{f^*}} \left\{ t \frac{p(x)}{q(x)} - f^*(t) \right\} dx \;\ge\; \sup_{T \in \mathcal{T}} \left( \mathbb{E}_{x \sim P}\left[T(x)\right] - \mathbb{E}_{x \sim Q}\left[f^*(T(x))\right] \right)$$

where $\mathcal{T}$ is an arbitrary class of functions $T: \mathcal{X} \to \mathbb{R}$. The inequality is due to Jensen's inequality (swapping the integration and the supremum) and the constraints imposed by $\mathcal{T}$.

The bound is tight for

$$T^*(x) = f'\!\left(\frac{p(x)}{q(x)}\right)$$

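As a concrete special case (spelling out one row of the tables below): for the forward KL divergence the generator is $f(u) = u \log u$ with conjugate $f^*(t) = e^{t-1}$, so the bound specializes to

$$D_{KL}(P \| Q) \ge \sup_{T} \left( \mathbb{E}_{x \sim P}\left[T(x)\right] - \mathbb{E}_{x \sim Q}\left[e^{T(x) - 1}\right] \right)$$

with the optimum attained at $T^*(x) = f'\!\left(\tfrac{p(x)}{q(x)}\right) = 1 + \log \tfrac{p(x)}{q(x)}$.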
Generative adversarial training

Suppose our goal is to come up with a distribution $Q_\theta$ (the model) that is close to $P$ (the data distribution), and the similarity score (loss) is measured by $D_f(P \| Q_\theta)$. However, the direct calculation of $D_f(P \| Q_\theta)$ is intractable, such as the case where the functional form of $p(x)$ is unknown and $Q_\theta$ is a complex model parameterized by a neural network.

To be specific:

  • Evaluating the model density $q_\theta(x)$ at any given $x$ may be easy (depending on the model class), but integrating it is hard due to the lack of an easy functional form.
  • For the data density $p(x)$, we do not know how to evaluate it at any $x$.
  • Sampling from both $P$ and $Q_\theta$ is easy: drawing from the data set approximates $P$, and we can make the model take random vectors $z$ as input, which are easy to produce.

Fortunately, we can sample from both of them easily. In this case, the variational lower bound above offers a way to estimate the divergence from samples. We need to maximize this lower bound over $T$ so that it is close to the true divergence, then minimize it over $Q_\theta$. This is formally stated as the saddle-point objective

$$\min_\theta \max_\omega \; F(\theta, \omega) = \mathbb{E}_{x \sim P}\left[T_\omega(x)\right] - \mathbb{E}_{x \sim Q_\theta}\left[f^*\!\left(T_\omega(x)\right)\right]$$

To ensure that the output of $T_\omega$ respects the domain of $f^*$, we define $T_\omega(x) = g_f(V_\omega(x))$, where $V_\omega: \mathcal{X} \to \mathbb{R}$ has no range constraints on its output and $g_f: \mathbb{R} \to \mathrm{dom}_{f^*}$ is an output activation function specific to the $f$-divergence used, with a suitable output range.
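A minimal PyTorch-style sketch of this adversarial estimation, assuming the forward-KL generator/conjugate pair from the tables below (so $g_f$ is the identity); the network shapes and optimizers are placeholder assumptions, and the alternating update scheme here is the standard one rather than anything specific from the paper.

```python
import torch
import torch.nn as nn

# Variational lower bound for the forward KL case:
# f(u) = u log u, conjugate f*(t) = exp(t - 1), activation g_f(v) = v.
def fgan_kl_objective(V, x_real, x_fake):
    """Estimate F(theta, omega) = E_P[T(x)] - E_Q[f*(T(x))] from samples."""
    T_real = V(x_real)          # g_f is the identity for forward KL
    T_fake = V(x_fake)
    return T_real.mean() - torch.exp(T_fake - 1.0).mean()

def training_step(G, V, opt_g, opt_v, x_real, z):
    """Critic V ascends the bound; generator G descends it (min_theta max_omega)."""
    x_fake = G(z)
    v_loss = -fgan_kl_objective(V, x_real, x_fake.detach())
    opt_v.zero_grad()
    v_loss.backward()
    opt_v.step()

    g_loss = fgan_kl_objective(V, x_real, G(z))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# Toy usage with 1-D data and placeholder linear networks.
G = nn.Linear(4, 1)   # maps noise z to a sample x
V = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt_g = torch.optim.Adam(G.parameters())
opt_v = torch.optim.Adam(V.parameters())
training_step(G, V, opt_g, opt_v, x_real=torch.randn(64, 1) + 2.0, z=torch.randn(64, 4))
```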

GAN

For the original GAN, the divergence target is similar to Jensen-Shannon: the generator $f(u) = u \log u - (u + 1)\log(u + 1)$ gives $D_f(P \| Q) = 2\, D_{JS}(P \| Q) - \log 4$, which corresponds to the familiar objective

$$\min_\theta \max_\omega \; \mathbb{E}_{x \sim P}\left[\log D_\omega(x)\right] + \mathbb{E}_{x \sim Q_\theta}\left[\log\!\left(1 - D_\omega(x)\right)\right], \qquad D_\omega(x) = \frac{1}{1 + e^{-V_\omega(x)}}$$

Practical considerations in adversarial training

Todo: log trick, DCGAN heuristics

Name | Generator $f(u)$
--- | ---
Forward KL | $u \log u$
Reverse KL | $-\log u$
Jensen-Shannon | $-(u+1)\log\frac{1+u}{2} + u \log u$
GAN | $u \log u - (u+1)\log(u+1)$

Name | Conjugate $f^*(t)$ | Output activation $g_f(v)$
--- | --- | ---
Forward KL | $e^{t-1}$ | $v$
Reverse KL | $-1 - \log(-t)$ | $-e^{-v}$
Jensen-Shannon | $-\log(2 - e^t)$ | $\log 2 - \log(1 + e^{-v})$
GAN | $-\log(1 - e^t)$ | $-\log(1 + e^{-v})$

WGAN and WAE

Optimal transport (OT)

Kantorovich formulated the optimization target in optimal transport problems as follows:

$$W_c(P, Q) = \inf_{\Gamma \in \mathcal{P}(P, Q)} \mathbb{E}_{(x, y) \sim \Gamma}\left[c(x, y)\right]$$

where $\mathcal{P}(P, Q)$ is the set of all joint distributions of $(x, y)$ with marginals $P$ and $Q$, and $c(x, y)$ is a cost function.

Wasserstein distance

When $c(x, y) = d(x, y)^p$ for a metric $d$ and $p \ge 1$, $W_p(P, Q) = \left(W_c(P, Q)\right)^{1/p}$ is called the p-Wasserstein distance.

The optimization problem is highly intractable in general, due to the marginal constraints on $\Gamma$. However, when $p = 1$, the Kantorovich-Rubinstein duality holds:

$$W_1(P, Q) = \sup_{\|f\|_L \le 1} \mathbb{E}_{x \sim P}\left[f(x)\right] - \mathbb{E}_{x \sim Q}\left[f(x)\right]$$

where the supremum is over all 1-Lipschitz functions $f$.

The divergences from the $f$-divergence family only consider the relative probability (the ratio between the two probability density functions) and do not measure the closeness of the underlying outcomes. With disjoint supports, or overlapping supports whose intersection has zero measure, the divergence between a target distribution and a $\theta$-parameterized distribution might not be continuous with respect to $\theta$. The Wasserstein distance, on the other hand, does take into account the underlying topology of the outcomes; it is continuous in $\theta$ and differentiable almost everywhere, and thus almost always provides a useful gradient for optimization.
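To make $W_1$ concrete, here is a small numerical check (my own illustration): for two 1-D empirical distributions with equal sample counts, $W_1$ reduces to the mean absolute difference between sorted samples, which we can compare against `scipy.stats.wasserstein_distance`.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=5000)   # samples from P
y = rng.normal(loc=3.0, scale=1.0, size=5000)   # samples from Q (shifted by 3)

# In 1-D with equal sample counts, the optimal coupling matches order
# statistics, so W1 is the mean absolute difference of sorted samples.
w1_sorted = np.mean(np.abs(np.sort(x) - np.sort(y)))
w1_scipy = wasserstein_distance(x, y)

print(w1_sorted, w1_scipy)   # both roughly 3.0, i.e. the mean shift
```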

Wasserstein GAN (WGAN)

Following the dual form of $W_1$, we can form a generative-adversarial model for a data distribution $P$ and model $Q_\theta$ (with samples generated as $g_\theta(z)$, $z \sim p(z)$), using an auxiliary critic function $f_w$ that is 1-Lipschitz continuous:

$$\min_\theta \max_{w:\, \|f_w\|_L \le 1} \; \mathbb{E}_{x \sim P}\left[f_w(x)\right] - \mathbb{E}_{z \sim p(z)}\left[f_w\!\left(g_\theta(z)\right)\right]$$

Practical considerations for WGAN

Todo: weight clipping to enforce a K-Lipschitz constraint on $f_w$; soft gradient penalty (WGAN-GP); see the sketch below.
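As a placeholder for the above (a sketch of the WGAN-GP idea, not reference code): the soft gradient penalty pushes the critic's gradient norm toward 1 on random interpolates between real and generated samples. The penalty weight and shapes below are placeholder assumptions for 2-D (batch, features) inputs.

```python
import torch

def gradient_penalty(critic, x_real, x_fake, lam=10.0):
    """WGAN-GP style penalty: lam * (||grad_x critic(x_hat)||_2 - 1)^2 on interpolates."""
    alpha = torch.rand(x_real.size(0), 1, device=x_real.device)  # per-sample mixing weight
    x_hat = (alpha * x_real + (1.0 - alpha) * x_fake).requires_grad_(True)
    out = critic(x_hat)
    grads, = torch.autograd.grad(
        outputs=out, inputs=x_hat,
        grad_outputs=torch.ones_like(out),
        create_graph=True,   # keep the graph so the penalty itself is differentiable
    )
    return lam * ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

# Critic loss sketch: negate the W1 estimate (we ascend it) and add the penalty.
# critic_loss = -(critic(x_real).mean() - critic(x_fake).mean()) \
#               + gradient_penalty(critic, x_real, x_fake)
```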

Wasserstein Auto-encoder (WAE)

Rather than working with the dual form of the Wasserstein distance, which only holds for $p = 1$, we can also work with the primal form directly. As shown in Tolstikhin, Ilya, et al. “Wasserstein Auto-Encoders.”, the following holds when we have a deterministic decoder mapping the latent variable $z$ to $x$ through $x = G(z)$:

$$W_c(P_X, P_G) = \inf_{Q(Z|X):\, Q_Z = P_Z} \mathbb{E}_{P_X} \mathbb{E}_{Q(Z|X)}\left[c\!\left(X, G(Z)\right)\right]$$

The constraint put on the encoder $Q(Z|X)$ is that its aggregated marginal $Q_Z$ needs to equal the prior $P_Z$. To obtain a feasible optimization problem, we relax this constraint into a constraint-free optimization target with a penalty that assesses the closeness between $Q_Z$ and $P_Z$ via any reasonable divergence $D_Z$:

$$\inf_{Q(Z|X)} \mathbb{E}_{P_X} \mathbb{E}_{Q(Z|X)}\left[c\!\left(X, G(Z)\right)\right] + \lambda \cdot D_Z\!\left(Q_Z, P_Z\right)$$

This new objective is named penalized optimal transport (POT).

If the divergence between $Q_Z$ and $P_Z$ is intractable to calculate directly, we can use generative-adversarial training in the latent space to approximate it (see f-GAN). A minimal sketch of the penalized objective is shown below.
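This sketch assumes a deterministic encoder/decoder pair, a squared-error cost, and an MMD penalty with an RBF kernel for $D_Z$ (the WAE paper also uses an adversarial penalty; kernel, bandwidth, $\lambda$, and layer sizes here are placeholders).

```python
import torch
import torch.nn as nn

def rbf_mmd(z_q, z_p, sigma=1.0):
    """Biased MMD^2 estimate between encoded codes z_q ~ Q_Z and prior draws z_p ~ P_Z."""
    def k(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(z_q, z_q).mean() + k(z_p, z_p).mean() - 2 * k(z_q, z_p).mean()

def wae_loss(enc, dec, x, lam=10.0):
    """Penalized OT objective: reconstruction cost + lambda * D_Z(Q_Z, P_Z)."""
    z_q = enc(x)                                  # deterministic encoder output
    x_rec = dec(z_q)
    rec = (x - x_rec).pow(2).sum(dim=1).mean()    # squared-error cost c(X, G(Z))
    z_p = torch.randn_like(z_q)                   # prior P_Z = N(0, I)
    return rec + lam * rbf_mmd(z_q, z_p)

# Toy usage with placeholder linear encoder/decoder.
enc = nn.Linear(784, 8)
dec = nn.Linear(8, 784)
loss = wae_loss(enc, dec, torch.rand(64, 784))
loss.backward()
```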

Note: if the decoder is probabilistic instead of deterministic, we would only have $W_c(P_X, P_G) \le \inf_{Q(Z|X)} \mathbb{E}_{P_X} \mathbb{E}_{Q(Z|X)}\left[c\!\left(X, G(Z)\right)\right]$, so we are minimizing an upper bound of the true OT cost.

Thought: the original paper used the JS divergence for $D_Z$; how about using the Wasserstein distance for $D_Z$ instead?

Todo: discuss connections to AAE paper
