Deep Dive into Latent Variable Models and Variational Autoencoders
Over the past few years, research focus has shifted towards generative models and unsupervised learning. Generative Adversarial Networks (GANs) and latent variable models have emerged as two of the most prominent families of architectures in this field. In this article, we take a deeper look at how latent variable models work, their core principles, and their most popular representative: the Variational Autoencoder (VAE).
### Discriminative vs Generative Models
Machine Learning models are often categorized into discriminative and generative models based on the probabilistic formulations used to build and train them.
Discriminative models learn the probability of a label y given a data point x. Generative models, on the other hand, learn a probability distribution over the data points themselves, without any external labels. Conditional generative models sit in between: they learn the probability distribution of the data conditioned on the labels.
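In probabilistic notation, the three settings can be summarized as follows, where x denotes a data point and y its label:

```latex
\begin{aligned}
\text{Discriminative:}\quad & p(y \mid x) \\
\text{Generative:}\quad & p(x) \\
\text{Conditional generative:}\quad & p(x \mid y)
\end{aligned}
```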
### Generative Models
Generative models aim to learn the probability density function p(x) that describes the training data; once learned, novel data points can be generated by sampling from this distribution. Generative models are commonly split into explicit density models, which define p(x) exactly or approximately, and implicit density models, which only provide a procedure for sampling. Variational Autoencoders belong to the explicit density family, since they optimize an approximation (a lower bound) of the density, whereas models such as GANs are implicit density models.
### Latent Variable Models
Latent variable models explain the observed data x through unobserved (latent) variables z, which can be thought of as a transformation of the data points into a lower-dimensional space that captures their underlying structure and provides a simpler explanation of the data. Such a model is specified by a handful of distributions: the prior p(z), the likelihood p(x|z), the joint distribution p(x, z), the marginal distribution p(x) (also called the evidence), and the posterior p(z|x).
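The relations between these distributions follow directly from the product rule and marginalization:

```latex
\begin{aligned}
p(x, z) &= p(x \mid z)\, p(z) && \text{(joint = likelihood } \times \text{ prior)} \\
p(x) &= \int p(x \mid z)\, p(z)\, dz && \text{(marginal / evidence)} \\
p(z \mid x) &= \frac{p(x \mid z)\, p(z)}{p(x)} && \text{(posterior, via Bayes' rule)}
\end{aligned}
```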
### Training a Latent Variable Model with Maximum Likelihood
Maximum likelihood estimation is a technique that fits the parameters of a probability distribution so that the observed data becomes as likely as possible under the model. In latent variable models, however, evaluating the marginal likelihood p(x) requires integrating over all possible values of the latent variables, which is intractable in general, so approximate inference is used instead.
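Written out, the maximum likelihood objective and the source of the intractability look like this, with θ denoting the model parameters:

```latex
\begin{aligned}
\theta^{*} &= \arg\max_{\theta} \sum_{i=1}^{N} \log p_{\theta}(x_{i}) \\
\log p_{\theta}(x) &= \log \int p_{\theta}(x \mid z)\, p(z)\, dz
\quad \text{(no closed form; integrating over all } z \text{ is intractable)}
\end{aligned}
```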
### Variational Inference
Variational inference approximates the intractable posterior p(z|x) with a tractable distribution q(z), whose parameters are found by solving an optimization problem. The objective of that optimization is the Evidence Lower Bound (ELBO), a lower bound on the marginal log-likelihood: maximizing the ELBO both tightens the bound and pushes q(z) towards the true posterior.
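The bound follows from decomposing the marginal log-likelihood; since the KL divergence is non-negative, the remaining term is a lower bound:

```latex
\begin{aligned}
\log p_{\theta}(x)
&= \underbrace{\mathbb{E}_{q(z)}\!\left[\log \frac{p_{\theta}(x, z)}{q(z)}\right]}_{\text{ELBO}}
 + \underbrace{\mathrm{KL}\!\left(q(z)\,\|\,p_{\theta}(z \mid x)\right)}_{\geq\, 0} \\
\text{ELBO}
&= \mathbb{E}_{q(z)}\!\left[\log p_{\theta}(x \mid z)\right]
 - \mathrm{KL}\!\left(q(z)\,\|\,p(z)\right)
\;\leq\; \log p_{\theta}(x)
\end{aligned}
```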
### Amortized Variational Inference
Amortized variational inference trains a separate neural network, the inference (or recognition) network, to predict the variational parameters directly from a data point, instead of optimizing the ELBO separately for each data point. This removes the need to learn and store a different set of variational parameters per data point and lets inference generalize to unseen data, as sketched below.
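As a minimal sketch of the idea (the layer sizes and the choice of a diagonal Gaussian posterior are illustrative assumptions, not prescriptions), an inference network maps each data point to the parameters of its approximate posterior:

```python
import torch
import torch.nn as nn

class InferenceNetwork(nn.Module):
    """Amortized inference: a single network predicts q(z|x) = N(mu(x), diag(sigma(x)^2))
    for every data point, instead of per-data-point variational parameters."""

    def __init__(self, x_dim: int = 784, hidden_dim: int = 256, z_dim: int = 20):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(x_dim, hidden_dim), nn.ReLU())
        self.mu = nn.Linear(hidden_dim, z_dim)       # mean of q(z|x)
        self.log_var = nn.Linear(hidden_dim, z_dim)  # log-variance of q(z|x)

    def forward(self, x: torch.Tensor):
        h = self.hidden(x)
        return self.mu(h), self.log_var(h)
```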
### Variational Autoencoders
Variational Autoencoders (VAEs) are a popular model family that combines deep learning with latent variable models. They consist of two neural networks: an encoder and a decoder. The encoder parameterizes the variational posterior q(z|x), while the decoder parameterizes the likelihood p(x|z).
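Building on the inference network sketched above, a minimal VAE can be written as an encoder/decoder pair. The specific architecture and the Bernoulli likelihood over binarized pixels are assumptions made for illustration:

```python
class Decoder(nn.Module):
    """Parameterizes the likelihood p(x|z); here a Bernoulli over binarized pixels."""

    def __init__(self, z_dim: int = 20, hidden_dim: int = 256, x_dim: int = 784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, x_dim),  # logits of p(x|z)
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

class VAE(nn.Module):
    def __init__(self, x_dim: int = 784, hidden_dim: int = 256, z_dim: int = 20):
        super().__init__()
        self.encoder = InferenceNetwork(x_dim, hidden_dim, z_dim)  # q(z|x)
        self.decoder = Decoder(z_dim, hidden_dim, x_dim)           # p(x|z)
```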
Training a VAE amounts to maximizing the ELBO, which decomposes into a negative reconstruction error term and a KL term that keeps the variational posterior close to the prior. Because sampling z from q(z|x) is not differentiable, the reparameterization trick rewrites the sample as a deterministic function of the posterior parameters and an independent noise variable, so that gradients can be backpropagated through the sampling step.
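A single training step, using the reparameterization trick and the negative ELBO as the loss, could then look like the following sketch; a Bernoulli reconstruction term and a standard normal prior are assumed:

```python
import torch.nn.functional as F

def vae_loss(model: VAE, x: torch.Tensor) -> torch.Tensor:
    """Negative ELBO for a batch x of shape (batch, x_dim), averaged over the batch."""
    mu, log_var = model.encoder(x)

    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    # so gradients flow through mu and log_var.
    eps = torch.randn_like(mu)
    z = mu + torch.exp(0.5 * log_var) * eps

    logits = model.decoder(z)

    # Negative reconstruction term: E_q[-log p(x|z)] for a Bernoulli likelihood.
    recon = F.binary_cross_entropy_with_logits(logits, x, reduction="none").sum(dim=1)

    # KL(q(z|x) || N(0, I)) in closed form for a diagonal Gaussian posterior.
    kl = 0.5 * torch.sum(torch.exp(log_var) + mu**2 - 1.0 - log_var, dim=1)

    return (recon + kl).mean()

# Usage sketch: one gradient step on a batch of (hypothetical) binarized images.
model = VAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 784).round()
loss = vae_loss(model, x)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```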
In conclusion, VAEs provide a powerful framework for learning latent representations of data while also generating new samples. Understanding the probabilistic nature of VAEs and the principles behind training them is crucial for effectively applying them in practical applications.
For further reading and references on VAEs and latent variable models, the provided sources offer in-depth insights and additional resources on the topic.