- A VAE compresses high-dimensional data into a lower-dimensional bottleneck representation.
- The goal of a VAE is to learn a distribution q(z|x) over latent variables z (the bottleneck) from which we can sample to generate new data x' ~ p(x|z).
- Differences between vanilla AutoEncoders and VAEs:
- An AutoEncoder can only reconstruct data points that resemble those in its training set. If the decoder is given a random latent vector that does not come from the dataset's distribution, it produces a garbage image. A VAE, in contrast, can generate plausible images from points sampled across the latent distribution.
- An AutoEncoder's loss consists only of a reconstruction term (MSE loss). A VAE's loss adds a KL-divergence term to the MSE loss.
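The combined loss described above can be sketched in PyTorch as follows (the function name `vae_loss` and the `sum` reduction are illustrative assumptions, not taken from this repository):

```python
import torch
import torch.nn.functional as F

def vae_loss(x_recon, x, mu, log_var):
    # Reconstruction term: how closely the decoder output matches the input (MSE).
    recon = F.mse_loss(x_recon, x, reduction="sum")
    # KL-divergence term: pushes q(z|x) = N(mu, sigma^2) toward the prior N(0, I).
    # Closed form for two diagonal Gaussians.
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + kl
```

In practice the KL term is often scaled by a weight (as in beta-VAE) to balance reconstruction quality against latent-space regularity.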
- Epoch 1 vs Epoch 25
- This project uses the CelebA dataset.
- The VAE is trained on 64x64 image patches for 25 epochs.
- The trained model is provided in the `Trained model` directory.
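Sampling z from q(z|x) during training typically relies on the reparameterization trick, which keeps the sampling step differentiable. A minimal sketch (this repository's actual implementation may differ):

```python
import torch

def reparameterize(mu, log_var):
    # z = mu + sigma * eps, with eps ~ N(0, I).
    # Moving the randomness into eps lets gradients flow through mu and log_var.
    std = torch.exp(0.5 * log_var)
    eps = torch.randn_like(std)
    return mu + std * eps
```

At generation time, new faces are produced by sampling z directly from the prior N(0, I) and passing it through the decoder.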