Variational Autoencoder and Conditional Variational Autoencoder on MNIST in PyTorch

sgskt/VAE-CVAE-MNIST

 
 


Variational Autoencoder & Conditional Variational Autoencoder on MNIST

VAE paper: Auto-Encoding Variational Bayes

CVAE paper: Learning Structured Output Representation using Deep Conditional Generative Models
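The objective in both papers is the ELBO: a reconstruction term plus a KL divergence that keeps the approximate posterior q(z|x) close to the standard-normal prior. A minimal PyTorch sketch of the reparameterization trick and this loss (function names and the Bernoulli reconstruction term are illustrative, not this repo's exact code):

```python
import torch
import torch.nn.functional as F

def reparameterize(mu, log_var):
    # z = mu + sigma * eps with eps ~ N(0, I), so sampling stays differentiable
    std = torch.exp(0.5 * log_var)
    eps = torch.randn_like(std)
    return mu + eps * std

def vae_loss(recon_x, x, mu, log_var):
    # Reconstruction: Bernoulli log-likelihood over pixels in [0, 1]
    bce = F.binary_cross_entropy(recon_x, x, reduction="sum")
    # KL(q(z|x) || N(0, I)) in closed form (Kingma & Welling, Appendix B)
    kld = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return bce + kld
```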


To run the conditional variational autoencoder, add --conditional to the command. Check the other command-line options in the code for hyperparameter settings (learning rate, batch size, encoder/decoder layer depth and size).
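Conditioning typically means concatenating a one-hot label to the network input, so the encoder models q(z|x,c) rather than q(z|x). A sketch of that input construction (the helper name and flattening are assumptions, not this repo's exact code):

```python
import torch
import torch.nn.functional as F

def to_encoder_input(x, c=None, num_labels=10):
    # Flatten the image batch; in the conditional case, append a
    # one-hot label vector so the encoder sees (x, c) jointly.
    x = x.view(x.size(0), -1)
    if c is not None:
        onehot = F.one_hot(c, num_classes=num_labels).float()
        x = torch.cat([x, onehot], dim=-1)
    return x
```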


Results

All plots were obtained after 10 epochs of training, with hyperparameters at the default settings in the code (not tuned).

z ~ q(z|x) and q(z|x,c)

The learned latent distribution after 10 epochs, with 100 samples per digit.

(Figures: VAE latent space, CVAE latent space)
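A scatter like this can be produced by encoding test images and grouping the posterior means mu by digit label. A sketch, assuming a hypothetical encoder that returns (mu, log_var) for a flattened batch:

```python
import torch

def latent_points(encoder, images, labels, num_labels=10):
    # Encode a batch and keep the posterior means mu as plot
    # coordinates, grouped by digit label for coloring.
    with torch.no_grad():
        mu, log_var = encoder(images.view(images.size(0), -1))
    return {d: mu[labels == d] for d in range(num_labels)}
```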

p(x|z) and p(x|z,c)

Randomly sampled z and their decoded outputs. For the CVAE, each label c has been given as input once.

(Figures: VAE samples, CVAE samples)
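Generating these samples amounts to drawing z from the prior N(0, I) and, for the CVAE, pairing each z with a one-hot label before decoding. A sketch, with the decoder as a hypothetical callable and a 2-D latent assumed:

```python
import torch

def sample_conditional(decoder, num_labels=10, latent_dim=2):
    # One prior sample z per digit class; the decoder maps the
    # concatenated [z, one_hot(c)] vector back to pixel space.
    z = torch.randn(num_labels, latent_dim)
    c = torch.eye(num_labels)  # one-hot row per digit 0..9
    return decoder(torch.cat([z, c], dim=-1))
```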
