Training a Variational Autoencoder (VAE)
Implement and train a Variational Autoencoder (VAE) on a dataset like MNIST. The encoder should map the input to the parameters of a latent distribution (mean and log-variance), and the decoder should reconstruct the original image from a latent sample drawn via the reparameterization trick, which keeps the sampling step differentiable. The loss function is the sum of a reconstruction term (e.g., Binary Cross-Entropy) and the KL divergence between the latent distribution and a standard normal prior.
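A minimal PyTorch sketch of such a model, assuming MNIST images flattened to 784 dimensions; the layer widths and the 20-dimensional latent space are illustrative choices, not requirements:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=20):
        super().__init__()
        self.fc1 = nn.Linear(input_dim, hidden_dim)
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)
        self.fc2 = nn.Linear(latent_dim, hidden_dim)
        self.fc3 = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):
        h = F.relu(self.fc1(x))
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps: sampling stays differentiable w.r.t. mu, logvar
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + eps * std

    def decode(self, z):
        h = F.relu(self.fc2(z))
        return torch.sigmoid(self.fc3(h))  # pixel values in [0, 1] for BCE

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def vae_loss(recon_x, x, mu, logvar):
    # Reconstruction term: Binary Cross-Entropy, summed over pixels and batch
    bce = F.binary_cross_entropy(recon_x, x, reduction="sum")
    # Closed-form KL(N(mu, sigma^2) || N(0, I))
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld

# Smoke test on random data standing in for an MNIST batch
model = VAE()
x = torch.rand(8, 784)
recon, mu, logvar = model(x)
loss = vae_loss(recon, x, mu, logvar)
```

In a real training loop you would iterate over `torchvision.datasets.MNIST` batches, call `loss.backward()`, and step an optimizer such as `torch.optim.Adam`.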
Verification: After training, sample from the latent space (e.g., torch.randn) and use the decoder to generate new images. These images should be plausible, novel digits; visualize them to confirm. Also, the total loss (reconstruction plus KL) should decrease over training; note that the KL term alone need not fall monotonically.
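A sketch of the sampling step. A small untrained decoder is defined here as a stand-in so the snippet runs on its own; in practice you would call the `decode` method of your trained VAE (the 20-dimensional latent and layer sizes are assumptions):

```python
import torch
import torch.nn as nn

# Stand-in for a trained VAE decoder (hypothetical sizes: 20 -> 400 -> 784)
decoder = nn.Sequential(
    nn.Linear(20, 400), nn.ReLU(),
    nn.Linear(400, 784), nn.Sigmoid(),
)

decoder.eval()
with torch.no_grad():
    z = torch.randn(64, 20)              # 64 draws from the standard normal prior
    samples = decoder(z)                 # (64, 784), pixel values in [0, 1]
    grid = samples.view(64, 1, 28, 28)   # reshape to image tensors for viewing

# e.g. torchvision.utils.save_image(grid, "samples.png", nrow=8)
```

With a trained model, the resulting grid should show plausible digit shapes; blurry but recognizable digits are typical for a plain VAE.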