-
Implementing a Siamese Network with Triplet Loss
Building on the previous exercise, let's switch to **Triplet Loss**. This loss function is more powerful because it enforces a margin between the anchor-positive distance and the anchor-negative distance. The network learns an embedding in which the anchor ends up closer to the positive example than to the negative example by at least that margin.
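A minimal sketch of what this could look like in PyTorch. The small convolutional encoder, the 28x28 input size, the margin of 1.0, and the random tensors standing in for real image triplets are all illustrative assumptions, not part of the exercise statement:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingNet(nn.Module):
    """Small convolutional encoder mapping an image to an embedding vector."""
    def __init__(self, embedding_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(64 * 7 * 7, embedding_dim)  # assumes 28x28 inputs

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """max(d(a, p) - d(a, n) + margin, 0), averaged over the batch."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return torch.clamp(d_pos - d_neg + margin, min=0).mean()

# One illustrative optimization step on random tensors standing in for a triplet batch.
net = EmbeddingNet()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
a, p, n = (torch.randn(8, 1, 28, 28) for _ in range(3))
loss = triplet_loss(net(a), net(p), net(n))
loss.backward()
optimizer.step()
```

PyTorch also ships `nn.TripletMarginLoss`, which can be used to check the hand-written version against a reference implementation.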
-
Implementing a Custom Loss Function with `torch.autograd`
Create a **custom loss function** that inherits from `torch.nn.Module` and performs a non-standard calculation. For example, a custom Huber loss. This loss is less sensitive to outliers than Mean Squared Error: it is quadratic for small errors and linear for large ones.
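A sketch of one way to write it. Because the forward pass uses only differentiable tensor operations, `torch.autograd` derives the backward pass automatically; the class name, the `delta` threshold, and the random demo tensors are assumptions for illustration:

```python
import torch
import torch.nn as nn

class HuberLoss(nn.Module):
    """Custom Huber loss: quadratic for small errors, linear for large ones."""
    def __init__(self, delta=1.0):
        super().__init__()
        self.delta = delta

    def forward(self, prediction, target):
        error = prediction - target
        abs_error = error.abs()
        quadratic = 0.5 * error ** 2
        linear = self.delta * (abs_error - 0.5 * self.delta)
        # Autograd differentiates through torch.where, so no custom backward is needed.
        return torch.where(abs_error <= self.delta, quadratic, linear).mean()

# Usage: drop it in wherever nn.MSELoss would go.
criterion = HuberLoss(delta=1.0)
pred = torch.randn(16, 1, requires_grad=True)
target = torch.randn(16, 1)
loss = criterion(pred, target)
loss.backward()
```

Recent PyTorch versions include a built-in `nn.HuberLoss`, which is handy for verifying the custom implementation.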
-
Neural Style Transfer
Implement **Neural Style Transfer**. Given a content image and a style image, generate a new image that combines the content of the former with the style of the latter. Use a pre-trained VGG network as a fixed feature extractor: match its deeper feature maps for content and the Gram matrices of several of its layers for style, optimizing the generated image directly.
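A condensed sketch of the optimization loop, assuming a recent torchvision (for the `VGG19_Weights` API). The chosen layer indices, loss weights, step count, and the random tensors standing in for preprocessed images are illustrative assumptions:

```python
import torch
import torch.nn.functional as F
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = {0, 5, 10, 19, 28}   # conv1_1 ... conv5_1 in vgg19.features
CONTENT_LAYER = 21                  # conv4_2

def extract_features(x):
    """Run x through VGG and collect the style and content activations."""
    feats, content = {}, None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            feats[i] = x
        if i == CONTENT_LAYER:
            content = x
    return feats, content

def gram_matrix(feat):
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_transfer(content_img, style_img, steps=300, style_weight=1e6, content_weight=1.0):
    """Optimize a copy of the content image to match content and style targets."""
    target = content_img.clone().requires_grad_(True)
    style_grams = {i: gram_matrix(f) for i, f in extract_features(style_img)[0].items()}
    content_target = extract_features(content_img)[1]
    optimizer = torch.optim.Adam([target], lr=0.02)
    for _ in range(steps):
        optimizer.zero_grad()
        feats, content = extract_features(target)
        c_loss = F.mse_loss(content, content_target)
        s_loss = sum(F.mse_loss(gram_matrix(feats[i]), style_grams[i]) for i in STYLE_LAYERS)
        (content_weight * c_loss + style_weight * s_loss).backward()
        optimizer.step()
    return target.detach()

# Example call with random tensors standing in for normalized 3x256x256 images.
content = torch.rand(1, 3, 256, 256, device=device)
style = torch.rand(1, 3, 256, 256, device=device)
result = style_transfer(content, style, steps=50)
```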
-
Training a Variational Autoencoder (VAE)
Implement and train a **Variational Autoencoder (VAE)** on a dataset like MNIST. The encoder should map the input to a latent space distribution (mean and variance), and the decoder should reconstruct the input from a sample drawn from that distribution via the reparameterization trick. Train with a reconstruction loss plus a KL-divergence term that regularizes the latent space toward a standard normal prior.
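A minimal fully-connected sketch for 28x28 inputs. The hidden size, latent dimension, and the random batch standing in for MNIST data are assumptions for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal fully-connected VAE for 28x28 images such as MNIST."""
    def __init__(self, latent_dim=20):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(784, 400), nn.ReLU())
        self.fc_mu = nn.Linear(400, latent_dim)
        self.fc_logvar = nn.Linear(400, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 400), nn.ReLU(),
            nn.Linear(400, 784), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.enc(x.flatten(1))
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I).
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    """Reconstruction term plus KL divergence to the standard normal prior."""
    bce = F.binary_cross_entropy(recon, x.flatten(1), reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld

# One training step on a random batch standing in for MNIST images in [0, 1].
model = VAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 1, 28, 28)
recon, mu, logvar = model(x)
loss = vae_loss(recon, x, mu, logvar)
loss.backward()
optimizer.step()
```

After training, new samples can be generated by decoding `torch.randn(batch, latent_dim)` drawn from the prior.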