-
Tiny Neural Radiance Fields (NeRF)
### Description
Implement a simplified version of a Neural Radiance Field (NeRF) to represent a 2D image. [1] A NeRF learns a continuous mapping from spatial coordinates to pixel values. Instead...
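A minimal sketch of one way to approach this: fit a coordinate MLP with sinusoidal positional encoding to a single image. The frequency count, layer sizes, and the random stand-in image are illustrative assumptions, not part of the prompt.

```python
import torch
import torch.nn as nn

def positional_encoding(coords, num_freqs=6):
    # coords: (N, 2) in [0, 1]; append sin/cos features at several frequencies
    feats = [coords]
    for i in range(num_freqs):
        feats.append(torch.sin((2 ** i) * torch.pi * coords))
        feats.append(torch.cos((2 ** i) * torch.pi * coords))
    return torch.cat(feats, dim=-1)

class Tiny2DNeRF(nn.Module):
    def __init__(self, num_freqs=6, hidden=128):
        super().__init__()
        in_dim = 2 + 2 * 2 * num_freqs
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),   # RGB in [0, 1]
        )

    def forward(self, coords):
        return self.net(positional_encoding(coords))

# Fit the model to a target image (H, W, 3) by regressing pixel colors.
H, W = 64, 64
image = torch.rand(H, W, 3)                      # stand-in for a real image
ys, xs = torch.meshgrid(torch.linspace(0, 1, H), torch.linspace(0, 1, W), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)

model = Tiny2DNeRF()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    loss = nn.functional.mse_loss(model(coords), image.reshape(-1, 3))
    opt.zero_grad(); loss.backward(); opt.step()
```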
-
Implement Lottery Ticket Hypothesis Pruning
### Description
The Lottery Ticket Hypothesis suggests that a randomly initialized, dense network contains a smaller subnetwork (a "winning ticket") that, when trained in isolation, can match the...
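One possible sketch of a single pruning round, assuming per-layer magnitude pruning and rewinding to the saved initialization; the architecture, sparsity level, and the helper name `magnitude_masks` are illustrative.

```python
import copy
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
init_state = copy.deepcopy(model.state_dict())   # keep the original initialization

def magnitude_masks(model, sparsity=0.8):
    # Per-layer magnitude criterion: keep the largest (1 - sparsity) weights.
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:                          # prune weight matrices, not biases
            k = int(p.numel() * sparsity)
            threshold = p.detach().abs().flatten().kthvalue(k).values
            masks[name] = (p.detach().abs() > threshold).float()
    return masks

# ... train the dense model here, then derive masks from the trained weights ...
masks = magnitude_masks(model, sparsity=0.8)

# "Rewind": restore the original initialization and apply the masks.
model.load_state_dict(init_state)
with torch.no_grad():
    for name, p in model.named_parameters():
        if name in masks:
            p.mul_(masks[name])

# When retraining the ticket, re-apply the masks after each optimizer step
# so pruned weights stay at zero.
```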
-
Simple Differentiable Renderer
### Description
Modern 3D deep learning often relies on differentiable rendering, allowing gradients to flow from a 2D rendered image back to 3D scene parameters. [1] Your task is to implement a...
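As an illustration of the core idea, the toy "renderer" below splats a soft Gaussian blob at a learnable 2D position; every operation is differentiable, so gradients flow from the rendered image back to the scene parameter. This is a simplification, not the full rasterizer the task may ask for.

```python
import torch

def render_blob(center, size=64, sigma=0.1):
    # Render a soft Gaussian blob centered at `center` (in [0, 1]^2)
    # onto a size x size pixel grid, differentiably.
    ys, xs = torch.meshgrid(
        torch.linspace(0, 1, size), torch.linspace(0, 1, size), indexing="ij"
    )
    d2 = (xs - center[0]) ** 2 + (ys - center[1]) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

# Optimize the blob position so the render matches a target render.
target = render_blob(torch.tensor([0.7, 0.3]))
center = torch.tensor([0.2, 0.8], requires_grad=True)
opt = torch.optim.Adam([center], lr=0.05)
for step in range(200):
    loss = ((render_blob(center) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print(center.detach())   # should approach [0.7, 0.3]
```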
-
Spiking Neuron with Leaky Integrate-and-Fire
### Description
Implement a single Leaky Integrate-and-Fire (LIF) neuron, the fundamental building block of many Spiking Neural Networks (SNNs). Unlike traditional neurons, LIF neurons operate on...
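A minimal sketch of the update rule (leak, integrate, threshold, reset); the decay factor, threshold, and constant input current are arbitrary example values.

```python
import torch

def lif_neuron(input_current, v_thresh=1.0, v_reset=0.0, decay=0.9):
    # Simulate one LIF neuron over T timesteps: the membrane potential leaks by
    # `decay` each step, integrates the input current, and emits a spike (1.0)
    # whenever it crosses `v_thresh`, after which it is reset.
    v = torch.tensor(v_reset)
    spikes, potentials = [], []
    for i_t in input_current:
        v = decay * v + i_t                                       # leak + integrate
        spike = (v >= v_thresh).float()                           # fire
        v = torch.where(spike.bool(), torch.tensor(v_reset), v)   # reset if fired
        spikes.append(spike)
        potentials.append(v)
    return torch.stack(spikes), torch.stack(potentials)

spikes, potentials = lif_neuron(0.3 * torch.ones(50))
print(spikes.sum().item(), "spikes over 50 steps")
```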
-
Implement a Custom Loss Function
Create a custom loss function called `MeanAbsolutePercentageError` (MAPE) in PyTorch. It should: 1. Take predictions and targets as input tensors. 2. Compute $$\frac{1}{n} \sum_i \frac{|y_i - \hat{y}_i|}{|y_i|}$$...
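A possible implementation as an `nn.Module`; the `eps` guard against zero-valued targets is an added assumption.

```python
import torch
import torch.nn as nn

class MeanAbsolutePercentageError(nn.Module):
    def __init__(self, eps=1e-8):
        super().__init__()
        self.eps = eps   # guards against division by zero when a target is 0

    def forward(self, preds, targets):
        # mean over the batch of |y - y_hat| / |y|
        return torch.mean(torch.abs(targets - preds) / (torch.abs(targets) + self.eps))

loss_fn = MeanAbsolutePercentageError()
print(loss_fn(torch.tensor([90.0, 110.0]), torch.tensor([100.0, 100.0])))  # 0.10
```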
-
Custom Dataset for CSV Data
Write a PyTorch `Dataset` class that loads data from a CSV file containing tabular data (features + labels). Requirements: - Use `pandas` to read the CSV. - Convert features and labels to tensors....
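A sketch assuming the label sits in the last CSV column; the file name and column layout are assumptions.

```python
import pandas as pd
import torch
from torch.utils.data import Dataset, DataLoader

class CSVDataset(Dataset):
    # Assumes the last column of the CSV holds the label and the rest are features.
    def __init__(self, csv_path):
        df = pd.read_csv(csv_path)
        self.features = torch.tensor(df.iloc[:, :-1].values, dtype=torch.float32)
        self.labels = torch.tensor(df.iloc[:, -1].values, dtype=torch.long)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        return self.features[idx], self.labels[idx]

# Usage (path is an assumption):
# loader = DataLoader(CSVDataset("data.csv"), batch_size=32, shuffle=True)
```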
-
Batch Normalization From Scratch
Implement 1D batch normalization manually (without using `nn.BatchNorm1d`). Steps: 1. Compute batch mean and variance. 2. Normalize inputs. 3. Scale and shift with learnable $$\gamma, \beta$$....
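One way to sketch it, covering training-mode statistics only (running statistics for eval mode are omitted):

```python
import torch
import torch.nn as nn

class MyBatchNorm1d(nn.Module):
    def __init__(self, num_features, eps=1e-5):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(num_features))   # learnable scale
        self.beta = nn.Parameter(torch.zeros(num_features))   # learnable shift
        self.eps = eps

    def forward(self, x):
        # x: (batch, num_features); normalize over the batch dimension
        mean = x.mean(dim=0)
        var = x.var(dim=0, unbiased=False)
        x_hat = (x - mean) / torch.sqrt(var + self.eps)
        return self.gamma * x_hat + self.beta

# Sanity check against the built-in layer in training mode:
x = torch.randn(32, 8)
print(torch.allclose(MyBatchNorm1d(8)(x), nn.BatchNorm1d(8)(x), atol=1e-5))
```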
-
Gradient Clipping Example
Write code to: 1. Train a small RNN on dummy data. 2. Add gradient clipping using `torch.nn.utils.clip_grad_norm_`. 3. Print gradient norms before and after clipping. Show that exploding gradients...
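A rough sketch with dummy data; note that `clip_grad_norm_` returns the total norm measured *before* clipping, so the post-clipping norm is recomputed by hand.

```python
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=8, hidden_size=32, batch_first=True)
head = nn.Linear(32, 1)
params = list(rnn.parameters()) + list(head.parameters())
opt = torch.optim.SGD(params, lr=0.01)

for step in range(100):
    x = torch.randn(16, 50, 8)            # dummy batch: (batch, seq_len, features)
    y = torch.randn(16, 1)
    out, _ = rnn(x)
    loss = nn.functional.mse_loss(head(out[:, -1]), y)
    opt.zero_grad()
    loss.backward()
    pre_clip = torch.nn.utils.clip_grad_norm_(params, max_norm=1.0)
    post_clip = torch.sqrt(sum(p.grad.norm() ** 2 for p in params))
    if step % 20 == 0:
        print(f"step {step}: norm before={pre_clip:.3f}, after={post_clip:.3f}")
    opt.step()
```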
-
Implement a Linear Regression Model
Build a simple linear regression model using `nn.Module`. Requirements: - One input feature, one output. - Train it on synthetic data $$y = 3x + 2 + \epsilon$$. - Use `MSELoss` and `SGD`. Check...
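A compact sketch; the noise level, learning rate, and epoch count are arbitrary choices.

```python
import torch
import torch.nn as nn

class LinearRegression(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(1, 1)     # one input feature, one output

    def forward(self, x):
        return self.linear(x)

# Synthetic data: y = 3x + 2 + noise
x = torch.rand(256, 1) * 10
y = 3 * x + 2 + 0.1 * torch.randn_like(x)

model = LinearRegression()
criterion = nn.MSELoss()
opt = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(500):
    loss = criterion(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()

print(model.linear.weight.item(), model.linear.bias.item())  # roughly 3 and 2
```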
-
Checkpointing with torch.save
Train a simple feedforward model for 1 epoch. Save: 1. Model state dict. 2. Optimizer state dict. 3. Epoch number. Then load the checkpoint and resume training seamlessly.
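A sketch of the save/load round trip; the file name and checkpoint keys are conventions, not requirements.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# ... train for one epoch here ...
epoch = 1

torch.save({
    "model_state_dict": model.state_dict(),
    "optimizer_state_dict": opt.state_dict(),
    "epoch": epoch,
}, "checkpoint.pt")

# Later / in a fresh process: rebuild the same model and optimizer, then restore.
checkpoint = torch.load("checkpoint.pt")
model.load_state_dict(checkpoint["model_state_dict"])
opt.load_state_dict(checkpoint["optimizer_state_dict"])
start_epoch = checkpoint["epoch"] + 1   # resume from the next epoch
```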
-
Implement Dropout Manually
Implement dropout as a function `my_dropout(x, p)`: - Zero out elements of `x` with probability `p`. - Scale survivors by $$1/(1-p)$$. - Ensure deterministic behavior when `torch.manual_seed` is...
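A sketch using inverted dropout; the `training` flag is an added convenience.

```python
import torch

def my_dropout(x, p=0.5, training=True):
    # Inverted dropout: zero elements with probability p and scale the
    # survivors by 1 / (1 - p) so the expected activation is unchanged.
    if not training or p == 0.0:
        return x
    mask = (torch.rand_like(x) > p).float()
    return x * mask / (1 - p)

torch.manual_seed(0)                      # the mask is reproducible under a fixed seed
x = torch.ones(4, 4)
print(my_dropout(x, p=0.5))
```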
-
Custom Activation Function
Define a custom activation function called `Swish`: $$f(x) = x \cdot \sigma(x)$$. - Implement it as a PyTorch `nn.Module`. - Train a small MLP on random data with it. - Compare with ReLU...
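A sketch that trains the same MLP once with Swish and once with ReLU on random data; sizes and step counts are arbitrary.

```python
import torch
import torch.nn as nn

class Swish(nn.Module):
    def forward(self, x):
        return x * torch.sigmoid(x)       # f(x) = x * sigma(x)

def make_mlp(activation):
    return nn.Sequential(nn.Linear(20, 64), activation, nn.Linear(64, 1))

def train(model, steps=200):
    x, y = torch.randn(512, 20), torch.randn(512, 1)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        loss = nn.functional.mse_loss(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

print("Swish:", train(make_mlp(Swish())))
print("ReLU :", train(make_mlp(nn.ReLU())))
```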
-
Weight Initialization Techniques
Initialize a neural network's weights using different schemes: - Xavier initialization. - Kaiming initialization. Show histograms of weight distributions before and after initialization.
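A sketch for Xavier initialization (Kaiming is analogous via `nn.init.kaiming_normal_`); plotting the histograms with matplotlib is an assumption.

```python
import torch
import torch.nn as nn
import matplotlib.pyplot as plt

model = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 256))

def weight_histogram(model, label):
    # Collect all weight matrices (ignoring biases) into one flat tensor.
    weights = torch.cat([p.detach().flatten() for p in model.parameters() if p.dim() > 1])
    plt.hist(weights.numpy(), bins=100, alpha=0.6, label=label)

weight_histogram(model, "default init")

for m in model.modules():
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)     # or nn.init.kaiming_normal_(m.weight)
        nn.init.zeros_(m.bias)

weight_histogram(model, "xavier init")
plt.legend(); plt.show()
```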
-
Debug Exploding Gradients
Create a deep feedforward net (20 layers, ReLU). Train it on dummy data. Track gradient norms across layers. Observe if gradients explode. Experiment with: - Smaller learning rate. - Gradient...
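A sketch that logs per-layer gradient norms each step; the mitigations are left as commented-out lines to toggle.

```python
import torch
import torch.nn as nn

layers = []
for _ in range(20):
    layers += [nn.Linear(128, 128), nn.ReLU()]
model = nn.Sequential(*layers, nn.Linear(128, 1))

x, y = torch.randn(64, 128), torch.randn(64, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(50):
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    norms = [p.grad.norm().item() for p in model.parameters() if p.dim() > 1]
    print(f"step {step}: first-layer grad={norms[0]:.2e}, last-layer grad={norms[-1]:.2e}")
    # Possible mitigations to experiment with:
    # torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    # or re-create the optimizer with a smaller learning rate.
    opt.step()
```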
-
Custom Collate Function
Write a custom `collate_fn` for `DataLoader` that pads variable-length sequences with zeros. Use `torch.nn.utils.rnn.pad_sequence`. Test by batching random-length tensors.
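A sketch where each dataset item is a (variable-length sequence, label) pair; returning the original lengths alongside the padded batch is an added convenience.

```python
import torch
from torch.utils.data import DataLoader
from torch.nn.utils.rnn import pad_sequence

def pad_collate(batch):
    # batch is a list of (sequence, label) pairs with different sequence lengths
    sequences, labels = zip(*batch)
    lengths = torch.tensor([len(s) for s in sequences])
    padded = pad_sequence(sequences, batch_first=True, padding_value=0.0)
    return padded, lengths, torch.tensor(labels)

# Dummy dataset of random-length 1-D tensors
data = [(torch.randn(torch.randint(3, 10, (1,)).item()), i % 2) for i in range(16)]
loader = DataLoader(data, batch_size=4, collate_fn=pad_collate)
for padded, lengths, labels in loader:
    print(padded.shape, lengths.tolist())
```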
-
Visualize Training with TensorBoard
Integrate TensorBoard into a training loop: - Log training loss and validation accuracy. - Add histograms of weights and gradients. - Write a few sample images. Open TensorBoard and verify logs.
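A sketch using random data in place of a real dataset; the log directory and tag names are arbitrary.

```python
import torch
import torch.nn as nn
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="runs/demo")
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()

    writer.add_scalar("train/loss", loss.item(), step)
    if step % 20 == 0:
        for name, p in model.named_parameters():
            writer.add_histogram(f"weights/{name}", p, step)
            writer.add_histogram(f"grads/{name}", p.grad, step)
        # Write a few sample "images" (random inputs reshaped to 28x28 here)
        writer.add_images("inputs", x[:4].clamp(0, 1).view(4, 1, 28, 28), step)

writer.close()
# Then inspect the logs with: tensorboard --logdir runs
```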
-
Implement a Siamese Network
Implement a Siamese network for MNIST digit similarity: - Two identical CNNs sharing weights. - Contrastive loss function. - Train on pairs of digits (same/different). Evaluate on test pairs.
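A sketch of the model and contrastive loss, with random tensors standing in for MNIST pairs; pair construction and the training loop are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseNet(nn.Module):
    def __init__(self):
        super().__init__()
        # A single CNN encoder; calling it on both inputs shares the weights.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(64 * 7 * 7, 128),
        )

    def forward(self, x1, x2):
        return self.encoder(x1), self.encoder(x2)

def contrastive_loss(z1, z2, same, margin=1.0):
    # same = 1 for pairs of the same digit, 0 otherwise
    dist = F.pairwise_distance(z1, z2)
    return (same * dist.pow(2) + (1 - same) * F.relu(margin - dist).pow(2)).mean()

# Dummy pair batch standing in for MNIST pairs (1 x 28 x 28 images):
x1, x2 = torch.randn(8, 1, 28, 28), torch.randn(8, 1, 28, 28)
same = torch.randint(0, 2, (8,)).float()
model = SiameseNet()
loss = contrastive_loss(*model(x1, x2), same)
loss.backward()
```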
-
Gradient Accumulation Example
Simulate large-batch training using gradient accumulation: - Train with microbatches of size 4. - Accumulate gradients over 8 steps. - Update optimizer after accumulation. Verify final result...
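A sketch; dividing each microbatch loss by the accumulation count keeps the accumulated gradient comparable to a single large-batch step.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
accum_steps = 8          # 8 microbatches of size 4 ~ effective batch size 32

x, y = torch.randn(256, 10), torch.randn(256, 1)

opt.zero_grad()
for i in range(0, 256, 4):
    xb, yb = x[i:i + 4], y[i:i + 4]
    loss = nn.functional.mse_loss(model(xb), yb)
    (loss / accum_steps).backward()       # gradients accumulate in .grad
    if (i // 4 + 1) % accum_steps == 0:
        opt.step()
        opt.zero_grad()
```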
-
Implement Early Stopping
Add early stopping to a training loop: - Monitor validation loss. - Stop training if no improvement after 5 epochs. - Save best model checkpoint. Demonstrate on MNIST subset.
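A sketch of the stopping logic, with a placeholder model and validation set standing in for the MNIST subset; the actual training step is elided.

```python
import torch
import torch.nn as nn

model = nn.Linear(20, 2)                                          # placeholder model
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_val, y_val = torch.randn(128, 20), torch.randint(0, 2, (128,))  # placeholder val set

best_val, patience, bad_epochs = float("inf"), 5, 0
for epoch in range(100):
    # ... run one training epoch here ...
    with torch.no_grad():
        val_loss = nn.functional.cross_entropy(model(x_val), y_val).item()

    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
        torch.save(model.state_dict(), "best_model.pt")   # keep the best checkpoint
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            print(f"Early stopping at epoch {epoch}")
            break

model.load_state_dict(torch.load("best_model.pt"))
```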
-
Create a Transformer Encoder Block
Implement a single Transformer encoder block: - Multi-head self-attention. - Layer normalization. - Feedforward network. Compare output with `nn.TransformerEncoderLayer`.
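A post-norm sketch built from `nn.MultiheadAttention`; comparing numerically against `nn.TransformerEncoderLayer` would also require copying its weights, which is omitted here.

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    # Post-norm variant, mirroring the default nn.TransformerEncoderLayer layout.
    def __init__(self, d_model=64, nhead=4, dim_ff=256, dropout=0.1):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(d_model, dim_ff), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(dim_ff, d_model),
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.drop = nn.Dropout(dropout)

    def forward(self, x):
        attn_out, _ = self.attn(x, x, x)                 # multi-head self-attention
        x = self.norm1(x + self.drop(attn_out))          # residual + layer norm
        x = self.norm2(x + self.drop(self.ff(x)))        # feedforward + residual + norm
        return x

x = torch.randn(2, 10, 64)                               # (batch, seq, d_model)
print(EncoderBlock()(x).shape)
```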
-
Implement Label Smoothing
Write a function to apply label smoothing for classification: - Replace one-hot targets with $$1-\epsilon$$ for true class, $$\epsilon/(K-1)$$ for others. - Use it in cross-entropy training. Show...
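A sketch of the smoothing and the matching cross-entropy; note that recent PyTorch also offers `nn.CrossEntropyLoss(label_smoothing=...)`, whose smoothing scheme differs slightly from the $$\epsilon/(K-1)$$ variant asked for here.

```python
import torch

def smooth_labels(targets, num_classes, eps=0.1):
    # One-hot targets become (1 - eps) for the true class and eps / (K - 1) elsewhere.
    smoothed = torch.full((targets.size(0), num_classes), eps / (num_classes - 1))
    smoothed.scatter_(1, targets.unsqueeze(1), 1.0 - eps)
    return smoothed

def smoothed_cross_entropy(logits, targets, eps=0.1):
    log_probs = torch.log_softmax(logits, dim=1)
    soft = smooth_labels(targets, logits.size(1), eps)
    return -(soft * log_probs).sum(dim=1).mean()

logits = torch.randn(4, 10, requires_grad=True)
targets = torch.tensor([1, 3, 5, 7])
loss = smoothed_cross_entropy(logits, targets)
loss.backward()
```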
-
Save and Load TorchScript Model
Convert a trained PyTorch model to TorchScript via tracing and scripting. Save it to disk. Reload and run inference. Compare outputs with the original model.
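A sketch showing both tracing and scripting of a small (untrained) model; the file name is arbitrary.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2)).eval()
example = torch.randn(1, 10)

traced = torch.jit.trace(model, example)     # tracing: records ops for one example input
scripted = torch.jit.script(model)           # scripting: compiles from the module itself

traced.save("model_traced.pt")
loaded = torch.jit.load("model_traced.pt")

with torch.no_grad():
    print(torch.allclose(loaded(example), model(example)))   # outputs should match
```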
-
Distributed DataParallel Basics
Simulate training with `torch.nn.DataParallel`: - Define a simple CNN. - Run it on 2 GPUs (if available). - Verify batch is split across devices. Inspect `model.module` usage.
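A sketch; the print inside `forward` shows the per-replica batch size and device when more than one GPU is present, and the code falls back to a single device otherwise.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, 3, padding=1)
        self.head = nn.Linear(16 * 32 * 32, 10)

    def forward(self, x):
        print("forward sees batch of", x.size(0), "on", x.device)  # shows the split
        return self.head(torch.relu(self.conv(x)).flatten(1))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = SmallCNN().to(device)
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)           # replicas split the batch across GPUs

out = model(torch.randn(16, 3, 32, 32).to(device))

# The underlying module (e.g. for saving weights) lives at model.module
# when wrapped, otherwise at model itself.
base = model.module if isinstance(model, nn.DataParallel) else model
torch.save(base.state_dict(), "cnn.pt")
```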
-
Mixed Precision Training with autocast
Modify a training loop to use `torch.cuda.amp.autocast`: - Wrap forward + loss in `autocast`. - Use `GradScaler` for backward. Compare training speed vs. full precision.
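A sketch assuming a CUDA device is available; `torch.cuda.amp` is the API named in the prompt (newer releases also expose it under `torch.amp`).

```python
import torch
import torch.nn as nn

device = "cuda"                                   # assumes a GPU is present
model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10)).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()

for step in range(100):
    x = torch.randn(64, 1024, device=device)
    y = torch.randint(0, 10, (64,), device=device)
    opt.zero_grad()
    with torch.cuda.amp.autocast():               # forward + loss in mixed precision
        loss = nn.functional.cross_entropy(model(x), y)
    scaler.scale(loss).backward()                 # scaled backward to avoid underflow
    scaler.step(opt)
    scaler.update()
```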