-
Implementing Layer Normalization from Scratch
Implement **Layer Normalization** as a custom `torch.nn.Module`. Unlike `BatchNorm`, `LayerNorm` normalizes across the features of a single sample, not a batch. Your implementation should...
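A minimal sketch of one possible implementation, assuming normalization over a single trailing feature dimension with learnable scale/shift parameters; the names `MyLayerNorm`, `gamma`, `beta`, and the comparison against the built-in `nn.LayerNorm` are illustrative choices, not a prescribed solution:

```python
import torch
import torch.nn as nn

class MyLayerNorm(nn.Module):
    """Layer normalization over the last dimension of the input."""
    def __init__(self, normalized_shape, eps=1e-5):
        super().__init__()
        self.eps = eps
        # Learnable scale and shift, as in nn.LayerNorm's elementwise affine
        self.gamma = nn.Parameter(torch.ones(normalized_shape))
        self.beta = nn.Parameter(torch.zeros(normalized_shape))

    def forward(self, x):
        # Statistics are computed per sample over the feature dimension,
        # not over the batch dimension as in BatchNorm
        mean = x.mean(dim=-1, keepdim=True)
        var = x.var(dim=-1, keepdim=True, unbiased=False)
        x_hat = (x - mean) / torch.sqrt(var + self.eps)
        return self.gamma * x_hat + self.beta

# Quick check against the built-in implementation
x = torch.randn(4, 16)
print(torch.allclose(MyLayerNorm(16)(x), nn.LayerNorm(16)(x), atol=1e-5))
```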
-
Gradient Clipping Example
Write code to:
1. Train a small RNN on dummy data.
2. Add gradient clipping using `torch.nn.utils.clip_grad_norm_`.
3. Print gradient norms before and after clipping.

Show that exploding gradients...
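A rough sketch of such a loop, using a made-up single-layer `nn.RNN` on random sequences; the sizes, learning rate, and the `max_norm=1.0` threshold are arbitrary placeholder choices:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
rnn = nn.RNN(input_size=8, hidden_size=32, batch_first=True)
head = nn.Linear(32, 1)
params = list(rnn.parameters()) + list(head.parameters())
opt = torch.optim.SGD(params, lr=0.1)
loss_fn = nn.MSELoss()

# Dummy data: long sequences make exploding gradients more likely
x = torch.randn(16, 100, 8)
y = torch.randn(16, 1)

for step in range(5):
    opt.zero_grad()
    out, _ = rnn(x)
    loss = loss_fn(head(out[:, -1]), y)
    loss.backward()

    # clip_grad_norm_ returns the total norm measured *before* clipping
    pre_clip = nn.utils.clip_grad_norm_(params, max_norm=1.0)
    post_clip = torch.sqrt(sum(p.grad.norm() ** 2 for p in params))
    print(f"step {step}: norm before clip {pre_clip:.3f}, after clip {post_clip:.3f}")
    opt.step()
```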
-
Weight Initialization Techniques
Initialize a neural network's weights using different schemes:
- Xavier initialization.
- Kaiming initialization.

Show histograms of weight distributions before and after initialization.
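One possible sketch, assuming a small two-layer MLP and matplotlib for the histograms; the layer sizes and bin count are arbitrary:

```python
import torch
import torch.nn as nn
import matplotlib.pyplot as plt

def build_mlp():
    return nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 256))

def weight_histogram(model, title, ax):
    # Collect only the weight matrices (skip biases)
    weights = torch.cat([p.detach().flatten() for p in model.parameters() if p.dim() > 1])
    ax.hist(weights.numpy(), bins=80)
    ax.set_title(title)

fig, axes = plt.subplots(1, 3, figsize=(12, 3))

default = build_mlp()
weight_histogram(default, "PyTorch default", axes[0])

xavier = build_mlp()
for m in xavier.modules():
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)   # variance scaled by fan_in + fan_out
weight_histogram(xavier, "Xavier (Glorot)", axes[1])

kaiming = build_mlp()
for m in kaiming.modules():
    if isinstance(m, nn.Linear):
        nn.init.kaiming_normal_(m.weight, nonlinearity="relu")  # scaled by fan_in for ReLU
weight_histogram(kaiming, "Kaiming (He)", axes[2])

plt.tight_layout()
plt.show()
```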
-
Gradient Accumulation Example
Simulate large-batch training using gradient accumulation:
- Train with microbatches of size 4.
- Accumulate gradients over 8 steps.
- Update optimizer after accumulation.

Verify final result...
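A minimal sketch under these assumptions, with a toy linear model and random microbatch data; the loss is divided by the number of accumulation steps so the accumulated gradient matches the mean-loss gradient of one large batch:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

micro_batch_size = 4
accum_steps = 8            # effective batch size = 4 * 8 = 32
data = torch.randn(accum_steps, micro_batch_size, 10)
targets = torch.randn(accum_steps, micro_batch_size, 1)

opt.zero_grad()
for i in range(accum_steps):
    loss = loss_fn(model(data[i]), targets[i])
    # Scale each microbatch loss so the sum equals the big-batch mean loss
    (loss / accum_steps).backward()

# Sanity check: accumulated gradient should equal the gradient of one batch of 32
accumulated = model.weight.grad.clone()
opt.zero_grad()
loss_fn(model(data.reshape(-1, 10)), targets.reshape(-1, 1)).backward()
print("gradients match:", torch.allclose(accumulated, model.weight.grad, atol=1e-6))

opt.step()   # single optimizer update after all microbatches
```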
-
Implement Label Smoothing
Write a function to apply label smoothing for classification:
- Replace one-hot targets with $$1-\epsilon$$ for true class, $$\epsilon/(K-1)$$ for others.
- Use it in cross-entropy training.

Show...
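A small sketch of one way to implement this by hand with the $$\epsilon/(K-1)$$ convention above (note that PyTorch's `nn.CrossEntropyLoss(label_smoothing=...)` uses a slightly different convention, placing $$\epsilon/K$$ mass on every class); the batch size, class count, and $$\epsilon=0.1$$ are placeholder values:

```python
import torch
import torch.nn.functional as F

def smooth_labels(targets, num_classes, epsilon=0.1):
    """One-hot targets with 1 - epsilon on the true class and
    epsilon / (K - 1) spread over the remaining classes."""
    smoothed = torch.full((targets.size(0), num_classes), epsilon / (num_classes - 1))
    smoothed.scatter_(1, targets.unsqueeze(1), 1.0 - epsilon)
    return smoothed

def smoothed_cross_entropy(logits, targets, epsilon=0.1):
    # Cross-entropy against the smoothed distribution:
    # -sum_k q_k * log p_k, averaged over the batch
    q = smooth_labels(targets, logits.size(1), epsilon)
    return -(q * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

logits = torch.randn(8, 5, requires_grad=True)
targets = torch.randint(0, 5, (8,))
loss = smoothed_cross_entropy(logits, targets)
loss.backward()
print(loss.item())
```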
-
Mixed Precision Training with autocast
Modify a training loop to use `torch.cuda.amp.autocast`:
- Wrap forward + loss in `autocast`.
- Use `GradScaler` for backward.

Compare training speed vs. full precision.
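A sketch of the modified loop, assuming a CUDA device is available and using a toy classifier; the model size, batch size, and step count are placeholders:

```python
import torch
import torch.nn as nn

device = "cuda"  # autocast as written here targets CUDA
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).to(device)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()

x = torch.randn(256, 512, device=device)
y = torch.randint(0, 10, (256,), device=device)

for step in range(100):
    opt.zero_grad()
    # Forward pass and loss run in reduced precision where safe
    with torch.cuda.amp.autocast():
        loss = loss_fn(model(x), y)
    # GradScaler scales the loss to avoid underflow in fp16 gradients
    scaler.scale(loss).backward()
    scaler.step(opt)
    scaler.update()
```

For the speed comparison, one could time this same loop with and without the `autocast` context, calling `torch.cuda.synchronize()` before reading the clock so the measurement reflects completed GPU work.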