Mixed Precision Training with autocast
Modify a training loop to use torch.cuda.amp.autocast:
- Wrap the forward pass and loss computation in autocast.
- Use GradScaler for the backward pass.
- Compare training speed vs. full precision (see the sketch after this list).
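A minimal sketch of such a loop, assuming a CUDA device is available; the model, optimizer, criterion, and dataloader names are hypothetical stand-ins for whatever the existing loop already uses:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins -- replace with the loop's actual components.
model = nn.Linear(512, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

scaler = torch.cuda.amp.GradScaler()  # rescales the loss to avoid fp16 gradient underflow

for inputs, targets in dataloader:  # dataloader assumed to yield (inputs, targets) batches
    inputs, targets = inputs.cuda(), targets.cuda()
    optimizer.zero_grad()

    # Forward pass + loss run under autocast, so eligible ops use fp16.
    with torch.cuda.amp.autocast():
        outputs = model(inputs)
        loss = criterion(outputs, targets)

    # Backward on the scaled loss; step() unscales gradients before the
    # optimizer update, and update() adjusts the scale for the next iteration.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```

For the speed comparison, time the same loop with and without the autocast/GradScaler wrappers, calling torch.cuda.synchronize() before reading the clock, since CUDA kernels launch asynchronously.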