Manual Gradient Descent Step
Simulate one step of gradient descent for a simple quadratic loss.
Problem
Given a scalar parameter w initialized at 5.0, minimize the quadratic loss (w - 3) ** 2 using PyTorch.
- Input: None (fixed setup).
- Output: Updated parameter value after one gradient descent step.
Example
import torch

w = torch.tensor(5.0, requires_grad=True)
loss = (w - 3) ** 2
loss.backward()  # populates w.grad with d(loss)/dw
with torch.no_grad():
    w -= 0.1 * w.grad  # in-place update keeps w a leaf tensor
print(w.item())  # should be closer to 3 than 5
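To check the result by hand: the analytic gradient is d(loss)/dw = 2(w - 3), which is 4.0 at w = 5.0, so one step with learning rate 0.1 gives 5.0 - 0.1 * 4.0 = 4.6.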
Solution Sketch
Compute the gradient with .backward(), then update the parameter in place under torch.no_grad() using w -= lr * w.grad (reassigning w = w - lr * grad would replace the leaf tensor and break further gradient tracking). Reset gradients with w.grad.zero_() before the next step, since backward() accumulates them.
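To extend this to repeated steps, the same pattern runs in a loop with the gradient reset between iterations. A minimal sketch, assuming a learning rate of 0.1 and 20 iterations (both arbitrary choices, not part of the original problem):

import torch

w = torch.tensor(5.0, requires_grad=True)
lr = 0.1  # learning rate; arbitrary choice for this sketch

for step in range(20):
    loss = (w - 3) ** 2
    loss.backward()        # accumulates d(loss)/dw into w.grad
    with torch.no_grad():
        w -= lr * w.grad   # in-place update keeps w a leaf tensor
    w.grad.zero_()         # reset, because backward() accumulates gradients

print(w.item())  # converges toward 3.0

Each step scales the error (w - 3) by a factor of (1 - 2 * lr) = 0.8, so w approaches 3.0 geometrically; omitting w.grad.zero_() would make the accumulated gradient overshoot instead.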