Implement a Simple MLP
Build and run a minimal Multi-Layer Perceptron (MLP) using `torch.nn`.
Problem
Construct a 2-layer MLP with ReLU activation for input of size 10 and output of size 2.
- Input: Tensor of shape `(batch_size, 10)`.
- Output: Tensor of shape `(batch_size, 2)`.
Example
```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        # Two linear layers with a ReLU non-linearity in between
        self.layers = nn.Sequential(
            nn.Linear(10, 32),
            nn.ReLU(),
            nn.Linear(32, 2),
        )

    def forward(self, x):
        return self.layers(x)

x = torch.randn(4, 10)  # batch of 4 input vectors
model = MLP()
print(model(x).shape)  # Expected: torch.Size([4, 2])
```
Solution Sketch
Use `nn.Sequential` to stack layers. The forward pass applies linear → ReLU → linear. This two-layer pattern is a basic building block for neural nets.
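For intuition, here is a minimal sketch of the same forward computation written without `nn.Sequential`, with the two linear layers defined explicitly and `torch.nn.functional.relu` applied between them. The class name `MLPExplicit` is illustrative, not part of the original example:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLPExplicit(nn.Module):
    """Same 2-layer MLP, with the forward pass written out step by step.
    (Illustrative equivalent of the nn.Sequential version above.)"""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 32)  # input -> hidden
        self.fc2 = nn.Linear(32, 2)   # hidden -> output

    def forward(self, x):
        h = F.relu(self.fc1(x))  # linear -> ReLU
        return self.fc2(h)       # -> linear

x = torch.randn(4, 10)
print(MLPExplicit()(x).shape)  # torch.Size([4, 2])
```

Both versions compute the same function; `nn.Sequential` is simply a convenience for stacking modules where each module's output feeds directly into the next module's input.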