Solve partial differential equations (heat, Burgers) by training a neural network whose loss includes the PDE residual, computed with autograd. An implementation of the idea from Raissi, Perdikaris & Karniadakis (2019), validated against analytical solutions.
A PINN is a neural network uθ(x, t) whose loss combines three pieces: initial conditions, boundary conditions, and the PDE residual evaluated at collocation points in the domain. Partial derivatives are computed with torch.autograd.grad — no finite differences, no mesh.
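A minimal sketch (not the repo's code) of the mechanism just described: torch.autograd.grad differentiates a toy u(x, t) = x²t exactly, the same way a PINN differentiates uθ.

```python
import torch

# Hedged sketch: derivatives of a toy u(x, t) = x^2 * t via torch.autograd.grad,
# the same mechanism a PINN applies to the network output u_theta(x, t).
x = torch.rand(5, 1, dtype=torch.float64, requires_grad=True)
t = torch.rand(5, 1, dtype=torch.float64, requires_grad=True)
u = x**2 * t

# create_graph=True keeps the graph so second derivatives can be taken.
u_x = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u), create_graph=True)[0]
u_t = torch.autograd.grad(u, t, grad_outputs=torch.ones_like(u), create_graph=True)[0]
u_xx = torch.autograd.grad(u_x, x, grad_outputs=torch.ones_like(u_x), create_graph=True)[0]

print(torch.allclose(u_x, 2 * x * t))  # analytic: u_x = 2xt
print(torch.allclose(u_t, x**2))       # analytic: u_t = x^2
print(torch.allclose(u_xx, 2 * t))     # analytic: u_xx = 2t
```

The derivatives are exact (chain rule, not finite differences), which is why no mesh is needed.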
This project implements two classic cases: the heat equation and the Burgers equation.
PINNs are at the active research frontier (Nature, AIAA, JCP papers). A technical blog post explaining the implementation generates visibility and organic contacts.
Generic PDE:

  u_t + N[u] = 0,  (x, t) ∈ Ω × (0, T]

Heat equation:

  u_t = α u_xx

Burgers equation:

  u_t + u u_x = ν u_xx

Composite PINN loss:

  L(θ) = L_PDE + L_IC + L_BC, where each term is a mean squared error: L_PDE over the residual at collocation points, L_IC over the initial condition, and L_BC over the boundary conditions.
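The composite loss can be sketched as follows; the function signature, argument names, and equal weighting of the three terms are assumptions for illustration, not the repo's API.

```python
import torch

def pinn_loss(model, x_f, t_f, x_ic, u_ic, x_bc, t_bc, u_bc, residual_fn):
    """Composite PINN loss sketch (names and 1.0 weights are assumptions).

    model(x, t) -> u prediction; residual_fn(model, x, t) -> PDE residual tensor.
    """
    # PDE residual at interior collocation points
    loss_pde = torch.mean(residual_fn(model, x_f, t_f) ** 2)
    # Initial condition: u(x, 0) = u_ic
    loss_ic = torch.mean((model(x_ic, torch.zeros_like(x_ic)) - u_ic) ** 2)
    # Boundary condition: u = u_bc on the spatial boundary
    loss_bc = torch.mean((model(x_bc, t_bc) - u_bc) ** 2)
    return loss_pde + loss_ic + loss_bc
```

In practice the three terms are often weighted differently; equal weights are the simplest starting point.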
Derivatives are taken with torch.autograd.grad with create_graph=True, so higher-order derivatives (e.g. u_xx) can be built on top of first-order ones.

```
04-pinns/
├── README.md
├── requirements.txt
├── src/
│   ├── pinn.py      # u_theta(x, t) network with Tanh activation
│   ├── physics.py   # heat and Burgers residuals via autograd
│   └── train.py     # loop with composite loss (PDE + BC + IC)
└── notebooks/       # comparison with analytical / numerical solution
```
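A possible sketch of the uθ(x, t) network in pinn.py; the README only specifies Tanh activations, so the width and depth here are assumptions.

```python
import torch
import torch.nn as nn

class PINN(nn.Module):
    """Sketch of the u_theta(x, t) network; layer width/depth are assumptions."""

    def __init__(self, hidden=20, depth=4):
        super().__init__()
        layers = [nn.Linear(2, hidden), nn.Tanh()]        # input: (x, t)
        for _ in range(depth - 1):
            layers += [nn.Linear(hidden, hidden), nn.Tanh()]
        layers.append(nn.Linear(hidden, 1))               # output: u
        self.net = nn.Sequential(*layers)

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=1))
```

Tanh is the common choice for PINNs because it is smooth, so second derivatives of the network are well behaved (unlike ReLU, whose second derivative is zero almost everywhere).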
- heat_residual(u, x, t, α) function using autograd.grad
- burgers_residual(u, x, t, ν) function

Success criteria:
- Heat: relative L² error < 1% vs. the analytical solution across the entire domain.
- Burgers: relative L² error < 5% vs. a high-resolution numerical reference.
- Correct capture of the emergent shock without pathological oscillations.
Results: learned solutions compared against the analytical/numerical reference, with relative L² errors. No results published yet; planned figures are space-time heatmaps of the learned u(x, t) and side-by-side comparisons with the exact/numerical solution.
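The relative L² error used as the success metric is ‖u_pred − u_exact‖₂ / ‖u_exact‖₂ over the sampled grid; a minimal helper, assuming a NumPy-based evaluation pipeline:

```python
import numpy as np

def relative_l2_error(u_pred, u_exact):
    """Relative L2 error ||u_pred - u_exact||_2 / ||u_exact||_2 on a sampled grid."""
    u_pred = np.asarray(u_pred, dtype=float).ravel()
    u_exact = np.asarray(u_exact, dtype=float).ravel()
    return np.linalg.norm(u_pred - u_exact) / np.linalg.norm(u_exact)
```

A value below 0.01 corresponds to the < 1% target for the heat equation, and below 0.05 to the < 5% target for Burgers.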