04 / 05 · ML for science · PDEs · Planned

Physics-Informed Neural Networks (PINNs)

Solve partial differential equations (heat, Burgers) by training a neural network whose loss includes the PDE residual, computed with autograd. Implementation of the idea by Raissi, Perdikaris & Karniadakis (2019), validated against analytical solutions.

// Overview

About this project

A PINN is a neural network uθ(x, t) whose loss combines three pieces: initial conditions, boundary conditions, and the PDE residual evaluated at collocation points in the domain. Partial derivatives are computed with torch.autograd.grad — no finite differences, no mesh.
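
As a minimal sketch of how these derivatives can be obtained (function and variable names are illustrative, not the project's final API), assuming x and t are column tensors of collocation points:

import torch

def derivatives(u_net, x, t):
    # u, u_t, u_x, u_xx at collocation points via autograd; no finite differences.
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    u = u_net(torch.cat([x, t], dim=1))               # u_theta(x, t), shape (N, 1)
    def d(out, var):
        # create_graph=True keeps the graph so u_x can be differentiated again
        return torch.autograd.grad(out, var, grad_outputs=torch.ones_like(out),
                                   create_graph=True)[0]
    u_t = d(u, t)
    u_x = d(u, x)
    u_xx = d(u_x, x)                                  # second-order derivative
    return u, u_t, u_x, u_xx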

This project implements two classic cases:

  • 1D heat equation: linear case with a known analytical solution (a closed-form reference is given after this list). Serves as a sanity check.
  • 1D Burgers equation: classic nonlinear case with shock-like discontinuities that challenge any numerical method.
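
For the sanity check, one standard closed-form reference is available, assuming constant α, the domain x ∈ [0, 1], homogeneous Dirichlet boundaries u(0, t) = u(1, t) = 0 and the initial condition u(x, 0) = sin(πx) (the project's actual setup may differ):

u(x, t) = sin(πx) · exp(−α π² t)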

PINNs sit at an active research frontier (papers in Nature, AIAA venues, and JCP). A technical blog post explaining the implementation generates visibility and organic contacts.

Mathematical formulation

Generic PDE:

ℱ[u](x, t) = 0

Heat equation:

u_t − α u_xx = 0

Burgers equation:

u_t + u u_x − ν u_xx = 0

Composite PINN loss:

L = L_PDE + λ_B L_boundary + λ_I L_initial
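
A compact sketch of how this composite loss could be assembled (the real code is meant to live in src/train.py; residual_fn stands for one of the residuals in src/physics.py, built from the derivatives sketched above, and the default weights are illustrative):

import torch

def composite_loss(u_net, residual_fn, colloc, bc, ic, lam_b=1.0, lam_i=1.0):
    # L = L_PDE + lambda_B * L_boundary + lambda_I * L_initial
    x_f, t_f = colloc                                   # collocation points inside the domain
    loss_pde = residual_fn(u_net, x_f, t_f).pow(2).mean()
    x_b, t_b, u_b = bc                                  # boundary points and target values
    loss_bc = (u_net(torch.cat([x_b, t_b], dim=1)) - u_b).pow(2).mean()
    x_i, t_i, u_i = ic                                  # initial-condition points and targets
    loss_ic = (u_net(torch.cat([x_i, t_i], dim=1)) - u_i).pow(2).mean()
    return loss_pde + lam_b * loss_bc + lam_i * loss_ic
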
// Skills demonstrated

What skills it certifies

Advanced autograd
Second-order partial derivatives via torch.autograd.grad with create_graph=True.
Numerical PDEs
Formulate residuals in smooth form, convert physical constraints into loss terms.
Optimization
Balance composite loss weights λ, learning-rate schedulers, and the Adam → L-BFGS switch (see the sketch after this list).
Paper implementation
Translate Raissi et al. (JCP 2019) directly into clean, testable PyTorch code.
Validation
Relative L² error against the analytical solution (heat) and a high-resolution numerical reference (Burgers).
2D visualization
Space-time heatmaps, snapshots of u(x) at various t, side-by-side comparison with classical solver.
Technical writing
Blog post explaining trade-offs: when a PINN beats a classical solver and when it doesn't.
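
One common recipe for the Adam → L-BFGS handoff mentioned above (hyperparameter values are placeholders, not the project's final settings):

import torch

def train_two_stage(model, loss_fn, adam_steps=5000, lr=1e-3):
    # Stage 1: Adam gets the parameters into a good basin.
    adam = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(adam_steps):
        adam.zero_grad()
        loss_fn().backward()
        adam.step()
    # Stage 2: L-BFGS refines; it calls the closure itself to re-evaluate the loss.
    lbfgs = torch.optim.LBFGS(model.parameters(), max_iter=500,
                              line_search_fn="strong_wolfe")
    def closure():
        lbfgs.zero_grad()
        loss = loss_fn()
        loss.backward()
        return loss
    lbfgs.step(closure)
    return model
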
PyTorch · NumPy · SciPy · Matplotlib · Autograd · PDEs · Research · Jupyter
// Structure

Project organization

04-pinns/
├── README.md
├── requirements.txt
├── src/
│   ├── pinn.py        # u_theta(x, t) network with Tanh activation (sketched below)
│   ├── physics.py     # heat and Burgers residuals via autograd
│   └── train.py       # loop with composite loss (PDE + BC + IC)
└── notebooks/      # comparison with analytical / numerical solution
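
A plausible shape for the network in src/pinn.py (layer widths and depth here are assumptions, not final choices):

import torch.nn as nn

class PINN(nn.Module):
    # Fully connected u_theta(x, t) with Tanh activations, as noted in the tree above.
    def __init__(self, hidden=20, depth=4):
        super().__init__()
        layers = [nn.Linear(2, hidden), nn.Tanh()]         # input: stacked (x, t)
        for _ in range(depth - 1):
            layers += [nn.Linear(hidden, hidden), nn.Tanh()]
        layers.append(nn.Linear(hidden, 1))                 # output: scalar u
        self.net = nn.Sequential(*layers)

    def forward(self, xt):
        return self.net(xt)                                 # xt: (N, 2) -> (N, 1)
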
// Roadmap

Project status

Planned. No results published yet.

// Metrics

Success criteria

Heat equation

Relative L² error < 1% vs. analytical solution across the entire domain.

Burgers equation

Relative L² error < 5% vs. high-resolution numerical reference.

Correct capture of the emerging shock-like front without spurious oscillations.
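
Both criteria use the same metric; a straightforward sketch, assuming the prediction and the reference are sampled on the same space-time grid:

import torch

def relative_l2_error(u_pred, u_ref):
    # ||u_pred - u_ref||_2 / ||u_ref||_2 over all sampled (x, t) points
    return (torch.linalg.norm(u_pred - u_ref) / torch.linalg.norm(u_ref)).item()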

// Results

Outputs and final metrics

Learned solutions, comparison to analytical/numerical reference, and L² errors.

Pending

No results published yet.

What goes here: space-time heatmaps of learned u(x,t), side-by-side comparison with exact/numerical solution, and relative L² errors.