# Trusting PINNs: Validating a Neural Solver Against a Standard Numerical Solver

## Dependencies
These packages are required to run the following tutorial:

```julia
using Pkg
Pkg.add(["NeuralPDE", "MethodOfLines", "OrdinaryDiffEq", "ModelingToolkit", "Lux", "Optimization", "OptimizationOptimJL", "OptimizationOptimisers", "DomainSets", "Plots", "Statistics"])
```

## 1D Viscous Burgers' Equation:

```math
\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} - \nu \frac{\partial^2 u}{\partial x^2} = 0
```

where $\nu = 0.02$ is the viscosity coefficient. The problem is defined on the domain $t \in [0, 1]$ and $x \in [-1, 1]$ with:
* **Initial Condition:** $u(0, x) = -\sin(\pi x)$
* **Boundary Conditions:** $u(t, -1) = u(t, 1) = 0$

Implementing this using `ModelingToolkit.jl`:

```@example burgers_comp
using NeuralPDE, MethodOfLines, OrdinaryDiffEq, ModelingToolkit
using Lux, Optimization, OptimizationOptimJL, OptimizationOptimisers
using DomainSets, Plots, Random, Statistics

# Setting seed to ensure reproducibility:
rng = Random.default_rng()
Random.seed!(rng, 42)

# Tracker for training statistics:
mutable struct TrainingStats
    losses::Vector{Float64}
    best_loss::Float64
end

@parameters t x
@variables u(..)
Dt = Differential(t)
Dx = Differential(x)
Dxx = Differential(x)^2

# Setting values of constants:
ν = 0.02
t_max = 1.0
x_min = -1.0
x_max = 1.0

# Defining the 1D Viscous Burgers' Equation:
eq = Dt(u(t,x)) + u(t,x)*Dx(u(t,x)) - ν*Dxx(u(t,x)) ~ 0

# Setting Boundary Conditions:
bcs = [u(0.0, x) ~ -sin(π * x),
u(t, x_min) ~ 0.0,
u(t, x_max) ~ 0.0]
domains = [t ∈ Interval(0.0, t_max), x ∈ Interval(x_min, x_max)]

# Defining the PDE System:
@named pde_system = PDESystem(eq, bcs, domains, [t, x], [u(t, x)])
```

## Generating the ground truth (MethodOfLines):
We discretize the spatial domain using a finite difference scheme and solve the resulting ODE system with a high-order solver (`Tsit5`). This provides a ground-truth solution against which to compare the neural network's results.

```@example burgers_comp

dx_mol = 0.01
discretization_mol = MOLFiniteDifference([x => dx_mol], t)
prob_mol = discretize(pde_system, discretization_mol)
@time sol_mol = solve(prob_mol, Tsit5(), saveat=0.01)
```
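
Before treating this solution as ground truth, it is worth a quick sanity check that the Dirichlet boundary conditions are (approximately) satisfied on the discretized grid. A minimal sketch, using the same symbolic indexing as in the analysis section below (`u_mol` is a local name introduced here for illustration):

```@example burgers_comp
# Sanity check: the boundary columns of the MOL solution should stay near zero.
u_mol = sol_mol[u(t, x)]  # matrix of u values, one row per saved time point
println("Solution grid size: ", size(u_mol))
println("Max |u| on left boundary:  ", maximum(abs, u_mol[:, 1]))
println("Max |u| on right boundary: ", maximum(abs, u_mol[:, end]))
```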

## Training a PINN:
We train a fully connected neural network using a grid training strategy. A callback records the loss history and tracks the best loss achieved.

```@example burgers_comp
# Defining the architecture of the neural network:
chain = Lux.Chain(
    Dense(2, 20, Lux.tanh),
    Dense(20, 30, Lux.tanh),
    Dense(30, 30, Lux.tanh),
    Dense(30, 20, Lux.tanh),
    Dense(20, 20, Lux.tanh),
    Dense(20, 1)
)

strategy = GridTraining(0.05)
discretization_pinn = PhysicsInformedNN(chain, strategy)
prob_pinn = discretize(pde_system, discretization_pinn)

stats = TrainingStats([], Inf)

callback = function (p, l)
    push!(stats.losses, l)
    if l < stats.best_loss
        stats.best_loss = l
    end

    # Abort training if the loss becomes NaN:
    if isnan(l)
        println("!!! WARNING: Loss became NaN. Aborting training.")
        return true
    end

    if length(stats.losses) % 500 == 0
        println("Iter: $(length(stats.losses)) | Current: $l | Best: $(stats.best_loss)")
    end
    return false
end
```

## Training strategy
The training of the neural network consists of two stages:
1. **Adam**: Rapidly descends the loss landscape to find a rough solution (2000 iterations).
2. **BFGS**: A second-order optimizer used to fine-tune the solution to high precision (1000 iterations).

```@example burgers_comp
# Use of Adam optimizer:
res_adam = Optimization.solve(prob_pinn, OptimizationOptimisers.Adam(0.01); callback=callback, maxiters=2000)
# Use of the BFGS Optimizer:
prob_bfgs = remake(prob_pinn, u0=res_adam.u)
@time res_pinn = Optimization.solve(prob_bfgs, BFGS(); callback=callback, maxiters=1000)
```
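
The loss history recorded by the callback can be used to visualize convergence across the two stages. A minimal sketch, assuming the Adam stage ran its full 2000 iterations before the BFGS handoff:

```@example burgers_comp
# Plot the recorded losses on a log scale; the dashed line marks the
# (assumed) switch from Adam to BFGS at iteration 2000.
plot(stats.losses, yscale=:log10, xlabel="Iteration", ylabel="Loss",
    label="Training loss", lw=2)
vline!([2000], ls=:dash, color=:gray, label="Adam → BFGS")
```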

## Results Analysis:
We generate an animation that visually compares the PINN prediction against the ground truth from the numerical solver at each saved time point, and compute the global mean squared error (MSE):
```@example burgers_comp
ts_mol = sol_mol[t]
xs_mol = sol_mol[x]
u_truth_matrix = sol_mol[u(t, x)]
phi = discretization_pinn.phi
u_predict_matrix = [first(phi([t_val, x_val], res_pinn.u)) for t_val in ts_mol, x_val in xs_mol]
mse_global = mean(abs2, u_truth_matrix .- u_predict_matrix)
println("FINAL GLOBAL MSE: $mse_global")
anim = @animate for (i, t_val) in enumerate(ts_mol)
    truth_slice = u_truth_matrix[i, :]
    pred_slice = u_predict_matrix[i, :]

    p1 = plot(xs_mol, truth_slice, label="MOL Ground Truth",
        lw=3, color=:blue, ylims=(-1.2, 1.2), legend=:topright,
        title="Burgers' (MOL) vs PINN (t = $(round(t_val, digits=2)))")
    plot!(p1, xs_mol, pred_slice, label="PINN Prediction",
        ls=:dash, lw=3, color=:orange)

    err_slice = abs.(truth_slice .- pred_slice)
    p2 = plot(xs_mol, err_slice, label="Abs Error",
        color=:red, fill=(0, 0.2, :red), ylims=(0, 0.2),
        xlabel="x", ylabel="Error")

    plot(p1, p2, layout=(2, 1))
end

# Saving the gif animation:
mkpath("assets")
gif(anim, "assets/burgers_mol_pinn_evolution.gif", fps=15)
```
![Burgers Evolution](assets/burgers_mol_pinn_evolution.gif)
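
The global MSE hides where the error concentrates. A minimal sketch to locate the worst time slice; for this initial condition the steepest gradient forms near $x = 0$, which is typically where the PINN error peaks:

```@example burgers_comp
# Per-time-slice MSE, then find the slice where the PINN does worst.
slice_mse = [mean(abs2, u_truth_matrix[i, :] .- u_predict_matrix[i, :])
             for i in eachindex(ts_mol)]
i_worst = argmax(slice_mse)
println("Worst time slice: t = $(round(ts_mol[i_worst], digits=2)) ",
    "with MSE = $(slice_mse[i_worst])")
```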

## Performance Summary
While the MethodOfLines.jl approach is significantly faster for this equation, we have demonstrated the ability of a PINN to learn the solution without any mesh generation. This may be useful for solving PDEs in higher dimensions or for inverse problems.