Commit

format markdown
ArnoStrouwen committed Jan 24, 2023
1 parent 3989e0b commit 06249a8
Showing 38 changed files with 1,961 additions and 1,869 deletions.
3 changes: 2 additions & 1 deletion .JuliaFormatter.toml
Original file line number Diff line number Diff line change
@@ -1 +1,2 @@
style = "sciml"
style = "sciml"
format_markdown = true
1 change: 0 additions & 1 deletion LICENSE.md
@@ -19,4 +19,3 @@ The SciMLSensitivity.jl package is licensed under the MIT "Expat" License:
> LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
> OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
> SOFTWARE.
>
7 changes: 3 additions & 4 deletions README.md
@@ -7,17 +7,16 @@
[![Build Status](https://github.com/SciML/SciMLSensitivity.jl/workflows/CI/badge.svg)](https://github.com/SciML/SciMLSensitivity.jl/actions?query=workflow%3ACI)
[![Build status](https://badge.buildkite.com/e0ee4d9d914eb44a43c291d78c53047eeff95e7edb7881b6f7.svg)](https://buildkite.com/julialang/scimlsensitivity-dot-jl)

[![ColPrac: Contributor's Guide on Collaborative Practices for Community Packages](https://img.shields.io/badge/ColPrac-Contributor's%20Guide-blueviolet)](https://github.com/SciML/ColPrac)
[![ColPrac: Contributor's Guide on Collaborative Practices for Community Packages](https://img.shields.io/badge/ColPrac-Contributor%27s%20Guide-blueviolet)](https://github.com/SciML/ColPrac)
[![SciML Code Style](https://img.shields.io/static/v1?label=code%20style&message=SciML&color=9558b2&labelColor=389826)](https://github.com/SciML/SciMLStyle)

SciMLSensitivity.jl is a component package in the [SciML Scientific Machine Learning ecosystem](https://sciml.ai/).
SciMLSensitivity.jl is a component package in the [SciML Scientific Machine Learning ecosystem](https://sciml.ai/).
It holds the sensitivity analysis utilities. Users interested in using this
functionality should check out [DifferentialEquations.jl](https://docs.sciml.ai/DiffEqDocs/stable/).


## Tutorials and Documentation

For information on using the package,
[see the stable documentation](https://docs.sciml.ai/SciMLSensitivity/stable/). Use the
[in-development documentation](https://docs.sciml.ai/SciMLSensitivity/dev/) for the version of
the documentation, which contains the unreleased features.
the documentation, which contains the unreleased features.
50 changes: 29 additions & 21 deletions docs/src/Benchmark.md
@@ -32,8 +32,8 @@ at this time.

Quick summary:

- `BacksolveAdjoint` can be the fastest (but use with caution!); about 25% faster
- Using `ZygoteVJP` is faster than other vjp choices with FastDense due to the overloads
- `BacksolveAdjoint` can be the fastest (but use with caution!); about 25% faster
- Using `ZygoteVJP` is faster than other vjp choices with FastDense due to the overloads

```julia
using DiffEqFlux, OrdinaryDiffEq, Flux, Optim, Plots, SciMLSensitivity,
@@ -46,13 +46,13 @@ tsteps = range(tspan[1], tspan[2], length = datasize)

function trueODEfunc(du, u, p, t)
true_A = [-0.1 2.0; -2.0 -0.1]
du .= ((u.^3)'true_A)'
du .= ((u .^ 3)'true_A)'
end

prob_trueode = ODEProblem(trueODEfunc, u0, tspan)
ode_data = Array(solve(prob_trueode, Tsit5(), saveat = tsteps))

dudt2 = FastChain((x, p) -> x.^3,
dudt2 = FastChain((x, p) -> x .^ 3,
FastDense(2, 50, tanh),
FastDense(50, 2))
Random.seed!(100)
@@ -66,94 +66,102 @@ function loss_neuralode(p)
return loss
end

@btime Zygote.gradient(loss_neuralode,p)
@btime Zygote.gradient(loss_neuralode, p)
# 2.709 ms (56506 allocations: 6.62 MiB)

prob_neuralode_interpolating = NeuralODE(dudt2, tspan, Tsit5(), saveat = tsteps, sensealg=InterpolatingAdjoint(autojacvec=ReverseDiffVJP(true)))
prob_neuralode_interpolating = NeuralODE(dudt2, tspan, Tsit5(), saveat = tsteps,
sensealg = InterpolatingAdjoint(autojacvec = ReverseDiffVJP(true)))

function loss_neuralode_interpolating(p)
pred = Array(prob_neuralode_interpolating(u0, p))
loss = sum(abs2, ode_data .- pred)
return loss
end

@btime Zygote.gradient(loss_neuralode_interpolating,p)
@btime Zygote.gradient(loss_neuralode_interpolating, p)
# 5.501 ms (103835 allocations: 2.57 MiB)

prob_neuralode_interpolating_zygote = NeuralODE(dudt2, tspan, Tsit5(), saveat = tsteps, sensealg=InterpolatingAdjoint(autojacvec=ZygoteVJP()))
prob_neuralode_interpolating_zygote = NeuralODE(dudt2, tspan, Tsit5(), saveat = tsteps,
sensealg = InterpolatingAdjoint(autojacvec = ZygoteVJP()))

function loss_neuralode_interpolating_zygote(p)
pred = Array(prob_neuralode_interpolating_zygote(u0, p))
loss = sum(abs2, ode_data .- pred)
return loss
end

@btime Zygote.gradient(loss_neuralode_interpolating_zygote,p)
@btime Zygote.gradient(loss_neuralode_interpolating_zygote, p)
# 2.899 ms (56150 allocations: 6.61 MiB)

prob_neuralode_backsolve = NeuralODE(dudt2, tspan, Tsit5(), saveat = tsteps, sensealg=BacksolveAdjoint(autojacvec=ReverseDiffVJP(true)))
prob_neuralode_backsolve = NeuralODE(dudt2, tspan, Tsit5(), saveat = tsteps,
sensealg = BacksolveAdjoint(autojacvec = ReverseDiffVJP(true)))

function loss_neuralode_backsolve(p)
pred = Array(prob_neuralode_backsolve(u0, p))
loss = sum(abs2, ode_data .- pred)
return loss
end

@btime Zygote.gradient(loss_neuralode_backsolve,p)
@btime Zygote.gradient(loss_neuralode_backsolve, p)
# 4.871 ms (85855 allocations: 2.20 MiB)

prob_neuralode_quad = NeuralODE(dudt2, tspan, Tsit5(), saveat = tsteps, sensealg=QuadratureAdjoint(autojacvec=ReverseDiffVJP(true)))
prob_neuralode_quad = NeuralODE(dudt2, tspan, Tsit5(), saveat = tsteps,
sensealg = QuadratureAdjoint(autojacvec = ReverseDiffVJP(true)))

function loss_neuralode_quad(p)
pred = Array(prob_neuralode_quad(u0, p))
loss = sum(abs2, ode_data .- pred)
return loss
end

@btime Zygote.gradient(loss_neuralode_quad,p)
@btime Zygote.gradient(loss_neuralode_quad, p)
# 11.748 ms (79549 allocations: 3.87 MiB)

prob_neuralode_backsolve_tracker = NeuralODE(dudt2, tspan, Tsit5(), saveat = tsteps, sensealg=BacksolveAdjoint(autojacvec=TrackerVJP()))
prob_neuralode_backsolve_tracker = NeuralODE(dudt2, tspan, Tsit5(), saveat = tsteps,
sensealg = BacksolveAdjoint(autojacvec = TrackerVJP()))

function loss_neuralode_backsolve_tracker(p)
pred = Array(prob_neuralode_backsolve_tracker(u0, p))
loss = sum(abs2, ode_data .- pred)
return loss
end

@btime Zygote.gradient(loss_neuralode_backsolve_tracker,p)
@btime Zygote.gradient(loss_neuralode_backsolve_tracker, p)
# 27.604 ms (186143 allocations: 12.22 MiB)

prob_neuralode_backsolve_zygote = NeuralODE(dudt2, tspan, Tsit5(), saveat = tsteps, sensealg=BacksolveAdjoint(autojacvec=ZygoteVJP()))
prob_neuralode_backsolve_zygote = NeuralODE(dudt2, tspan, Tsit5(), saveat = tsteps,
sensealg = BacksolveAdjoint(autojacvec = ZygoteVJP()))

function loss_neuralode_backsolve_zygote(p)
pred = Array(prob_neuralode_backsolve_zygote(u0, p))
loss = sum(abs2, ode_data .- pred)
return loss
end

@btime Zygote.gradient(loss_neuralode_backsolve_zygote,p)
@btime Zygote.gradient(loss_neuralode_backsolve_zygote, p)
# 2.091 ms (49883 allocations: 6.28 MiB)

prob_neuralode_backsolve_false = NeuralODE(dudt2, tspan, Tsit5(), saveat = tsteps, sensealg=BacksolveAdjoint(autojacvec=ReverseDiffVJP(false)))
prob_neuralode_backsolve_false = NeuralODE(dudt2, tspan, Tsit5(), saveat = tsteps,
sensealg = BacksolveAdjoint(autojacvec = ReverseDiffVJP(false)))

function loss_neuralode_backsolve_false(p)
pred = Array(prob_neuralode_backsolve_false(u0, p))
loss = sum(abs2, ode_data .- pred)
return loss
end

@btime Zygote.gradient(loss_neuralode_backsolve_false,p)
@btime Zygote.gradient(loss_neuralode_backsolve_false, p)
# 4.822 ms (9956 allocations: 1.03 MiB)

prob_neuralode_tracker = NeuralODE(dudt2, tspan, Tsit5(), saveat = tsteps, sensealg=TrackerAdjoint())
prob_neuralode_tracker = NeuralODE(dudt2, tspan, Tsit5(), saveat = tsteps,
sensealg = TrackerAdjoint())

function loss_neuralode_tracker(p)
pred = Array(prob_neuralode_tracker(u0, p))
loss = sum(abs2, ode_data .- pred)
return loss
end

@btime Zygote.gradient(loss_neuralode_tracker,p)
@btime Zygote.gradient(loss_neuralode_tracker, p)
# 12.614 ms (76346 allocations: 3.12 MiB)
```
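The benchmark above threads each `sensealg` choice through the `NeuralODE` constructor, but the same keyword is accepted by `solve` itself, which is how these adjoint options apply to a plain `ODEProblem`. A minimal sketch of that usage — the toy ODE and loss below are illustrative, not part of the original benchmark, and assume `OrdinaryDiffEq`, `SciMLSensitivity`, and `Zygote` are installed:

```julia
using OrdinaryDiffEq, SciMLSensitivity, Zygote

# Toy scalar-in-a-vector ODE: du/dt = p * u (hypothetical example, not from the benchmark)
f(u, p, t) = p .* u
prob = ODEProblem(f, [1.0], (0.0, 1.0), [-0.5])

# The `sensealg` keyword to `solve` selects the adjoint method used when
# Zygote differentiates through the solve, mirroring the benchmark's choices.
function loss(p)
    sol = solve(prob, Tsit5(), p = p, saveat = 0.1,
                sensealg = InterpolatingAdjoint(autojacvec = ZygoteVJP()))
    sum(Array(sol))
end

grad = Zygote.gradient(loss, [-0.5])
```

Swapping `InterpolatingAdjoint(autojacvec = ZygoteVJP())` for any of the other algorithms benchmarked above (`BacksolveAdjoint`, `QuadratureAdjoint`, `TrackerAdjoint`, ...) requires changing only this one keyword.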
60 changes: 29 additions & 31 deletions docs/src/examples/dae/physical_constraints.md
@@ -17,31 +17,31 @@ rng = Random.default_rng()
function f!(du, u, p, t)
y₁, y₂, y₃ = u
k₁, k₂, k₃ = p
du[1] = -k₁*y₁ + k₃*y₂*y₃
du[2] = k₁*y₁ - k₃*y₂*y₃ - k₂*y₂^2
du[3] = y₁ + y₂ + y₃ - 1
du[1] = -k₁ * y₁ + k₃ * y₂ * y₃
du[2] = k₁ * y₁ - k₃ * y₂ * y₃ - k₂ * y₂^2
du[3] = y₁ + y₂ + y₃ - 1
return nothing
end
u₀ = [1.0, 0, 0]
M = [1. 0 0
0 1. 0
0 0 0]
M = [1.0 0 0
0 1.0 0
0 0 0]
tspan = (0.0,1.0)
tspan = (0.0, 1.0)
p = [0.04, 3e7, 1e4]
stiff_func = ODEFunction(f!, mass_matrix = M)
prob_stiff = ODEProblem(stiff_func, u₀, tspan, p)
sol_stiff = solve(prob_stiff, Rodas5(), saveat = 0.1)
nn_dudt2 = Lux.Chain(Lux.Dense(3, 64, tanh),
Lux.Dense(64, 2))
Lux.Dense(64, 2))
pinit, st = Lux.setup(rng, nn_dudt2)
model_stiff_ndae = NeuralODEMM(nn_dudt2, (u, p, t) -> [u[1] + u[2] + u[3] - 1],
tspan, M, Rodas5(autodiff=false), saveat = 0.1)
tspan, M, Rodas5(autodiff = false), saveat = 0.1)
model_stiff_ndae(u₀, Lux.ComponentArray(pinit), st)
function predict_stiff_ndae(p)
@@ -62,12 +62,11 @@ end
l1 = first(loss_stiff_ndae(Lux.ComponentArray(pinit)))
adtype = Optimization.AutoZygote()
optf = Optimization.OptimizationFunction((x,p) -> loss_stiff_ndae(x), adtype)
optf = Optimization.OptimizationFunction((x, p) -> loss_stiff_ndae(x), adtype)
optprob = Optimization.OptimizationProblem(optf, Lux.ComponentArray(pinit))
result_stiff = Optimization.solve(optprob, NLopt.LD_LBFGS(), maxiters=100)
result_stiff = Optimization.solve(optprob, NLopt.LD_LBFGS(), maxiters = 100)
```


## Step-by-Step Description

### Load Packages
@@ -88,9 +87,9 @@ fitting difficult.
function f!(du, u, p, t)
y₁, y₂, y₃ = u
k₁, k₂, k₃ = p
du[1] = -k₁*y₁ + k₃*y₂*y₃
du[2] = k₁*y₁ - k₃*y₂*y₃ - k₂*y₂^2
du[3] = y₁ + y₂ + y₃ - 1
du[1] = -k₁ * y₁ + k₃ * y₂ * y₃
du[2] = k₁ * y₁ - k₃ * y₂ * y₃ - k₂ * y₂^2
du[3] = y₁ + y₂ + y₃ - 1
return nothing
end
```
@@ -100,21 +99,20 @@ end
```@example dae2
u₀ = [1.0, 0, 0]
M = [1. 0 0
0 1. 0
0 0 0]
M = [1.0 0 0
0 1.0 0
0 0 0]
tspan = (0.0,1.0)
tspan = (0.0, 1.0)
p = [0.04, 3e7, 1e4]
```

- `u₀` = Initial Conditions
- `M` = Semi-explicit Mass Matrix (last row is the constraint equation and are therefore
all zeros)
- `tspan` = Time span over which to evaluate
- `p` = parameters `k1`, `k2` and `k3` of the differential equation above

- `u₀` = Initial Conditions
- `M` = Semi-explicit Mass Matrix (last row is the constraint equation and are therefore
all zeros)
- `tspan` = Time span over which to evaluate
- `p` = parameters `k1`, `k2` and `k3` of the differential equation above

### ODE Function, Problem and Solution

@@ -138,12 +136,12 @@ is more suited to SciML applications (similarly for

```@example dae2
nn_dudt2 = Lux.Chain(Lux.Dense(3, 64, tanh),
Lux.Dense(64, 2))
Lux.Dense(64, 2))
pinit, st = Lux.setup(rng, nn_dudt2)
model_stiff_ndae = NeuralODEMM(nn_dudt2, (u, p, t) -> [u[1] + u[2] + u[3] - 1],
tspan, M, Rodas5(autodiff=false), saveat = 0.1)
tspan, M, Rodas5(autodiff = false), saveat = 0.1)
model_stiff_ndae(u₀, Lux.ComponentArray(pinit), st)
```

@@ -195,8 +193,8 @@ The callback function displays the loss during training.

```@example dae2
callback = function (p, l, pred) #callback function to observe training
display(l)
return false
display(l)
return false
end
```

@@ -207,7 +205,7 @@ Finally, training with `Optimization.solve` by passing: *loss function*, *model

```@example dae2
adtype = Optimization.AutoZygote()
optf = Optimization.OptimizationFunction((x,p) -> loss_stiff_ndae(x), adtype)
optf = Optimization.OptimizationFunction((x, p) -> loss_stiff_ndae(x), adtype)
optprob = Optimization.OptimizationProblem(optf, Lux.ComponentArray(pinit))
result_stiff = Optimization.solve(optprob, NLopt.LD_LBFGS(), maxiters=100)
result_stiff = Optimization.solve(optprob, NLopt.LD_LBFGS(), maxiters = 100)
```
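For readers who want the stiff system from this example in isolation, a minimal sketch solving the Robertson DAE with its singular mass matrix — no neural network, using only the definitions already shown in the diff above (assumes `OrdinaryDiffEq` is installed):

```julia
using OrdinaryDiffEq

# Robertson problem in semi-explicit DAE form, as in the example above.
function f!(du, u, p, t)
    y₁, y₂, y₃ = u
    k₁, k₂, k₃ = p
    du[1] = -k₁ * y₁ + k₃ * y₂ * y₃
    du[2] = k₁ * y₁ - k₃ * y₂ * y₃ - k₂ * y₂^2
    du[3] = y₁ + y₂ + y₃ - 1  # algebraic constraint: concentrations sum to 1
    return nothing
end

# Singular mass matrix: the zero row turns the third equation into a constraint.
M = [1.0 0 0
     0 1.0 0
     0 0 0]
prob = ODEProblem(ODEFunction(f!, mass_matrix = M),
                  [1.0, 0, 0], (0.0, 1.0), [0.04, 3e7, 1e4])
sol = solve(prob, Rodas5(), saveat = 0.1)
```

Solving it standalone first makes it easy to check that the baseline `sol_stiff` is well-behaved before introducing the `NeuralODEMM` surrogate and its training loop.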