Commit: Fully remove OptimizationFlux

ChrisRackauckas committed Nov 6, 2023
1 parent 39101bf commit 11c0734

Showing 13 changed files with 14 additions and 18 deletions.
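The change is mechanical throughout: every `using ... OptimizationFlux` in the docs becomes `using ... OptimizationOptimisers`, since Optimisers.jl now supplies the gradient-descent rules that Flux's optimizers used to provide. A hedged, minimal sketch of the resulting user-facing pattern (the `rosenbrock` objective is illustrative, not from this commit, and this assumes OptimizationOptimisers re-exports Optimisers.jl's rules; write `Optimisers.Adam` otherwise):

```julia
using Optimization, OptimizationOptimisers, Zygote

# Illustrative objective, not part of this commit: the Rosenbrock function.
rosenbrock(u, p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2

optf = Optimization.OptimizationFunction(rosenbrock, Optimization.AutoZygote())
optprob = Optimization.OptimizationProblem(optf, zeros(2), [1.0, 100.0])

# Previously: `using OptimizationFlux` and `Flux.ADAM(0.05)`.
# Now: Adam from Optimisers.jl via OptimizationOptimisers.
sol = Optimization.solve(optprob, Adam(0.05); maxiters = 1000)
```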
1 change: 0 additions & 1 deletion .github/workflows/CI.yml
@@ -26,7 +26,6 @@ jobs:
           - SDE2
           - SDE3
         version:
-          - '1.6'
           - '1'
     steps:
       - uses: actions/checkout@v4
2 changes: 1 addition & 1 deletion .github/workflows/Downstream.yml
@@ -14,7 +14,7 @@ jobs:
     strategy:
       fail-fast: false
       matrix:
-        julia-version: [1, 1.6]
+        julia-version: [1]
         os: [ubuntu-latest]
         package:
           - {user: SciML, repo: DiffEqFlux.jl, group: All}
2 changes: 1 addition & 1 deletion Project.toml
@@ -81,7 +81,7 @@ Tracker = "0.2"
 TruncatedStacktraces = "1.2"
 Zygote = "0.6"
 ZygoteRules = "0.2"
-julia = "1.6"
+julia = "1.9"

 [extras]
 AlgebraicMultigrid = "2169fc97-5a83-5252-b627-83903c6c433c"
2 changes: 0 additions & 2 deletions docs/Project.toml
@@ -19,7 +19,6 @@ MLDatasets = "eb30cadb-4394-5ae3-aed4-317e484a6458"
 NNlib = "872c559c-99b0-510c-b3b7-b6c96a88d5cd"
 Optimisers = "3bd65402-5787-11e9-1adc-39752487f4e2"
 Optimization = "7f7a1694-90dd-40f0-9382-eb1efda571ba"
-OptimizationFlux = "253f991c-a7b2-45f8-8852-8b9a9df78a86"
 OptimizationNLopt = "4e6fcdb7-1186-4e1f-a706-475e75c168bb"
 OptimizationOptimJL = "36348300-93cb-4f02-beb5-3c3902f8871e"
 OptimizationOptimisers = "42dfb2eb-d2b4-4451-abcd-913932933ac1"
@@ -58,7 +57,6 @@ MLDatasets = "0.7"
 NNlib = "0.8, 0.9"
 Optimisers = "0.2, 0.3"
 Optimization = "3.9"
-OptimizationFlux = "0.1"
 OptimizationNLopt = "0.1"
 OptimizationOptimJL = "0.1"
 OptimizationOptimisers = "0.1"
3 changes: 1 addition & 2 deletions docs/src/examples/neural_ode/simplechains.md
@@ -8,8 +8,7 @@ First, we'll need data for training the NeuralODE, which can be obtained by solv

 ```@example sc_neuralode
 using SimpleChains,
-    StaticArrays, OrdinaryDiffEq, SciMLSensitivity, Optimization,
-    OptimizationFlux, Plots
+    StaticArrays, OrdinaryDiffEq, SciMLSensitivity, OptimizationOptimisers, Plots

 u0 = @SArray Float32[2.0, 0.0]
 datasize = 30
2 changes: 1 addition & 1 deletion docs/src/examples/ode/exogenous_input.md
@@ -41,7 +41,7 @@ used as an input into the neural network of a neural ODE system.

 ```@example exogenous
 using DifferentialEquations, Lux, ComponentArrays, DiffEqFlux, Optimization,
-    OptimizationPolyalgorithms, OptimizationFlux, Plots, Random
+    OptimizationPolyalgorithms, OptimizationOptimisers, Plots, Random

 rng = Random.default_rng()
 tspan = (0.1f0, Float32(10.0))
2 changes: 1 addition & 1 deletion docs/src/examples/ode/second_order_adjoints.md
@@ -14,7 +14,7 @@ with Hessian-vector products (never forming the Hessian) for large parameter
 optimizations.

 ```@example secondorderadjoints
-using Flux, DiffEqFlux, Optimization, OptimizationFlux, DifferentialEquations,
+using Flux, DiffEqFlux, Optimization, OptimizationOptimisers, DifferentialEquations,
     Plots, Random, OptimizationOptimJL

 u0 = Float32[2.0; 0.0]
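Judging from the imports in the hunk above, that page mixes a first-order run (OptimizationOptimisers) with Optim.jl's second-order methods (OptimizationOptimJL). A minimal self-contained sketch of that two-stage pattern, assuming the standard Optimization.jl API (the quadratic objective is illustrative, not the page's neural ODE loss, and this assumes OptimizationOptimJL re-exports Optim.jl; add `using Optim` otherwise):

```julia
using Optimization, OptimizationOptimisers, OptimizationOptimJL, Zygote

# Illustrative objective standing in for the docs page's loss function.
quad(u, p) = sum(abs2, u .- p)

optf = Optimization.OptimizationFunction(quad, Optimization.AutoZygote())
optprob = Optimization.OptimizationProblem(optf, zeros(3), [1.0, 2.0, 3.0])

# Stage 1: cheap first-order warm start with Adam (replaces Flux.ADAM).
res1 = Optimization.solve(optprob, Adam(0.01); maxiters = 200)

# Stage 2: Newton-type refinement, which is where the Hessian-vector
# products discussed on that page come in.
optprob2 = Optimization.OptimizationProblem(optf, res1.u, [1.0, 2.0, 3.0])
res2 = Optimization.solve(optprob2, Optim.NewtonTrustRegion())
```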
2 changes: 1 addition & 1 deletion docs/src/examples/ode/second_order_neural.md
@@ -22,7 +22,7 @@ An example of training a neural network on a second order ODE is as follows:

 ```@example secondorderneural
 using DifferentialEquations,
-    Flux, Optimization, OptimizationFlux, RecursiveArrayTools,
+    Flux, Optimization, OptimizationOptimisers, RecursiveArrayTools,
     Random

 u0 = Float32[0.0; 2.0]
2 changes: 1 addition & 1 deletion docs/src/examples/optimal_control/optimal_control.md
@@ -36,7 +36,7 @@ will first reduce control cost (the last term) by 10x in order to bump the network out
 of a local minimum. This looks like:

 ```@example neuraloptimalcontrol
-using Flux, DifferentialEquations, Optimization, OptimizationNLopt, OptimizationFlux,
+using Flux, DifferentialEquations, Optimization, OptimizationNLopt, OptimizationOptimisers,
     SciMLSensitivity, Zygote, Plots, Statistics, Random

 rng = Random.default_rng()
4 changes: 2 additions & 2 deletions docs/src/examples/sde/optimization_sde.md
@@ -86,7 +86,7 @@ end
 We can then use `Optimization.solve` to fit the SDE:

 ```@example sde
-using Optimization, Zygote, OptimizationFlux
+using Optimization, Zygote, OptimizationOptimisers
 pinit = [1.2, 0.8, 2.5, 0.8, 0.1, 0.1]
 adtype = Optimization.AutoZygote()
 optf = Optimization.OptimizationFunction((x, p) -> loss(x), adtype)
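The hunk above cuts off before the solve call; a hedged sketch of how the fit would finish under the new import, reusing the `optf` and `pinit` defined just above (`Adam` replacing the `ADAM` that OptimizationFlux used to supply):

```julia
# Continuation sketch, not part of the diff: `optf` and `pinit` come from
# the example above; Adam is Optimisers.jl's rule via OptimizationOptimisers.
optprob = Optimization.OptimizationProblem(optf, pinit)
result = Optimization.solve(optprob, Adam(0.1); maxiters = 100)
```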
@@ -121,7 +121,7 @@ In this example, we will find the parameters of the SDE that force the
 solution to be close to the constant 1.

 ```@example sde
-using DifferentialEquations, DiffEqFlux, Optimization, OptimizationFlux, Plots
+using DifferentialEquations, DiffEqFlux, Optimization, OptimizationOptimisers, Plots
 function lotka_volterra!(du, u, p, t)
     x, y = u
6 changes: 3 additions & 3 deletions docs/src/tutorials/data_parallel.md
@@ -83,7 +83,7 @@ The following is a full copy-paste example for the multithreading.
 Distributed and GPU minibatching are described below.

 ```@example dataparallel
-using DifferentialEquations, Optimization, OptimizationFlux
+using DifferentialEquations, Optimization, OptimizationOptimisers
 pa = [1.0]
 u0 = [3.0]
 θ = [u0; pa]
@@ -198,7 +198,7 @@ using Distributed
 addprocs(4)

 @everywhere begin
-    using DifferentialEquations, Optimization, OptimizationFlux
+    using DifferentialEquations, Optimization, OptimizationOptimisers
     function f(u, p, t)
         1.01u .* p
     end
@@ -251,7 +251,7 @@ The following is an example of minibatch ensemble parallelism across
 a GPU:

 ```julia
-using DifferentialEquations, Optimization, OptimizationFlux, DiffEqGPU
+using DifferentialEquations, Optimization, OptimizationOptimisers, DiffEqGPU
 function f(du, u, p, t)
     @inbounds begin
         du[1] = 1.01 * u[1] * p[1] * p[2]
2 changes: 1 addition & 1 deletion docs/src/tutorials/training_tips/divergence.md
@@ -29,7 +29,7 @@ A full example making use of this trick is:

 ```@example divergence
 using DifferentialEquations,
-    SciMLSensitivity, Optimization, OptimizationFlux,
+    SciMLSensitivity, Optimization, OptimizationOptimisers,
     OptimizationNLopt, Plots
 function lotka_volterra!(du, u, p, t)
2 changes: 1 addition & 1 deletion docs/src/tutorials/training_tips/local_minima.md
@@ -18,7 +18,7 @@ on `(0,5.0)`. Naively, we use the same training strategy as before:
 ```@example iterativefit
 using DifferentialEquations,
     ComponentArrays, SciMLSensitivity, Optimization,
-    OptimizationFlux
+    OptimizationOptimisers
 using Lux, Plots, Random
 rng = Random.default_rng()
