Ipopt.jl is a wrapper for the Ipopt solver.
This wrapper is maintained by the JuMP community and is not a COIN-OR project.
Ipopt.jl
is licensed under the MIT License.
The underlying solver, coin-or/Ipopt, is licensed under the Eclipse Public License.
Install Ipopt.jl
using the Julia package manager:
import Pkg
Pkg.add("Ipopt")
In addition to installing the Ipopt.jl
package, this will also download and
install the Ipopt binaries. You do not need to install Ipopt separately.
To use a custom binary, read the Custom solver binaries section of the JuMP documentation.
For details on using a different linear solver, see the Linear Solvers
section
below. You do not need a custom binary to change the linear solver.
You can use Ipopt with JuMP as follows:
using JuMP, Ipopt
model = Model(Ipopt.Optimizer)
set_attribute(model, "max_cpu_time", 60.0)
set_attribute(model, "print_level", 0)
The Ipopt optimizer supports the following constraints and attributes.
List of supported objective functions:
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}
MOI.ObjectiveFunction{MOI.ScalarQuadraticFunction{Float64}}
MOI.ObjectiveFunction{MOI.VariableIndex}
List of supported variable types:
List of supported constraint types:
MOI.ScalarAffineFunction{Float64} in MOI.EqualTo{Float64}
MOI.ScalarAffineFunction{Float64} in MOI.GreaterThan{Float64}
MOI.ScalarAffineFunction{Float64} in MOI.LessThan{Float64}
MOI.ScalarQuadraticFunction{Float64} in MOI.EqualTo{Float64}
MOI.ScalarQuadraticFunction{Float64} in MOI.GreaterThan{Float64}
MOI.ScalarQuadraticFunction{Float64} in MOI.LessThan{Float64}
MOI.VariableIndex in MOI.EqualTo{Float64}
MOI.VariableIndex in MOI.GreaterThan{Float64}
MOI.VariableIndex in MOI.LessThan{Float64}
List of supported model attributes:
Supported options are listed in the Ipopt documentation.
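For example, tol and max_iter are standard Ipopt options, and they are set as string attributes in the same way as above; the values below are illustrative:
using JuMP, Ipopt
model = Model(Ipopt.Optimizer)
set_attribute(model, "tol", 1e-8)
set_attribute(model, "max_iter", 1_000)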
Ipopt provides a callback that can be used to log the status of the optimization
during a solve. It can also be used to terminate the optimization by returning false. Here is an example:
using JuMP, Ipopt, Test
model = Model(Ipopt.Optimizer)
set_silent(model)
@variable(model, x >= 1)
@objective(model, Min, x + 0.5)
x_vals = Float64[]
function my_callback(
    alg_mod::Cint,
    iter_count::Cint,
    obj_value::Float64,
    inf_pr::Float64,
    inf_du::Float64,
    mu::Float64,
    d_norm::Float64,
    regularization_size::Float64,
    alpha_du::Float64,
    alpha_pr::Float64,
    ls_trials::Cint,
)
    push!(x_vals, callback_value(model, x))
    @test isapprox(obj_value, 1.0 * x_vals[end] + 0.5, atol = 1e-1)
    # return `true` to keep going, or `false` to terminate the optimization.
    return iter_count < 1
end
MOI.set(model, Ipopt.CallbackFunction(), my_callback)
optimize!(model)
@test MOI.get(model, MOI.TerminationStatus()) == MOI.INTERRUPTED
@test length(x_vals) == 2
See the Ipopt documentation for an explanation of the arguments to the callback. They are identical to the output contained in the logging table printed to the screen.
To access the current solution and primal, dual, and complementarity violations
of each iteration, use Ipopt.GetIpoptCurrentViolations and Ipopt.GetIpoptCurrentIterate. The two functions are identical to the ones in the Ipopt C interface.
Ipopt.jl wraps the Ipopt C interface with minimal modifications.
A complete example is available in the test/C_wrapper.jl
file.
For simplicity, the five callbacks required by Ipopt are slightly different from the C interface. They are as follows:
"""
eval_f(x::Vector{Float64})::Float64
Returns the objective value `f(x)`.
"""
function eval_f end
"""
eval_grad_f(x::Vector{Float64}, grad_f::Vector{Float64})::Nothing
Fills `grad_f` in-place with the gradient of the objective function evaluated at
`x`.
"""
function eval_grad_f end
"""
eval_g(x::Vector{Float64}, g::Vector{Float64})::Nothing
Fills `g` in-place with the value of the constraints evaluated at `x`.
"""
function eval_g end
"""
eval_jac_g(
    x::Vector{Float64},
    rows::Vector{Cint},
    cols::Vector{Cint},
    values::Union{Nothing,Vector{Float64}},
)::Nothing
Compute the Jacobian matrix.
* If `values === nothing`
- Fill `rows` and `cols` with the 1-indexed sparsity structure
* Otherwise:
- Fill `values` with the elements of the Jacobian matrix according to the
sparsity structure.
!!! warning
If `values === nothing`, `x` is an undefined object. Accessing any elements
in it will cause Julia to segfault.
"""
function eval_jac_g end
"""
eval_h(
    x::Vector{Float64},
    rows::Vector{Cint},
    cols::Vector{Cint},
    obj_factor::Float64,
    lambda::Vector{Float64},
    values::Union{Nothing,Vector{Float64}},
)::Nothing
Compute the Hessian-of-the-Lagrangian matrix.
* If `values === nothing`
- Fill `rows` and `cols` with the 1-indexed sparsity structure
* Otherwise:
- Fill `values` with the Hessian matrix according to the sparsity structure.
!!! warning
If `values === nothing`, `x` is an undefined object. Accessing any elements
in it will cause Julia to segfault.
"""
function eval_h end
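As a sketch of how these callbacks fit together, the following minimizes (x - 1)^2 over a single unconstrained variable. The CreateIpoptProblem argument order shown here is an assumption based on test/C_wrapper.jl; consult that file for the authoritative usage:
import Ipopt
eval_f(x) = (x[1] - 1.0)^2
function eval_grad_f(x, grad_f)
    grad_f[1] = 2.0 * (x[1] - 1.0)
    return
end
eval_g(x, g) = nothing                       # no constraints
eval_jac_g(x, rows, cols, values) = nothing  # empty Jacobian
function eval_h(x, rows, cols, obj_factor, lambda, values)
    if values === nothing
        rows[1], cols[1] = 1, 1              # one structural nonzero: H[1, 1]
    else
        values[1] = obj_factor * 2.0         # second derivative of the objective; lambda is unused here
    end
    return
end
prob = Ipopt.CreateIpoptProblem(
    1,          # n: number of variables
    [-Inf],     # x_L: variable lower bounds
    [Inf],      # x_U: variable upper bounds
    0,          # m: number of constraints
    Float64[],  # g_L: constraint lower bounds
    Float64[],  # g_U: constraint upper bounds
    0,          # number of structural nonzeros in the Jacobian
    1,          # number of structural nonzeros in the Hessian
    eval_f,
    eval_g,
    eval_grad_f,
    eval_jac_g,
    eval_h,
)
prob.x = [0.0]  # starting point
status = Ipopt.IpoptSolve(prob)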
If you get a termination status of MOI.INVALID_MODEL, it is probably because you
have some undefined value in your model, for example, a division by zero. Fix
this by removing the division, or by imposing variable bounds so that you cut
off the undefined region.
Instead of
model = Model(Ipopt.Optimizer)
@variable(model, x)
@NLobjective(model, Min, 1 / x)
do
model = Model(Ipopt.Optimizer)
@variable(model, x >= 0.0001)
@NLobjective(model, Min, 1 / x)
To improve performance, Ipopt supports a number of linear solvers.
Obtain a license and download HSL_jll.jl
from https://licences.stfc.ac.uk/product/julia-hsl.
There are two versions available: LBT and OpenBLAS. LBT is the recommended option for Julia ≥ v1.9.
Install this download into your current environment using:
import Pkg
Pkg.develop(path = "/full/path/to/HSL_jll.jl")
Then, use a linear solver in HSL by setting the hsllib
and linear_solver
attributes:
using JuMP, Ipopt
import HSL_jll
model = Model(Ipopt.Optimizer)
set_attribute(model, "hsllib", HSL_jll.libhsl_path)
set_attribute(model, "linear_solver", "ma86")
Due to the security policy of macOS, Mac users may need to delete the quarantine attribute of the ZIP archive before extracting. For example:
xattr -d com.apple.quarantine lbt_HSL_jll.jl-2023.5.26.zip
xattr -d com.apple.quarantine openblas_HSL_jll.jl-2023.5.26.zip
Download Pardiso from https://www.pardiso-project.org. Save the shared library somewhere, and record the filename.
Then, use Pardiso by setting the pardisolib
and linear_solver
attributes:
using JuMP, Ipopt
model = Model(Ipopt.Optimizer)
set_attribute(model, "pardisolib", "/full/path/to/libpardiso")
set_attribute(model, "linear_solver", "pardiso")
If you use Ipopt.jl with Julia ≥ v1.9, the linear solver SPRAL is available.
You can use it by setting the linear_solver
attribute:
using JuMP, Ipopt
model = Model(Ipopt.Optimizer)
set_attribute(model, "linear_solver", "spral")
Note that the following environment variables must be set before starting Julia:
export OMP_CANCELLATION=TRUE
export OMP_NESTED=TRUE
export OMP_PROC_BIND=TRUE