# Optimisers
In GridapTopOpt we implement optimisation algorithms as iterators that inherit from an abstract type `Optimiser`. A concrete `Optimiser` implementation, say `OptEg`, then implements `iterate(m::OptEg) ↦ (var,state)` and `iterate(m::OptEg,state) ↦ (var,state)`, where `var` and `state` are the items made available in the outer loop and the internal state of the iterator, respectively. As a result, we can iterate over the object `m = OptEg(...)` using `for var in m`. The benefit of this implementation is that the internals of the optimisation method can be hidden in the source code while the explicit `for` loop is still visible to the user. The body of the loop can then be used for auxiliary operations such as writing the optimiser history and other output files.
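For example, the loop body might write the current design to VTK and the iteration history to a text file. The following is a minimal sketch that assumes an `optimiser` has already been constructed and that `Ω` (a triangulation) and `path` (an output directory) come from the surrounding driver script; the items yielded each iteration (`it`, `uh`, `φh`) follow the package tutorials and may differ for other optimisers.

```julia
# Drive the optimiser through its iterator interface; the method
# internals stay hidden while the outer loop remains explicit.
for (it, uh, φh) in optimiser
  # Auxiliary operations in the loop body: write the current design
  # and append the optimiser history to file.
  writevtk(Ω, "$path/out$it", cellfields = ["φ" => φh, "uh" => uh])
  write_history("$path/history.txt", get_history(optimiser))
end
```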
Below we describe the implemented optimisers along with the `OptimiserHistory` type. Custom optimisers can be implemented by creating types that inherit from `Optimiser` and extending the interfaces in the Custom optimiser section below.
## Lagrangian & Augmented Lagrangian method
### GridapTopOpt.AugmentedLagrangian — Type

```julia
struct AugmentedLagrangian <: Optimiser
```

An augmented Lagrangian method based on Nocedal and Wright, 2006 (link). Note that this method will function as a standard Lagrangian method if no constraints are defined in `problem::PDEConstrainedFunctionals`.
**Parameters**

- `problem::PDEConstrainedFunctionals`: The objective and constraint setup.
- `ls_evolver::LevelSetEvolution`: Solver for the evolution and reinitialisation equations.
- `vel_ext::VelocityExtension`: The velocity-extension method for extending shape sensitivities onto the computational domain.
- `history::OptimiserHistory{Float64}`: Historical information for the optimisation problem.
- `converged::Function`: A function to check optimiser convergence.
- `has_oscillations::Function`: A function to check for oscillations.
- `params::NamedTuple`: Optimisation parameters.
The `has_oscillations` function has been added to avoid oscillations in the iteration history. By default this uses a mean zero-crossing algorithm as implemented in ChaosTools. Oscillation checking can be disabled by taking `has_oscillations = (args...) -> false`, as sketched below.
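For instance, a minimal sketch assuming `problem`, `ls_evolver`, `vel_ext`, and an initial level-set function `φh` are defined as in the constructor below:

```julia
# Pass a predicate that always returns false to switch off
# oscillation checking entirely.
optimiser = AugmentedLagrangian(
  problem, ls_evolver, vel_ext, φh;
  has_oscillations = (args...) -> false
)
```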
### GridapTopOpt.AugmentedLagrangian — Method

```julia
AugmentedLagrangian(
  problem    :: PDEConstrainedFunctionals{N},
  ls_evolver :: LevelSetEvolution,
  vel_ext    :: VelocityExtension,
  φ0;
  Λ_max = 10^10, ζ = 1.1, update_mod = 5, γ = 0.1, γ_reinit = 0.5, os_γ_mult = 0.75,
  maxiter = 1000, verbose = false, constraint_names = map(i -> Symbol("C_$i"), 1:N),
  converged::Function = default_al_converged, debug = false,
  has_oscillations::Function = default_has_oscillations
) where {N}
```
Create an instance of `AugmentedLagrangian` with several adjustable defaults.
**Required**

- `problem::PDEConstrainedFunctionals`: The objective and constraint setup.
- `ls_evolver::LevelSetEvolution`: Solver for the evolution and reinitialisation equations.
- `vel_ext::VelocityExtension`: The velocity-extension method for extending shape sensitivities onto the computational domain.
- `φ0`: An initial level-set function defined as a `FEFunction` or GridapDistributed equivalent.
**Optional defaults**

- `γ = 0.1`: Initial coefficient on the time step size for solving the Hamilton-Jacobi evolution equation.
- `γ_reinit = 0.5`: Coefficient on the time step size for solving the reinitialisation equation.
- `ζ = 1.1`: Increase multiplier on `Λ` every `update_mod` iterations.
- `Λ_max = 10^10`: Maximum value of any entry in `Λ`.
- `update_mod = 5`: Number of iterations before increasing `Λ`.
- `reinit_mod = 1`: How often we solve the reinitialisation equation.
- `maxiter = 1000`: Maximum number of algorithm iterations.
- `verbose = false`: Verbosity flag.
- `constraint_names = map(i -> Symbol("C_$i"), 1:N)`: Constraint names for history output.
- `has_oscillations::Function = default_has_oscillations`: Function to check for oscillations in the history.
- `initial_parameters::Function = default_al_init_params`: Function to generate the initial `λ` and `Λ`. This can be replaced to inject different `λ` and `Λ`, for example.
- `os_γ_mult = 0.75`: Decrease multiplier for `γ` when `has_oscillations` returns true.
- `converged::Function = default_al_converged`: Convergence criteria.
- `debug = false`: Debug flag.
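A typical construction and optimisation loop, sketched under the assumption that `problem`, `ls_evolver`, `vel_ext`, and `φh` are defined earlier in a driver script, that the problem has a single volume constraint, and that the history records the objective under the key `:J`:

```julia
# Construct the optimiser, overriding a few of the defaults above.
optimiser = AugmentedLagrangian(
  problem, ls_evolver, vel_ext, φh;
  γ = 0.05, verbose = true, constraint_names = [:Vol]
)
# Run the optimisation, printing the latest objective value each iteration.
for (it, uh, φh) in optimiser
  println("Iteration $it: J = ", last(get_history(optimiser)[:J]))
end
```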
## Hilbertian projection method
### GridapTopOpt.HilbertianProjection — Type

```julia
struct HilbertianProjection <: Optimiser
```

A Hilbertian projection method as described by Wegert et al., 2023 (link).
**Parameters**

- `problem::PDEConstrainedFunctionals{N}`: The objective and constraint setup.
- `ls_evolver::LevelSetEvolution`: Solver for the evolution and reinitialisation equations.
- `vel_ext::VelocityExtension`: The velocity-extension method for extending shape sensitivities onto the computational domain.
- `projector::HilbertianProjectionMap`: Sensitivity information projector.
- `history::OptimiserHistory{Float64}`: Historical information for the optimisation problem.
- `converged::Function`: A function to check optimiser convergence.
- `has_oscillations::Function`: A function to check for oscillations.
- `params::NamedTuple`: Optimisation parameters.
### GridapTopOpt.HilbertianProjection — Method

```julia
HilbertianProjection(
  problem    :: PDEConstrainedFunctionals{N},
  ls_evolver :: LevelSetEvolution,
  vel_ext    :: VelocityExtension,
  φ0;
  orthog = HPModifiedGramSchmidt(),
  λ = 0.5, α_min = 0.1, α_max = 1.0, γ = 0.1, γ_reinit = 0.5,
  ls_max_iters = 10, ls_δ_inc = 1.1, ls_δ_dec = 0.7,
  ls_ξ = 1.0, ls_ξ_reduce_coef = 0.0025, ls_ξ_reduce_abs_tol = 0.01,
  ls_γ_min = 0.001, ls_γ_max = 0.1,
  maxiter = 1000, verbose = false, constraint_names = map(i -> Symbol("C_$i"), 1:N),
  converged::Function = default_hp_converged, debug = false
) where {N}
```
Create an instance of `HilbertianProjection` with several adjustable defaults, including the orthogonalisation method. By default the latter is `HPModifiedGramSchmidt`.
**Required**

- `problem::PDEConstrainedFunctionals{N}`: The objective and constraint setup.
- `ls_evolver::LevelSetEvolution`: Solver for the evolution and reinitialisation equations.
- `vel_ext::VelocityExtension`: The velocity-extension method for extending shape sensitivities onto the computational domain.
- `φ0`: An initial level-set function defined as a `FEFunction` or GridapDistributed equivalent.
**Algorithm defaults**

- `γ = 0.1`: Initial coefficient on the time step size for solving the Hamilton-Jacobi evolution equation.
- `γ_reinit = 0.5`: Coefficient on the time step size for solving the reinitialisation equation.
- `maxiter = 1000`: Maximum number of algorithm iterations.
- `verbose = false`: Verbosity flag.
- `constraint_names = map(i -> Symbol("C_$i"), 1:N)`: Constraint names for history output.
- `converged::Function = default_hp_converged`: Convergence criteria.
- `has_oscillations::Function = (ls_enabled ? (args...) -> false : default_has_oscillations)`: By default this is disabled when the line search is enabled.
- `os_γ_mult = 0.5`: Decrease multiplier for `γ` when `has_oscillations` returns true.
- `debug = false`: Debug flag.
- `α_min ∈ [0,1] = 0.1`: Controls the lower bound on the projected objective descent coefficient. `α_min = 1` ignores the objective function and instead solves a constraint satisfaction problem.
- `α_max ∈ [0,1] = 1.0`: Controls the upper bound on the projected objective descent coefficient. Typically this shouldn't change unless you want to approach the optimum more slowly.
- `λ = 0.5`: The rate of constraint decrease.

Note that in practice we usually only adjust `α_min` to control the balance between improving the objective and the constraints, as sketched below.
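For example, a sketch (same assumed setup as the augmented Lagrangian example above) that prioritises constraint satisfaction by raising `α_min`:

```julia
# Raising α_min devotes more of each step to satisfying the
# constraints, at the cost of slower objective improvement.
optimiser = HilbertianProjection(
  problem, ls_evolver, vel_ext, φh;
  α_min = 0.4
)
```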
**Line search defaults**

- `ls_enabled = true`: Set whether a line search is used.
- `ls_max_iters = 10`: Maximum number of line search iterations.
- `ls_δ_inc = 1.1`: Increase multiplier for `γ` on acceptance.
- `ls_δ_dec = 0.7`: Decrease multiplier for `γ` on rejection.
- `ls_ξ = 1.0`: Line search tolerance for objective reduction.
- `ls_ξ_reduce_coef = 0.0025`: Coefficient on `ls_ξ` if constraints are within tolerance (see below).
- `ls_ξ_reduce_abs_tol = 0.01`: Tolerance on constraints to reduce `ls_ξ` via `ls_ξ_reduce_coef`.
- `ls_γ_min = 0.001`: Minimum coefficient on the time step size for solving the HJ evolution equation.
- `ls_γ_max = 0.1`: Maximum coefficient on the time step size for solving the HJ evolution equation.

A more conservative evolution of the boundary can be achieved by decreasing `ls_γ_max`.
The line search has been adjusted so that it is only enforced once the constraints are within a set tolerance. This generally leads to better optimisation histories, especially for problems where the constraints are far from saturation and the objective must decrease to improve them. The line search can be made to always be enforced by taking `ls_ξ = 0.0025` and `ls_ξ_reduce_coef = 0.1`, as sketched below.
### GridapTopOpt.HPModifiedGramSchmidt — Type

```julia
HPModifiedGramSchmidt
```

High-performance modified Gram-Schmidt orthogonalisation. Based on Algorithm 6 in this paper.
### GridapTopOpt.OrthogonalisationMap — Type

```julia
OrthogonalisationMap
```
## Optimiser history
### GridapTopOpt.OptimiserHistory — Type

```julia
mutable struct OptimiserHistory{T}
```

Tracks historical information for an optimisation problem over its iteration history.
**Parameters**

- `niter::Int`: Current iteration number.
- `keys::Vector{Symbol}`: Vector of symbols associated to values.
- `values::Dict{Symbol,Vector{T}}`: Dictionary of vectors associated to keys.
- `bundles::Dict{Symbol,Vector{Symbol}}`: Groups of symbols (e.g., a group of constraints).
- `verbose::SolverVerboseLevel`: Verbosity level.
- `maxiter::Int`: Maximum number of iterations.
**Behaviour**

- Indexing at a specific iteration returns an `OptimiserHistorySlice`.
- Indexing with a key returns all values of that key.
- Indexing with a key and an iteration returns the value(s) of that key at that iteration.
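A sketch of these access patterns, assuming an `optimiser` whose history records the objective under the key `:J`:

```julia
h = get_history(optimiser)

slice = h[10]      # OptimiserHistorySlice at iteration 10
J_all = h[:J]      # all recorded objective values
J_10  = h[:J, 10]  # objective value at iteration 10
```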
### GridapTopOpt.OptimiserHistorySlice — Type

```julia
struct OptimiserHistorySlice{T} end
```

A read-only wrapper of `OptimiserHistory` for IO display of the iteration history at a specific iteration.
## Custom optimiser
### GridapTopOpt.Optimiser — Type

```julia
abstract type Optimiser
```

Optimisers in GridapTopOpt.jl are implemented as iterators. Your own optimiser can be implemented by creating a type that inherits from `Optimiser` and providing concrete implementations of the interface below.
### Base.iterate — Method

```julia
Base.iterate(::Optimiser)
```

Return a tuple of the first iteration result and the iterator state for the `Optimiser`.

### Base.iterate — Method

```julia
Base.iterate(::Optimiser, state)
```

Return a tuple of the next iteration result and state for the `Optimiser`, given the current state.

### GridapTopOpt.get_history — Method

```julia
get_history(::Optimiser) :: OptimiserHistory
```

Get the `OptimiserHistory` from an `Optimiser`.

### GridapTopOpt.converged — Method

```julia
converged(::Optimiser)
```

Return a `Bool` that is `true` if the `Optimiser` has converged, otherwise `false`.
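As an illustration, a minimal skeleton of a custom optimiser follows. The type `MyOptimiser` and its single field are hypothetical, the convergence test is a placeholder built on the documented `OptimiserHistory` fields, and a real implementation would perform an actual optimisation step inside `iterate`:

```julia
using GridapTopOpt
import GridapTopOpt: Optimiser, OptimiserHistory, get_history, converged

# Hypothetical optimiser; a real one would also carry the problem,
# level-set evolver, velocity extension, and parameters.
struct MyOptimiser <: Optimiser
  history :: OptimiserHistory{Float64}
end

get_history(m::MyOptimiser) = m.history

# Placeholder convergence test using the documented history fields.
converged(m::MyOptimiser) = m.history.niter >= m.history.maxiter

function Base.iterate(m::MyOptimiser)
  state = nothing        # internal state of the iterator
  vars  = (0, m.history) # items made available in the outer loop
  return vars, state
end

function Base.iterate(m::MyOptimiser, state)
  converged(m) && return nothing  # stop iterating once converged
  # ... perform one optimisation step here and update m.history ...
  vars = (m.history.niter, m.history)
  return vars, state
end
```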