Optimisers

In GridapTopOpt we implement optimisation algorithms as iterators that inherit from an abstract type Optimiser. A concrete Optimiser implementation, say OptEg, then implements iterate(m::OptEg) ↦ (var,state) and iterate(m::OptEg,state) ↦ (var,state), where var holds the items made available to the outer loop and state is the internal state of the iterator. As a result we can iterate over an object m = OptEg(...) using for var in m. The benefit of this implementation is that the internals of the optimisation method can be hidden in the source code while the explicit for loop remains visible to the user. The body of the loop can then be used for auxiliary operations such as writing the optimiser history and other output files.
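For example, a typical driver loop looks like the sketch below, modelled on the package demos. Here optimiser, Ω and path are placeholder names for objects assumed to be set up earlier in the script; writevtk comes from Gridap and write_history is the history writer provided by GridapTopOpt.

for (it, uh, φh) in optimiser
  # Auxiliary operations in the loop body: write the current fields
  # every 10 iterations and keep the history file up to date.
  if iszero(it % 10)
    writevtk(Ω, "$path/out$it", cellfields = ["φ" => φh, "uh" => uh])
  end
  write_history("$path/history.txt", optimiser.history)
end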

The following describes the implemented optimisers along with the OptimiserHistory type. Custom optimisers can be implemented by creating types that inherit from Optimiser and extending the interfaces described in Custom optimiser below.

Lagrangian & Augmented Lagrangian method

GridapTopOpt.AugmentedLagrangian (Type)
struct AugmentedLagrangian <: Optimiser

An augmented Lagrangian method based on Nocedal and Wright, 2006 (link). Note that this method will function as a Lagrangian method if no constraints are defined in problem::PDEConstrainedFunctionals.
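For reference, this corresponds to iterating on an augmented Lagrangian of the standard textbook form (a sketch; the scaling conventions used internally may differ):

L(φ; λ, Λ) = J(φ) - ∑ᵢ λᵢ Cᵢ(φ) + ∑ᵢ (Λᵢ/2) Cᵢ(φ)²,

where J is the objective, Cᵢ are the constraints, and the multipliers λᵢ are updated between iterations, e.g. via λᵢ ← λᵢ - Λᵢ Cᵢ(φ).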

Parameters

  • problem::PDEConstrainedFunctionals: The objective and constraint setup.
  • ls_evolver::LevelSetEvolution: Solver for the evolution and reinitialisation equations.
  • vel_ext::VelocityExtension: The velocity-extension method for extending shape sensitivities onto the computational domain.
  • history::OptimiserHistory{Float64}: Historical information for the optimisation problem.
  • converged::Function: A function to check optimiser convergence.
  • has_oscillations::Function: A function to check for oscillations.
  • params::NamedTuple: Optimisation parameters.

The has_oscillations function has been added to avoid oscillations in the iteration history. By default this uses a mean zero crossing algorithm as implemented in ChaosTools. Oscillation checking can be disabled by taking has_oscillations = (args...) -> false.

GridapTopOpt.AugmentedLagrangian (Method)
AugmentedLagrangian(
  problem    :: PDEConstrainedFunctionals{N},
  ls_evolver :: LevelSetEvolution,
  vel_ext    :: VelocityExtension,
  φ0;
  Λ_max = 10^10, ζ = 1.1, update_mod = 5, γ = 0.1, γ_reinit = 0.5, os_γ_mult = 0.75,
  maxiter = 1000, verbose=false, constraint_names = map(i -> Symbol("C_$i"),1:N),
  converged::Function = default_al_converged, debug = false,
  has_oscillations::Function = default_has_oscillations
) where {N,O}

Create an instance of AugmentedLagrangian with several adjustable defaults.

Required

  • problem::PDEConstrainedFunctionals: The objective and constraint setup.
  • ls_evolver::LevelSetEvolution: Solver for the evolution and reinitialisation equations.
  • vel_ext::VelocityExtension: The velocity-extension method for extending shape sensitivities onto the computational domain.
  • φ0: An initial level-set function defined as a FEFunction or GridapDistributed equivalent.

Optional defaults

  • γ = 0.1: Initial coefficient on the time step size for solving the Hamilton-Jacobi evolution equation.
  • γ_reinit = 0.5: Coefficient on the time step size for solving the reinitialisation equation.
  • ζ = 1.1: Increase multiplier on Λ every update_mod iterations.
  • Λ_max = 10^10: Maximum value of any entry in Λ.
  • update_mod = 5: Number of iterations between increases of Λ.
  • reinit_mod = 1: How often we solve the reinitialisation equation.
  • maxiter = 1000: Maximum number of algorithm iterations.
  • verbose=false: Verbosity flag.
  • constraint_names = map(i -> Symbol("C_$i"),1:N): Constraint names for history output.
  • has_oscillations::Function = default_has_oscillations: Function to check for oscillations in the history.
  • initial_parameters::Function = default_al_init_params: Function to generate the initial λ and Λ. This can be replaced to inject different λ and Λ, for example.
  • os_γ_mult = 0.75: Decrease multiplier for γ when has_oscillations returns true.
  • converged::Function = default_al_converged: Convergence criteria.
  • debug = false: Debug flag.
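As a usage sketch, the constructor can be called as follows. The names pcfs, ls_evo, vel_ext and φh are placeholders for a PDEConstrainedFunctionals instance, a level-set evolver, a velocity extension and an initial level-set function built earlier in a driver script.

# Sketch only: pcfs, ls_evo, vel_ext and φh are assumed to exist.
optimiser = AugmentedLagrangian(
  pcfs, ls_evo, vel_ext, φh;
  γ = 0.05,                              # smaller initial time step coefficient
  verbose = true,
  constraint_names = [:Vol],             # name a volume constraint in the history
  has_oscillations = (args...) -> false, # disable oscillation checking
)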

Hilbertian projection method

GridapTopOpt.HilbertianProjection (Type)
struct HilbertianProjection <: Optimiser

A Hilbertian projection method as described by Wegert et al., 2023 (link).

Parameters

  • problem::PDEConstrainedFunctionals{N}: The objective and constraint setup.
  • ls_evolver::LevelSetEvolution: Solver for the evolution and reinitialisation equations.
  • vel_ext::VelocityExtension: The velocity-extension method for extending shape sensitivities onto the computational domain.
  • projector::HilbertianProjectionMap: Sensitivity information projector.
  • history::OptimiserHistory{Float64}: Historical information for optimisation problem.
  • converged::Function: A function to check optimiser convergence.
  • has_oscillations::Function: A function to check for oscillations.
  • params::NamedTuple: Optimisation parameters.
GridapTopOpt.HilbertianProjection (Method)
HilbertianProjection(
  problem :: PDEConstrainedFunctionals{N},
  ls_evolver :: LevelSetEvolution,
  vel_ext :: VelocityExtension,
  φ0;
  orthog = HPModifiedGramSchmidt(),
  λ=0.5, α_min=0.1, α_max=1.0, γ=0.1, γ_reinit=0.5,
  ls_enabled = true, ls_max_iters = 10, ls_δ_inc = 1.1, ls_δ_dec = 0.7,
  ls_ξ = 1.0, ls_ξ_reduce_coef = 0.0025, ls_ξ_reduce_abs_tol = 0.01,
  ls_γ_min = 0.001, ls_γ_max = 0.1,
  maxiter = 1000, verbose=false, constraint_names = map(i -> Symbol("C_$i"),1:N),
  converged::Function = default_hp_converged, debug = false
) where {N}

Create an instance of HilbertianProjection with several adjustable defaults including the orthogonalisation method. By default the latter is HPModifiedGramSchmidt.

Required

  • problem::PDEConstrainedFunctionals{N}: The objective and constraint setup.
  • ls_evolver::LevelSetEvolution: Solver for the evolution and reinitialisation equations.
  • vel_ext::VelocityExtension: The velocity-extension method for extending shape sensitivities onto the computational domain.
  • φ0: An initial level-set function defined as a FEFunction or GridapDistributed equivalent.

Algorithm defaults

  • γ = 0.1: Initial coefficient on the time step size for solving the Hamilton-Jacobi evolution equation.
  • γ_reinit = 0.5: Coefficient on the time step size for solving the reinitialisation equation.
  • maxiter = 1000: Maximum number of algorithm iterations.
  • verbose=false: Verbosity flag.
  • constraint_names = map(i -> Symbol("C_$i"),1:N): Constraint names for history output.
  • converged::Function = default_hp_converged: Convergence criteria.
  • has_oscillations::Function = ls_enabled ? (args...)->false : default_has_oscillations: By default this is disabled when the line search is enabled.
  • os_γ_mult = 0.5: Decrease multiplier for γ when has_oscillations returns true.
  • debug = false: Debug flag.
  • α_min ∈ [0,1] = 0.1: Controls the lower bound on the projected objective descent coefficient. α_min = 1 ignores the objective function and instead solves a constraint satisfaction problem.
  • α_max ∈ [0,1] = 1.0: Controls the upper bound on the projected objective descent coefficient. Typically this should not be changed unless you want to approach the optimum more slowly.
  • λ = 0.5: The rate of constraint decrease.

Note that in practice we usually only adjust α_min to control the balance between improving the objective or constraints.

Line search defaults

  • ls_enabled = true: Set whether a line search is used.
  • ls_max_iters = 10: Maximum number of line search iterations.
  • ls_δ_inc = 1.1: Increase multiplier for γ on acceptance.
  • ls_δ_dec = 0.7: Decrease multiplier for γ on rejection.
  • ls_ξ = 1.0: Line search tolerance for objective reduction.
  • ls_ξ_reduce_coef = 0.0025: Coefficient on ls_ξ if constraints are within tolerance (see below).
  • ls_ξ_reduce_abs_tol = 0.01: Tolerance on constraints to reduce ls_ξ via ls_ξ_reduce_coef.
  • ls_γ_min = 0.001: Minimum coefficient on the time step size for solving the HJ evolution equation.
  • ls_γ_max = 0.1: Maximum coefficient on the time step size for solving the HJ evolution equation.

A more conservative evolution of the boundary can be achieved by decreasing ls_γ_max.

Note

The line search has been adjusted so that it is only enforced once the constraints are within a set tolerance. This generally leads to better optimisation histories, especially for problems where constraints are far from saturation and the objective must decrease to improve the constraints.

The line search can be set to always be enforced by taking ls_ξ = 0.0025 and ls_ξ_reduce_coef = 0.1.

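Construction follows the same pattern as AugmentedLagrangian. The sketch below again uses the placeholders pcfs, ls_evo, vel_ext and φh, and adjusts the two knobs discussed above: α_min for the objective/constraint balance and ls_γ_max for a more conservative boundary evolution.

# Sketch only: pcfs, ls_evo, vel_ext and φh are assumed to exist.
optimiser = HilbertianProjection(
  pcfs, ls_evo, vel_ext, φh;
  α_min = 0.4,     # weight constraint satisfaction more heavily
  ls_γ_max = 0.05, # more conservative evolution of the boundary
  verbose = true,
)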

Optimiser history

GridapTopOpt.OptimiserHistory (Type)
mutable struct OptimiserHistory{T}

Track historical information on optimisation problem iteration history.

Parameters

  • niter::Int: Current iteration number.
  • keys::Vector{Symbol}: Vector of symbols associated to values.
  • values::Dict{Symbol,Vector{T}}: Dictionary of vectors associated to keys.
  • bundles::Dict{Symbol,Vector{Symbol}}: Groups of symbols (e.g., a group of constraints).
  • verbose::SolverVerboseLevel: Verbosity level.
  • maxiter::Int: Maximum number of iterations.

Behaviour

  • Indexing at a specific iteration returns an OptimiserHistorySlice.
  • Indexing with a key returns all values of that key.
  • Indexing with a key and an iteration returns the value(s) of the key at that iteration.
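Assuming optimiser.history is an OptimiserHistory and the objective is stored under the key :J (key names depend on the problem setup), the indexing behaviour above reads roughly as:

h = optimiser.history
h[:J]     # all recorded values for the key :J
h[:J, 5]  # value(s) of :J at iteration 5
h[5]      # an OptimiserHistorySlice for iteration 5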

Custom optimiser

GridapTopOpt.Optimiser (Type)
abstract type Optimiser

Optimisers in GridapTopOpt.jl are implemented as iterators. You can implement your own optimiser by creating a concrete subtype of Optimiser and extending the methods below; see the sketch at the end of this section.

Base.iterate (Method)
Base.iterate(::Optimiser)

Return a tuple (var, state) containing the first item and the initial internal state of the Optimiser.

Base.iterate (Method)
Base.iterate(::Optimiser,state)

Return a tuple (var, state) containing the next item and internal state given the current state of the Optimiser, or nothing when iteration is complete.

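As a minimal illustration of this interface, the sketch below implements a toy gradient-descent optimiser. The struct, its fields and the update rule are placeholders rather than part of the package; only the subtyping of Optimiser and the two iterate methods are required.

# A toy optimiser minimising ½‖x‖², whose gradient is x itself.
struct MyGradientDescent <: Optimiser
  x0::Vector{Float64}
  step::Float64
  maxiter::Int
end

# First iteration: build the initial state and return (var, state).
function Base.iterate(m::MyGradientDescent)
  x = copy(m.x0)
  return ((0, x), (0, x))
end

# Later iterations: advance the state, or return nothing to stop.
function Base.iterate(m::MyGradientDescent, (it, x))
  it >= m.maxiter && return nothing
  x = x .- m.step .* x # gradient step for ½‖x‖²
  return ((it + 1, x), (it + 1, x))
end

# Usage: for (it, x) in MyGradientDescent([1.0, 2.0], 0.1, 50) ... end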