Surrogate Optimizers
When initializing a new MOOP object (see MOOP Classes), you must provide a surrogate optimization problem solver, which will be used to generate candidate solutions for each iteration.
from parmoo import optimizers
Note that when using a gradient-based technique, you must provide gradient evaluation options for all objective and constraint functions by adding code to handle the optional ``der`` input.
def f(x, sx, der=0):
    # When using gradient-based solvers, define extra if-cases for
    # handling der=1 (calculate df/dx) and der=2 (calculate df/dsx).
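As a concrete sketch of this calling convention, the toy objective below (the quadratic form is illustrative only, not part of ParMOO) returns its value for ``der=0``, its gradient with respect to the design variables for ``der=1``, and its gradient with respect to the simulation outputs for ``der=2``:

```python
import numpy as np

# Hypothetical objective f(x, sx) = sum(x**2) + sum(sx), following the
# (x, sx, der) convention described above.
def f(x, sx, der=0):
    x, sx = np.asarray(x, dtype=float), np.asarray(sx, dtype=float)
    if der == 1:
        # Gradient with respect to the design variables: d/dx sum(x**2) = 2x.
        return 2.0 * x
    elif der == 2:
        # Gradient with respect to the simulation outputs: d/dsx sum(sx) = 1.
        return np.ones_like(sx)
    # der == 0: return the objective value itself.
    return np.sum(x ** 2) + np.sum(sx)
```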
GPS Search Techniques (gradient-free)
Implementations of the SurrogateOptimizer class.
This module contains implementations of the SurrogateOptimizer ABC, which are based on the GPS polling strategy for direct search.
Note that these strategies are all gradient-free, and therefore do not require objective, constraint, or surrogate gradient methods to be defined.
- The classes include:
LocalGPS – Generalized Pattern Search (GPS) algorithm
GlobalGPS – global random search, followed by GPS
- class optimizers.gps_search.LocalGPS(o, lb, ub, hyperparams)
Use Generalized Pattern Search (GPS) to identify local solutions.
Applies GPS to the surrogate problem in order to identify design points that are locally Pareto optimal with respect to the surrogate problem.
- __init__(o, lb, ub, hyperparams)
Constructor for the LocalGPS class.
- Parameters:
o (int) – The number of objectives.
lb (numpy.ndarray) – A 1d array of lower bounds for the design region. The number of design variables is inferred from the dimension of lb.
ub (numpy.ndarray) – A 1d array of upper bounds for the design region. The dimension must match that of lb.
hyperparams (dict) –
A dictionary of hyperparameters for the optimization procedure. It may contain the following:
opt_budget (int): The GPS iteration limit (default: 1000).
opt_restarts (int): Number of multisolve restarts per scalarization (default: n+1).
- Returns:
A new SurrogateOptimizer object.
- solve(x)
Solve the surrogate problem using generalized pattern search (GPS).
- Parameters:
x (np.ndarray) – A 2d array containing a list of feasible design points used to warm start the search.
- Returns:
A 2d numpy.ndarray of potentially efficient design points that were found by the GPS optimizer.
- Return type:
np.ndarray
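To illustrate the polling strategy GPS is built on (a minimal sketch, not ParMOO's actual implementation), one iteration evaluates the 2n coordinate directions around the incumbent and halves the step length when no poll point improves:

```python
import numpy as np

def gps_minimize(f, x0, lb, ub, budget=1000, step=0.5, tol=1e-8):
    """Minimal coordinate-direction GPS poll loop for a scalarized objective."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    evals, n = 1, x.size
    while evals < budget and step > tol:
        improved = False
        # Poll the 2n coordinate directions at the current step length.
        for d in np.vstack([np.eye(n), -np.eye(n)]):
            y = np.clip(x + step * d, lb, ub)
            fy = f(y)
            evals += 1
            if fy < fx:
                x, fx, improved = y, fy, True
                break
        if not improved:
            step *= 0.5  # unsuccessful poll: refine the mesh

    return x, fx

# Minimize a smooth test function over the unit box.
xopt, fopt = gps_minimize(lambda x: np.sum((x - 0.3) ** 2),
                          np.zeros(2), np.zeros(2), np.ones(2))
```

Note that GPS only compares function values, which is why no gradient methods are needed.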
- class optimizers.gps_search.GlobalGPS(o, lb, ub, hyperparams)
Use randomized search globally followed by GPS locally.
Use RandomSearch to globally search the design space (search phase), followed by LocalGPS to refine the potentially efficient solutions (poll phase).
- __init__(o, lb, ub, hyperparams)
Constructor for the GlobalGPS class.
- Parameters:
o (int) – The number of objectives.
lb (numpy.ndarray) – A 1d array of lower bounds for the design region. The number of design variables is inferred from the dimension of lb.
ub (numpy.ndarray) – A 1d array of upper bounds for the design region. The dimension must match that of lb.
hyperparams (dict) –
A dictionary of hyperparameters for the optimization procedure. It may contain the following:
opt_budget (int): The total function evaluation budget (default: 10,000).
gps_budget (int): The portion of the total opt_budget evaluations that will be used by GPS (default: half of opt_budget).
- Returns:
A new SurrogateOptimizer object.
- solve(x)
Solve the surrogate problem by using random search followed by GPS.
- Parameters:
x (np.ndarray) – A 2d array containing a list of feasible design points used to warm start the search.
- Returns:
A 2d numpy.ndarray containing a list of potentially efficient design points that were found by the optimizers.
- Return type:
np.ndarray
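The search/poll budget split described above can be sketched as follows; `global_gps` and the toy objective are illustrative stand-ins, while `opt_budget` and `gps_budget` mirror the hyperparameters documented above:

```python
import numpy as np

def global_gps(f, lb, ub, opt_budget=10_000, gps_budget=None, seed=0):
    # Default: half of the total budget goes to the GPS poll phase.
    if gps_budget is None:
        gps_budget = opt_budget // 2
    rng = np.random.default_rng(seed)
    # Search phase: spend the remaining budget on uniform random samples.
    samples = rng.uniform(lb, ub, size=(opt_budget - gps_budget, lb.size))
    best = samples[np.argmin([f(x) for x in samples])]
    # Poll phase: refine the best sample with coordinate-direction GPS polls.
    x, fx, step, evals = best, f(best), 0.25, 0
    while evals < gps_budget and step > 1e-8:
        improved = False
        for d in np.vstack([np.eye(lb.size), -np.eye(lb.size)]):
            y = np.clip(x + step * d, lb, ub)
            fy = f(y)
            evals += 1
            if fy < fx:
                x, fx, improved = y, fy, True
                break
        if not improved:
            step *= 0.5
    return x, fx

xbest, fbest = global_gps(lambda x: np.sum((x - 0.25) ** 2),
                          np.zeros(2), np.ones(2), opt_budget=2000)
```

The global sampling phase provides good starting points spread across the design space; the poll phase then converges locally from the best of them.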
Random Search Techniques (gradient-free)
Implementations of the SurrogateOptimizer class.
This module contains implementations of the SurrogateOptimizer ABC, which are based on randomized search strategies.
Note that these strategies are all gradient-free, and therefore do not require objective, constraint, or surrogate gradient methods to be defined.
- The classes include:
RandomSearch – search globally by generating random samples
- class optimizers.random_search.RandomSearch(o, lb, ub, hyperparams)
Use randomized search to identify potentially efficient designs.
Randomly search the design space and use the surrogate models to predict whether each search point is potentially Pareto optimal.
- __init__(o, lb, ub, hyperparams)
Constructor for the RandomSearch class.
- Parameters:
o (int) – The number of objectives.
lb (numpy.ndarray) – A 1d array of lower bounds for the design region. The number of design variables is inferred from the dimension of lb.
ub (numpy.ndarray) – A 1d array of upper bounds for the design region. The dimension must match that of lb.
hyperparams (dict) –
A dictionary of hyperparameters for the optimization procedure. It may contain the following:
opt_budget (int): The sample size (default: 10,000).
- Returns:
A new SurrogateOptimizer object.
- solve(x)
Solve the surrogate problem using random search.
- Parameters:
x (np.ndarray) – A 2d array containing a list of feasible design points used to warm start the search.
- Returns:
A 2d numpy.ndarray containing a list of potentially efficient design points that were found by the random search.
- Return type:
np.ndarray
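The strategy above (sample at random, keep the points whose predicted objectives are nondominated) can be sketched in plain numpy; the `surrogate` function here is a toy stand-in for ParMOO's fitted surrogate models:

```python
import numpy as np

def surrogate(x):
    # Toy stand-in for fitted surrogate models: two predicted objectives.
    return np.array([np.sum(x ** 2), np.sum((x - 1.0) ** 2)])

def random_search(lb, ub, budget, rng):
    # Sample the design space uniformly at random within [lb, ub].
    samples = rng.uniform(lb, ub, size=(budget, lb.size))
    fvals = np.array([surrogate(x) for x in samples])
    # Keep points whose predicted objectives are not dominated by any other:
    # point j dominates point i if it is no worse in every objective and
    # strictly better in at least one.
    keep = []
    for i in range(budget):
        dominated = np.any(
            np.all(fvals <= fvals[i], axis=1) & np.any(fvals < fvals[i], axis=1)
        )
        if not dominated:
            keep.append(i)
    return samples[keep]

lb, ub = np.zeros(2), np.ones(2)
pts = random_search(lb, ub, 200, np.random.default_rng(0))
```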
L-BFGS-B Variations (gradient-based)
Implementations of the SurrogateOptimizer class.
This module contains implementations of the SurrogateOptimizer ABC, which are based on the L-BFGS-B quasi-Newton algorithm.
Note that all of these methods are gradient based, and therefore require objective, constraint, and surrogate gradient methods to be defined.
- The classes include:
LBFGSB – limited-memory bound-constrained BFGS (L-BFGS-B) method
TR_LBFGSB – L-BFGS-B applied within a trust region
- class optimizers.lbfgsb.LBFGSB(o, lb, ub, hyperparams)
Use L-BFGS-B and gradients to identify local solutions.
Applies L-BFGS-B to the surrogate problem in order to identify design points that are locally Pareto optimal with respect to the surrogate problem.
- __init__(o, lb, ub, hyperparams)
Constructor for the LBFGSB class.
- Parameters:
o (int) – The number of objectives.
lb (numpy.ndarray) – A 1d array of lower bounds for the design region. The number of design variables is inferred from the dimension of lb.
ub (numpy.ndarray) – A 1d array of upper bounds for the design region. The dimension must match that of lb.
hyperparams (dict) –
A dictionary of hyperparameters for the optimization procedure. It may contain the following:
opt_budget (int): The evaluation budget per solve (default: 1000).
opt_restarts (int): Number of multisolve restarts per scalarization (default: n+1).
- Returns:
A new SurrogateOptimizer object.
- solve(x)
Solve the surrogate problem using L-BFGS-B.
- Parameters:
x (np.ndarray) – A 2d array containing a list of design points used to warm start the search.
- Returns:
A 2d numpy.ndarray of potentially efficient design points that were found by L-BFGS-B.
- Return type:
np.ndarray
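The core step of such a solver can be sketched with SciPy's L-BFGS-B method applied to a scalarized surrogate with an analytic gradient; the objective below is a placeholder, not ParMOO's actual surrogate:

```python
import numpy as np
from scipy.optimize import minimize

def surrogate_and_grad(x):
    # Placeholder scalarized surrogate: value and gradient of sum((x-0.7)**2).
    return np.sum((x - 0.7) ** 2), 2.0 * (x - 0.7)

lb, ub = np.zeros(3), np.ones(3)
# jac=True tells SciPy the callable returns (value, gradient) as a pair.
res = minimize(surrogate_and_grad, x0=np.zeros(3), jac=True,
               method="L-BFGS-B", bounds=list(zip(lb, ub)),
               options={"maxfun": 1000})
```

This is why gradient methods must be defined for these solvers: L-BFGS-B builds its quasi-Newton model from the gradients it is given.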
- class optimizers.lbfgsb.TR_LBFGSB(o, lb, ub, hyperparams)
Use L-BFGS-B and gradients to identify solutions within a trust region.
Applies L-BFGS-B to the surrogate problem within a trust region, in order to identify design points that are locally Pareto optimal with respect to the surrogate problem.
- __init__(o, lb, ub, hyperparams)
Constructor for the TR_LBFGSB class.
- Parameters:
o (int) – The number of objectives.
lb (numpy.ndarray) – A 1d array of lower bounds for the design region. The number of design variables is inferred from the dimension of lb.
ub (numpy.ndarray) – A 1d array of upper bounds for the design region. The dimension must match that of lb.
hyperparams (dict) –
A dictionary of hyperparameters for the optimization procedure. It may contain the following:
opt_budget (int): The evaluation budget per solve (default: 1000).
opt_restarts (int): Number of multisolve restarts per scalarization (default: 2).
- Returns:
A new SurrogateOptimizer object.
- solve(x)
Solve the surrogate problem using L-BFGS-B.
- Parameters:
x (np.ndarray) – A 2d array containing a list of design points used to warm start the search.
- Returns:
A 2d numpy.ndarray of potentially efficient design points that were found by L-BFGS-B.
- Return type:
np.ndarray
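The trust-region variant can be pictured as the same L-BFGS-B solve with the bound constraints intersected with a box around the current incumbent; `tr_lbfgsb`, `center`, and `radius` are illustrative names, not ParMOO's API:

```python
import numpy as np
from scipy.optimize import minimize

def tr_lbfgsb(fun_and_grad, center, radius, lb, ub, budget=1000):
    # Shrink the feasible box to the trust region around the current center.
    tr_lb = np.maximum(lb, center - radius)
    tr_ub = np.minimum(ub, center + radius)
    return minimize(fun_and_grad, x0=center, jac=True, method="L-BFGS-B",
                    bounds=list(zip(tr_lb, tr_ub)),
                    options={"maxfun": budget})

# Placeholder scalarized surrogate with its gradient; the unconstrained
# minimizer (0.9) lies outside the trust region, so the solve stops at
# the trust-region boundary (0.6).
f = lambda x: (np.sum((x - 0.9) ** 2), 2.0 * (x - 0.9))
res = tr_lbfgsb(f, center=np.full(2, 0.5), radius=0.1,
                lb=np.zeros(2), ub=np.ones(2))
```

Restricting each solve to a box where the surrogates were fit keeps the candidates in the region where the surrogate predictions can be trusted.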