Acquisition Functions

Add one of these to your MOOP object to generate additional scalarizations per iteration. In general, ParMOO generates one candidate solution per simulation per acquisition function, so the number of acquisition functions determines the number of candidate simulations evaluated (in parallel) per iteration/batch.

from parmoo import acquisitions
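For example (a sketch only, not a complete script: it assumes an already-configured MOOP instance named moop and that MOOP.addAcquisition accepts one dictionary per acquisition with an 'acquisition' key and an optional 'hyperparams' dictionary):

# Each call registers one acquisition function, and therefore one
# candidate simulation per iteration/batch.
moop.addAcquisition({'acquisition': acquisitions.weighted_sum.UniformWeights})
moop.addAcquisition({'acquisition': acquisitions.weighted_sum.UniformWeights})
moop.addAcquisition({'acquisition': acquisitions.epsilon_constraint.RandomConstraint})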

Current options are:

Weighted Sum Methods

Implementations of the weighted-sum scalarization technique.

This module contains implementations of the AcquisitionFunction ABC, which use the weighted-sum technique.

The classes include:
  • UniformWeights (sample convex weights from a uniform distribution)

  • FixedWeights (use a fixed scalarization, which can be set upon init)

class acquisitions.weighted_sum.UniformWeights(o, lb, ub, hyperparams)

Randomly generate scalarizing weights.

Generates uniformly distributed scalarization weights by randomly sampling the probability simplex.
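For intuition (an illustrative sketch, not ParMOO's internal code): one standard way to sample uniformly from the probability simplex is to normalize i.i.d. exponential draws, which is equivalent to a Dirichlet(1, …, 1) sample.

import numpy as np

rng = np.random.default_rng()
o = 3                        # number of objectives (example value)
w = rng.exponential(size=o)  # i.i.d. Exp(1) draws
w /= w.sum()                 # convex weights: w >= 0 and w.sum() == 1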

__init__(o, lb, ub, hyperparams)

Constructor for the UniformWeights class.

Parameters:
  • o (int) – The number of objectives.

  • lb (numpy.ndarray) – A 1d array of lower bounds for the design region. The number of design variables is inferred from the dimension of lb.

  • ub (numpy.ndarray) – A 1d array of upper bounds for the design region. The dimension must match that of lb.

  • hyperparams (dict) – A dictionary of hyperparameters for tuning the acquisition function.

Returns:

A new UniformWeights generator.

Return type:

UniformWeights

useSD()

Query whether this method uses uncertainties.

When False, allows users to shortcut expensive uncertainty computations.

setTarget(data, penalty_func, history)

Randomly generate a new vector of scalarizing weights.

Parameters:
  • data (dict) – A dictionary specifying the current function evaluation database.

  • penalty_func (function) – A function of one (x) or two (x, sx) inputs that evaluates the (penalized) objectives.

  • history (dict) – An unused argument for this method.

Returns:

A 1d array containing the ‘best’ feasible starting point for the scalarized problem (if any previous evaluations were feasible) or the point in the existing database that is most nearly feasible.

Return type:

numpy.ndarray

scalarize(f_vals, x_vals, s_vals_mean, s_vals_sd)

Scalarize a vector of function values using the current weights.

Parameters:
  • f_vals (numpy.ndarray) – A 1d array specifying the function values to be scalarized.

  • x_vals (numpy.ndarray) – A 1d array specifying the design point corresponding to f_vals (unused by this method).

  • s_vals_mean (numpy.ndarray) – A 1d array specifying the expected simulation outputs for the x value being scalarized (unused by this method).

  • s_vals_sd (numpy.ndarray) – A 1d array specifying the standard deviation for each of the simulation outputs (unused by this method).

Returns:

The scalarized value.

Return type:

float

scalarizeGrad(f_vals, g_vals)

Scalarize a Jacobian of gradients using the current weights.

Parameters:
  • f_vals (numpy.ndarray) – A 1d array specifying the function values for the scalarized gradient (not used here).

  • g_vals (numpy.ndarray) – A 2d array specifying the gradient values to be scalarized.

Returns:

The 1d array for the scalarized gradient.

Return type:

np.ndarray
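For intuition, the weighted-sum scalarization and its gradient reduce to dot products with the weight vector. A minimal NumPy illustration (all values below are hypothetical and not tied to ParMOO's internals):

import numpy as np

w = np.array([0.2, 0.3, 0.5])       # hypothetical convex weights
f_vals = np.array([1.0, 2.0, 4.0])  # objective values at one design point
g_vals = np.array([[0.1, 0.0],      # Jacobian: one gradient row per objective
                   [0.0, 0.2],
                   [0.3, 0.3]])

scalar_f = w @ f_vals               # scalarized value (float)
scalar_g = w @ g_vals               # scalarized gradient (1d array)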

class acquisitions.weighted_sum.FixedWeights(o, lb, ub, hyperparams)

Use fixed scalarizing weights.

Use a fixed scalarization scheme, based on a fixed weighted sum.

__init__(o, lb, ub, hyperparams)

Constructor for the FixedWeights class.

Parameters:
  • o (int) – The number of objectives.

  • lb (numpy.ndarray) – A 1d array of lower bounds for the design region. The number of design variables is inferred from the dimension of lb.

  • ub (numpy.ndarray) – A 1d array of upper bounds for the design region. The dimension must match that of lb.

  • hyperparams (dict) –

    A dictionary of hyperparameters for tuning the acquisition function. May contain the following key:

    • ’weights’ (numpy.ndarray): A 1d array of length o that, when present, specifies the scalarization weights to use. When absent, the default weights are w = [1/o, …, 1/o]. See the usage sketch below.

Returns:

A new FixedWeights generator.

Return type:

FixedWeights
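For illustration (a sketch only, assuming an already-configured MOOP instance named moop and the dict-based addAcquisition interface shown earlier), fixed weights could be supplied through the hyperparams dictionary:

import numpy as np

# Hypothetical usage: the 'weights' key is passed via 'hyperparams' when
# registering the acquisition on an existing MOOP instance.
moop.addAcquisition({'acquisition': acquisitions.weighted_sum.FixedWeights,
                     'hyperparams': {'weights': np.array([0.7, 0.2, 0.1])}})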

useSD()

Query whether this method uses uncertainties.

When False, allows users to shortcut expensive uncertainty computations.

setTarget(data, penalty_func, history)

Randomly generate a feasible starting point.

Parameters:
  • data (dict) – A dictionary specifying the current function evaluation database.

  • penalty_func (function) – A function of one (x) or two (x, sx) inputs that evaluates the (penalized) objectives.

  • history (dict) – An unused argument for this method.

Returns:

A 1d array containing the ‘best’ feasible starting point for the scalarized problem (if any previous evaluations were feasible) or the point in the existing database that is most nearly feasible.

Return type:

numpy.ndarray

scalarize(f_vals, x_vals, s_vals_mean, s_vals_sd)

Scalarize a vector of function values using the current weights.

Parameters:
  • f_vals (numpy.ndarray) – A 1d array specifying the function values to be scalarized.

  • x_vals (numpy.ndarray) – A 1d array specifying the design point corresponding to f_vals (unused by this method).

  • s_vals_mean (numpy.ndarray) – A 1d array specifying the expected simulation outputs for the x value being scalarized (unused by this method).

  • s_vals_sd (numpy.ndarray) – A 1d array specifying the standard deviation for each of the simulation outputs (unused by this method).

Returns:

The scalarized value.

Return type:

float

scalarizeGrad(f_vals, g_vals)

Scalarize a Jacobian of gradients using the current weights.

Parameters:
  • f_vals (numpy.ndarray) – A 1d array specifying the function values for the scalarized gradient (not used here).

  • g_vals (numpy.ndarray) – A 2d array specifying the gradient values to be scalarized.

Returns:

The 1d array for the scalarized gradient.

Return type:

np.ndarray

Epsilon Constraint Methods

Implementations of the epsilon-constraint-style scalarizations.

This module contains implementations of the AcquisitionFunction ABC, which use the epsilon constraint method.

The classes include:
  • RandomConstraint (randomly set an upper bound for all but one objective)

  • EI_RandomConstraint (expected-improvement variant of RandomConstraint)

class acquisitions.epsilon_constraint.RandomConstraint(o, lb, ub, hyperparams)

Improve upon a randomly set target point.

Randomly sets a target point inside the current Pareto front. Attempts to improve one of the objective values by reformulating all other objectives as constraints, upper bounded by their target value.
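Conceptually, the epsilon-constraint reformulation behaves like the following sketch (illustrative only; ParMOO's actual scalarization and penalty handling may differ):

import numpy as np

def epsilon_constraint_sketch(f_vals, target, i):
    """Improve objective i while treating every other objective j as a
    constraint f_j <= target_j; violations enter as an additive penalty."""
    f_vals = np.asarray(f_vals, dtype=float)
    target = np.asarray(target, dtype=float)
    violations = np.maximum(f_vals - target, 0.0)  # amount each f_j exceeds its bound
    violations[i] = 0.0                            # objective i itself is not constrained
    return f_vals[i] + violations.sum()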

__init__(o, lb, ub, hyperparams)

Constructor for the RandomConstraint class.

Parameters:
  • o (int) – The number of objectives.

  • lb (numpy.ndarray) – A 1d array of lower bounds for the design region. The number of design variables is inferred from the dimension of lb.

  • ub (numpy.ndarray) – A 1d array of upper bounds for the design region. The dimension must match that of lb.

  • hyperparams (dict) – A dictionary of hyperparameters for tuning the acquisition function.

Returns:

A new RandomConstraint scalarizer.

Return type:

RandomConstraint

useSD()

Query whether this method uses uncertainties.

When False, allows users to shortcut expensive uncertainty computations.

setTarget(data, penalty_func, history)

Randomly generate a target based on current nondominated points.

Parameters:
  • data (dict) –

    A dictionary specifying the current function evaluation database. It contains two mandatory fields:

    • ’x_vals’ (numpy.ndarray): A 2d array containing the list of design points.

    • ’f_vals’ (numpy.ndarray): A 2d array containing the corresponding list of objective values.

  • penalty_func (function) – A function of one (x) or two (x, sx) inputs that evaluates the (penalized) objectives.

  • history (dict) – A persistent dictionary that could be used by the implementation of the AcquisitionFunction to pass data between iterations; also unused by this scheme.

Returns:

A 1d array containing the ‘best’ feasible starting point for the scalarized problem (if any previous evaluations were feasible) or the point in the existing database that is most nearly feasible.

Return type:

numpy.ndarray

scalarize(f_vals, x_vals, s_vals_mean, s_vals_sd)

Scalarize a vector of function values using the current bounds.

Parameters:
  • f_vals (numpy.ndarray) – A 1d array specifying the function values to be scalarized.

  • x_vals (numpy.ndarray) – A 1d array specifying the design point corresponding to f_vals (unused by this method).

  • s_vals_mean (numpy.ndarray) – A 1d array specifying the expected simulation outputs for the x value being scalarized (unused by this method).

  • s_vals_sd (numpy.ndarray) – A 1d array specifying the standard deviation for each of the simulation outputs (unused by this method).

Returns:

The scalarized value.

Return type:

float

scalarizeGrad(f_vals, g_vals)

Scalarize a Jacobian of gradients using the current bounds.

Parameters:
  • f_vals (numpy.ndarray) – A 1d array specifying the function values for the scalarized gradient, which are used to penalize exceeding the bounds.

  • g_vals (numpy.ndarray) – A 2d array specifying the gradient values to be scalarized.

Returns:

The 1d array for the scalarized gradient.

Return type:

np.ndarray

class acquisitions.epsilon_constraint.EI_RandomConstraint(o, lb, ub, hyperparams)

Expected improvement of a randomly set target point.

Randomly sets a target point inside the current Pareto front. Attempts to improve one of the objective values by reformulating all other objectives as constraints, upper bounded by their target value. Uses surrogate uncertainties to maximize expected improvement in the target objective subject to constraints.
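For intuition only (a rough sketch under a Gaussian surrogate assumption, not ParMOO's implementation): expected improvement over a current best value can be estimated by Monte Carlo sampling of the surrogate's predictive distribution.

import numpy as np

def mc_expected_improvement(mean, sd, best, n_samples=1000, rng=None):
    """Estimate E[max(best - F, 0)] where F ~ N(mean, sd**2); for a
    minimization problem, improvement means falling below the current best."""
    rng = np.random.default_rng() if rng is None else rng
    samples = rng.normal(mean, sd, size=n_samples)
    return np.maximum(best - samples, 0.0).mean()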

__init__(o, lb, ub, hyperparams)

Constructor for the EI_RandomConstraint class.

Parameters:
  • o (int) – The number of objectives.

  • lb (numpy.ndarray) – A 1d array of lower bounds for the design region. The number of design variables is inferred from the dimension of lb.

  • ub (numpy.ndarray) – A 1d array of upper bounds for the design region. The dimension must match that of lb.

  • hyperparams (dict) –

    A dictionary of hyperparameters for tuning the acquisition function. May contain the following key:

    • ’mc_sample_size’ (int): The number of samples to use for Monte Carlo integration (defaults to 10 * m ** 2).

Returns:

A new EI_RandomConstraint scalarizer.

Return type:

EI_RandomConstraint

useSD()

Query whether this method uses uncertainties.

When False, allows users to shortcut expensive uncertainty computations.

setTarget(data, penalty_func, history)

Randomly generate a target based on current nondominated points.

Parameters:
  • data (dict) –

    A dictionary specifying the current function evaluation database. It contains two mandatory fields:

    • ’x_vals’ (numpy.ndarray): A 2d array containing the list of design points.

    • ’f_vals’ (numpy.ndarray): A 2d array containing the corresponding list of objective values.

  • penalty_func (function) – A function of one (x) or two (x, sx) inputs that evaluates the (penalized) objectives.

  • history (dict) – A persistent dictionary that could be used by the implementation of the AcquisitionFunction to pass data between iterations; also unused by this scheme.

Returns:

A 1d array containing the ‘best’ feasible starting point for the scalarized problem (if any previous evaluations were feasible) or the point in the existing database that is most nearly feasible.

Return type:

numpy.ndarray

scalarize(f_vals, x_vals, s_vals_mean, s_vals_sd)

Scalarize a vector of function values using the current bounds.

Parameters:
  • f_vals (numpy.ndarray) – A 1d array specifying the function values to be scalarized.

  • x_vals (numpy.ndarray) – A 1d array specifying the design point corresponding to f_vals (unused by this method).

  • s_vals_mean (numpy.ndarray) – A 1d array specifying the expected simulation outputs for the x value being scalarized.

  • s_vals_sd (numpy.ndarray) – A 1d array specifying the standard deviation for each of the simulation outputs.

Returns:

The scalarized value.

Return type:

float

scalarizeGrad(f_vals, g_vals)

Not implemented for this acquisition function; do not use gradient-based methods with it.