mfpml.optimization

mfpml.optimization.evolutionary_algorithms

class DE(num_pop, num_gen, step_size=0.5, crossover_rate=0.5, strategy='DE/rand/1/bin')[source]

Bases: EABase

__update_gen_optimum(iter)

update the generation optimum information

Return type:

None

__update_obj(iter)
Return type:

None

__update_pop(iter)
Return type:

None

_de_initializer()[source]

calculate the objective values for the initial population

Return type:

None

run_optimizer(func, num_dim, design_space, print_info=False, save_step_results=False, stopping_error=None, args=())[source]

Main function of the DE algorithm

Parameters:
  • func (any) – objective function

  • num_dim (int) – dimension of the design space

  • design_space (np.ndarray) – design space

Returns:

results – optimization results of the evolutionary algorithm

Return type:

dict
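
A minimal usage sketch, assuming the import path matches the module layout above and that design_space holds one [lower, upper] bound pair per dimension (both assumptions, not confirmed by this page):

```python
import numpy as np

from mfpml.optimization.evolutionary_algorithms import DE

# Hypothetical test objective: sphere function evaluated on a whole
# population at once (x assumed to have shape (num_pop, num_dim)).
def sphere(x):
    return np.sum(x**2, axis=1)

de = DE(num_pop=20, num_gen=100, step_size=0.5, crossover_rate=0.5)
design_space = np.array([[-5.0, 5.0], [-5.0, 5.0]])  # assumed bound layout
results = de.run_optimizer(func=sphere, num_dim=2,
                           design_space=design_space, print_info=True)
print(results)  # dict of optimization results
```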

class EABase[source]

Bases: object

_gen_best_obj(obj)[source]

Get the best objective value of the current generation

Parameters:

obj (np.ndarray) – objective values of the current population

Returns:

best objective value of the generation

Return type:

np.ndarray

_gen_best_x(pop, obj)[source]

Get the design point with the best objective value in the current generation

Parameters:
  • pop (np.ndarray) – current population

  • obj (np.ndarray) – objective values of the current population

Returns:

best design point of the generation

Return type:

np.ndarray

_loc_cons(x)[source]
Return type:

None

_location_initialzer()[source]

location/position initialization

Return type:

ndarray

_save_results()[source]
Return type:

None

plot_optimization_history(figure_name='optimization', save_figure=True)[source]
Return type:

None

print_info(iter)[source]
Return type:

None

run_optimizer(func, num_dim, design_space, print_info=False, args=())[source]

Main function of the evolutionary algorithm

Parameters:
  • func (any) – objective function

  • num_dim (int) – dimension of the design space

  • design_space (np.ndarray) – design space

Returns:

results – optimization results of the evolutionary algorithm

Return type:

dict

class PSO(num_pop, num_gen, cognition_rate=1.5, social_rate=1.5, weight=0.8, max_velocity_rate=0.2)[source]

Bases: EABase

Initialization of PSO

Parameters:
  • num_pop (int) – population size

  • num_gen (int) – number of generations

  • cognition_rate (float, optional) – cognition rate, by default 1.5

  • social_rate (float, optional) – social rate, by default 1.5

  • weight (float, optional) – inertia weight applied to the previous velocity, by default 0.8

  • max_velocity_rate (float) – maximum velocity rate

__init__(num_pop, num_gen, cognition_rate=1.5, social_rate=1.5, weight=0.8, max_velocity_rate=0.2)[source]

Initialization of PSO

Parameters:
  • num_pop (int) – population size

  • num_gen (int) – number of generations

  • cognition_rate (float, optional) – cognition rate, by default 1.5

  • social_rate (float, optional) – social rate, by default 1.5

  • weight (float, optional) – inertia weight applied to the previous velocity, by default 0.8

  • max_velocity_rate (float) – maximum velocity rate

__update_gen_optimum(iter)

update the generation optimum information

Return type:

None

__update_obj(iter)
Return type:

None

__update_pop(iter)

update the population information

Parameters:

iter (int) – iteration

Return type:

None

__update_pop_optimum()

update the population optimum information

Return type:

None

__velocity_cons()
Return type:

None

__velocity_initialzer(trick='random')

velocity initialization

Parameters:

trick (str) – “random” or “zero”

Return type:

None

_pso_initializer()[source]

calculate the objective values for the initial population

Return type:

None

run_optimizer(func, num_dim, design_space, print_info=False, save_step_results=False, stopping_error=None, args=())[source]

Main function of the PSO algorithm

Parameters:
  • func (any) – objective function

  • num_dim (int) – dimension of the design space

  • design_space (np.ndarray) – design space

Returns:

results – optimization results of PSO

Return type:

dict
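
PSO follows the same run_optimizer interface; a hedged sketch under the same assumptions as the DE example above:

```python
import numpy as np

from mfpml.optimization.evolutionary_algorithms import PSO

def sphere(x):  # hypothetical test objective
    return np.sum(x**2, axis=1)

pso = PSO(num_pop=30, num_gen=200, cognition_rate=1.5,
          social_rate=1.5, weight=0.8)
design_space = np.array([[-5.0, 5.0], [-5.0, 5.0]])  # assumed bound layout
results = pso.run_optimizer(func=sphere, num_dim=2,
                            design_space=design_space,
                            stopping_error=1e-6)
```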

mfpml.optimization.single_fidelity_Bayesian_optimization

class BayesUnConsOpt(problem, acquisition, num_init, verbose=False, seed=1996)[source]

Bases: ABC

Bayesian optimization for single fidelity unconstrained problems

__print_info(iteration)

print optimum information to screen

Parameters:

iteration (int) – iteration number

Return type:

None

__update_error()

update error between the known optimum and the current optimum

Returns:

error

Return type:

float

_initialization()[source]

initialization for single fidelity bayesian optimization

Return type:

None

_update_para()[source]
Return type:

None

run_optimizer(max_iter=10, stopping_error=1.0)[source]
Return type:

Tuple[float, ndarray]
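
A hedged end-to-end sketch; the problem object and its import are illustrative placeholders (this page does not document the problem interface), while the optimizer and acquisition signatures follow the entries above:

```python
from mfpml.optimization.single_fidelity_Bayesian_optimization import (
    BayesUnConsOpt,
)
from mfpml.optimization.single_fidelity_acqusitions import EI

# Placeholder: substitute a real single-fidelity benchmark from your
# installation, e.g. something like
#   from mfpml.problems.single_fidelity_functions import Forrester
#   problem = Forrester()
problem = ...

bo = BayesUnConsOpt(problem=problem, acquisition=EI(),
                    num_init=5, verbose=True, seed=1996)
# run_optimizer returns a Tuple[float, ndarray]; the (value, location)
# ordering here is inferred from the type annotation above.
best_obj, best_x = bo.run_optimizer(max_iter=20, stopping_error=1e-3)
```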

mfpml.optimization.single_fidelity_acqusitions

class EI(optimizer=None)[source]

Bases: SFUnConsAcq

Expected improvement acquisition function

eval(x, surrogate)[source]

core of expected improvement function

Parameters:
  • x (np.ndarray) – locations for evaluation

  • surrogate (Kriging) – Kriging model at current iteration

Returns:

-ei – negative expected improvement values

Return type:

np.ndarray
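
For reference, the standard expected improvement, with μ(x) and σ(x) the Kriging prediction and standard deviation, f_min the best observed value, and Φ, φ the standard normal CDF and PDF; eval returns its negative so an inner minimizer can be used. That this is the exact form implemented here is an assumption:

```latex
\mathrm{EI}(x) = \bigl(f_{\min} - \mu(x)\bigr)\,\Phi(z) + \sigma(x)\,\varphi(z),
\qquad z = \frac{f_{\min} - \mu(x)}{\sigma(x)}
```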

class LCB(optimizer=None, kappa=[1.0, 1.96])[source]

Bases: SFUnConsAcq

Lower confidence bound acquisition function

initialize lcb acquisition function

Parameters:
  • optimizer (Any) – optimizer for getting update points

  • kappa (list, optional) – factors for exploration and exploitation, by default [1.0, 1.96]

__init__(optimizer=None, kappa=[1.0, 1.96])[source]

initialize lcb acquisition function

Parameters:
  • optimizer (Any) – optimizer for getting update points

  • kappa (list, optional) – factors for exploration and exploitation, by default [1.0, 1.96]

eval(x, surrogate)[source]

Calculate values of the LCB acquisition function

Parameters:
  • x (np.ndarray) – locations for evaluation

  • surrogate (Kriging) – surrogate model

Returns:

lcb – lcb values on x

Return type:

np.ndarray
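
The usual lower confidence bound; mapping kappa[0] to the exploitation (mean) term and kappa[1] to the exploration (standard deviation) term is an assumption based on the parameter description above:

```latex
\mathrm{LCB}(x) = \kappa_1\,\mu(x) - \kappa_2\,\sigma(x)
```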

class PI(optimizer=None)[source]

Bases: SFUnConsAcq

Probability of improvement acquisition function

initialization for PI

Parameters:

optimizer (Any, optional) – optimizer for getting the new location, by default None

__init__(optimizer=None)[source]

initialization for PI

Parameters:

optimizer (Any, optional) – optimizer for getting the new location, by default None

eval(x, surrogate)[source]

core of the probability of improvement function

Parameters:
  • x (np.ndarray) – locations for evaluation

  • surrogate (Kriging) – Kriging model at current iteration

Returns:

-pi – negative probability improvement values

Return type:

np.ndarray
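
The standard probability of improvement, using the same notation as the EI entry above; eval returns its negative. Again, that this exact form is implemented here is an assumption:

```latex
\mathrm{PI}(x) = \Phi\!\left(\frac{f_{\min} - \mu(x)}{\sigma(x)}\right)
```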

class SFUnConsAcq(optimizer)[source]

Bases: ABC

Base class for single-fidelity acquisition functions for single-objective problems

abstract eval(x, surrogate)[source]

core of acquisition function

Parameters:
  • x (np.ndarray) – locations for evaluation

  • surrogate (Kriging) – Kriging model at current iteration

Returns:

values of acquisition function

Return type:

np.ndarray

query(surrogate)[source]

get the next location to evaluate

Parameters:

surrogate (Any) – Gaussian process regression model

Returns:

opt_x – location of update point

Return type:

np.ndarray
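
Since eval is the only abstract method, a custom acquisition can be added by subclassing SFUnConsAcq. A minimal sketch, assuming the surrogate exposes a predict method that returns mean and standard deviation (that interface is not documented on this page):

```python
import numpy as np

from mfpml.optimization.single_fidelity_acqusitions import SFUnConsAcq

class PureExploration(SFUnConsAcq):
    """Toy acquisition that chases predictive uncertainty only."""

    def eval(self, x, surrogate):
        # Assumed surrogate interface: mean and std of the prediction at x.
        _, std = surrogate.predict(x, return_std=True)
        return -std  # negative, so query's inner optimizer can minimize
```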

mfpml.optimization.multi_fidelity_Bayesian_optimization

class mfUnConsBayesOpt(problem, acquisition, num_init, surrogate_name='HierarchicalKriging', seed=1996, verbose=True)[source]

Bases: object

Multi-fidelity single objective Bayesian optimization

Initialize the multi-fidelity Bayesian optimization

Parameters:

problem (Any) – optimization problem

__init__(problem, acquisition, num_init, surrogate_name='HierarchicalKriging', seed=1996, verbose=True)[source]

Initialize the multi-fidelity Bayesian optimization

Parameters:

problem (Any) – optimization problem

__update_error()

update error between the known optimum and the current optimum

Returns:

error

Return type:

float

_initialization()[source]

Initialize parameters

Return type:

None

_print_info(iteration)[source]

Print optimization information

Parameters:

iteration (int) – current iteration

Return type:

None

_update_para(update_x, update_y, fidelity)[source]

Update parameters

Parameters:
  • update_x (dict) – update samples

  • update_y (dict) – update responses

  • fidelity – fidelity level at which to update

Return type:

None

run_optimizer(max_iter=10, cost_ratio=1.0, stopping_error=1.0)[source]

Run the optimization

Parameters:
  • max_iter (int, optional) – maximum number of iterations, by default 10

  • cost_ratio (float, optional) – cost ratio between high-fidelity and low-fidelity, by default 1.0

  • stopping_error (float, optional) – stopping error, by default 1.0

Return type:

None
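
A hedged sketch mirroring the single-fidelity example; the multi-fidelity problem object is a placeholder, and the acquisition comes from the module documented below:

```python
from mfpml.optimization.multi_fidelity_Bayesian_optimization import (
    mfUnConsBayesOpt,
)
from mfpml.optimization.multi_fidelity_acqusitions import VFLCB

# Placeholder: substitute a real multi-fidelity benchmark that provides
# both fidelity levels expected by the optimizer.
mf_problem = ...

mf_bo = mfUnConsBayesOpt(problem=mf_problem, acquisition=VFLCB(),
                         num_init=10,
                         surrogate_name='HierarchicalKriging',
                         verbose=True)
mf_bo.run_optimizer(max_iter=20, cost_ratio=5.0)
```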

mfpml.optimization.multi_fidelity_acqusitions

class AugmentedEI(optimizer=None)[source]

Bases: MFUnConsAcq

Augmented Expected Improvement acquisition function

Initialize the multi-fidelity acquisition

Parameters:

optimizer (Any) – optimizer instance for getting update location

__init__(optimizer=None)[source]

Initialize the multi-fidelity acquisition

Parameters:

optimizer (Any) – optimizer instance for getting update location

static _effective_best(mf_surrogate, c=1.0)[source]

Return the effective best solution

Parameters:
  • mf_surrogate (Any) – multi-fidelity surrogate instance

  • c (float, optional) – degree of risk aversion, by default 1

Returns:

effective best solution

Return type:

np.ndarray

static corr(x, mf_surrogate, fidelity)[source]

Evaluate the correlation between fidelity levels

Parameters:
  • x (np.ndarray) – point to evaluate

  • mf_surrogate (Any) – multi-fidelity surrogate instance

  • fidelity (str) – str indicating fidelity level

Returns:

correlation values

Return type:

np.ndarray

eval(x, mf_surrogate, cost_ratio, fidelity)[source]

Evaluate the selected acquisition function at a certain fidelity

Parameters:
  • x (np.ndarray) – point to evaluate

  • mf_surrogate (any) – multi-fidelity surrogate instance

  • cost_ratio (float) – ratio of high-fidelity cost to low-fidelity cost

  • fidelity (str) – str indicating fidelity level

Returns:

acquisition function values

Return type:

np.ndarray

query(mf_surrogate, cost_ratio=1.0, fmin=1.0)[source]

Query the acquisition function

Parameters:
  • mf_surrogate (Any) – multi-fidelity surrogate instance

  • cost_ratio (float, optional) – ratio of high-fidelity cost to low-fidelity cost, by default 1.0

  • fmin (float, optional) – best observed value, by default 1.0

Returns:

a dict with two entries, where ‘hf’ holds the update point for high-fidelity and ‘lf’ the one for low-fidelity

Return type:

dict

class ExtendedPI(optimizer=None)[source]

Bases: MFUnConsAcq

Extended Probability Improvement acquisition function

Reference

[1] Ruan, X., Jiang, P., Zhou, Q., Hu, J., & Shu, L. (2020). Variable-fidelity probability of improvement method for efficient global optimization of expensive black-box problems. Structural and Multidisciplinary Optimization, 62(6), 3021-3052.

Initialize the multi-fidelity acquisition

Parameters:

optimizer (Any, optional) – optimizer instance, by default None

__init__(optimizer=None)[source]

Initialize the multi-fidelity acquisition

Parameters:

optimizer (any) – optimizer instance

static corr(x, mf_surrogate, fidelity=1)[source]

Evaluate the correlation between fidelity levels

Parameters:
  • x (np.ndarray) – point to evaluate

  • mf_surrogate (any) – multi-fidelity surrogate instance

  • fidelity (str) – str indicating fidelity level

Returns:

correlation values

Return type:

np.ndarray

eval(x, mf_surrogate, fmin, cost_ratio, fidelity)[source]

Evaluate the selected acquisition function at a certain fidelity

Parameters:
  • x (np.ndarray) – point to evaluate

  • mf_surrogate (Any) – multi-fidelity surrogate instance

  • fmin (np.ndarray) – best prediction of multi-fidelity model

  • cost_ratio (float) – ratio of high-fidelity cost to low-fidelity cost

  • fidelity (str) – str indicating fidelity level

Returns:

acquisition function values

Return type:

np.ndarray

query(mf_surrogate, cost_ratio=1.0, fmin=1.0)[source]

Query the acquisition function

Parameters:
  • mf_surrogate (Any) – multi-fidelity surrogate instance

  • cost_ratio (float, optional) – ratio of high-fidelity cost to low-fidelity cost, by default 1.0

  • fmin (float, optional) – best observed value, by default 1.0

Returns:

a dict with two entries, where ‘hf’ holds the update point for high-fidelity and ‘lf’ the one for low-fidelity

Return type:

dict

class MFUnConsAcq[source]

Bases: ABC

Base class for multi-fidelity acquisition functions

abstract eval(x, mf_surrogate, cost_ratio, fidelity)[source]

Evaluate the acquisition function at a certain fidelity

Parameters:
  • x (np.ndarray) – point to evaluate

  • mf_surrogate (Any) – multi-fidelity surrogate instance

  • cost_ratio (float) – ratio of high-fidelity cost to low-fidelity cost

  • fidelity (str) – str indicating fidelity level

Returns:

acquisition function values

Return type:

np.ndarray

abstract query(mf_surrogate, cost_ratio, fmin)[source]

Query the acquisition function

Parameters:
  • mf_surrogate (Any) – multi-fidelity surrogate instance

  • cost_ratio (float) – ratio of high-fidelity cost to low-fidelity cost

  • fmin (float) – best observed value

Returns:

a dict with two entries, where ‘hf’ holds the update point for high-fidelity and ‘lf’ the one for low-fidelity

Return type:

dict

class VFEI(optimizer=None)[source]

Bases: MFUnConsAcq

Variable-Fidelity Expected Improvement acquisition function

Initialize the multi-fidelity acquisition

Parameters:

optimizer (Any) – optimizer instance

__init__(optimizer=None)[source]

Initialize the multi-fidelity acquisition

Parameters:

optimizer (Any) – optimizer instance

eval(x, fmin, mf_surrogate, fidelity)[source]

Evaluate the selected acquisition function at a certain fidelity

Parameters:
  • x (np.ndarray) – point to evaluate

  • fmin (float) – best observed function evaluation

  • mf_surrogate (Any) – multi-fidelity surrogate instance

  • fidelity (str) – str indicating fidelity level

Returns:

acquisition function values w.r.t. the corresponding fidelity level

Return type:

np.ndarray
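
Loosely, variable-fidelity EI measures improvement against the high-fidelity prediction while taking the uncertainty from the queried fidelity level; the exact form used in this implementation is an assumption:

```latex
\mathrm{VFEI}(x, l) = \bigl(f_{\min} - \mu(x)\bigr)\,\Phi(z_l) + s_l(x)\,\varphi(z_l),
\qquad z_l = \frac{f_{\min} - \mu(x)}{s_l(x)}
```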

query(mf_surrogate, fmin, cost_ratio=5.0)[source]

Query the acquisition function

Parameters:
  • mf_surrogate (Any) – multi-fidelity surrogate instance

  • fmin (float) – best observed function evaluation

  • cost_ratio (float, optional) – ratio of high-fidelity cost to low-fidelity cost, by default 5.0

Returns:

a dict with two entries, where ‘hf’ holds the update point for high-fidelity and ‘lf’ the one for low-fidelity

Return type:

dict

class VFLCB(optimizer=None, kappa=[1.0, 1.96])[source]

Bases: MFUnConsAcq

Variable-fidelity Lower Confidence Bound acquisition function

Initialize the VFLCB acquisition

Parameters:
  • optimizer (any, optional) – optimizer instance, by default ‘L-BFGS-B’

  • kappa (list, optional) – balance factors for exploitation and exploration, respectively, by default [1.0, 1.96]

__init__(optimizer=None, kappa=[1.0, 1.96])[source]

Initialize the VFLCB acquisition

Parameters:
  • optimizer (any, optional) – optimizer instance, by default ‘L-BFGS-B’

  • kappa (list, optional) – balance factors for exploitation and exploration, respectively, by default [1.0, 1.96]

eval(x, mf_surrogate, cost_ratio, fidelity)[source]

Evaluate VFLCB function values at a certain fidelity

Parameters:
  • x (np.ndarray) – point to evaluate

  • mf_surrogate (Any) – multi-fidelity surrogate model instance

  • cost_ratio (float) – ratio of high-fidelity cost to low-fidelity cost

  • fidelity (str) – str indicating fidelity level

Returns:

acquisition function values

Return type:

np.ndarray
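
Given the ingredients documented here (kappa weights, fidelity-dependent uncertainty, cost ratio), a plausible reading is an LCB built from the high-fidelity mean and the queried fidelity's standard deviation, with the cost ratio deciding which fidelity to query; the precise weighting is an assumption:

```latex
\mathrm{VFLCB}(x, l) = \kappa_1\,\mu_{hf}(x) - \kappa_2\,\sigma_l(x)
```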

query(mf_surrogate, cost_ratio=1.0, fmin=1.0)[source]

Query the acquisition function

Parameters:
  • mf_surrogate (Any) – multi-fidelity surrogate instance

  • cost_ratio (float, optional) – ratio of high-fidelity cost to low-fidelity cost, by default 1.0

  • fmin (float, optional) – best observed value, by default 1.0

Returns:

a dict with two entries, where ‘hf’ holds the update point for high-fidelity and ‘lf’ the one for low-fidelity

Return type:

dict