APOSMM

class gen_classes.aposmm.APOSMM(vocs, max_active_runs, initial_sample_size, History=[], sample_points=None, localopt_method='scipy_Nelder-Mead', rk_const=None, xtol_abs=1e-06, ftol_abs=1e-06, opt_return_codes=[0], mu=1e-08, nu=1e-08, dist_to_bound_multiple=0.5, random_seed=1, **kwargs)

Bases: PersistentGenInterfacer

APOSMM coordinates multiple concurrent local optimization runs, dramatically reducing the time needed to discover multiple local minima on parallel systems.

This generator adheres to the Generator Standard.

VOCS variables must include both regular and *_on_cube versions. For example:

vars_std = {
    "var1": [-10.0, 10.0],
    "var2": [0.0, 100.0],
    "var3": [1.0, 50.0],
    "var1_on_cube": [0, 1.0],
    "var2_on_cube": [0, 1.0],
    "var3_on_cube": [0, 1.0],
}
variables_mapping = {
    "x": ["var1", "var2", "var3"],
    "x_on_cube": ["var1_on_cube", "var2_on_cube", "var3_on_cube"],
}
gen = APOSMM(vocs, 3, 3, variables_mapping=variables_mapping, ...)
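The *_on_cube entries are the original variables rescaled to the unit cube via x_on_cube = (x - lb) / (ub - lb), the same mapping used in the example at the bottom of this page. A minimal sketch of that rescaling (the point values here are made up for illustration):

```python
# Rescale a point from the original domain onto the unit cube:
# x_on_cube = (x - lb) / (ub - lb), applied per variable.
variables = {"var1": [-10.0, 10.0], "var2": [0.0, 100.0], "var3": [1.0, 50.0]}
point = {"var1": 5.0, "var2": 25.0, "var3": 1.0}

for name, (lb, ub) in variables.items():
    point[f"{name}_on_cube"] = (point[name] - lb) / (ub - lb)

print(point["var1_on_cube"], point["var2_on_cube"], point["var3_on_cube"])
# 0.75 0.25 0.0
```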

Getting started

APOSMM requires a minimal sample of evaluated points before it begins optimization. A random sample across the domain can either be retrieved via a suggest() call immediately after initialization, or the user can provide their own sample points via ingest(). The minimum sample size is specified via the initial_sample_size parameter; this many evaluated sample points must be given to APOSMM before it will suggest any local-optimization points.

# Approach 1: Retrieve sample points via suggest()
gen = APOSMM(vocs, max_active_runs=2, initial_sample_size=10)

# ask APOSMM for some sample points
initial_sample = gen.suggest(10)
for point in initial_sample:
    point["f"] = func(point["x"])
gen.ingest(initial_sample)

# APOSMM will now provide local-optimization points.
points = gen.suggest(10)

# ----------------

# Approach 2: Ingest pre-computed sample points via ingest()
gen = APOSMM(vocs, max_active_runs=2, initial_sample_size=10)

initial_sample = create_initial_sample()
for point in initial_sample:
    point["f"] = func(point["x"])

# provide APOSMM with sample points
gen.ingest(initial_sample)

# APOSMM will now provide local-optimization points.
points = gen.suggest(10)

...

Important

After the initial sample phase, APOSMM cannot accept additional “arbitrary” sample points that are not associated with local optimization runs.

gen = APOSMM(vocs, max_active_runs=2, initial_sample_size=10)

# ask APOSMM for some sample points
initial_sample = gen.suggest(10)
for point in initial_sample:
    point["f"] = func(point["x"])
gen.ingest(initial_sample)

# APOSMM will now provide local-optimization points.
points_from_aposmm = gen.suggest(10)
for point in points_from_aposmm:
    point["f"] = func(point["x"])
gen.ingest(points_from_aposmm)

gen.ingest(another_sample)  # raises an error: arbitrary sample points are no longer accepted

param vocs:

The VOCS object, adhering to the VOCS interface from the Generator Standard.

type vocs:

VOCS

param max_active_runs:

Maximum number of local optimization runs that APOSMM advances concurrently.

type max_active_runs:

int

param initial_sample_size:

Minimum number of evaluated sample points required before optimization begins.

If suggest(N) is called first, APOSMM returns N random sample points drawn across the domain, where N <= initial_sample_size.

If ingest(sample) is called first, ingest(sample) must be called repeatedly until the total number of ingested points is >= initial_sample_size.

type initial_sample_size:

int
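The threshold behavior can be sketched without the library: evaluated points may arrive over several ingest() calls, and APOSMM leaves the sampling phase only once the running total reaches initial_sample_size (the chunk sizes below are invented for illustration):

```python
# Illustrate the initial-sample threshold: the generator stays in its
# sampling phase until the cumulative number of ingested, evaluated
# points reaches initial_sample_size.
initial_sample_size = 10
chunks = [4, 3, 5]  # evaluated points delivered over three ingest() calls

total_ingested = 0
phases = []
for chunk in chunks:
    total_ingested += chunk
    phases.append("sampling" if total_ingested < initial_sample_size else "optimizing")

print(phases)  # ['sampling', 'sampling', 'optimizing']
```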

param History:

An optional history of previously evaluated points.

type History:

npt.NDArray = []

param sample_points:

Included for compatibility with the underlying algorithm. Points to be sampled, specified in the original domain. If APOSMM needs additional sample points during the optimization, they are drawn uniformly over the domain.

type sample_points:

npt.NDArray = None

param localopt_method:

The local optimization method to use. Additional methods are added over time.

type localopt_method:

str = “scipy_Nelder-Mead” (scipy) or “LN_BOBYQA” (nlopt)

param mu:

Distance from the domain boundary that all local-optimization starting points must satisfy.

type mu:

float = 1e-8

param nu:

Distance from identified minima that all local-optimization starting points must satisfy.

type nu:

float = 1e-8

param rk_const:

Multiplier in front of the r_k value. If not provided, it will be set to 0.5 * ((gamma(1 + (n / 2)) * 5) ** (1 / n)) / sqrt(pi)

type rk_const:

float = None
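For reference, the default value can be computed directly for a given problem dimension n (here n = 2, the dimension used in the example at the bottom of this page):

```python
from math import gamma, pi, sqrt

n = 2  # problem dimension

# Default multiplier in front of the r_k value, as given above
rk_const = 0.5 * ((gamma(1 + (n / 2)) * 5) ** (1 / n)) / sqrt(pi)
print(round(rk_const, 4))  # 0.6308
```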

param xtol_abs:

Localopt method’s absolute convergence tolerance on the parameter values.

type xtol_abs:

float = 1e-6

param ftol_abs:

Localopt method’s absolute convergence tolerance on the objective value.

type ftol_abs:

float = 1e-6

param opt_return_codes:

scipy only: List of return codes that determine if a point should be ruled a local minimum.

type opt_return_codes:

list[int] = [0]

param dist_to_bound_multiple:

Fraction of the distance to the nearest boundary used as the initial step size in local optimization runs.

type dist_to_bound_multiple:

float = 0.5

param random_seed:

Seed for the random number generator.

type random_seed:

int = 1

suggest_updates()

Request a list of NumPy arrays containing entries that have been identified as minima.

Return type:

List[ndarray[tuple[int, …], dtype[_ScalarType_co]]]
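In a run loop, suggest_updates() is typically polled alongside suggest() to collect minima as they are identified, as in the test near the bottom of this page. The polling pattern, sketched with a hypothetical stub generator so the snippet is self-contained:

```python
class StubGen:
    """Hypothetical stand-in for a generator exposing suggest()/suggest_updates()."""

    def __init__(self):
        self._polls = 0

    def suggest(self, num_points):
        self._polls += 1
        return [{"x": i} for i in range(num_points)]

    def suggest_updates(self):
        # Pretend one minimum is identified on the second poll.
        return [{"x": 0, "local_min": True}] if self._polls == 2 else []


gen = StubGen()
potential_minima = []
for _ in range(3):
    # Poll for newly identified minima alongside each batch of points
    sample, detected = gen.suggest(2), gen.suggest_updates()
    potential_minima.extend(detected)

print(len(potential_minima))  # 1
```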

suggest(num_points=0)

Request the next set of points to evaluate.

Parameters:

num_points (int | None)

Return type:

List[dict]

ingest(results, tag=2)

Send the results of evaluations to the generator.

Parameters:
  • results (List[dict])

  • tag (int)

Return type:

None

export(user_fields=False, as_dicts=False)

Return the generator’s results.

Parameters:
  • user_fields (bool, optional) – If True, return local_H with variables unmapped from arrays back to individual fields. Default is False.

  • as_dicts (bool, optional) – If True, return local_H as a list of dictionaries instead of a NumPy array. Default is False.

Returns:

  • local_H (npt.NDArray | list) – Generator history array (unmapped if user_fields=True, as dicts if as_dicts=True).

  • persis_info (dict) – Persistent information.

  • tag (int) – Status flag (e.g., FINISHED_PERSISTENT_GEN_TAG).

Return type:

tuple[ndarray[tuple[int, …], dtype[_ScalarType_co]] | list | None, dict | None, int | None]

finalize()

Stop the generator process and store the returned data.

Return type:

None

Parameters:
  • vocs (VOCS)

  • max_active_runs (int)

  • initial_sample_size (int)

  • History (ndarray[tuple[int, ...], dtype[_ScalarType_co]])

  • sample_points (ndarray[tuple[int, ...], dtype[_ScalarType_co]])

  • localopt_method (str)

  • rk_const (float)

  • xtol_abs (float)

  • ftol_abs (float)

  • opt_return_codes (list[int])

  • mu (float)

  • nu (float)

  • dist_to_bound_multiple (float)

  • random_seed (int)

See also

An excerpt showing APOSMM driven by a libEnsemble workflow (names such as workflow, sim_f, and minima are defined elsewhere in the originating test):

workflow.libE_specs.gen_on_manager = True

aposmm = APOSMM(
    vocs,
    max_active_runs=workflow.nworkers,
    variables_mapping={"x": ["core", "edge"], "x_on_cube": ["core_on_cube", "edge_on_cube"], "f": ["energy"]},
    initial_sample_size=100,
    sample_points=minima,
    localopt_method="LN_BOBYQA",
    rk_const=0.5 * ((gamma(1 + (n / 2)) * 5) ** (1 / n)) / sqrt(pi),
    xtol_abs=1e-6,
    ftol_abs=1e-6,
)

workflow.gen_specs = GenSpecs(
    persis_in=["x", "x_on_cube", "sim_id", "local_min", "local_pt", "f"],
    generator=aposmm,
    batch_size=5,
    initial_batch_size=10,
    user={"initial_sample_size": 100},
)

if run == 0:
    workflow.sim_specs = SimSpecs(sim_f=sim_f, inputs=["x"], outputs=[("f", float)])
    workflow.exit_criteria = ExitCriteria(sim_max=2000)
elif run == 1:
    workflow.persis_info["num_gens_started"] = 0
    sim_app2 = six_hump_camel.__file__
    exctr = MPIExecutor()
    exctr.register_app(full_path=sim_app2, app_name="six_hump_camel", calc_type="sim")  # named app
    workflow.sim_specs = SimSpecs(
        sim_f=sim_f_exec, inputs=["x"], outputs=[("f", float), ("cstat", int)], user={"cores": 1}
    )
    workflow.exit_criteria = ExitCriteria(sim_max=200)

workflow.add_random_streams()

H, _, _ = workflow.run()
A complete example that ingests the initial sample before asking for points:

def test_asktell_ingest_first():
    from math import gamma, pi, sqrt

    import numpy as np
    from gest_api.vocs import VOCS

    import libensemble.gen_funcs
    from libensemble.gen_classes import APOSMM
    from libensemble.sim_funcs.six_hump_camel import six_hump_camel_func
    from libensemble.tests.regression_tests.support import six_hump_camel_minima as minima

    libensemble.gen_funcs.rc.aposmm_optimizers = "nlopt"

    n = 2

    variables = {"core": [-3, 3], "edge": [-2, 2], "core_on_cube": [0, 1], "edge_on_cube": [0, 1]}
    objectives = {"energy": "MINIMIZE"}

    variables_mapping = {
        "x": ["core", "edge"],
        "x_on_cube": ["core_on_cube", "edge_on_cube"],
        "f": ["energy"],
    }

    vocs = VOCS(variables=variables, objectives=objectives)

    my_APOSMM = APOSMM(
        vocs,
        max_active_runs=6,
        initial_sample_size=6,
        variables_mapping=variables_mapping,
        localopt_method="LN_BOBYQA",
        opt_return_codes=[0],
        rk_const=0.5 * ((gamma(1 + (n / 2)) * 5) ** (1 / n)) / sqrt(pi),
        xtol_abs=1e-6,
        ftol_abs=1e-6,
        dist_to_bound_multiple=0.01,
    )

    # Build the initial sample from the known minima, mapping each point
    # onto the unit cube: x_on_cube = (x - lb) / (ub - lb)
    initial_sample = [
        {
            "core": minima[i][0],
            "edge": minima[i][1],
            "core_on_cube": (minima[i][0] - variables["core"][0]) / (variables["core"][1] - variables["core"][0]),
            "edge_on_cube": (minima[i][1] - variables["edge"][0]) / (variables["edge"][1] - variables["edge"][0]),
            "energy": six_hump_camel_func(np.array([minima[i][0], minima[i][1]])),
        }
        for i in range(6)
    ]
    my_APOSMM.ingest(initial_sample)

    total_evals = 0
    eval_max = 2000

    potential_minima = []

    while total_evals < eval_max:
        sample, detected_minima = my_APOSMM.suggest(6), my_APOSMM.suggest_updates()
        if len(detected_minima):
            for m in detected_minima:
                potential_minima.append(m)
        for point in sample:
            point["energy"] = six_hump_camel_func(np.array([point["core"], point["edge"]]))
            total_evals += 1
        my_APOSMM.ingest(sample)
    my_APOSMM.finalize()
    H, persis_info, exit_code = my_APOSMM.export()