__init__.py
The package entry file declares the public module surface and the library version.
This file is small, but it is still useful because it shows what the project considers part of the top-level API. The __all__ list points to the modules the author expects users to import directly, and the version constant is the simplest place to confirm the packaged release number.
- Main contents: `__version__` and `__all__`.
- Role in the package: a lightweight export map, not an implementation module.
- Worth opening next: `optimizer.py` to see how those modules are composed.
The file also hints at the package history. Some names in __all__ refer to wrappers and older components, so it works as a rough inventory of supported optimization styles.
advretry.py
This module runs a smarter form of parallel restart optimization that reuses information from earlier runs.
Plain retry starts many independent optimization runs and keeps the best results. advretry.py goes further. Its shared Store keeps promising solutions, adapts evaluation budgets over time, and feeds crossover-style guesses plus step-size hints back into later runs. That makes it a coordination layer on top of single-objective optimizers rather than a standalone optimizer.
- Main concepts: `minimize`, `retry`, `Store`, adaptive evaluation limits, persisted retry state, crossover between stored solutions.
- Best fit: hard problems where naive random restarts waste too much budget.
- Relation to the rest of the repo: built on `retry.py` ideas and on optimizer objects from `optimizer.py`.
The default workflow combines differential evolution and CMA-ES, but the real point is the storage layer. If you want to understand how the library turns a sequence of local or semi-global runs into a coordinated search, this is one of the key modules.
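The coordination idea can be sketched in a few lines. This is a toy stand-in, not the fcmaes API: `ToyStore` and its methods are illustrative names for a capacity-bounded store that keeps the best results and blends two stored solutions into a starting guess for a later run.

```python
import random

class ToyStore:
    """Toy stand-in for a shared retry store: keeps the best
    solutions found so far and proposes crossover-style guesses."""
    def __init__(self, capacity=10):
        self.capacity = capacity
        self.entries = []  # (value, x) pairs, sorted by value

    def add(self, y, x):
        self.entries.append((y, x))
        self.entries.sort(key=lambda e: e[0])
        del self.entries[self.capacity:]  # keep only the best entries

    def crossover_guess(self, rng):
        """Blend two stored solutions into a start point for the next run."""
        a = rng.choice(self.entries)[1]
        b = rng.choice(self.entries)[1]
        w = rng.random()
        return [w * ai + (1 - w) * bi for ai, bi in zip(a, b)]

rng = random.Random(0)
store = ToyStore()
# Pretend three earlier runs finished with these results.
for y, x in [(3.0, [1.0, 1.0]), (1.5, [0.5, 0.2]), (9.0, [4.0, 4.0])]:
    store.add(y, x)
guess = store.crossover_guess(rng)  # seed for the next restart
```

The real store additionally adapts evaluation budgets and persists across runs; the sketch only shows why later restarts are no longer independent of earlier ones.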
astro.py
This file exposes ESA GTOP space-mission benchmark problems as Python-friendly objects.
The module is a bridge between the optimization library and a set of compiled benchmark functions such as Messenger, Cassini1, Cassini2, Rosetta, Sagas, and Tandem. Each problem class packages bounds, a callable objective, and in a few cases special handling for mixed-integer or multi-objective variants.
- Main contents: benchmark classes, `python_fun`, mixed-integer helpers like `Tandem_minlp`, and special wrappers such as `cassini1multi`.
- Role in the package: test and example support rather than optimizer infrastructure.
- Worth opening next: example scripts and tutorials that use GTOP problems to compare retry, MODE, and mixed-integer workflows.
This module matters because many design choices in fcmaes were shaped by trajectory optimization. It is one of the clearest links between the library and its original application domain.
bitecpp.py
This wrapper exposes the C++ BiteOpt solver through the same minimize-style interface used across the package.
The implementation is intentionally thin. It converts bounds and guesses into C types, wraps the Python objective as a callback, calls the compiled optimizer, and returns a scipy.optimize.OptimizeResult. That makes BiteOpt available anywhere an fcmaes optimizer can be slotted in.
- Main concepts: callback bridging, bound handling, `OptimizeResult` conversion.
- Role in the architecture: one of several compiled solver adapters alongside `cmaescpp.py`, `decpp.py`, `dacpp.py`, `crfmnescpp.py`, and `pgpecpp.py`.
- Worth opening next: `optimizer.py`, where BiteOpt can be composed into retry pipelines.
If you are browsing for algorithm details, this is not the place. If you are browsing for API integration patterns, it is a representative example.
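The callback-bridging pattern these wrappers rely on can be shown without any compiled solver. The sketch below (illustrative only; the signature is an assumption, not the actual BiteOpt interface) wraps a Python objective as a `ctypes` function pointer of the kind native code could invoke:

```python
import ctypes

# A signature a native optimizer might expect: double f(const double* x, int n).
OBJECTIVE_CB = ctypes.CFUNCTYPE(ctypes.c_double,
                                ctypes.POINTER(ctypes.c_double), ctypes.c_int)

def sphere(x):
    return sum(v * v for v in x)

def make_callback(fun):
    """Wrap a Python objective so native code can call it."""
    def bridge(x_ptr, n):
        x = [x_ptr[i] for i in range(n)]  # copy the C array into a Python list
        return fun(x)
    return OBJECTIVE_CB(bridge)

cb = make_callback(sphere)
# Simulate the native side invoking the callback with a C double array.
arr = (ctypes.c_double * 3)(1.0, 2.0, 3.0)
value = cb(arr, 3)  # 1 + 4 + 9
```

One practical detail the wrappers must handle: the callback object has to be kept alive for as long as the native solver may call it, or it will be garbage collected mid-run.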
cmaes.py
This is the library’s pure Python implementation of active CMA-ES, including an ask/tell interface.
The module provides two levels of access. minimize gives a direct one-shot optimizer call, while the Cmaes class exposes the internal ask/tell loop for code that wants to drive evaluation itself. It also supports delayed population updates for parallel objective evaluation, which is important for slow simulations.
- Main concepts: active covariance adaptation, ask/tell workflow, optional normalization, parallel population evaluation.
- Role in the package: a reference Python implementation and a fallback when compiled code is unavailable.
- Relation to other files: mirrored by `cmaescpp.py`; used directly in tests and through `optimizer.py` sequences.
This is one of the core files to read if you want to understand the package’s optimizer internals rather than just call wrappers.
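The ask/tell pattern itself is easy to demonstrate with a deliberately simple stand-in optimizer (Gaussian perturbations with a shrinking radius — not CMA-ES, and not the `Cmaes` class API). What matters is the control flow: the caller owns evaluation, so it can batch or parallelize it.

```python
import random

class ToyAskTell:
    """Minimal ask/tell optimizer: Gaussian samples around the incumbent."""
    def __init__(self, x0, sigma=0.5, popsize=16, seed=0):
        self.mean = list(x0)
        self.sigma = sigma
        self.popsize = popsize
        self.rng = random.Random(seed)
        self.best_y = float("inf")
        self.best_x = list(x0)

    def ask(self):
        """Propose a population of candidate points."""
        return [[m + self.rng.gauss(0, self.sigma) for m in self.mean]
                for _ in range(self.popsize)]

    def tell(self, xs, ys):
        """Receive evaluated candidates and update the internal state."""
        for x, y in zip(xs, ys):
            if y < self.best_y:
                self.best_y, self.best_x = y, list(x)
        self.mean = list(self.best_x)  # recenter on the incumbent
        self.sigma *= 0.95             # shrink the search radius

def sphere(x):
    return sum(v * v for v in x)

opt = ToyAskTell([2.0, -2.0])
for _ in range(100):
    xs = opt.ask()                 # the caller controls evaluation...
    ys = [sphere(x) for x in xs]   # ...so this loop could run in parallel
    opt.tell(xs, ys)
```

The delayed-update mode mentioned above generalizes this: candidates can be told back out of order, which keeps slow parallel evaluations from blocking the loop.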
cmaescpp.py
This module wraps the compiled CMA-ES implementation and matches the Python API closely.
The file is the fast path for users who want CMA-ES without reading or maintaining the Python implementation details. Besides the top-level minimize function, it exposes ACMA_C, a lower-level ask/tell style wrapper around the native stateful optimizer.
- Main concepts: `ctypes` callbacks, parallel objective evaluation, delayed update support, compiled ask/tell access.
- Role in the package: production-oriented CMA-ES backend.
- Relation to the rest of the repo: selected by many composite optimizers in `optimizer.py` and exercised in `test_cma.py`.
If the Python and C++ CMA-ES modules are read together, the architecture becomes clear: the library aims to keep the user-facing API steady while swapping execution backends underneath.
crfmnes.py
This file implements the CR-FM-NES optimizer in pure Python.
CR-FM-NES is another evolutionary strategy in the package, separate from CMA-ES and DE. The module contains both the simple minimize entry point and the fuller CRFMNES class, plus helper routines for matrix-related calculations used by the algorithm.
- Main concepts: Gaussian search distribution updates, ranking-based evolution strategy logic, optional parallel evaluation.
- Role in the package: an alternative optimizer for problems where CMA-ES or DE are not the preferred choice.
- Relation to other files: wrapped by `optimizer.py` and mirrored by `crfmnescpp.py`.
The file is narrower than cmaes.py or de.py, but it matters because it broadens the set of optimizers available to the retry and diversification layers.
crfmnescpp.py
This is the compiled CR-FM-NES binding, shaped to look like the Python version.
The module handles parameter marshalling, callback registration, and result unpacking for the native solver. It also exposes CRFMNES_C for lower-level interaction, similar to the compiled CMA-ES and DE wrappers.
- Main concepts: native callback bridge, optional parallel evaluation, compiled stateful wrapper.
- Role in the package: faster backend for CR-FM-NES.
- Worth opening next: `optimizer.py` to see where it is chosen over the Python implementation.
Like the other *cpp.py files, most of the engineering value here is in interface consistency, not algorithm exposition.
dacpp.py
This wrapper exposes a compiled dual annealing solver with an optional local search phase.
The implementation follows the same package pattern: convert inputs, call native code, return an OptimizeResult. Its algorithm niche is different from the evolutionary solvers around it, so it is mainly useful as one building block in composite workflows rather than as the package’s default recommendation.
- Main concepts: dual annealing, optional LBFGS-B local search, C++ binding.
- Role in the package: a supplementary global-search backend.
- Relation to other files: used through optimizer wrappers such as `da_cma` in `optimizer.py`.
If you are looking for the main ideas of the library, this is not central. If you are looking for breadth of available solvers, it is part of that picture.
de.py
This module contains the package’s pure Python differential evolution implementation and its ask/tell machinery.
The implementation is more opinionated than a textbook DE port. It uses DE/best/1, oscillating control parameters, temporal locality, age-based reinitialization, and custom handling for integer decision variables. It can also evaluate populations in parallel, which makes it one of the main choices for expensive objectives.
- Main concepts: `minimize`, `DE` ask/tell interface, integer-variable mutation, optional candidate filtering, delayed population update.
- Role in the package: one of the core single-objective optimizers.
- Relation to the rest of the repo: forms the basis for retry combinations and inspired the multi-objective MODE variants.
This is a key file if you want to understand how the project’s evolutionary code differs from stock library implementations.
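For orientation, here is one generation of textbook DE/best/1/bin — the baseline scheme the module starts from before adding its own twists (oscillating parameters, temporal locality, age-based reinitialization). This is a generic sketch, not the module's code:

```python
import random

def de_best_1_step(pop, fitness, f=0.8, cr=0.9, rng=None):
    """One generation of DE/best/1/bin on a list-of-lists population.

    mutant  v = best + F * (r1 - r2)
    trial   u = binomial crossover of v with the parent
    """
    rng = rng or random.Random(0)
    dim = len(pop[0])
    values = [fitness(x) for x in pop]
    best = pop[min(range(len(pop)), key=values.__getitem__)]
    new_pop = []
    for i, parent in enumerate(pop):
        r1, r2 = rng.sample([p for j, p in enumerate(pop) if j != i], 2)
        mutant = [best[d] + f * (r1[d] - r2[d]) for d in range(dim)]
        jrand = rng.randrange(dim)  # at least one mutant component survives
        trial = [mutant[d] if (rng.random() < cr or d == jrand) else parent[d]
                 for d in range(dim)]
        # greedy selection: keep the better of parent and trial
        new_pop.append(trial if fitness(trial) <= values[i] else parent)
    return new_pop

def sphere(x):
    return sum(v * v for v in x)

rng = random.Random(1)
pop = [[rng.uniform(-5, 5) for _ in range(3)] for _ in range(12)]
start_best = min(map(sphere, pop))
for _ in range(40):
    pop = de_best_1_step(pop, sphere, rng=rng)
best = min(pop, key=sphere)
```

The greedy per-slot selection makes the population's best value monotone non-increasing, which is what makes DE easy to combine with restart layers.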
decpp.py
This wrapper provides the compiled differential evolution backend and a native ask/tell object.
Functionally, it mirrors de.py but routes the actual search through the shared library. The file supports integer variables, optional initial guesses and sigmas, callback-based stopping, and a stateful DE_C class for direct ask/tell use.
- Main concepts: compiled DE, callback bridge, stateful `DE_C`, integer support.
- Role in the package: the main fast backend for DE-heavy workflows.
- Relation to other files: heavily reused by `optimizer.py`, `retry.py`, and many examples.
For most users, this is the DE implementation they will actually run, while de.py remains easier to inspect and modify.
diversifier.py
This module turns ordinary optimizers into archive-filling quality-diversity searches.
The central idea is to wrap a solver so that the score it receives is measured relative to the current elite in a behavior niche. That lets CMA-ES, DE, CR-FM-NES, PGPE, and even MAP-Elites-style variation act as emitters inside one shared archive. The module also includes a bridge back to single-objective optimization through apply_advretry.
- Main concepts: niche-relative fitness, shared `Archive`, solver sequencing, MAP-Elites emitters, `apply_advretry`.
- Role in the package: a high-level quality-diversity orchestration layer.
- Relation to other files: built directly on `mapelites.py`, `advretry.py`, and optimizer wrappers from `optimizer.py`.
This file is useful when the goal is not a single best point, but a diverse set of strong solutions spread over a descriptor space.
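The niche-relative scoring trick can be shown in miniature. In this toy sketch (a 1-D behavior descriptor and dictionary archive; all names are illustrative, not the module's API), a candidate is scored against the current elite of its niche, so a plain minimizer is pulled toward empty or weakly-filled niches:

```python
def niche_of(descriptor, centroids):
    """Index of the nearest niche centroid (1-D descriptor for simplicity)."""
    return min(range(len(centroids)), key=lambda i: abs(descriptor - centroids[i]))

def niche_relative_score(y, descriptor, archive_y, centroids):
    """Score a candidate against the current elite of its niche.
    A negative score means the candidate would become the new elite."""
    n = niche_of(descriptor, centroids)
    return y - archive_y.get(n, float("inf"))

def maybe_insert(y, x, descriptor, archive_y, archive_x, centroids):
    """Replace the niche elite only if the candidate is better."""
    n = niche_of(descriptor, centroids)
    if y < archive_y.get(n, float("inf")):
        archive_y[n], archive_x[n] = y, x

centroids = [0.0, 1.0, 2.0]
archive_y, archive_x = {}, {}
# First candidate fills an empty niche; the second must beat it to replace it.
maybe_insert(5.0, "a", 0.1, archive_y, archive_x, centroids)
maybe_insert(3.0, "b", 0.2, archive_y, archive_x, centroids)  # same niche, better
maybe_insert(7.0, "c", 1.9, archive_y, archive_x, centroids)  # different niche
score = niche_relative_score(4.0, 0.0, archive_y, centroids)  # 4.0 - 3.0
```

Because any solver only ever sees this relative score, CMA-ES, DE, or PGPE can serve as emitters without knowing an archive exists.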
evaluator.py
This is the package’s execution plumbing for serial calls, parallel workers, and native callbacks.
Many modules depend on this file without users ever importing it directly. It provides process-based evaluators, helpers for slicing batch evaluations across workers, bound handling utilities, and several callback wrappers that let compiled optimizers call back into Python objective functions safely enough for the library’s multiprocessing model.
- Main concepts: `Evaluator`, `parallel`, `parallel_mo`, callback adapters, bound checking, logging-level checks.
- Role in the package: shared infrastructure.
- Relation to the rest of the repo: used by nearly every optimizer, especially the compiled wrappers.
If you want to understand how objective evaluation is parallelized or how Python functions are passed into C++, this is the file to read.
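The batch-slicing idea is worth a small sketch. The real module distributes work across processes; the thread-based version below only illustrates the slicing arithmetic, and the helper names are invented for the example:

```python
from concurrent.futures import ThreadPoolExecutor

def chunks(n, workers):
    """Split indices 0..n-1 into one contiguous slice per worker."""
    size, rem = divmod(n, workers)
    bounds, start = [], 0
    for w in range(workers):
        end = start + size + (1 if w < rem else 0)  # spread the remainder
        bounds.append((start, end))
        start = end
    return bounds

def parallel_eval(fun, xs, workers=4):
    """Evaluate a batch of candidates by slicing it across workers."""
    results = [None] * len(xs)
    def run(slice_):
        lo, hi = slice_
        for i in range(lo, hi):
            results[i] = fun(xs[i])
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(run, chunks(len(xs), workers)))
    return results

xs = [[float(i), float(-i)] for i in range(10)]
ys = parallel_eval(lambda x: x[0] * x[0] + x[1] * x[1], xs, workers=3)
```

With processes instead of threads, results also have to travel back through shared memory or pipes, which is exactly the plumbing this module centralizes.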
journal.py
This module writes optimization progress into Optuna’s journal format so external dashboards can watch long runs.
The code is deliberately practical. It defines the JSON message types Optuna expects, writes study and trial records, and provides journal_wrapper, a fitness wrapper that batches trial metadata and marks invalid results as pruned. It is aimed at monitoring rather than optimization itself.
- Main concepts: Optuna journal messages, study and trial serialization, batched file writes, wrapper around arbitrary fitness functions.
- Role in the package: observability helper.
- Relation to the rest of the repo: optional add-on for slow runs, especially hyperparameter optimization examples.
The module is narrow, but useful when a long optimization needs a live dashboard without rewriting the search code around Optuna.
mapelites.py
This file implements CVT MAP-Elites with a shared-memory archive and extra emitters based on CMA-ES.
The code covers the full archive workflow: niche construction, KD-tree lookup, persistence, archive updates, variation operators, and emitter logic. It supports CVT-style behavior-space partitioning and keeps the archive in shared memory so multiple worker processes can update it with lower communication overhead.
- Main concepts: `Archive`, CVT niche generation, KD-tree indexing, SBX and Iso+LineDD variation, CMA-ES emitter, archive persistence.
- Role in the package: the main quality-diversity engine.
- Relation to other files: extended by `diversifier.py`; reuses pieces from `retry.py` and `cmaescpp.py`.
This is one of the larger architecture files. If your interest is in behavior-space search rather than pure objective minimization, start here.
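CVT niche generation is the least obvious piece, so here is the idea in miniature: run k-means (Lloyd's algorithm) over random descriptor samples, and the resulting centroids approximate a centroidal Voronoi tessellation of the behavior space. A 1-D toy, not the module's implementation:

```python
import random

def cvt_centroids(k, samples, iters=20):
    """Tiny Lloyd's algorithm: k-means on random descriptor samples
    approximates a centroidal Voronoi tessellation (1-D here)."""
    centroids = random.Random(0).sample(samples, k)
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for s in samples:
            # assign each sample to its nearest centroid
            i = min(range(k), key=lambda j: abs(s - centroids[j]))
            buckets[i].append(s)
        # move each centroid to the mean of its bucket
        centroids = [sum(b) / len(b) if b else centroids[i]
                     for i, b in enumerate(buckets)]
    return sorted(centroids)

rng = random.Random(42)
samples = [rng.random() for _ in range(500)]  # uniform descriptors in [0, 1)
centroids = cvt_centroids(4, samples)
```

In higher dimensions the nearest-centroid lookup becomes expensive, which is why the real archive pairs the centroids with a KD-tree.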
mode.py
This module implements the Python version of the library’s multi-objective differential evolution workflow.
mode.py adapts the project’s DE machinery to multi-objective problems and lets the user switch between two survival/update styles: a DE-oriented update and an NSGA-II-like update built around Pareto sorting and crowding. It also supports constrained problems and parallel objective evaluation for expensive functions.
- Main concepts: `MODE`, Pareto ranking, crowding distance, constraint ranking, integer handling, multi-objective result `store`.
- Role in the package: the inspectable Python implementation of MODE.
- Relation to the rest of the repo: paired with `modecpp.py`; compared against weighted-sum retry in `moretry.py`.
The file is worth reading when you want to understand the package’s multi-objective design choices, especially how it blends DE mechanics with NSGA-style ranking.
The tutorial explains when MODE is a good fit, when weighted-sum retry is a better tradeoff, and why the package offers both DE-style and NSGA-II-style population updates.
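Of the NSGA-II machinery mentioned above, crowding distance is the easiest to show concretely. This is the standard textbook computation (a generic sketch, not the module's code): boundary points of a front are kept unconditionally, and interior points are preferred when their neighbors are far apart.

```python
def crowding_distances(front):
    """NSGA-II crowding distance for a list of objective vectors."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")  # keep boundary points
        span = hi - lo
        if span == 0:
            continue
        for pos in range(1, n - 1):
            i = order[pos]
            # gap between the two neighbors along objective k, normalized
            dist[i] += (front[order[pos + 1]][k] - front[order[pos - 1]][k]) / span
    return dist

# Four points on a simple trade-off front: extremes get infinite distance,
# interior points are ranked by how isolated they are.
front = [(0.0, 4.0), (1.0, 2.5), (2.0, 2.0), (4.0, 0.0)]
d = crowding_distances(front)
```

In survival selection, ties in Pareto rank are broken by preferring larger crowding distance, which spreads the surviving population along the front.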
modecpp.py
This is the compiled MODE backend, plus a parallel retry mode for cheap multi-objective functions.
The top-level minimize call matches the Python MODE interface, but the module adds another important path: retry. Instead of one big run with parallel fitness evaluation, that path launches many independent MODE runs and merges their non-dominated results into one shared front. The tutorial makes a point of when that tradeoff is better.
- Main concepts: compiled MODE, DE versus NSGA-II update toggle, multi-objective retry, shared Pareto store.
- Role in the package: primary fast backend for MODE and the only retry-style MODE implementation.
- Relation to other files: built on ranking helpers from `mode.py` and compared directly to `moretry.py`.
This file is central if you care about scalable multi-objective optimization in the package. The retry path is especially important for fast objectives.
The tutorial gives the clearest rationale for choosing modecpp.minimize versus modecpp.retry, and for deciding between the DE and NSGA-II update variants.
moretry.py
This module handles multi-objective optimization by running many single-objective retries with random weights.
Instead of evolving a Pareto front directly, moretry.py samples weight vectors, turns a multi-objective problem into repeated weighted-sum scalar problems, and solves those with the ordinary retry stack. It also includes Pareto filtering and plotting helpers. In this codebase, that approach is not a side note. It is presented as a serious alternative to MODE, especially when front coverage matters.
- Main concepts: weighted-sum wrapper, retry over random weights, Pareto extraction, front plotting, compatibility with constrained problems.
- Role in the package: the main multi-objective meta-algorithm outside MODE.
- Relation to other files: depends on `retry.py` or `advretry.py` and is discussed throughout the MODE tutorial.
If a multi-objective problem has only a few objectives and very strong single-objective solvers matter more than one-shot Pareto evolution, this is the file to read first.
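The whole meta-algorithm fits in a short sketch. Everything here is illustrative — `grid_solver` is a crude stand-in for the retry stack, and the helper names are invented — but the structure matches the description above: sample weights, scalarize, solve, Pareto-filter.

```python
import random

def pareto_front(points):
    """Keep only non-dominated points (minimization in every objective)."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

def weighted_sum_retry(objectives, solve_scalar, n_retries=30, seed=0):
    """Sample random weights, solve each weighted-sum problem with a
    single-objective solver, and Pareto-filter the collected results."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_retries):
        w = [rng.random() for _ in objectives]
        s = sum(w)
        w = [wi / s for wi in w]  # normalize the weight vector
        x = solve_scalar(lambda x: sum(wi * f(x) for wi, f in zip(w, objectives)))
        results.append(tuple(f(x) for f in objectives))
    return pareto_front(results)

# Toy bi-objective problem on a scalar decision variable x in [0, 1]:
# f1 pulls toward 0, f2 pulls toward 1, so the whole interval is a trade-off.
f1 = lambda x: x * x
f2 = lambda x: (x - 1) ** 2

def grid_solver(scalar_fun):
    """Stand-in for a real single-objective solver: brute force on a grid."""
    return min((i / 100 for i in range(101)), key=scalar_fun)

front = weighted_sum_retry([f1, f2], grid_solver)
```

The known limitation of the approach also falls out of the sketch: weighted sums can only reach points on the convex hull of the front, which is one reason the package keeps MODE alongside it.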
multiretry.py
This file manages a tournament over many related problem variants by spending more retry budget on the promising ones.
The intended use case is a family of optimization problems that differ in a discrete structural choice, such as enumerated mixed-integer variants. The module gives each problem a shared advretry.Store, advances them in rounds, removes weak variants, and keeps increasing budget on the survivors.
- Main concepts: `problem_stats`, staged elimination, persisted retry state, coordinated comparison of problem instances.
- Role in the package: a meta-meta-algorithm for model or structure selection.
- Relation to the rest of the repo: layered directly on `advretry.py` and optimizer objects from `optimizer.py`.
This is a specialized file, but a useful one when discrete choices can be enumerated instead of encoded inside one mixed-variable optimizer.
optimizer.py
This module is the package’s optimizer catalog and composition layer.
Most higher-level code does not call solver modules directly. It works through the Optimizer abstraction defined here. The file wraps Python implementations, C++ implementations, several SciPy solvers, and optional third-party backends into a shared interface. It also defines sequences and random choices of optimizers, such as the common DE-then-CMA pipeline.
- Main concepts: `Optimizer`, `Sequence`, `Choice`, utility functions like `de_cma`, and concrete wrappers such as `Cma_cpp`, `De_cpp`, `Bite_cpp`, `Dual_annealing`, `NLopt`.
- Role in the package: the policy layer that lets retry code stay agnostic about the underlying solver.
- Worth opening next: `retry.py` and `advretry.py`, which consume these wrappers.
If you want one file that explains how the package is assembled out of interchangeable solver parts, this is the one.
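The sequencing idea behind pipelines like DE-then-CMA can be sketched with toy stages. This is not the module's `Sequence` class — the stand-in "optimizers" are simple local random searches at different scales — but the hand-off pattern is the point: each stage receives the previous stage's best point as its initial guess.

```python
import random

class Sequence:
    """Run stages one after another, feeding each run's best point
    to the next stage as its initial guess."""
    def __init__(self, stages):
        self.stages = stages

    def minimize(self, fun, x0):
        best_x, best_y = list(x0), fun(x0)
        for stage in self.stages:
            x, y = stage(fun, best_x)
            if y < best_y:
                best_x, best_y = x, y
        return best_x, best_y

def random_step_stage(sigma, iters=200, seed=0):
    """Factory for a toy 'optimizer': local random search at one scale."""
    def run(fun, x0):
        rng = random.Random(seed)
        x, y = list(x0), fun(x0)
        for _ in range(iters):
            cand = [v + rng.gauss(0, sigma) for v in x]
            cy = fun(cand)
            if cy < y:
                x, y = cand, cy
        return x, y
    return run

sphere = lambda x: sum(v * v for v in x)
# a coarse, global-ish stage first, then a fine local stage
pipeline = Sequence([random_step_stage(1.0), random_step_stage(0.05, seed=1)])
x, y = pipeline.minimize(sphere, [3.0, -3.0])
```

Because every stage shares one calling convention, retry code can treat a whole pipeline as a single opaque optimizer — which is the design the file is built around.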
pgpecpp.py
This wrapper exposes a compiled PGPE optimizer with optional parallel population evaluation.
The module packages a policy-gradient style evolutionary method behind the same API used by the rest of the library. It includes the native binding plus PGPE_C for lower-level access, and exposes many algorithm knobs such as ranking, learning-rate decay, and ADAM-style parameters.
- Main concepts: PGPE, native callbacks, parallel evaluation, stateful compiled wrapper.
- Role in the package: an additional optimizer choice for retry and diversification workflows.
- Relation to other files: selected through `optimizer.py` and used by `diversifier.py` as one possible emitter.
This is another case where the interface consistency matters more than the wrapper code itself.
pygmoretry.py
This module applies the package’s parallel retry idea to Pygmo or Pagmo problems and algorithms.
The ordinary retry.py path assumes a scalar objective. pygmoretry.py exists because Pygmo problems can carry constraints and multiple objectives natively. The module launches many population-based runs in parallel processes, collects feasible champions, and stores them through the same basic shared-store pattern used elsewhere in the package.
- Main concepts: Pygmo population retries, feasible-solution filtering, process-based scaling.
- Role in the package: integration layer for users who want Pagmo algorithms but prefer the `fcmaes` retry model.
- Relation to other files: reuses `retry.Store` rather than reimplementing storage and sorting logic.
This file is specialized, but it is one of the clearer examples of the project adapting its meta-algorithms to an external optimization ecosystem.
retry.py
This is the basic parallel restart engine for scalar optimization in the package.
The module launches many optimizer runs across processes, stores good results in shared memory, and returns the best one found. Its Store tracks incumbent solutions, evaluation counts, and optional progress statistics. Compared with advretry.py, this version is simpler and more neutral: it coordinates restarts, but does not try to shape later runs as aggressively.
- Main concepts: `minimize`, shared `Store`, multiprocessing retry loop, progress plotting, `Shared2d`.
- Role in the package: core meta-algorithm for repeated single-objective runs.
- Relation to other files: foundation for `advretry.py`, `moretry.py`, and several examples.
If you want the shortest path to understanding how the repository thinks about “retry,” start here before reading the advanced version.
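The core loop is simple enough to sketch. Note the deliberate simplifications: the real module uses processes and a shared-memory store, while this toy uses threads and an in-process list, and the hill-climbing "optimizer" is a stand-in for the real solver wrappers.

```python
import math
import random
from concurrent.futures import ThreadPoolExecutor

def one_run(fun, bounds, seed, iters=300):
    """One independent optimizer run: hill climbing from a random start."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for lo, hi in bounds]
    y = fun(x)
    for _ in range(iters):
        cand = [min(max(v + rng.gauss(0, 0.3), lo), hi)
                for v, (lo, hi) in zip(x, bounds)]
        cy = fun(cand)
        if cy < y:
            x, y = cand, cy
    return y, x

def retry_minimize(fun, bounds, n_retries=16, workers=4):
    """Launch independent runs (threads here; the real store uses
    processes and shared memory) and keep the best result."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(
            lambda seed: one_run(fun, bounds, seed), range(n_retries)))
    return min(results)  # (best value, best point)

# multimodal test function: many local minima, global minimum at the origin
def rastrigin_like(x):
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

y, x = retry_minimize(rastrigin_like, [(-5.0, 5.0)] * 2)
```

Each run alone usually lands in some local minimum; keeping the best of many independent runs is the whole value proposition of the basic retry layer.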
test_cma.py
This file is the regression test suite for the main optimization APIs, not part of the runtime library.
The tests cover Python and C++ CMA-ES, Python and C++ DE, ask/tell loops, parallel execution, and the retry layers. The benchmark functions are simple enough to assert convergence and evaluation counts, which makes the file a good reference for expected API behavior as well as correctness checks.
- Main concepts: stochastic test retries, convergence assertions, ask/tell usage examples.
- Role in the package: automated verification.
- Worth opening next: `testfun.py` for the benchmark definitions used here.
Even if you are not changing tests, this file is useful because it shows the library’s preferred calling patterns in compact form.
testfun.py
This module defines small benchmark problems and a wrapper that counts evaluations and tracks the best result.
The classes here package standard functions such as Rosenbrock, Rastrigin, Sphere, Cigar, Ellipsoid, and Eggholder with bounds and a monitoring wrapper. They are lightweight, but they support the tests and simple experiments throughout the repository.
- Main concepts: named benchmark classes, shared `Wrapper` for count and incumbent tracking, noisy Rastrigin mean variant.
- Role in the package: testing and quick experimentation support.
- Relation to other files: imported by `test_cma.py` and handy when validating optimizer changes.
This is a utility module, not a production component, but it is a good place to look for tiny objective functions with bounds already attached.
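The monitoring-wrapper pattern is worth a minimal sketch (illustrative class name, not the module's `Wrapper`): intercept every objective call to count evaluations and remember the best point seen, without the optimizer knowing.

```python
class CountingWrapper:
    """Wrap an objective to count evaluations and track the incumbent."""
    def __init__(self, fun):
        self.fun = fun
        self.evals = 0
        self.best_y = float("inf")
        self.best_x = None

    def __call__(self, x):
        self.evals += 1
        y = self.fun(x)
        if y < self.best_y:
            self.best_y, self.best_x = y, list(x)
        return y

wrapped = CountingWrapper(lambda x: sum(v * v for v in x))
for x in ([3.0, 4.0], [1.0, 1.0], [2.0, 2.0]):
    wrapped(x)
```

Because the wrapper is itself a plain callable, it can be passed to any minimize-style function in the package, which is how the tests assert on evaluation counts.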