C++ Sources In _fcmaescpp/
The files in this group are the native optimizer implementations and their exported entry points. Most of them define an internal optimizer class plus a small set of extern "C" functions that make the code callable from Python.
acmaesoptimizer.cpp
This file is the native active CMA-ES engine, with both one-shot execution and a full ask/tell interface.
The heart of the file is AcmaesOptimizer. It owns the population state, covariance adaptation, evolution paths, stopping checks, and the logic for sampling new candidates in Eigen arrays. The code also leans on Fitness from evaluator.h to encode and decode bounded variables, so the optimizer can work in normalized coordinates when the wrapper asks for it.
One detail that matters in this repository is the split between direct runs and stateful runs. optimizeACMA_C handles the simple "run until done" path. The initACMA_C, askACMA_C, tellACMA_C, tellXACMA_C, populationACMA_C, and resultACMA_C exports keep an optimizer object alive across calls. That is the surface used by fcmaes/cmaescpp.py for Python-side control loops.
- Main pieces: AcmaesOptimizer, population sampling, covariance updates, stop tests, and delayed-update evaluation.
- How it fits: one of the main single-objective native solvers in the project.
- Python side: wrapped by fcmaes/cmaescpp.py.
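The init/ask/tell/result lifecycle those exports implement can be illustrated with a toy stand-in. Everything below is invented for the sketch; the real state lives in a native AcmaesOptimizer object behind the ctypes boundary, and the real sampler is CMA-ES, not the greedy shrinking search shown here.

```python
import random

# Toy stand-in for the init/ask/tell/result lifecycle of the ACMA exports.
# All names are illustrative; this is NOT the real CMA-ES update.
class ToyAskTellOptimizer:
    def __init__(self, dim, popsize, seed=0):
        self.rng = random.Random(seed)
        self.dim, self.popsize = dim, popsize
        self.center = [0.0] * dim          # search distribution mean
        self.sigma = 1.0                   # global step size
        self.best_x, self.best_y = None, float("inf")

    def ask(self):
        # Sample a population around the current center.
        return [[c + self.sigma * self.rng.gauss(0, 1) for c in self.center]
                for _ in range(self.popsize)]

    def tell(self, xs, ys):
        # Move the center toward the best candidate and shrink the step.
        i = min(range(len(ys)), key=ys.__getitem__)
        if ys[i] < self.best_y:
            self.best_x, self.best_y = xs[i], ys[i]
        self.center = self.best_x
        self.sigma *= 0.95

    def result(self):
        return self.best_x, self.best_y

def shifted_sphere(x):
    return sum((v - 1.0) ** 2 for v in x)

opt = ToyAskTellOptimizer(dim=3, popsize=16)
for _ in range(200):                                # Python-side control loop
    xs = opt.ask()
    opt.tell(xs, [shifted_sphere(x) for x in xs])   # evaluation stays in Python
x, y = opt.result()
```

The point of the pattern is that evaluation happens on the caller's side of the boundary, which is exactly what lets fcmaes/cmaescpp.py plug in arbitrary Python objectives and parallel evaluators.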
biteoptimizer.cpp
This is a thin native bridge from the repository callback API to the bundled BiteOpt deep optimizer.
The file does not reimplement the BiteOpt method from first principles. Instead, it defines BiteOptimizer as a small subclass of CBiteOptDeep, wires its virtual methods to the local Fitness wrapper, and adds the stop logic that the Python layer expects. That keeps the C++ surface compact while still exposing the deeper BiteOpt stack.
Compared with the other optimizer sources, the C interface here is simple. The file exports only optimizeBite_C. There is no separate ask/tell lifecycle. That matches how fcmaes/bitecpp.py uses it today: as a direct single-call optimizer rather than a stateful native object.
- Main pieces: BiteOptimizer, CBiteOptDeep inheritance, bound-aware objective callbacks, and stall-based stopping.
- How it fits: a wrapper around bundled BiteOpt code, not a new optimizer family unique to this repository.
- Python side: called by fcmaes/bitecpp.py.
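The bridge pattern described above can be sketched in a few lines: a small subclass overrides the objective hook of an existing optimizer base class and adds stall-based stopping on top. The class names and the trivial random-search base here are invented for the illustration; the real pair is CBiteOptDeep and BiteOptimizer.

```python
import random

# Illustrative analogue of the bridge in biteoptimizer.cpp. All names
# are hypothetical; the base class stands in for CBiteOptDeep.
class BaseOptimizer:
    def __init__(self, lo, hi, seed=0):
        self.rng = random.Random(seed)
        self.lo, self.hi = lo, hi
        self.best_x, self.best_y = None, float("inf")

    def cost(self, x):                 # the virtual hook a bridge overrides
        raise NotImplementedError

    def step(self):
        # Trivial random-search step; returns whether the best improved.
        x = [self.rng.uniform(a, b) for a, b in zip(self.lo, self.hi)]
        improved = self.cost(x) < self.best_y
        if improved:
            self.best_x, self.best_y = x, self.cost(x)
        return improved

class Bridge(BaseOptimizer):
    """Plays the role of BiteOptimizer: wires the callback, adds stall stop."""
    def __init__(self, fun, lo, hi, stall_limit=200):
        super().__init__(lo, hi)
        self.fun, self.stall_limit = fun, stall_limit

    def cost(self, x):
        return self.fun(x)             # forward to the user callback

    def optimize(self, max_evals=5000):
        stall = 0
        for _ in range(max_evals):
            stall = 0 if self.step() else stall + 1
            if stall >= self.stall_limit:   # stop after a long stall
                break
        return self.best_x, self.best_y

x, y = Bridge(lambda v: (v[0] - 2) ** 2, [-5.0], [5.0]).optimize()
```

This keeps the repository's contribution small, as the section says: the search logic stays upstream, and only the callback wiring and stop condition are local.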
crfmnes.cpp
This file implements the native CR-FM-NES solver and exposes it in both one-shot and ask/tell form.
CrfmnesOptimizer holds the mean vector, rank weights, evolution state, and the moving coordinate system used by the algorithm. The implementation works in encoded coordinates and can combine bound handling with either a penalty term or explicit constraint-violation treatment. That makes the file more than a direct paper transcription. It is adapted to the repository's bounded-objective setup.
The exported API mirrors the pattern used in other stateful solvers: optimizeCRFMNES_C for direct runs, then init, ask, tell, population, and result functions for repeated control from Python. The matching wrapper is fcmaes/crfmnescpp.py, which uses the parallel callback path because this optimizer naturally evaluates whole populations.
- Main pieces: CrfmnesOptimizer, encoded-space sampling, feasibility handling, and ask/tell exports.
- How it fits: one of the newer native solvers, aimed at high-dimensional search.
- Python side: wrapped by fcmaes/crfmnescpp.py.
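The "encoded coordinates" idea is simple to show: the optimizer searches a normalized space and decodes to the problem's box before every evaluation. The exact transform used by the repository's Fitness wrapper may differ; this only illustrates the round trip.

```python
# Minimal sketch of bounded-variable encoding. The real transform in
# evaluator.h may differ; this shows the encoded-coordinate round trip.
def encode(x, lo, hi):
    """Map a point from [lo, hi] per coordinate into [0, 1]."""
    return [(v - a) / (b - a) for v, a, b in zip(x, lo, hi)]

def decode(z, lo, hi):
    """Map a normalized point in [0, 1] back to the original box."""
    return [a + v * (b - a) for v, a, b in zip(z, lo, hi)]

lo, hi = [-2.0, 0.0], [2.0, 10.0]
z = encode([1.0, 5.0], lo, hi)      # → [0.75, 0.5]
x = decode(z, lo, hi)               # round trip recovers the point
```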
daoptimizer.cpp
This file is a self-contained dual annealing runner with optional LBFGS-B local search.
Unlike most of the other optimizer sources, this one does not lean on evaluator.h. It defines its own bounded Fitness wrapper, the local-search adapter LBFGSFunc, the visiting distribution, the energy state, the annealing strategy chain, and the top-level DARunner. The file amounts to a compact, self-contained subsystem.
The implementation is adapted to normalized bounded coordinates and always uses the bundled LBFGSBSolver when local search is enabled. The C surface is deliberately small. optimizeDA_C is the only exported entry point, which is enough for fcmaes/dacpp.py because the Python layer treats this method as a one-shot optimizer.
- Main pieces: local Fitness, LBFGSFunc, EnergyState, StrategyChain, DARunner, and the top-level minimize helper.
- How it fits: the native annealing method, separate from the population-based solvers.
- Python side: wrapped by fcmaes/dacpp.py.
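The overall shape of dual annealing, an annealed global phase followed by a local polish, can be sketched with heavy simplification. The visiting step below is plain Gaussian rather than the generalized visiting law DARunner uses, and the polish is a crude coordinate search standing in for the bundled LBFGSBSolver.

```python
import math, random

# Toy sketch of the dual-annealing shape: annealed global search plus a
# local polish. Parameters and distributions are simplified inventions.
def anneal(fun, lo, hi, iters=3000, seed=1):
    rng = random.Random(seed)
    x = [rng.uniform(a, b) for a, b in zip(lo, hi)]
    y = fun(x)
    best_x, best_y = x[:], y
    for k in range(iters):
        t = 1.0 / (1.0 + k)                      # cooling schedule
        cand = [min(max(v + rng.gauss(0, math.sqrt(t)), a), b)
                for v, a, b in zip(x, lo, hi)]   # clipped visiting step
        cy = fun(cand)
        if cy < y or rng.random() < math.exp(-(cy - y) / max(t, 1e-12)):
            x, y = cand, cy                      # Metropolis acceptance
            if y < best_y:
                best_x, best_y = x[:], y
    # crude coordinate-descent polish standing in for L-BFGS-B
    step = 1e-2
    while step > 1e-6:
        moved = False
        for i in range(len(best_x)):
            for d in (step, -step):
                trial = best_x[:]
                trial[i] = min(max(trial[i] + d, lo[i]), hi[i])
                if fun(trial) < best_y:
                    best_x, best_y, moved = trial, fun(trial), True
        if not moved:
            step /= 2
    return best_x, best_y

x, y = anneal(lambda v: (v[0] + 1) ** 2 + (v[1] - 3) ** 2, [-5, -5], [5, 5])
```

The division of labor is the point: the annealing phase only needs to land in the right basin, and the bounded local solver does the precise work, which is why daoptimizer.cpp depends directly on LBFGSB.h.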
deoptimizer.cpp
This file is the native differential evolution engine, with delayed-update parallel evaluation and explicit support for discrete variables.
DeOptimizer owns the population, trial-vector generation, aging logic, and replacement rules. The implementation is not a plain textbook DE loop. It folds in temporal locality, age-based reinitialization, oscillating F and CR behavior, and extra mutation handling for integer-marked coordinates. That combination is what the rest of the repository means when it talks about the C++ DE backend.
The file also shows the usual split between direct use and stateful use. optimizeDE_C handles the simple path. The initDE_C, askDE_C, tellDE_C, populationDE_C, and resultDE_C functions keep the optimizer alive for Python-controlled loops. The delayed-update path uses the worker machinery from evaluator.h to keep evaluations moving while the population update waits for results.
- Main pieces: DeOptimizer, integer-aware mutation, delayed-update worker path, and ask/tell exports.
- How it fits: one of the central single-objective native solvers and a baseline for several higher-level workflows.
- Python side: wrapped by fcmaes/decpp.py; the pure Python fcmaes/de.py points back to this implementation as the native reference.
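For orientation, one textbook DE generation looks like the sketch below: rand/1 mutation, binomial crossover, greedy replacement, and a rounding step for integer-marked coordinates. This is deliberately the plain loop the section says DeOptimizer is not; the native engine layers aging, oscillating F/CR, and delayed updates on top of this skeleton.

```python
import random

# One plain DE generation with a hint of the discrete-variable handling.
# This is the textbook skeleton, not the enhanced DeOptimizer loop.
def de_step(pop, ys, fun, lo, hi, ints, f=0.7, cr=0.9, rng=random.Random(2)):
    n, dim = len(pop), len(pop[0])
    for i in range(n):
        a, b, c = rng.sample([j for j in range(n) if j != i], 3)
        jrand = rng.randrange(dim)               # force one mutated coordinate
        trial = []
        for j in range(dim):
            if j == jrand or rng.random() < cr:
                v = pop[a][j] + f * (pop[b][j] - pop[c][j])  # rand/1 mutation
            else:
                v = pop[i][j]                    # binomial crossover keeps parent
            v = min(max(v, lo[j]), hi[j])        # clip to bounds
            if ints[j]:
                v = round(v)                     # integer-marked coordinate
            trial.append(v)
        ty = fun(trial)
        if ty <= ys[i]:                          # greedy survivor selection
            pop[i], ys[i] = trial, ty
    return pop, ys

fun = lambda x: (x[0] - 0.5) ** 2 + (x[1] - 3) ** 2   # x[1] is integer-marked
rng = random.Random(2)
pop = [[rng.uniform(-5, 5), float(rng.randrange(-5, 6))] for _ in range(20)]
ys = [fun(x) for x in pop]
for _ in range(60):
    pop, ys = de_step(pop, ys, fun, [-5, -5], [5, 5], [False, True])
best = min(ys)
```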
modeoptimizer.cpp
This file is the native multi-objective DE core, with a switch that can swap between DE-style and NSGA-like survivor updates.
MoDeOptimizer extends the DE-style population machinery into the multi-objective case. It stores objective and constraint values together, computes Pareto levels, ranks feasible and infeasible solutions, and uses crowding-style information when it needs a total order. The file also keeps the discrete-variable treatment from the single-objective DE backend, which matters for mixed or combinatorial design problems.
The C interface is stateful by design. There is no one-call optimizeMODE_C. Instead the wrapper creates an optimizer with initMODE_C, asks for populations, feeds back evaluated objective vectors, and can even switch the update rule with tellMODE_switchC or replace the live population with setPopulationMODE_C. That matches how fcmaes/modecpp.py drives the algorithm from Python.
- Main pieces: MoDeOptimizer, Pareto ranking, constraint ranking, configurable DE or NSGA-style survival, and a population replacement hook.
- How it fits: the main native multi-objective engine in the repository.
- Python side: wrapped by fcmaes/modecpp.py.
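The Pareto-level computation at the heart of that ranking is small enough to sketch directly. Constraint handling and crowding-distance tie-breaks, which MoDeOptimizer also uses, are omitted here.

```python
# Minimal Pareto-level assignment (level 0 = the non-dominated front).
# Constraint ranking and crowding tie-breaks are left out of this sketch.
def dominates(a, b):
    """True if vector a is no worse everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_levels(objs):
    remaining = list(range(len(objs)))
    levels = [0] * len(objs)
    level = 0
    while remaining:
        # points not dominated by anything still unranked form the next front
        front = [i for i in remaining
                 if not any(dominates(objs[j], objs[i]) for j in remaining)]
        for i in front:
            levels[i] = level
        remaining = [i for i in remaining if i not in front]
        level += 1
    return levels

# Two points on the front; the third is dominated by (1, 2).
levels = pareto_levels([[1.0, 2.0], [2.0, 1.0], [2.0, 3.0]])   # → [0, 0, 1]
```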
pgpe.cpp
This file implements the native PGPE optimizer, including mirrored sampling and ADAM-based center updates.
The file is split between a small optimizer helper, ADAM, and the main PGPEOptimizer. The optimizer samples perturbation pairs around a center, turns the paired scores into center and standard-deviation gradients, clips how fast the standard deviations can move, and then updates the center with ADAM. That keeps the code close to the policy-search formulation rather than making it look like another generic ES loop.
Like the native CMA and CR-FM-NES paths, this file offers both direct execution and a stateful ask/tell API. The exported functions are what fcmaes/pgpecpp.py loads. In practice the wrapper uses the direct path for standard runs and the stateful path when it wants tighter control from Python.
- Main pieces: ADAM, PGPEOptimizer, mirrored perturbation sampling, ranking-aware score processing, and ask/tell exports.
- How it fits: the repository's native policy-gradient style optimizer.
- Python side: wrapped by fcmaes/pgpecpp.py.
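The update pattern, mirrored perturbation pairs, a score-difference gradient for the center, and an ADAM step applied to it, can be sketched as below. Sigma adaptation, ranking-based score utilities, and the clipping done by the real PGPEOptimizer are left out, and the step-size decay is an invention of the sketch.

```python
import random

# Minimal PGPE-shaped loop: mirrored pairs around a center, an antithetic
# gradient estimate, and an ADAM center update. Heavily simplified.
class Adam:
    def __init__(self, dim, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
        self.m, self.v, self.t = [0.0] * dim, [0.0] * dim, 0

    def step(self, x, g):
        self.t += 1
        out = []
        for i in range(len(x)):
            self.m[i] = self.b1 * self.m[i] + (1 - self.b1) * g[i]
            self.v[i] = self.b2 * self.v[i] + (1 - self.b2) * g[i] ** 2
            mhat = self.m[i] / (1 - self.b1 ** self.t)
            vhat = self.v[i] / (1 - self.b2 ** self.t)
            out.append(x[i] - self.lr * mhat / (vhat ** 0.5 + self.eps))
        return out

def pgpe(fun, center, sigma=0.5, pairs=8, iters=300, seed=3):
    rng = random.Random(seed)
    adam = Adam(len(center))
    for _ in range(iters):
        grad = [0.0] * len(center)
        for _ in range(pairs):
            eps = [sigma * rng.gauss(0, 1) for _ in center]
            plus = [c + e for c, e in zip(center, eps)]
            minus = [c - e for c, e in zip(center, eps)]   # mirrored pair
            d = fun(plus) - fun(minus)                     # paired score difference
            for i, e in enumerate(eps):                    # antithetic gradient
                grad[i] += d * e / (2 * pairs * sigma ** 2)
        center = adam.step(center, grad)
        adam.lr *= 0.995            # simple decay so the sketch settles
    return center

c = pgpe(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2, [0.0, 0.0])
```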
Headers In _fcmaescpp/include/
The headers fall into three camps. evaluator.h is repository-specific glue. LBFGS.h and LBFGSB.h are bundled numerical solver headers. The rest are the BiteOpt family and its supporting classes.
evaluator.h
This header is the shared callback, bounds, and worker-thread layer for most of the native backend.
If there is one header worth reading first, it is this one. The file defines the vector aliases used across the C++ sources, the callback signatures for single-objective and parallel evaluation, the Fitness wrapper that handles encoding and decoding for bounded domains, evaluation counting, feasibility helpers, and batch population evaluation. It is the piece that makes the optimizers look like they share one common problem interface.
The second half of the file adds the concurrency support. blocking_queue provides a small producer-consumer queue, vec_id packages a vector with an identifier, and evaluator spins up worker threads that pull evaluation jobs and push completed results back. The delayed-update CMA and DE code paths rely on that machinery directly.
- Main pieces: callback typedefs, Fitness, bound normalization, evaluation accounting, blocking_queue, and evaluator.
- How it fits: project-specific glue shared by most native optimizers.
- Python side: it matches the callback contracts used by the ctypes wrappers.
LBFGS.h
This is the bundled upstream L-BFGS solver header for unconstrained local optimization.
The file is a header-only template implementation from LBFGSpp. Its main class, LBFGSSolver, keeps the quasi-Newton history, drives the line search, and checks gradient-based convergence. Nothing in the header is tailored to fast-cma-es specifically. It is present so the native code can ship with a known local-search implementation instead of adding another external dependency step.
Within this repository the bound-constrained variant matters more, because dual annealing uses LBFGSB.h. Still, this header belongs in the documentation because it shows where the local-search capability comes from and makes the upstream boundary clear.
- Main pieces: LBFGSSolver, BFGS memory, line-search adapters, and gradient stopping checks.
- How it fits: bundled numerical support, not repo-specific optimizer architecture.
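The quasi-Newton history the header maintains feeds the classic two-loop recursion, which turns the current gradient into an approximate Newton direction without ever forming a matrix. A compact sketch of that recursion, with line search and convergence checks omitted:

```python
# L-BFGS two-loop recursion: from recent (s, y) curvature pairs and the
# current gradient, compute an approximate Newton descent direction.
def lbfgs_direction(g, s_hist, y_hist):
    q = list(g)
    alphas = []
    for s, y in reversed(list(zip(s_hist, y_hist))):   # newest to oldest
        rho = 1.0 / sum(si * yi for si, yi in zip(s, y))
        a = rho * sum(si * qi for si, qi in zip(s, q))
        q = [qi - a * yi for qi, yi in zip(q, y)]
        alphas.append((rho, a, s, y))
    if s_hist:                                         # initial Hessian scaling
        s, y = s_hist[-1], y_hist[-1]
        gamma = sum(si * yi for si, yi in zip(s, y)) / sum(yi * yi for yi in y)
        q = [gamma * qi for qi in q]
    for rho, a, s, y in reversed(alphas):              # oldest to newest
        b = rho * sum(yi * qi for yi, qi in zip(y, q))
        q = [qi + (a - b) * si for si, qi in zip(s, q)]
    return [-qi for qi in q]                           # descent direction

# On the quadratic f(x) = 0.5 * (x0^2 + 4 * x1^2), exact curvature pairs
# along each axis make the recursion reproduce the Newton step.
d = lbfgs_direction([2.0, 4.0], [[1.0, 0.0], [0.0, 1.0]],
                    [[1.0, 0.0], [0.0, 4.0]])          # → [-2.0, -1.0]
```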
LBFGSB.h
This is the bundled upstream L-BFGS-B solver used for box-constrained local search.
The header extends the same LBFGSpp family with projection to bounds, projected-gradient checks, Cauchy point logic, and subspace minimization. In this codebase its practical role is very clear: daoptimizer.cpp includes it and uses LBFGSBSolver as the fixed local-search phase inside dual annealing.
That makes the file important even though it is upstream code. It is one of the few bundled headers that a top-level optimizer source depends on directly.
- Main pieces: LBFGSBSolver, bound projection, projected gradient norm, and line-search updates under box constraints.
- How it fits: bundled support code, used directly by daoptimizer.cpp.
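Two of the primitives the bound-constrained variant adds are easy to show in isolation: projection onto the box, and the projected-gradient convergence measure, which vanishes at a box-constrained stationary point even when the raw gradient does not. Cauchy-point and subspace steps are omitted.

```python
# Box-constraint primitives of an L-BFGS-B-style solver, in isolation.
def project(x, lo, hi):
    """Clamp each coordinate into its [lo, hi] interval."""
    return [min(max(v, a), b) for v, a, b in zip(x, lo, hi)]

def projected_gradient_norm(x, g, lo, hi):
    """Inf-norm of P(x - g) - x; zero only at a box-stationary point."""
    stepped = [v - gi for v, gi in zip(x, g)]
    pg = [p - v for p, v in zip(project(stepped, lo, hi), x)]
    return max(abs(v) for v in pg)

x = project([2.5, -0.3], [0.0, 0.0], [1.0, 1.0])        # → [1.0, 0.0]
# At the corner [1, 0], a gradient whose descent direction points outside
# the box yields a zero projected gradient: the point is stationary.
n = projected_gradient_norm([1.0, 0.0], [-1.0, 1.0], [0.0, 0.0], [1.0, 1.0])
```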
biteaux.h
This large header is the foundation of the bundled BiteOpt code: random numbers, selectors, populations, and base optimizer interfaces all live here.
The file starts low in the stack with CBiteRnd, then builds up the selection helpers, population containers, parallel-population support, and the abstract optimizer interfaces. By the time you reach CBiteOptBase, you have the common mechanics the rest of the BiteOpt family leans on: bounds, evaluation hooks, selector bookkeeping, and population updates.
Because so much is packed into one header, this file is best read as infrastructure rather than algorithm narrative. It is also a good reminder that the repository reuses a mature external optimizer family here. The native project code sits on top of that bundle rather than replacing it.
- Main pieces: CBiteRnd, CBiteSelBase, CBiteSel, CBitePop, CBiteParPops, CBiteOptInterface, and CBiteOptBase.
- How it fits: shared upstream support for the BiteOpt family and nearby methods.
biteopt.h
This header contains the main BiteOpt algorithm classes, including the deep multi-optimizer wrapper used by the native BiteOpt bridge.
CBiteOpt is the core class here. It combines a large collection of generator choices, selector updates, and auxiliary optimizers to drive a derivative-free bounded search. The file also defines CBiteOptDeep, which stacks several CBiteOpt objects and works around a current best solution. That deep variant is what biteoptimizer.cpp subclasses and exposes.
The header is substantial because it carries the real BiteOpt search logic, not just support scaffolding. Even so, its role in this repository is still that of bundled upstream code. The project's own contribution is mostly the callback glue and the way the native bridge is shaped around it.
- Main pieces: CBiteOpt, CBiteOptDeep, selector-heavy generation logic, and CBiteOptMinimize.
- How it fits: the main bundled BiteOpt implementation behind biteoptimizer.cpp.
biteort.h
This header provides the covariance-based rotation machinery used by the BiteOpt family when it wants an orthogonalized population view.
CBiteOrt sits on top of the population support from biteaux.h. It computes weighted centroids, covariance information, eigen-style rotation data, and new samples in the rotated space. That makes it a helper for the more geometry-aware strategies in the bundle rather than a standalone optimizer interface.
The file matters because it helps explain how the bundled CSMAESOpt family can behave differently from the plain population-based methods nearby. It adds a structured view of the current search cloud instead of relying only on direct coordinate-wise moves.
- Main pieces: CBiteOrt, covariance updates, weighted centroids, and rotated-space sampling.
- How it fits: support code for the BiteOpt ecosystem, especially smaesopt.h.
deopt.h
This header holds the BiteOpt project's own DE-like derivative-free solver.
CDEOpt is easy to misread if you already know the repository's main DE backend. It is not the same implementation as deoptimizer.cpp. This one belongs to the BiteOpt bundle and is built on CBiteOptBase. Its update logic, population handling, and internal conventions come from that family of code.
That distinction is worth making because both files talk about differential-style search, but only deoptimizer.cpp is wired to fcmaes/decpp.py. deopt.h is present because the bundled BiteOpt code ships its own neighboring optimizer variants.
- Main pieces: CDEOpt, bound-aware initialization, and one-evaluation-at-a-time optimization steps.
- How it fits: bundled BiteOpt-side DE variant, separate from the repository's Eigen-based DE engine.
nmsopt.h
This header provides a sequential Nelder-Mead implementation used inside the BiteOpt bundle.
CNMSeqOpt wraps the usual simplex method in the same bound-aware interface used by the other bundled optimizers. The implementation has its own tuned coefficients and a fairly careful internal state machine, but in this repository its main value is as a helper component. biteopt.h can pair it with the main BiteOpt search instead of treating it as a top-level exported method.
That makes this file useful background when you want to understand why the BiteOpt code looks more eclectic than the Eigen-based optimizer files. The bundle mixes several search styles and switches between them when it sees an opening.
- Main pieces: CNMSeqOpt, simplex state management, and bound-aware objective evaluation.
- How it fits: an auxiliary optimizer inside the bundled BiteOpt stack.
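The core simplex move in any Nelder-Mead variant is the reflection of the worst vertex through the centroid of the rest. A minimal sketch of that single step (expansion, contraction, shrink, and the bound handling CNMSeqOpt adds are all left out):

```python
# One Nelder-Mead reflection step: reflect the worst vertex through the
# centroid of the remaining vertices and keep it if it improves.
def reflect_worst(simplex, fun, alpha=1.0):
    verts = sorted(simplex, key=fun)                   # best vertex first
    worst, rest = verts[-1], verts[:-1]
    centroid = [sum(v[i] for v in rest) / len(rest) for i in range(len(worst))]
    reflected = [c + alpha * (c - w) for c, w in zip(centroid, worst)]
    if fun(reflected) < fun(worst):                    # accept if it improves
        verts[-1] = reflected
    return verts

fun = lambda x: x[0] ** 2 + x[1] ** 2
simplex = reflect_worst([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]], fun)
# the worst vertex (1, 1) is replaced by its reflection (0, -1)
```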
smaesopt.h
This header contains the BiteOpt bundle's sigma-adaptation evolution strategy.
CSMAESOpt is related in spirit to CMA-style search, but it is not the same implementation as the repository's active CMA file. It builds on biteort.h, updates a distribution with a strong focus on sigma adaptation, and follows the interfaces and data structures used by the BiteOpt family.
It is worth documenting because the header shows how broad the bundled optimizer set is. The project did not just import one method under the BiteOpt name. It brought in a small neighborhood of compatible optimizers and helper classes.
- Main pieces: CSMAESOpt, population sampling through CBiteOrt, and sigma-focused distribution updates.
- How it fits: bundled upstream strategy, not the same native CMA path exposed by cmaescpp.py.
spheropt.h
This header defines a simple hyperspheroid search method used as part of the bundled BiteOpt ecosystem.
CSpherOpt keeps a centroid and radius-style view of the search distribution. The algorithm is lighter than the main BiteOpt path and is described right in the header as a fast, simple optimizer. In the bundle it is useful as one of the alternate engines that the higher-level BiteOpt logic can call into.
That gives the file a support role in this repository. It is not directly wrapped by a Python module, but it helps explain the moving parts that sit behind CBiteOpt and CBiteOptDeep.
- Main pieces: CSpherOpt, centroid and radius buffers, and bounded one-evaluation-at-a-time updates.
- How it fits: auxiliary bundled optimizer used by the BiteOpt family.
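The hyperspheroid sampling idea can be sketched in a few lines: draw an isotropic direction, normalize it, and place the candidate on a sphere of the current radius around the centroid. The real CSpherOpt adapts both centroid and radius as the search proceeds; this only shows the sampling step.

```python
import math, random

# Sample a point uniformly on a sphere of given radius around a centroid,
# the geometric primitive behind a hyperspheroid search like CSpherOpt.
def sphere_sample(centroid, radius, rng):
    d = [rng.gauss(0, 1) for _ in centroid]           # isotropic direction
    norm = math.sqrt(sum(v * v for v in d))
    return [c + radius * v / norm for c, v in zip(centroid, d)]

rng = random.Random(4)
x = sphere_sample([1.0, 2.0, 3.0], 0.5, rng)
dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, [1.0, 2.0, 3.0])))
```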