How to specify algorithms and algorithm specific options#

The algorithm argument#

The algorithm argument can either be a string with the name of an algorithm that is implemented in estimagic, or a function that fulfills the interface laid out in Internal optimizers for estimagic.

Which algorithms are available in estimagic depends on the packages a user has installed. We list all implemented algorithms below.
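Conceptually, the dispatch between the two forms can be sketched as follows. This is a simplified pure-Python illustration, not estimagic's actual implementation; the function and registry names are hypothetical:

```python
def resolve_algorithm(algorithm, registry):
    """Return an optimizer callable from a name or a user-supplied function.

    ``registry`` maps algorithm names to installed optimizer functions.
    A string is looked up in the registry; a callable that fulfills the
    internal optimizer interface is used as-is.
    """
    if callable(algorithm):
        return algorithm
    if isinstance(algorithm, str):
        try:
            return registry[algorithm]
        except KeyError:
            raise ValueError(f"Unknown or uninstalled algorithm: {algorithm!r}")
    raise TypeError("algorithm must be a string or a callable.")
```

This also shows why the set of available string names depends on the installed packages: only optimizers whose dependencies are present end up in the registry.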

The algo_options argument#

algo_options is a dictionary with options that are passed to the optimization algorithm.

We align the names of all algo_options across algorithms as far as possible.

To make it easier to understand which aspect of the optimization is influenced by an option, we group them with prefixes. For example, the names of all convergence criteria start with "convergence.". In general, the prefix is separated from the option name by a dot.

Which options are supported depends on the algorithm you selected and is documented below. Before we get there, let’s look at one example:

algo_options = {
    "trustregion.threshold_successful": 0.2,
    "trustregion.threshold_very_successful": 0.9,
    "trustregion.shrinking_factor.not_successful": 0.4,
    "trustregion.shrinking_factor.lower_radius": 0.2,
    "trustregion.shrinking_factor.upper_radius": 0.8,
    "convergence.scaled_criterion_tolerance": 0.0,
    "convergence.noise_corrected_criterion_tolerance": 1.1,

To make it easier to switch between algorithms, we simply ignore unsupported options and issue a warning that explains which options have been ignored.
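The ignore-and-warn behavior can be sketched roughly like this (a simplified pure-Python sketch of the idea, not estimagic's actual code; the function name is hypothetical):

```python
import warnings


def filter_algo_options(algo_options, supported):
    """Keep supported options and warn about the rest instead of raising.

    ``supported`` is the set of option names the selected algorithm accepts.
    """
    used = {k: v for k, v in algo_options.items() if k in supported}
    ignored = sorted(set(algo_options) - set(used))
    if ignored:
        warnings.warn(
            "The following options were ignored because the selected "
            f"algorithm does not support them: {ignored}"
        )
    return used
```

Because unsupported options are dropped with a warning rather than an error, the same algo_options dictionary can be reused when experimenting with different algorithms.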

To find more information on algo_options that are supported by many optimizers, see The default algorithm options.

Available optimizers and their options#

Optimizers from scipy#

estimagic supports most scipy algorithms. You do not need to install additional dependencies to use them.

Own optimizers#

Estimagic’s own algorithms are considered experimental and should not be used for publication yet.

In the long run we plan to implement a few high quality optimizers that are specially suited for difficult optimizations that arise in estimation problems. Examples are optimizers that exploit a nonlinear least-squares structure and can deal with noisy criterion functions.

Optimizers from the Toolkit for Advanced Optimization (TAO)#

At the moment, estimagic only supports TAO’s POUNDERs algorithm.

The POUNDERs algorithm by Stefan Wild is tailored to minimize a nonlinear sum of squares objective function. Remember to cite [algo_13] in addition to estimagic when using POUNDERs.
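To see what a nonlinear sum of squares objective looks like, here is a minimal pure-Python illustration with hypothetical residuals. The scalar objective is f(x) = Σᵢ rᵢ(x)², and solvers like POUNDERs exploit the individual residuals rather than only the scalar sum:

```python
def residuals(x):
    """Hypothetical nonlinear residuals r_i(x), e.g. model errors at data points."""
    a, b = x
    return [a - 1.0, (b - 2.0) ** 2]


def criterion(x):
    """The scalar objective: the sum of squared residuals."""
    return sum(r ** 2 for r in residuals(x))
```

A least-squares solver receives the residual vector (or an equivalent structure) and can build a better local model of the criterion than a generic optimizer that only observes the scalar value.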

To use POUNDERs you need to have petsc4py installed.

Optimizers from the Numerical Algorithms Group (NAG)#

Currently, estimagic supports the Derivative-Free Optimizer for Least-Squares Minimization (DF-OLS) and BOBYQA by the Numerical Algorithms Group.

To use DF-OLS you need to have the dfols package installed (pip install DFO-LS). BOBYQA requires the pybobyqa package (pip install Py-BOBYQA).

PYGMO2 Optimizers#

Please cite [algo_18] in addition to estimagic when using pygmo. estimagic supports the following pygmo2 optimizers.

The Interior Point Optimizer (ipopt)#

estimagic’s support for the Interior Point Optimizer ([algo_34], [algo_35], [algo_36], [algo_37]) is built on cyipopt, a Python wrapper for the Ipopt optimization package.

To use ipopt, you need to have cyipopt installed (conda install cyipopt).

The Fides Optimizer#

estimagic supports the Fides Optimizer. To use Fides, you need to have the fides package installed (pip install "fides>=0.7.4").


While the algorithm does work with bounds, theoretically guaranteed convergence requires that the optimum lies away from the boundary. In practice, parameters at the boundary have also caused trouble.

The NLOPT Optimizers (nlopt)#

estimagic supports the following NLOPT algorithms. Please add the appropriate citations in addition to estimagic when using an NLOPT algorithm. To install nlopt run conda install nlopt.

References#

Dieter Kraft. A software package for sequential quadratic programming. Technical Report, DLR German Aerospace Center – Institute for Flight Mechanics, Köln, Germany, 1988.


Fuchang Gao and Lixing Han. Implementing the nelder-mead simplex algorithm with adaptive parameters. Computational Optimization and Applications, 51(1):259–277, 2012.


Jorge Nocedal and Stephen Wright. Numerical optimization. Springer Science & Business Media, 2006.


M Powell. A direct search optimization method that models the objective and constraint functions by linear interpolation, pages 51–67. Kluwer Academic, Dordrecht, 1994.


Michael JD Powell. Direct search algorithms for optimization calculations. Acta numerica, pages 287–336, 1998.


Michael JD Powell. A view of algorithms for optimization without derivatives. Mathematics Today-Bulletin of the Institute of Mathematics and its Applications, 43(5):170–174, 2007.


Marucha Lalee, Jorge Nocedal, and Todd Plantenga. On the implementation of an algorithm for large-scale equality constrained optimization. SIAM Journal on Optimization, 8(3):682–706, 1998.


AR Conn, NI Gould, and PL Toint. Nonlinear equations and nonlinear fitting. In Trust region methods, volume 1, pages 749–774. SIAM, 2000.


Richard H Byrd, Mary E Hribar, and Jorge Nocedal. An interior point algorithm for large-scale nonlinear programming. SIAM Journal on Optimization, 9(4):877–900, 1999.


Ernst K. Berndt, Bronwyn H. Hall, Robert E. Hall, and Jerry A. Hausman. Estimation and inference in nonlinear structural models. Annals of Economic and Social Measurement, 3(4):653–665, 1974.


Halbert White. Maximum likelihood estimation of misspecified models. Econometrica, 50(1):1–25, 1982.


S Benson, LC McInnes, JJ Moré, T Munson, and J Sarich. Tao user manual (revision 3.7). Technical Report ANL/MCS-TM-322, Argonne National Laboratory, 2017.


Stefan M. Wild. Solving derivative-free nonlinear least squares problems with pounders. Technical Report, Argonne National Laboratory, 2015.


Coralia Cartis, Jan Fiala, Benjamin Marteau, and Lindon Roberts. Improving the flexibility and robustness of model-based derivative-free optimization solvers. 2018. arXiv:1804.00154.


Michael JD Powell. The bobyqa algorithm for bound constrained optimization without derivatives. Cambridge NA Report NA2009/06, University of Cambridge, Cambridge, pages 26–46, 2009.


Coralia Cartis, Lindon Roberts, and Oliver Sheridan-Methven. Escaping local minima with derivative-free methods: a numerical investigation. 2018. arXiv:1812.11343.


Francesco Biscani and Dario Izzo. A parallel global multiobjective framework for optimization: pagmo. Journal of Open Source Software, 5(53):2338, 2020. doi:10.21105/joss.02338.


Martin Schlüter, Jose A. Egea, and Julio R. Banga. Extended ant colony optimization for non-convex mixed integer nonlinear programming. Computers & Operations Research, 36(7):2217–2229, jul 2009. doi:10.1016/j.cor.2008.08.015.


Dervis Karaboga and Bahriye Basturk. A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. Journal of Global Optimization, 39(3):459–471, apr 2007. doi:10.1007/s10898-007-9149-x.


Marjan Mernik, Shih-Hsi Liu, Dervis Karaboga, and Matej Črepinšek. On clarifying misconceptions when comparing variants of the artificial bee colony algorithm by offering a new implementation. Information Sciences, 291:115–127, jan 2015. doi:10.1016/j.ins.2014.08.040.


Rainer Storn and Kenneth Price. Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, 11(4):341–359, 1997. doi:10.1023/a:1008202821328.


Pietro S. Oliveto, Jun He, and Xin Yao. Time complexity of evolutionary algorithms for combinatorial optimization: a decade of results. International Journal of Automation and Computing, 4(3):281–293, jul 2007. doi:10.1007/s11633-007-0281-3.


Janez Brest, Sao Greiner, Borko Boskovic, Marjan Mernik, and Viljem Zumer. Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems. IEEE Transactions on Evolutionary Computation, 10(6):646–657, 2006. doi:10.1109/TEVC.2006.872133.


Saber M. Elsayed, Ruhul A. Sarker, and Daryl L. Essam. Differential evolution with multiple strategies for solving cec2011 real-world numerical optimization problems. In 2011 IEEE Congress of Evolutionary Computation (CEC), 1041–1048. 2011. doi:10.1109/CEC.2011.5949732.


Nikolaus Hansen. The CMA evolution strategy: a comparing review. In Towards a New Evolutionary Computation, pages 75–102. Springer Berlin Heidelberg, 2006. doi:10.1007/3-540-32494-1_4.


A. Corana, M. Marchesi, C. Martini, and S. Ridella. Minimizing multimodal functions of continuous variables with the “simulated annealing” algorithm. ACM Trans. Math. Softw., 13(3):262–280, September 1987. doi:10.1145/29380.29864.


Riccardo Poli, James Kennedy, and Tim Blackwell. Particle swarm optimization. Swarm Intelligence, 1(1):33–57, aug 2007. doi:10.1007/s11721-007-0002-0.


David J. Wales and Jonathan P. K. Doye. Global optimization by basin-hopping and the lowest energy structures of lennard-jones clusters containing up to 110 atoms. The Journal of Physical Chemistry A, 101(28):5111–5116, jul 1997. doi:10.1021/jp970984n.


Tobias Glasmachers, Tom Schaul, Sun Yi, Daan Wierstra, and Jürgen Schmidhuber. Exponential natural evolution strategies. In Proceedings of the 12th annual conference on Genetic and evolutionary computation - GECCO '10. ACM Press, 2010. doi:10.1145/1830483.1830557.


Seyedali Mirjalili, Seyed Mohammad Mirjalili, and Andrew Lewis. Grey wolf optimizer. Advances in Engineering Software, 69:46–61, mar 2014. doi:10.1016/j.advengsoft.2013.12.007.


Tamara G. Kolda, Robert Michael Lewis, and Virginia Torczon. Optimization by direct search: new perspectives on some classical and modern methods. SIAM Review, 45(3):385–482, jan 2003. doi:10.1137/s003614450242889.


M. Mahdavi, M. Fesanghary, and E. Damangir. An improved harmony search algorithm for solving optimization problems. Applied Mathematics and Computation, 188(2):1567–1579, may 2007. doi:10.1016/j.amc.2006.11.033.


Andreas Wächter and Lorenz T. Biegler. Line search filter methods for nonlinear programming: local convergence. SIAM Journal on Optimization, 16(1):32–48, jan 2005. doi:10.1137/s1052623403426544.


Andreas Wächter and Lorenz T. Biegler. Line search filter methods for nonlinear programming: motivation and global convergence. SIAM Journal on Optimization, 16(1):1–31, jan 2005. doi:10.1137/s1052623403426556.


Andreas Wächter and Lorenz T. Biegler. On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Mathematical Programming, 106(1):25–57, apr 2005. doi:10.1007/s10107-004-0559-y.


Jorge Nocedal, Andreas Wächter, and Richard A. Waltz. Adaptive barrier update strategies for nonlinear interior methods. SIAM Journal on Optimization, 19(4):1674–1693, jan 2009. doi:10.1137/060649513.


N Chiang, C Petra, and V Zavala. Structured nonconvex optimization of large-scale energy systems using pips-nlp. In Proceedings of the 18th power systems computation conference (PSCC). Wroclaw, Poland, 2014.


Thomas F. Coleman and Yuying Li. On the convergence of interior-reflective newton methods for nonlinear minimization subject to bounds. Mathematical Programming, 67(1-3):189–224, oct 1994. doi:10.1007/bf01582221.


Thomas F. Coleman and Yuying Li. An interior trust region approach for nonlinear minimization subject to bounds. SIAM Journal on Optimization, 6(2):418–445, may 1996. doi:10.1137/0806023.


C. G. Broyden. A class of methods for solving nonlinear simultaneous equations. Mathematics of Computation, 19(92):577–577, 1965. doi:10.1090/s0025-5718-1965-0198670-6.


Jorge Nocedal and Stephen J. Wright. Numerical Optimization. Springer-Verlag, 1999. doi:10.1007/b98874.


J Nelder and R Mead. A simplex method for function minimization. The Computer Journal, 7:308–313, 1965.


Richard Brent. Algorithms for minimization without derivatives. 1972.


T Rowan. Functional stability analysis of numerical algorithms. 1990.


M Powell. The newuoa software for unconstrained optimization without derivatives. In Proc. 40th Workshop on Large Scale Nonlinear Optimization. Erice, Italy, 2004.


R Dembo and T Steihaug. Truncated newton algorithms for large-scale optimization. Math. Programming, 26:190–212, 1983. doi:10.1007/BF02592055.


D Liu and J Nocedal. On the limited memory bfgs method for large scale optimization. Math. Programming, 45:503–528, 1989.


J. Nocedal. Updating quasi-newton matrices with limited storage. Math. Comput, 35:773–782, 1980.


Krister Svanberg. A class of globally convergent optimization methods based on conservative convex separable approximations. SIAM J. Optim, 12(2):555–573, 2002.


J Vlcek and L Luksan. Shifted limited-memory variable metric methods for large-scale unconstrained minimization. J. Computational Appl. Math, 186:365–390, 2006.


Dieter Kraft. Algorithm 733: tomp-fortran modules for optimal control calculations. ACM Transactions on Mathematical Software, 20(3):262–281, 1994.


D Jones, C Perttunen, and B Stuckman. Lipschitzian optimization without the lipschitz constant. J. Optimization Theory and Applications, 79:157, 1993.


J Gablonsky and C Kelley. A locally-biased form of the direct algorithm. J. Global Optimization, 21(1):27–37, 2001.


C Da Silva, M Santos, H Goncalves, and Hernandez-Figueroa. Designing novel photonic devices by bio-inspired computing. IEEE Photonics Technology Letters, 22(15):1177–1179, 2010.


C Da Silva and Santos. Parallel and bio-inspired computing applied to analyze microwave and photonic metamaterial structures. 2010.


H.-G Beyer and H.-P Schwefel. Evolution strategies: a comprehensive introduction. Journal Natural Computing, 1(1):3–52, 2002.


W. Vent. Review of: Ingo Rechenberg, Evolutionsstrategie — Optimierung technischer Systeme nach Prinzipien der biologischen Evolution. Frommann-Holzboog, Stuttgart, 1973. Feddes Repertorium, 86(5):337, 1975.


Thomas Philip Runarsson and Xin Yao. Search biases in constrained evolutionary optimization. IEEE Trans. on Systems, Man, and Cybernetics Part C: Applications and Reviews, 35(2):233–243, 2005.


Thomas Philip Runarsson and Xin Yao. Stochastic ranking for constrained evolutionary optimization. IEEE Trans. Evolutionary Computation, 4(3):284–294, 2000.


P Kaelo and M Ali. Some variants of the controlled random search algorithm for global optimization. J. Optim. Theory Appl, 130(2):253–264, 2006.


W Price. A controlled random search procedure for global optimization, pages 71–84. Volume 2. North-Holland Press, Amsterdam, 1978.


W Price. Global optimization by controlled random search. J. Optim. Theory Appl, 40(3):333–348, 1983.