Benchmarking optimization methods for parameter estimation in large kinetic models

Revision as of 07:49, 18 June 2019 by Ckreutz (talk | contribs) (Paper name)

1 Paper name

Alejandro F Villaverde, Fabian Fröhlich, Daniel Weindl, Jan Hasenauer, Julio R Banga, Benchmarking optimization methods for parameter estimation in large kinetic models, Bioinformatics, 35(5), 830–838, 2019.

Permanent link to the paper

1.1 Summary

This paper investigates the performance of multiple optimization approaches for estimating parameters of ODE models in systems biology.

The following combinations of local and global search strategies were investigated:

  • Local methods: Two deterministic optimization approaches (fmincon with adjoint sensitivities vs. nl2sol with forward sensitivities) vs. gradient-free dynamic hill climbing vs. none (=only global)
  • All these local methods were combined either with multistart optimization or with the enhanced scatter search (eSS) metaheuristic as the global search strategy.
  • As a pure global search strategy, particle swarm optimization was considered.

Moreover, each combination was evaluated in both linear and logarithmic parameter space.

Six benchmark problems with 36–383 parameters and 105–7567 data points were evaluated. Three of these problems have experimental data; for the other three, only simulated data was available.
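The hybrid strategies compared in the paper pair a global search with gradient-based local refinement. A minimal sketch of the simplest such scheme, multistart local optimization, is shown below. The objective (a toy exponential-decay fit with simulated data) and all names are illustrative assumptions, not one of the paper's benchmark models or its actual implementation:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical objective: least-squares fit of y(t) = a * exp(-k * t)
# to simulated noisy data (illustrative, not a model from the paper).
rng = np.random.default_rng(0)
t = np.linspace(0, 5, 20)
true_theta = np.array([2.0, 0.8])                       # (a, k)
data = true_theta[0] * np.exp(-true_theta[1] * t) + 0.05 * rng.standard_normal(t.size)

def cost(theta):
    residuals = theta[0] * np.exp(-theta[1] * t) - data
    return np.sum(residuals ** 2)

def multistart(cost, bounds, n_starts=20):
    """Multistart strategy: launch a gradient-based local search
    from random points within the bounds and keep the best result."""
    lb, ub = np.array(bounds).T
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(lb, ub)                        # random start point
        res = minimize(cost, x0, method="L-BFGS-B", bounds=bounds)
        if best is None or res.fun < best.fun:
            best = res
    return best

best = multistart(cost, bounds=[(0.1, 10.0), (0.01, 5.0)])
```

eSS replaces the independent random starts with a population-based scatter search that decides adaptively where to launch the local solver; the local refinement step plays the same role as the `minimize` call above.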

1.2 Study outcomes

The paper's results concerning method comparison and benchmarking are summarized below:

1.2.1 Outcome O1

  • Enhanced scatter search was found to be more efficient than multistart optimization, although both approaches could solve the problems in reasonable time. The average difference in computation time is roughly a factor of two.

Outcome O1 is presented as Figure X in the original publication.

1.2.2 Outcome O2

  • Pure global search strategies are less efficient than hybrid methods (= combination of local and global approaches).

Outcome O2 is presented as Figure X in the original publication.

1.2.3 Outcome O3

  • Optimization at the log-scale is more efficient than optimization at the linear scale.

Outcome O3 is presented as Figure X in the original publication.
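The log-scale optimization of outcome O3 amounts to a simple reparameterization: instead of optimizing the parameters θ directly, one optimizes φ = log10(θ) and maps back inside the objective. A minimal sketch with a made-up toy objective (not one of the paper's models) follows:

```python
import numpy as np
from scipy.optimize import minimize

def cost_linear(theta):
    # Toy objective with its minimum at theta = (1e-3, 1e2);
    # the parameters span many orders of magnitude.
    return (np.log10(theta[0]) + 3.0) ** 2 + (np.log10(theta[1]) - 2.0) ** 2

def cost_log(phi):
    # Evaluate the same objective, but let the optimizer work on phi = log10(theta).
    return cost_linear(10.0 ** phi)

# Linear-scale bounds of 1e-5..1e5 become simple symmetric bounds of -5..5 on phi.
res = minimize(cost_log, x0=np.zeros(2), method="L-BFGS-B",
               bounds=[(-5, 5), (-5, 5)])
theta_hat = 10.0 ** res.x
```

The transform makes parameters that differ by orders of magnitude comparable in scale, which typically improves the conditioning of gradient-based searches.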

1.2.4 Outcome O4

  • As a local search strategy, the two gradient-based approaches showed the best performance.

Outcome O4 is presented as Figure X in the original publication.

1.2.5 Outcome O5

  • Overall, eSS-FMINCON-ADJ-LOG showed the best performance. This method combines enhanced scatter search as the global search strategy with gradient-based local optimization: fmincon is the local optimization algorithm, and gradients are calculated by the adjoint sensitivity approach. The parameters were optimized at the log-scale.

Outcome O5 is presented as Figure X in the original publication.

1.3 Study design and evidence level

1.3.1 General aspects

  • The authors provide source code which appears to enable reproduction of the presented results.

This is very valuable.

  • The study was performed jointly by experts from two fields, stochastic global and deterministic local optimization methods, to ensure a fair comparison.
  • The choice of configuration parameters is not discussed in the paper. However, the authors provide comprehensive information about the selected configuration parameters by publishing their source code.
  • For the 7 evaluated models, only 3 had real experimental data.
  • For 5 of the 7 models, rather stringent parameter bounds were assumed. For 4 models, only a range spanning two orders of magnitude around the optimal parameters was defined as the search space. Only two models have a realistic range spanning six or ten orders of magnitude.
  • The parameter bounds were defined symmetrically around the optimal parameters. This might introduce a bias, because optimization methods that tend to search in the middle of the parameter space are favored. Moreover, the performance of optimization at (or close to) the bounds is not evaluated.
  • Three out of the six models have fewer data points than parameters.

1.4 Further comments and aspects

  • The approaches were assessed by the overall efficiency (OE), a metric introduced in this paper. Its inverse, 1/OE, quantifies how much longer a method needs compared to the best method in order to find a good solution. For the best method, OE = 1 is obtained; 1/OE = 2, as an example, means that twice the computation time is required for a good fit.
  • The best performing method is a combination of previously published methods.

However, the combination itself is introduced within this study. In general, presenting a new approach by comparing it with existing ones easily leads to biased outcomes, and validation in independent studies is recommended.
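One plausible reading of the overall-efficiency description above is a simple time ratio: OE = (time of the fastest method) / (time of this method), so that the best method gets OE = 1 and 1/OE is its slowdown factor. A sketch under that assumption, with made-up illustrative times rather than results from the paper:

```python
# Hypothetical wall-clock times (seconds) each method needs to reach a good fit.
# These numbers are invented for illustration, not taken from the paper.
times_to_good_fit = {
    "eSS-FMINCON-ADJ-LOG": 100.0,
    "MS-FMINCON-ADJ-LOG": 200.0,
    "eSS-PSO": 800.0,
}

# Overall efficiency: ratio of the best method's time to each method's time.
best_time = min(times_to_good_fit.values())
oe = {method: best_time / t for method, t in times_to_good_fit.items()}

# The best method has OE = 1; 1/OE gives the slowdown factor relative to it.
slowdown = {method: 1.0 / v for method, v in oe.items()}
```

With these numbers, MS-FMINCON-ADJ-LOG gets OE = 0.5, i.e. a slowdown factor 1/OE = 2, matching the interpretation given in the bullet above.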