Benchmarking optimization methods for parameter estimation in large kinetic models


Revision as of 07:19, 18 June 2019

1 Paper name

Alejandro F. Villaverde, Fabian Fröhlich, Daniel Weindl, Jan Hasenauer, Julio R. Banga, Benchmarking optimization methods for parameter estimation in large kinetic models, Bioinformatics, Volume 35, Issue 5, 01 March 2019, Pages 830–838.

Permanent link to the paper


1.1 Summary

In this paper, the performance of multiple optimization approaches for estimating parameters of ODE models in systems biology is investigated.

The following combinations of local and global search strategies were investigated:

  • Local methods: two deterministic gradient-based optimizers (fmincon and nl2sol), the gradient-free dynamic hill climbing method, or none (i.e. purely global search)
  • Global strategies: multistart optimization vs. the enhanced scatter search (eSS) metaheuristic
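The multistart strategy can be sketched in a few lines (a minimal illustration, not the paper's actual implementation): a gradient-based local solver is launched from random start points inside the parameter bounds, here on a hypothetical exponential-decay fitting problem.

```python
# Minimal multistart sketch (illustrative, not the paper's code):
# launch a gradient-based local solver from random start points inside
# the bounds and keep the best result.
# Toy problem: estimate the decay rate k and initial value x0 of
# x(t) = x0 * exp(-k * t) from noisy observations.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 20)
k_true, x0_true = 0.8, 2.0
data = x0_true * np.exp(-k_true * t) + 0.01 * rng.standard_normal(t.size)

def sse(theta):
    """Sum of squared residuals of the toy model."""
    k, x0 = theta
    return np.sum((x0 * np.exp(-k * t) - data) ** 2)

bounds = [(1e-2, 1e2), (1e-2, 1e2)]

def multistart(n_starts=20):
    best = None
    for _ in range(n_starts):
        # Sample starts log-uniformly, as is common for kinetic rates.
        start = [10.0 ** rng.uniform(np.log10(lo), np.log10(hi))
                 for lo, hi in bounds]
        res = minimize(sse, start, method="L-BFGS-B", bounds=bounds)
        if best is None or res.fun < best.fun:
            best = res
    return best

best = multistart()
print(best.x)  # should be close to (k_true, x0_true)
```

In the paper itself the local solvers were fmincon, nl2sol and dynamic hill climbing; `L-BFGS-B` stands in here only because it is a readily available bounded gradient-based solver.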

Moreover, the different combinations were evaluated in linear and in logarithmic parameter space.
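The logarithmic parametrization can be implemented by letting the solver work on eta = log10(theta), so that wide multiplicative bounds on theta become a small additive box on eta. A minimal sketch, assuming a noiseless toy objective rather than the paper's models:

```python
# Sketch of optimization in log10 parameter space (illustrative only):
# the solver sees eta = log10(theta), so wide multiplicative bounds
# on theta become a small additive box on eta.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 5.0, 20)
data = 2.0 * np.exp(-0.8 * t)  # noiseless toy data, k = 0.8, x0 = 2.0

def sse(theta):
    k, x0 = theta
    return np.sum((x0 * np.exp(-k * t) - data) ** 2)

def sse_log(eta):
    # Evaluate the objective at theta = 10**eta.
    return sse(10.0 ** eta)

# Bounds of 1e-3..1e3 (six orders of magnitude) become [-3, 3].
res = minimize(sse_log, x0=np.array([0.0, 0.0]), method="L-BFGS-B",
               bounds=[(-3.0, 3.0), (-3.0, 3.0)])
theta_hat = 10.0 ** res.x
print(theta_hat)  # close to (0.8, 2.0)
```

The design rationale is that kinetic parameters often vary over orders of magnitude, and the log transform makes step sizes multiplicative rather than additive.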

Six benchmark problems with 36–383 parameters and 105–7567 data points were evaluated. Three of these problems have real experimental data; for the other three, only simulated data was available.


1.2 Study outcomes

List the paper results concerning method comparison and benchmarking:

1.2.1 Outcome O1

The performance of ...

Outcome O1 is presented as Figure X in the original publication.

1.2.2 Outcome O2

...

Outcome O2 is presented as Figure X in the original publication.

1.2.3 Outcome On

...

Outcome On is presented as Figure X in the original publication.

1.2.4 Further outcomes

If intended, you can add further outcomes here.


1.3 Study design and evidence level

1.3.1 General aspects

The best-performing method is introduced within this study itself. In general, presenting a new approach by comparing it with existing ones easily leads to biased outcomes.

The authors provide source code which seems to enable reproduction of the presented results. This is very valuable.

The study was performed jointly by experts from two fields, stochastic global and deterministic local optimization, to ensure a fair comparison.

The study designs for describing specific outcomes are listed in the following subsections:

All designs are based on seven previously published estimation problems.

1.3.2 Design for Outcome O1

  • The outcome was generated for ...
  • Configuration parameters were chosen ...
  • ...

1.3.3 Design for Outcome O2

  • The outcome was generated for ...
  • Configuration parameters were chosen ...
  • ...

...

1.3.4 Design for Outcome O

  • The outcome was generated for ...
  • Configuration parameters were chosen ...
  • ...

1.4 Further comments and aspects

  • Of the 7 evaluated models, only 3 had real experimental data.
  • For 5 of the 7 models, rather stringent parameter bounds were assumed: for 4 models, the search space was only a range spanning two orders of magnitude around the optimal parameters, and only two models have a realistic range spanning six or ten orders of magnitude.
  • The parameter bounds were defined symmetrically around the optimal parameters. This might introduce a bias, because optimization methods that tend to search in the middle of the parameter space are favored. Moreover, the performance of optimization at (or close to) the bounds is not evaluated.
  • Three out of the six models have fewer data points than parameters.
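The bounds construction criticized above can be made concrete with a small helper (hypothetical, for illustration only): a box that is symmetric around the known optimum in log10 space and spans a given total number of orders of magnitude.

```python
# Hypothetical helper illustrating the criticized setup: bounds that
# are symmetric around the known optimum in log10 space and span a
# fixed total number of orders of magnitude.
import numpy as np

def symmetric_log_bounds(theta_opt, orders_of_magnitude):
    half = orders_of_magnitude / 2.0
    theta_opt = np.asarray(theta_opt, dtype=float)
    return theta_opt * 10.0 ** -half, theta_opt * 10.0 ** half

lo, hi = symmetric_log_bounds([0.5, 20.0], 2)  # two orders of magnitude
# lo = [0.05, 2.0], hi = [5.0, 200.0]: the optimum sits exactly at the
# log-space center of the box.
```

Any optimizer whose search distribution is centered in the box (e.g. Latin hypercube or uniform log-space starts) then begins near the solution, which is exactly the potential bias noted above.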


1.5 References

The list of cited or related literature is placed here.