Benchmark problems for dynamic modeling of intracellular processes

Latest revision as of 15:13, 25 February 2020


1 Citation

Hass, Helge, et al. "Benchmark problems for dynamic modeling of intracellular processes." Bioinformatics 35.17 (2019): 3073-3082.

Permanent link to the paper: https://doi.org/10.1093/bioinformatics/btz020

2 Summary

In this paper, a collection of ODE models with publicly available experimental data is compiled.

To demonstrate its usefulness for computational studies within the field of ODE modeling, simulation studies were conducted that yielded the following two outcomes, both related to method comparison and performance assessment.

3 Study outcomes

3.1 Outcome O1

Optimization benefits from both (a) optimization in log-transformed parameter space and (b) drawing initial values in log-transformed parameter space. This could be due to increased convexity through log transformation.

Outcome O1 is presented as Figure 2 in the original publication.
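The log-transformation idea behind Outcome O1 can be sketched on a toy least-squares problem (an illustrative example, not the paper's setup; the model, data, and optimizer choice here are assumptions):

```python
# Sketch: fitting an exponential decay y = A * exp(-k * t) by least squares,
# with the initial guess drawn and the optimization performed in
# log-transformed parameter space.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 30)
true = np.array([2.0, 0.5])                       # true A, k
data = true[0] * np.exp(-true[1] * t) + 0.05 * rng.standard_normal(t.size)

def objective_log(log_p):
    A, k = np.exp(log_p)                          # back-transform to linear scale
    residuals = A * np.exp(-k * t) - data
    return np.sum(residuals ** 2)

# Both the start value and the search live in log-space; the objective
# is often better conditioned (closer to convex) there.
res = minimize(objective_log, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
A_hat, k_hat = np.exp(res.x)
```

The back-transform `np.exp` also guarantees positivity of the rate constants, which is a common side benefit of log-parameterization.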

3.2 Outcome O2

The trust-region-reflective algorithm outperforms the interior-point algorithm in most parameter estimation problems encountered in systems biology.

Outcome O2 is presented as Figure 3 in the original publication.

3.3 Further outcomes

This model collection consists mostly of sloppy models.

4 Study design and evidence level

4.1 General aspects

  • The performance metric is the average number of converged starts per minute.
  • A fit is assumed to have reached the best-fit minimum if its objective function value differs from the overall best value by less than 0.1. As discussed in the supplement of the original paper, the results remain qualitatively the same if a different threshold is used.
  • A family of benchmark models is used to test the different approaches.
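How the convergence threshold and the performance metric combine can be sketched as follows (the objective values and wall time are hypothetical numbers, not results from the paper):

```python
import numpy as np

# Hypothetical final objective values from a multi-start run, plus wall time.
final_values = np.array([12.3, 12.35, 12.31, 15.7, 12.38, 20.1])
wall_time_minutes = 2.5

best = final_values.min()
# Convergence criterion from the paper: within 0.1 of the best value.
converged = int(np.sum(final_values - best < 0.1))
# Performance metric: converged starts per minute.
converged_starts_per_minute = converged / wall_time_minutes
```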

4.2 Design for Outcome O1

  • Initial parameter values were drawn in log-space; the subsequent optimization of the parameters was performed in either linear or log-space.
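Why start values are drawn in log-space can be illustrated with a quick sketch (the bounds below are assumed, chosen only to mimic rate constants spanning several orders of magnitude):

```python
import numpy as np

rng = np.random.default_rng(1)
lower, upper = 1e-5, 1e3      # hypothetical parameter bounds
n_starts = 1000

# Uniform sampling in log-space covers every order of magnitude evenly...
starts_log = 10.0 ** rng.uniform(np.log10(lower), np.log10(upper), size=n_starts)
# ...whereas uniform sampling in linear space almost never yields values
# below 1 for these bounds.
starts_lin = rng.uniform(lower, upper, size=n_starts)

frac_small_log = np.mean(starts_log < 1.0)   # expected ~5/8 here
frac_small_lin = np.mean(starts_lin < 1.0)   # expected ~1/1000 here
```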

4.3 Design for Outcome O2

  • 1000 multi-start local optimization runs for each of the two optimizers.
  • Optimization settings are reported: a user-defined gradient and a Gauss-Newton approximation of the Hessian were supplied.
  1. The tolerance on first-order optimality was set to 0.
  2. The termination tolerance on the parameters was set to 10⁻⁶.
  3. The subproblem algorithm was always set to cg (conjugate gradient).
  4. The maximum number of iterations was set to 10 000.
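The tolerance names above match MATLAB-style optimizer options; a rough SciPy analogue (an assumption for illustration, not the paper's actual setup) maps them onto `minimize(method="trust-constr")`, using a tiny `gtol` in place of the exact 0 and a standard test function in place of an ODE model:

```python
import numpy as np
from scipy.optimize import minimize

def rosen(x):
    # Rosenbrock function, a stand-in objective for this sketch.
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)

res = minimize(
    rosen,
    x0=np.zeros(4),
    method="trust-constr",
    options={
        "gtol": 1e-12,    # first-order optimality tolerance (0 in the paper)
        "xtol": 1e-6,     # termination tolerance on the parameters
        "maxiter": 10_000,
    },
)
```

With `gtol` effectively zero, termination is driven by the parameter tolerance `xtol`, mirroring the settings listed above.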

5 Further comments and aspects
