Benchmark problems for dynamic modeling of intracellular processes



1 Citation

Hass, Helge, et al. "Benchmark problems for dynamic modeling of intracellular processes." Bioinformatics 35.17 (2019): 3073-3082.


2 Summary

In this paper, a collection of ODE models of intracellular processes, together with the corresponding publicly available experimental data, is compiled.

To demonstrate its usefulness for computational studies in the field of ODE modeling, simulation studies are conducted that establish the following two outcomes, both of which relate to method comparison and performance assessment.

3 Study outcomes

3.1 Outcome O1

Optimization benefits both from (a) optimizing in log-transformed parameter space and (b) drawing initial values in log-transformed parameter space. A possible explanation is the increased convexity of the objective function under the log transformation.

Outcome O1 is presented as Figure 2 in the original publication.

3.2 Outcome O2

The trust-region-reflective algorithm outperforms the interior-point algorithm in most parameter estimation problems encountered in systems biology.

Outcome O2 is presented as Figure 3 in the original publication.

3.3 Further outcomes

The model collection consists mostly of sloppy models, i.e. models for which the eigenvalues of the sensitivity-based Hessian approximation of the objective function span many orders of magnitude.
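
A minimal sketch of how such a sloppiness check can be done numerically, assuming a residual Jacobian (sensitivity matrix) is available for a given model; the matrix below is only a random placeholder, deliberately scaled to be ill-conditioned:

  import numpy as np

  # Hypothetical residual Jacobian (sensitivity matrix): n_data x n_parameters.
  # In practice it would come from the model's sensitivity equations or finite
  # differences; here the columns are scaled over several decades on purpose.
  rng = np.random.default_rng(0)
  n_data, n_par = 50, 8
  jac = rng.normal(size=(n_data, n_par)) * np.logspace(0, -4, n_par)

  # Gauss-Newton approximation of the Hessian of a least-squares objective.
  hessian_approx = jac.T @ jac

  # For a "sloppy" model the (nonzero) eigenvalues of this matrix typically
  # span many orders of magnitude.
  eigvals = np.linalg.eigvalsh(hessian_approx)
  eigvals = eigvals[eigvals > 0]
  span_in_decades = np.log10(eigvals.max() / eigvals.min())
  print(f"eigenvalue spectrum spans ~{span_in_decades:.1f} orders of magnitude")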

4 Study design and evidence level

4.1 General aspects

  • The performance metric is the average number of converged starts per minute (a minimal sketch of this computation is given after this list).
  • A fit is assumed to have reached the best-fit minimum if its final objective function value differs from the overall best value by less than 0.1. As discussed in the supplement of the original paper, the results remain qualitatively the same if a different threshold is used.
  • A family of benchmark models is used to test the different approaches.
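
A minimal sketch of how this metric can be computed, assuming the final objective values and the total wall-clock time of a multi-start run are available; the numbers below are made-up placeholders:

  import numpy as np

  def converged_starts_per_minute(final_objectives, total_time_seconds, threshold=0.1):
      """Average number of converged starts per minute of optimization time.

      A start counts as converged if its final objective value lies within
      `threshold` of the best value found across all starts (0.1 in the paper).
      """
      final_objectives = np.asarray(final_objectives, dtype=float)
      best = np.nanmin(final_objectives)
      n_converged = int(np.sum(final_objectives <= best + threshold))
      return n_converged / (total_time_seconds / 60.0)

  # Made-up example: 10 multi-start results and a total run time of 5 minutes.
  objectives = [12.3, 12.35, 15.1, 12.31, 20.4, 12.39, 13.0, 12.33, 12.41, 12.3]
  print(converged_starts_per_minute(objectives, total_time_seconds=300.0))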

4.2 Design for Outcome O1

  • Initial parameter values are drawn either in linear or in log-transformed space, and the subsequent optimization is likewise performed either in linear or in log-transformed parameter space; a minimal sketch of this design is given below.
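
A minimal sketch of the four sampling/optimization combinations on a generic least-squares objective over positive parameters; the toy model, the bounds and the sampling range are made-up stand-ins, since the paper performs this comparison with the benchmark ODE models:

  import numpy as np
  from scipy.optimize import minimize

  rng = np.random.default_rng(1)

  # Toy least-squares objective over two positive parameters
  # (a stand-in for an ODE-model fit; the data are synthetic).
  data_t = np.linspace(0.0, 10.0, 25)
  data_y = 2.0 * np.exp(-0.3 * data_t) + rng.normal(scale=0.05, size=data_t.size)

  def objective_lin(p):
      amplitude, rate = p
      return np.sum((amplitude * np.exp(-rate * data_t) - data_y) ** 2)

  def objective_log(theta):
      # Optimization in log10-transformed parameter space: p = 10**theta.
      return objective_lin(10.0 ** theta)

  lower, upper = 1e-3, 1e3  # assumed parameter range

  def draw_start(in_log_space):
      if in_log_space:
          return 10.0 ** rng.uniform(np.log10(lower), np.log10(upper), size=2)
      return rng.uniform(lower, upper, size=2)

  # One start for each of the four sampling/optimization combinations.
  for sample_in_log in (False, True):
      for optimize_in_log in (False, True):
          p0 = draw_start(sample_in_log)
          if optimize_in_log:
              res = minimize(objective_log, np.log10(p0),
                             bounds=[(np.log10(lower), np.log10(upper))] * 2)
              p_hat = 10.0 ** res.x
          else:
              res = minimize(objective_lin, p0, bounds=[(lower, upper)] * 2)
              p_hat = res.x
          print(f"sample_in_log={sample_in_log}, optimize_in_log={optimize_in_log}, "
                f"objective={res.fun:.3f}, p_hat={np.round(p_hat, 3)}")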

4.3 Design for Outcome O2

  • 1000 multi-start local optimization runs are performed for each of the two optimizers.
  • The optimization settings are reported: a user-defined gradient and a Gauss-Newton approximation of the Hessian are supplied (a rough SciPy analogue of these settings is sketched after this list).
  1. The tolerance on first-order optimality was set to 0.
  2. The termination tolerance on the parameters was set to 10^-6.
  3. The conjugate-gradient method (cg) was always chosen as the subproblem algorithm.
  4. The maximum number of iterations was set to 10 000.
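
The settings listed above read like options of a MATLAB-style optimizer. As a rough, non-authoritative analogue, the following sketch maps them onto SciPy's least_squares with method='trf', which implements a trust-region-reflective algorithm; the toy residual function, Jacobian and data are made-up placeholders and not the benchmark models themselves:

  import numpy as np
  from scipy.optimize import least_squares

  # Made-up data for a single-exponential toy model (placeholder for an ODE model).
  t = np.linspace(0.0, 10.0, 20)
  y = 1.5 * np.exp(-0.4 * t)

  def residuals(p):
      amplitude, rate = p
      return amplitude * np.exp(-rate * t) - y

  def jacobian(p):
      # Analytic residual Jacobian: the analogue of supplying a user-defined
      # gradient and Gauss-Newton Hessian approximation to the optimizer.
      amplitude, rate = p
      return np.column_stack([np.exp(-rate * t), -amplitude * t * np.exp(-rate * t)])

  result = least_squares(
      residuals,
      np.array([1.0, 1.0]),
      jac=jacobian,
      method="trf",       # trust-region-reflective
      gtol=1e-14,         # effectively disables the first-order optimality check
                          # (the paper sets this tolerance to 0)
      xtol=1e-6,          # termination tolerance on the parameters
      tr_solver="lsmr",   # iterative subproblem solver, loosely analogous to 'cg'
      max_nfev=10_000,    # evaluation cap comparable to the 10 000-iteration limit
  )
  print(result.x, result.cost)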

5 Further comments and aspects
