Hierarchical optimization for the efficient parametrization of ODE models


1 General Information

C Loos, S Krause, J Hasenauer (2018): Hierarchical optimization for the efficient parametrization of ODE models. Bioinformatics, Volume 34, Issue 24, Pages 4266–4273.

https://doi.org/10.1093/bioinformatics/bty514

1.1 Summary

In ODE-based modeling in systems biology, often only relative data is available and the measurement errors are unknown. A common approach to deal with this setting is the introduction of scaling and noise parameters, see Lessons Learned from Quantitative Dynamical Modeling in Systems Biology. Since introducing additional parameters can decrease the performance of the parameter optimization, this paper introduced a hierarchical approach that, in every step of the optimization, separates the fitting of these nuisance parameters from that of the dynamic parameters (see the sketch below). The hierarchical approach was compared to the standard approach of fitting all parameters simultaneously in terms of optimizer convergence and computational efficiency for three systems biology models.
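For Gaussian noise, the inner step of the hierarchical approach has a closed-form solution: given the simulated observables, the optimal scaling factor and noise variance can be computed analytically, so that only the dynamic parameters are passed to the numerical optimizer. The following is a minimal Python sketch of this idea; the function names and the simulate routine are illustrative placeholders and not part of PESTO or the original MATLAB implementation.

  import numpy as np

  def inner_analytic_gaussian(y, h):
      # Closed-form inner step for y_i ~ N(s * h_i, sigma^2): optimal scaling
      # factor and noise variance given the simulated observables h.
      s = np.sum(y * h) / np.sum(h ** 2)
      sigma2 = np.mean((y - s * h) ** 2)
      return s, sigma2

  def negative_log_likelihood(theta, y, t, simulate):
      # Outer objective over the dynamic parameters theta only; the nuisance
      # parameters (s, sigma2) are eliminated analytically in the inner step.
      h = simulate(theta, t)              # hypothetical ODE simulation routine
      s, sigma2 = inner_analytic_gaussian(y, h)
      residuals = y - s * h
      return 0.5 * np.sum(residuals ** 2 / sigma2 + np.log(2.0 * np.pi * sigma2))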

1.2 Study outcomes

The outcomes of the study were generated with the MATLAB toolbox PESTO. Within this framework, multi-start local optimization based on MATLAB's fmincon.m was employed.
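The multi-start scheme itself can be reproduced outside of MATLAB. The following minimal Python sketch uses scipy.optimize.minimize as a stand-in for fmincon.m and PESTO, sampling start points uniformly within box constraints and sorting the fits by final objective value (as in a waterfall plot); it is an illustrative analogue, not the original implementation.

  import numpy as np
  from scipy.optimize import minimize

  def multistart_local_optimization(objective, lb, ub, n_starts=100, seed=0):
      # Draw uniformly distributed start points within the box [lb, ub] and
      # run a gradient-based local optimizer from each of them.
      rng = np.random.default_rng(seed)
      lb, ub = np.asarray(lb, float), np.asarray(ub, float)
      fits = []
      for _ in range(n_starts):
          x0 = lb + rng.random(lb.size) * (ub - lb)
          res = minimize(objective, x0, method="L-BFGS-B",
                         bounds=list(zip(lb, ub)))
          fits.append(res)
      # Sort by final objective value, best fit first (waterfall-plot order).
      return sorted(fits, key=lambda r: r.fun)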

1.2.1 Best Fit

Standard and hierarchical multi-start optimization find the same globally optimal objective function value for both proposed error models (Gaussian and Laplace noise), with the exception of one model with Laplace noise, for which the two approaches did not produce the same model trajectories.

1.2.2 Convergence of optimizer

Hierarchical optimization increased the fraction of converged local optimization runs from 18.4% to 29.3%, as presented in Fig. 4C of the original publication.

1.2.3 Computational Efficiency

For a fixed computational budget, the reduction in computation time per converged fit was shown to yield 5.06 times more optimization runs reaching the best objective function value, as visualized in Fig. 4D-E of the original publication.
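To illustrate how such a per-budget comparison can be quantified, the sketch below counts the runs whose final objective value reaches the best value found and normalizes by the total CPU time spent. The inputs, the tolerance, and any resulting numbers are hypothetical and are not the values reported in the paper.

  import numpy as np

  def runs_to_best_per_cpu_hour(final_fvals, cpu_times_s, tol=1e-3):
      # Count the runs whose final objective value lies within tol of the best
      # value found and normalize by the total CPU time spent, in hours.
      final_fvals = np.asarray(final_fvals, float)
      cpu_times_s = np.asarray(cpu_times_s, float)
      n_best = np.sum(final_fvals <= final_fvals.min() + tol)
      return n_best / (cpu_times_s.sum() / 3600.0)

  # Hypothetical comparison: a ratio above 1 means the hierarchical approach
  # delivers more runs reaching the best value within the same budget.
  # ratio = runs_to_best_per_cpu_hour(f_hier, t_hier) / runs_to_best_per_cpu_hour(f_std, t_std)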

1.2.4 Further outcomes

If intended, you can add further outcomes here

1.3 Study design and evidence level

1.3.1 General aspects

You can describe general design aspects here. The study designs for describing specific outcomes are listed in the following subsections:

1.3.2 Design for Outcome O1

  • The outcome was generated for ...
  • Configuration parameters were chosen ...
  • ...

1.3.3 Design for Outcome O2

  • The outcome was generated for ...
  • Configuration parameters were chosen ...
  • ...

...

1.3.4 Design for Outcome O

  • The outcome was generated for ...
  • Configuration parameters were chosen ...
  • ...

1.4 Further comments and aspects

1.5 References

The list of cited or related literature is placed here.