Lessons Learned from Quantitative Dynamical Modeling in Systems Biology

Raue A, Schilling M, Bachmann J, Matteson A, Schelker M, et al. (2013) Lessons Learned from Quantitative Dynamical Modeling in Systems Biology. PLOS ONE 8(9): e74335. https://doi.org/10.1371/journal.pone.0074335

Summary

This paper considers modelling intracellular interaction networks with ordinary differential equation (ODE) models. Several aspects of robust and efficient estimation of model parameters were investigated.
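
As a minimal illustration of this workflow, the sketch below simulates a hypothetical two-species reaction model with SciPy and estimates its parameters by deterministic least-squares fitting; the model, data and settings are illustrative assumptions, not taken from the paper.

    # Sketch only: fit a hypothetical two-species model dA/dt = -k1*A, dB/dt = k1*A - k2*B
    # to noisy synthetic observations of B.
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    t_obs = np.linspace(0, 10, 20)

    def simulate(params, t):
        k1, k2 = params
        sol = solve_ivp(lambda _, x: [-k1 * x[0], k1 * x[0] - k2 * x[1]],
                        (t[0], t[-1]), [1.0, 0.0], t_eval=t, rtol=1e-8, atol=1e-10)
        return sol.y[1]  # observable: species B

    rng = np.random.default_rng(0)
    y_obs = simulate([0.7, 0.3], t_obs) + rng.normal(0, 0.02, t_obs.size)

    # deterministic least-squares fit starting from a rough initial guess
    fit = least_squares(lambda p: simulate(p, t_obs) - y_obs, [1.0, 1.0])
    print(fit.x)  # estimated (k1, k2)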

Study outcomes

In this paper, the following approaches were compared:

  • Outcome O1: A reduction in computation time was shown when ODE models are fitted using a parallel implementation
  • Outcome O2: The bias of parameter estimation was smaller when error parameters were estimated simultaneously with the model parameters, instead of estimating measurement errors in a preprocessing step by averaging over replicates.
  • Outcome O3: Stochastic optimization algorithms exhibited weak performance compared to deterministic optimization methods
  • Outcome O4: Derivatives calculated via sensitivity equations were superior to derivatives calculated via finite differences
  • Outcome O5: Reparametrization of the model equations improved the performance for one model
  • Outcome O6: A hybrid optimization method combining deterministic and stochastic optimization exhibited intermediate performance (better than pure stochastic, worse than pure deterministic) but required the largest number of function evaluations

The paper discusses further aspects which are outside the benchmarking scope.

Application settings

Three models were investigated:

  1. A toy model was used to obtain study outcome O2
  2. The so-called Becker model (Becker et al. 2010, see Further References) with 16 parameters and 85 experimental data points was used to derive study outcomes O3, O4 and O5.
  3. The so-called Bachmann model (Bachmann et al. 2011, see Further References) with 115 parameters and 541 experimental data points was used to derive study outcomes O1, O3, O4 and O5.

Study design and evidence level

The following aspects should be considered when the level of evidence is assessed.

Outcome O1

  • The outcome O1 was observed for four different parallelization levels (1 vs. 2 cores, 1 vs. 4, 1 vs. 8, and 1 vs. 16)
  • The outcome was generated for a single model (Becker model)
  • The outcome was generated for 1000 randomly drawn parameter settings (a parallel-evaluation sketch follows this list)
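
The parallelization idea behind outcome O1 can be sketched as follows; the model, cost function and core count below are hypothetical placeholders and do not reproduce the paper's implementation.

    # Sketch only: serial vs. parallel evaluation of an ODE-based cost for 1000 random draws.
    import time
    import numpy as np
    from multiprocessing import Pool
    from scipy.integrate import solve_ivp

    def cost(params):
        k1, k2 = params
        sol = solve_ivp(lambda t, x: [-k1 * x[0], k1 * x[0] - k2 * x[1]],
                        (0.0, 10.0), [1.0, 0.0], rtol=1e-8)
        return float(np.sum(sol.y[1] ** 2))  # placeholder cost, not a real likelihood

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        draws = rng.uniform(0.1, 2.0, size=(1000, 2))  # 1000 random parameter settings
        start = time.perf_counter()
        with Pool(processes=4) as pool:                # compare against processes=1
            costs = pool.map(cost, draws)
        print(f"{len(costs)} evaluations in {time.perf_counter() - start:.2f} s")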

Outcome O2

  • A toy model was used to obtain this outcome (the simultaneous estimation of error parameters is sketched after this list)
  • 200 parameter estimation runs for 100 different simulated data sets were evaluated
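
The idea behind outcome O2 can be sketched with a hypothetical one-parameter decay model (not the toy model of the paper): the error parameter sigma is estimated together with the dynamic parameter by minimizing a Gaussian negative log-likelihood, instead of being fixed beforehand from replicate averages.

    # Sketch only: joint estimation of a dynamic parameter k and the error parameter sigma.
    import numpy as np
    from scipy.optimize import minimize

    t = np.linspace(0, 5, 10)
    rng = np.random.default_rng(2)
    data = np.exp(-0.8 * t) + rng.normal(0, 0.1, t.size)  # one noisy replicate

    def neg_log_likelihood(theta):
        k, log_sigma = theta
        sigma = np.exp(log_sigma)                  # keeps sigma positive
        residuals = data - np.exp(-k * t)
        # Gaussian negative log-likelihood (constant terms dropped)
        return np.sum(0.5 * (residuals / sigma) ** 2 + np.log(sigma))

    fit = minimize(neg_log_likelihood, x0=[1.0, np.log(0.5)], method="Nelder-Mead")
    print(fit.x[0], np.exp(fit.x[1]))              # estimated k and sigma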

Outcome O3

  • Two application models (Becker and Bachmann) were used for this outcome
  • Untuned, standard configuration parameters were used for stochastic optimization
  • 100 optimization runs with different randomly drawn initial guesses were evaluated
  • Computational speed was evaluated in terms of the number of function evaluations (compare the sketch after this list)
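
A minimal sketch of such a comparison, using a hypothetical one-parameter model and SciPy's default settings rather than the algorithms benchmarked in the paper: a deterministic trust-region least-squares fit is compared with a stochastic optimizer, with the number of function evaluations as the speed measure.

    # Sketch only: deterministic vs. stochastic optimization on a toy decay model.
    import numpy as np
    from scipy.optimize import least_squares, differential_evolution

    t = np.linspace(0, 5, 10)
    rng = np.random.default_rng(3)
    data = np.exp(-0.8 * t) + rng.normal(0, 0.05, t.size)

    def residuals(p):
        return np.exp(-p[0] * t) - data

    def cost(p):
        return float(np.sum(residuals(p) ** 2))

    det = least_squares(residuals, [2.0])                               # deterministic
    sto = differential_evolution(cost, bounds=[(0.01, 10.0)], seed=0)   # stochastic

    print("deterministic:", det.x, "nfev =", det.nfev)
    print("stochastic:   ", sto.x, "nfev =", sto.nfev)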

Outcome O4

  • Two application models (Becker and Bachmann) were used for this outcome
  • The observed performance benefit could be explained by illustrating non-smooth results for finite differences when a parameter is varied, and by showing a dependency on the finite-difference step size. Neither issue occurred for the solution of the sensitivity equations (see the sketch after this list).
  • 100 optimization runs with different randomly drawn initial guesses were evaluated
  • Computational speed was evaluated in terms of the number of function evaluations
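
The mechanism can be sketched with a hypothetical single-parameter decay model (not from the paper): the derivative of the model output with respect to k is computed once by integrating the forward sensitivity equation ds/dt = -A - k*s together with the state, and once by finite differences with two different step sizes.

    # Sketch only: sensitivity-equation derivative vs. finite differences for dA/dt = -k*A.
    import numpy as np
    from scipy.integrate import solve_ivp

    def augmented_rhs(t, y, k):
        A, s = y                        # state A and its sensitivity s = dA/dk
        return [-k * A, -A - k * s]

    def output(k, t_eval):
        sol = solve_ivp(lambda t, y: [-k * y[0]], (0, 5), [1.0],
                        t_eval=t_eval, rtol=1e-8)
        return sol.y[0]

    t_eval = np.linspace(0, 5, 6)
    k = 0.8

    # derivative via the sensitivity equation (integrated together with the state)
    sens = solve_ivp(augmented_rhs, (0, 5), [1.0, 0.0], t_eval=t_eval, args=(k,), rtol=1e-8)
    dA_dk_sens = sens.y[1]

    # derivative via finite differences; the result depends on the step size h
    for h in (1e-3, 1e-7):
        dA_dk_fd = (output(k + h, t_eval) - output(k, t_eval)) / h
        print("h =", h, "max |FD - sensitivity| =", np.max(np.abs(dA_dk_fd - dA_dk_sens)))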

Outcome O5

  • Two application models (Becker and Bachmann) were used for this outcome. A performance benefit was only visible for the Bachmann model.
  • 100 optimization runs with different randomly drawn initial guesses were evaluated
  • Computational speed was evaluated in terms of the number of function evaluations (see the sketch after this list)
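
The paper's specific reparametrization is not reproduced here; as a generic illustration of letting the optimizer work in transformed coordinates, the sketch below fits a hypothetical model in log-parameter space, a common choice when parameter values span several orders of magnitude.

    # Sketch only: fitting in log-parameter space instead of the original coordinates.
    import numpy as np
    from scipy.optimize import least_squares

    t = np.linspace(0, 5, 10)
    rng = np.random.default_rng(4)
    data = 2.0 * np.exp(-0.01 * t) + rng.normal(0, 0.05, t.size)

    def residuals_log(theta_log):
        a, k = np.exp(theta_log)        # optimizer sees log(a), log(k)
        return a * np.exp(-k * t) - data

    fit = least_squares(residuals_log, np.log([1.0, 1.0]))
    print(np.exp(fit.x))                # back-transform to (a, k)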

Outcome O6

  • The hybrid algorithm was evaluated with default configuration parameters
  • 100 optimization runs with different randomly drawn initial guesses were evaluated
  • Computational speed was evaluated in terms of the number of function evaluations (see the sketch after this list)
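
The paper's hybrid algorithm is not reproduced here; as a stand-in, the sketch below uses SciPy's basinhopping, which combines stochastic perturbation steps with local deterministic minimization, on a hypothetical one-parameter model.

    # Sketch only: hybrid optimization = random perturbations + local deterministic fits.
    import numpy as np
    from scipy.optimize import basinhopping

    t = np.linspace(0, 5, 10)
    rng = np.random.default_rng(5)
    data = np.exp(-0.8 * t) + rng.normal(0, 0.05, t.size)

    n_eval = 0
    def cost(p):
        global n_eval
        n_eval += 1
        return float(np.sum((np.exp(-p[0] * t) - data) ** 2))

    # stochastic part: random steps in parameter space; deterministic part: local minimizer
    res = basinhopping(cost, [2.0], niter=50, stepsize=0.5, seed=0)
    print(res.x, "function evaluations:", n_eval)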

Further References

V. Becker, M. Schilling, J. Bachmann, U. Baumann, A. Raue, T. Maiwald, J. Timmer, U. Klingmueller. Covering a broad dynamic range: Information processing at the erythropoietin receptor. Science 328, 2010, 1404-1408

J. Bachmann, A. Raue, M. Schilling, M. Böhm, A.C. Pfeifer, C. Kreutz, D. Kaschek, H. Busch, N. Gretz, W.D. Lehmann, J. Timmer, U. Klingmueller. Division of labor by dual feedback regulators controls JAK2/STAT5 signaling over broad ligand range. Mol. Syst. Biol. 7, 2011, 516