A comparison of methods for quantifying prediction uncertainty in systems biology

=== Summary ===
Three methods for quantifying prediction uncertainty in ODE models are assessed. Here, prediction uncertainty does not refer to estimated parameters, but to the uncertainty of state trajectories. The three methods are: Fisher Information Matrix (FIM), Prediction Posterior (PP), and Ensemble Consensus (ENS).
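As a rough illustration of the idea behind the FIM-based approach (a sketch of linear error propagation, not the authors' implementation), the parameter covariance can be approximated by the inverse FIM and pushed through the model sensitivities to obtain a confidence band on the predicted trajectory. The one-parameter decay model below is hypothetical:

```python
import numpy as np

# Hypothetical toy model: x(t, k) = x0 * exp(-k * t), single parameter k.
# FIM-based (delta-method) prediction uncertainty: var(x) ~ S * inv(FIM) * S^T,
# where S = dx/dk is the sensitivity of the prediction w.r.t. the parameter.

x0 = 1.0
k_hat = 0.5                          # assumed estimated parameter
sigma = 0.05                         # assumed measurement noise std
t_data = np.linspace(0.0, 4.0, 9)    # design points entering the FIM

def x(t, k):
    return x0 * np.exp(-k * t)

def dx_dk(t, k):
    return -t * x0 * np.exp(-k * t)  # analytic sensitivity

# FIM for additive Gaussian noise: sum_i S_i^2 / sigma^2 (scalar here)
fim = np.sum(dx_dk(t_data, k_hat) ** 2) / sigma ** 2
var_k = 1.0 / fim                    # parameter variance ~ inv(FIM)

# Propagate to the predicted trajectory on a fine grid
t_pred = np.linspace(0.0, 4.0, 50)
pred = x(t_pred, k_hat)
pred_std = np.abs(dx_dk(t_pred, k_hat)) * np.sqrt(var_k)
lower, upper = pred - 1.96 * pred_std, pred + 1.96 * pred_std  # ~95% band
```

Note that the band is symmetric and local by construction, which is one reason FIM-based uncertainty can be too optimistic for nonlinear, partially observed models.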
  
 
=== Study outcomes ===
The paper's results concerning method comparison and benchmarking are:
 
 
==== Outcome O1 ====
For a small, fully observed ODE model (α-pinene, [https://www.tandfonline.com/doi/abs/10.1080/00401706.1973.10489009 Box et al.]), all three methods yield nearly identical results consistent with the known true trajectories.

For a larger, only partially observed ODE model (JAK2/STAT5, [https://doi.org/10.1038/msb.2011.50 Bachmann et al.]), PP and ENS yield better accuracy than FIM. However, even for PP and ENS, the confidence levels do not cover the truth.

Outcome O1 is presented as Figure X in the original publication.
 
  
 
==== Outcome O2 ====
The computational cost of the three methods differs, especially for large problems: FIM (low), ENS (intermediate), PP (high).

Outcome O2 is presented as Figure X in the original publication.
 
 
  
 
=== Study design and evidence level ===

==== General aspects ====
Synthetic data are generated for two exemplary ODE models given a true parameter set. One model is smaller (5 parameters, 5 states, 5 observables) and serves mainly as a sanity check; the other is larger (27 parameters, 25 states, 20 observables) and considered more realistic.

For FIM and ENS, the MATLAB version of the MEIGO toolbox ([https://doi.org/10.1186/1471-2105-15-136 Egea et al.]) was used for parameter estimation, whereas for PP the MATLAB parameter estimation toolbox PESTO ([https://doi.org/10.1093/bioinformatics/btx676 Stapor et al.]) was used.
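A setup of this kind — simulating a known "true" model and observing it with noise — can be sketched as follows. The two-state conversion ODE, parameter values, and noise level below are hypothetical stand-ins, not the paper's α-pinene or JAK2/STAT5 models:

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)

# Hypothetical toy ODE: linear two-state conversion A -> B
def rhs(t, state, k1, k2):
    a, b = state
    return [-k1 * a, k1 * a - k2 * b]

true_params = (0.8, 0.3)              # assumed "true" parameter set
t_obs = np.linspace(0.0, 10.0, 11)    # observation time points

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], t_eval=t_obs,
                args=true_params, rtol=1e-8)

# Observe the true trajectories with additive Gaussian noise
sigma = 0.02
data = sol.y + rng.normal(0.0, sigma, size=sol.y.shape)
```

Because the true trajectories are known by construction, any method's predicted trajectories and confidence bands can later be checked against them directly.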
  
 
==== Design for Outcome O1 ====
The sample correlation coefficient is used to quantify the agreement between predicted and true state trajectories, as well as between the corresponding trajectory errors. The methods are compared separately for each of the two ODE models.
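The agreement metric itself is straightforward; a minimal sketch with hypothetical arrays standing in for one predicted and one true state trajectory:

```python
import numpy as np

# Hypothetical trajectories on a shared time grid; in the study these would be
# a method's predicted states and the known true states.
t = np.linspace(0.0, 10.0, 101)
true_traj = np.exp(-0.5 * t)
pred_traj = np.exp(-0.48 * t) + 0.01   # a slightly biased prediction

# Sample (Pearson) correlation coefficient between prediction and truth
r = np.corrcoef(pred_traj, true_traj)[0, 1]
```

A value of r close to 1 indicates that the predicted trajectory tracks the shape of the true one, though it does not by itself penalize a constant bias.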
 
 
==== Design for Outcome O2 ====
Computation time is compared between the methods for the different models.
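A wall-clock comparison of this kind can be sketched with a simple timing helper; the three workloads below are hypothetical placeholders of increasing cost, not the actual FIM, ENS, and PP implementations:

```python
import time

def timed(fn, *args):
    """Return (result, elapsed seconds) for a single call."""
    t0 = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - t0

# Stand-in workloads (placeholders for the FIM, ENS, and PP pipelines)
def cheap():  return sum(range(10_000))
def medium(): return sum(range(100_000))
def costly(): return sum(range(1_000_000))

timings = {name: timed(fn)[1] for name, fn in
           [("FIM", cheap), ("ENS", medium), ("PP", costly)]}
```

In practice, robust timing comparisons repeat each run several times and report medians, since single wall-clock measurements are noisy.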
 
 
 
 
  
 
=== Further comments and aspects ===
An assessment of the Prediction Profile Likelihood ([https://doi.org/10.1186/1752-0509-6-120 Kreutz et al.], [https://doi.org/10.1093/bioinformatics/btv743 Hass et al.]) as a further prediction uncertainty quantification method is still missing and is planned to be presented "in the short term".
 

Latest revision as of 15:13, 25 February 2020

=== Citation ===
Villaverde, Alejandro F., et al. "A comparison of methods for quantifying prediction uncertainty in systems biology." IFAC-PapersOnLine 52.26 (2019): 45-51.

Permanent link to the paper
