Friday, April 4, 2014

Leave the gun, take the cannoli

In the movie The Godfather, Peter Clemenza says to Rocco Lampone, "Leave the gun, take the cannoli." In this post, I want to argue that modelers should leave the sensitivity analysis and related methods, such as bootstrapping and Bayesian approaches for quantifying the uncertainties of parameter estimates and model predictions [see the nice papers from Eydgahi et al. (2013) and Klinke (2009)], and take the non-obvious testable prediction.

Why should we prefer a non-obvious testable prediction? First, let me say that the methodology mentioned above is valuable; I have nothing against it. I simply want to argue that these analyses are no substitute for a good, non-obvious, testable prediction. Consider the Bayesian methods cited above. They allow a modeler to generate confidence bounds not only on parameter estimates but also on model predictions. That's great. However, these bounds do not guarantee the outcome of an experiment. They are premised on prior knowledge and on the available data, which may be incomplete and/or faulty. The same sort of limitation holds for the results of sensitivity analysis, bootstrapping, and so on.

I once saw a lecturer in the q-bio Summer School tell his audience that no manuscript about a modeling study should pass through peer review without including results from a sensitivity analysis. That seems like an extreme point of view to me, and one that risks elevating sensitivity analysis, in the minds of some, to something more than it is: something that validates a model. Models can never be validated; they can only be falsified. (After surviving many attempts to prove it wrong, a model may, however, come to be trusted.) The way to subject a model to falsification, and to make progress in science, is to use it to make an interesting and testable prediction.
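
To make concrete what these methods deliver, here is a minimal sketch in Python (a toy example of my own, not taken from the papers cited above; the exponential-decay model, the noise level, and the choice of t = 15 as the predicted condition are all assumptions for illustration): fit a simple model to noisy data, bootstrap the residuals, and report a confidence interval on an extrapolated prediction.

    # Minimal sketch (toy example, not from the cited papers): bootstrap
    # confidence bounds on a model prediction.
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(0)

    def model(t, a, k):
        """Toy model: exponential decay, y = a * exp(-k * t)."""
        return a * np.exp(-k * t)

    # Synthetic "experimental" data (stand-in for real measurements).
    t = np.linspace(0.0, 10.0, 25)
    y_obs = model(t, 2.0, 0.4) + rng.normal(scale=0.1, size=t.size)

    # Best-fit parameters and residuals.
    p_fit, _ = curve_fit(model, t, y_obs, p0=[1.0, 1.0])
    resid = y_obs - model(t, *p_fit)

    # Bootstrap: resample residuals, refit, and record the prediction at
    # t = 15, a time point outside the fitted range (an extrapolation).
    preds = []
    for _ in range(1000):
        y_boot = model(t, *p_fit) + rng.choice(resid, size=resid.size, replace=True)
        p_boot, _ = curve_fit(model, t, y_boot, p0=p_fit)
        preds.append(model(15.0, *p_boot))

    lo, hi = np.percentile(preds, [2.5, 97.5])
    print(f"best-fit prediction at t=15: {model(15.0, *p_fit):.3f}")
    print(f"95% bootstrap interval:      [{lo:.3f}, {hi:.3f}]")

The useful part is the last two lines of output: an interval around a prediction. Useful, but, as argued above, still no guarantee of what the experiment will actually show.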

3 comments:

  1. Hi Bill,

    I think I was that lecturer you mention. I don't disagree with you about the value of predictions or about the fact that sensitivity analysis doesn't validate a model. (In fact, I think systems biology has fetishized many forms of "robustness" with little evidence that they are general biological principles.)

    That said, I do think sensitivity analysis is essential for making "non-obvious testable predictions" that are interesting. Sensitivity analysis will tell you whether your prediction depends on fundamental biological assumptions in the model or whether it depends on (for example) an arbitrary choice to set kcat to 2.0 rather than 13.0. The first case is interesting; the second isn't.

  2. I agree that sensitivity analysis does seem to be a fetish for some. I also agree that predictions that are good candidates for testing will need to be robust to parameter uncertainties, given the usual uncertainties about mechanisms and the typical imprecision of available experimental methods. But I don't believe that sensitivity analysis, narrowly defined as one of the popular techniques, is an obligatory step in the process of finding such predictions, nor do I believe that it is the most important way to build confidence in a model. Before a model is used to make an interesting prediction, it should be checked against what's already known about system behavior. I'm wondering if there are any modeling studies of biological systems where sensitivity analysis had a clear and meaningful impact. I can't think of any at the moment. Maybe you know of an example?
    p.s. In case there's any doubt... :) Ryan, I would love to have you lecture in the q-bio Summer School again. Your lecture is always one of the best.

  3. In my opinion, the proper term should be “robustness analysis”. Modeling papers should include some analysis showing the level of confidence or robustness. Reviewers often ask for parameter sensitivity analysis, and it seems acceptable to include some calculation that says nothing about the model's confidence. A local sensitivity analysis, for example, is often useless. Likewise, a Bayesian analysis that shows parameter distributions is not very useful without a final table or graph showing confidence intervals for the model's predictions of particular properties.

    I believe a model can be useful for narrowing down possibilities and for justifying or invalidating hypotheses and speculations about data or results. In such cases, parameter sensitivity is not essential. Some models are useful because of their structure and the reasoning behind their formulation.

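A footnote to the exchange above: the kind of check Ryan describes takes only a few lines of code. Here is a minimal sketch (a toy model of my own; the Michaelis-Menten form, the parameter values, and the 5-fold threshold are assumptions for illustration, not taken from any study mentioned here) that sweeps kcat across and beyond the "arbitrary" range of 2.0 to 13.0 and asks whether a qualitative prediction survives.

    # Minimal sketch (toy example; parameter names and values are assumptions):
    # does a qualitative prediction survive "arbitrary" choices of kcat,
    # e.g. 2.0 versus 13.0?
    import numpy as np

    def flux(S, kcat, E=1.0, Km=5.0):
        """Michaelis-Menten flux: v = kcat * E * S / (Km + S)."""
        return kcat * E * S / (Km + S)

    # The (toy) prediction to be tested experimentally: raising substrate
    # from 1.0 to 50.0 increases the flux more than 5-fold.
    def prediction_holds(kcat):
        return flux(50.0, kcat) / flux(1.0, kcat) > 5.0

    # Sweep kcat across (and beyond) the range of "arbitrary" choices.
    for kcat in np.geomspace(0.5, 50.0, 9):
        print(f"kcat = {kcat:6.2f}  prediction holds: {prediction_holds(kcat)}")

In this toy case the fold-change prediction holds for every kcat, because kcat cancels in the ratio: the prediction rests on Km and on the saturating form of the rate law, which is the interesting kind of dependence. If the sweep had flipped the answer somewhere in the plausible range, that would be a warning that the "prediction" is really a statement about an arbitrary parameter choice.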