Tuesday, February 11, 2014

Dismantling the Rube Goldberg machine

What do this puppy food commercial, an indie music video, and systems biology have in common?

They've all used Rube Goldberg machines to great effect - devices that execute an elaborate series of steps to accomplish a goal that could have been reached through a (much) simpler process. As the ultimate elevation of means over ends, these machines have become a celebrated expression of ingenuity and humor. The presence of such machines in systems biology, however, is not as obvious, intentional, or entertaining.

So what are the Rube Goldberg machines of systems biology? In a small but noticeable fraction of studies, complex models are used to reach conclusions that could be obtained just by looking at a diagram or by giving some thought to the question at hand - assuming that a question is at hand. It seems as though these studies primarily use models to produce plots and equations that reinforce, or embellish, intuitive explanations. The true usefulness of models comes into play when we leave the territory of intuition and begin to wonder about factors that can't be resolved by thinking alone.

So when and why do we start thinking like Rube Goldberg engineers, and what impact does it have on the field? A few educated guesses:
  • Some models are built without a question in mind. The model's creators then search for a question to address, and end up with one that the model isn't well suited to answer. 
  • We're all specialists in something, and we don't always know about the tools and capabilities that others have developed. As a result, we sometimes try to solve a problem by reinventing the wheel, or by applying a tool that is a poor fit for the problem, which creates needless complications. 
  • To some audiences, the mere concept of running simulations seems impressive. As a result, modelers can be drawn into putting technical skills on display and cultivating a mystique around what they do, rather than applying their abilities to interesting questions.  
  • Obvious predictions may be easier to validate experimentally. 
I don't know whether these practices have had a wholly negative impact on modeling efforts in biology - they may even have helped in some respects. But it would not be a bad idea to focus on challenging questions for which simulations are actually needed, and to try to get the most out of the models we've taken the time and effort to build.

4 comments:

  1. William Barrett: “The absence of an intelligent idea in the grasp of a problem cannot be redeemed by the elaborateness of the machinery one subsequently employs.” (The Illusion of Technique)

  2. Your second bullet point is something I've thought about a bit before, and I'm very curious about how to combat it! It seems that techniques developed for a particular application would often be super useful for an application in a totally different field... but how are the people in the other field going to find out, if the technique is described in a seemingly unrelated journal? That medical researcher who rediscovered numerical integration (http://care.diabetesjournals.org/content/17/2/152.abstract) is a fairly extreme example of the reinventing of the wheel that can result, I think.

    Replies
    1. You have a really good point. I do think that's the hardest part to avoid - sometimes it's even difficult to figure out what words to look for when trying to learn if something's been done before. I guess that's one reason to be in communication with people in other fields, who might be more familiar with different techniques. (The paper you mentioned could probably have been avoided if the author had talked to someone who'd taken calculus a little more recently...)

      Although... it might not be a bad thing in all cases. Sometimes it's interesting to see how a problem can be solved through different approaches.

    2. The paper mentioned by Veronica is cited >200 times according to Google Scholar.

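For context on the paper discussed in the comments above: the "mathematical model" it proposes is, in essence, the standard trapezoidal rule for the area under a sampled curve. A minimal sketch in Python (the function name `trapezoid_auc` is made up for illustration):

```python
def trapezoid_auc(times, values):
    """Area under a sampled curve via the trapezoidal rule:
    sum the areas of the trapezoids between consecutive samples."""
    auc = 0.0
    for (t0, v0), (t1, v1) in zip(zip(times, values),
                                  zip(times[1:], values[1:])):
        auc += 0.5 * (v0 + v1) * (t1 - t0)
    return auc

# For the straight line y = t sampled at t = 0, 1, 2, 3,
# the trapezoidal rule is exact: area = 4.5
print(trapezoid_auc([0, 1, 2, 3], [0, 1, 2, 3]))
```

Standard numerical libraries ship this routine ready-made (e.g. NumPy's `trapezoid`), which is rather the point of the comment thread.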