Tuesday, July 29, 2014

Resource post: Tools for better colors

Looking through my list of bookmarks, it's apparent that something I like to read about (other than food) is design. The reason? I believe that well-made, attractive visuals are worth the effort because they can help make a point clearer and more memorable. In a sea of plots and diagrams (or in a room full of posters), you want people to notice, understand, and remember yours. One aspect of this process is finding the right color scheme to get your message across, which can involve several questions: How much (if any) color is necessary? What do I want these colors to convey? Will viewers be able to distinguish every color? When answering these questions, I've found a handful of links to be useful.

Getting inspired/informed: 
  • Dribbble: Although (or perhaps because) this design gallery isn't science-centric, browsing it can trigger new ideas for how to use color and other design elements. 
  • Color Universal Design: How to make figures that are colorblind-friendly, with examples. 
  • The subtleties of color: An introduction to using color in data visualization. 
Tools:  
  • Adobe Kuler: Generate color schemes based on color rules or images, browse schemes created by other users, or put together your own. This is nifty for seeing how different colors will look when they are side by side. 
  • Colorbrewer: A popular tool among scientists, in part because it offers different color schemes for different data types (sequential, diverging, and qualitative); see the sketch after this list for one way to use its palettes. 
  • Colorzilla: A browser plug-in that makes it easy to pick and analyze colors from webpages, for when you see a color that you really want to use. 
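Here's that sketch: matplotlib ships the ColorBrewer schemes as built-in named colormaps, so applying one takes just a few lines. This is a minimal illustration with made-up data; "Set2" is one of the qualitative schemes.

    # Minimal sketch: color a few lines with the ColorBrewer "Set2" scheme.
    # The curves are made up purely for illustration.
    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(0, 10, 200)
    for i in range(4):
        color = plt.cm.Set2(i / 4.0)  # sample the qualitative palette
        plt.plot(x, np.sin(x + i), color=color, label=f"series {i + 1}")
    plt.legend()
    plt.show()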
Do you have any favorite tools or approaches for using color? Or is it something that you'd rather not emphasize?

Sunday, July 20, 2014

Being (and keeping) a collaborator

Recently, a paper of ours was accepted for publication (stay tuned for more about that!). It grew out of a long, trans-Atlantic collaboration. It was the first collaboration that I was part of, and I was "spoiled" by the experience because of how productive and fun it was (and continues to be). I remember the first time that my side of the project yielded a useful clue. Much to my surprise and delight, our collaborators took that clue to their lab and followed up on it right away. 

Collaborations can be awesome. They're also becoming increasingly prevalent as connections grow between different fields. There are lots of potential benefits for everyone involved: you get to learn about techniques outside your own specialization, you can develop a unique new perspective, and you may find yourself having some friends to visit in faraway places. 
Good memories of great science-friends in Odense, Denmark.

Since then, however, I've noticed through observation and experience that not all collaborations reach their full potential. So I have been thinking about the qualities a good collaborator possesses, so that I know both what to look for and what I should try to be. 
  1. Finding a problem that you can tackle together. It goes without saying, but it's key to pick a problem that all participants care about and can actually work on. Bonus points if it's a problem that can only be addressed by combining the complementary skills of everyone involved. (Otherwise, are you collaborating just for show?)
  2. Reliability and communication. When you and your collaborator work in different offices (or countries), it can be easy to fall off each other's radar and let the project fizzle out. To avoid this outcome, demonstrate that you're serious about the project (even if you don't have spectacular results yet) and that you want to interact with them occasionally. 
  3. Openness to feedback. A big part of collaboration is giving each other feedback. When the person giving you feedback is not in your field, it may feel like they're impinging on your space. When this happens, pause for a minute - they might be giving you a fresh, valid perspective. Or, they might just need you to better clarify/justify what you're doing, which can be a preview of how an outside audience might respond. 
  4. Understanding capabilities and limitations. Everyone has some things (experiments, simulations, etc) that they can do routinely, other things that take more time/money/pain, and some things that would be desirable but are unfeasible. These things may be obvious to someone in your field, but you and your collaborator may need to discuss them to ensure that you both have a realistic picture of what the other can do. 
Have you been, or do you want to be, part of a collaboration? What did you get (or want to get) from the experience? 

Thursday, June 5, 2014

Pathetic thinking

Modelers with shared biological interests can have very different opinions about what a useful model looks like and about the purpose of modeling, or rather about where the opportunities lie to perform important work in a particular field.

In a recent commentary, Jeremy Gunawardena [BMC Biol 12: 29 (2014)] argues that models in biology are “accurate descriptions of our pathetic thinking.” He also offers three points of advice for modelers: 1) “ask a question,” 2) “keep it simple,” and 3) “If the model cannot be falsified, it is not telling you anything.” I wholeheartedly agree with these points, which are truisms among modelers. In my experience, however, some researchers follow the advice to an extreme. They interpret “ask a question” to mean that every model should be purpose-built to address a specific, narrow question, which ignores opportunities for model reuse. They interpret “keep it simple” to mean that models should be tractable within the framework of traditional approaches only, ignoring new approaches that ease the task of modeling and expand the scope of what’s feasible. Some extremists even seem to hold the view that the mechanistic details elucidated by biologists are too complex to consider and therefore largely irrelevant for modelers.

Gunawardena may have given these extremists encouragement with his comment, “Including all the biochemical details may reassure biologists but it is a poor way to model.” I acknowledge that simple, abstract models, which may focus on capturing certain limited influences among molecular entities and processes and/or certain limited phenomenology, have been useful and are likely to remain useful for a long time. However, there are many important questions that can feasibly be addressed but that depend on considering not “all” of the biochemical details, but rather far more of them than modelers usually consider today.

The messy details would also be important for the development of “standard models,” which do not currently exist in biology. Standard models in other fields, such as the Standard Model of particle physics, drive the activities of whole communities. They tend to be detailed because they consolidate understanding, and they are useful in large part because they identify the outstanding gaps in that understanding. Would standard models benefit biologists?

An affirmative answer is suggested by the many complicated cellular regulatory systems that have attracted enduring interest, such as the EGFR signaling network, which has been studied for decades for diverse reasons. A comprehensive, extensively tested, and largely validated model for one of these systems (that is, a standard model) would offer the benefits proven in non-biological fields and would aid modelers by providing a trusted, reusable starting point for asking not one question but many.

The extremists should take note of the saying attributed to Einstein, "Everything should be as simple as possible, but not simpler."

Gunawardena J (2014). Models in biology: 'accurate descriptions of our pathetic thinking'. BMC Biology, 12(1), 29. PMID: 24886484

Bachman J & Sorger P (2011). New approaches to modeling complex biochemistry. Nature Methods, 8(2), 130-131. DOI: 10.1038/nmeth0211-130

Chelliah V, Laibe C, & Le Novère N (2013). BioModels Database: a repository of mathematical models of biological processes. Methods in Molecular Biology, 1021, 189-199. PMID: 23715986

Monday, April 28, 2014

Trophy papers

Getting a paper into certain journals is good for one's career. These papers usually represent impressive and important work, but far more such manuscripts are produced than high-profile journals, such as Nature, can publish. It's probably not a bad thing to submit a manuscript to a high-profile journal if you think you have a chance there, but these attempts often generate considerable frustration, for reasons ranging from peculiar formatting requirements to rejection without peer review. Some researchers believe in a piece of work so much that they are not deterred by these frustrations and keep submitting to one high-profile journal after another. This enthusiasm is admirable, but if repeated attempts fail, the frustration can become rather acute because of the wasted effort. I wonder how others handle this sort of situation. Do you put more work into the project? Do you submit to an open-access journal? Do you move on to the next desirable target journal and take on the significant non-scientific work, such as figure layout and reference formatting, that a manuscript revision can entail? Do you wonder if the manuscript is fatally flawed because of the initial attempt to present the findings in a highly concise format? Please share your thoughts and experiences. Should we even be trying to do more than simply sharing our findings?

Sunday, April 13, 2014

Etymology. (Not to be confused with entomology.)

It's time I explained where the name of the blog, "q-bingo", comes from.

It started last year at the q-bio conference, which is a conference focused on quantum quixotic quantitative biology. Like all fields, quantitative biology involves a certain amount of jargon and buzzwords, and certain words crop up more often than they would in everyday conversation.

And where would you hear those words most often? Conferences, of course. In fact, you might start keeping track of how many times certain words come up, and wonder if anyone else is keeping track too...

And thus, q-bingo was born. Simply cover a square whenever you hear its word used in a talk, and when you fill a straight line, shout "q-bingo" right away. Yes, right there during the talk. [Disclaimer #1: I made this suggestion fully aware that my own talk would be punctuated by a few "bingo"s. Disclaimer #2: There are other examples of such games.] Conference organizers and attendees seemed to love the idea. Sadly, the game didn't quite get off the ground because of the logistics of printing 200+ cards for everyone at the conference. 
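(For what it's worth, generating the cards is the easy part; printing was the bottleneck. Here's a minimal sketch. A few of the words come from this post, and the rest of the list is invented filler:)

    # Minimal sketch: deal a random 5x5 q-bingo card.
    # Some words appear in this post; the rest are invented filler.
    import random

    WORDS = ["complexity", "network", "circuit", "incoherent",
             "single-cell", "heterogeneity", "robustness", "crosstalk",
             "feedback", "noise", "pathway", "emergent",
             "systems-level", "quantitative", "high-throughput", "multiscale",
             "dynamics", "stochastic", "modular", "signaling",
             "omics", "in silico", "data-driven", "bottom-up"]

    def deal_card(words=WORDS, size=5):
        cells = random.sample(words, size * size - 1)
        cells.insert(size * size // 2, "FREE")  # free center square
        for row in range(size):
            print(" | ".join(f"{w:>15}" for w in cells[row * size:(row + 1) * size]))

    deal_card()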

On the other hand... At an immunology meeting, I wouldn't necessarily find it noteworthy or funny that people use specialized words like "clonotype" and "Fab fragment". So why did these words jump out at me?
  • I think part of the reason is that some of these words are used to create a certain impression rather than to communicate information. For example, the word "complexity" is often used to throw a veil of sophistication over something, without explaining what makes the topic complex. Same with "network" and "circuit", to some extent. 
  • Other words, like "incoherent" (as in the incoherent feed-forward loop, which is a simple pattern of interactions/influences), can mean vastly different things to scientists in other fields and to the general public. 
  • A few words aren't actually objectionable or amusing - they capture ideas that people are excited about at a particular time. There were several talks about the importance of "single-cell" measurements because of cellular "heterogeneity". 

I want to hear your feedback. Are these just buzzwords, and should we try to use them less? Or are they signs of a young-ish field finding its own language? And of course... if you have ideas for q-bingo words, let me know in the comments because we might need them again this year. 

Friday, April 4, 2014

Leave the gun, take the cannoli

In the movie The Godfather, Peter Clemenza says to Rocco Lampone, "Leave the gun, take the cannoli." In this post, I want to argue that modelers should leave the sensitivity analysis and related methods, such as bootstrapping and Bayesian approaches for quantifying the uncertainties of parameter estimates and model predictions [see the nice papers from Eydgahi et al. (2013) and Klinke (2009)], and take the non-obvious testable prediction.

Why prefer a non-obvious testable prediction? First, let me say that the methodology mentioned above is valuable, and I have nothing against it. I simply want to argue that these analyses are no substitute for a good, non-obvious, testable prediction. Consider the Bayesian methods cited above, which allow a modeler to generate confidence bounds on not only parameter estimates but also model predictions. That's great. However, these bounds do not guarantee the outcome of an experiment: they are premised on prior knowledge and on the available data, which may be incomplete and/or faulty. The same limitation holds for the results of sensitivity analysis, bootstrapping, etc.

I once saw a lecturer at the q-bio Summer School tell his audience that no manuscript about a modeling study should pass through peer review without results from a sensitivity analysis. That seems like an extreme point of view to me, and one that risks elevating sensitivity analysis in some minds to something more than it is: something that validates a model. Models can never be validated. They can only be falsified. (After many failed attempts to prove a model wrong, it may, however, become trusted.) The way to subject a model to falsification (and to make progress in science) is to use it to make an interesting, testable prediction.
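For readers who haven't met these methods, here is a minimal sketch of what bootstrap uncertainty quantification looks like in practice, using an invented exponential-decay model and synthetic data (nothing below comes from the papers cited above):

    # Minimal sketch: bootstrap confidence bounds on a fitted rate constant.
    # The model (exponential decay) and the "data" are invented for illustration.
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(0)

    def model(t, k):
        return np.exp(-k * t)  # exponential decay with rate k

    # Synthetic noisy "measurements"
    t = np.linspace(0, 5, 20)
    y = model(t, 0.8) + rng.normal(0.0, 0.05, t.size)

    # Fit once, then refit to many resampled datasets (residual bootstrap)
    (k_hat,), _ = curve_fit(model, t, y, p0=[1.0])
    residuals = y - model(t, k_hat)
    k_boot = []
    for _ in range(1000):
        y_star = model(t, k_hat) + rng.choice(residuals, size=t.size, replace=True)
        (k_star,), _ = curve_fit(model, t, y_star, p0=[1.0])
        k_boot.append(k_star)

    lo, hi = np.percentile(k_boot, [2.5, 97.5])
    print(f"k = {k_hat:.3f}, 95% bootstrap interval = [{lo:.3f}, {hi:.3f}]")

Note that the interval quantifies uncertainty given the assumed model and noise; it says nothing about whether the model itself is right, which is exactly the point above.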

Sunday, March 30, 2014

What can modeling do for you?

In the blog so far, we've talked often about computational models - how they're made, what can go wrong, and what they could be like in the future. But what exactly are they - and why should anyone (especially biologists) care?

A model is a representation, or imitation, of a system that is difficult to examine directly. Biologists already use models all the time. For example, we'd like to understand biological processes in humans, but since most of us can't experiment on humans, model organisms and cell lines are used instead. "Model" also refers to a working hypothesis about how a system functions, which is often presented as a cartoon diagram.
A cartoon model for how the Shc1 adaptor protein acts at different stages of signaling.
Like a cartoon model, a computational model is created based on what a person knows, or on what they hypothesize. The difference is that instead of drawing a picture, which tends to be vague and qualitative, the modeler makes concrete, quantitative statements about how molecules interact. This information is then used to create a set of equations or a computer program, which is used to simulate the system's behavior. In modeling of chemical kinetics, the goal of simulation is often to see how certain outputs (like protein concentrations) change over time, or under different conditions; a minimal sketch of such a simulation appears below. So, like a model organism, a computational model can be used to make new discoveries with potential relevance to real-world questions.
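To make that concrete, here is the sketch, using an invented two-step reaction scheme, A -> B -> C, with mass-action kinetics (the scheme and rate constants are made up for illustration):

    # Minimal sketch: simulate mass-action kinetics for the invented scheme
    # A -> B -> C and watch the concentrations change over time.
    import numpy as np
    from scipy.integrate import solve_ivp
    import matplotlib.pyplot as plt

    k1, k2 = 0.5, 0.2  # assumed rate constants (per minute)

    def rhs(t, y):
        A, B, C = y
        return [-k1 * A,          # A is consumed
                k1 * A - k2 * B,  # B is produced from A, converted to C
                k2 * B]           # C accumulates

    sol = solve_ivp(rhs, (0.0, 30.0), [1.0, 0.0, 0.0], dense_output=True)
    t = np.linspace(0.0, 30.0, 200)
    A, B, C = sol.sol(t)

    for series, label in zip((A, B, C), "ABC"):
        plt.plot(t, series, label=label)
    plt.xlabel("time (min)")
    plt.ylabel("concentration (a.u.)")
    plt.legend()
    plt.show()

Changing an initial concentration or a rate constant and re-running the simulation is the computational analogue of perturbing the system experimentally.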

Here are some of the reasons why I think biologists can get excited about what models can offer:
  1. A roadmap for pursuing experiments. Why do we do experiments? Often, it's to test a hypothesis. The more complicated the hypothesis, the more experiments one could try, and the more involved each experiment might be. At the same time, even the most diligent of us want to optimize and do minimum work for maximum information. Models can potentially help identify which tests would be most meaningful for supporting or disproving a hypothesis. 
  2. A way to make sense of complicated or conflicting data. Sometimes it turns out that two seemingly contradictory ideas are actually compatible if you think about the quantitative details, which is exactly what models are good for. 
  3. Consolidating and testing knowledge about a system. A typical experimental study provides information about one or a few interactions. Models can help us put together multiple pieces of information, like assembling a jigsaw puzzle, to form a more complete picture. Furthermore, simulating such a model and comparing the results to experimental data can sometimes reveal interesting discrepancies. In other words, we can see whether we have enough puzzle pieces, or if we need to find more through additional experiments. 
At the same time, we need to remember that models won't magically provide the answers to everything. A model that simply recapitulates your expectations can be appealing; however, a model's real worth is in its ability to generate non-obvious, testable predictions. If you're going to start modeling or are thinking about starting a modeling collaboration, try to first learn about what models can and can't do.

So what does the word "model" mean to you? And what do you think models can be used for?