Friday, August 22, 2014

Elucidating missing links of the TCR signaling network

Just published:
Phosphorylation site dynamics of early T-cell receptor signaling. LA Chylek, V Akimov, J Dengjel, KTG Rigbolt, WS Hlavacek, B Blagoev. PLOS ONE 9: e104240

Stimulation of the T-cell receptor (TCR) can trigger a cascade of biochemical signaling events with far-reaching consequences for the T cell, including changes in gene regulation and remodeling of the actin cytoskeleton. A driving force in the initiation of signaling is phosphorylation and dephosphorylation of signaling proteins. This process has been difficult to characterize in detail because phosphorylation takes place rapidly, on the timescale of seconds, which can confound efforts to decode the order in which events occur. In addition, multiple residues in a protein may be phosphorylated, each involved in distinct regulatory mechanisms, necessitating analysis of individual sites.

To characterize the dynamics of site-specific phosphorylation in the first 60 seconds of TCR signaling, we stimulated cells for precise lengths of time using a quench-flow system and quantified changes in phosphorylation using mass spectrometry-based phosphoproteomics. We developed a computational model that reproduced experimental measurements and generated predictions that were validated experimentally. We found that the phosphatase SHP-1, previously characterized primarily as a negative regulator, plays a positive role in signal initiation by dephosphorylating negative regulatory sites in other proteins. We also found that the actin regulator WASP is rapidly activated via a shortcut pathway, distinct from the longer pathway previously considered to be the main route for WASP recruitment. Through iterative experimentation and model-based analysis, we have found that early signaling may be driven by transient mechanisms that are likely to be overlooked if only later timepoints are considered.
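To give a feel for the timescales involved, here is a toy calculation (a generic sketch, not the published model; the single-site scheme and rate constants are assumptions chosen for illustration). A phosphosite whose phosphorylated fraction x obeys dx/dt = kp(1 - x) - kd·x relaxes toward kp/(kp + kd) within seconds, which is why sampling only later timepoints can miss the transient approach entirely.

```python
# Illustrative only (not the published model): one phosphosite with
# first-order phosphorylation (kp) and dephosphorylation (kd):
#   dx/dt = kp*(1 - x) - kd*x
# integrated by forward Euler over the first 60 s of stimulation.
kp, kd = 0.2, 0.05          # assumed rate constants, 1/s
dt, t_end = 0.01, 60.0      # step size and duration, s
x = 0.0                     # phosphorylated fraction, unstimulated start
trace = []
for _ in range(int(t_end / dt)):
    x += dt * (kp * (1.0 - x) - kd * x)
    trace.append(x)

# Steady-state fraction is kp/(kp + kd) = 0.8; the relaxation time
# 1/(kp + kd) = 4 s means the transient is over within seconds.
print(round(trace[-1], 3))  # → 0.8
```

With these assumed rates the system is already at steady state well before 60 s, so a measurement at, say, 5 minutes would see only the plateau.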

Wednesday, August 13, 2014

Pre-game announcements!

Greetings, loyal readers! You might remember that a few months ago we showed you the q-bingo game card that we brought to the last q-bio conference, and asked for your ideas on what terms are popular (or perhaps overused) in systems biology so that we could use them in future games. I can now announce that we have used your ideas in the set of playing cards for this year's conference!

If you're at the conference and want to play, come to Poster Session 1 tomorrow (Thursday) and stop by poster #11 (hint: it's very violet) to pick up your card AND to learn about some exciting research that will be coming out in just a few days. See you there!

If you're not coming to the conference (or even if you are), you can still join the fun by following us on (our new) Twitter: @qbiology.

Thursday, August 7, 2014

Thanks, but no thanks

I am posting from the q-bio Summer School, where we are enjoying many discussions about modeling. Several lecturers have advised the junior modelers attending the school, who are mostly graduate students and postdocs, to find an experimental collaborator. I appreciate the advice and the benefits of having an experimental collaborator, but I am usually quite irked by the reasons stated for seeking out such collaborations. One reason I've heard many times is that modelers need an experimentalist to explain the biology to them and to help them read papers critically. It certainly could be useful to have a more experienced researcher aid in formulating a model, but that person might as well be a modeler familiar with the relevant biology. I don't subscribe to the idea that modelers need a collaborator to evaluate the soundness of a paper; to suggest so seems insulting to me. Modelers do need to consult experts from time to time, for example to understand the nuances of an unfamiliar experimental technique, but so do experimentalists.

I am probably more annoyed by the popular sentiment that a collaborator is essential for getting predictions tested. If I were an experimentalist, I might be insulted by this idea: it's unrealistic to think that experimentalists are lacking for ideas about which experiment to do next. If your prediction appeals only to your experimental collaborator, then maybe it's not such an interesting prediction? Modelers should be more willing to report their predictions and let the scientific community follow up however it may, partly because your collaborator is unlikely to be the most qualified experimentalist to test each and every prediction you will ever make.

I think the real reason to collaborate with an experimentalist is shared goals and interests and complementary expertise. Finding such a colleague is wonderful, but it shouldn't be forced, and the absence of a collaborator shouldn't be an impediment to progress. If you have a good prediction, you should report it, and if you want to model a system, you should pursue that. Eventually, you will know the system as well as the experimentalists studying it, if not better. After all, it's your role as a modeler to integrate data and insights, to elucidate the logical consequences of accepted understanding and plausible assumptions, and to suggest compelling experiments.

Finally, I want to speak to the notion that modelers should do their own experiments. I think that's a good idea if you want to be an experimentalist. If you want to be a modeler, be a modeler.

Tuesday, July 29, 2014

Resource post: Tools for better colors

Looking through my list of bookmarks, it’s apparent that something I like to read about (other than food) is design. The reason? I believe that well-made, attractive visuals are worth the effort because they can help make a point clearer and more memorable. In a sea of plots and diagrams (or in a room full of posters), you want people to notice, understand, and remember yours. One aspect of this process is finding the right color scheme to get your message across, which can involve several questions: How much (if any) color is necessary? What do I want these colors to convey? Will viewers be able to distinguish every color? When answering these questions, I've found a couple of links to be useful.

Getting inspired/informed: 
  • Dribbble: Although (or perhaps because) this design gallery isn't science-centric, browsing it can trigger new ideas for how to use color and other design elements. 
  • Color Universal Design: How to make figures that are colorblind-friendly, with examples. 
  • The subtleties of color: An introduction to using color in data visualization. 
Tools:  
  • Adobe Kuler: Generate color schemes based on color rules or images, browse schemes created by other users, or put together your own. This is nifty for seeing how different colors will look when they are side by side. 
  • Colorbrewer: A popular tool among scientists in part because it offers different color schemes for different data types. 
  • Colorzilla: A browser plug-in that makes it easy to pick and analyze color from webpages, for when you see a color that you really want to use. 
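As a small, automatable follow-up to the question "Will viewers be able to distinguish every color?": one rough proxy is the WCAG relative-luminance contrast ratio between two colors. The sketch below uses only the Python standard library; the helper names are mine, and contrast ratio is of course only one ingredient of distinguishability (hue differences matter too, especially for colorblind viewers).

```python
# Rough distinguishability check: WCAG contrast ratio between two hex colors.

def _linearize(c: float) -> float:
    """Linearize an sRGB channel value in [0, 1] (WCAG formula)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color: str) -> float:
    """Relative luminance of a color given as '#rrggbb'."""
    h = hex_color.lstrip('#')
    r, g, b = (int(h[i:i + 2], 16) / 255.0 for i in (0, 2, 4))
    return 0.2126 * _linearize(r) + 0.7152 * _linearize(g) + 0.0722 * _linearize(b)

def contrast_ratio(c1: str, c2: str) -> float:
    """WCAG contrast ratio: 1.0 (identical) up to 21.0 (black on white)."""
    l1, l2 = sorted((luminance(c1), luminance(c2)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast_ratio('#000000', '#ffffff'), 1))  # → 21.0
```

A quick loop over every pair in a candidate palette, flagging pairs whose ratio falls below some threshold, makes for a cheap sanity check before committing to a color scheme.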
Do you have any favorite tools or approaches for using color? Or is it something that you'd rather not emphasize?

Sunday, July 20, 2014

Being (and keeping) a collaborator

Recently, a paper of ours was accepted for publication (stay tuned for more about that!). It grew out of a long transatlantic collaboration. It was the first collaboration that I was part of, and I was "spoiled" by the experience because of how productive and fun it was (and continues to be). I remember the first time that my side of the project yielded a useful clue. Much to my surprise and delight, our collaborators took that clue to their lab and followed up on it right away.

Collaborations can be awesome. They're also becoming increasingly prevalent as connections grow between different fields. There are lots of potential benefits for everyone involved: you get to learn about techniques outside your own specialization, you can develop a unique new perspective, and you may find yourself with friends to visit in faraway places.
Good memories of great science-friends in Odense, Denmark.

However, I've noticed since then, through observation and experience, that not all collaborations reach their full potential. So I have been thinking about what makes a good collaborator, both to know what to look for and to know what I should try to be.
  1. Finding a problem that you can tackle together. It goes without saying, but it's key to pick a problem that all participants care about and can actually work on. Bonus points if it's a problem that can only be addressed by combining the complementary skills of everyone involved. (Otherwise, are you collaborating just for show?)
  2. Reliability and communication. When you and your collaborator work in different offices (or countries), it can be easy to fall off each other's radar and let the project fizzle out. To avoid this outcome, demonstrate that you're serious about the project (even if you don't have spectacular results yet) and check in with your collaborators regularly.
  3. Openness to feedback. A big part of collaboration is giving each other feedback. When the person giving you feedback is not in your field, it may feel like they're impinging on your space. When this happens, pause for a minute - they might be giving you a fresh, valid perspective. Or, they might just need you to better clarify/justify what you're doing, which can be a preview of how an outside audience might respond. 
  4. Understanding capabilities and limitations. Everyone has some things (experiments, simulations, etc) that they can do routinely, other things that take more time/money/pain, and some things that would be desirable but are unfeasible. These things may be obvious to someone in your field, but you and your collaborator may need to discuss them to ensure that you both have a realistic picture of what the other can do. 
Have you been, or do you want to be, part of a collaboration? What did you get (or want to get) from the experience? 

Thursday, June 5, 2014

Pathetic thinking

Modelers with shared biological interests can hold differing opinions about what a useful model looks like, about the purpose of modeling, and about the opportunities that exist to perform important work in a particular field.

In a recent commentary, Jeremy Gunawardena [BMC Biol 12: 29 (2014)] argues that models in biology are “accurate descriptions of our pathetic thinking.” He also offers three points of advice for modelers: 1) “ask a question,” 2) “keep it simple,” and 3) “If the model cannot be falsified, it is not telling you anything.” I wholeheartedly agree with these points, which are truisms among modelers. In my experience, however, some researchers follow the advice to an extreme. They interpret “ask a question” to mean that every model should be purpose-built to address a specific, narrow question, which ignores opportunities for model reuse. They interpret “keep it simple” to mean that models should be tractable only within the framework of traditional approaches, which ignores new approaches that ease the task of modeling and expand the scope of what’s feasible. Some extremists even seem to hold the view that the mechanistic details elucidated by biologists are too complex to consider and therefore largely irrelevant for modelers.

Gunawardena may have given these extremists encouragement with his comment, “Including all the biochemical details may reassure biologists but it is a poor way to model.” I acknowledge that simple, abstract models, which may focus on capturing certain limited influences among molecular entities and processes and/or certain limited phenomenology, have been useful, and are likely to continue to be useful for a long time. However, there are certainly many important questions that can feasibly be addressed and that depend on considering not “all” of the biochemical details, but more, or even far more, of them than modelers usually consider today.

The messy details would also be important for the development of “standard models,” which do not currently exist in biology. Standard models in other fields, such as the Standard Model of particle physics, drive the activities of whole communities and tend to be detailed, because they consolidate understanding and are useful in large part because they identify the outstanding gaps in understanding. Would standard models benefit biologists?

An affirmative answer is suggested by the fact that there are many complicated cellular regulatory systems that have attracted enduring interest, such as the EGFR signaling network, which has been studied for decades for diverse reasons. A comprehensive, extensively tested, and largely validated model for one of these systems (in other words, a standard model) would offer the benefits proven in non-biological fields and would aid modelers by providing a trusted, reusable starting point for asking not one question but many.

The extremists should take note of the saying attributed to Einstein, "Everything should be as simple as possible, but not simpler."

Gunawardena J (2014). Models in biology: “accurate descriptions of our pathetic thinking”. BMC Biology 12: 29. PMID: 24886484

Bachman J & Sorger P (2011). New approaches to modeling complex biochemistry. Nature Methods 8: 130-131. DOI: 10.1038/nmeth0211-130

Chelliah V, Laibe C & Le Novère N (2013). BioModels Database: a repository of mathematical models of biological processes. Methods in Molecular Biology 1021: 189-199. PMID: 23715986

Monday, April 28, 2014

Trophy papers

Getting a paper into certain journals is good for one's career. These papers usually represent impressive and important work, but it seems that many more such manuscripts are produced than can be published in high-profile journals, such as Nature. It's probably not a bad thing to submit a manuscript to a high-profile journal if you think you have a chance there, but these attempts often generate considerable frustration, for reasons ranging from peculiar formatting requirements to rejection without peer review. Some researchers believe in a piece of work so much that they are not deterred by these frustrations and keep submitting to one high-profile journal after another. This enthusiasm is admirable, but if repeated attempts fail, the frustration can become rather high because of the wasted effort.

I wonder how others handle this sort of situation. Do you put more work into the project? Do you submit to an open-access journal? Do you move on to the next desirable target journal and take on the significant non-scientific work, such as figure layout and reference formatting, that a manuscript revision can sometimes entail? Do you wonder if the manuscript is fatally flawed because of the initial attempt to present the findings in a highly concise format? Please share your thoughts and experiences. Should we even be trying to do more than simply sharing our findings?