Friday, August 22, 2014

Elucidating missing links of the TCR signaling network

Just published:
Phosphorylation site dynamics of early T-cell receptor signaling. LA Chylek, V Akimov, J Dengjel, KTG Rigbolt, WS Hlavacek, B Blagoev. PLOS ONE 9, e104240

Stimulation of the T-cell receptor (TCR) can trigger a cascade of biochemical signaling events with far-reaching consequences for the T cell, including changes in gene regulation and remodeling of the actin cytoskeleton. A driving force in the initiation of signaling is phosphorylation and dephosphorylation of signaling proteins. This process has been difficult to characterize in detail because phosphorylation takes place rapidly, on the timescale of seconds, which can confound efforts to decode the order in which events occur. In addition, multiple residues in a protein may be phosphorylated, each involved in distinct regulatory mechanisms, necessitating analysis of individual sites.

To characterize the dynamics of site-specific phosphorylation in the first 60 seconds of TCR signaling, we stimulated cells for precise lengths of time using a quench-flow system and quantified changes in phosphorylation using mass spectrometry-based phosphoproteomics. We developed a computational model that reproduced experimental measurements and generated predictions that were validated experimentally. We found that the phosphatase SHP-1, previously characterized primarily as a negative regulator, plays a positive role in signal initiation by dephosphorylating negative regulatory sites in other proteins. We also found that the actin regulator WASP is rapidly activated via a shortcut pathway, distinct from the longer pathway previously considered to be the main route for WASP recruitment. Through iterative experimentation and model-based analysis, we have found that early signaling may be driven by transient mechanisms that are likely to be overlooked if only later timepoints are considered.

Wednesday, August 13, 2014

Pre-game announcements!

Greetings, loyal readers! You might remember that a few months ago we showed you the q-bingo game card that we brought to the last q-bio conference, and asked for your ideas on what terms are popular (or perhaps overused) in systems biology so that we could use them in future games. I can now announce that we have used your ideas in the set of playing cards for this year's conference!

If you're at the conference and want to play, come to Poster Session 1 tomorrow (Thursday) and stop by poster #11 (hint: it's very violet) to pick up your card AND to learn about some exciting research that will be coming out in just a few days. See you there!

If you're not coming to the conference (or even if you are), you can still join the fun by following us on (our new) Twitter: @qbiology.

Thursday, August 7, 2014

Thanks, but no thanks

I am posting from the q-bio Summer School, where we are enjoying many discussions about modeling. Several lecturers have advised the junior modelers attending the school, who are mostly graduate students and postdocs, to find an experimental collaborator. I appreciate the advice and the benefits of having an experimental collaborator, but I am usually quite irked by the reasons stated for seeking out opportunities to collaborate with an experimentalist. One reason I've heard many times is that modelers need an experimentalist to explain the biology to them and to help them read papers critically. It certainly could be useful to have a more experienced researcher aid in formulating a model, but that person might as well be a modeler familiar with the relevant biology. I don't subscribe to the idea that modelers need a collaborator to evaluate the soundness of a paper. To suggest so seems insulting to me. Modelers do need to consult experts from time to time to understand the nuances of an unfamiliar experimental technique, for example, but so do experimentalists.

I am probably more annoyed by the popular sentiment that a collaborator is essential for getting predictions tested. If I were an experimentalist, I might be insulted by this idea. It's unrealistic to think that experimentalists are lacking for ideas about which experiment to do next. If your prediction is only appealing to your experimental collaborator, then maybe it's not such an interesting prediction? Modelers should be more willing to report their predictions and let the scientific community follow up however they may, partly because it's unlikely that your collaborator is going to be the most qualified experimentalist to test each and every prediction you will ever make.

I think the real reason to collaborate with an experimentalist is shared goals and interests and complementary expertise. Finding such a colleague is wonderful, but it shouldn't be forced, and the absence of a collaborator shouldn't be an impediment to progress. If you have a good prediction, you should report it, and if you want to model a system, you should pursue that. Eventually, you will know the system as well as the experimentalists studying it, if not better. After all, it's your role as a modeler to integrate data and insights, to elucidate the logical consequences of accepted understanding and plausible assumptions, and to suggest compelling experiments.

Finally, I want to speak to the notion that modelers should do their own experiments. I think that's a good idea if you want to be an experimentalist. If you want to be a modeler, be a modeler.

Tuesday, July 29, 2014

Resource post: Tools for better colors

Looking through my list of bookmarks, it’s apparent that something I like to read about (other than food) is design. The reason? I believe that well-made, attractive visuals are worth the effort because they can help make a point clearer and more memorable. In a sea of plots and diagrams (or in a room full of posters), you want people to notice, understand, and remember yours. One aspect of this process is finding the right color scheme to get your message across, which can involve several questions: How much (if any) color is necessary? What do I want these colors to convey? Will viewers be able to distinguish every color? When answering these questions, I've found the following links useful.

Getting inspired/informed: 
  • Dribbble: Although (or perhaps because) this design gallery isn't science-centric, browsing it can trigger new ideas for how to use color and other design elements. 
  • Color Universal Design: How to make figures that are colorblind-friendly, with examples. 
  • The subtleties of color: An introduction to using color in data visualization. 
Tools:  
  • Adobe Kuler: Generate color schemes based on color rules or images, browse schemes created by other users, or put together your own. This is nifty for seeing how different colors will look when they are side by side. 
  • ColorBrewer: A popular tool among scientists in part because it offers different color schemes for different data types. 
  • ColorZilla: A browser plug-in that makes it easy to pick and analyze colors from webpages, for when you see a color that you really want to use.
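If you work in Python, here is a minimal sketch of putting one of these palettes to use. It's just an illustration, assuming matplotlib (which bundles several ColorBrewer qualitative schemes, including "Set2"); the curves plotted are placeholders.

```python
# Sketch: use a colorblind-friendly, ColorBrewer-style qualitative palette.
# 'Set2' is one of the ColorBrewer schemes that ships with matplotlib.
import numpy as np
import matplotlib.pyplot as plt

colors = plt.get_cmap("Set2").colors      # tuple of RGB triples
x = np.linspace(0, 10, 200)
for i, c in enumerate(colors[:4]):        # a few placeholder curves
    plt.plot(x, np.sin(x + i), color=c, label=f"series {i + 1}")
plt.legend()
plt.savefig("palette_demo.png", dpi=150)  # or plt.show()
```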
Do you have any favorite tools or approaches for using color? Or is it something that you'd rather not emphasize?

Sunday, July 20, 2014

Being (and keeping) a collaborator

Recently, a paper of ours was accepted for publication (stay tuned for more about that!). It grew out of a long, trans-Atlantic collaboration. It was the first collaboration that I was part of, and I was "spoiled" by the experience because of how productive and fun it was (and continues to be). I remember the first time that my side of the project yielded a useful clue. Much to my surprise and delight, our collaborators took that clue to their lab and followed up on it right away.

Collaborations can be awesome. They're also becoming increasingly prevalent as connections grow between different fields. There are lots of potential benefits for everyone involved: you get to learn about techniques outside your own specialization, you can develop a unique new perspective, and you may find yourself having some friends to visit in faraway places.
Good memories of great science-friends in Odense, Denmark.

However, I've noticed since then, through observation and experience, that not all collaborations reach their full potential. So I have been thinking about what qualities a good collaborator possesses, so that I know what to look for and what I should try to be.
  1. Finding a problem that you can tackle together. It goes without saying, but it's key to pick a problem that all participants care about and can actually work on. Bonus points if it's a problem that can only be addressed by combining the complementary skills of everyone involved. (Otherwise, are you collaborating just for show?)
  2. Reliability and communication. When you and your collaborator work in different offices (or countries), it can be easy to fall off each other's radar and let the project fizzle out. To avoid this outcome, demonstrate that you're serious about the project (even if you don't have spectacular results yet) and that you want to interact with them occasionally. 
  3. Openness to feedback. A big part of collaboration is giving each other feedback. When the person giving you feedback is not in your field, it may feel like they're impinging on your space. When this happens, pause for a minute - they might be giving you a fresh, valid perspective. Or, they might just need you to better clarify/justify what you're doing, which can be a preview of how an outside audience might respond. 
  4. Understanding capabilities and limitations. Everyone has some things (experiments, simulations, etc) that they can do routinely, other things that take more time/money/pain, and some things that would be desirable but are unfeasible. These things may be obvious to someone in your field, but you and your collaborator may need to discuss them to ensure that you both have a realistic picture of what the other can do. 
Have you been, or do you want to be, part of a collaboration? What did you get (or want to get) from the experience? 

Thursday, June 5, 2014

Pathetic thinking

Modelers with shared biological interests can hold quite different opinions about what a useful model looks like and about the purpose of modeling, or rather about the opportunities that exist to perform important work in a particular field.

In a recent commentary, Jeremy Gunawardena [BMC Biol 12: 29 (2014)] argues that models in biology are “accurate descriptions of our pathetic thinking.” He also offers three points of advice for modelers: 1) “ask a question,” 2) “keep it simple,” and 3) “If the model cannot be falsified, it is not telling you anything.” I wholeheartedly agree with these points, which are truisms among modelers. In my experience, however, some researchers take the advice to an extreme. They interpret “ask a question” to mean that every model should be purpose-built to address a specific, narrow question, which ignores opportunities for model reuse. They interpret “keep it simple” to mean that models should be tractable within the framework of traditional approaches only, ignoring new approaches that ease the task of modeling and expand the scope of what’s feasible. Some extremists even seem to hold the view that the mechanistic details elucidated by biologists are too complex to consider and therefore largely irrelevant for modelers.

Gunawardena may have given these extremists encouragement with his comment, “Including all the biochemical details may reassure biologists but it is a poor way to model.” I acknowledge that simple, abstract models, which may capture only certain limited influences among molecular entities and processes and/or certain limited phenomenology, have been useful, and they are likely to continue to be useful for a long time. However, there are certainly many important questions that can feasibly be addressed and that depend on considering not “all” of the biochemical details, but more, or even far more, of them than modelers usually consider today.

The messy details would also be important for the development of “standard models,” which do not currently exist in biology. Standard models in other fields, such as the Standard Model of particle physics, drive the activities of whole communities. They tend to be detailed because they consolidate understanding, and they are useful in large part because they identify the outstanding gaps in that understanding. Would standard models benefit biologists?

An affirmative answer is suggested by the fact that many complicated cellular regulatory systems have attracted enduring interest; the EGFR signaling network, for example, has been studied for decades for diverse reasons. A comprehensive, extensively tested, and largely validated model for one of these systems, in other words a standard model, would offer the benefits that standard models have demonstrated in other fields, and it would aid modelers by providing a trusted, reusable starting point for asking not one question but many.

The extremists should take note of the saying attributed to Einstein, "Everything should be as simple as possible, but not simpler."

Gunawardena J (2014). Models in biology: 'accurate descriptions of our pathetic thinking'. BMC Biology 12: 29. PMID: 24886484

Bachman J & Sorger P (2011). New approaches to modeling complex biochemistry. Nature Methods 8(2): 130-131. DOI: 10.1038/nmeth0211-130

Chelliah V, Laibe C & Le Novère N (2013). BioModels Database: a repository of mathematical models of biological processes. Methods in Molecular Biology 1021: 189-199. PMID: 23715986

Monday, April 28, 2014

Trophy papers

Getting a paper into certain journals is good for one's career. These papers usually represent impressive and important work. It seems that far more such manuscripts are produced than can be published in high-profile journals, such as Nature. It's probably not a bad thing to submit a manuscript to a high-profile journal if you think you have a chance there, but these attempts often generate considerable frustration, for reasons ranging from peculiar formatting requirements to rejection without peer review. Some researchers believe in a piece of work so much that they are not deterred by these frustrations and keep submitting to one high-profile journal after another. This enthusiasm is admirable, but if repeated attempts fail, the frustration can become rather high because of the wasted effort. I wonder how others handle this sort of situation. Do you put more work into the project? Do you submit to an open-access journal? Do you move on to the next desirable target journal and take on the significant non-scientific work, such as figure layout and reference formatting, that a manuscript revision can sometimes entail? Do you wonder if the manuscript is fatally flawed because of the initial attempt to present the findings in a highly concise format? Please share your thoughts and experiences. Should we even be trying to do more than simply share our findings?

Sunday, April 13, 2014

Etymology. (Not to be confused with entomology.)

It's time I explained where the name of the blog, "q-bingo", comes from.

It started last year at the q-bio conference, which is a conference focused on quantum quixotic quantitative biology. Like all fields, quantitative biology involves a certain amount of jargon and buzzwords, and certain words crop up more often than they would in everyday conversation.

And where would you hear those words most often? Conferences, of course. In fact, you might start keeping track of how many times certain words come up, and wonder if anyone else is keeping track too...

And thus, q-bingo was born. Simply cover a square whenever you hear a word used in a talk, and when you fill a straight line, shout "q-bingo" straight away. Yes, right there during the talk. [Disclaimer #1: I made this suggestion fully aware that my own talk would be punctuated by a few "bingo"s. Disclaimer #2: There are other examples of such games.] Conference organizers and attendees seemed to love the idea. Sadly, the game didn't quite get off the ground due to the issue of having to print 200+ of these things for everyone at the conference.

On the other hand... At an immunology meeting, I wouldn't necessarily find it noteworthy or funny that people use specialized words like "clonotype" and "Fab fragment". So why did these words jump out at me?
  • I think part of the reason is that some of these words are used to create a certain impression rather than to communicate information. For example, the word "complexity" is often used to throw a veil of sophistication over something, without explaining what makes the topic complex. Same with "network" and "circuit", to some extent. 
  • Other words, like "incoherent" (as in incoherent feed-forward, which is a simple pattern of interactions/influences), can mean vastly different things to other scientists and to the general public. 
  • A few words aren't actually objectionable or amusing - they capture ideas that people are excited about at a particular time. There were several talks about the importance of "single-cell" measurements because of cellular "heterogeneity". 

I want to hear your feedback. Are these just buzzwords, and should we try to use them less? Or are they signs of a young-ish field finding its own language? And of course... if you have ideas for q-bingo words, let me know in the comments because we might need them again this year. 

Friday, April 4, 2014

Leave the gun, take the cannoli

In the movie The Godfather, Peter Clemenza says to Rocco Lampone, "Leave the gun, take the cannoli." In this post, I want to argue that modelers should leave the sensitivity analysis and related methods, such as bootstrapping and Bayesian approaches for quantifying uncertainties of parameter estimates and model predictions [see the nice papers from Eydgahi et al. (2013) and Klinke (2009)], and take the non-obvious testable prediction.

Why should we prefer a non-obvious testable prediction? First, let me say that the methodology mentioned above is valuable. I have nothing against it. I simply want to argue that these analyses are no substitute for a good, non-obvious, testable prediction. Consider the Bayesian methods cited above. These methods allow a modeler to generate confidence bounds not only on parameter estimates but also on model predictions. That's great. However, these bounds do not guarantee the outcome of an experiment. The bounds are premised on prior knowledge and the available data, which may be incomplete and/or faulty. The same sort of limitation holds for the results of sensitivity analysis, bootstrapping, etc.

I once saw a lecturer at the q-bio Summer School tell his audience that no manuscript about a modeling study should pass through peer review without results from a sensitivity analysis. That seems like an extreme point of view to me, and one that risks elevating sensitivity analysis in the minds of some to something more than it is, something that validates a model. Models can never be validated. They can only be falsified. (After many failed attempts to prove a model wrong, however, a model may become trusted.) The way to subject a model to falsification (and to make progress in science) is to use it to make an interesting and testable prediction.
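For readers who haven't run one of these analyses, here is a minimal sketch of what a local sensitivity analysis can look like in practice. The two-state model and the rate constants are made up for illustration; they are not taken from the papers cited above.

```python
# Toy local sensitivity analysis: how much does a model prediction change when
# each parameter is perturbed? (Hypothetical model and parameter values.)
import numpy as np
from scipy.integrate import odeint

def model(y, t, k_on, k_off):
    inactive, active = y            # simple reversible activation
    return [k_off * active - k_on * inactive,
            k_on * inactive - k_off * active]

def prediction(params, t_end=60.0):
    k_on, k_off = params            # "prediction" = fraction active at t_end
    traj = odeint(model, [1.0, 0.0], [0.0, t_end], args=(k_on, k_off))
    return traj[-1, 1]

nominal = np.array([0.1, 0.05])     # made-up rate constants (1/s)

# Normalized local sensitivities, d(ln output)/d(ln parameter), by finite differences
for i, name in enumerate(["k_on", "k_off"]):
    perturbed = nominal.copy()
    perturbed[i] *= 1.01            # 1% perturbation
    s = (np.log(prediction(perturbed)) - np.log(prediction(nominal))) / np.log(1.01)
    print(f"sensitivity of the prediction to {name}: {s:+.3f}")
```

The point of the post stands either way: numbers like these describe how the model responds to its own parameters; they do not tell you whether the next experiment will come out as predicted.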

Sunday, March 30, 2014

What can modeling do for you?

In the blog so far, we've talked often about computational models - how they're made, what can go wrong, and what they could be like in the future. But what exactly are they - and why should anyone (especially biologists) care?

A model is a representation, or imitation, of a system that is difficult to examine directly. Biologists already use models all the time. For example, we'd like to understand biological processes in humans, but since most of us can't experiment on humans, model organisms and cell lines are used instead. "Model" also refers to a working hypothesis about how a system functions, which is often presented as a cartoon diagram.
A cartoon model for how the Shc1 adaptor protein acts at different stages of signaling.
Like a cartoon model, a computational model is created based on what a person knows, or on what they hypothesize. The difference is that instead of drawing a picture, which tends to be vague and qualitative, the modeler makes concrete and quantitative statements about how molecules interact. This information is then used to create a set of equations or a computer program, which is used to simulate system behavior. In modeling of chemical kinetics, the goal of simulation is often to see how certain outputs (like protein concentrations) change over time, or under different conditions. So, like a model organism, a computational model can be used to make new discoveries with potential relevance to real-world questions.
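To make "a set of equations" a little less abstract, here is a deliberately tiny, hypothetical example: mass-action kinetics for a substrate being phosphorylated by a kinase, simulated over the first minute under two conditions. The species names and rate constants are placeholders, not values from any real system.

```python
# A tiny "computational model": substrate phosphorylation by a kinase,
# written as ordinary differential equations and simulated over time.
import numpy as np
from scipy.integrate import odeint

def dydt(y, t, kinase, k_cat, k_p):
    s, sp = y                              # unphosphorylated / phosphorylated substrate
    flux = k_cat * kinase * s - k_p * sp   # phosphorylation minus dephosphorylation
    return [-flux, flux]

t = np.linspace(0, 60, 121)                # the first 60 seconds
for kinase in (0.1, 1.0):                  # two conditions: low vs. high active kinase
    traj = odeint(dydt, [1.0, 0.0], t, args=(kinase, 0.5, 0.2))
    print(f"active kinase = {kinase}: phospho-substrate at 60 s = {traj[-1, 1]:.2f}")
```

Running such a simulation and comparing the output to a measured time course is the modeling analogue of running the experiment in a model organism.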

Here are some of the reasons why I think biologists can get excited about what models can offer:
  1. A roadmap for pursuing experiments. Why do we do experiments? Often, it's to test a hypothesis. The more complicated the hypothesis, the greater the number of experiments one could try, and the more involved each experiment might be. At the same time, even the most diligent of us want to optimize, and do minimum work for maximum information. Models can potentially help identify which tests would be most meaningful for supporting or disproving a hypothesis. 
  2. A way to make sense of complicated or conflicting data. Sometimes it turns out that two seemingly contradictory ideas are actually compatible if you think about the quantitative details, which is exactly what models are good for. 
  3. Consolidating and testing knowledge about a system. A typical experimental study provides information about one or a few interactions. Models can help us put together multiple pieces of information, like assembling a jigsaw puzzle, to form a more complete picture. Furthermore, by simulating such a model and comparing it to experimental data, interesting discrepancies can sometimes be identified. In other words, we can see whether we have enough puzzle pieces, or if we need to find more through additional experiments. 
At the same time, we need to remember that models won't magically provide the answers to everything. A model that simply recapitulates your expectations could be appealing; however, a model's real worth is in its ability to generate non-obvious, testable predictions. If you're going to start modeling or are thinking about starting a modeling collaboration, try to first learn about what models can and can't do.

So what does the word "model" mean to you? And what do you think models can be used for?

Thursday, March 20, 2014

Extreme writing

Out of the blue one day, Pieter Swart stopped by my office, and for some reason, the conversation turned to extreme programming, a practice that Pieter and his colleagues used in their development of NetworkX. One aspect of extreme programming is programming in pairs, or pair programming. Two programmers sit at one workstation. One, the driver, types. The other, the observer, reviews what is typed. Because of Pieter's enthusiasm, I tried it, but for writing, not programming. It turns out that pair writing works very well, at least for me with certain writing partners. If you've ever had writer's block, extreme writing will cure it. If you're the observer, you're off the hook - you just need to give your attention to what's being typed. If you're the driver, a pause will usually lead immediately to a discussion with the observer and a quick return to steady progress, or the observer will just deliver a coup de grace and take over the keyboard. Changing roles occurs frequently. If you haven't tried pair writing, give it a try. It helps to work with a large monitor in a comfortable but isolated and confined environment (to limit the possibilities of escape), where loud conversation will not disturb anyone.

Wednesday, March 19, 2014

How to make yourself understood across field boundaries

It seems like everyone these days is excited about "interdisciplinary science", which is much like regular science but with a longer list of affiliations. Working together means talking together, which includes making presentations that appeal to people in different fields. Is there anything to keep in mind beyond generic advice about giving a talk (develop an outline, make eye contact, don't mumble, etc)?

Here are two pitfalls that I've noticed when people speak for "interdisciplinary" groups:
  • Tunnel vision. A speaker ignores the diverse backgrounds of audience members and assumes they all share his or her knowledge and interests. As a result, the speaker doesn't provide enough basic information for audience members to understand the talk, or to appreciate why the talk matters. 
  • Self-effacement. A speaker goes too far in catering to an audience and loses their own point of view as a result. I once heard a talk from a bioinformatics researcher. They seemed to think their audience contained only chemists and, furthermore, that no one would want to learn anything about biology. As a result, the speaker tried to avoid touching on any biological details. The result? A deluge of vagueness and abstraction. 
What can we do to avoid these extremes so that speakers and audiences can meet in the middle? 
  1. Scope out the audience beforehand. Learn about your potential listeners. Think about what they're likely to know or not know, what they might feel strongly about, and why your talk is relevant to them. They might have more in common with you than you think. People are usually eager to latch onto something that connects to their interests, even tangentially. It's usually a good thing, but beware... 
  2. Stay in the driver's seat. A handful of times, "I know what the speaker is talking about" can devolve into "I need to prove that I know more about it than she does, especially since I consider myself more of a specialist". As a speaker, it's your job to address questions/comments thoughtfully. However, if someone tries to derail you - and you'll know it when you see it! - it's also your job to stay on track and remind them that you're the one giving a talk, which is different from a one-on-one meeting. 
  3. Make good use of pictures and examples, which can help make ideas more concrete. A well-made diagram will make descriptions/equations/algorithms more approachable, especially to someone new to the subject. 
  4. Appeal to shared problem-solving tendencies. If you're talking to scientists - or to humans, generally* - it's likely that even if you have different backgrounds, you share an instinct for solving problems. Try to give your audience the basic information needed to answer a question. Then give them a chance to work out an answer before showing them your results. No need to demand a verbal response (which can get awkward), but you want their brains to work while they listen. It'll keep their attention and make your research process more relatable. 
  (*If you're talking to non-humans, I would love to hear about that.)
  5. Have some faith in your listeners. I've come across many blanket statements, like that biologists always quail at the sight of equations, or that one must never utter gene names in the presence of a physicist. Although these statements may allude to general preferences, we need to remember that people aren't defined by what subject their degree is in and, related to point #4, people like to learn. Try to gently and non-patronizingly lead people out of their comfort zone. If you can show them that something they thought was incomprehensible is actually not so bad, you'll help them feel smarter rather than dumber, which is a big step towards bringing together people with diverse backgrounds. 
What are your thoughts? Have you had any interesting experiences when presenting your work to others?  

Friday, March 14, 2014

The alternate routes of allergic responses

As spring approaches for the northern hemisphere (or as a cat approaches from across the room), allergy sufferers might wonder about how their symptoms originate. In some ways, they're in luck. The molecules involved in allergic reactions have been studied for decades, and to some extent, we've developed a cohesive picture of how these molecules work together. However, the immune system is always full of surprises. Here are some newly-discovered and seemingly fundamental roles for proteins that we knew existed, but that rarely made appearances in review papers.

Background: The signals before the sneezes

In an article published in Science this February, Rivera and co-workers investigated how mast cells, which play a central role in allergic responses, can distinguish between different antigens. Antigens (also known as allergens) are molecules that bind to antibody-receptor complexes on the mast cell's surface. Antigen binding can initiate a process that leads to release of substances that induce inflammation and the symptoms of allergies. The two different antigens used in this study differ in their affinity, meaning how tightly they bind to receptors.

A typical view of signal initiation in mast cells via the high-affinity receptor for IgE, also known as FcεRI. The upper part of the image represents the space outside of the cell, and the bottom part represents the inside of the cell. Antibody-FcεRI (receptor) complexes are clustered by binding to an antigen. The kinase Lyn can then phosphorylate the receptor, meaning that phosphate groups are attached to multiple parts of the receptor. The phosphorylated receptor can bind another kinase, Syk, which goes on to phosphorylate multiple targets, including Lat.

Why does binding affinity matter? It has been proposed that antigens that bind more tightly, and stay in contact with receptors for a longer period of time, allow signaling to progress further and induce stronger cellular responses. One response that can be measured is overall receptor phosphorylation (see image), one of the earliest steps in signaling. The low-affinity antigen does indeed induce less receptor phosphorylation than an equal dose of the high-affinity antigen. However, if the amount of low-affinity antigen is 100x higher, total receptor phosphorylation is roughly equal. Which raises the question...

Are all responses affected in the same way?

The answer is no (which others have also found). One of the most important downstream players in this system is the adaptor protein Lat, which is phosphorylated to recruit an array of other signaling proteins. Lat undergoes less phosphorylation in response to the low-affinity antigen than the high-affinity one, even when receptor phosphorylation is equal. Surprisingly, the related but less well-studied protein Lat2 undergoes more phosphorylation in response to the low-affinity antigen. Lat2 phosphorylation depends, directly or indirectly, on a kinase called Fgr. Fgr's close relatives, Lyn and Fyn, are well-known for their roles in initiating mast cell signaling, but Fgr has largely gone under the radar. 

A possible clue about the origins of these differences is that even when total receptor phosphorylation (the total phosphorylation of multiple sites) is equalized, the low-affinity antigen causes more phosphorylation of at least one specific receptor site. So although total phosphorylation is the same, the contributions of individual sites may be different.

Finally, the authors considered how the low- and high-affinity antigens influence the messages that the mast cell sends to the rest of the immune system. The two antigens caused mast cells to release different types of signaling molecules (chemokines vs. cytokines), which induced different types of immune cells to arrive at the site of inflammation. So it seems that the Fgr/Lat2 pathway elucidated in this paper enables responses to low-affinity antigens, but these responses are qualitatively different from those induced by high-affinity antigens.

What we can learn:
  • The idea of higher affinity -> more signaling -> stronger responses can explain some aspects of signaling (see the sketch after this list), but it is too simplistic to explain how specific responses are enhanced for low-affinity antigens.
  • Lat2 and Fgr may play important roles that are distinct from their more famous protein relatives, Lat and Lyn.
  • Several blanks have yet to be filled. Does Fgr act on Lat2 directly? How does the phosphorylation pattern of individual receptor sites differ with antigen affinity (although that's likely to be experimentally challenging)? Although this system has been studied for a long time, there's evidently still a lot to learn about how quantitative differences between antigens lead to qualitatively different cellular behaviors.
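To make the dwell-time idea concrete, here is a toy kinetic-proofreading calculation in the spirit of the McKeithan reference below. The number of steps and the rate constants are invented for illustration; this is not the model from the Rivera paper.

```python
# Toy kinetic proofreading: a bound receptor must pass through several
# modification steps before it signals, and dissociation (rate k_off) resets
# the chain. All numbers below are hypothetical.
def survival(k_off, k_p=1.0, n_steps=4):
    # Probability that a single binding event lasts through all n_steps
    return (k_p / (k_p + k_off)) ** n_steps

# Suppose doses are chosen so that an early readout (receptor occupancy or
# total receptor phosphorylation) is roughly equal for the two antigens.
# A downstream readout can still differ sharply, because each binding event
# is shorter-lived for the low-affinity antigen:
print("high-affinity antigen (k_off = 0.1/s):", survival(k_off=0.1))  # ~0.68
print("low-affinity antigen  (k_off = 1.0/s):", survival(k_off=1.0))  # ~0.06
```

Of course, this toy arithmetic only captures the "less downstream signaling" part of the story; it says nothing about why the low-affinity antigen preferentially engages Fgr and Lat2.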
References:

Suzuki R, Leach S, Liu W, Ralston E, Scheffel J, Zhang W, Lowell C & Rivera J (2014). Molecular Editing of Cellular Responses by the High-Affinity Receptor for IgE. Science 343(6174): 1021-1025. DOI: 10.1126/science.1246976

McKeithan TW (1995). Kinetic proofreading in T-cell receptor signal transduction. Proc Natl Acad Sci USA 92: 5042-5046.

Liu ZJ, Haleem-Smith H, Chen H & Metzger H (2001). Unexpected signals in a system subject to kinetic proofreading. Proc Natl Acad Sci USA 98: 7289-7294.

Friday, March 7, 2014

The Center for Nonlinear Studies

In 2007, the q-bio Conference was inaugurated through the initiative of the Los Alamos Center for Nonlinear Studies, which is also known as CNLS. A few days ago, a nicely produced YouTube video about CNLS became available, and it mentions the Center's sponsorship of the conference. The video might be of interest to past or future attendees of the q-bio Conference who want to know more about CNLS, which supports a postdoc training program in the area of quantitative biology. The current director of CNLS is Bob Ecke, who is featured in the video. Bob was instrumental in obtaining the funding needed to launch and then sustain the conference series, as well as the affiliated q-bio Summer School. Bob is retiring soon, and 2014 may be the last year that we will see him at the conference, where he usually welcomes attendees to New Mexico and says a few words about CNLS. If you happen to see him, or even if you don't, and you appreciate the conference and summer school, it would be nice to let him know. I expect that he would appreciate hearing about the impact of CNLS on q-bio scientists and their research.

Thursday, February 27, 2014

When (specialized parts of) two heads are better than one...

A recent review highlighted the small army of databases that has sprung up to help keep track of what we're learning about cells. Many of these databases focus on a particular feature of cell signaling (like protein-protein interactions or post-translational modifications), with a few databases combining information across multiple features to help build a more complete picture. A question that remains is how these collections of information can be used to help us achieve practical goals - identifying drug targets or predicting the physiological effects of mutations.

Computational modeling could have a role to play by turning descriptions of interactions into quantitative predictions. As databases tend to be managed by groups of people, one might expect that large-scale modeling projects could also benefit from a community-driven approach. However, modeling tends to be carried out by individuals or small groups. Are there ways to turn modeling into a community activity?

A first step is probably to put models into a format that is easy to navigate and that encourages interactions among people. One such format is a wiki, and there are actually a few examples of wikis being used to simultaneously annotate models and to consolidate information about a signaling pathway - a little like an interactive literature review that you can simulate on a computer. I think this is a cool concept, although it seems like these wikis tend to stop being updated soon after their accompanying paper is published. There have also been some efforts to establish databases for models, which would in principle make it easier for people to build on past work. But in practice, so far, it seems that these databases are not very active either.

Reinventing Discovery: The New Era of Networked Science  
The issues involved in community-based modeling are also something I thought about when I read (the verbose yet interesting) "Reinventing Discovery" by Michael Nielsen, a book that advocates for "open science": a culture in which data and ideas are shared freely, with the goal of facilitating large-scale collaborations among people with diverse backgrounds. The underlying motivation is that progress can be accelerated if problems are broken down into modular, specialized tasks that can be tackled by experts in a particular area. I can see how such an approach would be beneficial in modeling and understanding cell signaling - a topic that can encompass everything from ligand-receptor interactions to transcriptional regulation to trafficking, each of which is a complicated field in its own right. So, how can experts in these fields be encouraged to pool their knowledge?

Nielsen's book has many examples of where collaborative strategies in science have succeeded and failed. As it turns out, creating wikis just for the sake of it is not always a good idea, because scientists often have little incentive to contribute. They would (understandably) prefer to be writing their own papers rather than spending time on nebulous community goals. It seems that in most examples where "collective intelligence" has succeeded, participants had specific rewards in mind. There's Foldit, the online game where players compete at predicting protein structures. And perhaps the most famous example is Kasparov vs. The World. (It's noteworthy that in both these examples, many participants were not trained professionals in the activity they were participating in - structural biology and chess, respectively.)

I wonder what the field of cell signaling can learn from these examples. Does there need to be a better incentive for people to help with wikis/databases? One might imagine a database where an experimentalist can contribute a piece of information about a protein-protein interaction, which would automatically gain a citation any time it was used in a model. Or, can some part of the modeling process be turned into a game or other activity that many people would want to participate in? It seems like there are a lot of possibly risky, but also possibly rewarding, paths that could be tried.

Saturday, February 22, 2014

Logical modeling vs. rule-based modeling

Cell signaling systems have been modeled using logical and rule-based approaches. What's the difference? A rule-based model is similar to a logical model in that both types of models involve rules. However, the rules are usually rather different in character. In a typical logical model, rules define state transitions of biomolecules, including conditions on these transitions; they have an "if-then" flavor. The rules operate on variables representing the states of whole biomolecules, and they define when and how such state variables change their values. Biomolecules in logical models are often characterized by state variables that take one of two values, e.g., 0 or 1, which are introduced to represent "on" and "off" states. More than two states can be considered, but there is a limit to what's tractable, because the size of the state space grows combinatorially with the number of state variables and the number of values each can take. As more states are considered, there are more and more transitions between these states, each of which is usually specified explicitly in a logical model. The behavior of a logical model can also sometimes depend on the algorithmic protocol used for changing states in a simulation, which seems undesirable.

In a rule-based model, the amount of an activated protein can be continuous or discrete, ranging from 0 copies to all copies of the protein, because a rule-based model is based on the principles of chemical kinetics. The state variables implicitly defined by rules capture the numbers of biomolecules in particular states and/or complexes. Rules are associated with rate laws, which govern the rates or probabilities of transitions of biomolecular site states, not the state transitions of whole molecules. With this physicochemical foundation, it is relatively easy to capture certain phenomena found in cell signaling systems, such as competition, feedback, and crosstalk. These phenomena are more difficult to capture in a logical model; at least, it seems that way to me.

With model-specification languages such as BNGL (http://bionetgen.org), a single set of rules can be used to perform different tasks: stochastic or deterministic simulation, via a direct or indirect method. Is it possible to modify BNGL to enable logical modeling? Although typical logical models are different from typical rule-based models, the rules used in the two types of models do not seem to be fundamentally different, so my answer is a tentative "yes." What do you think?
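To make the contrast concrete, here is a minimal sketch with toy names and toy rate constants (not a real model). The first function mimics a logical, whole-molecule "if-then" update; the second is a kinetic rule with a rate law, which yields graded levels of activation.

```python
# Toy contrast between a logical rule and a kinetic (rule-based-style) rule.
# All names and numbers are hypothetical.
import numpy as np
from scipy.integrate import odeint

# Logical-model flavor: an "if-then" transition on a whole-molecule state.
def logical_update(receptor_on, phosphatase_on):
    # The kinase is ON iff the receptor is ON and the phosphatase is OFF.
    return receptor_on and not phosphatase_on

# Chemical-kinetics flavor: a rate law governs how fast the kinase's activation
# site changes state, so the active fraction takes graded values over time.
# (In BNGL, a comparable rule might be written roughly as: K(s~U) -> K(s~P) k_act)
def kinetics(y, t, k_act, k_deact):
    active = y[0]
    return [k_act * (1.0 - active) - k_deact * active]

print("logical:", logical_update(receptor_on=True, phosphatase_on=False))
traj = odeint(kinetics, [0.0], np.linspace(0, 30, 61), args=(0.2, 0.1))
print("kinetic: fraction active at 30 s =", f"{traj[-1, 0]:.2f}")
```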

Tuesday, February 11, 2014

Dismantling the Rube-Goldberg machine

What do this puppy food commercial, an indie music video, and systems biology have in common?

They've all used Rube-Goldberg machines to great effect - devices that execute an elaborate series of steps to accomplish a goal that could have been reached through a (much) simpler process. As the ultimate elevation of means over ends, these machines have become a celebrated expression of ingenuity and humor. However, the presence of such machines in systems biology is perhaps not as obvious, intentional, or entertaining.

So what are the Rube-Goldberg machines of systems biology? In a small but noticeable fraction of studies, complex models are used to reach conclusions that could be obtained just by looking at a diagram or by giving some thought to the question at hand - assuming that a question is at hand. It seems as though these studies primarily use models to produce plots and equations that reinforce, or embellish, intuitive explanations. However, the true usefulness of models comes into play when we leave the territory of intuition and begin to wonder about factors that can't be resolved by just thinking.

So when and why do we start thinking like Rube-Goldberg engineers, and what impact does it have on the field? A few educated guesses:
  • Some models are built without a question in mind. Their creators then search for a question to address, and end up with one that the model's content isn't well-suited to. 
  • We're all specialists in something, and we don't always know about all the tools and capabilities that others have developed. As a result, we sometimes try to solve a problem by reinventing the wheel, or by applying a tool that isn't a good fit for the problem, which can lead to all kinds of complications. 
  • To some audiences, just the concept of doing simulations seems impressive. As a result, modelers can be drawn into just putting technical skills on display and establishing a mystique around what they do, as opposed to applying their abilities to interesting questions.  
  • Obvious predictions may be easier to validate experimentally. 
I don't know if these practices have had a wholly negative impact on modeling efforts in biology - they may even have helped in some respects. But it would not be a bad idea to focus on challenging questions for which simulations are actually needed, and to try to get the most out of the models that we've taken the time and effort to build.

Friday, February 7, 2014

Do modelers have low self esteem?

When was the last time an experimental biologist had a manuscript rejected because the work in question didn't include modeling? As a modeler working in biology, I tend to be hesitant about trying to report work that doesn't include new data (from a collaborator), because it doesn't usually go well. Modeling without experimentation tends to be held in low regard, especially among modelers, which is a tragic irony. If physicists of the early 20th century had held the same attitude as many of today's modelers and experimental biologists, "Zur Elektrodynamik bewegter Körper" ("On the Electrodynamics of Moving Bodies") would not have been published without its author first doing experiments to confirm his ideas or finding an experimental collaborator to generate confirmatory data.

I don't think that modelers should take a favorable view of every modeling study they come across, but I wonder if we need to be more supportive of each other and allow more room for independence from collaborations with experimentalists. If a modeling study is based on reasonable assumptions and performed with care, and it produces at least one non-obvious testable prediction, why should it not be reported immediately? It seems that some of us might be concerned that such reports will be ignored, or that such reports are too untrustworthy, given all the complexities and ambiguities. It's true that models need to be tested, but it seems unlikely that someone able to build and analyze a model will also be the best person to test the model, or will happen to have a circle of friends that includes this special person. Indeed, I think the requirement to publish with data has led some modelers to produce predictions that are, let's say, "obvious," because this is the type of prediction that can be confirmed easily.

Let's be rigorous, but to a reasonable standard. Let's also be bold. Many experimental results turn out to be misinterpreted, or plain wrong. It's OK for models to be wrong too. Biological systems are complicated. We need models to guide our study of these systems. Most of the work being done in biology today is being performed without models. Until experimentalists start chiding each other for failing to leverage the powerful reasoning aids that models are, it makes little sense for modelers to criticize each other for work that doesn't include generation of new data.