Showing posts with label collaboration.

Thursday, August 7, 2014

Thanks, but no thanks

I am posting from the q-bio Summer School, where we are enjoying many discussions about modeling. Several lecturers have advised the junior modelers attending the school, who are mostly graduate students and postdocs, to find an experimental collaborator. I appreciate the advice and the benefits of having an experimental collaborator, but I am usually quite irked by the reasons stated for seeking out opportunities to collaborate with an experimentalist.

One reason I've heard many times is that modelers need an experimentalist to explain the biology to them and to help them read papers critically. It certainly could be useful to have a more experienced researcher aid in formulating a model, but that person could just as well be a modeler familiar with the relevant biology. I don't subscribe to the idea that modelers need a collaborator to evaluate the soundness of a paper. To suggest so seems insulting to me. Modelers do need to consult experts from time to time to understand the nuances of an unfamiliar experimental technique, for example, but so do experimentalists.

I am probably more annoyed by the popular sentiment that a collaborator is essential for getting predictions tested. If I were an experimentalist, I might be insulted by this idea. It's unrealistic to think that experimentalists are lacking for ideas about which experiment to do next. If your prediction is only appealing to your experimental collaborator, then maybe it's not such an interesting prediction? Modelers should be more willing to report their predictions and let the scientific community follow up however it may, partly because it's unlikely that any one collaborator is going to be the most qualified experimentalist to test each and every prediction you will ever make.

I think the real reason to collaborate with an experimentalist is shared goals and interests and complementary expertise. Finding such a colleague is wonderful, but it shouldn't be forced, and the absence of a collaborator shouldn't be an impediment to progress. If you have a good prediction, you should report it, and if you want to model a system, you should pursue that. Eventually, you will know the system as well as the experimentalists studying it, if not better. After all, it's your role as a modeler to integrate data and insights, to elucidate the logical consequences of accepted understanding and plausible assumptions, and to suggest compelling experiments.

Finally, I want to speak to the notion that modelers should do their own experiments. I think that's a good idea if you want to be an experimentalist. If you want to be a modeler, be a modeler.

Sunday, July 20, 2014

Being (and keeping) a collaborator

Recently, a paper of ours was accepted for publication (stay tuned for more about that!). It grew out of a long, transatlantic collaboration. It was the first collaboration that I was part of, and I was "spoiled" by the experience because of how productive and fun it was (and continues to be). I remember the first time that my side of the project yielded a useful clue. Much to my surprise and delight, our collaborators took that clue to their lab and followed up on it right away.

Collaborations can be awesome. They're also becoming increasingly prevalent as connections grow between different fields. There are lots of potential benefits for everyone involved: you get to learn about techniques outside your own specialization, you can develop a unique new perspective, and you may find yourself with some friends to visit in faraway places.
Good memories of great science-friends in Odense, Denmark.

However, I've noticed since then, through observation and experience, that not all collaborations reach their full potential. So I have been thinking about what qualities make a good collaborator, so that I know what to look for and what I should try to be:
  1. Finding a problem that you can tackle together. It goes without saying, but it's key to pick a problem that all participants care about and can actually work on. Bonus points if it's a problem that can only be addressed by combining the complementary skills of everyone involved. (Otherwise, are you collaborating just for show?)
  2. Reliability and communication. When you and your collaborator work in different offices (or countries), it can be easy to fall off each other's radar and let the project fizzle out. To avoid this outcome, demonstrate that you're serious about the project (even if you don't have spectacular results yet) and that you want to interact with them occasionally. 
  3. Openness to feedback. A big part of collaboration is giving each other feedback. When the person giving you feedback is not in your field, it may feel like they're impinging on your space. When this happens, pause for a minute - they might be giving you a fresh, valid perspective. Or, they might just need you to better clarify/justify what you're doing, which can be a preview of how an outside audience might respond. 
  4. Understanding capabilities and limitations. Everyone has some things (experiments, simulations, etc.) that they can do routinely, other things that take more time/money/pain, and some things that would be desirable but are unfeasible. These things may be obvious to someone in your field, but you and your collaborator may need to discuss them to ensure that you both have a realistic picture of what the other can do.
Have you been, or do you want to be, part of a collaboration? What did you get (or want to get) from the experience? 

Thursday, March 20, 2014

Extreme writing

Out of the blue one day, Pieter Swart stopped by my office, and for some reason, the conversation turned to extreme programming, a practice that Pieter and his colleagues used in their development of NetworkX. One aspect of extreme programming is programming in pairs, or pair programming. Two programmers sit at one workstation. One, the driver, types. The other, the observer, reviews what is typed. Because of Pieter's enthusiasm, I tried it, but for writing, not programming. It turns out that pair writing works very well, at least for me with certain writing partners. If you've ever had writer's block, extreme writing will cure it. If you're the observer, you're off the hook - you just need to give your attention to what's being typed. If you're the driver, a pause will usually lead immediately to a discussion with the observer and a quick return to steady progress, or the observer will just deliver a coup de grâce and take over the keyboard. Roles change frequently. If you haven't tried pair writing, give it a try. It helps to work with a large monitor in a comfortable but isolated and confined environment (to limit the possibilities of escape), where loud conversation will not disturb anyone.

Thursday, February 27, 2014

When (specialized parts of) two heads are better than one...

A recent review highlighted the small army of databases that has sprung up to help keep track of what we're learning about cells. Many of these databases focus on a particular feature of cell signaling (like protein-protein interactions or post-translational modifications), with a few databases combining information across multiple features to help build a more complete picture. A question that remains is how these collections of information can be used to help us achieve practical goals - identifying drug targets or predicting the physiological effects of mutations.

Computational modeling could have a role to play by turning descriptions of interactions into quantitative predictions. As databases tend to be managed by groups of people, one might expect that large-scale modeling projects could also benefit from a community-driven approach. However, modeling tends to be carried out by individuals or small groups. Are there ways to turn modeling into a community activity?
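To make that idea concrete, here's a minimal sketch (my own illustration, not taken from any particular database or paper) of how a qualitative statement like "ligand L binds receptor R reversibly" could be turned into a quantitative prediction with ordinary differential equations. The species names, rate constants, and concentrations below are assumed placeholder values, not measured parameters.

import numpy as np
from scipy.integrate import odeint

KON = 1.0e6    # association rate constant, 1/(M*s) (assumed value)
KOFF = 0.01    # dissociation rate constant, 1/s (assumed value)

def binding_odes(y, t):
    # Mass-action kinetics for the reversible interaction L + R <-> LR
    L, R, LR = y
    rate = KON * L * R - KOFF * LR
    return [-rate, -rate, rate]

t = np.linspace(0.0, 600.0, 61)     # simulate 10 minutes of binding
y0 = [1.0e-8, 1.0e-9, 0.0]          # initial concentrations in M (assumed)
L, R, LR = odeint(binding_odes, y0, t).T

# The "prediction": what fraction of receptor is bound at 10 minutes?
print("predicted fraction of receptor bound: %.2f" % (LR[-1] / (R[-1] + LR[-1])))

A community-curated database entry could, in principle, supply the interaction and its rate constants, leaving the modeler to assemble many such pieces into a larger signaling network.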

A first step is probably to put models into a format that is easy to navigate and that encourages interactions among people. One such format is a wiki, and there are actually a few examples of wikis being used to simultaneously annotate models and to consolidate information about a signaling pathway - a little like an interactive literature review that you can simulate on a computer. I think this is a cool concept, although it seems like these wikis tend to stop being updated soon after their accompanying paper is published. There have also been some efforts to establish databases for models, which would in principle make it easier for people to build on past work. But in practice, so far, it seems that these databases are not very active either.

Reinventing Discovery: The New Era of Networked Science
The issues involved in community-based modeling are also something I thought about when I read (the verbose yet interesting) "Reinventing Discovery" by Michael Nielsen, a book that advocates for "open science": a culture in which data and ideas are shared freely, with the goal of facilitating large-scale collaborations among people with diverse backgrounds. The underlying motivation is that progress can be accelerated if problems are broken down into modular, specialized tasks that can be tackled by experts in a particular area. I can see how such an approach would be beneficial in modeling and understanding cell signaling - a topic that can encompass everything from ligand-receptor interactions to transcriptional regulation to trafficking, each of which is a complicated field in its own right. So, how can experts in these fields be encouraged to pool their knowledge?

Nielsen's book has many examples of collaborative strategies in science that have succeeded and others that have failed. As it turns out, creating wikis just for the sake of it is not always a good idea, because scientists often have little incentive to contribute. They would (understandably) prefer to be writing their own papers rather than spending time contributing to nebulous community goals. It seems like in most cases where "collective intelligence" has succeeded, participants have had specific rewards in mind. There's Foldit, the online game where players compete at predicting protein structures. And perhaps the most famous example is Kasparov vs. The World. (It's noteworthy that in both these examples, many participants are not trained professionals in the activity that they are participating in - structural biology and chess, respectively.)

I wonder what the field of cell signaling can learn from these examples. Does there need to be a better incentive for people to help with wikis/databases? One might imagine a database where an experimentalist can contribute a piece of information about a protein-protein interaction, which would automatically gain a citation any time it was used in a model. Or, can some part of the modeling process be turned into a game or other activity that many people would want to participate in? It seems like there are a lot of possibly risky, but also possibly rewarding, paths that could be tried.