
The Reproducibility Initiative: A Good Idea in Theory that Won’t Work in Practice

Xconomy Seattle — 

The failure of scientists to independently confirm much of the data contained in “hot” academic publications is casting a long shadow over the biopharmaceutical industry. Research groups at Amgen and Bayer reported that the data in a significant percentage of published “breakthrough” papers from academic scientists could not be confirmed in their labs. Given that Big Pharma has increasingly turned to academic investigators as a source of molecular targets for new drugs, this represents a Big Problem. I have argued that this lack of experimental reproducibility represents an especially acute problem for virtual biotech companies, which lean on contractors to do most of their R&D and have no internal lab facilities in which they can try to replicate the data. A new proposal, the Reproducibility Initiative, has recently been established to create a pathway for verifying experimental data outside of the lab that generated it. Science Exchange, a for-profit online marketplace of laboratory services, is coordinating this new initiative in partnership with the open access journal PLoS ONE.

The basic concept behind the Science Exchange marketplace is that it enables scientists to hire service providers to perform experiments that are beyond the capabilities of their own labs. The Reproducibility Initiative is layered on top of this marketplace, with the goal of addressing the irreproducibility problem. Researchers sign up with the Reproducibility Initiative to have their work replicated. An advisory panel finds an outside lab to perform the studies (which are done anonymously), and the results are then shared with the investigators. Researchers who wish to avail themselves of the Initiative must pay for the confirmation work to be done, plus a 5 percent transaction fee to Science Exchange for tapping into its network.

While the goal of this new initiative is admirable (and likely profitable for Science Exchange), I don’t think the approach will work in the real world. Here’s why, despite its good intentions:


Running an academic research lab is expensive, and grants that support these efforts are hard to get. As a result, budgets are very tight, focused on the “must have” items and not the “would be nice” ones. The Principal Investigator in a lab has to find a way to pay his or her own salary as well as those of post-docs, grad students, technicians, and other personnel. A lab may also need to buy very expensive equipment (or contribute toward the expense of a core facility, like an electron microscope or nuclear magnetic resonance machine), and to invest in a costly stream of consumable reagents (e.g. cell culture dishes, chemical supplies, growth media for cells).

This doesn’t leave a lot of leftover money to pay someone else to replicate your studies. Elizabeth Iorns, the CEO of Science Exchange, has said she is hopeful that granting agencies will eventually fund these replication efforts. This is extremely difficult to picture in the current economic climate, where there are serious concerns that the NIH budget will get significantly whacked in the upcoming sequestration process. Even without sequestration, it is hard to fathom that funding agencies would decide to significantly cut the number of grants they award in order to spend the money essentially replicating already obtained results. Iorns estimates that the replication studies might cost only 10 percent as much as the original studies. I have no idea how this number (which sounds way too low to me) was generated, or how these studies could be done so inexpensively by another lab.

Beyond the obvious problems with the proposed academic funding situation, another concern is the notion that the original authors have the option to “republish” their results in the journal PLoS ONE, with a link to the original publication. Who will pay for the additional publication costs? Should people wishing to cite the science refer to the original publication, the confirmatory paper, or both? Since the confirming publication is optional, labs that learn their work could not be replicated are unlikely to advertise that fact by publishing it. Those who focus on scholarly publishing have raised other ethical and practical concerns about the “replication” papers as well. Finally, would the failure to ask for “reproducibility” funds as part of a grant application be considered tantamount to admitting that the investigator viewed the work as second rate?


Most scientists have a strong regard for their ability to do science. The good ones are trained to test and retest their hypotheses, and to look for alternate explanations of their data. They run …



  • sony2005

    The problem with your argument is that it assumes that fraud is only a small part of the problem. Varying degrees of fraud IS the problem, and the worst part is that it is accentuated in the top journals, the so-called cutting-edge research. The culture in academia is such that unless you get a paper into certain journals with a certain frequency, you will not get a 1) job, 2) tenure, or 3) grants. The peer review system is not honorable because people know their identity will be deduced (for a paper) or, worse, revealed (for a grant), as study sections have multiple members. This generates fear of reprisal, and for good reason; it happens. In addition, there is widespread cronyism that perpetuates hoarding of grants and jobs among those within the right circles. Finally, this problem is accentuated by emerging technologies that lack enough experts for truly comprehensive reviews. Genome-wide scans and protein/genome interaction analyses are an example. Many who don’t understand the technology won’t admit it, and are blinded by the flashiness of the data. It is completely broken, and pharma would do better to revitalize its own R&D divisions. As for NIH and basic science, nothing short of a complete revamp of the peer review system and a seismic culture change will help.