Understanding Open Science

8/8/13

The dream of getting to cures, faster, is one shared by many. But every new instrument we’ve invented to examine the human body has shown us only more complexity and interconnection.  The failure of all of our new technologies to translate rapidly into drugs has demonstrated that we’re the folks in the parking lot looking under the lamppost for our keys not because that’s where they might be, but because that’s where the light is.

In tandem with the explosion of technologies to query the body (genome, protein, metabolite, and more), we’ve seen an explosion in organizational models for pharmaceutical companies. Reorganizations around therapeutic area, or around development pipeline steps, or around technological elements, have been tried. None have broken the complexity of the body, and thus none have truly shortened the time and cost of getting a drug to market. It’s still 17 years and a billion dollars, give or take.

So what’s left?

It may seem strange, but what remains untried amid all these technological and organizational revolutions is the methodological revolution that has taken over software and, increasingly, cultural works: collaborations built on standard, shared pre-competitive systems, low transaction costs, openly licensed intellectual property, and massively increased sample sizes of participants. The shorthand for this methodology is “open source.”

The life sciences would seem, on the surface, ideal for open source. It’s a world built on disclosure – whether by publication or patent, it doesn’t count until you tell the world. It’s a world where the knowledge itself snaps together in a fashion that looks eerily like a wiki, where one person makes only a small set of edits in an experiment that establishes a new fact. And it’s a world where the penalty for redundancy is high – no one in their right mind wants to spend scarce research dollars on a problem that has been solved already, a lead that is a dead end, or a target guaranteed to lead to side effects.

Indeed it’s precisely this apparent fit between life sciences and open source that has inspired countless projects from the non-profit sector. But those projects, for the most part, run into the teeth of a cruel reality: the business of the life sciences, including the academic business, is set up in such a way as to reward local information hoarding even when it would benefit society, or even increase profits, in a global context.

Fragments of information are published as papers, not integrated into knowledge models that can be applied to data – and aggressive publishers use copyright to prevent that integration from happening post hoc. Data that can’t be reimbursed by a payor is kept behind pharma firewalls, even though it could be reused to build commonly held maps of absorption, distribution, metabolism, and excretion – the maps that might let everyone more easily navigate toxicity mechanisms and get better drugs to market. And money is regularly thrown after projects that have failed, silently, in labs around the world but never been disclosed.

An “open” approach, strategically applied, can fix these problems.  And the environment in which the life sciences exist is changing to favor that open approach. The US policies on public access to literature and data trend towards allowing us to create the wiki of knowledge that is implicit in what we already know. Emergent collaboration projects from pharma represent an acknowledgement that pre-competitive spaces in biology can actually increase the power of the company in chemistry. And the advent of cheap data, available to citizen and scientist alike, presents the chance of that massive increase in sample size that has been missing.

The life sciences, and the body, are too complex to be effectively comprehended by any one organization or company, but that also means we can’t have an overly simplistic understanding of how openness fits into the picture. Open might mean public domain, or liberal licensing, or just not withholding. Open needs to be reconciled with privacy (an under-acknowledged challenge). And it needs technical infrastructure, policy infrastructure, boundary organizations, and new norms to flourish.

In my posts for FasterCures over the coming year, I’m going to try to break down “open science” into a set of digestible segments, in the hopes of provoking a conversation about how moving to open – where appropriate – can start us toward realizing the goal of cures, faster.

[Editor's note:  This post first appeared on the FasterCures website.]

John Wilbanks is a data commons expert and advocate who has spent his career working to advance open content, open data, and open innovation systems. He is a senior fellow at FasterCures and chief commons officer at Sage Bionetworks.


  • Robert Jones

    It was interesting to see that David Baltimore was also in the FasterCures website video. He wasn’t very fond of open data when it came to Imanishi-Kari and Margot O’Toole. Margot knew a secret about that Nature paper David put his name on. It had false information. David wanted Margot to zip it.

    Still, I think this is a good idea. Increase the transparency of all research, including clinical trials. I wonder what the Xconomists think of this. They aren’t exactly cut from the same cloth as Jonas Salk. I think of Xconomists as Xscientists.