The Genomics Comeback
One of the things that fascinates me most about the slow-paced business of biotech is how utterly mismatched it is against the demands of Wall Street.
Asking for quarterly earnings reports from companies that are in the red, and likely to remain so for years, while they test drug candidates in early-stage clinical trials (which may or may not translate to success in a phase III study) is a little bit like requesting a Big Mac in a French restaurant. Investors want as-promised, fast-food results when biotech can only deliver a time-consuming coq au vin of variable success.
I was thinking about that earlier this week, when I saw a really interesting article about the tangible impact genomics and other “omics” are beginning to have in drug development. Those of you who have been around this business for a while remember the days of big promises by the likes of Human Genome Sciences, DeCode Genetics, and Millennium Pharmaceuticals. That was a little over ten years ago, and genomics was being touted as the field that would revolutionize medicine. It would lead to new drug targets and a better understanding of human disease.
And now, given how few drug candidates have materialized from that promise so far (and the hundreds of millions of investor dollars lost along the way, since many of the companies from the genomics boom went bust or nearly so), naysayers love to point to it as another example of a burst bubble.
There is some truth to these lamentations, but similar things were said of monoclonal antibodies, and now those therapies are making a real difference in patients’ lives. Life cycles in biotech and pharma may be longer than many investors’ patience allows, but sometimes it pays to just wait.
Which brings me back to that article I mentioned before: it just came out in Nature Reviews Drug Discovery (June 2010 issue) and gives some pretty interesting examples of how genomics and other “omics” (proteomics, metabolomics) are starting to impact drug development behind the scenes.
One of the goals of genomics is to find signatures that correlate with better response to treatment, or with certain stages of disease (for example, an early form of cancer), so as to better select drug candidates or patient populations in clinical trials. But the problem is that this kind of information—usually a readout from massive gene-expression or other array-type experiments—is vast and hard to interpret.
For example, let’s say a company discovers after a failed phase II trial that higher expression of a set of 10 genes correlates, retrospectively, with response to therapy. A company might want to pursue the failed drug candidate within this specific subset of patients, but it would be too risky and costly to do so if it feared the FDA might ultimately not agree with its interpretation that the 10-gene signal is real.
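To make the idea concrete, here is a toy sketch of what such a retrospective check might look like. Everything in it is invented for illustration—the gene names, the expression values, and the responder labels—and collapsing a 10-gene signature to a simple average is just one of many possible scoring choices:

```python
# Toy retrospective analysis: does the mean expression of a hypothetical
# 10-gene panel differ between responders and non-responders?
# All gene names and numbers below are invented for illustration.

GENE_PANEL = [f"GENE{i}" for i in range(1, 11)]  # hypothetical 10-gene signature

# Each patient: expression level per gene (arbitrary units) plus trial outcome.
patients = [
    {"expr": {g: 8.0 + 0.1 * i for i, g in enumerate(GENE_PANEL)}, "responder": True},
    {"expr": {g: 7.5 + 0.1 * i for i, g in enumerate(GENE_PANEL)}, "responder": True},
    {"expr": {g: 4.0 + 0.1 * i for i, g in enumerate(GENE_PANEL)}, "responder": False},
    {"expr": {g: 4.5 + 0.1 * i for i, g in enumerate(GENE_PANEL)}, "responder": False},
]

def signature_score(patient):
    """Average expression across the panel -- one simple way to collapse
    a multi-gene readout into a single per-patient score."""
    return sum(patient["expr"][g] for g in GENE_PANEL) / len(GENE_PANEL)

def group_mean(group):
    """Mean signature score for a group of patients."""
    scores = [signature_score(p) for p in group]
    return sum(scores) / len(scores)

responders = [p for p in patients if p["responder"]]
non_responders = [p for p in patients if not p["responder"]]

print(f"mean score, responders:     {group_mean(responders):.2f}")
print(f"mean score, non-responders: {group_mean(non_responders):.2f}")
```

In a real trial the analysis would involve thousands of genes, hundreds of patients, and proper statistics to rule out chance correlations—which is exactly why a company would hesitate to bet a development program on a pattern like this without knowing how the FDA would read it.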
To overcome this problem, the FDA established a program around 2004 whereby companies have been submitting genomic data to the agency on a voluntary basis. The goal of the program was for companies to feel comfortable discussing their complex, still-premature data with the agency without fear that doing so might have regulatory consequences, while at the same time allowing the industry and FDA to figure out what kind of biomarker information might have to be included in formal submissions to the agency.
The conversations that resulted over the past few years—including 45 face-to-face meetings between companies and the FDA—have been immensely productive, says Federico Goodsaid, who is the leader of the Genomics Group at the agency’s Center for Drug Evaluation and Research. “We are the ones that learned the most,” he told me.
The article gives a refreshingly candid description of several meetings between the FDA and various companies, as both parties tried to define what their data meant and how it might be used to help guide approval of drugs and diagnostic tests.
For example, the article described efforts at Novartis to identify a biomarker that predicts poor response in patients with kidney transplants. The company analyzed the expression levels of 12,000 genes and identified 10 whose expression predicted poor outcomes after transplant. Together with the FDA, it worked to define parameters for a possible diagnostic test that might allow doctors to identify these patients in time, the hope being to begin early therapy in cases of likely organ rejection.
The article also mentions AstraZeneca’s efforts to find gene markers predicting an adverse liver reaction it had observed in clinical trials of an anti-clotting drug. There was also a program at Roche aimed at finding gene variants that might predict which patients will suffer toxic side effects from a chemotherapy drug. Pfizer had a project to find a more efficient test to define whether a drug candidate is a possible carcinogen.
This isn’t the kind of sexy stuff likely to make the front page of the New York Times. But the fact that industry and FDA have begun to define how these types of genomic data will be used to guide clinical development is, I think, a much bigger step than any one genomics-era blockbuster in the making.