The FDA’s approval process for new psychiatric drugs is broken. It is time-consuming and costly, and it benefits no one: not patients, not physicians, not pharmaceutical companies, not managed care organizations or other payers.
To bring a new compound to market, pharmaceutical companies and academic labs invest years (and millions of dollars) in basic research. When a compound appears promising, it enters “Phase I” testing, to assess the drug’s basic properties and safety profile in healthy human subjects; this phase may take one to two years. If successful, the drug enters “Phase II” testing, which measures responses to the drug in a small target population of patients. After this step comes “Phase III” testing, usually the most expensive and prolonged phase, in which the drug is tested (usually against a placebo) to determine its safety and efficacy for a given indication. This may take many more years, and many more millions of dollars, to complete.
For psychiatric drugs, this process is something of an anachronism. There is extensive overlap among psychiatric diagnoses (and the changes on the horizon with DSM-5 won’t make things any clearer), so it makes little sense to focus on a drug’s efficacy for a single indication (e.g., generalized anxiety disorder) when it could prove quite helpful in another (e.g., major depression). The end result is that doctors think about patients in terms of diagnoses (and assign diagnoses that are sometimes inaccurate) rather than about the symptoms (or the patients) they are treating. Managed care companies, too, force us to pigeonhole patients into a given diagnosis in order for them to pay for a medication. Finally, pharmaceutical companies must conduct expensive, prolonged Phase III trials for each indication they wish to obtain (driving up the costs of all medications), and are subject to significant penalties if they even suggest that their drug might be used in a slightly different population.
Here is one way the drug approval process could be improved for all involved. Rather than recruit a uniform population of subjects with a given diagnosis (which does not resemble the “real world” in any way), we could require drug companies to test the drug in a large number of subjects with a broad range of psychiatric conditions (as well as normal controls), perform a much more extensive battery of tests on each subject, release all the data, and then allow doctors to determine how to use the drugs.
For instance, let’s say a company believes, on the basis of its research, that “olanzidone” might be an effective antipsychotic. So they recruit several hundred subjects—some with schizophrenia, some with depression, some with bipolar disorder, some with a personality disorder, some with multiple disorders, and so on, and some with no psychiatric diagnosis at all—and subject them to a battery of baseline tests: a physical exam; comprehensive laboratory measures; genetic screens; cognitive tests; personality tests; tests of anxiety, depression, OCD symptoms, panic symptoms, PTSD symptoms, and so on; as well as a full diagnostic clinical interview. They administer olanzidone at a range of doses (determined to be safe on the basis of Phase I testing) and over a range of time periods, then perform the same battery of tests after the trial. All results are then published and made available to clinicians.
The results might show that olanzidone is an effective antipsychotic, but only in patients with a concurrent mood disorder. They might show that olanzidone worsens anxiety. They might show that olanzidone causes weight gain, but only in patients with the HTR2C -759C/T polymorphism. They might show that olanzidone worsens negative symptoms of psychosis, but improves cognitive abilities. Get the picture?
It sounds, at first, like this alternative would be just as complex and time-consuming as the current way of doing things. But I don’t think so. For one thing, drug companies wouldn’t have to spend as much time and money finding the “perfect” subject population, and could test a drug’s safety profile in a diverse group of subjects. Also, companies wouldn’t have to invest millions of R&D dollars to obtain each new indication. Furthermore, they would be required to make all data public, preventing them from hiding data that don’t support a medication’s proposed indication. Finally, this proposal would allow doctors to make medication decisions based on a much more extensive and accurate data set, rather than the information that is offered to them in glossy drug-company brochures.
The drawbacks? We might end up with far more compounds on the market, some of questionable efficacy. But drug companies would most likely invest their efforts in developing compounds that have some chance of improving on what’s already on the market (instead of just finding a new “niche” indication). Drug companies may also fear the loss of market share or the costs of testing drugs on larger populations of patients. But, in reality, this may actually create new markets for drugs, and it would obviate the need to push for new indications every few years.
This change would also make for more truthful (and informative) marketing material. Instead of an ad proclaiming “Olanzidone newly approved for the treatment of schizophrenia!!” (which doesn’t mean very much, frankly), I might read an ad explaining “Olanzidone shows a 30% decrease in average PANSS score; no effect on mood symptoms; a significant improvement in executive function but not memory; a modest decrease in Beck Anxiety Inventory score; and a significant improvement in Pittsburgh Sleep Quality Index.” Not quite as sexy, but certainly more helpful in my practice.
This will, of course, never happen, because there are simply too many vested interests in the status quo. But now is the time to start thinking of ways to make the approval process more transparent to the public, and to help doctors (as well as patients and payers) make more informed decisions about the drugs we use.