
Measuring The Immeasurable

February 9, 2012

Is psychiatry a quantitative science?  Should it be?

Some readers might say that this is a ridiculous question.  Of course it should be quantitative; that’s what medicine is all about.  Psychiatry’s problem, they argue, is that it’s not quantitative enough.  Psychoanalysis—that most qualitative of “sciences”—never did anyone any good, and most psychotherapy is, likewise, just a bunch of hocus pocus.  A patient saying he feels “depressed” means nothing unless we can measure how depressed he is.  What really counts is a number—a score on a screening tool or checklist, frequency of a given symptom, or the blood level of some biomarker—not some silly theory about motives, drives, or subconscious conflicts.

But sometimes measurement can mislead us.  If we’re going to measure anything, we need to make sure it’s something worth measuring.

By virtue of our training, physicians are fond of measuring things.  What we don’t realize is that the act of measurement itself leads to an almost immediate bias.  As we assign numerical values to our observations, we start to define values as “normal” or “abnormal.”  And medical science dictates that we should make things “normal.”  When I oversee junior psychiatry residents or medical students, their patient presentations are often filled with such statements as “Mr. A slept for 5 hours last night” or “Ms. B ate 80% of her meals,” or “Mrs. C has gone two days without endorsing suicidal ideation,” as if these are data points to be normalized, just as potassium levels and BUN/Cr ratios need to be normalized in internal medicine.

The problem is, they’re not potassium levels or BUN/Cr ratios.  When those numbers are “abnormal,” there’s usually some underlying pathology which we can discover and correct.  In psychiatry, what’s the pathology?  For a woman who attempted suicide two days ago, does it really matter how much she’s eating today?  Does it really matter whether an acutely psychotic patient (on a new medication, in a chaotic inpatient psych unit with nurses checking on him every hour) sleeps 4 hours or 8 hours each night?  Even the questions that we ask patients—“are you still hearing voices?”, “how many panic attacks do you have each week?” and the overly simplistic “can you rate your mood on a scale of 1 to 10, where 1 is sad and 10 is happy?”— attempt to distill a patient’s overall subjective experience into an elementary quantitative measurement or, even worse, into a binary “yes/no” response.

Clinical trials take measurement to an entirely new level.  In a clinical trial, often what matters is not a patient’s overall well-being or quality of life (although, to be fair, there are ways of measuring this, too, and investigators are starting to look at this outcome measure more closely), but rather a HAM-D score, a MADRS score, a PANSS score, a Y-BOCS score, a YMRS score, or any one of an enormous number of other assessment instruments.  Granted, if I had to choose, I’d take a HAM-D score of 4 over a score of 24 any day, but does a 10- or 15-point decline (typical in some “successful” antidepressant trials) really tell you anything about an individual’s overall state of mental health?  It’s hard to say.
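
To put a number on what a “successful” trial typically means (a generic illustration of the usual conventions, not data from any particular study): most antidepressant trials count a 50% drop in HAM-D as a “response” and a final score of 7 or below as “remission.” A minimal sketch:

```python
# A generic sketch of the usual antidepressant-trial conventions:
# "response" = at least a 50% drop in HAM-D from baseline; "remission" = final score <= 7.
# The scores below are invented for illustration.

def hamd_outcome(baseline: int, endpoint: int) -> str:
    """Classify a HAM-D change the way most trials do."""
    percent_drop = 100 * (baseline - endpoint) / baseline
    if endpoint <= 7:
        return f"remission ({percent_drop:.0f}% drop)"
    if percent_drop >= 50:
        return f"response ({percent_drop:.0f}% drop)"
    return f"non-response ({percent_drop:.0f}% drop)"

# A drop from 24 to 12 counts as a "response"—yet a HAM-D of 12 says nothing about
# whether the patient feels "a lot better," sleeps, works, or enjoys anything.
print(hamd_outcome(24, 12))   # response (50% drop)
print(hamd_outcome(24, 4))    # remission (83% drop)
```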

One widely used instrument, the Clinical Global Impression scale, endeavors to measure the seemingly immeasurable.  Developed in 1976 and still in widespread use, the CGI scale has three parts:  the clinician evaluates (1) the severity of the patient’s illness relative to other patients with the same diagnosis (CGI-S); (2) how much the patient’s illness has improved relative to baseline (CGI-I); and (3) the efficacy of treatment.  (See here for a more detailed description.)  It is incredibly simple.  Basically, it’s just a way of asking, “So, doc, how do you think this patient is doing?” and assigning a number to it.  In other words, subjective assessment made objective.
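
For readers who haven’t seen it, here’s roughly what the CGI’s two numeric parts boil down to. The anchor wording follows the standard descriptions; the layout below is just my own sketch.

```python
# A sketch of the CGI's two numeric subscales; the anchor wording follows the
# standard descriptions, but the dictionary layout here is mine.

CGI_SEVERITY = {       # CGI-S: "How ill is the patient right now?"
    1: "Normal, not at all ill",
    2: "Borderline mentally ill",
    3: "Mildly ill",
    4: "Moderately ill",
    5: "Markedly ill",
    6: "Severely ill",
    7: "Among the most extremely ill patients",
}

CGI_IMPROVEMENT = {    # CGI-I: "How much has the patient changed since baseline?"
    1: "Very much improved",
    2: "Much improved",
    3: "Minimally improved",
    4: "No change",
    5: "Minimally worse",
    6: "Much worse",
    7: "Very much worse",
}

# The whole "instrument" amounts to the clinician picking one number from each.
print(CGI_SEVERITY[5], "->", CGI_IMPROVEMENT[2])
```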

The problem is, the CGI has been criticized precisely for that reason—it’s too subjective.  As such, it is almost never used as a primary outcome measure in clinical trials.  Any pharmaceutical company that tries to get a drug approved on the basis of CGI improvement alone would probably be laughed out of the halls of the FDA.  But what’s wrong with subjectivity?  Isn’t everything that counts subjective, when it really comes down to it?  Especially in psychiatry?  The depressed patient who emerges from a mood episode doesn’t describe himself as “80% improved,” he just feels “a lot better—thanks, doc!”  The psychotic patient doesn’t necessarily need the voices to disappear, she just needs a way to accept them and live with them, if at all possible.  The recovering addict doesn’t think in terms of “drinking days per month,” he talks instead of “enjoying a new life.”

Nevertheless, measurement is not a fad; it’s here to stay.  And as the old saying goes, resistance is futile.  Electronic medical records, smartphone apps to measure symptoms, online checklists—they all capitalize on the fact that numbers are easy to record and store, easy to communicate to others, and satisfying to the bean counters.  They enable pharmacy benefit managers to approve drugs (or not), they enable insurers to reimburse for services (or not), and they allow pharmaceutical companies to identify and exploit new markets.  And, best of all, they turn psychiatry into a quantitative, valid science, just like every other branch of medicine.

If this grand march towards increased quantification persists, the human science of psychiatry may cease to exist.  Unless we can replace these instruments with outcome measures that truly reflect patients’ abilities and strengths, rather than pathological symptoms, psychiatry may be replaced by an impersonal world of questionnaires, checklists, and knee-jerk treatments.  In some settings, that’s what we have now.  I don’t think it’s too late to salvage the human element of what we do.  A first step might be simply to use great caution when we’re asked to give a number, measure a symptom, or perform a calculation on something that is intrinsically a subjective phenomenon.  And to remind ourselves that numbers don’t capture everything.


Another Day, Another Seroquel XR Indication?

June 1, 2011

Just when you thought the antipsychotic drug Seroquel had fully penetrated doctors’ offices and patients’ medicine chests (not to mention law offices and children’s tummies) all across America, a new clinical trial is recruiting subjects for yet another indication for this ubiquitous drug.

Technically, the trial is of Seroquel XR, not Seroquel.  (Because, you know, the two are COMPLETELY different drugs, as described in this YouTube video.)  But you get the idea.  Anything to keep the money flowing for AstraZeneca, especially after Seroquel goes generic in 2012.

Thanks to a tip from Stephany at Soulful Sepulcher, you can read all the details of this study here.  It’s called the “Quietude Study,” a trial of Seroquel XR for the treatment of agitated depression.  Specifically, they want to compare Seroquel XR (at doses up to 150 or 300 mg/day) with Lexapro (up to 20 mg/day), and the investigators predict that Seroquel XR will be more effective in the management of depression “with prominent agitation.”

Two things caught my eye right away:  First, the name of the study (“Quietude”) is obviously a play on words, since the generic name for Seroquel is quetiapine.  How cute.  I also noticed that the study is being conducted by Roger McIntyre, MD, whom I saw just yesterday on the medical website QuantiaMD giving a blatantly obvious “infomercial” for Geodon (for Quantia members, here’s the link), a competitor’s drug.  [And for more info on QuantiaMD, see Daniel Carlat’s recent post about this site.]

But let’s get more substantive, shall we?  A look at the details of this new “Quietude” study is revealing.  For one thing, the opening statement of the study’s “Purpose” is:  “Most individuals with major depressive disorder manifest clinically significant agitation.”  Really?  I’ve certainly seen cases of agitated depression, but are “most” depressed patients agitated?  Not in my experience.  Maybe when they say “agitation” they’re including patients with akathisia, an occasional side effect of some antidepressant medication.  I understand research proposals always have to start with a statement about how widespread the problem is, but this one seems a bit of a stretch.

The study listing also spells out the inclusion and exclusion criteria.  One of the inclusion criteria, alongside the typical symptomatic cutoffs (HAM-D >20 and CGI-S >4), is “significant agitation.”  That’s it.  By whose measure?  Patient report?  Clinician’s evaluation?  I’d really like to know more about how the “agitated” folks are going to be selected.

Some interesting exclusion criteria are (a) “known lack of antidepressant response to escitalopram [Lexapro]” and (b) “known lack of antidepressant response to quetiapine [Seroquel].”  So they’re enriching their population for individuals who are not known to have tried Lexapro or Seroquel and failed to respond.  Perhaps this isn’t a huge problem, but Seroquel XR is not the greatest antidepressant (see below), and these exclusion criteria will probably weed out the patients who gained weight on Seroquel or “felt like a zombie”—two common complaints with this medication which often lead to its discontinuation.
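
Put the posted criteria together and the entry funnel looks something like the sketch below—my own paraphrase, with invented field names, and with “significant agitation” left every bit as undefined as it is in the protocol.

```python
# A paraphrase of the posted Quietude entry criteria as a screening function.
# Field names are invented; "significant agitation" is an undefined judgment
# call here, exactly as it appears to be in the study listing.

def eligible(hamd: int, cgi_s: int, significant_agitation: bool,
             failed_escitalopram: bool, failed_quetiapine: bool) -> bool:
    """Rough sketch of who gets into the trial."""
    meets_severity = hamd > 20 and cgi_s > 4                        # symptomatic inclusion cutoffs
    known_nonresponder = failed_escitalopram or failed_quetiapine   # the exclusions
    return meets_severity and significant_agitation and not known_nonresponder

# By whose measure is the agitation "significant"? The function can't tell you either.
print(eligible(hamd=24, cgi_s=5, significant_agitation=True,
               failed_escitalopram=False, failed_quetiapine=False))   # True
```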

But what disturbs me the most about this trial is the fact that it seems entirely unnecessary.  The fact of the matter is that Seroquel XR is–for better or for worse—already used for many cases of “agitated depression.”  And it’s not even entirely off-label, because Seroquel XR is approved for bipolar depression and for the adjunctive treatment of MDD (whether it actually works as an antidepressant is another story).  As mentioned above, quetiapine is a sedating drug in many patients, so of course a psychiatrist is going to think about it for “agitated depression.”  (Unless he/she wants to take the time to determine the causes of the patient’s agitation, which, unfortunately, often does not happen.)

But there’s more.  When Seroquel XR was first introduced, with much fanfare, for the treatment of depression, I remember being somewhat skeptical and asking my local AstraZeneca sales force whether it had any “antidepressant effect” other than its well-known sedative and appetite-enhancing effects (because, after all, those are two of the symptoms of depression typically measured in clinical trials).  I was reassured that, no, no, Seroquel XR is more than that; it acts on all depressive symptoms, probably through its metabolite norquetiapine.

In fact, a year ago I emailed a local “key opinion leader” who spoke extensively for AstraZeneca and was told the following (emphasis added; BTW, if it’s too technical for you, don’t worry, go ahead and skip):

I think the concept is that quetiapine at low doses (25-50-100 mg) is almost entirely anti-histaminergic and anti-muscarinic. However at the 150-300 mg doses there is significant norepinephrine transporter inhibition from the metabolite norquetiapine as well as 5HT1A agonism and 5HT2A and 5HT2C antagonism which all increase dopamine. Thus at the higher doses of 150-300 mg there is significant antidepressant activity but also increases in frontal, limbic and striatal dopamine which can be stimulatory (as well as having anti-depressant effects). At the 600-800 mg doses there is significant D-2 antagonism which is where the antipsychotic effect (D-2 antagonism) kicks in. Thus as the doses escalate patients go from pure sedation to antidepressant to antipsychotic effects.  At least this is the theory based on the dose-related relative strengths and affinities for its respective receptors.
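
Boiled down to a lookup table, his argument runs roughly as follows. To be clear, this is purely a restatement of the dose-tier theory in that email, not anything I would present as established pharmacology.

```python
# A restatement of the dose-tier theory from the email above—nothing more.
# These are the speaker's claims, laid out so the logic is visible at a glance.

QUETIAPINE_DOSE_THEORY = {
    "25-100 mg":  "mostly antihistaminergic/antimuscarinic -> sedation",
    "150-300 mg": "norquetiapine NET inhibition + 5HT1A agonism + 5HT2A/2C antagonism -> 'antidepressant'",
    "600-800 mg": "significant D2 antagonism -> antipsychotic effect",
}

for dose, claim in QUETIAPINE_DOSE_THEORY.items():
    print(f"{dose}: {claim}")
```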

The premise of the “Quietude” study seems to be telling us something different—even though it’s what we already knew if we only paid attention to what our patients tell us (and not necessarily to AstraZeneca): namely, that the primary advantage of intermediate-dose Seroquel XR does seem to be its sedative effect.  And this might indeed make it effective for the treatment of the “psychological and physical restlessness” associated with depression.

Anyway, because the trial is being run only at Canadian sites, I won’t have to worry about whether to refer my patients to it.  But it’s also a trial whose results I won’t exactly be anxiously awaiting.


Off-Label Meds: Caveat Prescriptor

March 13, 2011

In medicine we say that a drug is “indicated” for a given disorder when it has gone through rigorous testing for that condition. Typically, a drug company will perform clinical trials in which they select patients with the condition, give them the new drug, and compare them with similar patients who are given a placebo (or an established drug which is already used to treat the disease). In the US, when the FDA approves a drug, the drug company is then permitted to advertise it in magazines, journals, TV, the internet, and directly to doctors, but they must specify its “approved” use.

In the past few years, several drug companies have found themselves in trouble after accusations of marketing their drugs for off-label indications. Total fines have reached into the billions, and many companies have vowed to change their marketing practices in response.

It should be emphasized, however, that doctors use drugs off-label very frequently. This is particularly true in psychiatry, where an estimated 31% of all prescriptions are off-label. Some familiar examples include trazodone (an antidepressant) for insomnia or beta blockers (originally approved for hypertension and heart failure) for anxiety. Furthermore, some very common symptoms and conditions, such as personality disorders, impulsivity, nightmares, eating disorders, and PTSD, have no (or few) “indicated” medications, and yet we often treat them with medications, sometimes with great success. And since the FDA restricts its approvals to medications and devices, even psychotherapy—something we routinely recommend and “prescribe” to patients—is, technically, off-label.

One colleague took this one step further and explained that virtually any psychiatric drug which has been prescribed for more than 8 or 12 weeks is being used “off-label” since the studies to obtain FDA approval are generally no longer than that. Admittedly, that’s nitpicking, but it does demonstrate how the FDA approval process works with a very limited amount of clinical data.

Drug companies that deliberately market their drugs for off-label indications are indeed guilty of misrepresenting their products and deceiving doctors and consumers. But to blame them for bad patient outcomes conveniently ignores the one missing link in the process: the doctor who decided to prescribe the drug in the first place. Whether we like it or not, drug companies are businesses, they sell products, and as with everything else in our consumerist society, the buyer (in this case the doctor) must beware.

Here’s an example. A new drug called Latuda came to market in February, FDA-approved for the treatment of schizophrenia. Until a few months ago, most community psychiatrists (like me) knew absolutely nothing about this drug.

If a sales rep visits my office tomorrow and tells me that it’s approved for schizophrenia and for bipolar disorder, she is obviously giving me false information. This is not good. But how I choose to use the drug is up to me. It’s my responsibility—and my duty, frankly—to look at the data for schizophrenia (which exists, and which is available on the Latuda web site and in a few articles in the literature). If I look for data on bipolar disorder, I’ll find that it doesn’t exist.

That’s just due diligence. After reviewing the data, I may conclude that Latuda looks like a lousy drug for schizophrenia (I’ll save those comments for later). However, I might find that it has some potential benefit in bipolar disorder, maybe for particular symptoms or in a certain subgroup of patients. Or, I might find some completely unrelated condition in which it might be effective. If so, I should be able to go ahead and use it—assuming I’ve exhausted the established, accepted, and less costly treatments already. Convincing my patient’s insurance company to pay for it would be another story… but I digress.

I don’t mean to imply that marketing has no place in medicine and that all decisions should be made by the physician with the “purity” of data alone. In fact, for a new drug like Latuda, sales reps and advertising materials are effective vehicles for disseminating information to physicians, and most of the time it is done responsibly. I just think doctors need to evaluate the messages more critically (isn’t that something we all learned to do in med school?). Fortunately, most sales reps are willing to engage doctors in that dialogue and help us to obtain hard data if we request it.

The bottom line is this: psychiatric disorders are complicated entities, and medications may have potential far beyond their “approved” indications. While I agree that pharmaceutical marketing should stick to proven data and not anecdotal evidence or hearsay, doctors should be permitted to use drugs in the ways they see fit, regardless of marketing. But—and this is critical—doctors have a responsibility to evaluate the data for both unapproved and approved indications, and should be able to defend their treatment decisions. Pleading ignorance, or crying “the rep told me so,” is just thoughtless medicine.


Are Your Thoughts Still Racing, Jiefang?

March 10, 2011

A recent Vanity Fair article described the trend by American pharmaceutical companies to conduct more clinical trials outside of the United States and Western Europe.  The writer and bioethicist Carl Elliott also detailed this trend in his book White Coat, Black Hat, and it has recently received increasing scrutiny in the media.  While much attention has focused on the ethical concerns of overseas clinical trials, I’m avoiding that hot topic for now and arguing that we should pay some attention to questions of clinical relevance.

This is no small matter.  The VF article reports that one-third of clinical trials by the 20 largest US-based pharmaceutical companies are conducted exclusively at foreign sites, and medications destined for use in the U.S. have been tested in almost 60,000 clinical trials in 173 countries since 2000.  The reasons for “outsourcing” clinical trials are not surprising:  cheaper costs, less restrictive regulations, more accessible subjects, and patients who are less likely to have taken other medications in the past, thus yielding a more “pure” population and, hopefully, more useful data.

At first glance, overseas clinical trials really shouldn’t be much of a problem.  The underlying biology of a disease should have nothing to do with where the diseased person lives.  Hypertension and hepatitis are probably quite similar, if not identical, whether the patient is in Boston or Bangalore.  An article in this month’s Archives of General Psychiatry appears to reinforce this concept, showing that rates of bipolar disorder—as well as its “severity” and “impact”—are similar in a variety of different international settings.  Hence, if you were to ask me where I’d do a clinical trial for a new bipolar medication, I’d probably go where it would cost less to do so (i.e., overseas), too.

But is this appropriate?  Just because we can find “bipolar disorder” in the U.S. and in Uganda, does this mean we should treat it the same way?  Over at the blog 1boringoldman, Mickey has uncovered data showing that trials of Seroquel (an atypical antipsychotic) for bipolar depression are being conducted in 11 Chinese provinces.  You can search the data yourself at clinicaltrials.gov (a truly fantastic tool, BTW) and find that many other psychiatric drugs are being tested worldwide, for a wide range of indications.

To a lowly community psychiatrist like me, this raises a few red flags.  As I learned in my transcultural psychiatry lectures in med school and residency, the manifestations of disease—and the recommended treatment approaches—can vary dramatically based on the culture in which the disease appears.  Even in my own practice, “bipolar disorder” varies greatly from person to person:  a bipolar patient from a wealthy San Francisco suburb experiences her disease very differently from the patient from the poverty-stricken neighborhoods of East Oakland.  A good psychiatrist must respect these differences.  Or so I was taught.

In his book Crazy Like Us, author Ethan Watters gives numerous examples of this phenomenon on a much larger scale.  He argues that the cultural dimensions that frame a disease have a profound impact on how a patient experiences and interprets his or her symptoms.  He also describes how patients’ expectations of treatments (drugs, “talk” therapy) differ from culture to culture, and can determine the success or failure of a treatment.

Let’s say you asked me to treat Jiefang, a young peasant woman with bipolar disorder from Guangdong Province.  Before doing so, I would want to read up on her community’s attitudes towards mental illness (and try to understand what “bipolar disorder” itself means in her community, if anything), learn about the belief systems in place regarding her signs and symptoms, and understand her goals for treatment.  Before prescribing Seroquel (or any other drug, for that matter), I’d like to know how she feels about using a chemical substance which might affect her feelings, emotions, and behavior.  I imagine it would take me a while before Jiefang and I felt comfortable proceeding with this approach.

There’s just something fishy about scientists from a multinational Contract Research Organization hired by AstraZeneca, flying into Guangdong with their white coats and clipboards, recruiting a bunch of folks with (western-defined) bipolar disorder just like Jiefang, giving them various doses of Seroquel, measuring their responses on bipolar rating scales (developed by westerners, of course), and submitting those data for FDA approval.

I sure hope I’m oversimplifying things.  Then again, maybe not.  When the next me-too drug is “FDA approved” for schizophrenia or bipolar depression (or, gasp, fibromyalgia), how can I be sure that it was tested on patients like the ones in my practice?  Or tested at all on patients who even know what those diagnoses mean?  There’s no way to tell anymore.

The “pathoplastic” features of disease—what Watters calls the “coloring and content”—make psychiatry fascinating.  But they’re often more than just details; they include the ways in which patients are influenced by public beliefs and cultural ideas, the forces to which they attribute their symptoms, and the faith (or lack thereof) they put into medications.  These factors must be considered in any attempt to define and treat mental illness.

Clinical trials have never resembled the “real world.”  But designing clinical trials that resemble our target patients even less—simply for the sake of bringing a drug to market more quickly and cheaply—is not just unreal, but deceptive and potentially dangerous.

