
Measuring The Immeasurable

February 9, 2012

Is psychiatry a quantitative science?  Should it be?

Some readers might say that this is a ridiculous question.  Of course it should be quantitative; that’s what medicine is all about.  Psychiatry’s problem, they argue, is that it’s not quantitative enough.  Psychoanalysis—that most qualitative of “sciences”—never did anyone any good, and most psychotherapy is, likewise, just a bunch of hocus pocus.  A patient saying he feels “depressed” means nothing unless we can measure how depressed he is.  What really counts is a number—a score on a screening tool or checklist, frequency of a given symptom, or the blood level of some biomarker—not some silly theory about motives, drives, or subconscious conflicts.

But sometimes measurement can mislead us.  If we’re going to measure anything, we need to make sure it’s something worth measuring.

By virtue of our training, physicians are fond of measuring things.  What we don’t realize is that the act of measurement itself leads to an almost immediate bias.  As we assign numerical values to our observations, we start to define values as “normal” or “abnormal.”  And medical science dictates that we should make things “normal.”  When I oversee junior psychiatry residents or medical students, their patient presentations are often filled with such statements as “Mr. A slept for 5 hours last night” or “Ms. B ate 80% of her meals,” or “Mrs. C has gone two days without endorsing suicidal ideation,” as if these are data points to be normalized, just as potassium levels and BUN/Cr ratios need to be normalized in internal medicine.

The problem is, they’re not potassium levels or BUN/Cr ratios.  When those numbers are “abnormal,” there’s usually some underlying pathology which we can discover and correct.  In psychiatry, what’s the pathology?  For a woman who attempted suicide two days ago, does it really matter how much she’s eating today?  Does it really matter whether an acutely psychotic patient (on a new medication, in a chaotic inpatient psych unit with nurses checking on him every hour) sleeps 4 hours or 8 hours each night?  Even the questions that we ask patients—“are you still hearing voices?”, “how many panic attacks do you have each week?” and the overly simplistic “can you rate your mood on a scale of 1 to 10, where 1 is sad and 10 is happy?”— attempt to distill a patient’s overall subjective experience into an elementary quantitative measurement or, even worse, into a binary “yes/no” response.

Clinical trials take measurement to an entirely new level.  In a clinical trial, often what matters is not a patient’s overall well-being or quality of life (although, to be fair, there are ways of measuring this, too, and investigators are starting to look at this outcome measure more closely), but rather a HAM-D score, a MADRS score, a PANSS score, a Y-BOCS score, a YMRS score, or any one of an enormous number of other assessment instruments.  Granted, if I had to choose, I’d take a HAM-D score of 4 over a score of 24 any day, but does a 10- or 15-point decline (typical in some “successful” antidepressant trials) really tell you anything about an individual’s overall state of mental health?  It’s hard to say.
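
To make that concrete, here’s a toy sketch (in Python, with invented numbers—nothing from any actual trial) of how a depression trial reduces each patient to a change score and a “responder” label; a drop of 50% or more from baseline is a common convention for “response” on the HAM-D.  Two patients with identical 12-point declines can end up in very different clinical places, yet they count the same toward the trial’s mean:

```python
# A toy sketch (hypothetical scores) of how a trial reduces a subject to
# a change score and a responder label.  A >=50% drop from baseline is a
# common convention for "response" on the HAM-D.

def ham_d_outcome(baseline: int, endpoint: int) -> dict:
    """Summarize a single subject the way a trial report would."""
    change = baseline - endpoint              # points of "improvement"
    responder = change >= 0.5 * baseline      # conventional response cutoff
    return {"baseline": baseline, "endpoint": endpoint,
            "change": change, "responder": responder}

# Two hypothetical subjects with the same 12-point decline: one is left
# nearly asymptomatic, the other still markedly depressed.
print(ham_d_outcome(baseline=16, endpoint=4))   # change=12, responder=True
print(ham_d_outcome(baseline=28, endpoint=16))  # change=12, responder=False
```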

One widely used instrument, the Clinical Global Impression scale, endeavors to measure the seemingly immeasurable.  Developed in 1976 and still in widespread use, the CGI scale has three parts:  the clinician evaluates (1) the severity of the patient’s illness relative to other patients with the same diagnosis (CGI-S); (2) how much the patient’s illness has improved relative to baseline (CGI-I); and (3) the efficacy of treatment.  (See here for a more detailed description.)  It is incredibly simple.  Basically, it’s just a way of asking, “So, doc, how do you think this patient is doing?” and assigning a number to it.  In other words, subjective assessment made objective.
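
Just to show how simple it really is, here’s a rough sketch of the CGI’s structure (the severity and improvement anchors are paraphrased from the standard 1-to-7 scales, and I’ve left out the efficacy index).  The entire “instrument” is one clinician judgment per item, mapped onto an integer:

```python
# A rough sketch of the CGI's structure (efficacy index omitted).
# Anchors are paraphrased from the standard 1-to-7 scales.

CGI_SEVERITY = {1: "normal, not at all ill", 2: "borderline ill", 3: "mildly ill",
                4: "moderately ill", 5: "markedly ill", 6: "severely ill",
                7: "among the most extremely ill"}

CGI_IMPROVEMENT = {1: "very much improved", 2: "much improved",
                   3: "minimally improved", 4: "no change", 5: "minimally worse",
                   6: "much worse", 7: "very much worse"}

def cgi(severity: int, improvement: int) -> str:
    """Render the clinician's global impression: 'So, doc, how is this patient doing?'"""
    return (f"CGI-S {severity} ({CGI_SEVERITY[severity]}), "
            f"CGI-I {improvement} ({CGI_IMPROVEMENT[improvement]})")

print(cgi(severity=5, improvement=2))  # markedly ill, but much improved
```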

The problem is, the CGI has been criticized precisely for that reason—it’s too subjective.  As such, it is almost never used as a primary outcome measure in clinical trials.  Any pharmaceutical company that tries to get a drug approved on the basis of CGI improvement alone would probably be laughed out of the halls of the FDA.  But what’s wrong with subjectivity?  Isn’t everything that counts subjective, when it really comes down to it?  Especially in psychiatry?  The depressed patient who emerges from a mood episode doesn’t describe himself as “80% improved,” he just feels “a lot better—thanks, doc!”  The psychotic patient doesn’t necessarily need the voices to disappear, she just needs a way to accept them and live with them, if at all possible.  The recovering addict doesn’t think in terms of “drinking days per month,” he talks instead of “enjoying a new life.”

Nevertheless, measurement is not a fad; it’s here to stay.  And as the old saying goes, resistance is futile.  Electronic medical records, smartphone apps to measure symptoms, online checklists—they all capitalize on the fact that numbers are easy to record and store, easy to communicate to others, and satisfying to the bean counters.  They enable pharmacy benefit managers to approve drugs (or not), they enable insurers to reimburse for services (or not), and they allow pharmaceutical companies to identify and exploit new markets.  And, best of all, they turn psychiatry into a quantitative, valid science, just like every other branch of medicine.

If this grand march towards increased quantification persists, the human science of psychiatry may cease to exist.  Unless we can replace these instruments with outcome measures that truly reflect patients’ abilities and strengths, rather than pathological symptoms, psychiatry may be replaced by an impersonal world of questionnaires, checklists, and knee-jerk treatments.  In some settings, that’s what we have now.  I don’t think it’s too late to salvage the human element of what we do.  A first step might be simply to use great caution whenever we’re asked to give a number, measure a symptom, or perform a calculation on something that is intrinsically a subjective phenomenon.  And to remind ourselves that numbers don’t capture everything.


How Abilify Works, And Why It Matters

September 13, 2011

One lament of many in the mental health profession (psychiatrists and pharmascolds alike) is that we really don’t know enough about how our drugs work.  Sure, we have hypothetical mechanisms, like serotonin reuptake inhibition or NMDA receptor antagonism, which we can observe in a cell culture dish or (sometimes) in a PET study, but how these mechanisms translate into therapeutic effect remains essentially unknown.

As a clinician, I have noticed certain medications being used more frequently over the past few years.  One of these is Abilify (aripiprazole).  I’ve used Abilify for its approved indications—psychosis, acute mania, maintenance treatment of bipolar disorder, and adjunctive treatment of depression.  It frequently (but not always) works.  But I’ve also seen Abilify prescribed for a panoply of off-label indications: “anxiety,” “obsessive-compulsive behavior,” “anger,” “irritability,” and so forth.  Can one medication really do so much?  And if so, what does this say about psychiatry?

From a patient’s perspective, the Abilify phenomenon might best be explained by what it does not do.  If you ask patients, they’ll say that—in general—they tolerate Abilify better than other atypical antipsychotics.  It’s not as sedating as Seroquel, it doesn’t cause the same degree of weight gain as Zyprexa, and the risk of developing uncomfortable movement disorders or elevated prolactin is lower than with Risperdal.  To be sure, many people do experience side effects of Abilify, but as far as I can tell, it’s an acceptable drug to most people who take it.

Abilify is a unique pharmacological animal.  Like other atypical antipsychotics, it binds to several different neurotransmitter receptors; this “signature” theoretically accounts for its therapeutic efficacy and side effect profile.  But unlike others in its class, it doesn’t block dopamine (specifically, dopamine D2) or serotonin (specifically, 5-HT1A) receptors.  Rather, it’s a partial agonist at those receptors.  It can activate those receptors, but not to the full biological effect.  In lay terms, then, it can both enhance dopamine and serotonin signaling where those transmitters are deficient, and inhibit signaling where they’re in excess.

Admittedly, that’s a crude oversimplification of Abilify’s effects, and an inadequate description of how a “partial agonist” works.  Nevertheless, it’s the convenient shorthand that most psychiatrists carry around in their heads:  with respect to dopamine and serotonin (the two neurotransmitters which, at least in the current vernacular, are responsible for a significant proportion of pathological behavior and psychiatric symptomatology), Abilify is not an all-or-none drug.  It’s not an on-off switch. It’s more of a “stabilizer,” or, in the words of Stephen Stahl, a “Goldilocks drug.”
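
If it helps to see that shorthand in numbers, here’s a toy model—purely illustrative parameters (an assumed intrinsic activity around 0.3 and arbitrary binding constants), not real aripiprazole pharmacology—of a partial agonist competing with dopamine at the same receptor.  When dopamine tone is low, the drug pulls net signaling up toward its own ceiling; when tone is high, it pulls signaling back down toward that same ceiling:

```python
# A toy "Goldilocks" model of a D2 partial agonist.  The binding constants
# and the intrinsic activity (~0.3) are illustrative assumptions, not real
# aripiprazole pharmacology.

def d2_signal(dopamine: float, drug: float,
              kd_da: float = 1.0, kd_drug: float = 1.0,
              intrinsic_activity: float = 0.3) -> float:
    """Net receptor signaling when dopamine (a full agonist) and a partial
    agonist compete for the same site (simple competitive-binding model)."""
    da_term, drug_term = dopamine / kd_da, drug / kd_drug
    denom = 1.0 + da_term + drug_term
    occ_da, occ_drug = da_term / denom, drug_term / denom
    return occ_da * 1.0 + occ_drug * intrinsic_activity

for da in (0.1, 1.0, 10.0):               # low, normal, and high dopamine tone
    before = d2_signal(da, drug=0.0)
    after = d2_signal(da, drug=5.0)
    print(f"dopamine tone {da:>4}: signal {before:.2f} -> {after:.2f}")
# Low tone gets boosted, high tone gets damped; everything is pulled toward
# the partial agonist's own ceiling.
```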

Thus, Abilify can be seen, at the same time, as both an antipsychotic, and not an antipsychotic.  It’s both an antidepressant, and not an antidepressant.  And when you have a drug that is (a) generally well tolerated, (b) seems to work by “stabilizing” two neurotransmitter systems, and (c) resists conventional classification in this way, it opens the floodgates for all sorts of potential uses in psychiatry.

Consider the following conditions, all of which are subjects of Abilify clinical trials currently in progress (thanks to clinicaltrials.gov):  psychotic depression; alcohol dependence; “aggression”; improvement of insulin sensitivity; antipsychotic-induced hyperprolactinemia; cocaine dependence; Tourette’s disorder; postpartum depression; methamphetamine dependence; obsessive-compulsive disorder (OCD); late-life bipolar disorder; post-traumatic stress disorder (PTSD); cognitive deficits in schizophrenia; autism spectrum disorders; fragile X syndrome; tardive dyskinesia; “subsyndromal bipolar disorder” (whatever that is) in children; conduct disorder; ADHD; prodromal schizophrenia; “refractory anxiety”; psychosis in Parkinson’s disease; anorexia nervosa; substance-induced psychosis; trichotillomania; and Alzheimer’s-related psychosis.

Remember, these are the existing clinical trials of Abilify.  Each one has earned IRB approval and funding support.  In other words, they’re not simply the fantasies of a few rogue psychiatrists; they’re supported by at least some preliminary evidence, or at least a very plausible hypothesis.  The conclusion one might draw from this is that Abilify is truly a wonder drug, showing promise in nearly all of the conditions we treat as psychiatrists.  We’ll have to wait for the clinical trial results, but what we can say at this point is that a drug which works as a “stabilizer” of two very important neurotransmitter systems can be postulated to work in virtually any way a psychopharmacologist might want.

But even if these trials are negative, my prediction is that this won’t stop doctors from prescribing Abilify for each of the above conditions.  Why?  Because the mechanism of Abilify allows for such elegant explanations of pathology (“we need to tune down the dopamine signal to get rid of those flashbacks” or “the serotonin 1A effect might help with your anxiety” – yes, I’ve heard both of these in the last week), that it would be anathema, at least to current psychiatric practice, not to use it in this regard.

This fact alone should lead us to ask what this says about psychiatry as a whole.  The fact that one drug is prescribed so widely—owing to its relatively nonspecific effects and a good deal of creative psychopharmacology on the part of doctors like me—and is so broadly accepted by patients, should call into question our hypotheses about the pathophysiology of mental illness, and how psychiatric disorders are distinguished from one another.  It should challenge our theories of neurotransmitters and receptors and how their interactions underlie specific symptoms.  And it should give us reason to question whether the “stories” we tell ourselves and our patients carry more weight than the medications we prescribe.


Are Your Thoughts Still Racing, Jiefang?

March 10, 2011

A recent Vanity Fair article described the trend among American pharmaceutical companies toward conducting more of their clinical trials outside of the United States and Western Europe.  The writer and bioethicist Carl Elliott also detailed this trend in his book White Coat, Black Hat, and it has recently received increasing scrutiny in the media.  While much attention has focused on the ethical concerns of overseas clinical trials, I’m avoiding that hot topic for now and arguing that we should pay some attention to questions of clinical relevance.

This is no small matter.  The VF article reports that one-third of clinical trials by the 20 largest US-based pharmaceutical companies are conducted exclusively at foreign sites, and medications destined for use in the U.S. have been tested in almost 60,000 clinical trials in 173 countries since 2000.  The reasons for “outsourcing” clinical trials are not surprising:  cheaper costs, less restrictive regulations, more accessible subjects, and patients who are less likely to have taken other medications in the past, thus yielding a more “pure” population and, hopefully, more useful data.

At first glance, overseas clinical trials really shouldn’t be much of a problem.  The underlying biology of a disease should have nothing to do with where the diseased person lives.  Hypertension and hepatitis are probably quite similar, if not identical, whether the patient is in Boston or Bangalore.  An article in this month’s Archives of General Psychiatry appears to reinforce this concept, showing that rates of bipolar disorder—as well as its “severity” and “impact”—are similar in a variety of different international settings.  Hence, if you were to ask me where I’d do a clinical trial for a new bipolar medication, I’d probably go where it would cost less to do so (i.e., overseas), too.

But is this appropriate?  Just because we can find “bipolar disorder” in the U.S. and in Uganda, does this mean we should treat it the same way?  Over at the blog 1boringoldman, Mickey has uncovered data showing that trials of Seroquel (an atypical antipsychotic) for bipolar depression are being conducted in 11 Chinese provinces.  You can search the data yourself at clinicaltrials.gov (a truly fantastic tool, BTW) and find that many other psychiatric drugs are being tested worldwide, for a wide range of indications.

To a lowly community psychiatrist like me, this raises a few red flags.  As I learned in my transcultural psychiatry lectures in med school and residency, the manifestations of disease—and the recommended treatment approaches—can vary dramatically based on the culture in which the disease appears.  Even in my own practice, “bipolar disorder” varies greatly from person to person:  a bipolar patient from a wealthy San Francisco suburb experiences her disease very differently from the patient from the poverty-stricken neighborhoods of East Oakland.  A good psychiatrist must respect these differences.  Or so I was taught.

In his book Crazy Like Us, author Ethan Watters gives numerous examples of this phenomenon on a much larger scale.  He argues that the cultural dimensions that frame a disease have a profound impact on how a patient experiences and interprets his or her symptoms.  He also describes how patients’ expectations of treatments (drugs, “talk” therapy) differ from culture to culture, and can determine the success or failure of a treatment.

Let’s say you asked me to treat Jiefang, a young peasant woman with bipolar disorder from Guangdong Province.  Before doing so, I would want to read up on her community’s attitudes towards mental illness (and try to understand what “bipolar disorder” itself means in her community, if anything), learn about the belief systems in place regarding her signs and symptoms, and understand her goals for treatment.  Before prescribing Seroquel (or any other drug, for that matter), I’d like to know how she feels about using a chemical substance which might affect her feelings, emotions, and behavior.  I imagine it would take me a while before Jiefang and I felt comfortable proceeding with this approach.

There’s just something fishy about scientists from a multinational Contract Research Organization hired by AstraZeneca, flying into Guangdong with their white coats and clipboards, recruiting a bunch of folks with (western-defined) bipolar disorder just like Jiefang, giving them various doses of Seroquel, measuring their responses on bipolar rating scales (developed by westerners, of course), and submitting those data for FDA approval.

I sure hope I’m oversimplifying things.  Then again, maybe not.  When the next me-too drug is “FDA approved” for schizophrenia or bipolar depression (or, gasp, fibromyalgia), how can I be sure that it was tested on patients like the ones in my practice?  Or tested at all on patients who know what those diagnoses even mean?  There’s no way to tell anymore.

The “pathoplastic” features of disease—what Watters calls the “coloring and content”—make psychiatry fascinating.  But they’re often more than just details; they include the ways in which patients are influenced by public beliefs and cultural ideas, the forces to which they attribute their symptoms, and the faith (or lack thereof) they put into medications.  These factors must be considered in any attempt to define and treat mental illness.

Clinical trials have never resembled the “real world.”  But designing clinical trials that resemble our target patients even less—simply for the sake of bringing a drug to market more quickly and cheaply—is not just unreal, but deceptive and potentially dangerous.


The Placebo Effect: It Just Gets Better and Better

February 13, 2011

The placebo response is the bane of clinical research.  Placebos, by definition, are inert, inactive compounds that should have absolutely no effect on a patient’s symptoms, although they very frequently do.  Researchers compare new drugs to placebos so that any difference in outcome between drug and placebo can be attributed to the drug rather than to any unrelated factor.

In psychiatry, placebo effects are usually quite robust.  Trials of antidepressants, antianxiety medications, mood stabilizers, and other drugs typically show large placebo response rates.  A new paper by Bruce Kinon and his colleagues in this month’s Current Opinion in Psychiatry, however, reports that patients with schizophrenia also show some improvement on placebo.  Moreover, placebos seem to have become more effective over the last 20 years!

Now, if there’s any mental illness in which you would not expect to see a placebo response, it’s schizophrenia.  Other psychiatric disorders, one might argue, involve cognitions, beliefs, expectations, feelings, etc.—all of which could conceivably improve when a patient believes an intervention (yes, even a placebo pill) might make him feel better.  But schizophrenia, by definition, is characterized by a distorted sense of reality, impaired thought processes, an inability to grasp the differences between the external world and the contents of one’s mind, and, frequently, the presence of bizarre sensory phenomena that can only come from the aberrant firing of the schizophrenic’s neurons.  How could these symptoms, which almost surely arise from neurochemistry gone awry, respond to a sugar pill?

Yet respond they do.  And not only do subjects in clinical trials get better with placebo, but the placebo response has been steadily improving over the last 20 years!  Kinon and his colleagues summarized placebo response rates from various antipsychotic trials going back to 1993 and found a clear, gradual rise in placebo-arm improvement over the last 15-20 years.

Very mysterious stuff.  Why would patients respond better to placebo today than in years past?  Well, as it turns out (and is explored in more detail in this article), the answer may lie not in the fact that schizophrenics are being magically cured by a placebo, but rather in the fact that they have greater expectations for improvement now than in the past (although this is hard to believe for schizophrenia), or that clinical researchers have greater incentives for including patients in trials and therefore inadequately screen their subjects.

In support of the latter argument, Kinon and his colleagues showed that in a recent antidepressant trial (in which some arbitrary minimum depression score was required for subjects to be included), researchers routinely rated their subjects as more depressed than the subjects rated themselves at the beginning of the trial—the “screening phase.”  Naturally, then, subjects showed greater improvement at the end of the trial, regardless of whether they received an antidepressant or placebo.
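
Here’s a little back-of-the-envelope simulation (all numbers invented) of that argument: pad the screening scores just enough to clear an entry cutoff, rate everyone honestly afterward, and several points of “improvement” appear out of thin air—in the drug arm and the placebo arm alike:

```python
import random

# A small simulation of the screening-inflation argument; all numbers are
# invented.  Raters pad scores at screening to clear an entry cutoff; later
# ratings are honest, so scores drift back toward true severity.

random.seed(0)
ENTRY_CUTOFF = 20     # hypothetical minimum depression score for enrollment
INFLATION = 4         # hypothetical rater padding at the screening visit
NOISE = 2             # visit-to-visit rating noise

true_severity = [random.randint(14, 24) for _ in range(200)]

enrolled = []         # (true severity, recorded baseline score)
for s in true_severity:
    screening_score = s + INFLATION + random.randint(-NOISE, NOISE)
    if screening_score >= ENTRY_CUTOFF:
        enrolled.append((s, screening_score))

# Endpoint ratings with no treatment effect at all: true severity plus noise.
baseline = sum(score for _, score in enrolled) / len(enrolled)
endpoint = sum(s + random.randint(-NOISE, NOISE) for s, _ in enrolled) / len(enrolled)
print(f"n={len(enrolled)}: mean score {baseline:.1f} at baseline -> {endpoint:.1f} at endpoint")
# Several points of apparent "improvement" with zero effect of any treatment.
```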

A more cynical argument for why antipsychotic drugs don’t “separate from placebo” is because they really aren’t that much better than placebo (for an excellent series of posts deconstructing the trials that led to FDA approval of Seroquel, and showing how results may have been “spun” in Seroquel’s favor, check out 1BoringOldMan).
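
Either way, the arithmetic is unforgiving.  A toy example (made-up numbers, not Kinon’s data): if the drug arm improves by the same amount year after year while the placebo arm creeps upward, the drug’s apparent advantage—the drug-placebo separation—quietly evaporates:

```python
# Made-up numbers, not data from any actual trial: what a rising placebo
# response does to the apparent drug effect (the drug-placebo separation).

trials = [
    # (year, mean improvement on drug, mean improvement on placebo)
    (1995, 15.0, 4.0),
    (2005, 15.0, 8.0),
    (2010, 15.0, 12.0),
]

for year, drug, placebo in trials:
    print(f"{year}: drug-placebo separation = {drug - placebo:.0f} points")
# The drug arm never changes, yet its apparent advantage shrinks from 11 to 3.
```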

This is an important topic that deserves much more attention.  Obviously, researchers and pharmaceutical companies want their drugs to look as good as possible, and want placebo responses to be nil (or worse than nil).  In fact, Kinon and his colleagues are all employees of Eli Lilly, manufacturer of Zyprexa and other drugs they’d like to bring to market, so they have a clear interest in this phenomenon.

Maybe researchers do “pad” their studies to include as many patients as they can, including some whose symptoms are not severe.  Maybe new antipsychotics aren’t as effective as we’d like to believe them to be.  Or maybe schizophrenics really do respond to a “placebo effect” the same way a depressed person might feel better simply by thinking they’re taking a drug that will help.  Each of these is a plausible explanation.

For me, however, a much bigger question arises: what exactly are we doing when we evaluate a schizophrenic patient and prescribe an antipsychotic?  When I see a patient whom I think may be psychotic, do I (unconsciously) ask questions that lead me to that diagnosis?  Do I look for symptoms that may not exist?  Does it make sense for me to prescribe an antipsychotic when a placebo might do just as well?  (See my previous post on the “conscious” placebo effect.)  If a patient “responds” to a drug, why am I (and the patient) so quick to attribute it to the effect of the medication?

I’m glad that pharmaceutical companies are paying attention to this issue and developing ways to tackle these questions.  Unfortunately, because their underlying goal is to make a drug that looks as different from placebo as possible (to satisfy the shareholders, you know), I question whether their solutions will be ideal.  As with everything in medicine, though, it’s the clinician’s responsibility to evaluate the studies critically—and to evaluate their own patients’ responses to treatment in an unbiased fashion—and not to give credit where credit isn’t due.

