Measuring The Immeasurable

February 9, 2012

Is psychiatry a quantitative science?  Should it be?

Some readers might say that this is a ridiculous question.  Of course it should be quantitative; that’s what medicine is all about.  Psychiatry’s problem, they argue, is that it’s not quantitative enough.  Psychoanalysis—that most qualitative of “sciences”—never did anyone any good, and most psychotherapy is, likewise, just a bunch of hocus pocus.  A patient saying he feels “depressed” means nothing unless we can measure how depressed he is.  What really counts is a number—a score on a screening tool or checklist, frequency of a given symptom, or the blood level of some biomarker—not some silly theory about motives, drives, or subconscious conflicts.

But sometimes measurement can mislead us.  If we’re going to measure anything, we need to make sure it’s something worth measuring.

By virtue of our training, physicians are fond of measuring things.  What we don’t realize is that the act of measurement itself leads to an almost immediate bias.  As we assign numerical values to our observations, we start to define values as “normal” or “abnormal.”  And medical science dictates that we should make things “normal.”  When I oversee junior psychiatry residents or medical students, their patient presentations are often filled with such statements as “Mr. A slept for 5 hours last night” or “Ms. B ate 80% of her meals,” or “Mrs. C has gone two days without endorsing suicidal ideation,” as if these are data points to be normalized, just as potassium levels and BUN/Cr ratios need to be normalized in internal medicine.

The problem is, they’re not potassium levels or BUN/Cr ratios.  When those numbers are “abnormal,” there’s usually some underlying pathology which we can discover and correct.  In psychiatry, what’s the pathology?  For a woman who attempted suicide two days ago, does it really matter how much she’s eating today?  Does it really matter whether an acutely psychotic patient (on a new medication, in a chaotic inpatient psych unit with nurses checking on him every hour) sleeps 4 hours or 8 hours each night?  Even the questions that we ask patients—“are you still hearing voices?”, “how many panic attacks do you have each week?” and the overly simplistic “can you rate your mood on a scale of 1 to 10, where 1 is sad and 10 is happy?”— attempt to distill a patient’s overall subjective experience into an elementary quantitative measurement or, even worse, into a binary “yes/no” response.

Clinical trials take measurement to an entirely new level.  In a clinical trial, often what matters is not a patient’s overall well-being or quality of life (although, to be fair, there are ways of measuring this, too, and investigators are starting to look at this outcome measure more closely), but rather a HAM-D score, a MADRS score, a PANSS score, a Y-BOCS score, a YMRS score, or any one of an enormous number of other assessment instruments.  Granted, if I had to choose, I’d take a HAM-D score of 4 over a score of 24 any day, but does a 10- or 15-point decline (typical in some “successful” antidepressant trials) really tell you anything about an individual’s overall state of mental health?  It’s hard to say.

One widely used instrument, the Clinical Global Impression (CGI) scale, endeavors to measure the seemingly immeasurable.  Developed in 1976 and still in widespread use, the CGI has three parts: the clinician rates (1) the severity of the patient’s illness relative to other patients with the same diagnosis (CGI-S); (2) how much the patient’s illness has improved relative to baseline (CGI-I); and (3) the efficacy of treatment.  It is incredibly simple.  Basically, it’s just a way of asking, “So, doc, how do you think this patient is doing?” and assigning a number to the answer.  In other words, a subjective assessment made objective.

The problem is, the CGI has been criticized precisely for that reason—it’s too subjective.  As such, it is almost never used as a primary outcome measure in clinical trials.  Any pharmaceutical company that tries to get a drug approved on the basis of CGI improvement alone would probably be laughed out of the halls of the FDA.  But what’s wrong with subjectivity?  Isn’t everything that counts subjective, when it really comes down to it?  Especially in psychiatry?  The depressed patient who emerges from a mood episode doesn’t describe himself as “80% improved,” he just feels “a lot better—thanks, doc!”  The psychotic patient doesn’t necessarily need the voices to disappear, she just needs a way to accept them and live with them, if at all possible.  The recovering addict doesn’t think in terms of “drinking days per month,” he talks instead of “enjoying a new life.”

Nevertheless, measurement is not a fad; it’s here to stay.  And as the old saying goes, resistance is futile.  Electronic medical records, smartphone apps to measure symptoms, online checklists—they all capitalize on the fact that numbers are easy to record and store, easy to communicate to others, and satisfying to the bean counters.  They enable pharmacy benefit managers to approve drugs (or not), they enable insurers to reimburse for services (or not), and they allow pharmaceutical companies to identify and exploit new markets.  And, best of all, they turn psychiatry into a quantitative, valid science, just like every other branch of medicine.

If this grand march toward increased quantification persists, the human science of psychiatry may cease to exist.  Unless we can replace these instruments with outcome measures that truly reflect patients’ abilities and strengths, rather than pathological symptoms, psychiatry may be replaced by an impersonal world of questionnaires, checklists, and knee-jerk treatments.  In some settings, that’s what we have now.  I don’t think it’s too late to salvage the human element of what we do.  A first step might be simply to use great caution whenever we’re asked to assign a number, measure a symptom, or perform a calculation on something that is intrinsically subjective.  And to remind ourselves that numbers don’t capture everything.


To Treat Depression, Just Give ‘Em What They Want

February 23, 2011

A doctor’s chief task is to determine the cause of a patient’s suffering and to develop a course of treatment.  In psychiatry, the task is no different: examine the patient, determine a diagnosis, and initiate treatment.  However, “treatment” comes in many forms, and what works for one patient may not work for another.  A good psychiatrist tries to figure out which approach is ideal for the patient in his office, rather than reflexively reaching for the prescription pad and the latest drug option.

How to determine what’s the best course of action for a patient?  Recent research suggests one potentially foolproof way:  Ask him.

A paper in this month’s Psychotherapy and Psychosomatics by Mergl and colleagues shows that patient preference (that is, whether the patient prefers medications or psychotherapy) predicts how effective a treatment will be.  In their study, patients who expressed a preference for medications at the beginning of treatment had a better response to Zoloft than to group therapy, while patients who preferred therapy showed the exact opposite response.

In an even larger study published in 2009 by James Kocsis and colleagues at Weill-Cornell in New York (comparing nefazodone, an antidepressant, with a cognitive therapy approach called CBASP), a similar result was obtained:  patients with chronic major depression who entered the study expressing a preference for drug treatment had higher remission rates when receiving medication than when receiving psychotherapy, and vice versa.

The numbers were quite shocking:

Patients who preferred medication:

Treatment received | Remission rate | Avg. HAM-D at end of study (higher = more depressed)
Meds               | 45.5%          | 11.6
Therapy            | 22.2%          | 21.0

Patients who preferred therapy:

Treatment received | Remission rate | Avg. HAM-D at end of study
Meds               | 7.7%           | 18.3
Therapy            | 50.0%          | 12.1

(Baseline HAM-D scores were approximately 26-27 for all patients, consistent with major depression, and patients in this study had been depressed for more than two years.)

Thus, if a depressed patient wanted therapy but got medications instead, the chances of “remitting” (i.e., having a fully therapeutic response to nefazodone) were about 1 in 13.  But with therapy, those chances improved to 1 in 2.  Interestingly, patients who preferred therapy and got combination treatment (meds and therapy) actually did worse than with therapy alone (a remission rate of only 38.9%), leading the authors to conclude that “few patients who stated a preference for psychotherapy benefited much from the addition of medication.”
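The “1 in N” figures follow directly from the remission rates in the tables above.  A quick sketch (in Python, using the rates exactly as quoted from the study) converts each rate into approximate odds:

```python
# Remission rates from the Kocsis et al. study, as quoted in the tables above.
remission_rates = {
    ("prefers meds", "got meds"): 0.455,
    ("prefers meds", "got therapy"): 0.222,
    ("prefers therapy", "got meds"): 0.077,
    ("prefers therapy", "got therapy"): 0.500,
}

# A remission rate r corresponds to odds of roughly "1 in round(1/r)".
for (preference, treatment), rate in remission_rates.items():
    print(f"{preference}, {treatment}: remitted ~1 in {round(1 / rate)}")
```

For the therapy-preferring group, this works out to roughly 1 in 13 on medication alone versus 1 in 2 with therapy.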

It’s not surprising, at first glance, that people who “get what they want” do better.  After all, a depressed patient who insists on taking meds probably won’t get much better if he’s dragged into psychotherapy against his will, and the patient who believes that a weekly session with a therapist is exactly what she needs will probably resist just getting a pill.

But then again, isn’t depression supposed to be a hard-wired biological illness?  Shouldn’t a medication have a more profound effect, regardless of whether the patient “wants” it or not?

Apparently not.  The fact that people responded to the treatment they preferred means one of two things.  There may be two different types of depression, one that’s biological and one that’s more behavioral or “exogenous,” and people just happen to choose the appropriate treatment for their type due to some predisposition or innate tendency (self-knowledge?).  Alternatively, the “biological” basis of depression is not all it’s cracked up to be.

One question raised by these results is, why don’t we listen more to our patients and give them what they say they want?  If half the people who want therapy actually get better with therapy, doesn’t that make it hard to justify meds for this population?  Conversely, when we talk about “treatment-resistant depression,” or “depression that doesn’t respond to antidepressants alone,” could it be that the people who don’t respond to pills are simply those who would rather engage in psychotherapy instead?

I believe the implications of these findings may be significant.  For one thing, insurers are becoming less likely to pay for therapy, while they spend more and more money on antidepressant medications.  These studies say that this is exactly what we don’t want to do for a large number of patients (and these patients are easy to identify—they’re the ones who tell us they don’t want meds!).  Furthermore, trials of new antidepressant treatments should separate out the self-described “medication responders” and “therapy responders” and determine how each group responds.  [Note:  in the large STAR*D trial, which evaluated “switching” strategies, patients were given the opportunity to switch from meds to therapy or from one med to a different one of their choosing, but there was no group of patients who didn’t have the option to switch.]  If the “therapy responders” routinely fail to respond to drugs, we need to seriously revamp our biological theories of depression.  Its chemical basis may be something entirely different from how our current drugs are thought to work, or maybe depression isn’t “biological” at all in some people.  This will also keep us from wasting money and resources on treatments that are less likely to work.

While it’s often risky to ask patients what they want (and to give it to them), depression may be just the opportunity to engage the patient in a way that respects his or her desires.  These data suggest that the patient may know better than the doctor what “works” and what doesn’t, and maybe it’s time we paid closer attention.

