Some readers might say that this is a ridiculous question. Of course it should be quantitative; that’s what medicine is all about. Psychiatry’s problem, they argue, is that it’s not quantitative enough. Psychoanalysis—that most qualitative of “sciences”—never did anyone any good, and most psychotherapy is, likewise, just a bunch of hocus pocus. A patient saying he feels “depressed” means nothing unless we can measure how depressed he is. What really counts is a number—a score on a screening tool or checklist, frequency of a given symptom, or the blood level of some biomarker—not some silly theory about motives, drives, or subconscious conflicts.
But sometimes measurement can mislead us. If we’re going to measure anything, we need to make sure it’s something worth measuring.
By virtue of our training, physicians are fond of measuring things. What we don’t realize is that the act of measurement itself leads to an almost immediate bias. As we assign numerical values to our observations, we start to define values as “normal” or “abnormal.” And medical science dictates that we should make things “normal.” When I oversee junior psychiatry residents or medical students, their patient presentations are often filled with such statements as “Mr. A slept for 5 hours last night” or “Ms. B ate 80% of her meals,” or “Mrs. C has gone two days without endorsing suicidal ideation,” as if these are data points to be normalized, just as potassium levels and BUN/Cr ratios need to be normalized in internal medicine.
The problem is, they’re not potassium levels or BUN/Cr ratios. When those numbers are “abnormal,” there’s usually some underlying pathology which we can discover and correct. In psychiatry, what’s the pathology? For a woman who attempted suicide two days ago, does it really matter how much she’s eating today? Does it really matter whether an acutely psychotic patient (on a new medication, in a chaotic inpatient psych unit with nurses checking on him every hour) sleeps 4 hours or 8 hours each night? Even the questions that we ask patients—“are you still hearing voices?”, “how many panic attacks do you have each week?” and the overly simplistic “can you rate your mood on a scale of 1 to 10, where 1 is sad and 10 is happy?”— attempt to distill a patient’s overall subjective experience into an elementary quantitative measurement or, even worse, into a binary “yes/no” response.
Clinical trials take measurement to an entirely new level. In a clinical trial, often what matters is not a patient’s overall well-being or quality of life (although, to be fair, there are ways of measuring this, too, and investigators are starting to look at this outcome measure more closely), but rather a HAM-D score, a MADRS score, a PANSS score, a Y-BOCS score, a YMRS score, or any one of an enormous number of other assessment instruments. Granted, if I had to choose, I’d take a HAM-D score of 4 over a score of 24 any day, but does a 10- or 15-point decline (typical in some “successful” antidepressant trials) really tell you anything about an individual’s overall state of mental health? It’s hard to say.
One widely used instrument, the Clinical Global Impression scale, endeavors to measure the seemingly immeasurable. Developed in 1976 and still in widespread use, the CGI scale has three parts: the clinician evaluates (1) the severity of the patient’s illness relative to other patients with the same diagnosis (CGI-S); (2) how much the patient’s illness has improved relative to baseline (CGI-I); and (3) the efficacy of treatment. It is incredibly simple. Basically, it’s just a way of asking, “So, doc, how do you think this patient is doing?” and assigning a number to it. In other words, subjective assessment made objective.
The problem is, the CGI has been criticized precisely for that reason—it’s too subjective. As such, it is almost never used as a primary outcome measure in clinical trials. Any pharmaceutical company that tries to get a drug approved on the basis of CGI improvement alone would probably be laughed out of the halls of the FDA. But what’s wrong with subjectivity? Isn’t everything that counts subjective, when it really comes down to it? Especially in psychiatry? The depressed patient who emerges from a mood episode doesn’t describe himself as “80% improved,” he just feels “a lot better—thanks, doc!” The psychotic patient doesn’t necessarily need the voices to disappear, she just needs a way to accept them and live with them, if at all possible. The recovering addict doesn’t think in terms of “drinking days per month,” he talks instead of “enjoying a new life.”
Nevertheless, measurement is not a fad; it’s here to stay. And as the old saying goes, resistance is futile. Electronic medical records, smartphone apps to measure symptoms, online checklists—they all capitalize on the fact that numbers are easy to record and store, easy to communicate to others, and satisfying to the bean counters. They enable pharmacy benefit managers to approve drugs (or not), they enable insurers to reimburse for services (or not), and they allow pharmaceutical companies to identify and exploit new markets. And, best of all, they turn psychiatry into a quantitative, valid science, just like every other branch of medicine.
If this grand march towards increased quantification persists, the human science of psychiatry may cease to exist. Unless we can replace these instruments with outcome measures that truly reflect patients’ abilities and strengths, rather than pathological symptoms, psychiatry may be replaced by an impersonal world of questionnaires, checklists, and knee-jerk treatments. In some settings, that’s what we have now. I don’t think it’s too late to salvage the human element of what we do. A first step might be simply to use great caution whenever we’re asked to give a number, measure a symptom, or perform a calculation on something that is intrinsically a subjective phenomenon. And to remind ourselves that numbers don’t capture everything.