Depression Tests: When “Basic” Research Becomes “Applied”

April 22, 2012

Anyone with an understanding of the scientific process can appreciate the difference between “basic” and “applied” research.  Basic research, often considered “pure” science, is the study of science for its own sake, motivated by curiosity and a desire to understand.  General questions and theories are tested, often without any obvious practical application.  On the other hand, “applied” research is usually done for a specific reason: to solve a real-world problem or to develop a new product, such as a better mousetrap, a faster computer, or a more effective way to diagnose illness.

In psychiatric research, the distinction between “basic” and “applied” research is often blurred.  Two recent articles (and the accompanying media attention they’ve received) provide very good examples of this phenomenon.  Both stories involve blood tests to diagnose depression.  Both are intriguing, novel studies.  Both may revolutionize our understanding of mental illness.  But responses to both have also been blown way out of proportion, seeking to “apply” what is clearly only at the “basic” stage.

The first study, by George Papakostas and his colleagues at Massachusetts General Hospital and Ridge Diagnostics, was published last December in the journal Molecular Psychiatry.  They developed a technique to measure nine proteins in the blood, plug those values into a fancy (although proprietary—i.e., unknown) algorithm, and calculate an “MDDScore” which, supposedly, diagnoses depression.  In their paper, they compared 70 depressed patients with 43 non-depressed people and showed that their assay identifies depression with a specificity of 81% and a sensitivity of 91%.
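As a rough reality check, those percentages translate into small absolute numbers given the sample sizes.  A quick sketch (approximate, since the paper’s exact confusion-matrix counts aren’t quoted here):

```python
# Approximate counts implied by the reported rates and sample sizes
# (the exact per-cell counts are not quoted here, so these are estimates).
n_depressed, n_controls = 70, 43
sensitivity, specificity = 0.91, 0.81

true_positives = round(sensitivity * n_depressed)        # depressed patients correctly flagged
false_positives = round((1 - specificity) * n_controls)  # controls falsely labeled "depressed"
print(true_positives, "of", n_depressed, "flagged;",
      false_positives, "of", n_controls, "controls mislabeled")
```

In other words, even in this small, carefully selected sample, a handful of healthy people would come back “depressed”—a detail worth remembering whenever a diagnostic algorithm is advertised by its percentages alone.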

The other study, published two weeks ago in Translational Psychiatry by Eve Redei and her colleagues at Northwestern University, purports to diagnose depression in adolescents.  They didn’t measure proteins in patients’ blood, but rather levels of RNA.  (As a quick aside, RNA is the “messenger” molecule inside each cell that tells the cell which proteins to make.)  They studied a smaller number of patients—only 14 depressed teenagers, compared with 14 non-depressed controls—and identified 11 RNA molecules which were expressed differently between the two groups.  These were selected from a much larger number of RNA transcripts on the basis of an animal model of depression: specifically, a rat strain that was bred to show “depressive-like” behavior.

If we look at each of these studies as “basic” science, they offer some potentially tantalizing insights into what might be happening in the bodies of depressed people (or rats).  Even though some of us argue that no two “depressed” people are alike—and we should look instead at person-centered factors that might explain how they are unique—these studies nevertheless might have something to say about the common underlying biology of depression—if such a thing exists.  At the very least, further investigation might explain why proteins that have no logical connection with depression (such as apolipoprotein CIII or myeloperoxidase) or RNA transcripts (for genes like toll-like-receptor-1 or S-phase-cyclin-A-associated protein) might help us, someday, to develop more effective treatments than the often ineffective SSRIs that are the current standard of care.

Surprisingly, though, this is not how these articles have been greeted.  Take the Redei article, for instance.  Since its publication, there have been dozens of media mentions, with such headlines as “Depression Blood Test for Teens May Lead To Less Stigma” and “Depression Researchers May Have Developed First Blood Test For Teens.”  To the everyday reader, it seems as if we’ve gone straight from the bench to the bedside.  Granted, each story mentions that the test is not quite “ready for prime time,” but headlines draw readers’ attention.  Even the APA’s official Twitter feed mentioned it (“Blood test for early-onset #depression promising,” along with the tags “#childrenshealth” and “#fightstigma”), giving it a certain degree of legitimacy among doctors and patients alike.

(I should point out that one of Redei’s co-authors, Bill Gardner, emphasized—correctly—on his own blog that their study was NOT to be seen as a test for depression, and that it required refinement and replication before it could be used clinically.  He also acknowledged that their study population—adolescents—are often targets for unnecessary pharmacological intervention, demanding even further caution in interpreting their results.)

As for the Papakostas article, there was a similar flurry of articles about it when preliminary results were presented last year.  Like Redei’s research, it’s an interesting study that could change the way we diagnose depression.  However, unlike Redei’s study, it was funded by a private, self-proclaimed “neurodiagnostics” company.  (That company, Ridge Diagnostics, has not revealed the algorithm by which they calculate their “MDDScore,” essentially preventing any independent group from trying to replicate their findings.)

Incidentally, the Chairman of the Board of Ridge Diagnostics is David Hale, who also founded—and is Chairman of—Somaxon Pharmaceuticals, a company I wrote about last year when it tried to bring low-dose doxepin to the market as a sleep aid, and then used its patent muscle to issue cease-and-desist letters to people who suggested using the ultra-cheap generic version instead of Somaxon’s name-brand drug.

Ridge Diagnostics has apparently decided not to wait for replication of its findings, and instead is taking its MDDScore to the masses, complete with a Twitter feed, a Facebook page, and a series of videos selling the MDDScore (priced at a low, low $745!), aimed directly at patients.  At this rate, it’s only a matter of time before the MDDScore is featured on the “Dr Oz Show” or “The Doctors.”  Take a look at this professionally produced video, for instance, posted last month on YouTube:


(Interesting—the host hardly even mentions the word “depression.”  A focus group must have told them that it detracted from his sales pitch.)

I think it’s great that scientists are investigating the basic biology of depression.  I also have no problem when private companies try to get in on the act.  However, when research that is obviously at the “basic” stage (and, yes, not ready for prime time) becomes the focus of a viral video marketing campaign or a major story on the Huffington Post, one must wonder why we’ve been so quick to cross the line from “basic” research into the “applied” uses of those preliminary findings.  Okay, okay, I know the answer is money.  But who has the authority—and the voice—to say, “not so fast” and preserve some integrity in the field of psychiatric research?  Where’s the money in that?


Measuring The Immeasurable

February 9, 2012

Is psychiatry a quantitative science?  Should it be?

Some readers might say that this is a ridiculous question.  Of course it should be quantitative; that’s what medicine is all about.  Psychiatry’s problem, they argue, is that it’s not quantitative enough.  Psychoanalysis—that most qualitative of “sciences”—never did anyone any good, and most psychotherapy is, likewise, just a bunch of hocus pocus.  A patient saying he feels “depressed” means nothing unless we can measure how depressed he is.  What really counts is a number—a score on a screening tool or checklist, frequency of a given symptom, or the blood level of some biomarker—not some silly theory about motives, drives, or subconscious conflicts.

But sometimes measurement can mislead us.  If we’re going to measure anything, we need to make sure it’s something worth measuring.

By virtue of our training, physicians are fond of measuring things.  What we don’t realize is that the act of measurement itself leads to an almost immediate bias.  As we assign numerical values to our observations, we start to define values as “normal” or “abnormal.”  And medical science dictates that we should make things “normal.”  When I oversee junior psychiatry residents or medical students, their patient presentations are often filled with such statements as “Mr. A slept for 5 hours last night” or “Ms. B ate 80% of her meals,” or “Mrs. C has gone two days without endorsing suicidal ideation,” as if these are data points to be normalized, just as potassium levels and BUN/Cr ratios need to be normalized in internal medicine.

The problem is, they’re not potassium levels or BUN/Cr ratios.  When those numbers are “abnormal,” there’s usually some underlying pathology which we can discover and correct.  In psychiatry, what’s the pathology?  For a woman who attempted suicide two days ago, does it really matter how much she’s eating today?  Does it really matter whether an acutely psychotic patient (on a new medication, in a chaotic inpatient psych unit with nurses checking on him every hour) sleeps 4 hours or 8 hours each night?  Even the questions that we ask patients—“are you still hearing voices?”, “how many panic attacks do you have each week?” and the overly simplistic “can you rate your mood on a scale of 1 to 10, where 1 is sad and 10 is happy?”— attempt to distill a patient’s overall subjective experience into an elementary quantitative measurement or, even worse, into a binary “yes/no” response.

Clinical trials take measurement to an entirely new level.  In a clinical trial, often what matters is not a patient’s overall well-being or quality of life (although, to be fair, there are ways of measuring this, too, and investigators are starting to look at this outcome measure more closely), but rather a HAM-D score, a MADRS score, a PANSS score, a Y-BOCS score, a YMRS score, or any one of an enormous number of other assessment instruments.  Granted, if I had to choose, I’d take a HAM-D score of 4 over a score of 24 any day, but does a 10- or 15-point decline (typical in some “successful” antidepressant trials) really tell you anything about an individual’s overall state of mental health?  It’s hard to say.

One widely used instrument, the Clinical Global Impression scale, endeavors to measure the seemingly immeasurable.  Developed in 1976 and still in widespread use, the CGI scale has three parts:  the clinician evaluates (1) the severity of the patient’s illness relative to other patients with the same diagnosis (CGI-S); (2) how much the patient’s illness has improved relative to baseline (CGI-I); and (3) the efficacy of treatment.  (See here for a more detailed description.)  It is incredibly simple.  Basically, it’s just a way of asking, “So, doc, how do you think this patient is doing?” and assigning a number to it.  In other words, subjective assessment made objective.

The problem is, the CGI has been criticized precisely for that reason—it’s too subjective.  As such, it is almost never used as a primary outcome measure in clinical trials.  Any pharmaceutical company that tries to get a drug approved on the basis of CGI improvement alone would probably be laughed out of the halls of the FDA.  But what’s wrong with subjectivity?  Isn’t everything that counts subjective, when it really comes down to it?  Especially in psychiatry?  The depressed patient who emerges from a mood episode doesn’t describe himself as “80% improved,” he just feels “a lot better—thanks, doc!”  The psychotic patient doesn’t necessarily need the voices to disappear, she just needs a way to accept them and live with them, if at all possible.  The recovering addict doesn’t think in terms of “drinking days per month,” he talks instead of “enjoying a new life.”

Nevertheless, measurement is not a fad; it’s here to stay.  And as the old saying goes, resistance is futile.  Electronic medical records, smartphone apps to measure symptoms, online checklists—they all capitalize on the fact that numbers are easy to record and store, easy to communicate to others, and satisfying to the bean counters.  They enable pharmacy benefit managers to approve drugs (or not), they enable insurers to reimburse for services (or not), and they allow pharmaceutical companies to identify and exploit new markets.  And, best of all, they turn psychiatry into a quantitative, valid science, just like every other branch of medicine.

If this grand march towards increased quantification persists, the human science of psychiatry may cease to exist.  Unless we can replace these instruments with outcome measures that truly reflect patients’ abilities and strengths, rather than pathological symptoms, psychiatry may be replaced by an impersonal world of questionnaires, checklists, and knee-jerk treatments.  In some settings, that’s what we have now.  I don’t think it’s too late to salvage the human element of what we do.  A first step might be simply to use great caution when we’re asked to give a number, measure a symptom, or perform a calculation on something that is intrinsically a subjective phenomenon.  And to remind ourselves that numbers don’t capture everything.


Talk Is Cheap

October 9, 2011

I work part-time in a hospital psychiatry unit, overseeing residents and medical students on their inpatient psychiatry rotations.  They are responsible for three to six patients at any given time, directing and coordinating the patients’ care while they are admitted to our hospital.

To an outsider, this may seem like a generous ratio: one resident taking care of only 3-6 patients.  One would think that this should allow for over an hour of direct patient contact per day, resulting in truly “personalized” medicine.  But instead, the absolute opposite is true: sometimes doctors only see patients for minutes at a time, and develop only a limited understanding of patients for whom they are responsible.  I noticed this in my own residency training, when halfway through my first year I realized the unfortunate fact that even though I was “taking care” of patients and getting my work done satisfactorily, I couldn’t tell you whether my patients felt they were getting better, whether they appreciated my efforts, or whether they had entirely different needs that I had been ignoring.

In truth, much of the workload in a residency program (in any medical specialty) is related to non-patient-care concerns:  lectures, reading, research projects, faculty supervision, etc.  But even outside of the training environment, doctors spend less and less time with patients, creating a disturbing precedent for the future of medicine.  In psychiatry in particular, the shrinking “therapy hour” has received much attention, most recently in a New York Times front-page article (which I blogged about here and here).  The responses to the article echoed a common (and growing) lament among most psychiatrists:  therapy has been replaced with symptom checklists, rapid-fire questioning, and knee-jerk prescribing.

In my case, I don’t mean to be simply one more voice among the chorus of psychiatrists yearning for the “glory days” of psychiatry, when prolonged psychotherapy and hour-long visits were the norm.  I didn’t practice in those days, anyway.  Nevertheless, I do believe that we lose something important by distancing ourselves from our patients.

Consider the inpatient unit again.  My students and residents sometimes spend hours looking up background information, old charts, and lab results, calling family members and other providers, and discussing differential diagnosis and possible treatment plans, before ever seeing their patient.  While their efforts are laudable, the fact remains that a face-to-face interaction with a patient can be remarkably informative, sometimes even immediately diagnostic to the skilled eye.  In an era where we’re trying to reduce our reliance on expensive technology and wasteful tests, patient contact should be prioritized over the hours upon hours that trainees spend hunched over computer workstations.

In the outpatient setting, direct patient-care time has been largely replaced by “busy work” (writing notes; debugging EMRs; calling pharmacies to inquire about prescriptions; completing prior-authorization forms; and performing any number of “quality-control,” credentialing, or other mandatory “compliance” exercises required by our institutions).  Some of this is important, but at the same time, an extra ten or fifteen minutes with a patient may go a long way to determining that patient’s treatment goals (which may disagree with the doctor’s), improving their motivation for change, or addressing unresolved underlying issues– matters that may truly make a difference and cut long-term costs.

The future direction of psychiatry doesn’t look promising, as this vanishing emphasis on the patient’s words and deeds is likely to make treatment even less cost-effective.  For example, there is a growing effort to develop biomarkers for diagnosis of mental illness and to predict medication response.  In my opinion, the science is just not there yet (partly because the DSM is still a poor guide by which to make valid diagnoses… what are depression and schizophrenia anyway?).  And even if the biomarker strategy were a reliable one, there’s still nothing that could be learned in a $745+ blood test that couldn’t be uncovered in a good, thorough clinical examination by a talented diagnostician, not to mention the fact that the examination would also uncover a large amount of other information– and establish valuable rapport– which would likely improve the quality of care.

The blog “1boringoldman” recently featured a post called “Ask them about their lives…” in which a particularly illustrative case was discussed.  I’ll refer you there for the details, but I’ll repost the author’s summary comments here:

I fantasize an article in the American Journal of Psychiatry entitled “Ask them about their lives!” Psychiatrists give drugs. Therapists apply therapies. Who the hell interviews patients beyond logging in a symptom list? I’m being dead serious about that…

I share Mickey’s concern, as this is a vital question for the future of psychiatry.  Personally, I chose psychiatry over other branches of medicine because I enjoy talking to people, asking about their lives, and helping them develop goals and achieve their dreams.  I want to help them overcome the obstacles put in their way by catastrophic relationships, behavioral missteps, poor insight, harmful impulsivity, addiction, emotional dysregulation, and– yes– mental illness.

However, if I don’t have the opportunity to talk to my patients (still my most useful diagnostic and therapeutic tool), I must instead rely on other ways to explain their suffering:  a score on a symptom list, a lab value, or a diagnosis that’s been stuck on the patient’s chart over several years without anyone taking the time to ask whether it’s relevant.  Not only do our patients deserve more than that, they usually want more than that, too; the most common complaint I hear from a patient is that “Dr So-And-So didn’t listen to me, he just prescribed drugs.”

This is not the psychiatry of my forefathers.  It is not Philippe Pinel’s “moral treatment,” Emil Kraepelin’s meticulous attention to symptoms and patterns thereof, or Aaron Beck’s cognitive re-strategizing.  No, it’s the psychiatry of HMOs, Wall Street, and an over-medicalized society, and in this brave new world, the patient is nowhere to be found.


Biomarker Envy III: Medial Prefrontal Cortex

May 28, 2011

Well, what do you know…. No sooner did I publish my last post about the “depression biomarker” discovered by a group of Japanese scientists, than yet another article appeared, describing a completely different biomarker.  This time, however, instead of simply diagnosing depression, the goal was to identify who’s at risk of relapse.  And the results are rather tantalizing… Could this be the real deal?

The paper, to be published in the journal Biological Psychiatry, by Norman Farb, Adam Anderson, and colleagues at the University of Toronto, had a simple research design.  They recruited 16 patients with a history of depression, but who were currently in remission (i.e., symptom-free for at least five months), as well as 16 control subjects.  They performed functional MRI (fMRI) imaging on all 32 participants while exposing them to an emotional stressor: specifically, they showed the subjects “sad” or “neutral” film clips while they were in the MRI scanner.

Afterward, they followed all 16 depressed patients for a total of 18 months.  Ten of these patients relapsed during this period.  When the group went back to look for fMRI features that distinguished the relapsers from the non-relapsers, they found that the relapsers, while viewing the “sad” film clips, had greater activity in the medial prefrontal cortex (mPFC).  The non-relapsers, on the other hand, showed greater activation in the visual cortex when viewing the same emotional trigger.

Even though the number of patients was very small (16 total), the predictive power of the tests was actually quite high (see the figure at right).  It’s certainly conceivable that a test like this one might be used in the future to determine who needs more aggressive treatment—even if our checklists show that a depressed patient is in remission.  As an added bonus, it has better face validity than simply measuring a chemical in the bloodstream: in other words, it makes sense that a depressed person’s brain responds differently to sad stimuli, and that we might use this to predict outcomes.

As with most neuroimaging research, the study itself was fairly straightforward.  Making some sense out of the results, however, is another story.  (Especially if you like salmon.)

The researchers had predicted, based on previous studies, that patients who are prone to relapse might show greater activity in the ventromedial prefrontal cortex (VMPFC) and lower activity in the dorsolateral PFC (DLPFC).  But that’s not what they found.  Instead, relapsers had greater activity in the mPFC (which is slightly different from the VMPFC).  Moreover, non-relapsers had greater activity in the visual cortex (specifically the calcarine sulcus).

What might this mean?  The authors hypothesize that mPFC activity may lead to greater “ruminative thought” (i.e., worrying, brooding).  In fact, they did show that mPFC activation was correlated with scores on the RSQ-R, a psychological test of ruminative thought patterns.  Regarding the increased visual cortex activity, the authors suggest that this may be protective against further depressive episodes.  They surmise that it may be a “compensatory response” which might reflect “an attitude of acceptance or observation, rather than interpretation and analysis.”

In other words, to grossly oversimplify:  if you’re in recovery from depression, it’s not a good idea to ruminate, worry, and brood over your losses, or to internalize someone else’s sadness (even if it’s just a 45-second clip from the movie “Terms of Endearment”—which, by the way, was the “sad stimulus” in this experiment).  Instead, to prevent another depressive episode, you should strengthen your visual skills and use your visual cortex to observe and accept (i.e., just watch the darn movie!).

This all seems plausible, and the explanation certainly “fits” with the data.  But different conclusions can also be drawn.  Maybe those “recovered” patients who had less mPFC activity were simply “numb” to any emotional stimuli.  (All patients were taking antidepressants at the time of the fMRI study, which some patients report as having a “numbing” effect on emotions.)  Moreover, it has been said that depression can sometimes be beneficial; maybe the elevated mPFC activity in relapsers was an ongoing attempt to process the “sad” inputs in a more productive way?  As for the protective effect of visual cortex activity, maybe it isn’t about “acceptance” or “non-judgment” at all, but something else entirely?  Maybe those patients just enjoyed watching Shirley MacLaine and Jack Nicholson.

Nevertheless, the more psychologically minded among us might gladly embrace their explanations.  After all, it just seems “right” to say:  “Rumination is bad, acceptance and mindfulness (NB:  the authors did not use this term) is good.”  However, their “mediation analysis” showed that rumination scores did not predict relapse, and acceptance scores did not predict prolonged remission.  In other words, even though these psychological measures were correlated with the MRI findings, the psychological test results didn’t predict outcome.  Only the MRI findings did.

This leads to an interesting take-home message.  The results seem to support a psychological approach to maintaining remission—i.e., teaching acceptance and mindfulness, and avoiding ruminative tendencies—but this is only part of the solution.  Activity in the mPFC and the visual cortex might underlie pro-depressive and anti-depressive tendencies, respectively, in depressed patients, via mechanisms that are entirely unknown (and, dare I say it, entirely biologic?).

[An interesting footnote:  the risk of relapse was not correlated with medications.  Out of the ten who relapsed, three were still taking antidepressants.  Of the other seven, four were engaged in mindfulness-based cognitive therapy (MBCT), while the others were taking a placebo.]

Anyway, this paper describes an interesting finding with potential real-world application.  Although it’s a small study, it’s loaded with testable follow-up hypotheses.  I sincerely hope they continue to fire up the scanner, find some patients, and test them.  Who knows—we might just find something worth using.


Biomarker Envy II: Ethanolamine Phosphate

May 27, 2011

In my inbox yesterday was a story describing a new biological test for a psychiatric disorder.  Hallelujah!  Is this the holy grail we’ve all been waiting for?

Specifically, scientists at Human Metabolome Technologies (HMT) and Japan’s Keio University presented data earlier this week at a scientific conference in Tokyo, showing that they could diagnose depression by measuring levels of a chemical—ethanolamine phosphate—in patients’ blood.

Let me repeat that once again, for emphasis:  Japanese scientists now have a blood test to diagnose depression!

Never mind all that messy “talk-to-the-patient” stuff.  And you can throw away your tired old DSM-IV, because this is the new world: biological diagnosis!!  The press release describing the research even suggests that the test “could improve early detection rates of depression if performed during regular medical checkups.”  That’s right:  next time you see your primary doc, he or she might order—along with your routine CBC and lipid panel—an ethanolamine phosphate test.  If it comes back positive, congratulations!  You’re depressed!

If you can detect the skepticism in my voice, good.  Because even if this “biomarker” for depression turns out to be 100% accurate (which it is not—see below), its use runs entirely against how we should be practicing person-centered (not to be confused with “personalized”) medicine.  As a doctor, I want to hear your experiences and feelings, and help you with those symptoms, not run a blood test and order a drug.

[Incidentally, the Asahi press release made me chuckle when it stated: "About 90 percent of doctors base their diagnosis of depression on experience and varying factors."  What about the other 10%?  Magic?]

As it turns out, I think there’s a lot to suggest that this particular blood test may not yet be ready for prime time.  For one, the work has not yet been published (and deciphering scientific results from a press release is always a risky proposition).  Secondly, the test was not 100% accurate; it failed to identify depression in 18% of cases, and falsely labeled healthy people as “depressed” 5% of the time.  (That’s a sensitivity of 82% and a specificity of 95%, for those of you playing along at home.)
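For those playing along at home, the arithmetic behind those two figures is just standard confusion-matrix bookkeeping.  A minimal sketch (the counts below are hypothetical round numbers chosen to match the reported rates, since the press release gives only percentages):

```python
# Hypothetical cohort of 100 depressed and 100 healthy subjects,
# scaled to reproduce the reported rates (actual counts were not released).
true_positives = 82    # depressed subjects the test correctly flagged
false_negatives = 18   # depressed subjects the test missed (18%)
true_negatives = 95    # healthy subjects correctly cleared
false_positives = 5    # healthy subjects falsely labeled "depressed" (5%)

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)
print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
# prints: sensitivity = 82%, specificity = 95%
```

Note that both rates are computed within their own group (depressed or healthy); neither one tells you what a positive result means in a general population, which is a separate question entirely.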

Further, what the heck is ethanolamine phosphate, and why would it be low in depressed people?  Is it a chemical secreted by the “happiness centers” of the brain?  Does it predict the onset or worsening of a depressive episode?  Is it somehow affected by antidepressant treatment?  As far as I can tell from a quick literature search, there has been no report—or even a suggestion—of ethanolamine (or any of its metabolites) being involved in the pathogenesis of mood disorders.  Then again, maybe I didn’t get the Japanese translation just right.

Anyway, where this “marker” came from is anybody’s guess.  It’s entirely possible (although I can’t be sure, because the Japanese group has not yet published their findings) that the researchers measured the blood levels of dozens of molecules and found the “best” results with this one.  We sometimes call this a “fishing expedition.”  Obviously, the finding has to be replicated, and if it was, in fact, just a lucky result, further research will bear that out.

But Dr Yoshiaki Ohashi, board director and chief security officer at HMT (“chief security officer”? does he wear a badge and sit at the front desk during the overnight shift, too?) maintains that the findings “will make it easier for an objective, biological diagnosis of depressive patients.”

Wow.  In 2011.  (And just in time for DSM-5.)

What if he’s right?  How would you feel if you went to a routine doctor’s visit next week, got an order for blood work, and a secretary called you a few days later to tell you that you have depression?  Even if you don’t feel depressed?

Were there other motives for developing such a test?  Probably.  One of the press releases quotes the Japanese Ministry of Health as saying that “only one quarter of the people who need treatment” actually get it.  So maybe this blood test is simply a way to offer treatment to more people (or, rather, to expand the market for antidepressants), even to those who don’t want treatment.  And then, of course, HMT probably wants a piece of the pie.  HMT is already developing a commercial test to measure ethanolamine phosphate levels; obviously, widespread adoption of this test would translate into big bucks for HMT, indeed.

So while many other questions remain to be answered, I must say I’m not holding my breath. Biological screening tests for psychiatric disorders have no face validity (in other words, if a test is positive but a person shows no signs or symptoms, then what?) and a positive result may expose patients to “preventive” treatments that are costly and cause unwanted side effects.

In my opinion, the best way (if any) to use a biomarker is in a “confirmatory” or “rule-out” function.  Is that demoralized, ruminative, potentially suicidal patient in your office simply going through a rough period in her life?  Or is she clinically depressed?  Will she respond to medications, or is this something that will simply “pass”?  In cases like this, measuring ethanolamine phosphate (or another similar marker) might be helpful.

But I don’t think we’ll ever be able to screen for psychiatric illness the same way a primary care doc might screen for, say, breast cancer or diabetes.  To do so would redefine the entire concept of “mental” illness (perhaps making it “neurological” illness instead?).  It also takes the person out of the picture.  At the end of the day, it’s always the patient’s thoughts, words, and experiences that count.  Ignoring those—and focusing instead on a chemical in the bloodstream—would be an unfortunate path to tread.


Biomarker Envy I: Cortical Thickness

May 13, 2011

In the latest attempt to look for biological correlates or predictors of mental illness, a paper in this month’s Archives of General Psychiatry shows that children with major depressive disorder (MDD) have thinner cortical layers than “healthy” children, or children with obsessive-compulsive disorder (OCD).  Specifically, researchers performed brain MRI scans on 78 children with or without a diagnosis, and investigated seven specific areas of the cerebral cortex.  Results showed four areas which were thinner in children with MDD than in healthy children, two which were thicker, and one that did not vary.

These results add another small nugget of data to our (admittedly scant) understanding of mental illness—particularly in children, before the effects of years of continuous medication treatment.  They also reflect psychiatry's bias toward imaging studies, whose findings—even when statistically significant—are not always reliable or meaningful.  (But I digress…)

An accompanying press release, however, was unrealistically enthusiastic.  It suggested that this study “offers an exciting new way to identify more objective markers of psychiatric illness in children.”  Indeed, the title of the paper itself (“Distinguishing between MDD and OCD in children by measuring regional cortical thickness”) might suggest a way to use this information in clinical practice right away.  But it’s best not to jump to these conclusions just yet.

For one, there was tremendous variability in the data, as shown in the figure at left.  While on average the children with MDD had a thinner right superior parietal gyrus (one of the cortical regions studied) than healthy children or children with OCD, no individual measurement was predictive of anything.

Second, the statement that we can “distinguish between depression and OCD” based on a brain scan reflects precisely the type of biological determinism and certainty (and hype?) that psychiatry has been striving for, but may never achieve (just look at the figure again).  Lay readers—and, unfortunately, many clinicians—might read the headline and believe that “if we just order an MRI for Junior, we’ll be able to get the true diagnosis.”  The positive predictive value of any test must be high enough to warrant its use in a larger population, and so far, the predictive value of most tests in psychiatry is poor.
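A quick back-of-the-envelope calculation shows why predictive value is the sticking point. Using the sensitivity (91%) and specificity (81%) reported for the MDDScore assay mentioned above, and a hypothetical screening population where 7% of people are actually depressed (an assumed prevalence for illustration, not a figure from either study), Bayes' rule gives the probability that a positive result reflects true illness:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Bayes' rule: P(disease | positive test)."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# 91% sensitivity and 81% specificity sound impressive in a case-control
# sample, but screening a population where only ~7% are depressed:
ppv = positive_predictive_value(0.91, 0.81, 0.07)
print(f"PPV at 7% prevalence: {ppv:.0%}")  # roughly 26%
```

In other words, under these assumptions roughly three out of four positive results would be false alarms—exactly the problem with moving a case-control finding straight into clinical screening.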

Third, there is no a priori reason why there should be a difference between the brains (or anything else, for that matter) of patients with depression and patients with OCD, when you consider the overlap between these—and other—psychiatric conditions.  There are many shades of grey between “depression” and “OCD”:  some depressed children will certainly have OCD-like traits, and vice versa.  Treating the individual (and not necessarily the individual’s brain scan) is the best way to care for a person.

To be fair, the authors of the study, Erin Fallucca and David Rosenberg from Wayne State University in Detroit, do not state anywhere in their paper that this approach represents a “novel new diagnostic method” or make any other such sweeping claims about their findings.  In fact, they write that the differences they observed “merit further investigation” and highlight the need to look “beyond the frontal-limbic circuit.”  In other words, our current hypotheses about depression are not entirely supported by their findings (true), so we need to investigate further (also true).  And this, admittedly, is how science should proceed.

However, the history of psychiatry is dotted with tantalizing neurobiological theories or findings which find their way into clinical practice before they’ve been fully proven, or even shown any great clinical relevance.  Pertinent examples are the use of SPECT scans to diagnose ADHD, championed by Daniel Amen; quantitative EEG to predict response to psychotropics; genotyping for metabolic enzymes; and the use of SSRIs to treat depression.  (Wait, did I say that???)

The quest to identify “biomarkers” of psychiatric illness may similarly lead us to believe we know more about a disease than we do.  A biomarker is a biological feature (an endocrine or inflammatory measure, a genotype, a biochemical response to a particular intervention) that distinguishes a person with a condition from one without.  Biomarkers are used throughout medicine for diagnosis, risk stratification, and monitoring treatment response.  A true biomarker for mental illness would represent a significant leap ahead in personalized treatment.  Or would it?  What if a person’s clinical presentation differs from what the marker predicts?  “I’m sorry Mrs. Jones, but even though Katie compulsively washes her hands and counts to twelve hundreds of times a day, her right superior parietal gyrus is too thin for a diagnosis of OCD.”

Other fields of medicine don’t experience this dilemma.  If you have an elevated hsCRP and high LDL, even though you “feel fine,” you are still at elevated risk for cardiovascular disease and really ought to take preventive measures (exercise, diet, etc).  (However, see this recent editorial in the BMJ about “who should define disease.”)  But if your brain scan shows cortical thinning and you have no symptoms of depression, do you need to be treated?  Are you even at risk?

Some day (hopefully) these questions will be answered, as we gain a greater understanding of the biology of mental illness.  But until then, let’s keep research and clinical practice separate until we know what we’re doing.  Psychiatry doesn’t have to be like other fields of medicine.  Patients suffer and come to us for help; let’s open our eyes and ears before sending them off to the scanner or the lab.  In doing so, we might learn something important.

