Depression Tests: When “Basic” Research Becomes “Applied”

April 22, 2012

Anyone with an understanding of the scientific process can appreciate the difference between “basic” and “applied” research.  Basic research, often considered “pure” science, is science pursued for its own sake, motivated by curiosity and a desire to understand; general questions and theories are tested, often without any obvious practical application.  “Applied” research, on the other hand, is usually done for a specific reason: to solve a real-world problem or to develop a new product, such as a better mousetrap, a faster computer, or a more effective way to diagnose illness.

In psychiatric research, the distinction between “basic” and “applied” research is often blurred.  Two recent articles (and the accompanying media attention they’ve received) provide very good examples of this phenomenon.  Both stories involve blood tests to diagnose depression.  Both are intriguing, novel studies.  Both may revolutionize our understanding of mental illness.  But responses to both have also been blown way out of proportion, seeking to “apply” what is clearly only at the “basic” stage.

The first study, by George Papakostas and his colleagues at Massachusetts General Hospital and Ridge Diagnostics, was published last December in the journal Molecular Psychiatry.  They developed a technique to measure nine proteins in the blood, plug those values into a fancy (although proprietary—i.e., unknown) algorithm, and calculate an “MDDScore” which, supposedly, diagnoses depression.  In their paper, they compared 70 depressed patients with 43 non-depressed people and showed that their assay identifies depression with a specificity of 81% and a sensitivity of 91%.
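Out of curiosity, it’s worth seeing what those two percentages cash out to in the study’s own sample.  Here’s a back-of-envelope sketch in Python (the counts are rounded reconstructions from the reported rates, not the paper’s exact table):

```python
# Reconstruct the confusion matrix from the reported rates and sample sizes.
depressed, controls = 70, 43
sensitivity, specificity = 0.91, 0.81

true_pos  = round(sensitivity * depressed)  # ~64 depressed patients flagged
false_neg = depressed - true_pos            # ~6 missed
true_neg  = round(specificity * controls)   # ~35 controls correctly cleared
false_pos = controls - true_neg             # ~8 controls flagged as "depressed"

# In this enriched sample, a positive MDDScore is right about 89% of the time:
print(true_pos / (true_pos + false_pos))    # ~0.89
```

Keep in mind that roughly 60% of this sample was depressed, a far higher base rate than any primary-care waiting room, which flatters that last number considerably.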

The other study, published two weeks ago in Translational Psychiatry by Eve Redei and her colleagues at Northwestern University, purports to diagnose depression in adolescents.  They didn’t measure proteins in patients’ blood, but rather levels of RNA.  (As a quick aside, RNA is the “messenger” molecule inside each cell that tells the cell which proteins to make.)  They studied a smaller number of patients—only 14 depressed teenagers, compared with 14 non-depressed controls—and identified 11 RNA molecules which were expressed differently between the two groups.  These were selected from a much larger number of RNA transcripts on the basis of an animal model of depression: specifically, a rat strain that was bred to show “depressive-like” behavior.

If we look at each of these studies as “basic” science, they offer some potentially tantalizing insights into what might be happening in the bodies of depressed people (or rats).  Even though some of us argue that no two “depressed” people are alike—and that we should look instead at person-centered factors that might explain how each is unique—these studies might nevertheless have something to say about the common underlying biology of depression, if such a thing exists.  At the very least, further investigation into why proteins with no logical connection to depression (such as apolipoprotein CIII or myeloperoxidase), or RNA transcripts for genes like toll-like receptor 1 or S-phase cyclin A-associated protein, differ in depressed patients might someday help us develop more effective treatments than the often ineffective SSRIs that are the current standard of care.

Surprisingly, though, this is not how these articles have been greeted.  Take the Redei article, for instance.  Since its publication, there have been dozens of media mentions, with such headlines as “Depression Blood Test for Teens May Lead To Less Stigma” and “Depression Researchers May Have Developed First Blood Test For Teens.”  To the everyday reader, it seems as if we’ve gone straight from the bench to the bedside.  Granted, each story mentions that the test is not quite “ready for prime time,” but headlines draw readers’ attention.  Even the APA’s official Twitter feed mentioned it (“Blood test for early-onset #depression promising,” along with the tags “#childrenshealth” and “#fightstigma”), giving it a certain degree of legitimacy among doctors and patients alike.

(I should point out that one of Redei’s co-authors, Bill Gardner, emphasized—correctly—on his own blog that their study was NOT to be seen as a test for depression, and that it required refinement and replication before it could be used clinically.  He also acknowledged that their study population—adolescents—are often targets for unnecessary pharmacological intervention, demanding even further caution in interpreting their results.)

As for the Papakostas article, there was a similar flurry of articles about it when preliminary results were presented last year.  Like Redei’s research, it’s an interesting study that could change the way we diagnose depression.  However, unlike Redei’s study, it was funded by a private, self-proclaimed “neurodiagnostics” company.  (That company, Ridge Diagnostics, has not revealed the algorithm by which they calculate their “MDDScore,” essentially preventing any independent group from trying to replicate their findings.)

Incidentally, the Chairman of the Board of Ridge Diagnostics is David Hale, who also founded—and is Chairman of—Somaxon Pharmaceuticals, a company I wrote about last year when it tried to bring low-dose doxepin to the market as a sleep aid, and then used its patent muscle to issue cease-and-desist letters to people who suggested using the ultra-cheap generic version instead of Somaxon’s name-brand drug.

Ridge Diagnostics has apparently decided not to wait for replication of its findings, and instead is taking its MDDScore to the masses, complete with a Twitter feed, a Facebook page, and a series of videos selling the MDDScore (priced at a low, low $745!), aimed directly at patients.  At this rate, it’s only a matter of time before the MDDScore is featured on the “Dr Oz Show” or “The Doctors.”  Take a look at this professionally produced video, for instance, posted last month on YouTube:


(Interesting—the host hardly even mentions the word “depression.”  A focus group must have told them that it detracted from his sales pitch.)

I think it’s great that scientists are investigating the basic biology of depression.  I also have no problem when private companies try to get in on the act.  However, when research that is obviously at the “basic” stage (and, yes, not ready for prime time) becomes the focus of a viral video marketing campaign or a major story on the Huffington Post, one must wonder why we’ve been so quick to cross the line from “basic” research into the “applied” uses of those preliminary findings.  Okay, okay, I know the answer is money.  But who has the authority—and the voice—to say, “not so fast” and preserve some integrity in the field of psychiatric research?  Where’s the money in that?


Measuring The Immeasurable

February 9, 2012

Is psychiatry a quantitative science?  Should it be?

Some readers might say that this is a ridiculous question.  Of course it should be quantitative; that’s what medicine is all about.  Psychiatry’s problem, they argue, is that it’s not quantitative enough.  Psychoanalysis—that most qualitative of “sciences”—never did anyone any good, and most psychotherapy is, likewise, just a bunch of hocus pocus.  A patient saying he feels “depressed” means nothing unless we can measure how depressed he is.  What really counts is a number—a score on a screening tool or checklist, frequency of a given symptom, or the blood level of some biomarker—not some silly theory about motives, drives, or subconscious conflicts.

But sometimes measurement can mislead us.  If we’re going to measure anything, we need to make sure it’s something worth measuring.

By virtue of our training, physicians are fond of measuring things.  What we don’t realize is that the act of measurement itself leads to an almost immediate bias.  As we assign numerical values to our observations, we start to define values as “normal” or “abnormal.”  And medical science dictates that we should make things “normal.”  When I oversee junior psychiatry residents or medical students, their patient presentations are often filled with such statements as “Mr. A slept for 5 hours last night” or “Ms. B ate 80% of her meals,” or “Mrs. C has gone two days without endorsing suicidal ideation,” as if these are data points to be normalized, just as potassium levels and BUN/Cr ratios need to be normalized in internal medicine.

The problem is, they’re not potassium levels or BUN/Cr ratios.  When those numbers are “abnormal,” there’s usually some underlying pathology which we can discover and correct.  In psychiatry, what’s the pathology?  For a woman who attempted suicide two days ago, does it really matter how much she’s eating today?  Does it really matter whether an acutely psychotic patient (on a new medication, in a chaotic inpatient psych unit with nurses checking on him every hour) sleeps 4 hours or 8 hours each night?  Even the questions that we ask patients—“are you still hearing voices?”, “how many panic attacks do you have each week?” and the overly simplistic “can you rate your mood on a scale of 1 to 10, where 1 is sad and 10 is happy?”— attempt to distill a patient’s overall subjective experience into an elementary quantitative measurement or, even worse, into a binary “yes/no” response.

Clinical trials take measurement to an entirely new level.  In a clinical trial, often what matters is not a patient’s overall well-being or quality of life (although, to be fair, there are ways of measuring this, too, and investigators are starting to look at this outcome measure more closely), but rather a HAM-D score, a MADRS score, a PANSS score, a Y-BOCS score, a YMRS score, or any one of an enormous number of other assessment instruments.  Granted, if I had to choose, I’d take a HAM-D score of 4 over a score of 24 any day, but does a 10- or 15-point decline (typical in some “successful” antidepressant trials) really tell you anything about an individual’s overall state of mental health?  It’s hard to say.
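For what it’s worth, the field’s own conventions make the same point.  A small sketch, using the common trial definitions (response as a drop of 50% or more from baseline, remission as a HAM-D-17 score of 7 or less) applied to an invented patient:

```python
def classify_hamd(baseline: float, endpoint: float) -> tuple[bool, bool]:
    """Conventional trial labels: response = >=50% drop; remission = endpoint <= 7."""
    response = (baseline - endpoint) >= 0.5 * baseline
    remission = endpoint <= 7
    return response, remission

# A 10-point decline from a typical entry score of 24 sounds substantial, yet it
# is neither a "response" (that would require reaching 12) nor a remission:
print(classify_hamd(24, 14))  # (False, False)
```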

One widely used instrument, the Clinical Global Impression scale, endeavors to measure the seemingly immeasurable.  Developed in 1976 and still in widespread use, the CGI scale has three parts:  the clinician evaluates (1) the severity of the patient’s illness relative to other patients with the same diagnosis (CGI-S); (2) how much the patient’s illness has improved relative to baseline (CGI-I); and (3) the efficacy of treatment.  (See here for a more detailed description.)  It is incredibly simple.  Basically, it’s just a way of asking, “So, doc, how do you think this patient is doing?” and assigning a number to it.  In other words, subjective assessment made objective.
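To make “assigning a number to it” concrete: the whole instrument fits in a few lines.  A minimal sketch (the anchor labels are the standard CGI anchors; the encoding itself is mine):

```python
from dataclasses import dataclass

# Standard anchors for the two commonly reported CGI subscales (both rated 1-7):
CGI_S_ANCHORS = {1: "normal, not at all ill", 7: "among the most extremely ill"}
CGI_I_ANCHORS = {1: "very much improved",     7: "very much worse"}

@dataclass
class CGIRating:
    severity: int     # CGI-S: relative to other patients with the same diagnosis
    improvement: int  # CGI-I: relative to the patient's own baseline
    # The third part (the Efficacy Index, weighing treatment effect against
    # side effects) is used so rarely that it is often omitted altogether.

rating = CGIRating(severity=5, improvement=2)  # "markedly ill, much improved"
```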

The problem is, the CGI has been criticized precisely for that reason—it’s too subjective.  As such, it is almost never used as a primary outcome measure in clinical trials.  Any pharmaceutical company that tries to get a drug approved on the basis of CGI improvement alone would probably be laughed out of the halls of the FDA.  But what’s wrong with subjectivity?  Isn’t everything that counts subjective, when it really comes down to it?  Especially in psychiatry?  The depressed patient who emerges from a mood episode doesn’t describe himself as “80% improved,” he just feels “a lot better—thanks, doc!”  The psychotic patient doesn’t necessarily need the voices to disappear, she just needs a way to accept them and live with them, if at all possible.  The recovering addict doesn’t think in terms of “drinking days per month,” he talks instead of “enjoying a new life.”

Nevertheless, measurement is not a fad; it’s here to stay.  And as the old saying goes, resistance is futile.  Electronic medical records, smartphone apps to measure symptoms, online checklists—they all capitalize on the fact that numbers are easy to record and store, easy to communicate to others, and sure to satisfy the bean counters.  They enable pharmacy benefit managers to approve drugs (or not), they enable insurers to reimburse for services (or not), and they allow pharmaceutical companies to identify and exploit new markets.  And, best of all, they turn psychiatry into a quantitative, valid science, just like every other branch of medicine.

If this grand march towards increased quantification persists, the human science of psychiatry may cease to exist.  Unless we can replace these instruments with outcome measures that truly reflect patients’ abilities and strengths, rather than pathological symptoms, psychiatry may be replaced by an impersonal world of questionnaires, checklists, and knee-jerk treatments.  In some settings, that’s what we have now.  But I don’t think it’s too late to salvage the human element of what we do.  A first step might be simply to use great caution whenever we’re asked to give a number, measure a symptom, or perform a calculation on something that is intrinsically a subjective phenomenon.  And to remind ourselves that numbers don’t capture everything.


Biomarker Envy V: BDNF and Cocaine Relapse

October 18, 2011

The future of psychiatric diagnosis and treatment lies in the discovery and development of “biomarkers” of pathological processes.  A biomarker, as I’ve written before, is something that can be measured or quantified, usually from a biological specimen like a blood sample, which helps to diagnose a disease or predict response to a treatment.

Biomarkers are the embodiment of the new “personalized medicine”:  instead of wasting time talking to a patient, asking questions, and possibly drawing incorrect conclusions, the holy grail of a biomarker allows the clinician to order a simple blood test (or brain scan, or genotype) and make a decision about that specific patient’s case.  But “holy grail” status is elusive, and a recent study from the Yale University Department of Psychiatry, published this month in the journal Biological Psychiatry, provides yet another example of a biomarker which is not quite there—at least not yet.

The Yale group, led by Rajita Sinha, PhD, was interested in what makes newly abstinent cocaine addicts relapse, and set out to identify a biological marker of relapse potential.  If such a biomarker exists, they argue, it could not only tell us more about the biology of cocaine dependence, craving, and relapse, but might also be used clinically to identify patients who need more aggressive treatment or other measures to maintain their abstinence.

The researchers chose BDNF, or brain-derived neurotrophic factor, as their biomarker.  Cocaine-dependent animals forced into prolonged abstinence show elevations in BDNF when exposed to a stressor; moreover, cocaine-seeking is associated with BDNF elevations, and BDNF injections can promote cocaine-seeking behavior in the same abstinent animals.  In their recent study, Sinha’s group took 35 cocaine-dependent (human) patients and admitted them to the hospital for 4 weeks.  After three weeks of NO cocaine, they measured blood levels of BDNF and compared these numbers to the levels measured in “healthy controls.”  Then they followed all 35 cocaine users for the next 90 days to determine which of them would relapse during this three-month period.

The results showed that the abstinent cocaine users generally had higher BDNF levels than the healthy controls (see figure below, A).  However, when the researchers looked at the patients who relapsed on cocaine during the 3-month follow-up (n = 23), and compared them to those who stayed clean (n = 12), they found that the relapsers, on average, had higher BDNF levels than the non-relapsers (see figure, B).  Their conclusion is that high levels of BDNF may predict relapse.
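For concreteness, the core comparison boils down to something like the sketch below.  Only the group sizes (23 relapsers, 12 abstainers) come from the study; the BDNF values are invented for illustration, and the rank-based test is my choice of a reasonable statistic for small biomarker samples, not necessarily the paper’s actual analysis:

```python
import random
from scipy.stats import mannwhitneyu

random.seed(0)
# Hypothetical BDNF values (ng/mL), for illustration only.
relapsers     = [random.gauss(25, 6) for _ in range(23)]
non_relapsers = [random.gauss(19, 6) for _ in range(12)]

# Rank-based tests are a common choice for small, possibly skewed samples.
stat, p = mannwhitneyu(relapsers, non_relapsers, alternative="greater")
print(f"U = {stat:.1f}, p = {p:.3f}")
```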

These results are intriguing, and Dr Sinha presented her findings at the California Society of Addiction Medicine (CSAM) annual conference last week.  Audience members—all of whom treat drug and alcohol addiction—asked about how they might measure BDNF levels in their patients, and whether the same BDNF elevations might be found in dependence on other drugs.

But one question really got to what I think is the heart of the matter.  Someone asked Dr Sinha: “Looking back at the 35 patients during their four weeks in the hospital, were there any characteristics that separated the high BDNF patients from those with low BDNF?”  In other words, were there any behavioral or psychological features that might, in retrospect, be correlated with elevated BDNF?  Dr Sinha responded, “The patients in the hospital who seemed to be experiencing the most stress or who seemed to be depressed had higher BDNF levels.”

Wait—you mean that the patients at high risk for relapse could be identified by talking to them?  Dr Sinha’s answer shows why biomarkers have little place in clinical medicine, at least at this point.  Sure, her group showed correlations of BDNF with relapse, but nowhere in their paper did they describe personal features of the patients (psychological test scores, psychiatric complaints, or even responses to a checklist of symptoms).  So those who seemed “stressed or depressed” had higher BDNF levels, and—as one might predict—relapsed.  Did this (clinical) observation really require a BDNF blood test?

Dr Sinha’s results (and the results of others who study BDNF and addiction) make a strong case for the role of BDNF in relapse or in recovery from addiction.  But as a clinical tool, not only is it not ready for prime time, but it distracts us from what really matters.  Had Dr Sinha’s group spent four weeks interviewing, analyzing, or just plain talking with their 35 patients instead of simply drawing blood on day 21, they might have come up with some psychological measures which would be just as predictive of relapse—and, more importantly, which might help us develop truly “personalized” treatments that have nothing to do with BDNF or any biochemical feature.

But I wouldn’t hold my breath.  As Dr Sinha’s disclosures indicate, she is on the Scientific Advisory Board of Embera NeuroTherapeutics, a small biotech company working to develop a compound called EMB-001.  EMB-001 is a combination of oxazepam (a benzodiazepine) and metyrapone.  Metyrapone inhibits the synthesis of cortisol, the primary stress hormone in humans.  Dr Sinha, therefore, is probably more interested in the stress responses of her patients (which would include BDNF and other stress-related proteins and hormones) than in whether they say they feel like using cocaine or not.

That’s not necessarily a bad thing.  Science must proceed this way.  If EMB-001 (or a treatment based on BDNF) turns out to be an effective therapy for addiction, it may save hundreds or thousands of lives.  But until science gets to that point, we clinicians must always remember that our patients are not just lab values, blood samples, or brain scans.  They are living, thinking, and speaking beings, and sometimes the best biomarker of all is our skilled assessment and deep understanding of the patient who comes to us for help.


Lexapro, Hot Flashes, and Doing What Works

June 15, 2011

One of the most common—and distressing—symptoms of menopause is the “hot flash.”  As many as 85% of perimenopausal women complain of hot flashes, characterized by a sensation of intense heat, a flushed appearance, perspiration, and pressure in the head.  An effective remedy for hot flashes over the years has been hormone replacement therapy, but many women shun this treatment because of the increased risk of breast cancer, heart disease, and stroke.  In its place, antidepressants like SSRIs and SNRIs have become more commonly prescribed for hot flashes.  Many women report great improvement in symptoms, both anecdotally and in some small open-label trials, with antidepressant therapy.

But do antidepressants actually do anything at all?

Jim Edwards covers this story in a post today on bnet’s “Placebo Effect” blog. Edwards describes a study published in the Journal of the American Medical Association (JAMA) in January 2011 (PDF here).  The study showed the clear benefit of Lexapro (an SSRI made by Forest Labs) relative to placebo in a randomized clinical trial of more than 200 menopausal women with hot flashes.  However, Edwards also reports that a brand new study (which he calls “elegant”) published in the journal Menopause found NO effect of Lexapro.  This second study measured hot flashes not by patient report, but instead by a “battery-powered hot flash detector” worn by women participating in the research.

Does Edwards conclude that the first study was bogus?  Well, not quite.  Edwards argues that the integrity of the JAMA study was dubious from the start because its lead author, Ellen Freeman, received money (honoraria and research support) from Forest Labs, while the paper in Menopause was not tainted by drug company money.  (Note: he neglected to point out that the author of the second study, Robert Freedman, holds a patent, US # 60,741,376, on the “hot flash detector” used in his study.  Yeah, that’s “elegant.”)

Now, I understand that pharmaceutical company funding has a potential to bias research (sometimes a great deal), even when the researchers swear by their objectivity.  But in this case, Edwards’ axe-grinding seems to have obscured some more relevant arguments.  In his zeal to criticize Freeman for her nefarious Forest ties, he ignores the fact that patients often do report a benefit of Lexapro.  A more relevant (and convincing) argument might have been: What makes Lexapro that much better than a generic SSRI—which would be significantly cheaper—in the treatment of hot flashes?  But no, that question was overlooked.

It’s also important to consider the methods used in the Menopause study.  Freedman and his colleagues used “objective” measures of hot flashes (using a device patented by the author, remember) instead of patients’ self-report.  What did these ambulatory monitors measure?  “Humidity on the chest”—that’s it.  (Hmmm… maybe the Exmovere Corporation could build an “Exmobaby garment” for menopausal women??)  Lexapro had no significant effect on this objective measurement.

But the problem is, hot flashes are subjective experiences.  Just like depressed mood, fatigue, pain, gastrointestinal upset, and many other symptoms we treat in medicine.  There’s probably a physiological explanation, but we don’t know what it is.  I’m sorry, but it seems presumptuous (if not downright arrogant) to say that a biometric device is an “accurate” detector of hot flashes, regardless of what the woman reports.  It’s like saying that a person is depressed because his ethanolamine phosphate level is high, or that another has OCD because she has a thicker right superior parietal gyrus in an MRI scan.

Anyway, back to Edwards’ blog post:  His opening sentence, dripping with obvious sarcasm, is “Never mind the evidence; just treat patients’ complaints.”  He then proceeds to completely downplay (if not ridicule) the fact that women frequently report a benefit of Lexapro and other SSRIs.

I wonder whether Edwards has paid any attention to what we’ve been doing in psychiatry for the last several decades.  Trust me, I would love to understand the biological basis of my patients’ symptoms—whether depression, psychosis, anxiety, or hot flashes—in order to develop more “targeted” medical treatment.  But the evidence is just not there (yet?).  In the meantime, we have to use what we’ve got.  If a woman reports improvement on Lexapro without any side effects (in other words, if the benefit exceeds the risk), I’ll prescribe it.

Let me be clear.  I’m not defending Lexapro:  if there’s a cheaper generic alternative available we should use it.  Similarly, I’m not defending Ellen Freeman: pharmaceutical funding should be fully disclosed and, moreover, it does skew what gets published (or not).  And I’m not criticizing Dr Freedman’s Hot Flash Detector (why does that sound like something out of a 1920’s Sears Catalog?): objective measures of subjective complaints help us to understand complicated pathophysiology more clearly.

But if patients benefit from a treatment (and aren’t harmed by it), we owe it to them to provide it.  Arguments like “the research is biased,” “it’s not scientific enough,” or “doctors don’t know how it works anyway” are valid, and should not be ignored, but should also not keep us from prescribing treatments that alleviate our patients’ suffering.


Biomarker Envy IV: The Exmobaby

June 12, 2011

To what lengths would you go to keep your child healthy?  Organic, non-GMO baby food?  Hypoallergenic baby lotions and shampoos?  Bisphenol-free baby bottles?  How about a battery-powered biosensor garment that transmits ECG, skin temperature, and other biometric data about your baby wirelessly to your computer or via SMS message to your smartphone in real time?

Never fear, the Exmobaby is here.  Introduced late last year (and shown in the picture above—by the way, I don’t think that’s Jeff Daniels as a paid spokesman), the Exmobaby is a sleep garment designed for babies aged 0-12 months, which contains “embedded, non-contact sensors, a battery-powered Zigbee transmitter pod, a USB Zigbee receiver dongle that plugs into a Windows PC,” and all the necessary software.  Their slogan is “We Know How Your Baby Feels.”

It sounds like science fiction, but in reality it’s just a souped-up, high-tech version of a baby monitor.  But is it an improvement upon the audio- or video baby monitors currently available?  Exmovere certainly thinks so.  And, luckily for them, there’s no shortage of worried parents who are willing to pay for peace of mind (the device starts at $1000 and goes up to $2500, plus monthly data charges). [Note: please see addendum below.]

But while this might be an example of “a fool and his money being soon parted,” Exmovere makes some claims about the product that are highly questionable.  I first learned about the Exmobaby in a post on the KevinMD website, in which Exmovere’s CEO, David Bychkov, commented that “using Exmobaby to observe and record physiological data symptomatic of emotional changes can be useful… if you are a parent of a child with autism.”

In other words, this isn’t just a fancy monitoring device, this is a high-tech way of understanding your child’s thoughts and emotions—an “emotional umbilical cord between mother and child”—and, quite possibly, a way to diagnose a psychiatric, neurodevelopmental disorder in your newborn, all in the comfort of your own home.

I surfed over to the Exmobaby web site, whose home page shows a smiling, happy infant wearing these newfangled jammies.  Cute!  And the device (?) looks harmless enough.  But the FAQ page is where it gets interesting (or scary, depending on your position).  One question asks, “how is it possible to detect emotional states using Exmobaby?”  The response sounds like pure 21st century biobehavioral mumbo jumbo:

Detection of emotion involves software that compares heart rate, delta temperature and movement data (arousal) to heart rate variability and skin temperature (valence). These data, if tracked over time, enable a system to “guess” from a series of words that could be used to describe an emotional state: anger, fatigue, depression, joy, etc….In the case of babies, Exmovere is asking its users to try something new: name states. Exmobaby software will monitor trends in vital states. Parents will be asked to name states, such as “giggly” or “grumpy,” and the system can and will alert them when the underlying readings that match those states are detected. The idea is … to create a deeper level of communication between babies and their parents at the beginning of such a critical relationship.

In plain English: they’re asking parents to correlate data from the Exmobaby software (rather than their direct observations of the baby, which is how parents used to interact with their kids) with what they consider to be the baby’s emotional state.  Thus:  “My baby’s happy because the software says he is” rather than using old-fashioned signs—you know, like smiles and giggles.
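In software terms, the FAQ seems to describe something like nearest-neighbor matching against parent-labeled snapshots.  Here’s a speculative sketch of that idea; every feature, value, and threshold below is my guess from the FAQ’s wording, not Exmovere’s actual system:

```python
import math

labeled_states = {}  # state name -> feature vector, as labeled by a parent

def snapshot(heart_rate, delta_temp, movement, hrv, skin_temp):
    """Bundle one moment's readings (feature names guessed from the FAQ)."""
    return (heart_rate, delta_temp, movement, hrv, skin_temp)

def name_state(name, features):
    labeled_states[name] = features

def match_state(features, threshold=10.0):
    """Alert with the named state nearest the current readings, if close enough."""
    best = min(labeled_states.items(),
               key=lambda kv: math.dist(kv[1], features), default=None)
    if best and math.dist(best[1], features) < threshold:
        return best[0]
    return None

name_state("giggly", snapshot(140, 0.4, 8.0, 55, 36.8))
name_state("grumpy", snapshot(160, 0.9, 3.0, 30, 37.2))
print(match_state(snapshot(158, 0.8, 3.5, 32, 37.1)))  # -> "grumpy"
```

Note, among other problems, that this blithely mixes beats per minute with degrees Celsius in a single distance measure; “We Know How Your Baby Feels” is doing a lot of work.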

The Exmovere website also includes an article, clearly written for parents, on “Exmobaby and Autism.”  Now, autism and “autism-spectrum disorders” (ASDs) are hot topics receiving a great deal of attention these days.  ASDs currently have an estimated prevalence of 1 in 110 (and rising rapidly), with an average age of diagnosis of approximately 4 years.  Nonetheless, parents of children with ASDs begin to identify concerns by the age of 12 to 18 months, and finding a “biomarker” to enable earlier diagnosis would allay the fears and insecurities of new parents.

But is Exmovere preying on precisely these fears and insecurities?  Well, let’s first ask: is it even reasonable to think about diagnosing ASDs before the age of 12 months (when the Exmobaby garment would be worn)?  A recent study showed that ASDs could be diagnosed as early as 14 months of age, based on social and communication development (but no biometric measures).  The American Academy of Pediatrics recommends ASD screening (an interview with the parents and structured observation of the child) at ages 18 and 24 months, no earlier.  And a recent article in Pediatrics remarked that there are few measures sensitive and specific enough to detect ASD before 2 years of age (and, again, no “biological” measures to speak of).

The Exmobaby handout (which I’ve uploaded here), on the other hand, is a perfect example of a drug/device manufacturer capitalizing on the fears of parents by conflating statistics, commentary, and recommendations in a way that makes their device sound like a vital necessity for healthy infant development.  It’s deceptive marketing, pure and simple.

For example, it states “One of the ‘red flags’ in early diagnosis of ASDs is a lack of response from baby to the use of their name. Parents can potentially use Exmobaby to record times when baby’s name was said so that the reports will correlate any movement or vital sign response.”  Also, “specific tests can be designed in consultation with pediatricians to use Exmobaby to assist with diagnoses of ASDs and related developmental disorders.”  Never mind that there’s nothing in the literature correlating movement or vital-sign responses with diagnosing ASDs in this age group.

Conveniently, Exmovere also included its marketing strategy on its website (available here). It’s clear they’re planning to market Exmobaby as a garment (“a $5 billion per year worldwide market”) and not as a medical device.  That’s probably a good idea.  Or is it?  Bypassing medical professionals and tapping into a wide market of “worried well” might be good for business, but what about the “downstream” impact on our health care system?

So many questions.  But I’ll have to address them some other time, because I need to go make a sandwich.  I just got a text message telling me I’m hungry.

Addendum:  After posting this article, I received an email from Exmovere’s Investor Relations Advisor who pointed out that the $1000-$2500 prices I quoted above are for Evaluation Kits, specifically for distributors, researchers, and hospitals.  Exmobaby is not available for retail purchase at this time.  They anticipate a lower cost when the device/garment is sold directly to end users.


Biomarker Envy III: Medial Prefrontal Cortex

May 28, 2011

Well, what do you know…. No sooner did I publish my last post about the “depression biomarker” discovered by a group of Japanese scientists than yet another article appeared, describing a completely different biomarker.  This time, however, instead of simply diagnosing depression, the goal was to identify who’s at risk of relapse.  And the results are rather tantalizing… Could this be the real deal?

The paper, by Norman Farb, Adam Anderson, and colleagues at the University of Toronto, to be published in the journal Biological Psychiatry, had a simple research design.  They recruited 16 patients with a history of depression who were currently in remission (i.e., symptom-free for at least five months), as well as 16 control subjects.  They performed functional MRI (fMRI) imaging on all 32 participants while exposing them to an emotional stressor: specifically, they showed the subjects “sad” or “neutral” film clips while they were in the MRI scanner.

Afterward, they followed all 16 depressed patients for a total of 18 months.  Ten of these patients relapsed during this period.  When the group went back to look for fMRI features that distinguished the relapsers from the non-relapsers, they found that the relapsers, while viewing the “sad” film clips, had greater activity in the medial prefrontal cortex (mPFC).  The non-relapsers, on the other hand, showed greater activation in the visual cortex when viewing the same emotional trigger.

Even though the number of patients was very small (16 total), the predictive power of the test was actually quite high (see the figure in the paper).  It’s certainly conceivable that a test like this one might be used in the future to determine who needs more aggressive treatment—even if our checklists show that a depressed patient is in remission.  As an added bonus, it has better face validity than simply measuring a chemical in the bloodstream: in other words, it makes sense that a depressed person’s brain responds differently to sad stimuli, and that we might use this to predict outcomes.

As with most neuroimaging research, the study itself was fairly straightforward.  Making some sense out of the results, however, is another story.  (Especially if you like salmon.)

The researchers had predicted, based on previous studies, that patients who are prone to relapse might show greater activity in the ventromedial prefrontal cortex (VMPFC) and lower activity in the dorsolateral PFC (DLPFC).  But that’s not what they found.  Instead, relapsers had greater activity in the mPFC (which is slightly different from the VMPFC).  Moreover, non-relapsers had greater activity in the visual cortex (specifically the calcarine sulcus).

What might this mean?  The authors hypothesize that mPFC activity may lead to greater “ruminative thought” (i.e., worrying, brooding).  In fact, they did show that mPFC activation was correlated with scores on the RSQ-R, a psychological test of ruminative thought patterns.  Regarding the increased visual cortex activity, the authors suggest that this may be protective against further depressive episodes.  They surmise that it may be a “compensatory response” which might reflect “an attitude of acceptance or observation, rather than interpretation and analysis.”

In other words, to grossly oversimplify:  if you’re in recovery from depression, it’s not a good idea to ruminate, worry, and brood over your losses, or to internalize someone else’s sadness (even if it’s just a 45-second clip from the movie “Terms of Endearment”—which, by the way, was the “sad stimulus” in this experiment).  Instead, to prevent another depressive episode, you should strengthen your visual skills and use your visual cortex to observe and accept (i.e., just watch the darn movie!).

This all seems plausible, and the explanation certainly “fits” with the data.  But different conclusions can also be drawn.  Maybe those “recovered” patients who had less mPFC activity were simply “numb” to any emotional stimuli.  (All patients were taking antidepressants at the time of the fMRI study, which some patients report as having a “numbing” effect on emotions.)  Moreover, it has been said that depression can sometimes be beneficial; maybe the elevated mPFC activity in relapsers was an ongoing attempt to process the “sad” inputs in a more productive way?  As for the protective effect of visual cortex activity, maybe it isn’t about “acceptance” or “non-judgment” at all, but something else entirely?  Maybe those patients just enjoyed watching Shirley MacLaine and Jack Nicholson.

Nevertheless, the more psychologically minded among us might gladly embrace their explanations.  After all, it just seems “right” to say:  “Rumination is bad, acceptance and mindfulness (NB:  the authors did not use this term) is good.”  However, their “mediation analysis” showed that rumination scores did not predict relapse, and acceptance scores did not predict prolonged remission.  In other words, even though these psychological measures were correlated with the MRI findings, the psychological test results didn’t predict outcome.  Only the MRI findings did.

This leads to an interesting take-home message.  The results seem to support a psychological approach to maintaining remission—i.e., teaching acceptance and mindfulness, and avoiding ruminative tendencies—but this is only part of the solution.  Activity in the mPFC and the visual cortex might underlie pro-depressive and anti-depressive tendencies, respectively, in depressed patients, via mechanisms that are entirely unknown (and, dare I say it, entirely biologic?).

[An interesting footnote:  the risk of relapse was not correlated with medications.  Out of the ten who relapsed, three were still taking antidepressants.  Of the other seven, four were engaged in mindfulness-based cognitive therapy (MBCT), while the others were taking a placebo.]

Anyway, this paper describes an interesting finding with potential real-world application.  Although it’s a small study, it’s loaded with testable follow-up hypotheses.  I sincerely hope they continue to fire up the scanner, find some patients, and test them.  Who knows—we might just find something worth using.


Biomarker Envy II: Ethanolamine Phosphate

May 27, 2011

In my inbox yesterday was a story describing a new biological test for a psychiatric disorder.  Hallelujah!  Is this the holy grail we’ve all been waiting for?

Specifically, scientists at Human Metabolome Technologies (HMT) and Japan’s Keio University presented data earlier this week at a scientific conference in Tokyo, showing that they could diagnose depression by measuring levels of a chemical—ethanolamine phosphate—in patients’ blood.

Let me repeat that once again, for emphasis:  Japanese scientists now have a blood test to diagnose depression!

Never mind all that messy “talk-to-the-patient” stuff.  And you can throw away your tired old DSM-IV, because this is the new world: biological diagnosis!!  The press release describing the research even suggests that the test “could improve early detection rates of depression if performed during regular medical checkups.”  That’s right:  next time you see your primary doc, he or she might order—along with your routine CBC and lipid panel—an ethanolamine phosphate test.  If it comes back positive, congratulations!  You’re depressed!

If you can detect the skepticism in my voice, good.  Because even if this “biomarker” for depression turns out to be 100% accurate (which it is not—see below), its use runs entirely against how we should be practicing person-centered (not to be confused with “personalized”) medicine.  As a doctor, I want to hear your experiences and feelings, and help you with those symptoms, not run a blood test and order a drug.

[Incidentally, the Asahi press release made me chuckle when it stated: “About 90 percent of doctors base their diagnosis of depression on experience and varying factors.”  What about the other 10%?  Magic?]

As it turns out, I think there’s a lot to suggest that this particular blood test may not yet be ready for prime time.  For one, the work has not yet been published (and deciphering scientific results from a press release is always a risky proposition).  Secondly, the test was not 100% accurate; it failed to identify depression in 18% of cases, and falsely labeled healthy people as “depressed” 5% of the time.  (That’s a sensitivity of 82% and a specificity of 95%, for those of you playing along at home.)
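And even taking those figures at face value, the “regular medical checkup” scenario falls apart on base rates alone.  A quick sketch (the 7% community prevalence is my illustrative assumption, not a figure from HMT):

```python
# Screen 1,000 people at routine checkups; suppose 70 (7%) are actually depressed.
sens, spec = 0.82, 0.95
depressed, healthy = 70, 930

true_pos  = sens * depressed        # ~57 depressed patients correctly flagged
false_neg = depressed - true_pos    # ~13 missed entirely
false_pos = (1 - spec) * healthy    # ~46 healthy people labeled "depressed"

print(f"{true_pos:.0f} true positives vs. {false_pos:.0f} false positives")
```

In other words, nearly half of the people flagged by such a screen would be healthy people told they’re depressed.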

Further, what the heck is ethanolamine phosphate, and why would it be low in depressed people?  Is it a chemical secreted by the “happiness centers” of the brain?  Does it predict the onset or worsening of a depressive episode?  Is it somehow affected by antidepressant treatment?  As far as I can tell from a quick literature search, there has been no report—or even a suggestion—of ethanolamine (or any of its metabolites) being involved in the pathogenesis of mood disorders.  Then again, maybe I didn’t get the Japanese translation just right.

Anyway, where this “marker” came from is anybody’s guess.  It’s entirely possible (although I can’t be sure, because the Japanese group has not yet published their findings) that the researchers measured the blood levels of dozens of molecules and found the “best” results with this one.  We sometimes call this a “fishing expedition.”  Obviously, the finding has to be replicated, and if it was, in fact, just a lucky result, further research will bear that out.
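It’s easy to simulate how such an expedition can manufacture a “best” marker out of pure noise.  A toy sketch (my construction, purely illustrative, not HMT’s actual analysis):

```python
import random
from statistics import mean, stdev

random.seed(1)

def t_like(a, b):
    """Crude two-sample t statistic; enough for a toy demonstration."""
    return abs(mean(a) - mean(b)) / ((stdev(a)**2/len(a) + stdev(b)**2/len(b)) ** 0.5)

# 50 candidate "metabolites," each measured in two groups of 30 drawn from the
# SAME distribution -- there is no real group difference anywhere.
best = max(
    t_like([random.gauss(0, 1) for _ in range(30)],
           [random.gauss(0, 1) for _ in range(30)])
    for _ in range(50)
)
print(f"best-looking marker: t ~ {best:.2f}")  # routinely lands near or above 2
```

Pick the winner of that lottery, publish it alone, and it looks like a discovery.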

But Dr Yoshiaki Ohashi, board director and chief security officer at HMT (“chief security officer”? does he wear a badge and sit at the front desk during the overnight shift, too?) maintains that the findings “will make it easier for an objective, biological diagnosis of depressive patients.”

Wow.  In 2011.  (And just in time for DSM-5.)

What if he’s right?  How would you feel if you went to a routine doctor’s visit next week, got an order for blood work, and a secretary called you a few days later to tell you that you have depression?  Even if you don’t feel depressed?

Were there other motives for developing such a test?  Probably.  One of the press releases quotes the Japanese Ministry of Health as saying that “only one quarter of the people who need treatment” actually get it.  So maybe this blood test is simply a way to “offer treatment to more people,” which is to say, to expand the market for antidepressants, even to those who don’t want treatment.  And then, of course, HMT wants its piece of the pie: it is already developing a commercial test to measure ethanolamine phosphate levels, and widespread adoption of that test would translate into big bucks indeed.

So while many other questions remain to be answered, I must say I’m not holding my breath. Biological screening tests for psychiatric disorders have no face validity (in other words, if a test is positive but a person shows no signs or symptoms, then what?) and a positive result may expose patients to “preventive” treatments that are costly and cause unwanted side effects.

In my opinion, the best way (if any) to use a biomarker is in a “confirmatory” or “rule-out” function.  Is that demoralized, ruminative, potentially suicidal patient in your office simply going through a rough period in her life?  Or is she clinically depressed?  Will she respond to medications, or is this something that will simply “pass”?  In cases like this, measuring ethanolamine phosphate (or another similar marker) might be helpful.

But I don’t think we’ll ever be able to screen for psychiatric illness the same way a primary care doc might screen for, say, breast cancer or diabetes.  To do so would redefine the entire concept of “mental” illness (perhaps making it “neurological” illness instead?).  It also takes the person out of the picture.  At the end of the day, it’s always the patient’s thoughts, words, and experiences that count.  Ignoring those—and focusing instead on a chemical in the bloodstream—would be an unfortunate path to tread.
