Depression Tests: When “Basic” Research Becomes “Applied”

April 22, 2012

Anyone with an understanding of the scientific process can appreciate the difference between “basic” and “applied” research.  Basic research, often considered “pure” science, is the study of science for its own sake, motivated by curiosity and a desire to understand.  General questions and theories are tested, often without any obvious practical application.  On the other hand, “applied” research is usually done for a specific reason: to solve a real-world problem or to develop a new product, such as a better mousetrap, a faster computer, or a more effective way to diagnose illness.

In psychiatric research, the distinction between “basic” and “applied” research is often blurred.  Two recent articles (and the accompanying media attention they’ve received) provide very good examples of this phenomenon.  Both stories involve blood tests to diagnose depression.  Both are intriguing, novel studies.  Both may revolutionize our understanding of mental illness.  But responses to both have also been blown way out of proportion, seeking to “apply” what is clearly only at the “basic” stage.

The first study, by George Papakostas and his colleagues at Massachusetts General Hospital and Ridge Diagnostics, was published last December in the journal Molecular Psychiatry.  They developed a technique that measures nine proteins in the blood, plugs those values into a fancy (although proprietary—i.e., unknown) algorithm, and calculates an “MDDScore” which, supposedly, diagnoses depression.  In their paper, they compared 70 depressed patients with 43 non-depressed people and showed that their assay identifies depression with a specificity of 81% and a sensitivity of 91%.
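To make those two numbers concrete, here is a minimal sketch in Python.  The individual cell counts are back-calculated from the reported percentages purely for illustration; they are not the published figures from the paper.

```python
# A minimal sketch of what "sensitivity 91%, specificity 81%" means.
# Cell counts are back-calculated from the reported percentages for
# illustration only; they are not the published counts.
depressed_total, controls_total = 70, 43
true_positives = 64   # depressed patients flagged by the test (assumed count)
true_negatives = 35   # non-depressed people correctly ruled out (assumed count)

false_negatives = depressed_total - true_positives   # depressed but missed
false_positives = controls_total - true_negatives    # healthy but flagged

sensitivity = true_positives / depressed_total   # 64/70 ≈ 0.91
specificity = true_negatives / controls_total    # 35/43 ≈ 0.81

print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```

In other words, sensitivity describes how many truly depressed patients the assay catches, and specificity describes how many non-depressed people it correctly leaves alone; by themselves, those numbers say nothing about how the test would perform outside a carefully selected research sample.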

The other study, published two weeks ago in Translational Psychiatry by Eve Redei and her colleagues at Northwestern University, purports to diagnose depression in adolescents.  They didn’t measure proteins in patients’ blood, but rather levels of RNA.  (As a quick aside, RNA is the “messenger” molecule inside each cell that tells the cell which proteins to make.)  They studied a smaller number of patients—only 14 depressed teenagers, compared with 14 non-depressed controls—and identified 11 RNA molecules which were expressed differently between the two groups.  These were selected from a much larger number of RNA transcripts on the basis of an animal model of depression: specifically, a rat strain that was bred to show “depressive-like” behavior.

If we look at each of these studies as “basic” science, they offer some potentially tantalizing insights into what might be happening in the bodies of depressed people (or rats).  Even though some of us argue that no two “depressed” people are alike—and that we should look instead at person-centered factors that might explain how they are unique—these studies nevertheless might have something to say about the common underlying biology of depression—if such a thing exists.  At the very least, further investigation of proteins with no logical connection to depression (such as apolipoprotein CIII or myeloperoxidase), or of RNA transcripts for genes like toll-like receptor 1 or S-phase cyclin-A-associated protein, might someday help us develop more effective treatments than the often ineffective SSRIs that are the current standard of care.

Surprisingly, though, this is not how these articles have been greeted.  Take the Redei article, for instance.  Since its publication, there have been dozens of media mentions, with such headlines as “Depression Blood Test for Teens May Lead To Less Stigma” and “Depression Researchers May Have Developed First Blood Test For Teens.”  To the everyday reader, it seems as if we’ve gone straight from the bench to the bedside.  Granted, each story mentions that the test is not quite “ready for prime time,” but headlines draw readers’ attention.  Even the APA’s official Twitter feed mentioned it (“Blood test for early-onset #depression promising,” along with the tags “#childrenshealth” and “#fightstigma”), giving it a certain degree of legitimacy among doctors and patients alike.

(I should point out that one of Redei’s co-authors, Bill Gardner, emphasized—correctly—on his own blog that their study was NOT to be seen as a test for depression, and that it required refinement and replication before it could be used clinically.  He also acknowledged that their study population—adolescents—are often targets for unnecessary pharmacological intervention, demanding even further caution in interpreting their results.)

As for the Papakostas article, there was a similar flurry of articles about it when preliminary results were presented last year.  Like Redei’s research, it’s an interesting study that could change the way we diagnose depression.  However, unlike Redei’s study, it was funded by a private, self-proclaimed “neurodiagnostics” company.  (That company, Ridge Diagnostics, has not revealed the algorithm by which they calculate their “MDDScore,” essentially preventing any independent group from trying to replicate their findings.)

Incidentally, the Chairman of the Board of Ridge Diagnostics is David Hale, who also founded—and is Chairman of—Somaxon Pharmaceuticals, a company I wrote about last year when it tried to bring low-dose doxepin to the market as a sleep aid, and then used its patent muscle to issue cease-and-desist letters to people who suggested using the ultra-cheap generic version instead of Somaxon’s name-brand drug.

Ridge Diagnostics has apparently decided not to wait for replication of its findings, and instead is taking its MDDScore to the masses, complete with a Twitter feed, a Facebook Page, and a series of videos selling the MDDScore (priced at a low, low $745!), aimed directly at patients.  At this rate, it’s only a matter of time before the MDDScore is featured on the “Dr Oz Show” or “The Doctors.”  Take a look at this professionally produced video, for instance, posted last month on YouTube:


(Interesting—the host hardly even mentions the word “depression.”  A focus group must have told them that it detracted from his sales pitch.)

I think it’s great that scientists are investigating the basic biology of depression.  I also have no problem when private companies try to get in on the act.  However, when research that is obviously at the “basic” stage (and, yes, not ready for prime time) becomes the focus of a viral video marketing campaign or a major story on the Huffington Post, one must wonder why we’ve been so quick to cross the line from “basic” research into the “applied” uses of those preliminary findings.  Okay, okay, I know the answer is money.  But who has the authority—and the voice—to say, “not so fast” and preserve some integrity in the field of psychiatric research?  Where’s the money in that?


Is The Criticism of DSM-5 Misguided? Part II

March 14, 2012

A few months ago, I wrote about how critics of the DSM-5 (led by Allen Frances, editor of the DSM-IV) might be barking up the wrong tree.  I argued that many of the problems the critics predict are not the fault of the book, but rather how people might use it.  Admittedly, this sounds a lot like the “guns don’t kill people, people do” argument against gun control (as one of my commenters pointed out), or a way for me to shift responsibility to someone else (as another commenter wrote).  But it’s a side of the issue that no one seems to be addressing.

The issue emerges again with the ongoing controversy over the “bereavement exclusion” in the DSM-IV.  Briefly, our current DSM says that grieving over a loved one does not constitute major depression (as long as it doesn’t last more than two months) and, as such, should not be treated.  However, some have argued that this exclusion should be removed in DSM-5.  According to Sidney Zisook, a UCSD psychiatrist, if we fail to recognize and treat clinical depression simply because it occurs in the two-month bereavement period, we do those people a “disservice.”  Likewise, David Kupfer, chair of the DSM-5 task force, defends the removal of the bereavement exclusion because “if patients … want help, they should not be prevented from getting [it] because somebody tells them that this is what everybody has when they have a loss.”

The NPR news program “Talk of the Nation” featured a discussion of this topic on Tuesday’s broadcast, but the guests and callers described the issue in a more nuanced (translation: “real-world”) fashion.  Michael Craig Miller, Editor of the Harvard Mental Health Letter, referred to the grieving process by saying: “The reality is that there is no firm line, and it is always a judgment call…. labels tend not to matter as much as the practical concern, that people shouldn’t feel a sense of shame.  If they feel they need some help to get through something, then they should ask for it.”  Whether a grieving person needs treatment, therefore, is not a yes/no, either/or proposition, but something determined individually.

This sentiment was echoed in a February 19 editorial in Lancet by the psychiatrist/anthropologist Arthur Kleinman, who wrote that the experience of loss “is always framed by meanings and values, which themselves are affected by all sorts of things like one’s age, health, financial and work conditions, and what is happening in one’s life and in the wider world.”  Everyone seems to be saying pretty much the same thing:  people grieve in different ways, but those who are suffering should have access to treatment.

So why the controversy?  I can only surmise it’s because the critics of DSM-5 believe that mental health clinicians are unable to determine who needs help and, therefore, have to rely on a book to do so.  Listening to the arguments of Allen Frances et al, one would think that we have no ability to collaborate, empathize, and relate with our patients.  I think that attitude is objectionable to anyone who has made it his or her life’s work to treat the emotional suffering of others, and underestimates the effort that many of us devote to the people we serve.

But in some cases the critics are right.  Sometimes clinicians do get answers from the book, or from some senseless protocol (usually written by a non-clinician).  One caller to the NPR program said she was handed an antidepressant prescription upon her discharge from the hospital after a stillbirth at 8 months of pregnancy.  Was she grieving?  Absolutely.  Did she need the antidepressant?  No one even bothered to figure that out.  It’s like the clinicians who see “bipolar” in everyone who has anger problems; “PTSD” in everyone who was raised in a turbulent household; or “ADHD” in every child who does poorly in school.

If a clinician observes a symptom and makes a diagnosis simply on the basis of a checklist from a book, or from a single statement by a patient, and not on the basis of his or her full understanding, experience, and clinical assessment of that patient, then the clinician (and not the book) deserves to take full responsibility for any negative outcome of that treatment.  [And if this counts as acceptable practice, then we might as well fire all the psychiatrists and hire high-school interns—or computers!—at a mere fraction of the cost, because they could do this job just as well.]

Could the new DSM-5 be misused?  Yes.  Drug companies could (and probably will) exploit it to develop expensive and potentially harmful drugs.  Researchers will use it to design clinical trials on patients that, regrettably, may not resemble those in the “real world.”  Unskilled clinicians will use it to make imperfect diagnoses and give inappropriate labels to their patients.  Insurance companies will use the labels to approve or deny treatment.  Government agencies will use it to determine everything from who’s “disabled” to who gets access to special services in preschool.  And, of course, the American Psychiatric Association will use it as their largest revenue-generating tool, written by authors with extensive drug-industry ties.

To me, those are the places where critics should focus their rage.  But remember, to most good clinicians, it’s just a book—a field guide, helping us to identify potential concerns, and to guide future research into mental illness and its treatment.  What we choose to do with such information depends upon our clinical acumen and our relationship with our patients.  To assume that clinicians will blindly use it to slap the “depression” label and force antidepressants on anyone whose spouse or parent just died “because the book said so,” is insulting to those of us who actually care about our patients, and about what we do to improve their lives.


Whatever Works?

January 29, 2012

My iPhone’s Clock Radio app wakes me each day to the live stream of National Public Radio.  Last Monday morning, I emerged from my post-weekend slumber to hear Alix Spiegel’s piece on the serotonin theory of depression.  In my confused, half-awake state, I heard Joseph Coyle, professor of psychiatry at Harvard, remark: “the ‘chemical imbalance’ is sort of last-century thinking; it’s much more complicated than that.”

Was I dreaming?  It was, admittedly, a surreal experience.  It’s not every day that I wake up to the voice of an Ivy League professor lecturing me in psychiatry (those days are long over, thank Biederman… er, god).  Nor did I ever expect a national news program to challenge existing psychiatric dogma.  As I cleared my eyes, though, I realized: this is the real deal.  And it was refreshing, because this is what many of us have been thinking all along.  The serotonin hypothesis of depression is kaput.

Understandably, this story has received lots of attention (see here and here and here and here and here).  I don’t want to jump on the “I-told-you-so” bandwagon, but instead to offer a slightly different perspective.

A few disclaimers:  first and foremost, I agree that the “chemical imbalance” theory has overrun our profession and has commandeered the public’s understanding of mental illness—so much so that the iconic image of the synaptic cleft containing its neurotransmitters has become ensconced in the national psyche.  Secondly, I do prescribe SSRIs (selective serotonin reuptake inhibitors), plus lots of other psychiatric medications, which occasionally do work.  (And, in the interest of full disclosure, I’ve taken three of them myself.  They did nothing for me.)

To the extent that psychiatrists talk about “chemical imbalances,” I can see why this could be misconstrued as “lying” to patients.  Ronald Pies’ eloquent article in Psychiatric Times last summer describes the chemical-imbalance theory as “a kind of urban legend,” which no “knowledgeable, well-trained psychiatrist” would ever believe.  But that doesn’t matter.  Thanks to us, the word is out there.  The damage has already been done.  So why, then, do psychiatrists (even the “knowledgeable, well-trained” ones) continue to prescribe SSRI antidepressants to patients?

Because they work.

Okay, maybe not 100% of the time.  Maybe not even 40% of the time, according to antidepressant drug trials like STAR*D.  Experience shows, however, that they work often enough for patients to come back for more.  In fact, when discussed in the right context, with their potential side effects described in detail, and when prescribed by a compassionate, intelligent, and trusted professional, antidepressants probably “work” far more often than they do in the drug trials.

But does that make it right to prescribe them?  Ah, that’s an entirely different question.  Consider the following:  I may not agree with the serotonin theory, but if I prescribe an SSRI to a patient with depression, and they report a benefit, experience no obvious side effects, pay only $4/month for the medication, and (say) $50 for a monthly visit with me, is there anything wrong with that?  Plenty of doctors would say, no, not at all.  But what if my patient (justifiably so) doesn’t believe in the serotonin hypothesis and I prescribe anyway?  What if my patient experiences horrible side effects from the drug?  What if the drug costs $400/month instead of $4?  What if I charge the patient $300 instead of $50 for each return visit?  What if I decide to “augment” my patient’s SSRI with yet another serotonin agent, or an atypical antipsychotic, adding hundreds of dollars in cost and potentially yet more side effects?  Those are aspects we don’t often think about, and they constitute the unfortunate “collateral damage” of the chemical-imbalance theory.

Indeed, something’s “working” when a patient reports success with an antidepressant; exactly why and how it “works” is less certain.  In my practice, I tell my patients that, at individual synapses, SSRIs probably increase extracellular serotonin levels (at least in the short-term), but we don’t know what that means for your whole brain, much less for your thoughts or behavior.  Some other psychiatrists say that “a serotonin boost might help your depression” or “this drug might act on pathways important for depression.”   Are those lies?  Jeffrey Lacasse and Jonathan Leo write that “telling a falsehood to patients … is a serious violation of informed consent.”  But the same could be said for psychotherapy, religion, tai chi, ECT, rTMS, Reiki, qigong, numerology, orthomolecular psychiatry, somatic re-experiencing, EMDR, self-help groups, AA, yoga, acupuncture, transcendental meditation, and Deplin.  We recommend these things all the time, not knowing exactly how they “work.”

Most of those examples are rather harmless and inexpensive (except for Deplin—it’s expensive), but, like antidepressants, all rest on shaky ground.  So maybe psychiatry’s problem is not the “falsehood” itself, but the consequences of that falsehood.  This faulty hypothesis has spawned an entire industry capitalizing on nothing more than an educated guess, costing our health care system untold millions of dollars, saddling huge numbers of patients with bothersome side effects (or possibly worse), and—most distressingly to me—giving people an incorrect and ultimately dehumanizing solution to their emotional problems.  (What’s dehumanizing about getting better, you might ask?  Well, nothing, except when “getting better” means giving up one’s own ability to manage one’s situation and attributing the success to a pill instead.)

Dr Pies’ article in Psychiatric Times closed with an admonition from psychiatrist Nassir Ghaemi:  “We must not be drawn into a haze of promiscuous eclecticism in our treatment; rather, we must be guided by well-designed studies and the best available evidence.”  That’s debatable.  If we wait for “evidence” for all sorts of interventions that, in many people, do help, we’ll never get anywhere.  A lack of “evidence” certainly hasn’t eliminated religion—or, for that matter, psychoanalysis—from the face of the earth.

Thus, faulty theory or not, there’s still a place for SSRI medications in psychiatry, because some patients swear by them.  Furthermore—and with all due respect to Dr Ghaemi—maybe we should be a bit more promiscuous in our eclecticism.  Medication therapy should be offered side-by-side with competent psychosocial treatments including, but not limited to, psychotherapy, group therapy, holistic-medicine approaches, family interventions, and job training and other social supports.  Patients’ preferences should always be respected, along with safeguards to protect patient safety and guard against excessive cost.  We may not have good scientific evidence for certain selections on this smorgasbord of options, but if patients keep coming back, report successful outcomes, and enter into meaningful and lasting recovery, that might be all the evidence we need.


Maybe Stuart Smalley Was Right All Along

July 31, 2011

To many people, the self-help movement—with its positive self-talk, daily feel-good affirmations, and emphasis on vague concepts like “gratitude” and “acceptance”—seems like cheesy psychobabble.  Take, for instance, Al Franken’s fictional early-1990s SNL character Stuart Smalley: a perennially cheerful, cardigan-clad “member of several 12-step groups but not a licensed therapist,” whose annoyingly positive attitude mocked the idea that personal suffering could be overcome with absurdly simple affirmative self-talk.

Stuart Smalley was clearly a caricature of the 12-step movement (in fact, many of his “catchphrases” came directly from 12-step principles), but there’s little doubt that the strategies he espoused have worked for many patients in their efforts to overcome alcoholism, drug addiction, and other types of mental illness.

Twenty years later, we now realize Stuart may have been onto something.

A review by Kristin Layous and her colleagues, published in this month’s Journal of Alternative and Complementary Medicine, shows evidence that daily affirmations and other “positive activity interventions” (PAIs) may have a place in the treatment of depression.  They summarize recent studies examining such interventions, including two randomized controlled studies in patients with mild clinical depression, which show that PAIs do, in fact, have a significant (and rapid) effect on reducing depressive symptoms.

What exactly is a PAI?  The authors offer some examples:  “writing letters of gratitude, counting one’s blessings, practicing optimism, performing acts of kindness, meditation on positive feelings toward others, and using one’s signature strengths.”  They argue that when a depressed person engages in any of these activities, he or she not only overcomes depressed feelings (if only transiently) but can also use this to “move past the point of simply ‘not feeling depressed’ to the point of flourishing.”

Layous and her colleagues even summarize results of clinical trials of self-administered PAIs.  They report that PAIs had effect sizes of 0.31 for depressive symptoms in a community sample, and 0.24 and 0.23 in two studies specifically with depressed patients.  By comparison, psychotherapy has an average effect size of approximately 0.32, and psychotropic medications (although there is some controversy) have roughly the same effect.

[BTW, an “effect size” is a standardized measure of the magnitude of an observed effect.  An effect size of 0.00 means the intervention has no impact at all; an effect size of 1.00 means the intervention causes an average change (measured across the whole group) equivalent to one standard deviation of the baseline measurement in that group.  An effect size of 0.5 means the average change is half the standard deviation, and so forth.  In general, an effect size of 0.10 is considered to be “small,” 0.30 is “medium,” and 0.50 is a “large” effect.  For more information, see this excellent summary.]
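And to make the arithmetic behind an effect size concrete, here is a minimal sketch of the standardized-mean-difference calculation (Cohen’s d) described above.  The symptom-score numbers are invented for illustration only.

```python
# Toy illustration of a standardized effect size (Cohen's d): the
# difference in group means divided by the pooled standard deviation.
# All numbers below are invented.
from statistics import mean, stdev

control   = [22, 25, 19, 27, 24, 21, 26, 23]   # e.g., symptom scores without a PAI
treatment = [21, 24, 19, 26, 23, 20, 26, 22]   # e.g., symptom scores after a PAI

def cohens_d(a, b):
    # pooled SD, weighting each group by its degrees of freedom
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / pooled_var ** 0.5

print(f"d = {cohens_d(control, treatment):.2f}")   # ≈ 0.28 for these made-up scores
```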

So if PAIs work about as well as medications or psychotherapy, then why don’t we use them more often in our depressed patients?   Well, there are a number of reasons.  First of all, until recently, no one has taken such an approach very seriously.  Despite its enormous common-sense appeal, “positive psychology” has only been a field of legitimate scientific study for the last ten years or so (one of its major proponents, Sonja Lyubomirsky, is a co-author on this review) and therefore has not received the sort of scientific scrutiny demanded by “evidence-based” medicine.

A related explanation may be that people just don’t think that “positive thinking” can cure what they feel must be a disease.  As Albert Einstein once said, “You cannot solve a problem from the same consciousness that created it.”  The implication is that one must seek outside help—a drug, a therapist, some expert—to treat one’s illness.  But the reality is that for most cases of depression, “positive thinking” is outside help.  It’s something that—almost by definition—depressed people don’t do.  If they were to try it, they may reap great benefits, while simultaneously changing neural pathways responsible for the depression in the first place.

Which brings me to the final two reasons why “positive thinking” isn’t part of our treatment repertoire.  For one thing, there’s little financial incentive (to people like me) to do it.  If my patients can overcome their depression by “counting their blessings” for 30 minutes each day, or acting kindly towards strangers ten times a week, then they’ll be less likely to pay me for psychotherapy or for a refill of their antidepressant prescription.  Thus, psychiatrists and psychologists have a vested interest in patients believing that their expert skills and knowledge (of esoteric neural pathways) are vital for a full recovery, when, in fact, they may not be.

Finally, the “positive thinking” concept may itself become too “medicalized,” which may ruin an otherwise very good idea.  The Layous article, for example, tries to give a neuroanatomical explanation for why PAIs are effective.  They write that PAIs “might be linked to downregulation of the hyperactivated amygdala response” or might cause “activation in the left frontal region” and lower activity in the right frontal region.  Okay, these explanations might be true, but the real question is: does it matter?  Is it necessary to identify a mechanism for everything, even interventions that are (a) non-invasive, (b) cheap, (c) easy, (d) safe, and (e) effective?   In our great desire to identify neural mechanisms or “pathways” of PAIs, we might end up finding nothing;  it would be a shame if this result (or, more accurately, the lack thereof) leads us to the conclusion that it’s all “pseudoscience,” hocus-pocus, psychobabble stuff, and not worthy of our time or resources.

At any rate, it’s great to see that alternative methods of treating depression are receiving some attention.  I just hope that their “alternative-ness” doesn’t earn immediate rejection by the medical community.  On the contrary, we need to identify those for whom such approaches are beneficial; engaging in “positive activities” to treat depression is an obvious idea whose time has come.


Another Day, Another Seroquel XR Indication?

June 1, 2011

Just when you thought the antipsychotic drug Seroquel had fully penetrated doctors’ offices and patients’ medicine chests (not to mention law offices and children’s tummies) all across America, a new clinical trial is recruiting subjects for yet another indication for this ubiquitous drug.

Technically, the trial is of Seroquel XR, not Seroquel.  (Because, you know, the two are COMPLETELY different drugs, as described in this YouTube video.)  But you get the idea.  Anything to keep the money flowing for AstraZeneca, especially after Seroquel goes generic in 2012.

Thanks to a tip from Stephany at Soulful Sepulcher, you can read all the details of this study here.  It’s called the “Quietude Study,” a trial of Seroquel XR for the treatment of agitated depression.  Specifically, they want to compare Seroquel XR (at doses up to 150 or 300 mg/day) with Lexapro (up to 20 mg/day), and the investigators predict that Seroquel XR will be more effective in the management of depression “with prominent agitation.”

Two things caught my eye right away:  First, the name of the study (“Quietude”) is obviously a play on words, since the generic name for Seroquel is quetiapine.  How cute.  I also noticed that the study is being conducted by Roger McIntyre, MD, whom I saw just yesterday on the medical website QuantiaMD giving a blatantly obvious “infomercial” for Geodon (for Quantia members, here’s the link), a competitor’s drug.  [And for more info on QuantiaMD, see Daniel Carlat’s recent post about this site.]

But let’s get more substantive, shall we?  A look at the details of this new “Quietude” study is revealing.  For one thing, the opening statement of the study’s “Purpose” is:  “Most individuals with major depressive disorder manifest clinically significant agitation.”  Really?  I’ve certainly seen cases of agitated depression, but are “most” depressed patients agitated?  Not in my experience.  Maybe when they say “agitation” they’re including patients with akathisia, an occasional side effect of some antidepressant medication.  I understand research proposals always have to start with a statement about how widespread the problem is, but this one seems a bit of a stretch.

The study design also spells out inclusion and exclusion criteria.  One of the inclusion criteria, along with the typical symptomatic measures (i.e., HAM-D >20 and CGI-S >4), is “significant agitation.”  That’s it.  By whose measure?  Patient report?  Clinician’s evaluation?  I’d really like to know more about how the “agitated” folks are going to be selected.

Some interesting exclusion criteria are (a) “known lack of antidepressant response to escitalopram [Lexapro]” and (b) “known lack of antidepressant response to quetiapine [Seroquel].”  So they’re enriching their population for individuals who have not already tried Lexapro or Seroquel and failed to respond. Perhaps this isn’t a huge problem, but Seroquel XR is not the greatest antidepressant (see below), and this exclusion criterion will probably weed out the patients who gained weight on Seroquel or “felt like a zombie”—two common complaints with this medication which often lead to its discontinuation.

But what disturbs me the most about this trial is the fact that it seems entirely unnecessary.  The fact of the matter is that Seroquel XR is—for better or for worse—already used for many cases of “agitated depression.”  And it’s not even entirely off-label, because Seroquel XR is approved for bipolar depression and for the adjunctive treatment of MDD (whether it actually works as an antidepressant is another story).  As mentioned above, quetiapine is a sedating drug in many patients, so of course a psychiatrist is going to think about it for “agitated depression.”  (Unless he/she wants to take the time to determine the causes of the patient’s agitation, which, unfortunately, often does not happen.)

But there’s more.  When Seroquel XR was first introduced, with much fanfare, for the treatment of depression, I remember being somewhat skeptical and asking my local AstraZeneca sales force whether it had any “antidepressant effect” other than its well-known sedative and appetite-enhancing effects (because, after all, those are two of the symptoms of depression typically measured in clinical trials).  I was reassured that, no, no, Seroquel XR is more than that; it acts on all depressive symptoms, probably through its metabolite norquetiapine.

In fact, a year ago I emailed a local “key opinion leader” who spoke extensively for AstraZeneca and was told the following (emphasis added; BTW, if it’s too technical for you, don’t worry, go ahead and skip):

I think the concept is that quetapine at low doses (25-50-100 mg) is almost entirely anti-histaminergic and anti-muscarinic. However at the 150-300 mg doses there is significant norepinephrine transporter inhibition from the metabolite norquetapine as well as 5HT 1A agonism and 5HT2A AND 5HT2-C antagonism which all increase dopamine. Thus at the higher doses of 150-300 mg there is significant antidepressant activity but also increases in frontal, limbic and striatal dopamine which can be stimulatory (as well as having anti-depressant effects). At the 600-800 mg doses there is significant D-2 antagonism which is where the antipsychotic effect (D-2 antagonism) kicks in. Thus as the doses escalate patients go from pure sedation to antidepressant to antipsychotic effects.  At least this is the theory based on the dose related relative strength and affinities for its respective receptors.
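If it helps to see that dose-response claim laid out, here it is restated as a simple lookup; this is a sketch only, and the dose bands and mechanisms come from the quoted email, not from independent pharmacology references.

```python
# The dose-response theory from the email above, restated as a lookup.
# These bands and mechanisms are the quoted speaker's claims, not
# established prescribing guidance.
quetiapine_theory = {
    "25-100 mg":  "mostly antihistaminergic/antimuscarinic -> sedation",
    "150-300 mg": ("norquetiapine NET inhibition + 5HT1A agonism + "
                   "5HT2A/5HT2C antagonism -> purported antidepressant effect"),
    "600-800 mg": "significant D2 antagonism -> antipsychotic effect",
}

for dose, mechanism in quetiapine_theory.items():
    print(f"{dose:>10}: {mechanism}")
```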

The premise of the “Quietude” study seems to be telling us something different—even though it’s what we already knew if we only paid attention to what our patients tell us (and not necessarily to AstraZeneca): namely, that the primary advantage of intermediate-dose Seroquel XR does seem to be its sedative effect.  And this might indeed make it effective for the treatment of the “psychological and physical restlessness” associated with depression.

Anyway, because the trial is only being run in Canadian sites, I won’t have to worry about whether to refer my patients to it.  But it’s also a trial whose results I won’t exactly be anxiously awaiting.


Biomarker Envy III: Medial Prefrontal Cortex

May 28, 2011

Well, what do you know…. No sooner did I publish my last post about the “depression biomarker” discovered by a group of Japanese scientists, than yet another article appeared, describing a completely different biomarker.  This time, however, instead of simply diagnosing depression, the goal was to identify who’s at risk of relapse.  And the results are rather tantalizing… Could this be the real deal?

The paper, to be published in the journal Biological Psychiatry, by Norman Farb, Adam Anderson, and colleagues at the University of Toronto, had a simple research design.  They recruited 16 patients with a history of depression, but who were currently in remission (i.e., symptom-free for at least five months), as well as 16 control subjects.  They performed functional MRI (fMRI) imaging on all 32 participants while exposing them to an emotional stressor: specifically, they showed the subjects “sad” or “neutral” film clips while they were in the MRI scanner.

Afterward, they followed all 16 depressed patients for a total of 18 months.  Ten of these patients relapsed during this period.  When the group went back to look for fMRI features that distinguished the relapsers from the non-relapsers, they found that the relapsers, while viewing the “sad” film clips, had greater activity in the medial prefrontal cortex (mPFC).  The non-relapsers, on the other hand, showed greater activation in the visual cortex when viewing the same emotional trigger.

Even though the number of patients was very small (16 total), the predictive power of the tests was actually quite high (see the figure at right).  It’s certainly conceivable that a test like this one might be used in the future to determine who needs more aggressive treatment—even if our checklists show that a depressed patient is in remission.  As an added bonus, it has better face validity than simply measuring a chemical in the bloodstream: in other words, it makes sense that a depressed person’s brain responds differently to sad stimuli, and that we might use this to predict outcomes.

As with most neuroimaging research, the study itself was fairly straightforward.  Making some sense out of the results, however, is another story.  (Especially if you like salmon.)

The researchers had predicted, based on previous studies, that patients who are prone to relapse might show greater activity in the ventromedial prefrontal cortex (VMPFC) and lower activity in the dorsolateral PFC (DLPFC).  But that’s not what they found.  Instead, relapsers had greater activity in the mPFC (which is slightly different from the VMPFC).  Moreover, non-relapsers had greater activity in the visual cortex (specifically the calcarine sulcus).

What might this mean?  The authors hypothesize that mPFC activity may lead to greater “ruminative thought” (i.e., worrying, brooding).  In fact, they did show that mPFC activation was correlated with scores on the RSQ-R, a psychological test of ruminative thought patterns.  Regarding the increased visual cortex activity, the authors suggest that this may be protective against further depressive episodes.  They surmise that it may be a “compensatory response” which might reflect “an attitude of acceptance or observation, rather than interpretation and analysis.”

In other words, to grossly oversimplify:  if you’re in recovery from depression, it’s not a good idea to ruminate, worry, and brood over your losses, or to internalize someone else’s sadness (even if it’s just a 45-second clip from the movie “Terms of Endearment”—which, by the way, was the “sad stimulus” in this experiment).  Instead, to prevent another depressive episode, you should strengthen your visual skills and use your visual cortex to observe and accept (i.e., just watch the darn movie!).

This all seems plausible, and the explanation certainly “fits” with the data.  But different conclusions can also be drawn.  Maybe those “recovered” patients who had less mPFC activity were simply “numb” to any emotional stimuli.  (All patients were taking antidepressants at the time of the fMRI study, which some patients report as having a “numbing” effect on emotions.)  Moreover, it has been said that depression can sometimes be beneficial; maybe the elevated mPFC activity in relapsers was an ongoing attempt to process the “sad” inputs in a more productive way?  As for the protective effect of visual cortex activity, maybe it isn’t about “acceptance” or “non-judgment” at all, but something else entirely?  Maybe those patients just enjoyed watching Shirley MacLaine and Jack Nicholson.

Nevertheless, the more psychologically minded among us might gladly embrace their explanations.  After all, it just seems “right” to say:  “Rumination is bad, acceptance and mindfulness (NB:  the authors did not use this term) is good.”  However, their “mediation analysis” showed that rumination scores did not predict relapse, and acceptance scores did not predict prolonged remission.  In other words, even though these psychological measures were correlated with the MRI findings, the psychological test results didn’t predict outcome.  Only the MRI findings did.

This leads to an interesting take-home message.  The results seem to support a psychological approach to maintaining remission—i.e., teaching acceptance and mindfulness, and avoiding ruminative tendencies—but this is only part of the solution.  Activity in the mPFC and the visual cortex might underlie pro-depressive and anti-depressive tendencies, respectively, in depressed patients, via mechanisms that are entirely unknown (and, dare I say it, entirely biologic?).

[An interesting footnote:  the risk of relapse was not correlated with medications.  Out of the ten who relapsed, three were still taking antidepressants.  Of the other seven, four were engaged in mindfulness-based cognitive therapy (MBCT), while the others were taking a placebo.]

Anyway, this paper describes an interesting finding with potential real-world application.  Although it’s a small study, it’s loaded with testable follow-up hypotheses.  I sincerely hope they continue to fire up the scanner, find some patients, and test them.  Who knows—we might just find something worth using.


Biomarker Envy II: Ethanolamine Phosphate

May 27, 2011

In my inbox yesterday was a story describing a new biological test for a psychiatric disorder.  Hallelujah!  Is this the holy grail we’ve all been waiting for?

Specifically, scientists at Human Metabolome Technologies (HMT) and Japan’s Keio University presented data earlier this week at a scientific conference in Tokyo, showing that they could diagnose depression by measuring levels of a chemical—ethanolamine phosphate—in patients’ blood.

Let me repeat that once again, for emphasis:  Japanese scientists now have a blood test to diagnose depression!

Never mind all that messy “talk-to-the-patient” stuff.  And you can throw away your tired old DSM-IV, because this is the new world: biological diagnosis!!  The press release describing the research even suggests that the test “could improve early detection rates of depression if performed during regular medical checkups.”  That’s right:  next time you see your primary doc, he or she might order—along with your routine CBC and lipid panel—an ethanolamine phosphate test.  If it comes back positive, congratulations!  You’re depressed!

If you can detect the skepticism in my voice, good.  Because even if this “biomarker” for depression turns out to be 100% accurate (which it is not—see below), its use runs entirely against how we should be practicing person-centered (not to be confused with “personalized”) medicine.  As a doctor, I want to hear your experiences and feelings, and help you with those symptoms, not run a blood test and order a drug.

[Incidentally, the Asahi press release made me chuckle when it stated: “About 90 percent of doctors base their diagnosis of depression on experience and varying factors.”  What about the other 10%?  Magic?]

As it turns out, I think there’s a lot to suggest that this particular blood test may not yet be ready for prime time.  For one, the work has not yet been published (and deciphering scientific results from a press release is always a risky proposition).  Secondly, the test was not 100% accurate; it failed to identify depression in 18% of cases, and falsely labeled healthy people as “depressed” 5% of the time.  (That’s a sensitivity of 82% and a specificity of 95%, for those of you playing along at home.)
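To see why that matters for a screening test, here is a minimal sketch using the sensitivity and specificity quoted above.  The 7% prevalence figure is my own assumption, inserted purely for illustration.

```python
# A rough sketch of why a decent-looking blood test can still mislead when
# used as a routine screen.  Sensitivity and specificity are the figures
# quoted above; the 7% prevalence is an assumption for illustration only.
sensitivity = 0.82
specificity = 0.95
prevalence  = 0.07   # assumed fraction of screened people who are depressed

true_pos  = sensitivity * prevalence              # depressed and flagged
false_pos = (1 - specificity) * (1 - prevalence)  # healthy but flagged

ppv = true_pos / (true_pos + false_pos)
print(f"Positive predictive value at {prevalence:.0%} prevalence: {ppv:.0%}")
# -> roughly 55%: nearly half of the "positive" screens would come from
#    people who are not actually depressed.
```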

Further, what the heck is ethanolamine phosphate, and why would it be low in depressed people?  Is it a chemical secreted by the “happiness centers” of the brain?  Does it predict the onset or worsening of a depressive episode?  Is it somehow affected by antidepressant treatment?  As far as I can tell from a quick literature search, there has been no report—or even a suggestion—of ethanolamine (or any of its metabolites) being involved in the pathogenesis of mood disorders.  Then again, maybe I didn’t get the Japanese translation just right.

Anyway, where this “marker” came from is anybody’s guess.  It’s entirely possible (although I can’t be sure, because the Japanese group has not yet published their findings) that the researchers measured the blood levels of dozens of molecules and found the “best” results with this one.  We sometimes call this a “fishing expedition.”  Obviously, the finding has to be replicated, and if it was, in fact, just a lucky result, further research will bear that out.

But Dr Yoshiaki Ohashi, board director and chief security officer at HMT (“chief security officer”? does he wear a badge and sit at the front desk during the overnight shift, too?) maintains that the findings “will make it easier for an objective, biological diagnosis of depressive patients.”

Wow.  In 2011.  (And just in time for DSM-5.)

What if he’s right?  How would you feel if you went to a routine doctor’s visit next week, got an order for blood work, and a secretary called you a few days later to tell you that you have depression?  Even if you don’t feel depressed?

Were there other motives for developing such a test?  Probably.  One of the press releases quotes the Japanese Ministry of Health as saying that “only one quarter of the people who need treatment” actually get it.  So maybe this blood test is simply a way to offer treatment to more people (read: expand the market for antidepressants), even to those who don’t want treatment.  And then, of course, HMT probably wants a piece of the pie.  HMT is already developing a commercial test to measure ethanolamine phosphate levels; obviously, widespread adoption of this test would translate into big bucks for HMT, indeed.

So while many other questions remain to be answered, I must say I’m not holding my breath. Biological screening tests for psychiatric disorders have no face validity (in other words, if a test is positive but a person shows no signs or symptoms, then what?) and a positive result may expose patients to “preventive” treatments that are costly and cause unwanted side effects.

In my opinion, the best way (if any) to use a biomarker is in a “confirmatory” or “rule-out” function.  Is that demoralized, ruminative, potentially suicidal patient in your office simply going through a rough period in her life?  Or is she clinically depressed?  Will she respond to medications, or is this something that will simply “pass”?  In cases like this, measuring ethanolamine phosphate (or another similar marker) might be helpful.

But I don’t think we’ll ever be able to screen for psychiatric illness the same way a primary care doc might screen for, say, breast cancer or diabetes.  To do so would redefine the entire concept of “mental” illness (perhaps making it “neurological” illness instead?).  It also takes the person out of the picture.  At the end of the day, it’s always the patient’s thoughts, words, and experiences that count.  Ignoring those—and focusing instead on a chemical in the bloodstream—would be an unfortunate path to tread.


Biomarker Envy I: Cortical Thickness

May 13, 2011

In the latest attempt to look for biological correlates or predictors of mental illness, a paper in this month’s Archives of General Psychiatry shows that children with major depressive disorder (MDD) have thinner cortical layers than “healthy” children, or children with obsessive-compulsive disorder (OCD).  Specifically, researchers performed brain MRI scans on 78 children with or without a diagnosis, and investigated seven specific areas of the cerebral cortex.  Results showed four areas which were thinner in children with MDD than in healthy children, two which were thicker, and one that did not vary.

These results add another small nugget of data to our (admittedly scant) understanding of mental illness—particularly in children, before the effects of years of continuous medication treatment.  They also represent the bias towards imaging studies in psychiatry, whose findings—even if statistically significant—are not always that reliable or meaningful.  (But I digress…)

An accompanying press release, however, was unrealistically enthusiastic.  It suggested that this study “offers an exciting new way to identify more objective markers of psychiatric illness in children.”  Indeed, the title of the paper itself (“Distinguishing between MDD and OCD in children by measuring regional cortical thickness”) might suggest a way to use this information in clinical practice right away.  But it’s best not to jump to these conclusions just yet.

For one, there was tremendous variability in the data, as shown in the figure at left.  While on average the children with MDD had a thinner right superior parietal gyrus (one of the cortical regions studied) than healthy children or children with OCD, no individual measurement was predictive of anything.

Second, the statement that we can “distinguish between depression and OCD” based on a brain scan reflects precisely the type of biological determinism and certainty (and hype?) that psychiatry has been striving for, but may never achieve (just look at the figure again).  Lay readers—and, unfortunately, many clinicians—might read the headline and believe that “if we just order an MRI for Junior, we’ll be able to get the true diagnosis.”  The positive predictive value of any test must be high enough to warrant its use in a larger population, and so far, the predictive value of most tests in psychiatry is poor.

Third, there is no a priori reason why there should be a difference between the brains (or anything else, for that matter) of patients with depression and patients with OCD, when you consider the overlap between these—and other—psychiatric conditions.  There are many shades of grey between “depression” and “OCD”:  some depressed children will certainly have OCD-like traits, and vice versa.  Treating the individual (and not necessarily the individual’s brain scan) is the best way to care for a person.

To be fair, the authors of the study, Erin Fallucca and David Rosenberg from Wayne State University in Detroit, do not state anywhere in their paper that this approach represents a “novel new diagnostic method” or make any other such sweeping claims about their findings.  In fact, they write that the differences they observed “merit further investigation” and highlight the need to look “beyond the frontal-limbic circuit.”  In other words, our current hypotheses about depression are not entirely supported by their findings (true), so we need to investigate further (also true).  And this, admittedly, is how science should proceed.

However, the history of psychiatry is dotted with tantalizing neurobiological theories or findings which find their way into clinical practice before they’ve been fully proven, or even shown any great clinical relevance.  Pertinent examples are the use of SPECT scans to diagnose ADHD, championed by Daniel Amen; quantitative EEG to predict response to psychotropics; genotyping for metabolic enzymes; and the use of SSRIs to treat depression.  (Wait, did I say that???)

The quest to identify “biomarkers” of psychiatric illness may similarly lead us to believe we know more about a disease than we do.  A biomarker is a biological feature (an endocrine or inflammatory measure, a genotype, a biochemical response to a particular intervention) that distinguishes a person with a condition from one without.  They’re used throughout medicine for diagnosis, risk stratification and monitoring treatment response.   A true biomarker for mental illness would represent a significant leap ahead in personalized treatment.  Or would it?  What if a person’s clinical presentation differs from what the marker predicts?  “I’m sorry Mrs. Jones, but even though Katie compulsively washes her hands and counts to twelve hundreds of times a day, her right superior parietal gyrus is too thin for a diagnosis of OCD.”

Other fields of medicine don’t experience this dilemma.  If you have an elevated hsCRP and high LDL, even though you “feel fine,” you are still at elevated risk for cardiovascular disease and really ought to take preventive measures (exercise, diet, etc).  (However, see this recent editorial in the BMJ about “who should define disease.”)  But if your brain scan shows cortical thinning and you have no symptoms of depression, do you need to be treated?  Are you even at risk?

Some day (hopefully) these questions will be answered, as we gain a greater understanding of the biology of mental illness.  But until then, let’s keep research and clinical practice separate until we know what we’re doing.  Psychiatry doesn’t have to be like other fields of medicine.  Patients suffer and come to us for help; let’s open our eyes and ears before sending them off to the scanner or the lab.  In doing so, we might learn something important.


What Can Cymbalta Teach Us About Pain?

April 29, 2011

You’ve probably noticed widespread TV advertisements lately for Cymbalta, Eli Lilly’s blockbuster antidepressant.  However, these ads say nothing about depression.  Sure, some of the actors may look a little depressed (the guy at right, from the Cymbalta web site, sure looks bummed), but the ads are instead promoting Cymbalta for the treatment of chronic musculoskeletal pain, an indication that Cymbalta received in August 2010, strengthening Cymbalta’s position as the “Swiss Army knife” of psychiatric meds.  (I guess that makes Seroquel the “blunt hammer” of psych meds?)

Cymbalta (duloxetine) had already been approved for diabetic neuropathy and fibromyalgia, two other pain syndromes.  It’s a “dual-action” agent, i.e., an inhibitor of the reuptake of serotonin and norepinephrine.  Other SNRIs include Effexor, Pristiq, and Savella.  Of these, only Savella has a pain [fibromyalgia] indication.

When you consider how common the complaint of “pain” is, this approval is a potential gold mine for Eli Lilly.  Moreover, the vagueness of this complaint is also something they will likely capitalize upon.  To be sure, there are distinct types of pain—e.g., neuropathic, visceral, musculoskeletal—and a proper pain workup can determine the exact nature of pain and guide the treatment accordingly.  But in reality, overworked primary care clinicians (not to mention psychiatrists, for whom hearing the word “pain” is often the extent of the physical exam) often hear the “pain” complaint and prescribe something the patient says they haven’t tried yet.  Cymbalta is looking to capture part of that market.

The analgesic mechanism of Cymbalta is (as with much in psychiatry) unknown.   Some have argued it works by relieving the depression and anxiety experienced by patients in pain.  It has also been proposed that it activates “descending” pathways from the brain, helping to dampen “ascending” pain signals from the body.  It might also block NMDA receptors or sodium channels or enhance the body’s own endorphin system.  (See a recent article by Dharmshaktu et al., 2011, for other potential mechanisms.)

But the more important question is:  does it work?  There does seem to be some decent evidence for Cymbalta’s effect in fibromyalgia and diabetic neuropathy in several outcome measures, and in a variety of 12-week trials summarized in a recent Cochrane review.

The evidence for musculoskeletal pain is less convincing.  In order to obtain approval, Lilly performed two studies of Cymbalta in osteoarthritis (OA) and three studies in chronic low back pain (CLBP).  All CLBP studies showed benefit in “24-hour pain severity” but only one of the OA studies showed improvement.   The effects were not tremendous, even though they were statistically significant (see example above).  The FDA panel expressed concern “regarding the homogeneity of the study population and the heterogeneity of CLBP presenting to physicians in clinical practice.”  In fact, the advisory committee’s enthusiasm for the expanded indication was somewhat muted.

Even though the committee also complained of the “paucity of sound data regarding the pharmacological mechanisms of many analgesic drugs … and the paucity of sound data regarding the underlying pathophysiology,” they ultimately voted to approve Cymbalta for “as broad an indication as possible,” in order for “the well-informed prescriber [to] have the option of trying out an analgesic product approved for one painful condition in a patient with a similar painful condition.”

Incidentally, they essentially ignored the equivocal results in the OA trials, reasoning instead that it was OK to “extrapolate the finding [of efficacy in CLBP] to other similar musculoskeletal conditions.”

In other words, it sounds like the FDA really wanted to get Cymbalta in the hands of more patients and more doctors.

As much as I dislike the practice of prescribing drugs simply because they’re available and they might work, the truth of the matter is, this is surely how Cymbalta will be used.  (In reality, it explains a lot of what we do in psychiatry, unfortunately.)  But pain is a complex entity.  We have to be certain not to jump to conclusions—like we frequently do in psychiatry—when/if we see a “success story” with Cymbalta.

To the body, 60 mg of duloxetine is 60 mg of duloxetine, whether it’s being ingested for depression or for pain.  If a patient’s fibromyalgia or low back pain is miraculously “cured” by Cymbalta, there’s no a priori reason to think that it’s doing anything different in that person than what it does in a depressed patient (even though that is entirely conceivable).  The same mechanism might be involved in both.

The same can be said for some other medications with multiple indications.  For example, we can’t necessarily posit alternate mechanisms for Abilify in a bipolar patient versus Abilify in a patient with schizophrenia.  At roughly equivalent doses, its efficacy in the two conditions might be better explained by a biochemical similarity between the two conditions.  (Or maybe everything really is bipolar!  My apologies to Hagop Akiskal.)

Or maybe the medication is not the important thing.  Maybe the patient’s perceived need for the medication matters more than the medication itself, and 60 mg of duloxetine for pain truly is different from 60 mg duloxetine for depression.  However, if our explanations rely on perceptions and not biology, we’re entering the territory of the placebo effect, in which case we’re better off skipping duloxetine (and its side effect profile and high cost), and just using an actual placebo.

Bottom line:  We tend to lock ourselves into what we think we know about the biology of the condition we’re treating, whether pain, depression, schizophrenia, ADHD, or whatever.  When we have medications with multiple indications, we often infer that the medication must work differently in each condition.  Unless the doses are radically different (e.g., doxepin for sleep vs depression), this isn’t necessarily true.  In fact, it may be more parsimonious to say that disorders are more fundamentally alike than they are different, or that our drugs are being used for their placebo effect.

We can now add chronic pain to the long list of conditions responsive to psychoactive drugs.  Perhaps it’s also time to start looking at pain disorders as variants of psychiatric disorders, or treating pain complaints as symptoms of mental disorders.  Cymbalta’s foray into this field may be the first attempt to bridge this gap.

Addendum:  I had started this article before reading the PNAS article on antidepressants and NSAIDs, which I blogged about earlier this week.  If the article’s conclusion (namely, that antidepressants lose their efficacy when given with pain relievers) is correct, this could have implications for Cymbalta’s use in chronic pain.  Since chronic pain patients will most likely be taking regular analgesic medications in addition to Cymbalta, the efficacy of Cymbalta might be diminished.  It will be interesting to see how this plays out.


Antidepressants and “Stress” Revisited

April 13, 2011

If you have even the slightest interest in the biology of depression (or if you’ve spent any time treating depression), you’ve heard about the connection between stress and depressive illness.  There does seem to be a biological—maybe even a causative—link, and in many ways, this seems intuitive:  Stressful situations make us feel sad, hopeless, helpless, etc—many of the features of major depression—and the physiological changes associated with stress probably increase the likelihood that we will, in fact, become clinically depressed.

To cite a specific example, a steroid hormone called cortisol is elevated during stress, and—probably not coincidentally—is also usually elevated in depression.  Some researchers have attempted to treat depression by blocking the effects of cortisol in the brain.  Although we don’t (yet) treat depression this way, it is a tantalizing hypothesis, if for no reason other than the fact that it makes more intuitive sense than the “serotonin hypothesis” of depression, which has little evidence to back it up.

A recent article in Molecular Psychiatry (pdf here) adds another wrinkle to the stress hormone/depression story.  Researchers from King’s College London, led by Christoph Anacker, show that antidepressants actually promote the growth and development of new nerve cells in the hippocampus, and both processes depend on the stress hormone receptor (also known as the glucocorticoid receptor or GR).

Specifically, the group performed their experiments in a cell culture system using human hippocampal progenitor cells (this avoids some of the complications of doing such experiments in animals or humans).  They found that neither sertraline (Zoloft) alone, nor stress steroids (in this case, dexamethasone or DEX) alone, caused cells to proliferate, but when given together, proliferation occurred—in other words, the hippocampal progenitor cells started to divide rapidly.  [see figure above]

Furthermore, when they continued to incubate the cells with Zoloft, the cells “differentiated”—i.e., they turned into cells with all the characteristics of mature nerve cells.  But in this case, differentiation was inhibited by dexamethasone. [see figure at right]

To make matters more complicated, the differentiation process was also inhibited by RU486, a blocker of the receptor for dexamethasone (and other stress hormones).  What’s amazing is that RU486 prevented Zoloft-induced cell differentiation even in the absence of stress hormones.  (However, it did prevent the damaging effects of dexamethasone, consistent with what we might predict.) [see figure at left]

The take-home message here is that antidepressants and dexamethasone (i.e., stress hormones) are required for cell proliferation (first figure), but only antidepressants cause cell differentiation and maturation (second figure).  Furthermore, both processes can be inhibited by RU486, a stress hormone antagonist (third figure).

All in all, this research makes antidepressants look “good.”  (Incidentally, the researchers also got the same results with amitripytline and clomipramine, two tricyclic antidepressants, so the effect is not unique to SSRIs like Zoloft.)  However, it raises serious questions about the relationship between stress hormones and depression.  If antidepressants work by promoting the growth and development of hippocampal neurons, then this research also says that stress hormones (like dexamethasone) might be required, too—at least for part of this process (i.e., they’re required for growth/proliferation, but not for differentiation).

This also raises questions about the effects of RU486.  Readers may recall the enthusiasm surrounding RU486 a few years ago as a potential treatment for psychotic depression, promoted by Alan Schatzberg and his colleagues at Corcept Therapeutics.  Their argument (a convincing one, at the time) was that if we could block the unusually high levels of cortisol seen in severe, psychotic depression, we might treat the disease more effectively.  However, clinical trials of their drug Corlux (= RU486) were unsuccessful.  The experiments in this paper show one possible explanation why:   Instead of simply blocking stress hormones, RU486 blocks the stress hormone receptor, which seems to be the key intermediary for the positive effects of antidepressants (see the third figure).

The Big Picture:   I’m well aware that this is how science progresses:  we continually refine our hypotheses as we collect new data, and sometimes we learn how medications work only after we’ve been using them successfully for many years.  (How long did it take to learn the precise mechanism of acetylsalicylic acid, also known as aspirin?  More than two millennia, at least.)  But here we have a case in which antidepressants seem to work in a fashion that is so different from what we originally thought (incidentally, the word “serotonin” is used only three times in their 13-page article!!).  Moreover, the new mechanism (making new brain cells!!) is quite significant.  And the involvement of stress hormones in this new mechanism doesn’t seem very intuitive or “clean” either.

It makes me wonder (yet again) what the heck these drugs are doing.  I’m not suggesting we call a moratorium on the further use of antidepressants until we learn exactly how they work, but I do suggest that we practice a bit of caution when using them.  At the very least, we need to change our “models” of depression.  Fast.

Overall, I’m glad this research is being done so that we can learn more about the mechanisms of antidepressant action (and develop new, more specific ones… maybe ones that target the glucocorticoid receptor).  In the meantime, we ought to pause and recognize that what we think we’re doing may be entirely wrong.  Practicing a little humility is good every once in a while, especially for a psychopharmacologist.
