The Evidence of the Anecdote

June 8, 2012

The foundation of medical decision-making is “evidence-based medicine.”  As most readers know, this is the effort to use the best available evidence, gathered via the scientific method, to make decisions and recommendations about how to treat individual patients.  “Evidence” is typically rated on four levels.  Level 1 represents high-quality evidence—usually the results of randomized clinical trials—while level 4 typically consists of case studies, uncontrolled observations, and anecdotal reports.

Clinical guidelines and drug approvals typically rely more heavily (or even exclusively) on level-1 evidence.  It is thought to be more valid, more authoritative, and less affected by variations among individuals.  For example, knowing that an antidepressant works (i.e., it gives a “statistically significant effect” vs placebo) in a large, controlled trial is more convincing to the average prescriber than knowing that it worked for a single depressed guy in Peoria.

But is it, really?  Not always (especially if you’re the one treating that depressed guy in Peoria).  Clinical trials can be misleading, even if their results are “significant.”  As most readers know, some investigators, after analyzing data from large industry-funded clinical trials, have concluded that antidepressants may not be effective at all—a story that has received extensive media coverage.  But many individuals insist, based on personal experience, that antidepressants do work.  One such depression sufferer—who benefited greatly from antidepressants—wrote a recent post on The Atlantic online and quoted Peter Kramer: “to give the impression that [antidepressants] are placebos is to cause needless suffering” because many people do benefit from them.  Jonathan Leo, on the other hand, argues that this is a patently anti-scientific stance.  In a post this week on the website Mad In America, Leo points out (correctly) that there are people out there who will offer recommendations and anecdotes in support of just about anything.  That doesn’t mean those things work.

Both sides make some very good points.  We just need to find a way to reconcile them—i.e., to make the “science” more reflective of real-world cases, and use the wisdom of individual cases to influence our practice in a more scientifically valid way.  This is much easier said than done.

While psychiatrists often refer to the “art” of psychopharmacology, make no mistake: they (we) take great pride in the fact that it’s supposedly grounded in hard science.  Drug doses, mechanisms, metabolites, serum levels, binding coefficients, polymorphisms, biomarkers, quantitative outcome measures—these are the calling cards of scientific investigation.  But when medications don’t work as planned (which is often), we improvise, and when we do, we quickly enter the world of personal experience and anecdote.  In fact, in the absence of objective disease markers (which we may never find, frankly), psychiatric treatment is built almost exclusively on anecdotes.  When a patient says a drug “worked” in some way that the data don’t support, or reports a side effect that’s not listed in the PDR, that becomes the truth, and it happens far more frequently than we like to admit.

It’s even more apparent in psychotherapy.  When a therapist asks a question like “What went through your mind when that woman rejected you?” the number of possible responses is infinite, unlike a serum lithium level or a blood pressure.  A good therapist follows the patient’s story and individualizes treatment based on the individual case (and only loosely on some theory or therapeutic modality).  The “proof” is the outcome with that particular patient.  Sure, the “N” is only 1, but it’s the only one that counts.

Is there any way to make the science look more like the anecdotal evidence we actually see in practice?  I think not.  Most of us don’t even stop to think about how UN-convincing the “evidence” truly is.  In his book Pharmageddon, David Healy describes the example of the parachute: no one needs to do a randomized, controlled trial to show that a parachute works.  It just does.  By comparison, to show that antidepressants “work,” drug companies must perform large, expensive trials (often multiple trials at that) and, even then, shore up their results with statistical measures or clever trial designs.  Given this complexity, it’s a wonder that we believe clinical trials at all.

On the other side of the coin, there’s really no way to subject the anecdotal report, or case study, to the scientific method.  By definition, including more patients and controls (i.e., increasing the “N”) automatically introduces heterogeneity.  Whatever factor(s) led a particular patient to respond to Paxil “overnight” or to develop a harsh cough on Abilify are probably unique to that individual.
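One way to make this heterogeneity concrete: if responses to a drug differ sharply across individuals, a trial average can look unimpressive even while a sizable minority responds dramatically.  Here is a toy simulation (every number in it is invented for illustration, not drawn from any real trial):

```python
import random

random.seed(42)

# Toy model (purely illustrative numbers): suppose 20% of patients are strong
# responders (mean improvement of 10 points over placebo) and 80% get
# essentially nothing (mean improvement of 0), with individual noise on top.
def simulated_improvement():
    if random.random() < 0.20:
        return random.gauss(10, 3)   # strong responder
    return random.gauss(0, 3)        # non-responder

trial = [simulated_improvement() for _ in range(200)]
mean_effect = sum(trial) / len(trial)

print(f"Trial-average improvement (N=200): {mean_effect:.1f} points")
print(f"Patients improving by >8 points:   {sum(x > 8 for x in trial)}")
# The average looks modest, yet dozens of individuals improved markedly,
# which is exactly the gap between the level-1 evidence and the anecdote.
```

The point is not that antidepressants actually work this way, only that an N of 200 and an N of 1 can honestly tell different stories about the same drug.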

But maybe we can start looking at anecdotes through a scientific lens.  When we observe a particular response or effect, we ought to look not just at the most obvious cause (e.g., a new medication) but at the context in which it occurred, and entertain any and all alternative hypotheses.  Similarly, when planning treatment, we need to think not just about FDA-approved drugs, but also patient expectations, treatment setting, home environment, costs, other comorbidities, the availability of alternative therapies, and other data points or “independent variables.”  To use a crude but common analogy, it is indeed true that every person becomes his or her own laboratory, and should be viewed as such.

The more we look at patients this way, the further we get from clinical trials and the less relevant clinical trials become.  This is unfortunate, because—for better or for worse (I would vote for “worse”)—clinical trials have become the cornerstone of evidence-based psychiatry.  But a re-emphasis on anecdotes and individual cases is important.  Because in the end, it’s the individual who counts.  The individual resembles an N of 1 much more closely than he or she resembles an N of 200, and that’s probably the most important evidence we need to keep in mind.


The Painful Truth of Antidepressants

April 25, 2011

In a study published today, scientists at Rockefeller University report that SSRI antidepressants (like Prozac and Celexa) may lose their efficacy when given with anti-inflammatory drugs like ibuprofen and aspirin.  Considering the high prevalence of depression and the widespread use of both SSRIs and anti-inflammatory medications, this result is bound to receive much attention.  In fact, it’s tantalizing to jump to the conclusion (as the Fox News and WSJ reports on this study have done) that the reason SSRIs may be so ineffective is that so many people with depression also use non-steroidal anti-inflammatory drugs (NSAIDs).

By my read of the data, it may be a bit too early to draw this conclusion.  Nevertheless, the study, by Paul Greengard, Jennifer Warner-Schmidt, and their colleagues, and published online in the Proceedings of the National Academy of Sciences, does propose some interesting mechanisms by which anti-inflammatory agents may affect antidepressant action.

The majority of the work was performed in mice, for which there are valid “models” of depression that are routinely used in preclinical studies.  In past work, Greengard’s group has shown that the expression of a small protein called p11 (which is associated with the localization and function of serotonin receptors) is correlated with “antidepressant-like” responses in mice, and probably in humans, too.  In the present study, they demonstrate that the antidepressants Prozac and Celexa cause an increase in expression of p11 in the frontal cortex of mice, and, moreover, that p11 expression is mediated by the ability of these antidepressants to cause elevations in interferon-gamma (IFN-γ) and tumor necrosis factor-alpha (TNF-α).  In other words, antidepressants enhance neural expression of these cytokines, which, in turn, increases p11 activity.

However, when mice are given NSAIDs (ibuprofen, naproxen, or aspirin) or the analgesic Tylenol, the increases in p11, IFN-γ, and TNF-α are all prevented.  NSAIDs also block the “antidepressant-like” behavioral responses elicited by Celexa (as well as by other antidepressants like Wellbutrin, Parnate, and the TCAs) in mouse models of depression.

The group went one step further and created a p11 “knockout” mouse.  These mice had no response to Celexa, nor did they show antidepressant-like responses to injections of IFN-γ or TNF-α.  However, the p11 knockouts did respond to desipramine, an antidepressant that works mainly on norepinephrine, underscoring the significance of serotonin in the p11-mediated response.

What does this mean for humans?  To answer this question, the group analyzed data from STAR*D, a huge multicenter antidepressant trial.  In the first stage of STAR*D, all patients (a total of approximately 1500 individuals) took Celexa for a 12-week period.  The remission rate for patients who took an NSAID at any time during this 12-week period was only 45%, while those who took no NSAID remitted at a rate of 55%.
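Whether a 45%-versus-55% split is statistically meaningful depends on how those ~1500 patients divided into NSAID users and non-users.  For readers who want to play with the numbers, a standard two-proportion z-test makes the dependence on subgroup size easy to see; the split below is hypothetical, not the actual STAR*D breakdown:

```python
from math import sqrt

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: the two remission rates are equal."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)  # pooled remission rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical split of ~1500 patients (NOT the actual STAR*D subgroup sizes):
# 500 NSAID users remitting at 45%, 1000 non-users remitting at 55%.
z = two_proportion_z(x1=225, n1=500, x2=550, n2=1000)
print(f"z = {z:.2f}")  # |z| > 1.96 would be significant at the 0.05 level
```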

So does this mean that people taking antidepressants should avoid NSAIDs, and just deal with their pain?  Probably not. (In fact, one might ask the opposite question:  should people with chronic pain avoid SSRIs?  Unfortunately, the study did not look at whether SSRIs inhibited the pain-relieving effects of NSAIDs.)

In my opinion, some of the mouse data need to be interpreted carefully.  For instance, the mice received extremely high doses of NSAIDs: ibuprofen at 70 mg/kg/d, which corresponds to 4,200 mg/d for a 60-kg human (21 Advil pills per day), and aspirin at 210 mg/kg/d, or 12,600 mg/d (roughly 39 pills of regular-strength aspirin per day).  Also, in the behavioral studies the mice received NSAIDs for an entire week but only a single injection of Celexa (20 mg/kg, or about 1,200 mg for a human, the equivalent of 60 pills) immediately before the behavioral experiments.
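The pill-count arithmetic above is just linear scaling of the mouse mg/kg dose by body weight (not a formal allometric conversion, which would shrink the human-equivalent dose considerably).  A few lines of code reproduce the numbers, assuming standard tablet strengths of 200 mg for Advil, 325 mg for regular-strength aspirin, and 20 mg for Celexa:

```python
def human_equivalent_mg(dose_mg_per_kg, weight_kg=60):
    """Naive linear scaling of a mouse mg/kg dose to a total human dose."""
    return dose_mg_per_kg * weight_kg

# (drug, mouse dose in mg/kg, tablet strength in mg)
for drug, dose, tablet_mg in [("ibuprofen (Advil)", 70, 200),
                              ("aspirin", 210, 325),
                              ("citalopram (Celexa)", 20, 20)]:
    total = human_equivalent_mg(dose)
    print(f"{drug}: {total:.0f} mg, about {total / tablet_mg:.0f} tablets")
# ibuprofen (Advil): 4200 mg, about 21 tablets
# aspirin: 12600 mg, about 39 tablets
# citalopram (Celexa): 1200 mg, about 60 tablets
```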

The human data, of course, are equally suspect.  Patients in the STAR*D study were counted as “NSAID users” if they described using NSAIDs even once in the first 12 weeks of the study.  It’s hard to see how the use of ibuprofen once or twice in a three-month period might interfere with someone’s daily Celexa.  (Not to mention the fact that the “remission” data from STAR*D have come under some scrutiny themselves – see here and here).  Moreover, as the authors point out, it’s quite likely that patients with more severe forms of depression also had concurrent pain syndromes and used NSAIDs more frequently.  In other words, NSAID use might not attenuate SSRI activity, but may be a sign of depression that is more resistant to SSRIs.

In the end, however, I find the study to be quite provocative.  Certainly the correlation of antidepressant effect with expression of the p11 protein and with TNF-α and IFN-γ activity suggests a novel mechanism of antidepressant action—as well as new markers for antidepressant activity.  Moreover, the potential role of NSAIDs in reducing (or, in some cases, enhancing) antidepressant effects needs to be explored.

But it raises even more unanswered questions.  For one, how do we reconcile the fact that antidepressant effects are associated with increased TNF-α and IFN-γ activity in the brain, while increases in these cytokines in the periphery are thought to cause depression?  Also, how can we explain the fact that other analgesic compounds, such as tramadol and buprenorphine, might actually have an antidepressant effect?  Finally, what does this mean for our treatment of pain symptoms in depression?  Should we avoid SSRIs and use other types of antidepressants instead?  Do NSAIDs inhibit the effects of SNRIs like Cymbalta, which has recently been FDA-approved for the treatment of chronic musculoskeletal pain (and whose users are most certainly also taking medications like NSAIDs)?

It’s great that the interface between mental illness and physical syndromes is receiving some well-deserved attention.  It’s also exciting to see that the neuroscience and pharmacology of depression and pain may overlap in critical ways that influence how we will treat these disorders in the future.  Perhaps this overlap also explains some of our failures up to now.  Future work in this area, and studies like this one, will help us develop more appropriate antidepressant strategies for the “real world.”

[Finally, a “hat tip,” of sorts, to Fox News, which first alerted me to this article.  Unfortunately, the story, written by Dr. Manny Alvarez, was fairly low on substance but high on the “wow” factor.  It drew some broad conclusions and—my biggest pet peeve—did not refer the reader to any site or source to get more detailed information.  Alas, such is the case with much public science and medicine reporting: Alarm first, ask questions later.]