The Evidence of the Anecdote

June 8, 2012

The foundation of medical decision-making is “evidence-based medicine.”  As most readers know, this is the effort to apply the best available evidence, gathered through the scientific method, to decisions and recommendations about how to treat individual patients.  “Evidence” is typically rated on four levels (1 to 4).  Level 1 represents high-quality evidence—usually the results of randomized clinical trials—while level 4 typically consists of case studies, uncontrolled observations, and anecdotal reports.

Clinical guidelines and drug approvals typically rely more heavily (or even exclusively) on level-1 evidence.  It is thought to be more valid, more authoritative, and less affected by variations among individuals.  For example, knowing that an antidepressant works (i.e., it produces a “statistically significant” improvement over placebo) in a large, controlled trial is more convincing to the average prescriber than knowing that it worked for a single depressed guy in Peoria.

But is it, really?  Not always (especially if you’re the one treating that depressed guy in Peoria).  Clinical trials can be misleading, even if their results are “significant.”  Some investigators, after analyzing data from large industry-funded clinical trials, have concluded that antidepressants may not be effective at all—a story that has received extensive media coverage.  But lots of individuals insist that they do work, based on personal experience.  One such depression sufferer—who benefited greatly from antidepressants—wrote a recent post on the Atlantic Online, and quoted Peter Kramer: “to give the impression that [antidepressants] are placebos is to cause needless suffering” because many people do benefit from them.  Jonathan Leo, on the other hand, argues that this is a patently anti-scientific stance.  In a post this week on the website Mad In America, Leo points out (correctly) that there are people out there who will give recommendations and anecdotes in support of just about anything.  That doesn’t mean those things actually work.

Both sides make some very good points.  We just need to find a way to reconcile them—i.e., to make the “science” more reflective of real-world cases, and use the wisdom of individual cases to influence our practice in a more scientifically valid way.  This is much easier said than done.

While psychiatrists often refer to the “art” of psychopharmacology, make no mistake:  they (we) take great pride in the fact that it’s supposedly grounded in hard science.  Drug doses, mechanisms, metabolites, serum levels, binding coefficients, polymorphisms, biomarkers, quantitative outcome measures—these are the calling cards of scientific investigation.  But when medications don’t work as planned (which is often), we improvise, and when we do, we quickly enter the world of personal experience and anecdote.  In fact, in the absence of objective disease markers (which we may never find, frankly), psychiatric treatment is built almost exclusively on anecdotes.  When a patient says a drug “worked” in some way that the data don’t support, or reports a side effect that’s not listed in the PDR, that becomes the truth, and it happens far more frequently than we like to admit.

It’s even more apparent in psychotherapy.  When a therapist asks a question like “What went through your mind when that woman rejected you?” the number of possible responses is infinite, unlike a serum lithium level or a blood pressure.  A good therapist follows the patient’s story and individualizes treatment based on the individual case (and only loosely on some theory or therapeutic modality).  The “proof” is the outcome with that particular patient.  Sure, the “N” is only 1, but it’s the only one that counts.

Is there any way to make the science look more like the anecdotal evidence we actually see in practice?  I think not.  Most of us don’t even stop to think about how UN-convincing the “evidence” truly is.  In his book Pharmageddon, David Healy describes the example of the parachute:  no one needs to do a randomized, controlled trial to show that a parachute works.  It just does.  By comparison, to show that antidepressants “work,” drug companies must perform large, expensive trials (and often multiple trials at that) and, even then, must rely on statistical measures or clever trial designs to demonstrate an effect.  Given this complexity, it’s a wonder that we believe clinical trials at all.
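To see why so much statistical machinery is needed, consider a toy simulation (my illustration, not anything from Healy's book), which assumes a drug-placebo difference of about 0.3 standard deviations, roughly the range often cited for antidepressant trials.  That difference is essentially invisible in a handful of patients, yet it reliably reaches "statistical significance" once a few hundred are enrolled, which is exactly why the trials must be so large and so carefully analyzed.

```python
# Toy illustration (an assumption-laden sketch, not data from this post):
# a small average drug-placebo difference only becomes "statistically
# significant" once the trial is large enough.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulated_trial(n_per_arm, effect=0.3):
    """Return the t-test p-value for one simulated two-arm trial.

    Improvement scores are in arbitrary units with SD = 1; 'effect' is the
    assumed drug-placebo difference (~0.3 SD, a commonly cited figure).
    """
    placebo = rng.normal(0.0, 1.0, n_per_arm)
    drug = rng.normal(effect, 1.0, n_per_arm)
    return stats.ttest_ind(drug, placebo).pvalue

for n in (10, 50, 300):
    print(f"N = {n:>3} per arm: p = {simulated_trial(n):.3f}")
```

With ten patients per arm the p-value bounces around and, in most runs, looks unimpressive; with a few hundred per arm it is almost always below 0.05, even though no individual patient's response looks any different.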

On the other side of the coin, there’s really no way to subject the anecdotal report, or case study, to the scientific method.  By definition, including more patients and controls (i.e., increasing the “N”) automatically introduces heterogeneity.  Whatever factor(s) led a particular patient to respond to Paxil “overnight” or to develop a harsh cough on Abilify are probably unique to that individual.

But maybe we can start looking at anecdotes through a scientific lens.  When we observe a particular response or effect, we ought to look not just at the most obvious cause (e.g., a new medication) but at the context in which it occurred, and entertain any and all alternative hypotheses.  Similarly, when planning treatment, we need to think not just about FDA-approved drugs, but also patient expectations, treatment setting, home environment, costs, other comorbidities, the availability of alternative therapies, and other data points or “independent variables.”  To use a crude but common analogy, it is indeed true that every person becomes his or her own laboratory, and should be viewed as such.

The more we look at patients this way, the further we get from clinical trials and the less relevant clinical trials become.  This is unfortunate, because—for better or for worse (I would vote for “worse”)—clinical trials have become the cornerstone of evidence-based psychiatry.  But a re-emphasis on anecdotes and individual cases is important.  Because in the end, it’s the individual who counts.  The individual resembles an N of 1 much more closely than he or she resembles an N of 200, and that’s probably the most important evidence we need to keep in mind.


When “Adherence” Is A Dirty Word

January 16, 2012

Recently, I’ve been spending a lot of time reading the literature on “recovery” from mental illness.  Along the way, I’ve been introduced to the writings of Richard Warner and William Anthony, and peer-leaders in the field like Daniel Fisher and Pat Deegan.  Coincidentally, I also learned recently that my local county mental health system will start training patients and providers in Wellness Recovery Action Planning (“WRAP”), a peer-led illness self-management program which promotes autonomy and recovery.

In the interest of “evidence-based medicine,” the developers of WRAP have performed actual controlled trials of this intervention, comparing it to conventional mental health treatment.  In several studies, they have found that patients engaged in a WRAP program are typically more hopeful, more engaged in their recovery, and—quite surprisingly—have fewer psychiatric symptoms than those who are not.

One such paper was published just last month (pdf here).  The investigators showed that WRAP participants in public clinics throughout Ohio were more engaged in “self-advocacy” than patients who were not involved in WRAP, and that this led to improvements in quality of life and—consistent with their earlier studies—a reduction in psychiatric symptoms.  Their measure of “self-advocacy” was the Patient Self-Advocacy Scale (PSAS), “an instrument designed to measure a person’s propensity to engage in self-activism during health care encounters.”

Throughout the intervention, WRAP patients had a consistently higher PSAS score than others.  But their scores were particularly elevated in one subscale: “Mindful Non-Adherence.”

Non-adherence?  I must confess, I did a double-take.  If my years of training in modern psychiatry have taught me one thing, it is that adherence is a primary (yet elusive) goal in patients with serious mental illness.  In fact, the high rate of non-adherence has become the biggest sales pitch for new long-acting injectable antipsychotics like Invega Sustenna.

And now a paper is showing that non-adherence—i.e., the active refusal of medications or other suggestions from one’s doctor—is a good thing.  Really?

Intrigued, I looked more closely at the PSAS scale.  It was developed in 1999 by Dale Brashers of the communications department at the University of Illinois.  The scale was designed not to be a clinical tool, but rather a measure of how people manage interactions with their health care providers.  The initial studies focused on patients in the HIV-AIDS community (e.g., in organizations like ACT UP) and on health care communication patterns among patients who describe themselves as “activists.”

The PSAS scale includes three dimensions:  illness education, assertiveness, and “potential for mindful non-adherence.”  The first two are fairly self-explanatory.  But the third one is defined as “a tendency to reject treatments” or “a willingness to be nonadherent when treatments fail to meet the patient’s expectations.”  Four questions on the PSAS survey assess this potential, including #10: “Sometimes I think I have a better grasp of what I need than my doctor does” and #12: “I don’t always do what my physician or health care worker has asked me to do.”
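For the curious, here is a minimal sketch of how a subscale like this turns agreement into a number, assuming a simple average of Likert responses; the item numbers other than #10 and #12, and the scoring rule itself, are illustrative assumptions rather than Brashers' published procedure.

```python
# Illustrative only: a simple Likert-average scoring of a "mindful
# non-adherence" subscale.  Item numbers besides #10 and #12, and the
# scoring rule, are assumptions, not the published PSAS procedure.
NONADHERENCE_ITEMS = (9, 10, 11, 12)   # hypothetical subscale items

def nonadherence_score(responses: dict[int, int]) -> float:
    """Average agreement (1 = strongly disagree ... 5 = strongly agree)
    across the subscale items; higher means more willingness to deviate
    from the clinician's recommendations."""
    return sum(responses[i] for i in NONADHERENCE_ITEMS) / len(NONADHERENCE_ITEMS)

# One hypothetical respondent who mostly agrees with items like #10 and #12:
print(nonadherence_score({9: 4, 10: 5, 11: 2, 12: 4}))   # 3.75
```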

In the WRAP study published last month, greater agreement with these questions—i.e., greater willingness to be nonadherent—resulted in a greater PSAS score.  I should point out that in a separate analysis, high non-adherence scores were not associated with better clinical outcomes, but education and assertiveness (and overall PSAS scores) were.  Nevertheless, when data suggest that patients might benefit from the active “defiance” of doctors’ orders, we physicians should take this seriously.

We can start by helping patients make reasoned treatment decisions.  The term “mindful non-adherence” implies that the patient knows something valuable, and that he or she is willing to act on this knowledge, against the wishes of the physician.  Few providers would admit that the patient has greater knowledge than the “expert” clinician.  After all, that’s why most of us engage in psychoeducation: to inform, enable, and empower our patients.

However, maybe the matters on which we “educate” our patients are ultimately irrelevant.  Maybe patients don’t want (or need) to know which parts of their brains are affected in psychosis, ADHD, or OCD, or how dopamine blockade reduces hallucinations; they just want strategies to alleviate their suffering.  The same may hold true for other areas of medicine, too.  As discussed in a recent article in the online Harvard Business Review, serious problems may arise when too much information is unloaded on patients without the guidance of a professional or, better yet, a peer who has “been there.”

Mental health care may provide the perfect arena in which to test the hypothesis that patients, when given enough information, know what’s best for themselves in the long run.  In a field where one’s own experience is really all that matters, maybe a return to patient-centered decision-making—what Pat Deegan calls the “dignity of risk” and the “right to failure”—is necessary.  At the very least, we physicians should get comfortable with the fact that, sometimes, a patient saying “no” may be the best prescription possible.