The Evidence of the Anecdote

June 8, 2012

The foundation of medical decision-making is “evidence-based medicine.”  As most readers know, this is the effort to use the best available evidence (using the scientific method) to make decisions and recommendations about how to treat individual patients.  “Evidence” is typically rated on four levels (1 to 4).  Level 1 represents high-quality evidence—usually the results of randomized clinical trials—while level 4 typically consists of case studies, uncontrolled observations, and anecdotal reports.

Clinical guidelines and drug approvals typically rely more heavily (or even exclusively) on level-1 evidence.  It is thought to be more valid, more authoritative, and less affected by variations among individuals.  For example, knowing that an antidepressant works (i.e., it gives a “statistically significant effect” vs placebo) in a large, controlled trial is more convincing to the average prescriber than knowing that it worked for a single depressed guy in Peoria.

But is it, really?  Not always (especially if you’re the one treating that depressed guy in Peoria).  Clinical trials can be misleading, even if their results are “significant.”  As most readers know, some investigators, after analyzing data from large industry-funded clinical trials, have concluded that antidepressants may not be effective at all—a story that has received extensive media coverage.  But lots of individuals insist that they do work, based on personal experience.  One such depression sufferer—who benefited greatly from antidepressants—wrote a recent post on the Atlantic Online, and quoted Peter Kramer: “to give the impression that [antidepressants] are placebos is to cause needless suffering” because many people do benefit from them.  Jonathan Leo, on the other hand, argues that this is a patently anti-scientific stance.  In a post this week on the website Mad In America, Leo points out (correctly) that there are people out there who will give recommendations and anecdotes in support of just about anything.  That doesn’t mean they work.

Both sides make some very good points.  We just need to find a way to reconcile them—i.e., to make the “science” more reflective of real-world cases, and use the wisdom of individual cases to influence our practice in a more scientifically valid way.  This is much easier said than done.

While psychiatrists often refer to the “art” of psychopharmacology, make no mistake:  they (we) take great pride in the fact that it’s supposedly grounded in hard science.  Drug doses, mechanisms, metabolites, serum levels, binding coefficients, polymorphisms, biomarkers, quantitative outcome measures—these are the calling cards of scientific investigation.  But when medications don’t work as planned (which is often), we improvise, and when we do, we quickly enter the world of personal experience and anecdote.  In fact, in the absence of objective disease markers (which we may never find, frankly), psychiatric treatment is built almost exclusively on anecdotes.  When a patient says a drug “worked” in some way that the data don’t support, or they experience a side effect that’s not listed in the PDR, that becomes the truth, and it happens far more frequently than we like to admit.

It’s even more apparent in psychotherapy.  When a therapist asks a question like “What went through your mind when that woman rejected you?” the number of possible responses is infinite, unlike a serum lithium level or a blood pressure.  A good therapist follows the patient’s story and individualizes treatment based on the individual case (and only loosely on some theory or therapeutic modality).  The “proof” is the outcome with that particular patient.  Sure, the “N” is only 1, but it’s the only one that counts.

Is there any way to make the science look more like the anecdotal evidence we actually see in practice?  I think not.  Most of us don’t even stop to think about how UN-convincing the “evidence” truly is.  In his book Pharmageddon, David Healy describes the example of the parachute:  no one needs to do a randomized, controlled trial to show that a parachute works.  It just does.  By comparison, to show that antidepressants “work,” drug companies must perform large, expensive trials (and often multiple trials at that), and even then must prove their results through statistical measures or clever trial designs.  Given this complexity, it’s a wonder that we believe clinical trials at all.

On the other side of the coin, there’s really no way to subject the anecdotal report, or case study, to the scientific method.  By definition, including more patients and controls (i.e., increasing the “N”) automatically introduces heterogeneity.  Whatever factor(s) led a particular patient to respond to Paxil “overnight” or to develop a harsh cough on Abilify are probably unique to that individual.

But maybe we can start looking at anecdotes through a scientific lens.  When we observe a particular response or effect, we ought to look not just at the most obvious cause (e.g., a new medication) but at the context in which it occurred, and entertain any and all alternative hypotheses.  Similarly, when planning treatment, we need to think not just about FDA-approved drugs, but also patient expectations, treatment setting, home environment, costs, other comorbidities, the availability of alternative therapies, and other data points or “independent variables.”  To use a crude but common analogy, it is indeed true that every person becomes his or her own laboratory, and should be viewed as such.

The more we look at patients this way, the further we get from clinical trials and the less relevant clinical trials become.  This is unfortunate, because—for better or for worse (I would vote for “worse”)—clinical trials have become the cornerstone of evidence-based psychiatry.  But a re-emphasis on anecdotes and individual cases is important.  Because in the end, it’s the individual who counts.  The individual resembles an N of 1 much more closely than he or she resembles an N of 200, and that’s probably the most important evidence we need to keep in mind.


“Patient-Centered” Care and the Science of Psychiatry

May 30, 2012

When asked what makes for good patient care in medicine, a typical answer is that it should be “patient-centered.”  Sure, “evidence-based medicine” and expert clinical guidelines are helpful, but they only serve as the scientific foundation upon which we base our individualized treatment decisions.  What’s more important is how a disorder manifests in the patient and the treatments he or she is most likely to respond to (based on genetics, family history, biomarkers, etc.).  In psychiatry, there’s the additional need to target treatment to the patient’s unique situation and context—always founded upon our scientific understanding of mental illness.

It’s almost a cliché to say that “no two people with depression [or bipolar or schizophrenia or whatever] are the same.”  But when the “same” disorder manifests differently in different people, isn’t it also possible that the disorders themselves are different?  Not only does such a question have implications for how we treat each individual, it also impacts how we interpret the “evidence,” how we use treatment guidelines, and what our diagnoses mean in the first place.

For starters, every patient wants something different.  What he or she gets is usually what the clinician wants, which, in turn, is determined by the diagnosis and established treatment guidelines:  lifelong medication treatment, referral for therapy, forced inpatient hospitalization, etc.  Obviously, our ultimate goal is to eliminate suffering by relieving one’s symptoms, but shouldn’t the route we take to get there reflect the patient’s desires?  When a patient gets what he or she wants, shouldn’t this count as good patient care, regardless of what the guidelines say?

For instance, some patients just want a quick fix (e.g., a pill, ideally without frequent office visits), because they have only a limited amount of money (or time) they’re willing to use for treatment.  Some patients need to complete “treatment” to satisfy a judge, an employer, or a family member.  Some patients visit the office simply to get a disability form filled out or satisfy some other social-service need.  Some simply want a place to vent, or to hear from a trusted professional that they’re “okay.”  Still others seek intensive, long-term therapy even when it’s not medically justified.  Patients request all sorts of things, which often differ from what the guidelines say they need.

Sometimes these requests are entirely reasonable, cost-effective, and practical.  But we psychiatrists often feel a need to practice evidence- (i.e., science-) based medicine; thus, we take treatment guidelines (and diagnoses) and try to make them apply to our patients, even when we know they want—or need—something else entirely, or won’t be able to follow through on our recommendations.  We prescribe medications even though we know the patient won’t be able to obtain the necessary lab monitoring; we refer a patient for intensive therapy even though we know their insurance will cover only a handful of visits; we admit a suicidal patient to a locked inpatient ward even though we know the unpredictability of that environment may cause further distress; or we advise a child with ADHD and his family to undergo long-term behavioral therapy in conjunction with stimulants, when we know this resource may be unavailable.

Guidelines and diagnoses are written by committee, and, as such, rarely apply to the specifics of any individual patient.  Thus, a good clinician uses a clinical guideline simply as a tool—a reference point—to provide a foundation for an individual’s care, just as a master chef knows a basic recipe but alters it according to the tastes he wishes to bring out or the ingredients in season.  A good clinician works outside the available guidelines for many practical reasons, not the least of which is the patient’s own belief system—what he or she thinks is wrong and how to fix it.  The same could be said for diagnoses themselves.  In truth, what’s written in the DSM is a model—a “case study,” if you will—against which real-world patients are observed and compared.  No patient ever fits a single diagnosis to a “T.”

Unfortunately, under the pressures of limited time, scarce resources, and the threat of legal action for a poor outcome, clinicians are more inclined to see patients for what they are than for who they are, and therefore adhere to guidelines even more closely than they’d like.  This corrupts treatment in many ways.  Diagnoses are given out which don’t fit (e.g., “parity” diagnoses must be given in order to maintain reimbursement).  Treatment recommendations are made which are far too costly or complex for some patients to follow.  Services like disability benefits are maintained far beyond the period they’re needed (because diagnoses “stick”).  And tremendous resources are devoted to the ongoing treatment of patients who simply want (and would benefit from) only sporadic check-ins, or who, conversely, can afford ongoing care themselves.

The entire situation calls into question the value of treatment guidelines, as well as the validity of psychiatric diagnoses.  Our patients’ unique characteristics, needs, and preferences—i.e., what helps patients to become “well”—vary far more widely than the symptoms upon which official treatment guidelines were developed.  Similarly, what motivates a person to seek treatment differs so widely from person to person as to imply vastly different etiologies.

To provide optimal care to a patient, care must indeed be “patient-centered.”  But truly patient-centered care must frequently look beyond the DSM and established treatment guidelines, and sometimes ignore them altogether.  What does this say about the validity, relevance, and applicability of the diagnoses and guidelines at our disposal?  And what does this say about psychiatry as a science?


How Not To Be A Difficult Patient

June 5, 2011

One of the more interesting posters at last month’s American Psychiatric Association Annual Meeting was presented by Ricardo Salazar of UT San Antonio and the South Texas Psychiatric Practice-Based Research Network (PBRN).  The topic was “the Difficult Patient in Psychiatric Practice” and it surveyed psychiatrists about which patients they considered “difficult” and why.

It might sound somewhat disrespectful (and maybe a little naïve) to label a patient as “difficult.”  However, doctors are people too, and it would be even more naïve to think that doctors don’t have their own reactions to (and opinions of) the patients they treat—something referred to in psychoanalytic theory as “countertransference.”  Let’s face it:  doctors simply don’t like dealing with some patients.  (That’s why some choose private practice, to cherry-pick those whom they do like.)

Nevertheless, I think this topic needs more attention, particularly in today’s environment.  Much of what we do in mental health (both psychopharmacologically and in therapy) has a questionable evidence base, and yet the experience of clinicians and of patients is that our interventions frequently work.  I maintain that clinical benefit often results more from the interpersonal relationship between a patient and a doctor who listens and seems to understand, than from the pill that a doctor prescribes or the specific protocol that a therapist follows.  (This is yet another reason why quick-throughput psychiatry, dictated by brain scans, blood tests, and checklists, is bound to fail for most patients.)

Anyway, Dr Salazar’s study used a scale called the “Difficult Doctor-Patient Relationship Questionnaire (DDPRQ-10),” developed by Steven Hahn and colleagues in 1994.  I had not heard of this scale before.  Here are some sample questions:

1.  How much are you looking forward to this patient’s next visit after today?
3.  How manipulative is this patient?
4.  To what extent are you frustrated by this patient’s vague complaints?
6.  Do you find yourself secretly hoping this patient will not return?
8.  How time-consuming is caring for this patient?

As a patient, I might find some of these questions mildly offensive (“does my doctor secretly hope I won’t return??”), but as a doctor I must admit that some days I look at my schedule and see a name that makes me dread that hour.  (If you’re a doctor and you’re reading this and you do not agree, you’re either fooling yourself, you’re perfect, or you’re IBM’s Watson.)  Recognizing those feelings, however, often helps me to prepare for the session—and examine my own biases and faults—and such appointments often turn out to be the most satisfying (at least for the patient).

Salazar’s study showed that, on average, psychiatrists considered approximately 15% of their patients to be “difficult.”  The most common diagnoses among the “difficult” patients were schizophrenia (32%), bipolar disorder (19%), cognitive disorder (24%), and personality disorder (32%).  Patients with depression (11%) or anxiety (9%)—and, interestingly, patients who were in psychotherapy (11%)—were considered less difficult.  Not surprisingly, patients with alcohol and substance use disorders were also labeled difficult (23%), but patients with somatization (defined in this study as “unexplained physical complaints”) were less so (10%).

A fascinating review of 94 studies published between 1979 and 2004 described four reasons why patients may be considered “difficult”:  (1) chronicity, i.e., patients fail to improve over time; (2) severe, unmet dependency needs which patients then project onto the caregiver; (3) severe character pathology (especially borderline, narcissistic, and paranoid types); and (4) an inability to “reflect” (which the authors attribute to a history of insecure attachment early in life).  The authors also described three types of difficult patients:  the “unwilling care avoider,” who doesn’t see himself as sick; the “ambivalent care seeker,” who is often demanding and dependent, but is frequently self-destructive and self-sabotaging; and the “demanding care claimer,” who is aggressive, attention-seeking, and manipulative, but who sees himself as a patient only when necessary to achieve his own goals (legal, financial, or otherwise).

Of course, every patient interaction is a two-way street.  Regarding psychiatrists, the Salazar study found that young (<40 yrs old) psychiatrists, and those working in a group practice, claimed to have more difficult patients.  Another large study published in 2006 examined 1391 physicians to identify which features of doctors underlie their perceptions of patients as “frustrating.”  They found that high frustration was associated with doctors who were younger (<40 yrs old), worked >55 hrs/week, had symptoms of depression, stress, or anxiety (yes, that’s in the doctor, not the patient), and had “a greater number of patients with psychosocial problems or substance abuse.”  Two-way street, indeed.

It’s commonly said that “there’s no such thing as a stupid question.”  By the same token, I would posit that there’s no such thing as a difficult patient.  To be sure, some patients present with difficult problems, challenging histories, poor interpersonal skills, and needs that simply can’t be met with the interventions available to the physician.  But every patient suffers in his or her own way.  Doctors bring their own baggage to the interaction, too, in the form of strong opinions, personal biases, lack of knowledge, or—conversely—the perception that we know what’s going on, when in reality we do not.

When you add in the extrinsic factors that make the interaction even more strained—shorter appointments, care that is dictated by some third party rather than the doctor or the patient, poorly designed electronic medical record systems, or financial conflicts of interest that violate the doctor-patient trust—the “difficulties” just keep piling up.

It’s important that we look at every aspect of the doctor-patient interaction in order to improve the quality and efficacy of the care we provide.  Patients should not need to worry about whether they’re perceived as “difficult” or “frustrating.”  And when these perceptions do exist, we must critically examine the impact they have on patients’ care, and what it says about the professionals we call upon to treat them.


Getting Inside The Patient’s Mind

March 4, 2011

As a profession, medicine concerns itself with the treatment of individual human beings, but primarily through a scientific or “objective” lens.  What really counts is not so much a person’s feelings or attitudes (although we try to pay attention to the patient’s subjective experience), but instead the pathology that contributes to those feelings or that experience: the malignant lesion, the abnormal lab value, the broken bone, or the infected tissue.

In psychiatry, despite the impressive inroads of biology, pharmacology, and molecular genetics into our field—and despite the bold predictions that accurate molecular diagnosis is right around the corner—the reverse is true, at least from the patient’s perspective.  Patients (generally) don’t care about which molecules are responsible for their depression or anxiety; they do know that they’re depressed or anxious and want help.  Psychiatry is getting ever closer to ignoring this essential reality.

Lately I’ve come across a few great reminders of this principle.  My colleagues over at Shrink Rap recently posted an article about working with patients who are struggling with problems that resemble those the psychiatrist once experienced.  Indeed, a debate exists within the field as to whether providers should divulge details of their own personal experiences, or whether they should remain detached and objective.  Many psychiatrists see themselves in the latter group, simply offering themselves as a sounding board for the patient’s words and restricting their involvement to medications or other therapeutic interventions that have been planned and agreed to in advance.  This may, however, prevent them from sharing information that could be vital in helping the patient make great progress.

A few weeks ago a friend sent me a link to this video produced by the Janssen pharmaceutical company (makers of Risperdal and Invega, two atypical antipsychotic medications).

The video purports to simulate the experience of a person experiencing psychotic symptoms.  While I can’t attest to its accuracy, it certainly is consistent with written accounts of psychotic experiences, and is (reassuringly!) compatible with what we screen for in the evaluation of a psychotic patient.  Almost like reading a narrative of someone with mental illness (like Andrew Solomon’s Noonday Demon, William Styron’s Darkness Visible, or An Unquiet Mind by Kay Redfield Jamison), videos and vignettes like this one may help psychiatrists to understand more deeply the personal aspect of what we treat.

I also stumbled upon an editorial in the January 2011 issue of Schizophrenia Bulletin by John Strauss, a Yale psychiatrist, entitled “Subjectivity and Severe Psychiatric Disorders.” In it, he argues that in order to practice psychiatry as a “human science” we must pay as much attention to a patient’s subjective experience as we do to the symptoms they report or the signs we observe.  But he also points out that our research tools and our descriptors—the terms we use to describe the dimensions of a person’s disease state—fail to do this.

Strauss argues that, as difficult as it sounds, we must divorce ourselves from the objective scientific tradition that we value so highly, and employ different approaches to understand and experience the subjective phenomena that our patients encounter—essentially to develop a “second kind of knowledge” (the first being the textbook knowledge that all doctors obtain through their training) that is immensely valuable in understanding a patient’s suffering.  He encourages role-playing, journaling, and other experiential tools to help physicians relate to the qualia of a patient’s suffering.

It’s hard to quantify subjective experiences for purposes of insurance billing, or for standardized outcomes measurements like surveys or questionnaires, or for large clinical trials of new pharmaceutical agents.  And because these constitute the reality of today’s medical practice, it is hard for physicians to direct their attention to the subjective experience of patients.  Nevertheless, physicians—and particularly psychiatrists—should remind themselves every so often that they’re dealing with people, not diseases or symptoms, and should challenge themselves to know what that actually means.

By the same token, patients have a right to know that their thoughts and feelings are not just heard, but understood, by their providers.  While the degree of understanding will (obviously) not be precise, patients may truly benefit from a clinician who “knows” more than meets the eye.

