The Evidence of the Anecdote

June 8, 2012

The foundation of medical decision-making is “evidence-based medicine.”  As most readers know, this is the effort to use the best available evidence (using the scientific method) to make decisions and recommendations about how to treat individual patients.  “Evidence” is typically rated on four levels (1 to 4).  Level 1 represents high-quality evidence—usually the results of randomized clinical trials—while level 4 typically consists of case studies, uncontrolled observations, and anecdotal reports.

Clinical guidelines and drug approvals typically rely more heavily (or even exclusively) on level-1 evidence.  It is thought to be more valid, more authoritative, and less affected by variations among individuals.  For example, knowing that an antidepressant works (i.e., it gives a “statistically significant effect” vs placebo) in a large, controlled trial is more convincing to the average prescriber than knowing that it worked for a single depressed guy in Peoria.
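To put some numbers on what a “statistically significant effect vs placebo” does and doesn’t promise, here is a minimal simulation (invented effect sizes, not data from any real trial): with a large enough N, a small average drug–placebo difference reliably clears p < 0.05, even though an individual patient in the drug arm is barely more likely to do well than one in the placebo arm.

```python
# Toy simulation (invented numbers, not any real trial): a small average
# drug-placebo difference becomes "statistically significant" at large N,
# even though the two arms overlap almost completely.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N = 300                    # patients per arm -- a "large" trial
effect = 0.25              # small standardized effect size (Cohen's d)

placebo = rng.normal(0.0, 1.0, N)  # symptom improvement, placebo arm
drug = rng.normal(effect, 1.0, N)  # symptom improvement, drug arm

t, p = stats.ttest_ind(drug, placebo)

# Probability that a randomly chosen drug-arm patient improves more than a
# randomly chosen placebo-arm patient (theory says ~0.57 for d = 0.25).
superiority = (drug[:, None] > placebo[None, :]).mean()

print(f"p-value: {p:.4f}")          # usually < 0.05 at this sample size
print(f"P(drug patient beats placebo patient): {superiority:.2f}")
```

A “significant” trial, in other words, can coexist with near-total overlap between the arms; the group result says very little about any one patient.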

But is it, really?  Not always (especially if you’re the one treating that depressed guy in Peoria).  Clinical trials can be misleading, even if their results are “significant.”  Some investigators, after analyzing data from large industry-funded clinical trials, have concluded that antidepressants may not be effective at all—a story that has received extensive media coverage.  But many individuals insist that the drugs do work, based on personal experience.  One such depression sufferer—who benefited greatly from antidepressants—recently wrote a post on the Atlantic Online, quoting Peter Kramer: “to give the impression that [antidepressants] are placebos is to cause needless suffering” because many people do benefit from them.  Jonathan Leo, on the other hand, argues that this is a patently anti-scientific stance.  In a post this week on the website Mad In America, Leo points out (correctly) that there are people out there who will offer recommendations and anecdotes in support of just about anything.  That doesn’t mean those treatments work.

Both sides make some very good points.  We just need to find a way to reconcile them—i.e., to make the “science” more reflective of real-world cases, and use the wisdom of individual cases to influence our practice in a more scientifically valid way.  This is much easier said than done.

While psychiatrists often refer to the “art” of psychopharmacology, make no mistake:  they (we) take great pride in the fact that it’s supposedly grounded in hard science.  Drug doses, mechanisms, metabolites, serum levels, binding coefficients, polymorphisms, biomarkers, quantitative outcome measures—these are the calling cards of scientific investigation.  But when medications don’t work as planned (which is often), we improvise, and when we do, we quickly enter the world of personal experience and anecdote.  In fact, in the absence of objective disease markers (which we may never find, frankly), psychiatric treatment is built almost exclusively on anecdotes.  When a patient says a drug “worked” in some way that the data don’t support, or they experience a side effect that’s not listed in the PDR, that becomes the truth, and it happens far more frequently than we like to admit.

It’s even more apparent in psychotherapy.  When a therapist asks a question like “What went through your mind when that woman rejected you?” the number of possible responses is infinite, unlike a serum lithium level or a blood pressure.  A good therapist follows the patient’s story and tailors treatment to the individual case (guided only loosely by some theory or therapeutic modality).  The “proof” is the outcome with that particular patient.  Sure, the “N” is only 1, but it’s the only one that counts.

Is there any way to make the science look more like the anecdotal evidence we actually see in practice?  I think not.  Most of us don’t even stop to think about how UN-convincing the “evidence” truly is.  In his book Pharmageddon, David Healy describes the example of the parachute:  no one needs to do a randomized, controlled trial to show that a parachute works.  It just does.   By comparison, to show that antidepressants “work,” drug companies must perform large, expensive trials (and often multiple trials at that) and even then, prove their results through statistical measures or clever trial designs.  Given this complexity, it’s a wonder that we believe clinical trials at all.

On the other side of the coin, there’s really no way to subject the anecdotal report, or case study, to the scientific method.  By definition, including more patients and controls (i.e., increasing the “N”) automatically introduces heterogeneity.  Whatever factor(s) led a particular patient to respond to Paxil “overnight” or to develop a harsh cough on Abilify are probably unique to that individual.
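The same point can be made from the other direction with another sketch (again, invented numbers): once patients are pooled, whatever hidden factor made one subgroup respond vanishes into the average, and the “average patient” describes nobody.

```python
# Toy sketch (invented numbers): the trial "average" can describe no actual
# patient. Suppose a hidden, patient-specific factor makes 25% of patients
# respond strongly and the other 75% not at all.
import numpy as np

rng = np.random.default_rng(1)
N = 1000
responder = rng.random(N) < 0.25   # the hidden factor, invisible to the trial
improvement = np.where(responder, 1.5, 0.0) + rng.normal(0, 0.3, N)

print(f"pooled mean improvement: {improvement.mean():.2f}")              # ~0.38
print(f"responders' mean:        {improvement[responder].mean():.2f}")   # ~1.5
print(f"non-responders' mean:    {improvement[~responder].mean():.2f}")  # ~0.0
# The pooled mean matches neither subgroup; increasing N just averages the
# heterogeneity away instead of explaining it.
```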

But maybe we can start looking at anecdotes through a scientific lens.  When we observe a particular response or effect, we ought to look not just at the most obvious cause (e.g., a new medication) but at the context in which it occurred, and entertain any and all alternative hypotheses.  Similarly, when planning treatment, we need to think not just about FDA-approved drugs, but also patient expectations, treatment setting, home environment, costs, other comorbidities, the availability of alternative therapies, and other data points or “independent variables.”  To use a crude but common analogy, it is indeed true that every person becomes his or her own laboratory, and should be viewed as such.

The more we look at patients this way, the further we get from clinical trials and the less relevant clinical trials become.  This is unfortunate, because—for better or for worse (I would vote for “worse”)—clinical trials have become the cornerstone of evidence-based psychiatry.  But a re-emphasis on anecdotes and individual cases is important.  Because in the end, it’s the individual who counts.  The individual resembles an N of 1 much more closely than he or she resembles an N of 200, and that’s probably the most important evidence we need to keep in mind.


Do What You’re Taught

February 5, 2012

In my mail yesterday was an invitation to an upcoming 6-hour seminar on the topic of “Trauma, Addiction, and Grief.”  The course description included topics such as “models of addiction and trauma/information processing” and using these models to plan treatment; recognizing “masked grief reactions” and manifestations of trauma in clients; and applying several psychotherapeutic techniques to help a patient through addiction and trauma recovery.

Sound relevant?  To any psychiatrist dealing with issues of addiction, trauma, grief, anxiety, and mood—which is pretty much all of us—and interested in integrative treatments for the above, this would seem to be an entirely valid topic to learn.  And, I was pleased to learn that the program offers “continuing education” credit, too.

But upon reading the fine print, credit is not available for psychiatrists.  Instead, you can get credit if you’re one of the following mental health workers: counselor, social worker, MFT, psychologist, addiction counselor, alcoholism & drug abuse counselor, chaplain/clergy, nurse, nurse practitioner, nurse specialist, or someone seeking “certification in thanatology” (whatever that is).  But not a psychiatrist.  In other words, psychiatrists need not apply.

Well, okay, that’s not entirely correct: psychiatrists can certainly attend, and, particularly if the program is a good one, my guess is that they would benefit from it.  They just won’t get credit for it.

It’s not the first time I’ve encountered this.  Why do I think this is a big deal?  Well, in all of medicine, “continuing medical education” credit, or CME, is a rough guide to what’s important in one’s specialty.  In psychiatry, the vast majority of available CME credit is in psychopharmacology.  (As it turns out, in the same batch of mail, I received two “throwaway” journals which contained offers of free CME credits for reading articles about treating metabolic syndrome in patients on antipsychotics, and managing sexual side effects of antidepressants.)  Some of the most popular upcoming CME events are the Harvard Psychopharmacology Master Class and the annual Nevada Psychopharmacology Update.  And, of course, the NEI Global Congress in October is a can’t-miss event.  Far more psychiatrists will attend these conferences than a day-long seminar on “trauma, addiction, and grief.”  But which will have the most beneficial impact on patients?

To me, a more important question is, which will have the most beneficial impact on the future of the psychiatrist?  H. Steven Moffic, MD, recently wrote an editorial in Psychiatric Times in which he complained openly that the classical “territory” of the psychiatrist—diagnosis of mental disorders, psychotherapy, and psychopharmacology—has been increasingly ceded to others.  Well, this is a perfect example: a seminar whose content is probably applicable to most psychiatric patients, being marketed primarily to non-psychiatrists.

I’ve always maintained—on this blog and in my professional life—that psychiatrists should be just as concerned (if not more so) with the psychological, cultural, and social aspects of their patients’ experience as with their psychopharmacological management.  That’s also just good common sense, especially when viewed from the patient’s perspective.  But if psychiatrists (and our leadership) don’t advocate for the importance of this type of experience, then of course others will do this work instead of us.  We’re making ourselves irrelevant.

I’m currently experiencing this irony in my own personal life.  I’m studying for the American Board of Psychiatry and Neurology certification exam (the “psychiatry boards”), while looking for a new job at the same time.  On the one hand, while studying for the test I’m being forced to refresh my knowledge of human development, the history of psychiatry, the theory and practice of psychotherapy, the cognitive and psychological foundations of axis I disorders, theories of personality, and many other topics.  That’s the “core” subject matter of psychiatry, which is (appropriately) what I’ll be tested on.  Simultaneously, however, the majority of the jobs I’m finding require none of that.  I feel like I’m being hired instead for my prescription pad.

Psychiatry, as the study of human experience and the treatment of a vast range of human suffering, can still be a fascinating field, and one that can offer so much more to patients.  But to be a psychiatrist in this classic sense of the word, it seems, one increasingly has to blaze an independent trail: obtain one’s own specialized training, recruit patients outside of conventional channels, and—unless one wishes to live on a relatively miserly income—charge cash.  And because no one seriously promotes this version of psychiatry, such a psychiatrist is rapidly becoming an endangered species.

Maybe I’ll get lucky and my profession’s leadership will advocate more for psychiatrists to be better trained in (and better paid for) psychotherapy, or, at the very least, encourage educators and continuing education providers to emphasize this aspect of our training as equally relevant.  But as long as rank-and-file psychiatrists sit back and accept that our primary responsibility is to diagnose and medicate, and rabidly defend that turf at the expense of all else, then perhaps we deserve the fate that we’re creating for ourselves.


Psychopharm R&D Cutbacks II: A Response to Stahl

August 28, 2011

A lively discussion has emerged on the NEI Global blog and on Daniel Carlat’s psychiatry blog about a recent post by Stephen Stahl, NEI chairman, pop(ular) psychiatrist, and promoter of psychopharmaceuticals.  The post pertains to the exodus of pharmaceutical companies from neuroscience research (something I’ve blogged about too), and the changing face of psychiatry in the process.

Dr Stahl’s post is subtitled “Be Careful What You Ask For… You Just Might Get It” and, as one might imagine, it reads as a scathing (some might say “ranting”) reaction against several of psychiatry’s detractors: the “anti-psychiatry” crowd, the recent rules restricting pharmaceutical marketing to doctors, and those who complain about Big Pharma funding medical education.  He singles out Dr Carlat, in particular, as an antipsychiatrist, implying that Carlat believes mental illnesses are inventions of the drug industry, medications are “diabolical,” and drugs exist solely to enrich pharmaceutical companies.  [Not quite Carlat’s point of view, as a careful reading of his book, his psychopharmacology newsletter, and, yes, his blog, would prove.]

While I do not profess to have the credentials of Stahl or Carlat, I have expressed my own opinions on this matter on my blog, and I wanted to offer my take on the NEI post as well.

With respect to Dr Stahl (and I do respect him immensely), I think he must re-evaluate his influence on our profession.  It is huge, and not always in a productive way.  Case in point: for the last two months I have worked in a teaching hospital, and I can say that Stahl is seen as something of a psychiatry “god.”  He has an enormous wealth of knowledge, his writing is clear and persuasive, and the materials produced by NEI present difficult concepts in a clear way.  Stahl’s books are directly quoted—unflinchingly—by students, residents, and faculty.

But there’s the rub.  Stahl has done such a good job of presenting his (i.e., the psychopharmacology industry’s) view of things that it is rarely challenged or questioned.  The “pathways” he suggests for depression, anxiety, psychosis, cognition, insomnia, obsessions, drug addiction, medication side effects—basically everything we treat in psychiatry—are accompanied by theoretical models for how some new pharmacological agent might (or will) affect these pathways, when in fact the underlying premises or the proposed drug mechanisms—or both—may be entirely wrong.  (BTW, this is not a criticism of Stahl; it is simply a statement of fact: psychiatry as a neuroscience is decidedly still in its infancy.)

Combine Stahl’s talent with his extensive relationships with drug companies, and you have a potentially dangerous mix.  To cite just two examples, Stahl has written articles (in widely distributed “throwaway” journals) making compelling arguments for the use of low-dose doxepin (Silenor) and L-methylfolate (Deplin) in insomnia and depression, respectively, when the actual data suggest that their generic (or OTC) equivalents are just as effective.  Many similar Stahl productions are included as references or handouts in drug companies’ promotional materials or websites.

How can this be “dangerous”?  Isn’t Stahl just making hypotheses and letting doctors decide what to do with them?  Well, not really.  In my experience, if Stahl says something, it’s no longer a hypothesis; it becomes the truth.

I can’t tell you how many times a student (or even a professor of mine) has explained to me “Well, Stahl says drug A works this way, so it will probably work for symptom B in patient C.”  Unfortunately, we don’t have the follow-up discussion when drug A doesn’t treat symptom B; or patient C experiences some unexpected side effect (which was not predicted by Stahl’s model); or the patient improves in some way potentially unrelated to the medication.  And when we don’t get the outcome we want, we invoke yet another Stahl pathway to explain it, or to justify the addition of another agent.  And so on and so on, until something “works.”  Hey, a broken clock is still correct twice a day.

I don’t begrudge Stahl for writing his articles and books; they’re very well written, and the colorful pictures are fun to look at.  They make psychiatry almost as easy as painting by numbers.  I also (unlike Carlat) don’t get annoyed when doctors do speaking gigs to promote new drugs.  (When these paid speakers are also responsible for teaching students in an academic setting, however, that’s another issue.)  Furthermore, I accept the fact that drug companies will try to increase their profits by expanding market share and promoting their drugs aggressively to me (after all, they’re companies—what do we expect them to do??), or by showing “good will” by underwriting CME, as long as it’s independently confirmed to be without bias.

The problem, however, is that doctors often don’t ask for the data.  We don’t ask whether Steve Stahl’s models might be wrong (or biased).  We don’t look closely at what we’re presented (either in a CME lesson or by a drug rep) to see whether it’s free from commercial influence.  And, perhaps most distressingly, we don’t listen enough to our patients to determine whether our medications actually do what Stahl tells us they’ll do.

Furthermore, our ignorance is reinforced by a diagnostic tool (the DSM) which requires us to pigeonhole patients into a small number of diagnoses that may have no biological validity; a reimbursement system that encourages a knee-jerk treatment (usually a drug) for each such diagnosis; an FDA approval process that gives the illusion that diagnoses are homogeneous and that all patients will respond the same way; and only the most basic understanding of what causes mental illness.  It creates the perfect opportunity for an authority like Stahl to come in and tell us what we need to know.  (No wonder he’s a consultant for so many pharmaceutical companies.)

As Stahl writes, the departure of Big Pharma from neuroscience research is unfortunate, as our existing medications are FAR from perfect (despite Stahl’s texts making them sound pretty darn effective).  However, this “breather” might allow us to pay more attention to our patients and think about what else—besides drugs—we can use to nurse them back to health.  Moreover, refocusing our research efforts on the underlying psychology and biology of mental illness (i.e., research untainted by the need to show a clinical drug response or to get FDA approval) might open new avenues for future drug development.

Stahl might be right that the anti-pharma pendulum has swung too far, but that doesn’t mean we can’t use this opportunity to make great strides forward in patient care.  The paychecks of some docs might suffer.  Hopefully our patients won’t.


Psychopharm R&D Cutbacks: Crisis or Opportunity?

June 19, 2011

The scientific journal Nature ran an editorial this week with a rather ominous headline: “Psychopharmacology in Crisis.”  What exactly is this “crisis” they speak of?  Is it the fact that our current psychiatric drugs are only marginally effective for many patients?  Is it the fact that they can often cause side effects that some patients complain are worse than the original disease?  No, the “crisis” is that the future of psychopharmacology is in jeopardy, as pharmaceutical companies, university labs, and government funding agencies devote fewer resources to research and development in psychopharmacology.  Whether this represents a true crisis, however, is entirely in the eye of the beholder.

In 2010, the pharmaceutical powerhouses GlaxoSmithKline (GSK) and AstraZeneca closed down R&D units for a variety of CNS disorders, a story that received much attention.  They suspended their research programs because of the high cost of bringing psychiatric drugs to market, the potential for lawsuits related to adverse events, and the heavy regulatory burdens faced by drug companies in the US and Europe.  In response, organizations like the European College of Neuropsychopharmacology (ECNP) and the Institute of Medicine in the US have convened summits to determine how to address this problem.

The “problem,” of course, for pharmaceutical companies is the potential absence of a predictable revenue stream.  Over the last several years, big pharma has found it more profitable not to develop novel drugs but to find new niches for existing agents—a decision driven by MBAs instead of MDs and PhDs.  As Steve Hyman, former NIMH director, told Science magazine last June, “It’s hardly a rich pipeline.  It suggests a sad dearth of ideas and … lots of attempts at patent extensions and new indications for old drugs.”

Indeed, when I look back at the drug approvals of the last three or four years, there really hasn’t been much to get excited about:  antidepressants (Lexapro, Pristiq, Cymbalta) that are similar in mechanism to other drugs we’ve been using for years; new antipsychotics (Saphris, Fanapt, Latuda) that are essentially me-too drugs which don’t dramatically improve upon older treatments; existing drugs (Abilify, Seroquel XR) that have received new indications for “add-on” treatment; existing drugs (Silenor, Nuedexta, Kapvay) that have been tweaked and reformulated for new indications; and new drugs (Invega, Oleptro, Invega Sustenna) whose major attraction is a fancy, novel delivery system.

Testing and approval of the above compounds undoubtedly cost billions of dollars (investments which, by the way, are being recovered in the form of higher health care costs to you and me), but the benefit to patients as a whole has been only marginal.

The true crisis, in my mind, is that with each new drug we psychiatrists are led to believe that we’re witnessing the birth of a blockbuster—even though many of these drugs turn out to be duds.  Patients expect the same, especially with the glut of persuasive direct-to-consumer advertising (“Ask your doctor if Pristiq is right for you!”).  Most third-party payers, too, are more willing to pay for medication treatment than anything else (although—thankfully—coverage of newer agents often requires the doctor to justify his or her decision).

In the meantime, this focus on drugs neglects the person behind the illness.  Not every person who walks into my office with a complaint of “depression” is a candidate for Viibryd or Seroquel XR.  Or even a candidate for antidepressants at all.  But the overwhelming bias is that another drug trial might work.  “Who knows—maybe the next drug is the ‘right’ one for this patient!”

Recently, I’ve joked with colleagues that I’d like to see a moratorium on psychiatric drug development.  Not because our current medications meet all of our needs, or because we can get by without any further research.  Not at all.  But if we had, say, five years with NO new drugs, we might be able to catch our collective breath, figure out exactly what it is we’re treating (maybe even have a more fruitful and unbiased discussion about what to put in the new DSM-5), and, perhaps, devote resources to nonpharmacological treatments, without getting caught up in the ongoing psychopharmacology arms race that, for many patients, focuses our attention where it doesn’t belong.

So it looks like my wish might come true.  Maybe we can use the upcoming slowdown to determine where the real needs are in psychiatry.  If we devote resources to community mental health services and to drug and alcohol treatment, pay more attention to our patients’ personality traits, lifestyle issues, and co-occurring medical illnesses, and respond to their goals for treatment rather than AstraZeneca’s or Pfizer’s, we can improve the care we provide and figure out where new drugs might truly pay off.  Along the way, we can spend some time following the guidelines discussed in a recent report in the Archives of Internal Medicine, and practice “conservative prescribing”—i.e., making sensible decisions about what we prescribe and why.

Sometimes, it is true that less is more.  When Big Pharma backs out of drug development, it’s not necessarily a bad thing.  In fact, it may be precisely what the doctor ordered.


CME, CE, and What Makes A Psychiatrist

May 25, 2011

Why do psychiatrists do what they do?  How—and why—is a psychiatrist different from a psychotherapist?  I believe that most psychiatrists entered this field wanting to learn the many ways to understand and to treat what’s “abnormal,” but have instead become caught up in (or brainwashed by?) the promises of modern-day psychopharmacology.  In the process, we’ve found ourselves pigeonholed into a role in which we prescribe drugs while others provide the more interesting (and more rewarding) psychosocial interventions.

Exceptions certainly do exist.  But psychiatrists are rapidly narrowing their focus to medication management alone.  If we continue to do so, we’d better be darn sure that what we’re doing actually works.  If it doesn’t, we may be digging ourselves a hole from which it will be difficult—if not impossible—to emerge.

How did we get to this point?  I’m a (relatively) young psychiatrist, so I’ll admit I don’t have the historical perspective of some of my mentors.  But in my brief career, I’ve seen these influences:  training programs that emphasize psychopharmacology over psychotherapy; insurance companies that reimburse for medication visits but not for therapy; patients who demand medications as a quick fix to their problems (and who either can’t access, or don’t want, other therapeutic options); and treatment settings in which an MD is needed to prescribe drugs while the “real work” is done by others.

But there’s yet another factor underlying psychiatry’s increasing separation from other behavioral health disciplines:  Continuing Medical Education, or CME.

All health care professionals must engage in some sort of professional education or “lifelong learning” to maintain their licenses.  Doctors must complete CME credits.  PAs, nurses, psychologists, social workers, and others must also complete their own Continuing Education (CE) credits, and the topics that qualify for credit differ from one discipline to the next.

The pediatrician and blogger Claudia Gold, MD, recently wrote about a program on “Infant-Parent Mental Health,” a three-day workshop she attended, which explored “how early relationships shape the brain and influence healthy emotional development.”  She wrote that the program “left me well qualified to do the work I do,” but she couldn’t receive CME credits because they only offered credit for psychologists—not for doctors.

I had a similar experience several years ago.  During my psychiatry residency, I was invited to attend a “Summit for Clinical Excellence” in Monterey, sponsored by the Ben Franklin Institute.  The BFI offers these symposia several times a year; they’re 3- or 4-day long programs consisting of lectures, discussions, and workshops on advanced mental health topics such as addictions, eating disorders, relationship issues, personality disorders, trauma, ethics, etc.—in other words, areas which fall squarely under the domain of “mental health,” but which psychiatrists often don’t treat (mainly because there are no simple “medication solutions” for many of these problems).

Even though my residency program did not give me any days off for the event (nor did they provide any financial support), I rearranged my schedule and attended anyway.  It turned out to be one of the most memorable events of my training.  I got to meet (yes, literally meet, not just sit in an audience and listen to) influential figures in mental health like Helen Fisher, Harville Hendrix, Daniel Amen, Peter Whybrow, and Bill O’Hanlon.  And because most of my co-attendees were not physicians, the discussions were not about medications, but rather about how we can best work with our patients on a human and personal level.  Indeed, the lessons I learned there (and the professional connections I made) have turned out to be extraordinarily valuable in my everyday work.  (For their upcoming summits, see this link.  Incidentally, I am not affiliated with the BFI in any way.)

Unfortunately, like Dr Gold, I didn’t receive any CME credits for this event either, even though my colleagues in other fields did get credit.  A few days ago, out of curiosity, I contacted BFI and inquired about their CME policy.  I was told that “the topic [of CME] comes up every few years, and we’ve thought about it,” but they’ve decided against it for two reasons.  First, there’s just not enough interest.  (I guess psychiatrists are too busy learning about drugs to take time to learn about people or ideas.)  Second, they said that the application process for CME accreditation is expensive and time-consuming (the application packet “is three inches thick”), and the content would require “expert review,” meaning that it would probably not meet criteria for “medical” CME because of its de-emphasis of medications.

To be fair, any doctor can attend a BFI Summit, just as anyone could have attended Dr Gold’s “Infant-Parent Mental Health” program.  And even though physicians don’t receive CME credits for these programs, there are many other simple (and free, though much of it is Pharma-supported) ways to obtain CME.

At any rate, it’s important—and not just symbolically—to look at where doctors get their training.  I want to learn about non-pharmacological, “alternative” ways to treat my patients (and to treat patients who don’t fit into the simple DSM categories—which is, well, pretty much everyone).  But to do so, it would have to be on my own dime, and without CME credit.  On the other hand, those who do receive this training (and the credit) are, in my opinion, prepared to provide much better patient care than those of us who think primarily about drugs.

At the risk of launching a “turf war” with my colleagues in other behavioral health disciplines, I make the following proposal: if psychologists lobby for the privilege to prescribe medications (a position which—for the record—I support), then I also believe that psychiatrists should lobby their own professional bodies (and the Accreditation Council for CME [ACCME]) to broaden the scope of what counts as “psychiatric CME.”  Medications are not always the answer.  Similarly, neurobiology and genetics will not necessarily lead us to better therapeutics.  And even if they do, we still have to deal with patients—i.e., human beings—and that’s a skill we’re neither taught nor encouraged to use.  I think it’s time for that to change.


Psychopharmacology And The Educated Guess

May 6, 2011

Sometimes I feel like a hypocrite.

As a practicing psychiatrist, I have an obligation to understand the data supporting my use of prescription medication.  In my attempts to do so, I’ve found some examples of clinical research that, unfortunately, are possibly irrelevant or misleading.  Many other writers and bloggers have taken this field to task (far more aggressively than I have) for clinical data that, in their eyes, are incomplete, inconclusive, or downright fraudulent.

In fact, we all like to hold our clinical researchers to an exceedingly high standard, and we complain indignantly when they don’t achieve it.

At the same time, I’ll admit I don’t always do the same in my own day-to-day practice.  In other words, I demand precision in clinical trials, but several times a day I’ll rely on anecdotal evidence (or even a “gut feeling”) in my prescribing decisions, abandoning the rigor that I expect from the companies that market their drugs to me.

Of all fields in medicine, psychopharmacology is the one where this is not just common; it’s the status quo.

“Evidence-based” practice is about making a sound diagnosis and using published clinical data to make a rational treatment decision.  Unfortunately, subjects in clinical trials of psychotropic drugs rarely—if ever—resemble “real” patients, and the real world often throws us curve balls that force us to improvise.  If an antipsychotic is only partially effective, what do we do?  If a patient doesn’t tolerate his antidepressant, then what?  What if a drug interferes with my patient’s sleep?  Or causes a nasty tremor?  There are no hard-and-fast rules for dealing with these types of situations, and the field of psychopharmacology offers wide latitude in how to handle them.

But then it gets really interesting.  Nearly all psychiatrists have encountered the occasional bizarre symptom, the unexpected physical finding, or the unexplained lab value (if labs are being checked, that is).  Psychopharmacologists like to look at these phenomena and concoct an explanation of what might be happening, based on their knowledge of the drugs they prescribe.  In fact, I’ve always thought that the definition of an “expert psychopharmacologist” is someone who understands the properties of drugs well enough to make a plausible (albeit potentially wrong) molecular or neurochemical explanation of a complex human phenotype, and then prescribe a drug to fix it.

The psychiatric literature is filled with case studies of interesting encounters or “clinical pearls” that illustrate this principle at work.

For example, consider this case report in the Journal of Neuropsychiatry and Clinical Neurosciences, in which the authors describe a case of worsening mania during slow upward titration of a Seroquel dose and hypothesize that an intermediate metabolite of quetiapine might be responsible for the patient’s mania.  Here’s another one, in which Remeron is suggested as an aid to benzodiazepine withdrawal, partially due to its 5-HT3 antagonist properties.  And another small study purports to explain how nizatidine (Axid), an H2 blocker, might prevent Zyprexa-induced weight gain.  And, predictably, such “hints” have even made their way into drug marketing, as in the ads for the new antipsychotic Latuda, which suggest that its 5-HT7 binding properties might be associated with improved cognition.

Of course, for “clinical pearls” par excellence, one need look no further than Stephen Stahl, particularly in his book Essential Psychopharmacology: The Prescriber’s Guide.  Nearly every page is filled with tips (and cute icons!) such as these:  “Lamictal may be useful as an adjunct to atypical antipsychotics for rapid onset of action in schizophrenia,” or “amoxapine may be the preferred tricyclic/tetracyclic antidepressant to combine with an MAOI in heroic cases due to its theoretically protective 5HT2A antagonist properties.”

These “pearls” or hypotheses are interesting suggestions, and might work, but have never been proven to be true.  At best, they are educated guesses.  In all honesty, no self-respecting psychopharmacologist would say that any of these “pearls” represents the absolute truth until we’ve replicated the findings (ideally in a proper controlled clinical trial).  But that has never stopped a psychopharmacologist from “trying it anyway.”

It has been said that “every time we prescribe a drug to a patient, we’re conducting an experiment, with n=1.”  It’s amazing how often we throw caution to the wind: just because we think we know how a drug might work, and can visualize in our minds all the pathways and receptors that we think our drugs are affecting, we add a drug or change a dose and profess to know what it’s doing.  Unfortunately, once we enter the realm of polypharmacy (not to mention the enormous complexity of human physiology), all bets are usually off.
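If every prescription really is an n=1 experiment, we could at least run it like one.  Here is a minimal sketch, with invented symptom scores, of how a single patient’s on-drug and off-drug weeks might be compared formally rather than by gut feeling:

```python
# Hypothetical n-of-1 trial: weekly symptom scores (lower = better), invented
# for illustration, from alternating on-drug and off-drug periods.
import random

on_drug  = [14, 12, 11, 13, 10, 12]   # scores during on-drug weeks
off_drug = [15, 16, 14, 15, 17, 15]   # scores during off-drug weeks

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(off_drug) - mean(on_drug)
print(f"mean improvement on drug: {observed:.1f} points")

# Permutation test: how often does randomly relabeling the same 12 scores
# produce a difference at least this large? If the answer is "often," the
# apparent response could easily be noise (placebo response, natural
# fluctuation, regression to the mean).
random.seed(0)
scores = on_drug + off_drug
extreme = 0
TRIALS = 10_000
for _ in range(TRIALS):
    random.shuffle(scores)
    if mean(scores[6:]) - mean(scores[:6]) >= observed:
        extreme += 1
print(f"approximate p-value: {extreme / TRIALS:.3f}")
```

Formal n-of-1 designs along these lines do exist in the literature; the point is simply that the single-patient experiment can be run with safeguards instead of impressions.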

What’s most disturbing is how often our assumptions are wrong—and how little we admit it.  For every published case study like the ones mentioned above, there are dozens—if not hundreds—of failed “experiments.”  (Heck, the same could be said even when we’re using something appropriately “evidence-based,” like using a second-generation antipsychotic for schizophrenia.)  In psychopharmacology, we like to take pride in our successes (“I added a touch of cyproterone, and his compulsive masturbation ceased entirely!”)  but conveniently excuse our failures (“She didn’t respond to my addition of low-dose N-acetylcysteine because of flashbacks from her childhood trauma”).  In that way, we can always be right.

Psychopharmacology is a potentially dangerous playground.  It’s important that we follow some well-established rules—like demanding rigorous clinical trials—and if we’re going to veer from this path, it’s important that we exercise the right safeguards in doing so.  At the same time, we should exercise some humility, because sometimes we have to admit we just don’t know what we’re doing.


The Perils of Checklist Psychiatry

March 16, 2011

It’s no secret that doctors in all specialties spend less and less time with patients these days.  Last Sunday’s NY Times cover article (which I wrote about here and here) gave a fairly stark example of how reimbursement incentives have given modern psychiatry a sort of assembly-line mentality:  “Come in, state your problems, and here’s your script.  Next in line!!”  Unfortunately, all the trappings of modern medicine—shrinking reimbursements, electronic medical record systems which favor checklists over narratives, and patients who frequently want a “quick fix”—contribute directly to this sort of practice.

To be fair, there are many psychiatrists who don’t work this way.  But this usually comes with a higher price tag, which insurance companies often refuse to pay.  Why?  Well, to use the common yet frustrating phrase, it’s not “evidence-based medicine.”  As it turns out, the only available evidence concerns specific symptoms (measured by a checklist) and the prescription of pills over (short) periods of time.  Paradoxically, psychiatry—which should know better—no longer sees patients as people with interesting backgrounds and multiple ongoing social and psychological dynamics, but as collections of symptoms (anywhere in the world!) which respond to drugs.

The embodiment of this mentality, of course, is the DSM-IV, the “diagnostic manual” of psychiatry, which is basically a collection of symptom checklists designed to make a psychiatric diagnosis.  Now, I know that’s a gross oversimplification, and I’m also aware that sophisticated interviewing skills can help to determine the difference between a minor disturbance in a patient’s mood or behavior and a pathological condition (i.e., between a symptom and a syndrome).  But often the time, or those skills, simply aren’t available, and a diagnosis is made on the basis of what’s on the list.  As a result, psychiatric diagnoses have become “diagnoses of inclusion”: you say you have a symptom, you’ll get a diagnosis.
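The mechanics are easy to caricature in code.  Here is a sketch of a checklist “diagnosis,” with items, scoring, and cutoffs all invented for illustration (loosely patterned on familiar self-report screeners):

```python
# A made-up nine-item symptom checklist (items, scoring, and cutoffs are all
# invented for illustration). Each item is self-rated 0-3; the "diagnosis"
# is nothing more than a sum.
ITEMS = [
    "low mood", "loss of interest", "sleep disturbance",
    "fatigue", "appetite change", "guilt or worthlessness",
    "poor concentration", "psychomotor change", "thoughts of death",
]

def screen(ratings):
    assert len(ratings) == len(ITEMS) and all(0 <= r <= 3 for r in ratings)
    total = sum(ratings)
    for cutoff, label in [(20, "severe"), (15, "moderately severe"),
                          (10, "moderate"), (5, "mild")]:
        if total >= cutoff:
            return label
    return "none"

# Endorse the symptoms, get the label -- a "diagnosis of inclusion."  Nothing
# here asks about bereavement, medical illness, substance use, or anything
# else that might explain the very same answers.
print(screen([2, 2, 1, 2, 1, 1, 1, 0, 0]))   # -> "moderate"
```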

To make matters worse, the checklist mentality, aided by the Internet, has spawned a small industry of “diagnostic tools,” freely available to clinicians and to patients, and published in books, magazines, and web sites.  (The bestselling book The Checklist Manifesto may have contributed, too.  In it, author-surgeon Atul Gawande explains how simple checklists are useful in complex situations in which lives are on the line.  He has received much praise, but the checklists he describes help to narrow our focus, when in psychiatry it should be broadened.  In other words, checklists are great for preparing an OR for surgery, or a jetliner for takeoff, but not in identifying the underlying causes of an individual’s suffering.)

Anyway, a quick Google search for any mental health condition (or even a personality trait like shyness, irritability, or anger) will reveal dozens of free questionnaires, surveys, and checklists designed to make a tentative diagnosis.  Most give the disclaimer “this is not meant to be a diagnostic tool—please consult your physician.”

But why?  If the patient has already answered all the questions that the doctor will ask anyway in the 10 to 15 minutes allotted for their appointment, why can’t the patient just email the questionnaire directly to a doc in another state (or another country) from the convenience of their own home, enter their credit card information, and wait for a prescription in the mail?  Heck, why not eliminate the middleman and submit the questionnaire directly to the drug company for a supply of pills?

I realize I’m exaggerating here.  Questionnaires and checklists can be extremely helpful—when used responsibly—as a way to obtain a “snapshot” of a patient’s progress or of his/her active symptoms, and to suggest topics for discussion in a more thorough interview.  People also have an innate desire to know how they “score” on some measure—the exercise can even be entertaining—and their results can sometimes reveal things they didn’t know about themselves.

But what makes psychiatry and psychology fascinating is the discovery of alternate, more parsimonious (or potentially more serious) explanations for a patient’s traits and behaviors; or, conversely, informing a patient that his or her “high score” is actually nothing to be worried about.  That’s where the expert comes in.  Unfortunately, experts can behave like Internet surveys, too, and when we do, the “rush to judgment” can be shortsighted, unfair, and wrong.


What Psychiatrists Treat and Why

February 20, 2011

Do we treat diseases or symptoms in psychiatry?  While this question might sound philosophical in nature, it’s actually a very practical one in terms of treatment strategies we espouse, medications and other interventions we employ, and, of course, how we pay for mental health care.  It’s also a question that lies at the heart of what psychiatry is all about.

Anyone who has been to medical school or who has watched an episode of House knows that a disease has (a) an underlying pathology, often hidden to the naked eye but which is shared by all patients with that diagnosis, and (b) signs and symptoms, which are readily apparent upon exam but which may differ in subtle ways from patient to patient.  An expert physician performing a comprehensive examination can often make a diagnosis simply on the basis of signs and symptoms.  In some cases, more sophisticated tools (lab tests, scans, etc) are required to confirm the diagnosis.  In the end, once a diagnosis is obtained, treatment can commence.

(To be sure, sometimes a diagnosis is not apparent, and a provisional or “rule-out” diagnosis is given.  The doctor may initiate treatment on an empiric basis but will refine the diagnosis on the basis of future observations, responses to treatment, and/or disease course.)

Psychiatry is recognized as a branch of medicine and (should) subscribe to the same principles of diagnosis and treatment, so the expectations are the same.  There are a number of diseases (or disorders) listed in the DSM-IV, each theoretically with its own underlying pathology and natural history, and each recognizable by a set of signs and symptoms.  A careful psychiatric evaluation and mental status exam will reveal the true diagnosis and suggest a treatment plan to the clinician.

It sounds simple, but it doesn’t always work out this way.  Psychiatrists may disagree about a given diagnosis, or make diagnoses based on “soft” signs.  Moreover, there are very few biological or biochemical tests to “rule in” a psychiatric diagnosis.  As a result, treatment plans for psychiatric patients often include multiple approaches that don’t make sense;  for example, using an antidepressant to treat bipolar disorder, or using antipsychotics to treat anxiety or insomnia symptoms in major depression.

The psychiatrist Nassir Ghaemi at Tufts has written about this before (click here for a very accessible version of his argument and here [registration required] for a more recent dialogue in which he argues his point further).  Ghaemi argues in favor of what he calls “Hippocratic psychopharmacology.” Specifically, we should understand and respect the normal course of a disease before initiating treatment.  He also emphasizes that we not treat symptoms, but rather the disease (this is also known as Osler’s Rule, in honor of Sir William Osler, the “founder of modern medicine”).  For example, Ghaemi makes a fairly compelling argument that bipolar disorder should be treated with a mood stabilizer alone, and not with an antidepressant, or an antipsychotic, or a sedative, because those drugs treat symptoms which should resolve as a person goes through the natural course of the disease.  In other words, we miss the diagnostic forest by focusing on the symptomatic trees.

The problem is, this is a compelling argument only if there is such a diagnosis as “bipolar disorder.”  Or, to be more specific, a clear, unitary entity with a distinct pathophysiological basis that gives rise to the symptoms that we see as mania and depression, and which all “bipolar” patients share.  And I don’t believe this assumption has been borne out.

My personal bias is that bipolar disorder does exist.  So do major depression, schizophrenia, panic disorder, anorexia nervosa, ADHD, and (almost) all the other diagnoses listed in the DSM-IV.  And a deeper understanding of the pathophysiology of each might help us to develop targeted treatments that will be far more effective than what we have now.  But we’re not there yet.  In the case of bipolar disorder, lithium is a very effective drug, but it doesn’t work in everyone with “bipolar.”  Why not?  Perhaps “bipolar disorder” is actually several different disorders.  Not just formes frustes of the same condition, but separate entities altogether, with entirely different pathophysiologies which might appear roughly the same on the outside (sort of like obesity or alcoholism).  Of course, there are also many people diagnosed with “bipolar” who might really have no pathology at all, so it is no surprise that they don’t respond to a mood stabilizer (I won’t elaborate on this possibility here; maybe some other time).

The committee in charge of writing the DSM-5 is almost certainly facing this conundrum.  One of the “holy grails” of 21st century psychiatry (which I wrote about here) is to identify biochemical or genetic markers that predict or diagnose psychiatric disease, and it was hoped that the next version of the DSM would include these markers amongst its diagnostic criteria.   Unfortunately, this isn’t happening, at least not with DSM-5.  In fact, what we’re likely to get is a reshuffling and expansion of diagnostic criteria.  Which just makes matters worse:  how can we follow Osler’s advice to treat the disease and not the symptom when the definition of disease will change with the publication of a new handbook?

As a practicing psychiatrist, I’d love to be able to make a sound and accurate diagnosis and to use this diagnosis to inform my treatment, practicing in the true Hippocratic tradition and following Osler’s Rule, which has benefited my colleagues in other fields of medicine.  I also recognize that this approach would respect Dr Ghaemi’s attempt at bringing some order and sensibility to psychiatric practice.  Unfortunately, this is hard to do because (a) we still don’t know the underlying cause(s) of psychiatric disorders, and (b) restricting myself to pathophysiology and diagnosis means ignoring the psychosocial and environmental factors that are (in many ways) even more important to patients than what “disease” they have.

It has frequently been said that medicine is an art, not a science, and psychiatry is probably the best example of this truism.  Let’s not stop searching for the biological basis of mental illness, but also be aware that it may not be easy to find.  Until then, whether we treat “diagnoses” or “symptoms” is a matter of style.  Yes, the insurance company wants a diagnosis in order to provide reimbursement, but the patient wants management of his or her symptoms in order to live a more satisfying life.