Do Antipsychotics Treat PTSD?

August 23, 2011

Do antipsychotics treat PTSD?  It depends.  That seems to be the best response I can give, based on the results of two recent studies on this complex disorder.  A better question, though, might be: what do antipsychotics treat in PTSD?

One of these reports, a controlled, double-blind study of the atypical antipsychotic risperidone (Risperdal) for the treatment of “military service-related PTSD,” was featured in a New York Times article earlier this month.  The NYT headline proclaimed, somewhat unceremoniously:  “Antipsychotic Use is Questioned for Combat Stress.”  And indeed, the actual study, published in the Journal of the American Medical Association (JAMA), demonstrated that a six-month trial of risperidone did not improve patients’ scores on a scale of PTSD symptoms when compared with placebo.

But almost simultaneously, another paper was published in the online journal BMC Psychiatry, stating that Abilify—a different atypical antipsychotic—actually did help patients with “military-related PTSD with major depression.”

So what are we to conclude?  Even though there are some key differences between the studies (which I’ll mention below), a brief survey of the headlines might leave the impression that the two reports “cancel each other out.”  In reality, I think it’s safe to say that neither study contributes very much to our treatment of PTSD.  But it’s not because of the equivocal results.  Instead, it’s a consequence of the premises upon which the two studies were based.

PTSD, or post-traumatic stress disorder, is an incredibly complicated condition.  The diagnosis was first given to Vietnam veterans who, for years after their service, experienced symptoms of increased physiological arousal, avoidance of stimuli associated with their wartime experiences, and continual re-experiencing (in the form of nightmares or flashbacks) of the trauma they had endured or observed.  It’s essentially a reformulation of conditions that were, in earlier years, labeled “shell shock” or “combat fatigue.”

Since the introduction of this disorder in 1980 (in DSM-III), the diagnostic umbrella of PTSD has grown to include victims of sexual and physical abuse, traumatic accidents, natural disasters, terrorist attacks (like the September 11 massacre), and other criminal acts.  Some have even argued that poverty or unfortunate psychosocial circumstances may also qualify as the “traumatic” event.

Not only are the types of stressors that cause PTSD widely variable, but so are the symptoms that ultimately develop.  Some patients complain of minor but persistent symptoms, while others experience infrequent but intense exacerbations.  Similarly, the neurobiology of PTSD is still poorly understood, and may vary from person to person.  And we’ve only just begun to understand protective factors for PTSD, such as the concept of “resilience.”

Does it even make sense to say that one drug can (or cannot) treat such a complex disorder?  Take, for instance, the outcome measure used in the JAMA article: the Clinician-Administered PTSD Scale, or CAPS, considered the “gold standard” for PTSD diagnosis.  The CAPS includes 30 items, ranging from sleep disturbances to concentration difficulties to “survivor guilt.”

It doesn’t take a cognitive psychologist or neuroscientist to recognize that these 30 domains—all features of what we consider “clinical” PTSD—could be explained by just as many, if not more, neural pathways, and may be experienced in entirely different ways, depending on one’s psychological makeup and the nature of one’s past trauma.

In other words, saying that Risperdal is “not effective” for PTSD is like saying that acupuncture is not effective for chronic pain, or that a low-carb diet is not an effective way to lose weight.  Statistically speaking, these interventions might not help most patients, but in some, they may indeed play a crucial role.  We just don’t understand the disorders well enough.

[By the way, what about the other study, which reported that Abilify was helpful?  Well, this study was a retrospective review of patients who were prescribed Abilify, not a randomized, placebo-controlled trial.  And it did not use the CAPS, but the PCL-M, a shorter survey of PTSD symptoms.  Moreover, it only included 27 of the 123 veterans who agreed to take Abilify, and I cannot, for the life of me, figure out why the other 96 were excluded from their analysis.]

Anyway, the bottom line is this:  PTSD is a complicated, multifaceted disorder—probably a combination of disorders, similar to much of what we see in psychiatry.  To say that one medication “works” or another “doesn’t work” oversimplifies the condition almost to the point of absurdity.  And for the New York Times to publicize such a finding only gives more credence to the misconception that a prescription medication is (or has the potential to be) the treatment of choice for all patients with a given diagnosis.

What we need is not another drug trial for PTSD, but rather a better understanding of the psychological and neurobiological underpinnings of the disease, a comprehensive analysis of which symptoms respond to which drug, which aspects of the disorder are not amenable to medication management, and how individuals differ in their experience of the disorder and in the tools (pharmacological and otherwise) they can use to overcome their despair.  Anything else is a failure to recognize the human aspects of the disease, and an issuance of false hope to those who suffer.


Critical Thinking and Drug Advertising

August 14, 2011

One of the advantages of teaching medical students is that I can keep abreast of changes in medical education.  It’s far too easy for a doctor (even just a few years out of training) to become complacent and oblivious to changes in the modern medical curriculum.  So I was pleasantly surprised earlier this week when a fourth-year medical student told me that his recent licensing examination included a vignette which tested his ability to interpret data from a pharmaceutical company advertisement.  Given that most patients (and, indeed, most doctors) now get their information from such sources, it was nice to see that this is now part of a medical student’s education.

For those of you unfamiliar with the process, the US Medical Licensing Examination (USMLE) is a three-step examination that all medical students must take in order to obtain a medical license in the United States.  Most students take steps 1 and 2 during medical school, while step 3 is taken during residency.

Effective this month, the drug-ad questions will appear in the Step 2 examination.  Obviously, I don’t have access to the particular ad that my med student saw, but here’s a sample item taken from the USMLE website.


It’s attractive and seems concise.  It’s certainly easier to read—some might even say more “fun”—than a dry, boring journal article or data table.  But is it informative?  What would a doctor need to know to confidently prescribe this new drug?  That’s the emphasis of this new type of test question.  Specifically, the two questions pertaining to this item ask the student (1) to identify which statement is most strongly supported by information in the ad, and (2) to determine which type of research design would give the best data in support of using this drug.

It’s good to know that students are being encouraged to ask such questions of themselves (and, more importantly, one would hope, of the people presenting them with such information).  For comparison, here are two “real-world” examples of promotional advertising I have received for two recently launched psychiatric drugs.


Again, nice to look at.  But essentially devoid of information.  Okay, maybe that’s unfair:  Latuda was found to be effective in “two studies for each dose,” and the Oleptro ad claims that “an eight-week study showed that depression symptoms improved for many people taking Oleptro.”  But what does “effective” mean?  What does “improved” mean?  Where’s the data?  How do these drugs compare to medications we’ve been using for years?  Those are the questions that we need to ask, not only to save costs (new drugs are expensive) but also to prevent exposing our patients to adverse effects that only emerge after a period of time on a drug.

(To be fair, it is quite easy to obtain this information on the drug companies’ websites, or by asking the respective drug reps.  But first impressions count for a lot, and how many providers actually ask for the info?  Or can understand it once they do get it??)

The issue of drug advertising and its influence on doctors has received a good degree of attention lately.  An article in PLoS Medicine last year found that exposure to pharmaceutical company information was frequently (although not always) associated with more prescriptions, higher health care costs, or lower prescribing quality.  Similarly, a report last May in the Archives of Otolaryngology evaluated 50 drug ads in otolaryngology (ENT) journals and found that only 14 of them (28%) made claims based on “strong evidence.”  And the journal Emergency Medicine Australasia went one step further last February and banned all drug company advertising, claiming that “marketing of drugs by the pharmaceutical industry, whose prime aim is to bias readers towards prescribing a particular product, is fundamentally at odds with the mission of medical journals.”

The authors of the PLoS article even wrote the editors of the Lancet, one of the world’s top medical journals, to ask if they’d be willing to ban drug ads, too.  Unfortunately, banning drug advertising may not solve the problem either.  As discussed in an excellent article by Harriet Washington in this summer’s American Scholar, drug companies have great influence over the research that gets funded, carried out, and published, regardless of advertising.  Washington writes: “there exist many ways to subvert the clinical-trial process for marketing purposes, and the pharmaceutical industry seems to have found them all.”

As I’ve written before, I have no philosophical—or practical—opposition to pharmaceutical companies, commercial R&D, or drug advertising.  But I am opposed to the blind acceptance of messages that are the direct product of corporate marketing departments, Madison Avenue hucksters, and drug-company shills.  It’s nice to know that the doctors of tomorrow are being taught to ask the right questions, to become aware of bias, and to develop stronger critical thinking skills.  Hopefully this will help them to make better decisions for their patients, rather than serve as unwitting conduits for big pharma’s more wasteful wares.


Antidepressants: The New Candy?

August 9, 2011

It should come as no surprise to anyone paying attention to health care (not to mention modern American society) that antidepressants are very heavily prescribed.  They are, in fact, the second most widely prescribed class of medicine in America, with 253 million prescriptions written in 2010 alone.  Whether this means we are suffering from an epidemic of depression is another matter.  In fact, a recent article questions whether we’re suffering from much of anything at all.

In the August issue of Health Affairs, Ramin Mojtabai and Mark Olfson present evidence that doctors are prescribing antidepressants at ever-higher rates.  Between 1996 and 2007, the percentage of all office visits to non-psychiatrists that included an antidepressant prescription rose from 4.1% to 8.8%.  The rates were even higher for primary care providers: from 6.2% to 11.5%.

But there’s more.  The investigators also found that in the majority of cases, antidepressants were given even in the absence of a psychiatric diagnosis.  In 1996, 59.5% of the antidepressant recipients lacked a psychiatric diagnosis.  In 2007, this number had increased to 72.7%.

In other words, nearly 3 out of 4 patients who visited a nonpsychiatrist and received a prescription for an antidepressant were not given a psychiatric diagnosis by that doctor.  Why might this be the case?  Well, as the authors point out, antidepressants are used off-label for a variety of conditions—fatigue, pain, headaches, PMS, irritability.  None of these uses, mind you, is supported by good data.

It’s possible that nonpsychiatrists might add an antidepressant to someone’s medication regimen because the patient “seems” depressed or anxious.  It is also true that primary care providers do manage mental illness sometimes, particularly in areas where psychiatrists are in short supply.  But remember, in the majority of cases the doctors did not even give a psychiatric diagnosis, which suggests that even if they did a “psychiatric evaluation,” the evaluation was likely quick and haphazard.

And then, of course, there were probably some cases in which the primary care docs just continued medications that were originally prescribed by a psychiatrist—in which case perhaps they simply didn’t report a diagnosis.

But is any of this okay?  Some, like a psychiatrist quoted in a Wall Street Journal article on this report, argue that antidepressants are safe.  They’re unlikely to be abused, often effective (if only as a placebo), and dirt cheap (well, at least the generic SSRIs and TCAs are).  But others have had very real problems discontinuing them, or have suffered particularly troublesome side effects.

The increasingly indiscriminate use of antidepressants might also open the door to the (ab)use of other, more costly drugs with potentially more devastating side effects.  I continue to be amazed, for example, by the number of primary care docs who prescribe Seroquel (an antipsychotic) for insomnia, when multiple other pharmacologic and nonpharmacologic options are ignored.  In my experience, in the vast majority of these cases, the (well-known) risks of increased appetite and elevated blood sugar were never discussed with the patient.  And then there are other antipsychotics like Abilify and Seroquel XR, which are increasingly being used in primary care as drugs to “augment” antidepressants and will probably be prescribed as freely as the antidepressants themselves.  (Case in point: a senior medical student was shocked when I told her a few days ago that Abilify is an antipsychotic.  “I always thought it was an antidepressant,” she remarked, “after seeing all those TV commercials.”)

For better or for worse, the increased use of antidepressants in primary care may prove to be yet another blow to the foundation of biological psychiatry.  Doctors prescribe—and continue to prescribe—these drugs because they “work.”  It’s probably more accurate, however, to say that doctors and patients think they work.  And this may have nothing to do with biology.  As the saying goes, it’s the thought that counts.

Anyway, if this is true—and you consider the fact that these drugs are prescribed on the basis of a rudimentary workup (remember, no diagnosis was given 72.7% of the time)—then the use of an antidepressant probably has no more justification than the addition of a multivitamin, the admonition to eat less red meat, or the suggestion to “get more fresh air.”

The bottom line: If we’re going to give out antidepressants like candy, then let’s treat them as such.  Too much candy can be a bad thing—something that primary care doctors can certainly understand.  So if our patients ask for candy, then we need to find a substitute—something equally soothing and comforting—or provide them instead with a healthy diet of interventions to address the real issues, rather than masking those problems with a treat to satisfy their sweet tooth and bring them back for more.


Maybe Stuart Smalley Was Right All Along

July 31, 2011

To many people, the self-help movement—with its positive self-talk, daily feel-good affirmations, and emphasis on vague concepts like “gratitude” and “acceptance”—seems like cheesy psychobabble.  Take, for instance, Al Franken’s fictional early-1990s SNL character Stuart Smalley: a perennially cheerful, cardigan-clad “member of several 12-step groups but not a licensed therapist,” whose annoyingly positive attitude mocked the idea that personal suffering could be overcome with absurdly simple affirmative self-talk.

Stuart Smalley was clearly a caricature of the 12-step movement (in fact, many of his “catchphrases” came directly from 12-step principles), but there’s little doubt that the strategies he espoused have worked for many patients in their efforts to overcome alcoholism, drug addiction, and other types of mental illness.

Twenty years later, we now realize Stuart may have been onto something.

A review by Kristin Layous and her colleagues, published in this month’s Journal of Alternative and Complementary Medicine, presents evidence that daily affirmations and other “positive activity interventions” (PAIs) may have a place in the treatment of depression.  They summarize recent studies examining such interventions, including two randomized controlled studies in patients with mild clinical depression, which show that PAIs do, in fact, have a significant (and rapid) effect in reducing depressive symptoms.

What exactly is a PAI?  The authors offer some examples:  “writing letters of gratitude, counting one’s blessings, practicing optimism, performing acts of kindness, meditation on positive feelings toward others, and using one’s signature strengths.”  They argue that when a depressed person engages in any of these activities, he or she not only overcomes depressed feelings (if only transiently) but can also use this to “move past the point of simply ‘not feeling depressed’ to the point of flourishing.”

Layous and her colleagues even summarize results of clinical trials of self-administered PAIs.  They report that PAIs had effect sizes of 0.31 for depressive symptoms in a community sample, and 0.24 and 0.23 in two studies specifically with depressed patients.  By comparison, psychotherapy has an average effect size of approximately 0.32, and psychotropic medications (although there is some controversy) have roughly the same effect.

[BTW, an “effect size” is a standardized measure of the magnitude of an observed effect.  An effect size of 0.00 means the intervention has no impact at all; an effect size of 1.00 means the intervention causes an average change (measured across the whole group) equivalent to one standard deviation of the baseline measurement in that group.  An effect size of 0.5 means the average change is half the standard deviation, and so forth.  In general, an effect size of 0.10 is considered to be “small,” 0.30 is “medium,” and 0.50 is a “large” effect.  For more information, see this excellent summary.]
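For readers who want to see the arithmetic, here is a minimal sketch in Python of how a standardized mean difference (Cohen’s d, the most common effect size for comparing two groups) is calculated.  The function, group names, and scores are entirely made up for illustration; they are not from the Layous review.

```python
import statistics

def effect_size(treated, control):
    # Cohen's d: difference in group means divided by the pooled standard deviation
    n1, n2 = len(treated), len(control)
    s1, s2 = statistics.stdev(treated), statistics.stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(control) - statistics.mean(treated)) / pooled_sd

# Hypothetical depression-rating scores after treatment (lower = fewer symptoms)
pai_group = [12, 14, 9, 11, 15, 10, 13]       # practiced a daily "positive activity"
control_group = [15, 16, 12, 14, 18, 13, 15]  # no intervention
print(round(effect_size(pai_group, control_group), 2))  # 1.31 with these toy numbers
```

(With these invented numbers the effect comes out large, about 1.3; the actual PAI trials cited above report far more modest values, in the 0.2-0.3 range.)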

So if PAIs work about as well as medications or psychotherapy, then why don’t we use them more often in our depressed patients?   Well, there are a number of reasons.  First of all, until recently, no one had taken such an approach very seriously.  Despite its enormous common-sense appeal, “positive psychology” has only been a field of legitimate scientific study for the last ten years or so (one of its major proponents, Sonja Lyubomirsky, is a co-author on this review) and therefore has not received the sort of scientific scrutiny demanded by “evidence-based” medicine.

A related explanation may be that people just don’t think that “positive thinking” can cure what they feel must be a disease.  As Albert Einstein once said, “You cannot solve a problem from the same consciousness that created it.”  The implication is that one must seek outside help—a drug, a therapist, some expert—to treat one’s illness.  But the reality is that for most cases of depression, “positive thinking” is outside help.  It’s something that—almost by definition—depressed people don’t do.  If they were to try it, they might reap great benefits, while simultaneously changing the neural pathways responsible for the depression in the first place.

Which brings me to the final two reasons why “positive thinking” isn’t part of our treatment repertoire.  For one thing, there’s little financial incentive (to people like me) to do it.  If my patients can overcome their depression by “counting their blessings” for 30 minutes each day, or acting kindly towards strangers ten times a week, then they’ll be less likely to pay me for psychotherapy or for a refill of their antidepressant prescription.  Thus, psychiatrists and psychologists have a vested interest in patients believing that their expert skills and knowledge (of esoteric neural pathways) are vital for a full recovery, when, in fact, they may not be.

Finally, the “positive thinking” concept may itself become too “medicalized,” which could ruin an otherwise very good idea.  Layous and her colleagues, for example, try to give a neuroanatomical explanation for why PAIs are effective.  They write that PAIs “might be linked to downregulation of the hyperactivated amygdala response” or might cause “activation in the left frontal region” and lower activity in the right frontal region.  Okay, these explanations might be true, but the real question is: does it matter?  Is it necessary to identify a mechanism for everything, even interventions that are (a) non-invasive, (b) cheap, (c) easy, (d) safe, and (e) effective?   In our great desire to identify neural mechanisms or “pathways” of PAIs, we might end up finding nothing; it would be a shame if this result (or, more accurately, the lack thereof) leads us to conclude that it’s all “pseudoscience,” hocus-pocus, psychobabble stuff, and not worthy of our time or resources.

At any rate, it’s great to see that alternative methods of treating depression are receiving some attention.  I just hope that their “alternative-ness” doesn’t earn immediate rejection by the medical community.  On the contrary, we need to identify those for whom such approaches are beneficial; engaging in “positive activities” to treat depression is an obvious idea whose time has come.


Addiction Medicine: A New Specialty Or More Of The Same?

July 14, 2011

In an attempt to address a significant—and unmet—need in contemporary health care, the American Board of Addiction Medicine (ABAM) has accredited ten new residency programs in “addiction medicine.”  Details can be found in this article in the July 10 New York Times.  This new initiative will permit young doctors who have completed medical school and an initial internship year to spend an additional year learning about the management of addictive disease.

To be sure, there’s a definite need for trained addiction specialists.  Nora Volkow, director of the National Institute on Drug Abuse (NIDA), says that the lack of knowledge about substance abuse among physicians is “a very serious problem,” and I have certainly found this to be true.  Addictions to drugs and alcohol are devastating (and often life-threatening) conditions that many doctors are ill-prepared to understand—much less treat—and such disorders frequently complicate the management of many medical and psychiatric conditions.

Having worked in the addiction field, however (and having had my own personal experiences in the recovery process), I’m concerned about the precedent that these programs might set for future generations of physicians treating addictive illness.

As much as I respect addiction scientists and agree that the neurochemical basis of addiction deserves greater study, I disagree (in part) with the countless experts who have pronounced for the last 10-20 years that addiction is “a brain disease.”  In my opinion, addiction is a brain disease in the same way that “love” is a rush of dopamine or “anxiety” is a limbic system abnormality.  In other words: yes, addiction clearly does involve the brain, but overcoming one’s addiction (which means different things to different people) is a process that transcends simply taking a pill, correcting one’s biochemistry, or fixing a mutant gene.  In some cases it requires hard work and immense will power; in other cases, a grim recognition of one’s circumstances (“hitting bottom”) and a desire to change; and in still other cases, a “spiritual awakening.”  None of these can be prescribed by a doctor.

In fact, the best argument against the idea of addiction as a purely biological illness is simple experience.  Each of us has heard of the alcoholic who got sober by going to meetings; or the heroin addict who successfully quit “cold turkey”; or the hard-core cocaine user who stopped after a serious financial setback or the threat of losing his job, marriage, or both.  Such stories are actually quite common.  By comparison, no one overcomes diabetes after experiencing “one too many episodes of ketoacidosis,” and no one resolves their hypertension by establishing a relationship with a Higher Power.

That’s not to say that pharmacological remedies have no place in the treatment of addiction.  Methadone and buprenorphine (Suboxone) are legal, prescription substitutes for heroin and other opioids, and they have allowed addicts to live respectable, “functional” lives.  Drugs like naltrexone or Topamax might curb craving for alcohol in at least some alcoholic patients (of course, when you’re talking about the difference between 18 beers/day and 13 beers/day, you might correctly ask, “what’s the point?”), and other pharmaceuticals might do the same for such nasty things as cocaine, nicotine, gambling, or sugar & flour.

But we in medicine tend to overemphasize the pharmacological solution.  My own specialty of psychiatry is the best example of this:  we have taken extremely rich, complicated, and variable human experiences and phenotypes and distilled them into a bland, clinical lexicon replete with “symptoms” and “disorders,” and prescribe drugs that supposedly treat those disorders—on the basis of studies that rarely resemble the real world—while at the same time frequently ignoring the very real personal struggles that each patient endures.  (Okay, time to get off my soapbox.)

A medical specialty focusing on addictions is a fantastic idea and holds tremendous promise for those who suffer from these absolutely catastrophic conditions.  But ONLY if it transcends the “medical” mindset and instead sees these conditions as complex psychological, spiritual, motivational, social, (mal)adaptive, life-defining—and, yes, biochemical—phenomena that deserve comprehensive and multifaceted care.  As with much in psychiatry, there will be some patients whose symptoms or “brain lesions” are well defined and who respond well to a simple medication approach (a la the “medical model”), but the majority of patients will have vastly more complicated reasons for using, and an equally vast number of potential solutions they can pursue.

Whether this can be taught in a one-year Addiction Medicine residency remains to be seen.  Some physicians, for example, call themselves “addiction specialists” simply by completing an 8-hour-long online training course to prescribe Suboxone to heroin and Oxycontin abusers.  (By the way, Reckitt Benckiser, the manufacturer of Suboxone, is not primarily a drug company; it is better known for its other major products:  Lysol, Mop & Glo, Sani Flush, French’s mustard, and Durex condoms.)  Hopefully, an Addiction Medicine residency will be more than a year-long infomercial for the latest substitution and “anti-craving” agents from multi-national conglomerates.

Nevertheless, the idea that new generations of young doctors will be trained specifically in the diagnosis and management of addictive disorders is a very welcome one indeed.  The physicians who choose this specialty will probably do so for a very particular reason, perhaps—even though this is by no means essential—due to their own personal experience or the experience of a loved one.  I simply hope that their teachers remind them that addiction is incredibly complicated, no two patients become “addicted” for the same reasons, and successful treatment often relies upon ignoring the obvious and digging more deeply into one’s needs, worries, concerns, anxieties, and much, much more.  This has certainly been my experience in psychiatry, and I’d hate to think that TWO medical specialties might be corrupted by an aggressive focus on a medication-centric, “one-size-fits-all” approach to the complexity of human nature.


I Just Don’t Know What (Or Whom) To Believe Anymore

July 2, 2011

de-lu-sion [dih-loo-zhuhn] Noun.  1. An idiosyncratic belief or impression that is firmly maintained despite being contradicted by what is generally accepted as reality, typically a symptom of mental disorder.

The announcement this week of disciplinary action against three Harvard Medical School psychiatrists (which you can read about here and here and here and here) for violating that institution’s conflict-of-interest policy comes at a pivotal time for psychiatry.  Or at least for my own perceptions of it.

As readers of this blog know, I can be cynical, critical, and skeptical about the medicine I practice on a daily basis.  This arises from two biases that have defined my approach to medicine from Day One:  (1) a respect for the patient’s point of view (which, in many ways, arose out of my own personal experiences), and (2) a need to see and understand the evidence (probably a consequence of my years of graduate work in basic molecular neuroscience before becoming a psychiatrist).

Surprisingly, I have found these attributes to be in short supply among many psychiatrists—even among the people we consider to be our leaders in the field.  And Harvard’s action against Biederman, Spencer, and Wilens might unfortunately just be the tip of the iceberg.

I entered medical school in the late 1990s.  I recall one of my preclinical lectures at Cornell, in which the chairman of our psychiatry department, Jack Barchas, spoke with breathless enthusiasm about the future of psychiatry.  He expounded passionately about how the coming era would bring deeper knowledge of the biological mechanisms of mental illness and new, safer, more effective medications that would vastly improve our patients’ lives.

My other teachers and mentors were just as optimistic.  The literature at the time was filled with studies of new pharmaceuticals (the atypical antipsychotics, primarily), molecular and neuroimaging discoveries, and novel research into genetic markers of illness.  As a student, it was hard not to be caught up in the excitement of the coming revolution in biological psychiatry.

But I now wonder whether we may have been deluding ourselves.  I have no reason to think that Dr Barchas was lying to us in that lecture at Cornell, but those who did the research about which he pontificated may not have been giving us the whole story.  In fact, we’re now learning that those “revolutionary” new drugs were not quite as revolutionary as they appeared.  Drug companies routinely hid negative results and designed their studies to make the new drugs appear more effective.  They glossed over data about side effects, and frequently ghostwrote books and articles that appeared to come from their (supposedly unbiased) academic colleagues.

This went on for a long time.  And for all those years, these same academics taught the current generation of psychiatrists like me, and lectured widely (for pay, of course) to psychiatrists in the community.

In my residency years in the mid-2000s, for instance, each one of my faculty members (with only one exception that I’m aware of) spoke for drug companies or was being paid to do research on drugs that we were actively prescribing in the clinic and on the wards.  (I didn’t know this at the time, of course; I learned this afterward.)  And this was undoubtedly the case in other top-tier academic centers throughout the country, having a trickle-down effect on the practice of psychiatry worldwide.

Now, there’s nothing wrong with academics doing research or being paid to do it.  For me, the problem is that those two “pillars” by which I practice medicine (i.e., respect for the patient’s well-being, and a desire for hard evidence) were not the priorities of much of this clinical research.  Patients weren’t always getting better with these new drugs (certainly not in the long run), and the data were finessed and refined in ways that embellished the main message.  This was, of course, exacerbated by the big paychecks many of my academic mentors received.  Money has a remarkable way of influencing what people say and how (and how often) they say it.

But how is a student—or a practicing doc in the community who is several decades out of medical school—supposed to know this?  In my opinion, those who teach medical students and psychiatry residents probably should not be on a pharma payroll or give promotional talks for drugs.  These “academic leaders” are supposed to be fair, neutral, thoughtful authorities who make recommendations based on patient outcome data and nothing else.  Isn’t that why we have academic medical centers in the first place?   (Hey, at least we know that drug reps are paid handsome salaries & bonuses by drug companies… But don’t we expect university professors to be different?)

Just as a series of little white lies can snowball into an enormous unintended deception, I’m afraid that the last 10-20 years of cumulative tainted messages (sometimes deliberate, sometimes not) about the “promises” of psychiatry have created a widespread shared delusion about what we can offer our patients.  And if that’s too much of an exaggeration, then we might at least agree that our field now suffers a crisis of confidence in our leaders.  As Daniel Carlat commented in a story about the Harvard action: “When I get on the phone now and talk to a colleague about a study… [I ask] ‘was this industry funded, and can we trust the study?'”

It may be too late to avoid irreparable damage to this field or our confidence in it.  But at least some of this is coming to light.  If nothing else, we’re taking a cue from our area of clinical expertise, and challenging the delusional thought processes that have driven our actions for many, many years.


Big Brother in Your Medicine Cabinet

June 29, 2011

If there’s one thing I’ve learned from working as a doctor, it is that “what the doctor ordered” is not always what the patient gets.  Sure, I’ve encountered the usual obstacles—like pharmacy “benefit” (ha!) managers whose restrictive formularies don’t cover the medications ordered by their physicians—but I’ve also been amazed by the number of patients who don’t take medications as prescribed.  In psychiatry, the reasons are numerous:  patients may take their SSRI “only when I feel depressed,” they double their dose of a benzodiazepine “because I like the way it makes me feel,” they stop taking two or three of their six medications out of sheer confusion, or they take a medication for entirely different purposes than those for which it was originally prescribed.  (If I had a nickel for every patient who takes Seroquel “to help me sleep,” I’d be a very rich man.)

In the interest of full disclosure, this is not limited to my patients.  Even in my own life, I found it hard to take my antidepressant daily (it really wasn’t doing anything for me, and I was involved in other forms of treatment and lifestyle change that made a much bigger difference).  And after a tooth infection last summer, it was a real challenge to take my penicillin three times a day.  I should know better.  Didn’t I learn about this in med school??

This phenomenon used to be called “noncompliance,” a term which has been replaced by the more agreeable term, “nonadherence.”  It’s rampant.  It is estimated to cost the US health care system hundreds of billions of dollars annually.  But how serious is it to human health?  The medical community—with the full support of Big Pharma, mind you—wants you to believe that it is very serious indeed.  In fact, as the New York Times reported last week, we now have a way to calculate a “risk score” for patients who are likely to skip their medications.  Developed by the FICO company, the “Medication Adherence Score” can predict “which patients are at highest risk for skipping or incorrectly using” their medications.

FICO?  Where have you heard of them before?  Yes, that’s right, they’re the company who developed the credit score:  that three-digit number which determines whether you’re worthy of getting a credit card, a car loan, or a home mortgage.  And now they’re using their clout and influence (er, “actuarial skills”) to tell whether you’re likely to take your meds correctly.

To be sure, some medications are important to take regularly, such as antiretrovirals for HIV, anticoagulants, antiarrhythmics, etc., because of the risk of severe consequences after missed doses.  As a doctor, I entered this profession to improve lives—and oftentimes medications are the best way for my patients to thrive.  [Ugh, I just can’t use that word anymore… Kaiser Permanente has ruined it for me.]

But let’s consider psychiatry, shall we?  Is a patient going to suffer by skipping Prozac or Neurontin for a few days?  Or giving them up altogether to see an acupuncturist instead?  That’s debatable.

Anyway, FICO describes their score as a way to identify patients who would “benefit from follow-up phone calls, letters, and emails to encourage proper use of medication.”  But you can see where this is going, can’t you?  It’s not too much of a stretch to see the score being used to set insurance premiums and to determine access (or lack thereof) to name-brand medications.  Hospitals and clinics might also use it to decide which patients to accept and which to avoid.

Independently (and coincidentally?), the National Consumers League inaugurated a program last month called “Script Your Future,” which asks patients to make “pledges” to do things in the future (like “walk my daughter down the aisle” or “always be there for my best friend”) that require—or so it is implied—adherence to their life-saving medications.  Not surprisingly, funds for the campaign come from a coalition including “health professional groups, chronic disease groups, health insurance plans, pharmaceutical companies, [and] business organizations.”  In other words: people who want you to take drugs.

The take-home message to consumers (er, patients), of course, is that your doctors, drug companies, and insurers care deeply about you and truly believe that adherence to your medication regimen is the key to experiencing the joy of seeing your children graduate from college or retiring to that villa in the Bahamas.  Smile, take our drugs, and be happy.  (And don’t ask questions!)

If a patient doesn’t want to take a drug, that’s the patient’s choice—which, ultimately, must always be respected (even if it ends up shortening that patient’s life).  At the same time, it’s the doctor’s responsibility to educate the patient, figure out the reasons for this “nonadherence,” identify the potential dangers, and help the patient find suitable alternatives.  Perhaps there’s a language barrier, a philosophical opposition to drugs, a lack of understanding of the risks and benefits, or an unspoken cultural resistance to Western allopathic medicine.  Each of these has its merits, and needs to be discussed with the patient.

Certainly, if there are no alternatives available, and a patient still insists on ignoring an appropriate and justifiable medical recommendation, we as a society have to address how to hold patients accountable, so as not to incur greater costs to society down the road (I’m reminded here of Anne Fadiman’s excellent book The Spirit Catches You And You Fall Down).  At the same time, though, we might compensate for those increased costs by not overprescribing, overtreating, overpathologizing, and then launching campaigns to make patients complicit in (and responsible for!) these decisions.

Giving patients a “score” to determine whether they’re going to take their meds is the antithesis of good medicine.  Good medicine requires discussion, interaction, understanding, and respect.  Penalizing patients for not following doctors’ orders creates an adversarial relationship that we can do without.
