Maybe Stuart Smalley Was Right All Along

July 31, 2011

To many people, the self-help movement—with its positive self-talk, daily feel-good affirmations, and emphasis on vague concepts like “gratitude” and “acceptance”—seems like cheesy psychobabble.  Take, for instance, Al Franken’s fictional early-1990s SNL character Stuart Smalley: a perennially cheerful, cardigan-clad “member of several 12-step groups but not a licensed therapist,” whose annoyingly positive attitude mocked the idea that personal suffering could be overcome with absurdly simple affirmative self-talk.

Stuart Smalley was clearly a caricature of the 12-step movement (in fact, many of his “catchphrases” came directly from 12-step principles), but there’s little doubt that the strategies he espoused have worked for many patients in their efforts to overcome alcoholism, drug addiction, and other types of mental illness.

Twenty years later, we now realize Stuart may have been onto something.

A review by Kristin Layous and her colleagues, published in this month’s Journal of Alternative and Complementary Medicine, shows evidence that daily affirmations and other “positive activity interventions” (PAIs) may have a place in the treatment of depression.  They summarize recent studies examining such interventions, including two randomized controlled studies in patients with mild clinical depression, which show that PAIs do, in fact, have a significant (and rapid) effect on reducing depressive symptoms.

What exactly is a PAI?  The authors offer some examples:  “writing letters of gratitude, counting one’s blessings, practicing optimism, performing acts of kindness, meditation on positive feelings toward others, and using one’s signature strengths.”  They argue that when a depressed person engages in any of these activities, he or she not only overcomes depressed feelings (if only transiently) but can also use them to “move past the point of simply ‘not feeling depressed’ to the point of flourishing.”

Layous and her colleagues even summarize results of clinical trials of self-administered PAIs.  They report that PAIs had effect sizes of 0.31 for depressive symptoms in a community sample, and 0.24 and 0.23 in two studies specifically with depressed patients.  By comparison, psychotherapy has an average effect size of approximately 0.32, and psychotropic medications (although there is some controversy) have roughly the same effect.

[BTW, an “effect size” is a standardized measure of the magnitude of an observed effect.  An effect size of 0.00 means the intervention has no impact at all; an effect size of 1.00 means the intervention causes an average change (measured across the whole group) equivalent to one standard deviation of the baseline measurement in that group.  An effect size of 0.5 means the average change is half the standard deviation, and so forth.  By Cohen’s widely used conventions, an effect size of 0.2 is considered “small,” 0.5 “medium,” and 0.8 “large.”  For more information, see this excellent summary.]
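For concreteness, the arithmetic can be sketched in a few lines of Python.  This uses one common variant, the standardized mean change (the figures Layous et al. report come from between-group comparisons, but the logic is the same), and the symptom scores below are invented purely for illustration:

```python
from statistics import mean, stdev

def effect_size(baseline, followup):
    """Standardized mean change: average symptom reduction divided by the
    standard deviation of the baseline scores."""
    changes = [b - f for b, f in zip(baseline, followup)]
    return mean(changes) / stdev(baseline)

# Invented depression-scale scores for five patients (higher = more depressed):
before = [20, 22, 18, 24, 16]
after_tx = [18, 21, 17, 22, 15]
d = effect_size(before, after_tx)  # mean change 1.4, baseline SD ~3.16
print(round(d, 2))                 # prints 0.44
```

An average improvement of 1.4 points against a baseline spread of about 3.2 points yields an effect size of roughly 0.44, in the same ballpark as the PAI and psychotherapy figures quoted above.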

So if PAIs work about as well as medications or psychotherapy, then why don’t we use them more often in our depressed patients?  Well, there are a number of reasons.  First of all, until recently, no one had taken such an approach very seriously.  Despite its enormous common-sense appeal, “positive psychology” has been a field of legitimate scientific study for only the last ten years or so (one of its major proponents, Sonja Lyubomirsky, is a co-author on this review) and therefore has not received the sort of scientific scrutiny demanded by “evidence-based” medicine.

A related explanation may be that people just don’t think that “positive thinking” can cure what they feel must be a disease.  As Albert Einstein reputedly said, “You cannot solve a problem from the same consciousness that created it.”  The implication is that one must seek outside help—a drug, a therapist, some expert—to treat one’s illness.  But the reality is that for most cases of depression, “positive thinking” is outside help.  It’s something that—almost by definition—depressed people don’t do.  If they were to try it, they might reap great benefits, while simultaneously changing the neural pathways responsible for the depression in the first place.

Which brings me to the final two reasons why “positive thinking” isn’t part of our treatment repertoire.  For one thing, there’s little financial incentive (to people like me) to do it.  If my patients can overcome their depression by “counting their blessings” for 30 minutes each day, or acting kindly towards strangers ten times a week, then they’ll be less likely to pay me for psychotherapy or for a refill of their antidepressant prescription.  Thus, psychiatrists and psychologists have a vested interest in patients believing that their expert skills and knowledge (of esoteric neural pathways) are vital for a full recovery, when, in fact, they may not be.

Finally, the “positive thinking” concept may itself become too “medicalized,” which may ruin an otherwise very good idea.  The Layous article, for example, tries to give a neuroanatomical explanation for why PAIs are effective.  They write that PAIs “might be linked to downregulation of the hyperactivated amygdala response” or might cause “activation in the left frontal region” and lower activity in the right frontal region.  Okay, these explanations might be true, but the real question is: does it matter?  Is it necessary to identify a mechanism for everything, even interventions that are (a) non-invasive, (b) cheap, (c) easy, (d) safe, and (e) effective?   In our great desire to identify neural mechanisms or “pathways” of PAIs, we might end up finding nothing;  it would be a shame if this result (or, more accurately, the lack thereof) leads us to the conclusion that it’s all “pseudoscience,” hocus-pocus, psychobabble stuff, and not worthy of our time or resources.

At any rate, it’s great to see that alternative methods of treating depression are receiving some attention.  I just hope that their “alternative-ness” doesn’t earn immediate rejection by the medical community.  On the contrary, we need to identify those for whom such approaches are beneficial; engaging in “positive activities” to treat depression is an obvious idea whose time has come.

Addiction Medicine: A New Specialty Or More Of The Same?

July 14, 2011

In an attempt to address a significant—and unmet—need in contemporary health care, the American Board of Addiction Medicine (ABAM) has accredited ten new residency programs in “addiction medicine.”  Details can be found in this article in the July 10 New York Times.  This new initiative will permit young doctors who have completed medical school and an initial internship year to spend an additional year learning about the management of addictive disease.

To be sure, there’s a definite need for trained addiction specialists.  Nora Volkow, director of the National Institute on Drug Abuse (NIDA), says that the lack of knowledge about substance abuse among physicians is “a very serious problem,” and I have certainly found this to be true.  Addictions to drugs and alcohol are devastating (and often life-threatening) conditions that many doctors are ill-prepared to understand—much less treat—and such disorders frequently complicate the management of many medical and psychiatric conditions.

Having worked in the addiction field, however (and having had my own personal experiences in the recovery process), I’m concerned about the precedent that these programs might set for future generations of physicians treating addictive illness.

As much as I respect addiction scientists and agree that the neurochemical basis of addiction deserves greater study, I disagree (in part) with the countless experts who have pronounced for the last 10-20 years that addiction is “a brain disease.”  In my opinion, addiction is a brain disease in the same way that “love” is a rush of dopamine or “anxiety” is a limbic system abnormality.  In other words: yes, addiction clearly does involve the brain, but overcoming one’s addiction (which means different things to different people) is a process that transcends simply taking a pill, correcting one’s biochemistry, or fixing a mutant gene.  In some cases it requires hard work and immense willpower; in other cases, a grim recognition of one’s circumstances (“hitting bottom”) and a desire to change; and in still other cases, a “spiritual awakening.”  None of these can be prescribed by a doctor.

In fact, the best argument against the idea of addiction as a biological illness is simple experience.  Each of us has heard of the alcoholic who got sober by going to meetings; or the heroin addict who successfully quit “cold turkey”; or the hard-core cocaine user who stopped after a serious financial setback or the threat of losing his job, marriage, or both.  In fact, these stories are actually quite common.  By comparison, no one overcomes diabetes after experiencing “one too many episodes of ketoacidosis,” and no one resolves their hypertension by establishing a relationship with a Higher Power.

That’s not to say that pharmacological remedies have no place in the treatment of addiction.  Methadone and buprenorphine (Suboxone) are legal, prescription substitutes for heroin and other opioids, and they have allowed addicts to live respectable, “functional” lives.  Drugs like naltrexone or Topamax might curb craving for alcohol in at least some alcoholic patients (of course, when you’re talking about the difference between 18 beers/day and 13 beers/day, you might correctly ask, “what’s the point?”), and other pharmaceuticals might do the same for such nasty things as cocaine, nicotine, gambling, or sugar & flour.

But we in medicine tend to overemphasize the pharmacological solution.  My own specialty of psychiatry is the best example:  we have taken extremely rich, complicated, and variable human experiences and phenotypes and distilled them into a bland clinical lexicon replete with “symptoms” and “disorders,” and we prescribe drugs that supposedly treat those disorders—on the basis of studies that rarely resemble the real world—while frequently ignoring the very real personal struggles that each patient endures.  (Okay, time to get off my soapbox.)

A medical specialty focusing on addictions is a fantastic idea and holds tremendous promise for those who suffer from these absolutely catastrophic conditions.  But ONLY if it transcends the “medical” mindset and instead sees these conditions as complex psychological, spiritual, motivational, social, (mal)adaptive, life-defining—and, yes, biochemical—phenomena that deserve comprehensive and multifaceted care.  As with much in psychiatry, there will be some patients whose symptoms or “brain lesions” are well defined and who respond well to a simple medication approach (a la the “medical model”), but the majority of patients will have vastly more complicated reasons for using, and an equally vast number of potential solutions they can pursue.

Whether this can be taught in a one-year Addiction Medicine residency remains to be seen.  Some physicians, for example, call themselves “addiction specialists” simply by completing an 8-hour-long online training course to prescribe Suboxone to heroin and Oxycontin abusers.  (By the way, Reckitt Benckiser, the manufacturer of Suboxone, is not primarily a drug company; it is better known for its other major products:  Lysol, Mop & Glo, Sani Flush, French’s mustard, and Durex condoms.)  Hopefully, an Addiction Medicine residency will be more than a year-long infomercial for the latest substitution and “anti-craving” agents from multi-national conglomerates.

Nevertheless, the idea that new generations of young doctors will be trained specifically in the diagnosis and management of addictive disorders is a very welcome one indeed.  The physicians who choose this specialty will probably do so for a very particular reason, perhaps—even though this is by no means essential—due to their own personal experience or the experience of a loved one.  I simply hope that their teachers remind them that addiction is incredibly complicated, no two patients become “addicted” for the same reasons, and successful treatment often relies upon ignoring the obvious and digging more deeply into one’s needs, worries, concerns, anxieties, and much, much more.  This has certainly been my experience in psychiatry, and I’d hate to think that TWO medical specialties might be corrupted by an aggressive focus on a medication-centric, “one-size-fits-all” approach to the complexity of human nature.

I Just Don’t Know What (Or Whom) To Believe Anymore

July 2, 2011

de-lu-sion [dih-loo-zhuhn] Noun.  1. An idiosyncratic belief or impression that is firmly maintained despite being contradicted by what is generally accepted as reality, typically a symptom of mental disorder.

The announcement this week of disciplinary action against three Harvard Medical School psychiatrists (which you can read about here and here and here and here) for violating that institution’s conflict-of-interest policy comes at a pivotal time for psychiatry.  Or at least for my own perceptions of it.

As readers of this blog know, I can be cynical, critical, and skeptical about the medicine I practice on a daily basis.  This arises from two biases that have defined my approach to medicine from Day One:  (1) a respect for the patient’s point of view (which, in many ways, arose out of my own personal experiences), and (2) a need to see and understand the evidence (probably a consequence of my years of graduate work in basic molecular neuroscience before becoming a psychiatrist).

Surprisingly, I have found these attributes to be in short supply among many psychiatrists—even among the people we consider to be our leaders in the field.  And Harvard’s action against Biederman, Spencer, and Wilens might unfortunately just be the tip of the iceberg.

I entered medical school in the late 1990s.  I recall one of my preclinical lectures at Cornell, in which the chairman of our psychiatry department, Jack Barchas, spoke with breathless enthusiasm about the future of psychiatry.  He expounded passionately about how the coming era would bring deeper knowledge of the biological mechanisms of mental illness and new, safer, more effective medications that would vastly improve our patients’ lives.

My other teachers and mentors were just as optimistic.  The literature at the time was filled with studies of new pharmaceuticals (the atypical antipsychotics, primarily), molecular and neuroimaging discoveries, and novel research into genetic markers of illness.  As a student, it was hard not to be caught up in the excitement of the coming revolution in biological psychiatry.

But I now wonder whether we may have been deluding ourselves.  I have no reason to think that Dr Barchas was lying to us in that lecture at Cornell, but those who did the research about which he pontificated may not have been giving us the whole story.  In fact, we’re now learning that those “revolutionary” new drugs were not quite as revolutionary as they appeared.  Drug companies routinely hid negative results and designed their studies to make the new drugs appear more effective.  They glossed over data about side effects, and frequently drug companies would ghostwrite books and articles that appeared to come from their (supposedly unbiased) academic colleagues.

This went on for a long time.  And for all those years, these same academics taught the current generation of psychiatrists like me, and lectured widely (for pay, of course) to psychiatrists in the community.

In my residency years in the mid-2000s, for instance, each one of my faculty members (with only one exception that I’m aware of) spoke for drug companies or was being paid to do research on drugs that we were actively prescribing in the clinic and on the wards.  (I didn’t know this at the time, of course; I learned this afterward.)  And this was undoubtedly the case in other top-tier academic centers throughout the country, having a trickle-down effect on the practice of psychiatry worldwide.

Now, there’s nothing wrong with academics doing research or being paid to do it.  For me, the problem is that those two “pillars” by which I practice medicine (i.e., respect for the patient’s well-being, and a desire for hard evidence) were not the priorities of much of this clinical research.  Patients weren’t always getting better with these new drugs (certainly not in the long run), and the data were finessed and refined in ways that embellished the main message.  This was, of course, exacerbated by the big paychecks many of my academic mentors received.  Money has a remarkable way of influencing what people say and how (and how often) they say it.

But how is a student—or a practicing doc in the community who is several decades out of medical school—supposed to know this?  In my opinion, those who teach medical students and psychiatry residents probably should not be on a pharma payroll or give promotional talks for drugs.  These “academic leaders” are supposed to be fair, neutral, thoughtful authorities who base their recommendations on patient-outcomes data and nothing else.  Isn’t that why we have academic medical centers in the first place?  (Hey, at least we know that drug reps are paid handsome salaries & bonuses by drug companies… But don’t we expect university professors to be different?)

Just as a series of little white lies can snowball into an enormous unintended deception, I’m afraid that the last 10-20 years of cumulative tainted messages (sometimes deliberate, sometimes not) about the “promises” of psychiatry have created a widespread shared delusion about what we can offer our patients.  And if that’s too much of an exaggeration, then we might at least agree that our field now suffers a crisis of confidence in our leaders.  As Daniel Carlat commented in a story about the Harvard action: “When I get on the phone now and talk to a colleague about a study… [I ask] ‘was this industry funded, and can we trust the study?'”

It may be too late to avoid irreparable damage to this field or our confidence in it.  But at least some of this is coming to light.  If nothing else, we’re taking a cue from our area of clinical expertise, and challenging the delusional thought processes that have driven our actions for many, many years.

Big Brother in Your Medicine Cabinet

June 29, 2011

If there’s one thing I’ve learned from working as a doctor, it is that “what the doctor ordered” is not always what the patient gets.  Sure, I’ve encountered the usual obstacles—like pharmacy “benefit” (ha!) managers whose restrictive formularies don’t cover the medications ordered by their physicians—but I’ve also been amazed by the number of patients who don’t take medications as prescribed.  In psychiatry, the reasons are numerous:  patients may take their SSRI “only when I feel depressed,” they double their dose of a benzodiazepine “because I like the way it makes me feel,” they stop taking two or three of their six medications out of sheer confusion, or they take a medication for entirely different purposes than those for which it was originally prescribed.  (If I had a nickel for every patient who takes Seroquel “to help me sleep,” I’d be a very rich man.)

In the interest of full disclosure, this is not limited to my patients.  Even in my own life, I found it hard to take my antidepressant daily (it really wasn’t doing anything for me, and I was involved in other forms of treatment and lifestyle change that made a much bigger difference).  And after a tooth infection last summer, it was a real challenge to take my penicillin three times a day.  I should know better.  Didn’t I learn about this in med school??

This phenomenon used to be called “noncompliance,” a term which has been replaced by the more agreeable term, “nonadherence.”  It’s rampant.  It is estimated to cost the US health care system hundreds of billions of dollars annually.  But how serious is it to human health?  The medical community—with the full support of Big Pharma, mind you—wants you to believe that it is very serious indeed.  In fact, as the New York Times reported last week, we now have a way to calculate a “risk score” for patients who are likely to skip their medications.  Developed by the FICO company, the “Medication Adherence Score” can predict “which patients are at highest risk for skipping or incorrectly using” their medications.
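FICO has not published the internals of its Medication Adherence Score, so the specifics here are pure assumption; but predictive “risk scores” of this kind generally share one shape: a weighted sum of predictor variables, passed through a logistic function, and rescaled onto an arbitrary range.  The predictor names and weights below are invented solely to illustrate that shape:

```python
import math

# Hypothetical sketch only: FICO's actual model is proprietary.  These
# predictors and weights are invented to show the general form, not the real one.
WEIGHTS = {
    "lives_alone": -0.8,      # assumed: living alone predicts worse adherence
    "num_daily_doses": -0.3,  # assumed: complex regimens predict worse adherence
    "years_on_med": 0.2,      # assumed: long-term users are more adherent
}

def adherence_score(patient):
    """Map patient factors to a 0-500 score (higher = more likely to adhere)."""
    z = sum(WEIGHTS[k] * patient.get(k, 0) for k in WEIGHTS)
    p = 1 / (1 + math.exp(-z))   # logistic: estimated probability of adherence
    return round(500 * p)        # rescale onto an arbitrary 0-500 range

score = adherence_score({"lives_alone": 1, "num_daily_doses": 3, "years_on_med": 2})
```

Note that with no information at all the model defaults to the midpoint of the scale, which is exactly why such scores invite over-interpretation: a number comes out whether or not anything meaningful went in.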

FICO?  Where have you heard of them before?  Yes, that’s right: they’re the company that developed the credit score—that three-digit number which determines whether you’re worthy of a credit card, a car loan, or a home mortgage.  And now they’re applying their actuarial skills to predict whether you’re likely to take your meds correctly.

To be sure, some medications are important to take regularly, such as antiretrovirals for HIV, anticoagulants, antiarrhythmics, etc, because of the risk of severe consequences after missed doses.  As a doctor, I entered this profession to improve lives—and oftentimes medications are the best way for my patients to thrive.  [Ugh, I just can’t use that word anymore… Kaiser Permanente has ruined it for me.]

But let’s consider psychiatry, shall we?  Is a patient going to suffer by skipping Prozac or Neurontin for a few days?  Or giving them up altogether to see an acupuncturist instead?  That’s debatable.

Anyway, FICO describes their score as a way to identify patients who would “benefit from follow-up phone calls, letters, and emails to encourage proper use of medication.”  But you can see where this is going, can’t you?  It’s not much of a stretch to imagine the score being used to set insurance premiums or to restrict access to name-brand medications.  Hospitals and clinics might also use it to decide which patients to accept and which to avoid.

Independently (and coincidentally?), the National Consumers League inaugurated a program last month called “Script Your Future,” which asks patients to make “pledges” to do things in the future (like “walk my daughter down the aisle” or “always be there for my best friend”) that require—or so it is implied—adherence to their life-saving medications.  Not surprisingly, funds for the campaign come from a coalition including “health professional groups, chronic disease groups, health insurance plans, pharmaceutical companies, [and] business organizations.”  In other words: people who want you to take drugs.

The take-home message to patients, of course, is that your doctors, drug companies, and insurers care deeply about you and truly believe that adherence to your medication regimen is the key to experiencing the joy of seeing your children graduate from college or retiring to that villa in the Bahamas.  Smile, take our drugs, and be happy.  (And don’t ask questions!)

If a patient doesn’t want to take a drug, that’s the patient’s choice—which, ultimately, must always be respected (even if it ends up shortening that patient’s life).  At the same time, it’s the doctor’s responsibility to educate the patient, figure out the reasons for the “nonadherence,” identify the potential dangers, and help the patient find suitable alternatives.  Perhaps there’s a language barrier, a philosophical opposition to drugs, a lack of understanding of the risks and benefits, or an unspoken cultural resistance to Western allopathic medicine.  Each of these reasons deserves to be explored with the patient.

Certainly, if there are no alternatives available, and a patient still insists on ignoring an appropriate and justifiable medical recommendation, we as a society have to address how to hold patients accountable, so as not to incur greater costs to society down the road (I’m reminded here of Anne Fadiman’s excellent book The Spirit Catches You And You Fall Down).  At the same time, though, we might compensate for those increased costs by not overprescribing, overtreating, overpathologizing, and then launching campaigns to make patients complicit in (and responsible for!) these decisions.

Giving patients a “score” to determine whether they’re going to take their meds is the antithesis of good medicine.  Good medicine requires discussion, interaction, understanding, and respect.  Penalizing patients for not following doctors’ orders creates an adversarial relationship that we can do without.

Psychopharm R&D Cutbacks: Crisis or Opportunity?

June 19, 2011

The scientific journal Nature ran an editorial this week with a rather ominous headline: “Psychopharmacology in Crisis.”  What exactly is this “crisis” they speak of?  Is it the fact that our current psychiatric drugs are only marginally effective for many patients?  Is it the fact that they can often cause side effects that some patients complain are worse than the original disease?  No, the “crisis” is that the future of psychopharmacology is in jeopardy, as pharmaceutical companies, university labs, and government funding agencies devote fewer resources to research and development in psychopharmacology.  Whether this represents a true crisis, however, is entirely in the eye of the beholder.

In 2010, the pharmaceutical powerhouses Glaxo SmithKline (GSK) and AstraZeneca closed down R&D units for a variety of CNS disorders, a story that received much attention.  They suspended their research programs because of the high cost of bringing psychiatric drugs to market, the potential for lawsuits related to adverse events, and the heavy regulatory burdens faced by drug companies in the US and Europe.  In response, organizations like the European College of Neuropsychopharmacology (ECNP) and the Institute of Medicine in the US have convened summits to determine how to address this problem.

The “problem,” of course, for pharmaceutical companies is the potential absence of a predictable revenue stream.  Over the last several years, big pharma has found it more profitable not to develop novel drugs but to find new niches for existing agents—a decision driven by MBAs instead of MDs and PhDs.  As Steve Hyman, former NIMH director, told Science magazine last June, “It’s hardly a rich pipeline.  It suggests a sad dearth of ideas and … lots of attempts at patent extensions and new indications for old drugs.”

Indeed, when I look back at the drug approvals of the last three or four years, there really hasn’t been much to get excited about:  antidepressants (Lexapro, Pristiq, Cymbalta) that are similar in mechanism to other drugs we’ve been using for years; new antipsychotics (Saphris, Fanapt, Latuda) that are essentially me-too drugs which don’t dramatically improve upon older treatments; existing drugs (Abilify, Seroquel XR) that have received new indications for “add-on” treatment; existing drugs (Silenor, Nuedexta, Kapvay) that have been tweaked and reformulated for new indications; and new drugs (Invega, Oleptro, Invega Sustenna) whose major attraction is a fancy, novel delivery system.

Testing and approval of the above compounds undoubtedly cost billions of dollars (investments which, by the way, are being recovered in the form of higher health care costs to you and me), but the benefit to patients as a whole has been only marginal.

The true crisis, in my mind, is that with each new drug we psychiatrists are led to believe we’re witnessing the birth of a blockbuster.  Patients expect the same, especially given the glut of persuasive direct-to-consumer advertising (“Ask your doctor if Pristiq is right for you!”).  Most third-party payers, too, are more willing to pay for medication treatment than for anything else (although—thankfully—coverage of newer agents often requires the doctor to justify his or her decision), even though many of these drugs turn out to be duds.

In the meantime, this focus on drugs neglects the person behind the illness.  Not every person who walks into my office with a complaint of “depression” is a candidate for Viibryd or Seroquel XR.  Or even a candidate for antidepressants at all.  But the overwhelming bias is that another drug trial might work.  “Who knows—maybe the next drug is the ‘right’ one for this patient!”

Recently, I’ve joked with colleagues that I’d like to see a moratorium on psychiatric drug development.  Not because our current medications meet all of our needs, or because we could get by without further research.  Not at all.  But if we had, say, five years with NO new drugs, we might be able to catch our collective breath, figure out exactly what we’re treating (maybe even have a more fruitful and unbiased discussion about what to put in the new DSM-5), and, perhaps, devote resources to nonpharmacological treatments, without getting caught up in an ongoing psychopharmacology arms race that, for many patients, focuses our attention where it doesn’t belong.

So it looks like my wish might come true.  Maybe we can use the upcoming slowdown to determine where the real needs are in psychiatry.  If we devote resources to community mental health services and to drug and alcohol treatment, pay more attention to our patients’ personality traits, lifestyle issues, and co-occurring medical illnesses, and respond to their goals for treatment rather than AstraZeneca’s or Pfizer’s, we can improve the care we provide and figure out where new drugs might truly pay off.  Along the way, we can follow the guidelines discussed in a recent report in the Archives of Internal Medicine and practice “conservative prescribing”—i.e., making sensible decisions about what we prescribe and why.

Sometimes, it is true that less is more.  When Big Pharma backs out of drug development, it’s not necessarily a bad thing.  In fact, it may be precisely what the doctor ordered.

Lexapro, Hot Flashes, and Doing What Works

June 15, 2011

One of the most common—and distressing—symptoms of menopause is the “hot flash.”  As many as 85% of perimenopausal women complain of hot flashes, characterized by a sensation of intense heat, a flushed appearance, perspiration, and pressure in the head.  An effective remedy for hot flashes over the years has been hormone replacement therapy, but many women shun this treatment because of the increased risk of breast cancer, heart disease, and stroke.  In its place, antidepressants like SSRIs and SNRIs have become more commonly prescribed for hot flashes.  Many women report great improvement in symptoms, both anecdotally and in some small open-label trials, with antidepressant therapy.

But do antidepressants actually do anything at all?

Jim Edwards covers this story in a post today on bnet’s “Placebo Effect” blog. Edwards describes a study published in the Journal of the American Medical Association (JAMA) in January 2011 (PDF here).  The study showed the clear benefit of Lexapro (an SSRI made by Forest Labs) relative to placebo in a randomized clinical trial of more than 200 menopausal women with hot flashes.  However, Edwards also reports that a brand new study (which he calls “elegant”) published in the journal Menopause found NO effect of Lexapro.  This second study measured hot flashes not by patient report, but instead by a “battery-powered hot flash detector” worn by women participating in the research.

Does Edwards conclude that the first study was bogus?  Well, not quite.  Edwards argues that the integrity of the JAMA study was dubious from the start because its lead author, Ellen Freeman, received money (honoraria and research support) from Forest Labs, while the paper in Menopause was not tainted by drug company money.  (Note: he neglected to point out that the author of the second study, Robert Freedman, holds a patent, US # 60,741,376, on the “hot flash detector” used in his study.  Yeah, that’s “elegant.”)

Now, I understand that pharmaceutical company funding has a potential to bias research (sometimes a great deal), even when the researchers swear by their objectivity.  But in this case, Edwards’ axe-grinding seems to have obscured some more relevant arguments.  In his zeal to criticize Freeman for her nefarious Forest ties, he ignores the fact that patients often do report a benefit of Lexapro.  A more relevant (and convincing) argument might have been: What makes Lexapro that much better than a generic SSRI—which would be significantly cheaper—in the treatment of hot flashes?  But no, that question was overlooked.

It’s also important to consider the methods used in the Menopause study.  Freedman and his colleagues used “objective” measures of hot flashes (using a device patented by the author, remember) instead of patients’ self-report.  What did these ambulatory monitors measure?  “Humidity on the chest”—that’s it.  (Hmmm… maybe the Exmovere Corporation could build an “Exmobaby garment” for menopausal women??)  Lexapro had no significant effect on this objective measurement.

But the problem is, hot flashes are subjective experiences, just like depressed mood, fatigue, pain, gastrointestinal upset, and many other symptoms we treat in medicine.  There’s probably a physiological explanation, but we don’t know what it is.  I’m sorry, but it seems presumptuous (if not downright arrogant) to say that a biometric device is an “accurate” detector of hot flashes, regardless of what the woman reports.  It’s like saying that a person is depressed because his ethanolamine phosphate level is high, or that another has OCD because she has a thicker right superior parietal gyrus on an MRI scan.

Anyway, back to Edwards’ blog post:  His opening sentence, dripping with obvious sarcasm, is “Never mind the evidence; just treat patients’ complaints.”  He then proceeds to completely downplay (if not ridicule) the fact that women frequently report a benefit of Lexapro and other SSRIs.

I wonder whether Edwards has paid any attention to what we’ve been doing in psychiatry for the last several decades.  Trust me, I would love to understand the biological basis of my patients’ symptoms—whether depression, psychosis, anxiety, or hot flashes—in order to develop more “targeted” medical treatment.  But the evidence is just not there (yet?).  In the meantime, we have to use what we’ve got.  If a woman reports improvement on Lexapro without any side effects (in other words, if the benefit exceeds the risk), I’ll prescribe it.

Let me be clear.  I’m not defending Lexapro:  if there’s a cheaper generic alternative available we should use it.  Similarly, I’m not defending Ellen Freeman: pharmaceutical funding should be fully disclosed and, moreover, it does skew what gets published (or not).  And I’m not criticizing Dr Freedman’s Hot Flash Detector (why does that sound like something out of a 1920s Sears catalog?): objective measures of subjective complaints help us to understand complicated pathophysiology more clearly.

But if patients benefit from a treatment (and aren’t harmed by it), we owe it to them to provide it.  Arguments like “the research is biased,” “it’s not scientific enough,” or “doctors don’t know how it works anyway” are valid, and should not be ignored, but should also not keep us from prescribing treatments that alleviate our patients’ suffering.

Abilify for Bipolar Maintenance: More Hard Questions

May 31, 2011

Much attention has been drawn to a recent PLoS Medicine article criticizing the evidence base for the use of Abilify as maintenance treatment for bipolar disorder.  The major points emphasized by most critics are, first, that the FDA approved Abilify for this purpose in 2005 on the basis of flawed and scanty evidence and, second, that the literature since that time has failed to point out the deficiencies in the original study.

While the above may be true, I believe these criticisms miss a more important point.  Instead of lambasting the FDA or lamenting the poor quality of clinical research, we psychiatrists need to use this as an opportunity to take a closer look at what we treat, why we treat, and how we treat.

Before elaborating, let me summarize the main points of the PLoS article.  The authors point out that FDA approval of Abilify was based on only one “maintenance” trial by Keck et al published in 2007.  The trial included only 161 patients (only 7 of whom, or 1.3% of the total 567 who started the study, were followed throughout 26 weeks of stabilization and 74 follow-up weeks of maintenance).  It also consisted of patients who had already been stabilized on Abilify; thus, it was “enriched” for patients who had already shown a good response to this drug.  Furthermore, the “placebo failures” consisted of patients who were abruptly withdrawn from Abilify and placed on placebo; their relapses might thus be attributed to the researchers’ “randomized discontinuation” design rather than the failure of placebo.  (For more commentary, including follow-up from Bristol-Myers Squibb, Abilify’s manufacturer, please see this excellent post on Pharmalot.)
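To see why an enriched, randomized-discontinuation design can flatter a drug, consider a quick simulation.  This is a hypothetical sketch with made-up numbers (not the Keck et al. data): the simulated drug has no prophylactic effect at all, and abrupt withdrawal adds its own relapse risk, yet the drug still appears to beat placebo.

```python
import random

random.seed(1)

# Hypothetical numbers, for illustration only (not the Keck et al. data).
N = 10_000                 # patients entering open-label stabilization
P_STABILIZE = 0.30         # fraction who stabilize on drug and get randomized
P_RELAPSE_BASE = 0.30      # background relapse risk during maintenance
P_WITHDRAWAL_EXTRA = 0.25  # extra relapse risk from abrupt discontinuation

drug_n = placebo_n = drug_relapses = placebo_relapses = 0
for _ in range(N):
    if random.random() >= P_STABILIZE:
        continue               # enrichment: non-responders never get randomized
    if random.random() < 0.5:  # randomized to stay on drug
        drug_n += 1
        drug_relapses += random.random() < P_RELAPSE_BASE
    else:                      # randomized to an abrupt switch to placebo
        placebo_n += 1
        # relapse risk = background + withdrawal/rebound, not loss of drug benefit
        placebo_relapses += random.random() < P_RELAPSE_BASE + P_WITHDRAWAL_EXTRA

drug_rate = drug_relapses / drug_n
placebo_rate = placebo_relapses / placebo_n
print(f"relapse on drug:    {drug_rate:.2f}")
print(f"relapse on placebo: {placebo_rate:.2f}")
```

In this toy model the entire drug-placebo gap is an artifact of the design: the model’s drug prevents nothing, and the “placebo failures” are manufactured by the discontinuation itself.  That is precisely the concern the PLoS authors raise.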

These are all valid arguments.  But as I read the PLoS paper and the ongoing discussion ever since, I can’t help but think, so what??  First of all, most psychiatrists probably don’t know about the PLoS paper.  And even if they did, the major questions for me would be:  would the criticism of the Keck et al. study change the way psychiatrists practice?  Should it?

Let’s think about psychiatric illness for a moment.  Most disorders are characterized by an initial, abrupt onset or “episode.”  These acute episodes are usually treated with medications (plus or minus psychotherapy or other psychosocial interventions), often resulting in rapid symptomatic improvement—or, at the very least, stabilization of those symptoms.

One big, unanswered (and, unfortunately, under-asked) question in psychiatry is, then what?  Once a person is stabilized (which in some cases means nothing more than “he’s no longer a danger to himself or others”), what do we do?  We don’t know how long to treat patients, and there are no guidelines for when to discontinue medications.  Instead we hear the common refrain:  depression, schizophrenia, and bipolar disorder are lifelong illnesses—”just like hypertension or diabetes”—and should be treated as such.

But is that true?  At the risk of sounding like a heretic (and, indeed, I’d be laughed out of residency if I had ever asked this question), are there some cases of bipolar disorder—or schizophrenia, or depression, for that matter—which only require brief periods of psychopharmacological treatment, or none at all?

The conventional wisdom is that, once a person is stabilized, we should just continue treatment.  And why not?  What doctor is going to take his patient off Abilify—or any other mood stabilizer or antipsychotic which has been effective in the acute phase—and risk a repeat mood episode?  None.  And if he does, would he attribute the relapse to the disease, or to withdrawal of the drug?  Probably to the disease.

For another example of what I’m talking about, consider Depakote.  Depakote has been used for decades and is regarded as a “prototypical” mood stabilizer.  Indeed, some of my patients have taken Depakote for years and have remained stable, highly functional, and without evidence of mood episodes.  But Depakote was never approved for the maintenance treatment of bipolar disorder (for a brilliant review of this, which raises some of the same issues as the current Abilify brouhaha, read this article by The Last Psychiatrist).  In fact, the one placebo-controlled study of Depakote for maintenance treatment of bipolar disorder showed that it’s no better than placebo.  So why do doctors use it?  Because it works (in the acute phase).  Why do patients take it?  Again, because it works—oh, and their doctors tell them to continue taking it.  As the old saying goes, “if it ain’t broke, don’t fix it.”

However, what if it is broke[n]?  Some patients indeed fail Depakote monotherapy and require additional “adjunctive” medication (which, BTW, has provided another lucrative market for the atypical antipsychotics).  In such cases, most psychiatrists conclude that the patient’s disease is worsening and they add the second agent.  Might it be, however, that after the patient’s initial “response” to Depakote, the medication wasn’t doing anything at all?

To be sure, the Abilify study might have been more convincing if it had been larger, had followed patients for a longer time, and had included a dedicated placebo arm of patients who had not been on Abilify in the initial stage.  But I maintain that, regardless of the outcome of such an “improved” trial, most doctors would still use Abilify for maintenance treatment anyway, and convince themselves that it works—even if the medication is doing absolutely nothing to the underlying biology of the disease.

The bottom line is that it’s easy to criticize the FDA for approving a drug on the basis of a single, flawed study.  It’s also easy to criticize a pharmaceutical company for cutting corners and providing “flawed” data for FDA review.  But when it comes down to it, the real criticism should be directed at a field of medicine which endorses the “biological” treatment of a disorder (or group of disorders) whose biochemical basis and natural history are not fully understood, which creates post hoc explanations of its successes and failures based on that lack of understanding, and which is unwilling to look itself in the mirror and ask if it can do better.

The Balance of Information

May 19, 2011

How do doctors learn about the drugs they prescribe?  It’s an important question, but one without a straightforward answer.  For doctors like me—who have been in practice for more than a few years—the information we learned in medical school may have already been replaced by something new.  We find ourselves prescribing drugs we’ve never heard of before.  How do we know whether they work?  And whom do we trust to give us this information?

I started to think about this question as I wrote my recent post on Nuedexta, a new drug for the treatment of pseudobulbar affect.  I knew nothing about the drug, so I had to do some research.  One of my internet searches led me to an active discussion on a site called the Student Doctor Network (SDN).  SDN is a website for medical students, residents, and other medical professionals, and it features objective discussions of interesting cases, new medications, and career issues.  There, I found a thread devoted to Nuedexta; it contained several posts by someone calling himself “Doogie Howser”—and he seemed to have a lot of detailed information about this brand-new drug.

Further internet sleuthing led me to a message board on Yahoo Finance for Avanir Pharmaceuticals, the company which makes Nuedexta.  In one of the threads on this board, it was suggested that the “Doogie Howser” posts were actually written by someone I’ll call “TS.”  Judging by the other posts by this person, “TS” clearly owns stock in Avanir.  While “TS” never admitted to writing the SDN posts, there was much gloating that someone had been able to post pro-Nuedexta information on a healthcare website in a manner that sounded authoritative.

Within 24 hours of my posting the article, someone linked to it on the same Yahoo Finance board, and I received several hundred “hits” directly from that link.  Simultaneously (and ever since), I’ve received numerous comments on the article, some of which include detailed information about Nuedexta, reminiscent of the posts written by “Doogie Howser.”  Others appear to be written by “satisfied patients” taking this drug.  But I’m skeptical.  I don’t know whether these were actual patients or Avanir investors (or employees); the IP address of one of the pro-Nuedexta commenters was registered to a public-relations firm in Arizona.  Nevertheless, I have kept the majority of the comments on the blog, except those that contained personal attacks (and yes, I received those, too).

The interesting thing is, nothing “TS”/”Doogie Howser” said about Nuedexta was factually incorrect.  And most of the comments I received were not “wrong” either (although they were opinionated and one-sided).  But that’s precisely what concerns me.  The information was convincing, even though—if my hunch is correct—the comments were written for the sake of establishing market share, not for the sake of improving patient care.

The more worrisome issue is this: access to information seems to be lopsided.  Industry analysts (and even everyday investors) can have an extremely sophisticated understanding of new drugs on the market—more sophisticated, at times, than that of many physicians.  And they can use this sophistication to their advantage.  Some financial websites and investor publications read like medical journals.  Apparently, money is a good motivator to obtain such information and use it convincingly.  Quality patient outcomes?  Not so much.

So what about the doctor who doesn’t have this information but must decide whether to prescribe a new medication?  Well, there are a few objective, unbiased sources of information about new drugs (The Medical Letter and The Carlat Report among them).  Doctors can also ask manufacturers for the Prescribing Information (“PI”) or do their own due diligence to learn about new treatments.  But they often don’t have the time to do this, and other resources (like the internet) are far more accessible.

However, they’re more accessible for everyone.  When the balance of information about new treatments is tipped in favor of drug manufacturers, salespeople, and investors—all of whom have financial gain as their top priority—and not in favor of doctors and patients (whose lives may be at stake), an interesting “battle of wits” is bound to ensue.  When people talk a good game, and sound very much like they know what they’re talking about, their motives must always be questioned.  Unfortunately—and especially under the anonymity of the internet—those motives can sometimes be hard to determine.  In response, we clinicians must be even more critical and objective, and not necessarily believe everything we hear.

Psychopharmacology And The Educated Guess

May 6, 2011

Sometimes I feel like a hypocrite.

As a practicing psychiatrist, I have an obligation to understand the data supporting my use of prescription medication.  In my attempts to do so, I’ve found some examples of clinical research that, unfortunately, are possibly irrelevant or misleading.  Many other writers and bloggers have taken this field to task (far more aggressively than I have) for clinical data that, in their eyes, are incomplete, inconclusive, or downright fraudulent.

In fact, we all like to hold our clinical researchers to an exceedingly high standard, and we complain indignantly when they don’t achieve it.

At the same time, I’ll admit I don’t always do the same in my own day-to-day practice.  In other words, I demand precision in clinical trials, but several times a day I’ll use anecdotal evidence (or even a “gut feeling”) in my prescribing practices, completely violating the rigor that I expect from the companies that market their drugs to me.

Of all fields in medicine, psychopharmacology is the one in which this is not only common but the status quo.

“Evidence-based” practice is about making a sound diagnosis and using published clinical data to make a rational treatment decision.  Unfortunately, subjects in clinical trials of psychotropic drugs rarely—if ever—resemble “real” patients, and the real world often throws us curve balls that force us to improvise.  If an antipsychotic is only partially effective, what do we do?  If a patient doesn’t tolerate his antidepressant, then what?  What if a drug interferes with my patient’s sleep?  Or causes a nasty tremor?  There are no hard-and-fast rules for dealing with these types of situations, and the field of psychopharmacology offers wide latitude in how to handle them.

But then it gets really interesting.  Nearly all psychiatrists have encountered the occasional bizarre symptom, the unexpected physical finding, or the unexplained lab value (if labs are being checked, that is).  Psychopharmacologists like to look at these phenomena and concoct an explanation of what might be happening, based on their knowledge of the drugs they prescribe.  In fact, I’ve always thought that the definition of an “expert psychopharmacologist” is someone who understands the properties of drugs well enough to offer a plausible (albeit potentially wrong) molecular or neurochemical explanation of a complex human phenotype, and then prescribe a drug to fix it.

The psychiatric literature is filled with case studies of interesting encounters or “clinical pearls” that illustrate this principle at work.

For example, consider this case report in the Journal of Neuropsychiatry and Clinical Neurosciences, in which the authors describe a case of worsening mania during slow upward titration of a Seroquel dose and hypothesize that an intermediate metabolite of quetiapine might be responsible for the patient’s mania.  Here’s another one, in which Remeron is suggested as an aid to benzodiazepine withdrawal, partially due to its 5-HT3 antagonist properties.  And another small study purports to explain how nizatidine (Axid), an H2 blocker, might prevent Zyprexa-induced weight gain.  And, predictably, such “hints” have even made their way into drug marketing, as in the ads for the new antipsychotic Latuda, which suggest that its 5-HT7 binding properties might be associated with improved cognition.

Of course, for “clinical pearls” par excellence, one need look no further than Stephen Stahl, particularly in his book Essential Psychopharmacology: The Prescriber’s Guide.  Nearly every page is filled with tips (and cute icons!) such as these:  “Lamictal may be useful as an adjunct to atypical antipsychotics for rapid onset of action in schizophrenia,” or “amoxapine may be the preferred tricyclic/tetracyclic antidepressant to combine with an MAOI in heroic cases due to its theoretically protective 5HT2A antagonist properties.”

These “pearls” or hypotheses are interesting suggestions, and might work, but have never been proven to be true.  At best, they are educated guesses.  In all honesty, no self-respecting psychopharmacologist would say that any of these “pearls” represents the absolute truth until we’ve replicated the findings (ideally in a proper controlled clinical trial).  But that has never stopped a psychopharmacologist from “trying it anyway.”

It has been said that every time we prescribe a drug to a patient, we’re conducting an experiment with n=1.  It’s amazing how often we throw caution to the wind: just because we think we know how a drug might work, and can visualize in our minds all the pathways and receptors we believe our drugs are affecting, we add a drug or change a dose and profess to know what it’s doing.  Unfortunately, once we enter the realm of polypharmacy (not to mention the enormous complexity of human physiology), all bets are usually off.

What’s most disturbing is how often our assumptions are wrong—and how little we admit it.  For every published case study like the ones mentioned above, there are dozens—if not hundreds—of failed “experiments.”  (Heck, the same could be said even when we’re using something appropriately “evidence-based,” like using a second-generation antipsychotic for schizophrenia.)  In psychopharmacology, we like to take pride in our successes (“I added a touch of cyproterone, and his compulsive masturbation ceased entirely!”)  but conveniently excuse our failures (“She didn’t respond to my addition of low-dose N-acetylcysteine because of flashbacks from her childhood trauma”).  In that way, we can always be right.

Psychopharmacology is a potentially dangerous playground.  It’s important that we follow some well-established rules—like demanding rigorous clinical trials—and if we’re going to veer from this path, it’s important that we exercise the right safeguards in doing so.  At the same time, we should exercise some humility, because sometimes we have to admit we just don’t know what we’re doing.

What Can Cymbalta Teach Us About Pain?

April 29, 2011

You’ve probably noticed widespread TV advertisements lately for Cymbalta, Eli Lilly’s blockbuster antidepressant.  However, these ads say nothing about depression.  Sure, some of the actors may look a little depressed (the guy at right, from the Cymbalta web site, sure looks bummed), but the ads are instead promoting Cymbalta for the treatment of chronic musculoskeletal pain, an indication that Cymbalta received in August 2010, strengthening Cymbalta’s position as the “Swiss Army knife” of psychiatric meds.  (I guess that makes Seroquel the “blunt hammer” of psych meds?)

Cymbalta (duloxetine) had already been approved for diabetic neuropathy and fibromyalgia, two other pain syndromes.  It’s a “dual-action” agent, i.e., an inhibitor of the reuptake of serotonin and norepinephrine.  Other SNRIs include Effexor, Pristiq, and Savella.  Of these, only Savella has a pain [fibromyalgia] indication.

When you consider how common the complaint of “pain” is, this approval is a potential gold mine for Eli Lilly.  Moreover, the vagueness of this complaint is also something they will likely capitalize upon.  To be sure, there are distinct types of pain—e.g., neuropathic, visceral, musculoskeletal—and a proper pain workup can determine the exact nature of pain and guide the treatment accordingly.  But in reality, overworked primary care clinicians (not to mention psychiatrists, for whom hearing the word “pain” is often the extent of the physical exam) often hear the “pain” complaint and prescribe something the patient says they haven’t tried yet.  Cymbalta is looking to capture part of that market.

The analgesic mechanism of Cymbalta is (as with much in psychiatry) unknown.  Some have argued that it works by relieving the depression and anxiety experienced by patients in pain.  It has also been proposed that it activates “descending” pathways from the brain, helping to dampen “ascending” pain signals from the body.  It might also block NMDA receptors or sodium channels, or enhance the body’s own endorphin system.  (Other potential mechanisms are reviewed in a recent article by Dharmshaktu et al., 2011.)

But the more important question is:  does it work?  There does seem to be some decent evidence for Cymbalta’s effect in fibromyalgia and diabetic neuropathy in several outcome measures, and in a variety of 12-week trials summarized in a recent Cochrane review.

The evidence for musculoskeletal pain is less convincing.  To obtain approval, Lilly performed two studies of Cymbalta in osteoarthritis (OA) and three in chronic low back pain (CLBP).  All three CLBP studies showed benefit in “24-hour pain severity,” but only one of the OA studies showed improvement.  The effects were not tremendous, even though they were statistically significant.  The FDA panel expressed concern “regarding the homogeneity of the study population and the heterogeneity of CLBP presenting to physicians in clinical practice.”  In fact, the advisory committee’s enthusiasm for the expanded indication was somewhat muted:

Even though the committee also complained of the “paucity of sound data regarding the pharmacological mechanisms of many analgesic drugs … and the paucity of sound data regarding the underlying pathophysiology,” they ultimately voted to approve Cymbalta for “as broad an indication as possible,” in order for “the well-informed prescriber [to] have the option of trying out an analgesic product approved for one painful condition in a patient with a similar painful condition.”

Incidentally, they essentially ignored the equivocal results in the OA trials, reasoning instead that it was OK to “extrapolate the finding [of efficacy in CLBP] to other similar musculoskeletal conditions.”
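The gap the committee glossed over, between statistical significance and clinical meaningfulness, is easy to illustrate with a toy calculation.  The numbers below are invented for illustration (they are not Lilly’s actual trial data): with a few hundred patients per arm, a half-point difference on an 11-point pain scale clears the p < 0.05 bar.

```python
import math

def two_sample_t(mean1, mean2, sd, n):
    """t statistic for two equal-sized arms sharing a common SD."""
    se = math.sqrt(2 * sd**2 / n)  # standard error of the difference in means
    return (mean1 - mean2) / se

# Hypothetical arms: drug improves pain by 2.3 points, placebo by 1.8,
# on a 0-10 scale with a typical SD of ~2.5, 250 patients per arm.
t = two_sample_t(2.3, 1.8, sd=2.5, n=250)
print(f"t = {t:.2f}")  # exceeds the ~1.96 cutoff, so p < 0.05
```

So the trial is “positive,” yet the average patient’s pain moved only half a point more than it did on placebo—a difference a patient might not even notice.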

In other words, it sounds like the FDA really wanted to get Cymbalta in the hands of more patients and more doctors.

As much as I dislike the practice of prescribing drugs simply because they’re available and they might work, the truth of the matter is, this is surely how Cymbalta will be used.  (In reality, it explains a lot of what we do in psychiatry, unfortunately.)  But pain is a complex entity.  We have to be careful not to jump to conclusions—like we frequently do in psychiatry—when/if we see a “success story” with Cymbalta.

To the body, 60 mg of duloxetine is 60 mg of duloxetine, whether it’s being ingested for depression or for pain.  If a patient’s fibromyalgia or low back pain is miraculously “cured” by Cymbalta, there’s no a priori reason to think that it’s doing anything different in that person than what it does in a depressed patient (even though that is entirely conceivable).  The same mechanism might be involved in both.

The same can be said for some other medications with multiple indications.  For example, we can’t necessarily posit alternate mechanisms for Abilify in a bipolar patient versus Abilify in a patient with schizophrenia.  At roughly equivalent doses, its efficacy in the two conditions might be better explained by a biochemical similarity between the two conditions.  (Or maybe everything really is bipolar!  —sorry, my apologies to Hagop Akiskal.)

Or maybe the medication is not the important thing.  Maybe the patient’s perceived need for the medication matters more than the medication itself, and 60 mg of duloxetine for pain truly is different from 60 mg duloxetine for depression.  However, if our explanations rely on perceptions and not biology, we’re entering the territory of the placebo effect, in which case we’re better off skipping duloxetine (and its side effect profile and high cost), and just using an actual placebo.

Bottom line:  We tend to lock ourselves into what we think we know about the biology of the condition we’re treating, whether pain, depression, schizophrenia, ADHD, or whatever.  When we have medications with multiple indications, we often infer that the medication must work differently in each condition.  Unless the doses are radically different (e.g., doxepin for sleep vs depression), this isn’t necessarily true.  In fact, it may be more parsimonious to say that disorders are more fundamentally alike than they are different, or that our drugs are being used for their placebo effect.

We can now add chronic pain to the long list of conditions responsive to psychoactive drugs.  Perhaps it’s also time to start looking at pain disorders as variants of psychiatric disorders, or treating pain complaints as symptoms of mental disorders.  Cymbalta’s foray into this field may be the first attempt to bridge this gap.

Addendum:  I had started this article before reading the PNAS article on antidepressants and NSAIDs, which I blogged about earlier this week.  If the article’s conclusion (namely, that antidepressants lose their efficacy when given with pain relievers) is correct, this could have implications for Cymbalta’s use in chronic pain.  Since chronic pain patients will most likely be taking regular analgesic medications in addition to Cymbalta, the efficacy of Cymbalta might be diminished.  It will be interesting to see how this plays out.
