Abilify for Bipolar Maintenance: More Hard Questions

May 31, 2011

Much attention has been drawn to a recent PLoS Medicine article criticizing the evidence base for the use of Abilify as maintenance treatment for bipolar disorder.  The major points emphasized by most critics are, first, that the FDA approved Abilify for this purpose in 2005 on the basis of flawed and scanty evidence and, second, that the literature since then has failed to point out the deficiencies in the original study.

While the above may be true, I believe these criticisms miss a more important point.  Instead of lambasting the FDA or lamenting the poor quality of clinical research, we psychiatrists need to use this as an opportunity to take a closer look at what we treat, why we treat, and how we treat.

Before elaborating, let me summarize the main points of the PLoS article.  The authors point out that FDA approval of Abilify was based on a single “maintenance” trial by Keck et al, published in 2007.  The trial included only 161 patients; a mere 7 of them (1.3% of the 567 who entered the study) were followed through all 26 weeks of stabilization and 74 weeks of maintenance follow-up.  It also consisted of patients who had already been stabilized on Abilify; thus, it was “enriched” for patients who had already shown a good response to the drug.  Furthermore, the “placebo failures” were patients abruptly withdrawn from Abilify and switched to placebo; their relapses might thus be attributed to the researchers’ “randomized discontinuation” design rather than to any failure of placebo.  (For more commentary, including follow-up from Bristol-Myers Squibb, Abilify’s manufacturer, please see this excellent post on Pharmalot.)

These are all valid arguments.  But as I read the PLoS paper and the ongoing discussion ever since, I can’t help but think, so what??  First of all, most psychiatrists probably don’t know about the PLoS paper.  And even if they did, the major questions for me would be:  would the criticism of the Keck et al. study change the way psychiatrists practice?  Should it?

Let’s think about psychiatric illness for a moment.  Most disorders are characterized by an initial, abrupt onset or “episode.”  These acute episodes are usually treated with medications (plus or minus psychotherapy or other psychosocial interventions), often resulting in rapid symptomatic improvement—or, at the very least, stabilization of those symptoms.

One big, unanswered (and, unfortunately, under-asked) question in psychiatry is, then what?  Once a person is stabilized (which in some cases means nothing more than “he’s no longer a danger to himself or others”), what do we do?  We don’t know how long to treat patients, and there are no guidelines for when to discontinue medications.  Instead we hear the common refrain: depression, schizophrenia, and bipolar disorder are lifelong illnesses—“just like hypertension or diabetes”—and should be treated as such.

But is that true?  At the risk of sounding like a heretic (and, indeed, I’d be laughed out of residency if I had ever asked this question), are there some cases of bipolar disorder—or schizophrenia, or depression, for that matter—which only require brief periods of psychopharmacological treatment, or none at all?

The conventional wisdom is that, once a person is stabilized, we should just continue treatment.  And why not?  What doctor is going to take his patient off Abilify—or any other mood stabilizer or antipsychotic which has been effective in the acute phase—and risk a repeat mood episode?  None.  And if he does, would he attribute the relapse to the disease, or to withdrawal of the drug?  Probably to the disease.

For another example of what I’m talking about, consider Depakote.  Depakote has been used for decades and is regarded as a “prototypical” mood stabilizer.  Indeed, some of my patients have taken Depakote for years and have remained stable, highly functional, and without evidence of mood episodes.  But Depakote was never approved for the maintenance treatment of bipolar disorder (for a brilliant review of this, which raises some of the same issues as the current Abilify brouhaha, read this article by The Last Psychiatrist).  In fact, the one placebo-controlled study of Depakote for maintenance treatment of bipolar disorder showed that it’s no better than placebo.  So why do doctors use it?  Because it works (in the acute phase).  Why do patients take it?  Again, because it works—oh, and their doctors tell them to continue taking it.  As the old saying goes, “if it ain’t broke, don’t fix it.”

However, what if it is broke[n]?  Some patients indeed fail Depakote monotherapy and require additional “adjunctive” medication (which, BTW, has provided another lucrative market for the atypical antipsychotics).  In such cases, most psychiatrists conclude that the patient’s disease is worsening and they add the second agent.  Might it be, however, that after the patient’s initial “response” to Depakote, the medication wasn’t doing anything at all?

To be sure, the Abilify study would have been more convincing had it been larger, followed patients for a longer time, and included a dedicated placebo arm of patients who had not been on Abilify in the initial stage.  But I maintain that, regardless of the outcome of such an “improved” trial, most doctors would still use Abilify for maintenance treatment anyway, and convince themselves that it works—even if the medication is doing absolutely nothing to the underlying biology of the disease.

The bottom line is that it’s easy to criticize the FDA for approving a drug on the basis of a single, flawed study.  It’s also easy to criticize a pharmaceutical company for cutting corners and providing “flawed” data for FDA review.  But when it comes down to it, the real criticism should be directed at a field of medicine which endorses the “biological” treatment of a disorder (or group of disorders) whose biochemical basis and natural history are not fully understood, which creates post hoc explanations of its successes and failures based on that lack of understanding, and which is unwilling to look itself in the mirror and ask if it can do better.


Biomarker Envy III: Medial Prefrontal Cortex

May 28, 2011

Well, what do you know…. No sooner did I publish my last post about the “depression biomarker” discovered by a group of Japanese scientists, than yet another article appeared, describing a completely different biomarker.  This time, however, instead of simply diagnosing depression, the goal was to identify who’s at risk of relapse.  And the results are rather tantalizing… Could this be the real deal?

The paper, to be published in the journal Biological Psychiatry by Norman Farb, Adam Anderson, and colleagues at the University of Toronto, had a simple research design.  They recruited 16 patients with a history of depression who were currently in remission (i.e., symptom-free for at least five months), as well as 16 control subjects.  They performed functional MRI (fMRI) on all 32 participants while exposing them to an emotional stressor: specifically, the subjects viewed “sad” or “neutral” film clips while in the scanner.

Afterward, they followed all 16 depressed patients for a total of 18 months.  Ten of these patients relapsed during this period.  When the group went back to look for fMRI features that distinguished the relapsers from the non-relapsers, they found that the relapsers, while viewing the “sad” film clips, had greater activity in the medial prefrontal cortex (mPFC).  The non-relapsers, on the other hand, showed greater activation in the visual cortex when viewing the same emotional trigger.

Even though the number of patients was very small (16 total), the predictive power of the tests was actually quite high (see the figure at right).  It’s certainly conceivable that a test like this one might be used in the future to determine who needs more aggressive treatment—even if our checklists show that a depressed patient is in remission.  As an added bonus, it has better face validity than simply measuring a chemical in the bloodstream: in other words, it makes sense that a depressed person’s brain responds differently to sad stimuli, and that we might use this to predict outcomes.
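One reason to stay cautious about “quite high” predictive power in a 16-patient sample is that the confidence interval around any accuracy estimate from so few subjects is enormous.  As a rough illustration (the paper’s exact accuracy figure isn’t quoted here, so the 14-of-16 value below is purely hypothetical), a Wilson score interval for a classifier’s accuracy can be sketched like this:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion (e.g., classifier accuracy)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical: suppose the fMRI marker correctly classified 14 of the 16 patients.
lo, hi = wilson_ci(14, 16)
print(f"95% CI for accuracy: {lo:.2f} to {hi:.2f}")
```

Even a seemingly impressive 87.5% accuracy would, at this sample size, be statistically compatible with anything from roughly 64% to 96%—which is exactly why the replication the authors (and I) hope for is essential.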

As with most neuroimaging research, the study itself was fairly straightforward.  Making some sense out of the results, however, is another story.  (Especially if you like salmon.)

The researchers had predicted, based on previous studies, that patients who are prone to relapse might show greater activity in the ventromedial prefrontal cortex (VMPFC) and lower activity in the dorsolateral PFC (DLPFC).  But that’s not what they found.  Instead, relapsers had greater activity in the mPFC (which is slightly different from the VMPFC).  Moreover, non-relapsers had greater activity in the visual cortex (specifically the calcarine sulcus).

What might this mean?  The authors hypothesize that mPFC activity may lead to greater “ruminative thought” (i.e., worrying, brooding).  In fact, they did show that mPFC activation was correlated with scores on the RSQ-R, a psychological test of ruminative thought patterns.  Regarding the increased visual cortex activity, the authors suggest that this may be protective against further depressive episodes.  They surmise that it may be a “compensatory response” which might reflect “an attitude of acceptance or observation, rather than interpretation and analysis.”

In other words, to grossly oversimplify:  if you’re in recovery from depression, it’s not a good idea to ruminate, worry, and brood over your losses, or to internalize someone else’s sadness (even if it’s just a 45-second clip from the movie “Terms of Endearment”—which, by the way, was the “sad stimulus” in this experiment).  Instead, to prevent another depressive episode, you should strengthen your visual skills and use your visual cortex to observe and accept (i.e., just watch the darn movie!).

This all seems plausible, and the explanation certainly “fits” with the data.  But different conclusions can also be drawn.  Maybe those “recovered” patients who had less mPFC activity were simply “numb” to any emotional stimuli.  (All patients were taking antidepressants at the time of the fMRI study, which some patients report as having a “numbing” effect on emotions.)  Moreover, it has been said that depression can sometimes be beneficial; maybe the elevated mPFC activity in relapsers was an ongoing attempt to process the “sad” inputs in a more productive way?  As for the protective effect of visual cortex activity, maybe it isn’t about “acceptance” or “non-judgment” at all, but something else entirely?  Maybe those patients just enjoyed watching Shirley Maclaine and Jack Nicholson.

Nevertheless, the more psychologically minded among us might gladly embrace their explanations.  After all, it just seems “right” to say:  “Rumination is bad, acceptance and mindfulness (NB:  the authors did not use this term) is good.”  However, their “mediation analysis” showed that rumination scores did not predict relapse, and acceptance scores did not predict prolonged remission.  In other words, even though these psychological measures were correlated with the MRI findings, the psychological test results didn’t predict outcome.  Only the MRI findings did.

This leads to an interesting take-home message.  The results seem to support a psychological approach to maintaining remission—i.e., teaching acceptance and mindfulness, and avoiding ruminative tendencies—but this is only part of the solution.  Activity in the mPFC and the visual cortex might underlie pro-depressive and anti-depressive tendencies, respectively, in depressed patients, via mechanisms that are entirely unknown (and, dare I say it, entirely biologic?).

[An interesting footnote:  the risk of relapse was not correlated with medications.  Out of the ten who relapsed, three were still taking antidepressants.  Of the other seven, four were engaged in mindfulness-based cognitive therapy (MBCT), while the others were taking a placebo.]

Anyway, this paper describes an interesting finding with potential real-world application.  Although it’s a small study, it’s loaded with testable follow-up hypotheses.  I sincerely hope they continue to fire up the scanner, find some patients, and test them.  Who knows—we might just find something worth using.


Biomarker Envy II: Ethanolamine Phosphate

May 27, 2011

In my inbox yesterday was a story describing a new biological test for a psychiatric disorder.  Hallelujah!  Is this the holy grail we’ve all been waiting for?

Specifically, scientists at Human Metabolome Technologies (HMT) and Japan’s Keio University presented data earlier this week at a scientific conference in Tokyo, showing that they could diagnose depression by measuring levels of a chemical—ethanolamine phosphate—in patients’ blood.

Let me repeat that once again, for emphasis:  Japanese scientists now have a blood test to diagnose depression!

Never mind all that messy “talk-to-the-patient” stuff.  And you can throw away your tired old DSM-IV, because this is the new world: biological diagnosis!!  The press release describing the research even suggests that the test “could improve early detection rates of depression if performed during regular medical checkups.”  That’s right:  next time you see your primary doc, he or she might order—along with your routine CBC and lipid panel—an ethanolamine phosphate test.  If it comes back positive, congratulations!  You’re depressed!

If you can detect the skepticism in my voice, good.  Because even if this “biomarker” for depression turns out to be 100% accurate (which it is not—see below), its use runs entirely against how we should be practicing person-centered (not to be confused with “personalized”) medicine.  As a doctor, I want to hear your experiences and feelings, and help you with those symptoms, not run a blood test and order a drug.

[Incidentally, the Asahi press release made me chuckle when it stated: “About 90 percent of doctors base their diagnosis of depression on experience and varying factors.”  What about the other 10%?  Magic?]

As it turns out, I think there’s a lot to suggest that this particular blood test may not yet be ready for prime time.  For one, the work has not yet been published (and deciphering scientific results from a press release is always a risky proposition).  Secondly, the test was not 100% accurate; it failed to identify depression in 18% of cases, and falsely labeled healthy people as “depressed” 5% of the time.  (That’s a sensitivity of 82% and a specificity of 95%, for those of you playing along at home.)
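Those two numbers also make for poor screening arithmetic once you factor in the base rate.  A minimal sketch of the positive predictive value (the 7% prevalence below is my own assumption for a general checkup population, not a figure from the press release):

```python
# How often would a positive result actually mean depression?
# Sensitivity and specificity are from the press release; prevalence is assumed.
sensitivity = 0.82
specificity = 0.95
prevalence = 0.07  # assumed rate of depression among routine-checkup patients

true_pos = sensitivity * prevalence          # truly depressed, test positive
false_pos = (1 - specificity) * (1 - prevalence)  # healthy, test positive
ppv = true_pos / (true_pos + false_pos)
print(f"Chance a positive screen reflects actual depression: {ppv:.0%}")  # ~55%
```

In other words, at that prevalence nearly half of all positive results would be false alarms—one more reason a “depression blood test” at routine physicals is a dubious proposition.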

Further, what the heck is ethanolamine phosphate, and why would it be low in depressed people?  Is it a chemical secreted by the “happiness centers” of the brain?  Does it predict the onset or worsening of a depressive episode?  Is it somehow affected by antidepressant treatment?  As far as I can tell from a quick literature search, there has been no report—or even a suggestion—of ethanolamine (or any of its metabolites) being involved in the pathogenesis of mood disorders.  Then again, maybe I didn’t get the Japanese translation just right.

Anyway, where this “marker” came from is anybody’s guess.  It’s entirely possible (although I can’t be sure, because the Japanese group has not yet published their findings) that the researchers measured the blood levels of dozens of molecules and found the “best” results with this one.  We sometimes call this a “fishing expedition.”  Obviously, the finding has to be replicated, and if it was, in fact, just a lucky result, further research will bear that out.

But Dr Yoshiaki Ohashi, board director and chief security officer at HMT (“chief security officer”? does he wear a badge and sit at the front desk during the overnight shift, too?) maintains that the findings “will make it easier for an objective, biological diagnosis of depressive patients.”

Wow.  In 2011.  (And just in time for DSM-5.)

What if he’s right?  How would you feel if you went to a routine doctor’s visit next week, got an order for blood work, and a secretary called you a few days later to tell you that you have depression?  Even if you don’t feel depressed?

Were there other motives for developing such a test?  Probably.  One of the press releases quotes the Japanese Ministry of Health as saying that “only one quarter of the people who need treatment” actually get it.  So maybe this blood test is simply a way to offer treatment to more people (or, less charitably, to expand the market for antidepressants), even to those who don’t want treatment.  And then, of course, HMT probably wants a piece of the pie.  HMT is already developing a commercial test to measure ethanolamine phosphate levels; obviously, widespread adoption of this test would translate into big bucks for HMT, indeed.

So while many other questions remain to be answered, I must say I’m not holding my breath. Biological screening tests for psychiatric disorders have no face validity (in other words, if a test is positive but a person shows no signs or symptoms, then what?) and a positive result may expose patients to “preventive” treatments that are costly and cause unwanted side effects.

In my opinion, the best way (if any) to use a biomarker is in a “confirmatory” or “rule-out” function.  Is that demoralized, ruminative, potentially suicidal patient in your office simply going through a rough period in her life?  Or is she clinically depressed?  Will she respond to medications, or is this something that will simply “pass”?  In cases like this, measuring ethanolamine phosphate (or another similar marker) might be helpful.

But I don’t think we’ll ever be able to screen for psychiatric illness the same way a primary care doc might screen for, say, breast cancer or diabetes.  To do so would redefine the entire concept of “mental” illness (perhaps making it “neurological” illness instead?).  It also takes the person out of the picture.  At the end of the day, it’s always the patient’s thoughts, words, and experiences that count.  Ignoring those—and focusing instead on a chemical in the bloodstream—would be an unfortunate path to tread.


CME, CE, and What Makes A Psychiatrist

May 25, 2011

Why do psychiatrists do what they do?  How—and why—is a psychiatrist different from a psychotherapist?  I believe that most psychiatrists entered this field wanting to explore the many ways to understand and to treat what’s “abnormal,” but have instead become caught up in (or brainwashed by?) the promises of modern-day psychopharmacology.  In doing so, we’ve found ourselves pigeonholed into a role in which we prescribe drugs while others provide the more interesting (and more rewarding) psychosocial interventions.

Exceptions certainly do exist.  But psychiatrists are rapidly narrowing their focus to medication management alone.  If we continue to do so, we’d better be darn sure that what we’re doing actually works.  If it doesn’t, we may be digging ourselves a hole from which it will be difficult—if not impossible—to emerge.

How did we get to this point?  I’m a (relatively) young psychiatrist, so I’ll admit I don’t have the historical perspective of some of my mentors.  But in my brief career, I’ve seen these influences:  training programs that emphasize psychopharmacology over psychotherapy; insurance companies that reimburse for medication visits but not for therapy; patients who demand medications as a quick fix to their problems (and who either can’t access, or don’t want, other therapeutic options); and treatment settings in which an MD is needed to prescribe drugs while the “real work” is done by others.

But there’s yet another factor underlying psychiatry’s increasing separation from other behavioral health disciplines:  Continuing Medical Education, or CME.

All health care professionals must engage in some sort of professional education or “lifelong learning” to maintain their licenses.  Doctors must complete CME credits.  PAs, nurses, psychologists, social workers, and others must also complete their own Continuing Education (CE) credits, and the topics that qualify for credit differ from one discipline to the next.

The pediatrician and blogger Claudia Gold, MD, recently wrote about “Infant-Parent Mental Health,” a three-day workshop she attended that explored “how early relationships shape the brain and influence healthy emotional development.”  She wrote that the program “left me well qualified to do the work I do,” yet she couldn’t receive CME credit because the organizers offered continuing-education credit only for psychologists—not for physicians.

I had a similar experience several years ago.  During my psychiatry residency, I was invited to attend a “Summit for Clinical Excellence” in Monterey, sponsored by the Ben Franklin Institute.  The BFI offers these symposia several times a year; they’re 3- or 4-day long programs consisting of lectures, discussions, and workshops on advanced mental health topics such as addictions, eating disorders, relationship issues, personality disorders, trauma, ethics, etc.—in other words, areas which fall squarely under the domain of “mental health,” but which psychiatrists often don’t treat (mainly because there are no simple “medication solutions” for many of these problems).

Even though my residency program did not give me any days off for the event (nor did they provide any financial support), I rearranged my schedule and attended anyway.  It turned out to be one of the most memorable events of my training.  I got to meet (yes, literally meet, not just sit in an audience and listen to) influential figures in mental health like Helen Fisher, Harville Hendrix, Daniel Amen, Peter Whybrow, and Bill O’Hanlon.  And because most of my co-attendees were not physicians, the discussions were not about medications, but rather about how we can best work with our patients on a human and personal level.  Indeed, the lessons I learned there (and the professional connections I made) have turned out to be extraordinarily valuable in my everyday work.  (For their upcoming summits, see this link.  Incidentally, I am not affiliated with the BFI in any way.)

Unfortunately, like Dr Gold, I didn’t receive any CME credits for this event either, even though my colleagues in other fields did get credit.  A few days ago, out of curiosity, I contacted BFI and inquired about their CME policy.  I was told that “the topic [of CME] comes up every few years, and we’ve thought about it,” but they’ve decided against it for two reasons.  First, there’s just not enough interest.  (I guess psychiatrists are too busy learning about drugs to take time to learn about people or ideas.)  Second, they said that the application process for CME accreditation is expensive and time-consuming (the application packet “is three inches thick”), and the content would require “expert review,” meaning that it would probably not meet criteria for “medical” CME because of its de-emphasis of medications.

To be fair, any doctor can attend a BFI Summit, just as anyone could have attended Dr Gold’s “Infant-Parent Mental Health” program.  And even though physicians don’t receive CME credits for these programs, there are many other simple (and free, even though much of it is Pharma-supported) ways to obtain CME.

At any rate, it’s important—and not just symbolically—to look at where doctors get their training.  I want to learn about non-pharmacological, “alternative” ways to treat my patients (and to treat patients who don’t fit into the simple DSM categories—which is, well, pretty much everyone).  But to do so, it would have to be on my own dime, and without CME credit.  On the other hand, those who do receive this training (and the credit) are, in my opinion, prepared to provide much better patient care than those of us who think primarily about drugs.

At the risk of launching a “turf war” with my colleagues in other behavioral health disciplines, I make the following proposal: if psychologists lobby for the privilege to prescribe medications (a position which—for the record—I support), then I also believe that psychiatrists should lobby their own professional bodies (and the Accreditation Council for CME [ACCME]) to broaden the scope of what counts as “psychiatric CME.”  Medications are not always the answer.  Similarly, neurobiology and genetics will not necessarily lead us to better therapeutics.  And even if they do, we still have to deal with patients—i.e., human beings—and that’s a skill we’re neither taught nor encouraged to use.  I think it’s time for that to change.


How Much Should Addiction Treatment Cost?

May 22, 2011

Drug and alcohol abuse are widespread social, behavioral, and—if we are to believe the National Institutes of Health and most addiction professionals—medical problems.  In fact, addiction medicine has evolved into its own specialty, and a large number of other allied health professionals have become engaged in the treatment of substance abuse and dependence.

If addiction is a disease, then we should be able to develop effective treatments for it, and the costs of accepted treatments could be used to determine how we provide (and reimburse for) these services.  Unfortunately, unlike virtually every other (non-psychiatric) disease process—and despite tremendous effort—there are still no universally accepted approaches for the management of addictive disorders.  And the cost of treating an addict can range from zero to tens (or hundreds) of thousands of dollars.

I started thinking of this issue after reading a recent article on abcnews.com, in which addiction psychiatrist Stefan Kruszewski, MD, criticized addiction treatment programs for their tendency to take people off one addictive substance and replace it with another one (e.g., from heroin to Suboxone; or from alcohol to a combination of a benzodiazepine, an antidepressant, and an antipsychotic), often at a very high cost.  When seen through the eyes of a utilization reviewer, this seems unwise, expensive, and wasteful.

I agree with Dr Kruszewski, but for a slightly different reason.  To me, current treatment approaches falsely “medicalize” addiction and avoid the more significant psychological (or even spiritual) meaning of our patients’ addictive behaviors.  [See my posts “Misplaced Priorities in Addiction Treatment” and “When Does Treatment End.”]  They also cost a lot of money:  Suboxone induction, for instance, can cost hundreds of dollars, and the medication itself can cost several hundred more per month.  Likewise, the amounts being spent to develop new pharmacotherapies for cocaine and stimulant addiction are very high indeed.

Residential treatment programs—particularly the famous ones like Cirque Lodge, Sierra Tucson, and The Meadows—are also extremely expensive.  I, myself, worked for a time as a psychiatrist for a long-term residential drug and alcohol treatment program.  Even though we tried to err on the side of avoiding medications unless absolutely necessary (and virtually never discharged patients on long-term treatments like Suboxone or methadone), our services were quite costly:  upwards of $30,000 for a four-month stay, plus $5000/month for “aftercare” services.  (NB:  Since my departure, the center has closed, due in part to financial concerns.)

There are cheaper programs, like state- and county-sponsored detox centers for those with no ability to pay, as well as free or low-cost longer-term programs like the Salvation Army.  There are also programs like Phoenix House, a nonprofit network of addiction treatment programs with a variety of services—most of which are based on the “therapeutic community” approach—which are free to participants, paid for by public and private funding.

And then, of course, there are the addicts who quit “cold turkey”—sometimes with little or no support at all—and those who immerse themselves in a mutual support program like Alcoholics Anonymous (AA).  AA meetings can be found almost everywhere, and they’re free.  Even though the success rate of AA is probably quite low (perhaps less than 10%, although official numbers don’t exist), the fact of the matter is that some people do recover completely without paying a dime.

How to explain this discrepancy?  The treatment “industry,” when challenged on this point, will argue that the success rate of AA alone is abysmal, and without adequate long-term care (usually in a group setting), relapse is likely, if not guaranteed.  This may in fact be partially true; it has been shown, for instance, that the likelihood of long-term sobriety does correlate with duration of treatment.

But at what cost?  Why should anyone pay $20,000 to $50,000 for a month at a premiere treatment center like Cirque Lodge or Promises Malibu?  Lindsay Lohan and Britney Spears can afford it, but few others—and virtually no insurance plans—can.

And the services offered by these “premiere” treatment programs sound like a spa menu, rather than a treatment protocol:  acupuncture, biofeedback, equine therapy, massage, chiropractic, art therapy, nature hikes, helicopter rides, gourmet meals or private chef services, “light and sound neurotherapy,” EMDR, craniosacral therapy, reiki training, tai chi, and many others.

Unfortunately, the evidence that any one of these services improves a patient’s chance of long-term sobriety is essentially nil.  Moreover, if addiction is purely a medical illness, then learning how to ride a horse should do absolutely nothing to help someone kick a cocaine habit—and, by that logic, medical insurance should not pay for such services (or, for that matter, for group therapy or the therapeutic-community approach).

Nevertheless, some recovering addicts may genuinely claim that they owe their sobriety to some of these experiences:  trauma recovery treatment, experiential therapy, “male bonding” activities (hat tip to the Prescott House), and yes, even the helicopter rides.

The bottom line is, we still don’t know how to treat addiction, or even what it really is in the first place.  Experts have their own ideas, and those in recovery have their own explanations.  My opinion is that, in the end, treatment must be individualized.  For every alcoholic who gets sober by attending daily AA meetings, or through religious conversion, there’s another addict who has tried and failed AA numerous times, and who must enroll in multiple programs (costing tens of thousands of dollars) to achieve remission.

What are we as a society willing to pay for?  Or should we simply maintain the free-market status quo, in which some can pay big bucks to sober up with celebrities on the beaches of Malibu, while others must detox on the bathroom floor and stagger to the AA meetings down the street?  Until we determine how best to tailor treatment to the individual, there’s no shortage of people who are willing to try just about anything to get help—and a lot of money to be made (and spent) along the way.


The Balance of Information

May 19, 2011

How do doctors learn about the drugs they prescribe?  It’s an important question, but one without a straightforward answer.  For doctors like me—who have been in practice for more than a few years—the information we learned in medical school may have already been replaced by something new.  We find ourselves prescribing drugs we’ve never heard of before.  How do we know whether they work?  And whom do we trust to give us this information?

I started to think about this question as I wrote my recent post on Nuedexta, a new drug for the treatment of pseudobulbar affect.  I knew nothing about the drug, so I had to do some research.  One of my internet searches led me to an active discussion on a site called studentdoctor.net (SDN).  SDN is a website for medical students, residents, and other medical professionals, and it features objective discussions of interesting cases, new medications, and career issues.  There, I found a thread devoted to Nuedexta; this thread contained several posts by someone calling himself “Doogie Howser”—and he seemed to have a lot of detailed information about this brand-new drug.

Further internet sleuthing led me to a message board on Yahoo Finance for Avanir Pharmaceuticals, the company which makes Nuedexta.  In one of the threads on this board, it was suggested that the “Doogie Howser” posts were actually written by someone I’ll call “TS.”  Judging by the other posts by this person, “TS” clearly owns stock in Avanir.  While “TS” never admitted to writing the SDN posts, there was much gloating that someone had been able to post pro-Nuedexta information on a healthcare website in a manner that sounded authoritative.

Within 24 hours, someone posted a link to my article on that same Yahoo Finance board, and I received several hundred “hits” directly from it.  Simultaneously (and ever since), I’ve received numerous comments on that article, some of which include detailed information about Nuedexta, reminiscent of the posts written by “Doogie Howser.”  Others appear to be written by “satisfied patients” taking this drug.  But I’m skeptical: I don’t know whether these were actual patients or Avanir investors (or employees); the IP address of one of the pro-Nuedexta commenters was registered to a public-relations firm in Arizona.  Nevertheless, I have kept the majority of the posts on the blog, except those that contained personal attacks (and yes, I received those, too).

The interesting thing is, nothing “TS”/”Doogie Howser” said about Nuedexta was factually incorrect.  And most of the posts I received were not “wrong” either (although they have been opinionated and one-sided).  But that’s precisely what concerns me. The information was convincing, even though—if my hunch is correct—the comments were written for the sake of establishing market share, not for the sake of improving patient care.

The more worrisome issue is this: access to information seems to be lopsided.  Industry analysts (and even everyday investors) can have an extremely sophisticated understanding of new drugs on the market, more sophisticated, at times, than that of many physicians.  And they can use this sophistication to their advantage.  Some financial websites and investor publications can read like medical journals.  Apparently, money is a good motivator to obtain such information and use it convincingly.  Quality patient outcomes?  Not so much.

So what about the doctor who doesn’t have this information but must decide whether to prescribe a new medication?  Well, there are a few objective, unbiased sources of information about new drugs (The Medical Letter and The Carlat Report among them).  Doctors can also ask manufacturers for the Prescribing Information (“PI”) or do their own due diligence to learn about new treatments.  But they often don’t have the time to do this, and other resources (like the internet) are far more accessible.

However, they’re more accessible for everyone.  When the balance of information about new treatments is tipped in favor of drug manufacturers, salespeople, and investors—all of whom have financial gain as their top priority—and not in favor of doctors and patients (whose lives may be at stake), an interesting “battle of wits” is bound to ensue.  When people talk a good game, and sound very much like they know what they’re talking about, their motives must always be questioned.  Unfortunately—and especially under the anonymity of the internet—those motives can sometimes be hard to determine.  In response, we clinicians must be even more critical and objective, and not necessarily believe everything we hear.


Biomarker Envy I: Cortical Thickness

May 13, 2011

In the latest attempt to look for biological correlates or predictors of mental illness, a paper in this month’s Archives of General Psychiatry shows that children with major depressive disorder (MDD) differ in regional cortical thickness from “healthy” children and children with obsessive-compulsive disorder (OCD).  Specifically, researchers performed brain MRI scans on 78 children with or without a diagnosis, and investigated seven specific areas of the cerebral cortex.  Results showed four areas which were thinner in children with MDD than in healthy children, two which were thicker, and one that did not vary.

These results add another small nugget of data to our (admittedly scant) understanding of mental illness—particularly in children, before the effects of years of continuous medication treatment.  They also reflect psychiatry’s bias towards imaging studies, whose findings—even if statistically significant—are not always reliable or meaningful.  (But I digress…)

An accompanying press release, however, was unrealistically enthusiastic.  It suggested that this study “offers an exciting new way to identify more objective markers of psychiatric illness in children.”  Indeed, the title of the paper itself (“Distinguishing between MDD and OCD in children by measuring regional cortical thickness”) might suggest a way to use this information in clinical practice right away.  But it’s best not to jump to these conclusions just yet.

For one, there was tremendous variability in the data, as shown in the study’s figure.  While on average the children with MDD had a thinner right superior parietal gyrus (one of the cortical regions studied) than healthy children or children with OCD, no individual measurement was predictive of anything.

Second, the statement that we can “distinguish between depression and OCD” based on a brain scan reflects precisely the type of biological determinism and certainty (and hype?) that psychiatry has been striving for, but may never achieve (just look at the figure again).  Lay readers—and, unfortunately, many clinicians—might read the headline and believe that “if we just order an MRI for Junior, we’ll be able to get the true diagnosis.”  The positive predictive value of any test must be high enough to warrant its use in a larger population, and so far, the predictive value of most tests in psychiatry is poor.
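The arithmetic behind that last point is worth making concrete.  Here is a minimal sketch of the positive-predictive-value calculation via Bayes’ rule, using entirely hypothetical sensitivity, specificity, and prevalence figures (none of these numbers come from the Archives study):

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Probability that a positive test result reflects true illness.

    PPV = (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))
    """
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Suppose (hypothetically) a cortical-thickness cutoff were 80% sensitive
# and 80% specific, and pediatric MDD had a prevalence of ~2% in the
# population being scanned.
print(ppv(0.80, 0.80, 0.02))  # roughly 0.075: most positives are false
```

Even a test that sounds respectable on sensitivity and specificity yields a positive predictive value under 10% when the condition is rare in the screened population, which is exactly why a group-level difference in a brain scan does not translate into a usable diagnostic test.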

Third, there is no a priori reason why there should be a difference between the brains (or anything else, for that matter) of patients with depression and patients with OCD, when you consider the overlap between these—and other—psychiatric conditions.  There are many shades of grey between “depression” and “OCD”:  some depressed children will certainly have OCD-like traits, and vice versa.  Treating the individual (and not necessarily the individual’s brain scan) is the best way to care for a person.

To be fair, the authors of the study, Erin Fallucca and David Rosenberg from Wayne State University in Detroit, do not state anywhere in their paper that this approach represents a “novel new diagnostic method” or make any other such sweeping claims about their findings.  In fact, they write that the differences they observed “merit further investigation” and highlight the need to look “beyond the frontal-limbic circuit.”  In other words, our current hypotheses about depression are not entirely supported by their findings (true), so we need to investigate further (also true).  And this, admittedly, is how science should proceed.

However, the history of psychiatry is dotted with tantalizing neurobiological theories and findings that have made their way into clinical practice before being fully proven, or even shown to have any great clinical relevance.  Pertinent examples are the use of SPECT scans to diagnose ADHD, championed by Daniel Amen; quantitative EEG to predict response to psychotropics; genotyping for metabolic enzymes; and the use of SSRIs to treat depression.  (Wait, did I say that???)

The quest to identify “biomarkers” of psychiatric illness may similarly lead us to believe we know more about a disease than we do.  A biomarker is a biological feature (an endocrine or inflammatory measure, a genotype, a biochemical response to a particular intervention) that distinguishes a person with a condition from one without.  Biomarkers are used throughout medicine for diagnosis, risk stratification, and monitoring treatment response.  A true biomarker for mental illness would represent a significant leap ahead in personalized treatment.  Or would it?  What if a person’s clinical presentation differs from what the marker predicts?  “I’m sorry Mrs. Jones, but even though Katie compulsively washes her hands and counts to twelve hundreds of times a day, her right superior parietal gyrus is too thin for a diagnosis of OCD.”

Other fields of medicine don’t experience this dilemma.  If you have an elevated hsCRP and high LDL, even though you “feel fine,” you are still at elevated risk for cardiovascular disease and really ought to take preventive measures (exercise, diet, etc).  (However, see this recent editorial in the BMJ about “who should define disease.”)  But if your brain scan shows cortical thinning and you have no symptoms of depression, do you need to be treated?  Are you even at risk?

Some day (hopefully) these questions will be answered, as we gain a greater understanding of the biology of mental illness.  But until then, let’s keep research and clinical practice separate until we know what we’re doing.  Psychiatry doesn’t have to be like other fields of medicine.  Patients suffer and come to us for help; let’s open our eyes and ears before sending them off to the scanner or the lab.  In doing so, we might learn something important.


Community Psychiatry And Its Unintended Consequences

May 10, 2011

What impact can psychiatry have on the health of a community?

For three years, I have worked part-time in a non-profit psychopharmacology clinic, treating a wide range of individuals from a poor, underserved urban area.  In a recent post, I wrote that many of the complaints endorsed by patients from this population may be perceived as symptoms of a mental illness.  At one point or another (if not chronically), people complain of “anxiety,” “depression,” “insomnia,” “hopelessness,” etc.—even if these complaints simply reflect their response to environmental stressors, and not an underlying mental illness.

However, because diagnostic criteria are so nonspecific, these complaints can easily lead to a psychiatric diagnosis, especially when the diagnostic evaluation is limited to a self-report questionnaire and a 15- or 20-minute intake appointment.

Personally, I struggle with two opposing biases:  On the one hand, I want to believe that mental illness is a discrete entity, a pathological deviation from “normal,” and presents differently (longer duration, greater intensity, etc) from one’s expected reaction to a situation, however distressing that situation may be.  On the other hand, if I take people’s complaints literally, everyone who walks into my office can be diagnosed as mentally ill.

Where do we draw the line?  The question is an important one.  The obvious answer is to use clinical judgment and experience to distinguish “illness” from “health.”  But this boundary is vague, even under ideal circumstances.  It breaks down entirely when patients have complicated, confusing, chaotic histories (or can’t provide one) and our institutions are designed for the rapid diagnosis and treatment of symptoms rather than the whole person.  As a result, patients may be given a diagnosis where a true disorder doesn’t exist.

This isn’t always detrimental.  Sometimes it gives patients access to interventions from which they truly benefit—even if it’s just a visit with a clinician every couple of months and an opportunity to talk.  Often, however, our tendency to diagnose and to pathologize creates new problems, unintended diversions, and potentially dire consequences.

The first consequence is the overuse of powerful (and expensive) medications which, at best, may provide no advantage over a placebo and, at worst, may cause devastating side effects, not to mention extreme costs to our overburdened health care system.  Because Medicaid and Medicare reimbursements are better for medication management than for non-pharmacological interventions, “treatment” often consists of brief “med check” visits every one to six months, with little time for follow-up or exploring alternative approaches.  I have observed colleagues seeing 30 or 40 patients in a day, sometimes prescribing multiple antipsychotics with little justification, frequently in combination with benzodiazepines or other sedatives, and asking for follow-up appointments at six-month intervals.  How this is supposed to improve one’s health, I cannot fathom.

Second, overdiagnosis and overtreatment divert resources from where they are truly needed.  For instance, the number of patients who can access a mental health clinic like ours but who do not have a primary care physician is staggering.  Moreover, patients with severe, persistent mental illness (and who might be a danger to themselves or others when not treated) often don’t have access to assertive, multidisciplinary treatment.  Instead, we’re spending money on medications and office visits by large numbers of patients for whom diagnoses are inaccurate, and medications provide dubious benefit.

Third, this overdiagnosis results in a massive population of “disabled,” causing further strain on scarce resources.  The increasing number of patients on disability due to mental illness has long been chronicled.  Some argue that the disability is itself a consequence of medication.  It is also possible that some people may abuse the system to obtain certain resources.  More commonly, however, I believe that the failure of the system (i.e., we clinicians) to perform an adequate evaluation—and our inclination to jump to a diagnosis—has swollen the disability ranks to an unsustainably high level.

Finally—and perhaps most distressingly—there is the false hope that a psychiatric diagnosis communicates to a patient.  I believe it can be quite disempowering for a person to hear that his normal response to a situation which is admittedly dire represents a mental illness.  A diagnosis may provide a transient sense of relief (or, at the very least, alleviate one’s guilt), but it also tells a person that he is powerless to change his situation, and that a medication can do it for him.  Worse, it makes him dependent upon a “system” whose underlying motives aren’t necessarily in the interest of empowering the neediest, weakest members of our society.  I agree with a quote in a recent BBC Health story that a lifetime on disability “means that many people lose their sense of self-worth, identity, and esteem.”  Again, not what I set out to do as a psychiatrist.

With these consequences, why does the status quo persist?  For any observer of the American health care system, the answer seems clear:  vested interests, institutional inertia, a clear lack of creative thought.  To make matters worse, none of the examples described above constitute malpractice; they are, rather, the standard of practice.

As a lone clinician, I am powerless to reverse this trend.  That’s not to say I haven’t tried.  Unfortunately, my attempts to change the way we practice have been met with resistance at many levels:  county mental health administrators who have not returned detailed letters and emails asking to discuss more cost-effective strategies for care; fellow clinicians who have looked with suspicion (if not derision) upon my suggestions to rethink our approach; and—most painfully to me—supervisors who have labeled me a bigot for wanting to deprive people of the diagnoses and medications our (largely minority) patients “need.”

The truth is, my goal is not to deprive anyone.  Rather, it is to encourage, to motivate, and to empower.  Diagnosing illness where it doesn’t exist, prescribing medications for convenience and expediency, and believing that we are “helping” simply because we have little else to offer, unfortunately do none of the above.


Psychopharmacology And The Educated Guess

May 6, 2011

Sometimes I feel like a hypocrite.

As a practicing psychiatrist, I have an obligation to understand the data supporting my use of prescription medication.  In my attempts to do so, I’ve found some examples of clinical research that, unfortunately, are possibly irrelevant or misleading.  Many other writers and bloggers have taken this field to task (far more aggressively than I have) for clinical data that, in their eyes, are incomplete, inconclusive, or downright fraudulent.

In fact, we all like to hold our clinical researchers to an exceedingly high standard, and we complain indignantly when they don’t achieve it.

At the same time, I’ll admit I don’t always do the same in my own day-to-day practice.  In other words, I demand precision in clinical trials, but several times a day I’ll use anecdotal evidence (or even a “gut feeling”) in my prescribing practices, completely violating the rigor that I expect from the companies that market their drugs to me.

Of all fields in medicine, psychopharmacology is the one where this is not only common but the status quo.

“Evidence-based” practice is about making a sound diagnosis and using published clinical data to make a rational treatment decision.  Unfortunately, subjects in clinical trials of psychotropic drugs rarely—if ever—resemble “real” patients, and the real world often throws us curve balls that force us to improvise.  If an antipsychotic is only partially effective, what do we do?  If a patient doesn’t tolerate his antidepressant, then what?  What if a drug interferes with my patient’s sleep?  Or causes a nasty tremor?  There are no hard-and-fast rules for dealing with these types of situations, and the field of psychopharmacology offers wide latitude in how to handle them.

But then it gets really interesting.  Nearly all psychiatrists have encountered the occasional bizarre symptom, the unexpected physical finding, or the unexplained lab value (if labs are being checked, that is).  Psychopharmacologists like to look at these phenomena and concoct an explanation of what might be happening, based on their knowledge of the drugs they prescribe.  In fact, I’ve always thought that the definition of an “expert psychopharmacologist” is someone who understands the properties of drugs well enough to make a plausible (albeit potentially wrong) molecular or neurochemical explanation of a complex human phenotype, and then prescribe a drug to fix it.

The psychiatric literature is filled with case studies of interesting encounters or “clinical pearls” that illustrate this principle at work.

For example, consider this case report in the Journal of Neuropsychiatry and Clinical Neurosciences, in which the authors describe a case of worsening mania during slow upward titration of a Seroquel dose and hypothesize that an intermediate metabolite of quetiapine might be responsible for the patient’s mania.  Here’s another one, in which Remeron is suggested as an aid to benzodiazepine withdrawal, partially due to its 5-HT3 antagonist properties.  And another small study purports to explain how nizatidine (Axid), an H2 blocker, might prevent Zyprexa-induced weight gain.  And, predictably, such “hints” have even made their way into drug marketing, as in the ads for the new antipsychotic Latuda which suggest that its 5-HT7 binding properties might be associated with improved cognition.

Of course, for “clinical pearls” par excellence, one need look no further than Stephen Stahl, particularly in his book Essential Psychopharmacology: The Prescriber’s Guide.  Nearly every page is filled with tips (and cute icons!) such as these:  “Lamictal may be useful as an adjunct to atypical antipsychotics for rapid onset of action in schizophrenia,” or “amoxapine may be the preferred tricyclic/tetracyclic antidepressant to combine with an MAOI in heroic cases due to its theoretically protective 5HT2A antagonist properties.”

These “pearls” or hypotheses are interesting suggestions, and might work, but have never been proven to be true.  At best, they are educated guesses.  In all honesty, no self-respecting psychopharmacologist would say that any of these “pearls” represents the absolute truth until we’ve replicated the findings (ideally in a proper controlled clinical trial).  But that has never stopped a psychopharmacologist from “trying it anyway.”

It has been said that “every time we prescribe a drug to a patient, we’re conducting an experiment, with n=1.”  It’s amazing how often we throw caution to the wind and, just because we think we know how a drug might work, and can visualize in our minds all the pathways and receptors that we think our drugs are affecting, we add a drug or change a dose and profess to know what it’s doing.  Unfortunately, when we enter the realm of polypharmacy (not to mention the enormous complexity of human physiology), all bets are usually off.

What’s most disturbing is how often our assumptions are wrong—and how little we admit it.  For every published case study like the ones mentioned above, there are dozens—if not hundreds—of failed “experiments.”  (Heck, the same could be said even when we’re using something appropriately “evidence-based,” like using a second-generation antipsychotic for schizophrenia.)  In psychopharmacology, we like to take pride in our successes (“I added a touch of cyproterone, and his compulsive masturbation ceased entirely!”)  but conveniently excuse our failures (“She didn’t respond to my addition of low-dose N-acetylcysteine because of flashbacks from her childhood trauma”).  In that way, we can always be right.

Psychopharmacology is a potentially dangerous playground.  It’s important that we follow some well-established rules—like demanding rigorous clinical trials—and if we’re going to veer from this path, it’s important that we exercise the right safeguards in doing so.  At the same time, we should exercise some humility, because sometimes we have to admit we just don’t know what we’re doing.


Mental Illness and Social Realities

May 2, 2011

Does the definition of “mental illness” differ from place to place?  Is there a difference between “depression” in a poor individual and in one of means?  Are the symptoms identical?  What about the neurobiology?  The very concept of a psychiatric “disease” implies that certain core features of one’s illness transcend the specifics of a person’s social or cultural background.  Nevertheless, we know that disorders look quite different, depending on the setting in which they arise.  This is why people practice psychiatry, not computers or checklists.  (Not yet, at least.)

However, sometimes a person’s environment can elicit reactions and behaviors that might appear—even to a trained observer—as mental illness.  If unchecked, this may create an epidemic of “disease” where true disease does not exist.  And the consequences could be serious.

—–

For the last three years, I have had the pleasure of working part-time in a community mental health setting.  Our clinic primarily serves patients on Medicaid and Medicare, in a gritty, crime-ridden expanse of a major city.  Our patients are, for the most part, impoverished, poorly educated, have little or no access to primary care services, and live in communities ravaged by substance abuse, crime, unemployment, familial strife, and a deep, pervasive sense of hopelessness.

Even though our resources are extremely limited, I can honestly say that I have made a difference in the lives of hundreds, if not thousands, of individuals.  But the experience has led me to question whether we are too quick to make psychiatric diagnoses for the sake of convenience and expediency, rather than on the basis of a fair, objective, and thorough evaluation.

Almost predictably, patients routinely present with certain common complaints:  anxiety, “stress,” insomnia, hopelessness, fear, worry, poor concentration, cognitive deficits, etc.  Each of these could be considered a feature of a deeper underlying disorder, such as an anxiety disorder, major depression, psychosis, thought disorder, or ADHD.  Alternatively, they might also simply reflect the nature of the environment in which the patients live, or the direct effects of other stressors that are unfortunately too familiar in this population.

Given the limitations of time, personnel, and money, we don’t usually have the opportunity for a thorough evaluation, collaborative care with other professionals, and frequent follow-up.  But psychiatric diagnostic criteria are vague, and virtually everyone who walks into my office endorses symptoms for which it would be easy to justify a diagnosis.  The “path of least resistance” is often to do precisely that, and move on to the next person in the long waiting-room queue.

This tendency to “knee-jerk” diagnosis is even greater when patients have already had some interaction—however brief—with the mental health system:  for example, a patient who visited a local crisis clinic and was given a diagnosis of “bipolar disorder” (on the basis of a 5-minute evaluation) and a 14-day supply of Zyprexa, and told to “go see a psychiatrist”; or the patient who mentioned “anxiety” to the ER doc in our county hospital (note: he has no primary care MD), was diagnosed with panic disorder, and prescribed PRN Ativan.

We all learned in our training (if not from a careful reading of the DSM-IV) that a psychiatric diagnosis should be made only when other explanations for symptoms can be ruled out.  Psychiatric treatment, moreover, should be implemented in the safest possible manner, and include close follow-up to monitor patients’ response to these interventions.

But in my experience, once a patient has received a diagnosis, it tends to stick.  I frequently feel an urge to un-diagnose patients, or, at the very least, to have a discussion with them about their complaints and develop a course of treatment—which might involve withholding medications and implementing lifestyle changes or other measures.  Alas, this takes time (and money—at least in the short run).  Furthermore, if a person already believes she has a disorder (even if it’s just “my mother says I must be bipolar because I have mood swings all the time!!!”), or has experienced the sedative, “calming,” “relaxing” effect of Seroquel or Klonopin, it’s difficult to say “no.”

There are consequences of a psychiatric diagnosis.  It can send a powerful message.  It might absolve a person of his responsibility to make changes in his life—changes which he might indeed have the power to make.  Moreover, while some see a diagnosis as stigmatizing, others may see it as a free ticket to powerful (and potentially addictive) medications, as well as a variety of social services, from a discounted annual bus pass, to in-home support services, to a lifetime of Social Security disability benefits.  Very few people consciously abuse the system for their own personal gain, but the system is set up to keep this cycle going.  For many, “successful” treatment means staying in that cycle for the rest of their lives.

—–

The patients who seek help in a community mental health setting are, almost without exception, suffering in many ways.  That’s why they come to see us.  Some clinics do provide a wide assortment of services, including psychotherapy, case management, day programs, and the like.  For the truly mentally ill, these can be a godsend.

For many who seek our services, however, the solutions that would more directly address their suffering—like safer streets, better schools, affordable housing, stable families, less access to illicit drugs, etc.—are difficult or costly to implement, and entirely out of our hands.  In cases such as these, it’s unfortunately easier to diagnose a disease, prescribe a drug which (in the words of one of my colleagues) “allows them to get through just one more night,” and make poor, unfortunate souls even more dependent on a system which sees them as hopeless and unable to emerge from the chaos of their environment.

In my opinion, that’s not psychiatry.  But it’s being practiced every day.

