Whatever Works?

January 29, 2012

My iPhone’s Clock Radio app wakes me each day to the live stream of National Public Radio.  Last Monday morning, I emerged from my post-weekend slumber to hear Alix Spiegel’s piece on the serotonin theory of depression.  In my confused, half-awake state, I heard Joseph Coyle, professor of psychiatry at Harvard, remark: “the ‘chemical imbalance’ is sort of last-century thinking; it’s much more complicated than that.”

Was I dreaming?  It was, admittedly, a surreal experience.  It’s not every day that I wake up to the voice of an Ivy League professor lecturing me in psychiatry (those days are long over, thank god).  Nor did I ever expect a national news program to challenge existing psychiatric dogma.  As I rubbed my eyes, though, I realized this was the real deal.  And it was refreshing, because it’s what many of us have been thinking all along.  The serotonin hypothesis of depression is kaput.

Understandably, this story has received lots of attention (see here and here and here and here and here).  I don’t want to jump on the “I-told-you-so” bandwagon, but instead to offer a slightly different perspective.

A few disclaimers:  first and foremost, I agree that the “chemical imbalance” theory has overrun our profession and has commandeered the public’s understanding of mental illness—so much so that the iconic image of the synaptic cleft containing its neurotransmitters has become ensconced in the national psyche.  Secondly, I do prescribe SSRIs (selective serotonin reuptake inhibitors), plus lots of other psychiatric medications, which occasionally do work.  (And, in the interest of full disclosure, I’ve taken three of them myself.  They did nothing for me.)

To the extent that psychiatrists talk about “chemical imbalances,” I can see why this could be misconstrued as “lying” to patients.  Ronald Pies’ eloquent article in Psychiatric Times last summer describes the chemical-imbalance theory as “a kind of urban legend,” which no “knowledgeable, well-trained psychiatrist” would ever believe.  But that doesn’t matter.  Thanks to us, the word is out there.  The damage has already been done.  So why, then, do psychiatrists (even the “knowledgeable, well-trained” ones) continue to prescribe SSRI antidepressants to patients?

Because they work.

Okay, maybe not 100% of the time.  Maybe not even 40% of the time, according to antidepressant drug trials like STAR*D.  Experience shows, however, that they work often enough for patients to come back for more.  In fact, when discussed in the right context, with their potential side effects described in detail, and prescribed by a compassionate and apparently intelligent and trusted professional, antidepressants probably “work” far more often than they do in the drug trials.

But does that make it right to prescribe them?  Ah, that’s an entirely different question.  Consider the following:  I may not agree with the serotonin theory, but if I prescribe an SSRI to a patient with depression, and they report a benefit, experience no obvious side effects, pay only $4/month for the medication, and (say) $50 for a monthly visit with me, is there anything wrong with that?  Plenty of doctors would say, no, not at all.  But what if my patient (justifiably so) doesn’t believe in the serotonin hypothesis and I prescribe anyway?  What if my patient experiences horrible side effects from the drug?  What if the drug costs $400/month instead of $4?  What if I charge the patient $300 instead of $50 for each return visit?  What if I decide to “augment” my patient’s SSRI with yet another serotonin agent, or an atypical antipsychotic, adding hundreds of dollars in cost and potentially causing yet more side effects?  Those are the aspects we don’t often think of, and they constitute the unfortunate “collateral damage” of the chemical-imbalance theory.

Indeed, something’s “working” when a patient reports success with an antidepressant; exactly why and how it “works” is less certain.  In my practice, I tell my patients that, at individual synapses, SSRIs probably increase extracellular serotonin levels (at least in the short-term), but we don’t know what that means for your whole brain, much less for your thoughts or behavior.  Some other psychiatrists say that “a serotonin boost might help your depression” or “this drug might act on pathways important for depression.”   Are those lies?  Jeffrey Lacasse and Jonathan Leo write that “telling a falsehood to patients … is a serious violation of informed consent.”  But the same could be said for psychotherapy, religion, tai chi, ECT, rTMS, Reiki, qigong, numerology, orthomolecular psychiatry, somatic re-experiencing, EMDR, self-help groups, AA, yoga, acupuncture, transcendental meditation, and Deplin.  We recommend these things all the time, not knowing exactly how they “work.”

Most of those examples are rather harmless and inexpensive (except for Deplin—it’s expensive), but, like antidepressants, all rest on shaky ground.  So maybe psychiatry’s problem is not the “falsehood” itself, but the consequences of that falsehood.  This faulty hypothesis has spawned an entire industry capitalizing on nothing more than an educated guess, costing our health care system untold millions of dollars, saddling huge numbers of patients with bothersome side effects (or possibly worse), and—most distressingly to me—giving people an incorrect and ultimately dehumanizing solution to their emotional problems.  (What’s dehumanizing about getting better, you might ask?  Well, nothing, except when “getting better” means giving up one’s own ability to manage one’s situation and attributing the success to a pill instead.)

Dr Pies’ article in Psychiatric Times closed with an admonition from psychiatrist Nassir Ghaemi:  “We must not be drawn into a haze of promiscuous eclecticism in our treatment; rather, we must be guided by well-designed studies and the best available evidence.”  That’s debatable.  If we wait for “evidence” for all sorts of interventions that, in many people, do help, we’ll never get anywhere.  A lack of “evidence” certainly hasn’t eliminated religion—or, for that matter, psychoanalysis—from the face of the earth.

Thus, faulty theory or not, there’s still a place for SSRI medications in psychiatry, because some patients swear by them.  Furthermore—and with all due respect to Dr Ghaemi—maybe we should be a bit more promiscuous in our eclecticism.  Medication therapy should be offered side-by-side with competent psychosocial treatments including, but not limited to, psychotherapy, group therapy, holistic-medicine approaches, family interventions, and job training and other social supports.  Patients’ preferences should always be respected, along with safeguards to protect patient safety and guard against excessive cost.  We may not have good scientific evidence for certain selections on this smorgasbord of options, but if patients keep coming back, report successful outcomes, and enter into meaningful and lasting recovery, that might be all the evidence we need.


Where Doctors Get Their Information

January 24, 2012

Doctors spend four years in medical school, still more years in residency, and some devote even more years to fellowship training.   All of this work is done under direct supervision, and throughout the process, trainees learn from their teachers, mentors, and supervisors.  But medicine changes very rapidly.  After all of this training—i.e., once the doctor is “out in the real world”—how does he or she keep up with the latest developments?

Medical journals are the most obvious place to start.  Many doctors subscribe to popular journals like the New England Journal of Medicine or JAMA, or they get journals as a perk of membership in their professional society (for example, the American Journal of Psychiatry for members of the APA).  But the price of journals—and professional society memberships—can accumulate quickly, as can the stacks of unread issues on doctors’ desks.

A second source is continuing medical education credit.  “CMEs” are educational units that doctors are required to obtain in order to keep their medical license.  Some CME sources are excellent, although most CMEs are absurdly easy to obtain (e.g., you watch an online video; answer a few multiple-choice questions about a brief article; or show up for the morning session of a day-long conference, sign your name, then head out the door for a round of golf), making their educational value questionable.  Also, lots of CMEs are funded by pharmaceutical or medical device manufacturers (see here), where bias can creep in.

Direct communication with drug companies—e.g., drug sales reps—can also be a source of information.  Some universities and health-care organizations have “cracked down” on this interaction, citing inappropriate sales techniques and undue influence on doctors.  While docs can still contact the medical departments (or “medical science liaisons”) of big drug companies, this source of info appears to be running dry.

So what’s left?  Medical textbooks?  They’re usually several years out of date, even at the time of publication.  Medical libraries?  Unless you’re affiliated with a teaching hospital, those libraries are off-limits.  “Throwaway” journals?  Every specialty has them—they arrive in the mail, usually unrequested, and contain several topical articles and lots of advertising; but these articles generally aren’t peer-reviewed, and the heavy advertising tends to bias their content.  Medical websites?  Same thing.  (WebMD, for instance, is heavily funded by industry—a point that has not escaped the attention of watchdog senator Charles Grassley.)

Thus, the doctor in the community (think of the psychiatrist in a small group practice in your hometown) is essentially left alone, in the cold, without any unbiased access to the latest research.  This dilemma has become starkly apparent to me in the last several months.  Since last summer, I have worked primarily in a community hospital.  Because it is not an academic institution, it does not provide its employees or trainees access to the primary literature (and yes, that includes psychiatry residents).  I, on the other hand, have been fortunate enough to have had a university affiliation for most of my years of practice, so I can access the literature.  If I need to look up the details of a recent study, or learn about new diagnostic procedures for a given disorder, or prepare for an upcoming talk, I can find just about anything I need.  But this is not the case for my colleagues.  Instead, they rely on textbooks, throwaway journals, or even Wikipedia.  (BTW, Wikipedia isn’t so bad, according to a recent study out of Australia.  But I digress…)

Obviously, if one uses “free” resources to obtain medical information, that info is likely to be as unbiased as the last “free” Cymbalta dinner he or she attended.  Many doctors don’t recognize this.

When it comes to journals, things get more interesting.  All of the top medical journals are available online, and, as with many online newspapers and magazines, individual articles are available for a fee.  But the fees are astronomical—typically $30 or $35 per article—which essentially prohibits any doc from buying more than one or two, let alone doing exhaustive research on a given subject.

Interestingly, some articles are freely available (“open access” is the industry term).  You can try this yourself:  go to pubmed.gov and search for a topic like “bipolar disorder” or “schizophrenia.”  You’ll get thousands of results.  Some results are accompanied by the “Free Article” tag.  You can guess which articles most docs will choose to read.
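
(A side note for the programmatically inclined:  you can also tally how many of those results are freely readable, using NCBI’s public E-utilities interface to PubMed.  The short Python sketch below is purely illustrative, not part of any study discussed here; it relies on PubMed’s “free full text[sb]” search filter, and the counts it prints will obviously change over time.)

    import json
    import urllib.parse
    import urllib.request

    # NCBI E-utilities "esearch" endpoint: returns the number of PubMed records
    # matching a query (retmode=json gives a machine-readable response).
    EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def pubmed_count(term: str) -> int:
        query = urllib.parse.urlencode({"db": "pubmed", "term": term, "retmode": "json"})
        with urllib.request.urlopen(f"{EUTILS}?{query}") as resp:
            return int(json.load(resp)["esearchresult"]["count"])

    for topic in ("bipolar disorder", "schizophrenia"):
        total = pubmed_count(topic)
        free = pubmed_count(f"{topic} AND free full text[sb]")
        print(f"{topic}: {free:,} of {total:,} PubMed records are tagged as free full text")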

Why are some articles free while others aren’t?  What’s the catch?  Well, sometimes there is no catch.  For one, the National Institutes of Health (NIH) requires any research done with its funding to be made freely available within twelve months of a paper’s publication.  This makes sense: NIH funds are our tax dollars, so it’s only fair that we get to see the data.  (But even this is coming under attack, since the publishers want to protect their content—and revenue stream.)

Interestingly, though, some journals also have a “pay-for-open-access” policy, in which an author can pay a higher publication fee to make his/her article freely available.  In other words, if I publish a (non-NIH-funded) study but want it to reach a wider audience than simply those ivory-tower types with access to fully-stocked libraries, I can just pay extra.  That’s right, some publishers give me the option to pay to attract readers like community docs, the lay public, journalists, and others (not to mention potential investors in a company with which I’m affiliated).  The policy for Elsevier, one of the world’s largest academic publishers, on such “sponsored articles” can be found here.

You can see where this might lead.  Call me cynical, but paying for more eyeballs sounds a lot like advertising.  Of course, these are peer-reviewed articles, so they do meet some standards of scientific integrity.  (Or do they?  A recent article suggests that “narrative reviews” often misrepresent or overstate claims of medication efficacy.  See also this summary of the article by Neuroskeptic.)

Anyway, the take-home message is, unfortunately, one that we’ve heard all too often.  Science is supposed to be pristine, objective, and unbiased, but it’s clearly not.  Even when you take out the obvious advertising, the drug-rep showmanship, and the pharma-funded CME, there are still ways for a product-specific message to make its way to a doctor’s eyes and ears.  And if our medical journals supposedly represent the last bastion of scientific integrity—the sacred repository of truth in a world of direct-to-consumer advertising, biased KOLs, and Big Pharma largesse—we should be particularly cautious when they fail to serve that purpose.


The Unfortunate Therapeutic Myopia of the EMR

January 19, 2012

There’s a lot you can say about an electronic medical record (EMR).  Some of it is good: it’s more legible than a written chart, it facilitates billing, and it’s (usually) readily accessible.  On the other hand, EMRs are often cumbersome and confusing, they encourage “checklist”-style medicine, and they contain a lot of useless or duplicate information.  But a recent experience in my child/adolescent clinic opened my eyes to where an EMR might really mislead us.

David, a 9-year-old elementary school student, has been coming to the clinic every month for the last three years.  He carries a diagnosis of “bipolar disorder,” manifested primarily as extreme shifts in mood, easy irritability, insomnia, and trouble controlling his temper, both in the classroom and at home.  Previous doctors had diagnosed “oppositional defiant disorder,” then ADHD, then bipolar.  He had had a trial of psychostimulants with no effect, as well as some brief behavioral therapy.  Somewhere along the way, a combination of clonidine and Risperdal was started, and those have been David’s meds for the last year.

The information in the above paragraph came from my single interaction with David and his mom.  It was the first time I had seen David; he was added to my schedule at the last minute because the doctor he had been seeing for the last four months—a locum tenens doc—was unavailable.

Shortly before the visit, I had opened David’s chart in the EMR to review his case, but it was not very informative.  Our EMR allows only one note to be open at a time, and each of his last three or four notes showed the same thing—”bipolar, stable, continue current meds”—plus some other text, apparently cut & pasted.  This was no big surprise; EMRs are full of cut & pasted material, plus lots of other boilerplate stuff that is necessary for legal & billing purposes but can easily be ignored.  The take-home message, at the time, was that David had been fairly stable for at least the last few months and probably just needed a refill.

During the appointment, I took note that David was a very pleasant child, agreeable and polite.  Mom said he had been “doing well.”  But I also noticed that, throughout the interview, David’s mom was behaving strangely—her head bobbed rhythmically side to side, and her arms moved in a writhing motion.  She spoke tangentially and demonstrated some acute (and extreme) shifts in emotion, at one point even crying suddenly, with no obvious trigger.

I asked questions about their home environment, David’s access to drugs and alcohol, etc., and I learned that mom used Vicodin, Soma, and Xanax.  She admitted that they weren’t prescribed to her—she bought them from friends.  Moreover, she reported that she “had just taken a few Xanax to get out the door this morning” which, she said, “might explain why I’m acting like this.”  She also shared with me that she had been sent to jail four years ago on an accusation of child abuse (she had allegedly struck her teenage daughter during an argument), at which time David and his brothers were sent to an emergency children’s shelter for four nights.

Even though I’m not David’s regular doctor, I felt that these details were relevant to his case.  It was entirely possible, in my opinion, that David’s home environment—a mother using prescription drugs inappropriately, a possible history of trauma—had contributed to his mood lability and “temper dysregulation,” something that a “bipolar” label might mask.

But I’m not writing this to argue that David isn’t “bipolar.”  Instead, I wish to point out that I obtained these details simply by observing the interaction between David and his mom over the course of ~30 minutes and asking a few questions, not by reading his EMR.  In fact, after the appointment I reviewed the last 12 months of his electronic record, which contained dozens of psychiatrists’ notes, therapists’ notes, case manager’s notes, demographic updates, and “treatment plans,” and all of it was generally the same:  diagnosis, brief status updates, LOTS of boilerplate mumbo-jumbo, pages and pages of checkboxes, a few mentions of symptoms.  Nothing about David’s home situation or mom’s past.  In fact, nothing about mom at all.  I could not have been the first clinician to have had concerns about David’s home environment, but if such information was to be found in his EMR, I had no idea where.

Medical charts—particularly in psychiatry—are living documents.  To any physician who has practiced for more than a decade or so, simply opening an actual, physical, paper chart can be like unfolding a treasure map:  you don’t know what you’ll find, but you know that there may be riches to be revealed.   Sometimes, while thumbing through the chart, a note jumps out because it’s clearly detailed or something relevant is highlighted or “flagged” (in the past, I learned how to spot the handwriting of the more perceptive and thorough clinicians).  Devices like Post-It notes or folded pages provide easy—albeit low-tech—access to relevant information.  Also, a thick paper chart means a long (or complicated) history in treatment, necessitating a more thorough review.  Sometimes the absence of notes over a period of time indicates a period of decompensation, a move, or, possibly a period of remission.  All of this is available, literally, at one’s fingertips.

EMRs are far more restrictive.  In David’s case, the EMR was my only source of information—apart from David and his mom themselves.  And for David, it seemed sterile, bland, just a series of “check-ins” of a bipolar kid on Risperdal.  There was probably more info somewhere in there, but it was too difficult and non-intuitive to access.  Hence, the practice (adopted by most clinicians) of just opening up the patient’s most recent note—and that’s it.

Unfortunately, this leads to a therapeutic myopia that may change how we practice medicine.  EMRs, when used this way, are here-and-now.  They have become the medical equivalent of Facebook.  When I log on to the EMR, I see my patient’s most recent note—a “status update,” so to speak—but not much else.  It takes time and effort to search through a patient’s profile for more relevant historical info—and that’s if you know where to look.  After working with seven different EMRs in the last six years, I can say that they’re all pretty similar in this regard.  And if an electronic chart is only going to be used for its most recent note, there’s no incentive to be thorough.

Access to information is great.  But the “usability” of EMRs is so poor that we have easy access only to what the last clinician thought was important.  Or, more accurately, to what he or she decided to document.  The rest—like David’s home life, the potential impact of his mother’s behavior on his symptoms, and environmental factors that require our ongoing attention, all of which may be far more meaningful than David’s last Risperdal dose—must be obtained “from scratch.”  If it is obtained at all.


When “Adherence” Is A Dirty Word

January 16, 2012

Recently, I’ve been spending a lot of time reading the literature on “recovery” from mental illness.  Along the way, I’ve been introduced to the writings of Richard Warner and William Anthony, and peer-leaders in the field like Daniel Fisher and Pat Deegan.  Coincidentally, I also learned recently that my local county mental health system will start training patients and providers in Wellness Recovery Action Planning (“WRAP”), a peer-led illness self-management program which promotes autonomy and recovery.

In the interest of “evidence-based medicine,” the developers of WRAP have performed actual controlled trials of this intervention, comparing it to conventional mental health treatment.  In several studies, they have found that patients engaged in a WRAP program are typically more hopeful, more engaged in their recovery, and—quite surprisingly—have fewer psychiatric symptoms than those who are not.

One such paper was published just last month (pdf here).  The investigators showed that WRAP participants in public clinics throughout Ohio were more engaged in “self-advocacy” than patients who were not involved in WRAP, and that this led to improvements in quality of life and—consistent with their earlier studies—a reduction in psychiatric symptoms.  Their measure of “self-advocacy” was the Patient Self-Advocacy Scale (PSAS), “an instrument designed to measure a person’s propensity to engage in self-activism during health care encounters.”

Throughout the intervention, WRAP patients had a consistently higher PSAS score than others.  But their scores were particularly elevated in one subscale: “Mindful Non-Adherence.”

Non-adherence?  I must confess, I did a double-take.  If my years of training in modern psychiatry have taught me one thing, it is that adherence is a primary (yet elusive) goal in the treatment of patients with serious mental illness.  In fact, the high rate of non-adherence has become the biggest sales pitch for new long-acting injectable antipsychotics like Invega Sustenna.

And now a paper is showing that non-adherence—i.e., the active refusal of medications or other suggestions from one’s doctor—is a good thing.  Really?

Intrigued, I looked more closely at the PSAS.  It was developed in 1999 by Dale Brashers of the communications department at the University of Illinois.  The scale was designed not as a clinical tool, but rather as a measure of how people manage interactions with their health care providers.  The initial studies focused on patients in the HIV/AIDS community (e.g., in organizations like ACT UP) and on health care communication patterns among patients who describe themselves as “activists.”

The PSAS scale includes three dimensions:  illness education, assertiveness, and “potential for mindful non-adherence.”  The first two are fairly self-explanatory.  But the third one is defined as “a tendency to reject treatments” or “a willingness to be nonadherent when treatments fail to meet the patient’s expectations.”  Four questions on the PSAS survey assess this potential, including #10: “Sometimes I think I have a better grasp of what I need than my doctor does” and #12: “I don’t always do what my physician or health care worker has asked me to do.”

In the WRAP study published last month, greater agreement with these questions—i.e., greater willingness to be nonadherent—yielded a higher PSAS score.  I should point out that in a separate analysis, high non-adherence scores were not associated with better clinical outcomes, but education and assertiveness (and overall PSAS scores) were.  Nevertheless, when data suggest that patients might benefit from the active “defiance” of doctors’ orders, we physicians should take this seriously.

We can start by helping patients make reasoned treatment decisions.  The term “mindful non-adherence” implies that the patient knows something valuable, and that he or she is willing to act on this knowledge, against the wishes of the physician.  Few providers would admit that the patient has greater knowledge than the “expert” clinician.  After all, that’s why most of us engage in psychoeducation: to inform, enable, and empower our patients.

However, maybe the matters on which we “educate” our patients are ultimately irrelevant.  Maybe patients don’t want (or need) to know which parts of their brains are affected in psychosis, ADHD, or OCD, or how dopamine blockade reduces hallucinations; they just want strategies to alleviate their suffering.  The same may hold true for other areas of medicine, too.  As discussed in a recent article in the online Harvard Business Review, serious problems may arise when too much information is unloaded on patients without the guidance of a professional or, better yet, a peer who has “been there.”

Mental health care may provide the perfect arena in which to test the hypothesis that patients, when given enough information, know what’s best for themselves in the long run.  In a field where one’s own experience is really all that matters, maybe a return to patient-centered decision-making—what Pat Deegan calls the “dignity of risk” and the “right to failure”—is necessary.  At the very least, we physicians should get comfortable with the fact that, sometimes, a patient saying “no” may be the best prescription possible.


“Explanation” vs. “Exploration” in Mental Illness

January 11, 2012

A quickie post here today:  I invite you all to go check out my most recent contribution to the “Couch in Crisis” blog at Psychiatric Times online, entitled “Symptoms and What They Mean.”

http://www.psychiatrictimes.com/blog/couchincrisis/content/article/10168/2017035

Free registration at Psychiatric Times is required.  Cheers!


Two New Ways To Get Sued

January 6, 2012

The last week hasn’t been a very uplifting one for psychiatrists who pay attention to the news.  For as much as we complain about shrinking reimbursements, the undue influence of Big Pharma, and government meddling in our therapeutic work, we psychiatrists now have two new reasons to be concerned.

And, maybe, to lawyer up.

I. APA Threatens Blogger

Most readers who follow this blog will certainly have seen this story already, since it was first reported in Allen Frances’ Psychology Today blog.  So I know I’m just preaching to the choir here, but frankly, in my opinion, this story cannot receive too much attention.

As you probably know, American Psychiatric Publishing, a branch of the APA, threatened to sue a British blogger, Suzy Chapman, for her blog “dsm5watch.”  They argued that the use of “dsm5” in her blog title constituted trademark infringement.  She has moved her content to “dxrevisionwatch” and describes her reasons for doing so here.

I had been following the “dsm5watch” blog since February 2011 via my RSS feed, and have linked to its content in some of my posts.  It was first launched way back in December 2009.  I thought it was a fair, balanced way for readers to keep abreast of the DSM-5 development process (for a while, I actually thought it was published by the APA!!).  Granted, many of the posts were about CFS/ME (chronic fatigue syndrome/myalgic encephalomyelitis), and the blog often mentioned the DSM-5 controversy, but it contained nothing that hadn’t been published elsewhere.

In my humble opinion, shutting it down was simply a misguided, heavy-handed move by the APA.  Why “misguided”?  As psychotherapist and author Gary Greenberg wrote in his blog Thursday, “the APA is a corporation that, like any other, will do anything to protect itself from harm…. And it spends a lot of time imagining dangers.”

Suzy Chapman, congratulations, you are the “bad object” of the APA’s paranoid projection.

This entire fiasco has the potential to become a huge embarrassment to the field of psychiatry.  I guess I can understand why the APA might wish to protect its intellectual property, but the idea of “picking on the little guy”—especially when the “little guy” is simply keeping readers informed about developments in our field of (supposedly) intellectual, scientific endeavor—makes me ashamed to think that these men and women speak for me.

II. Patients Sue Doctors for Creating “Valium Addicts”

This article, too, has made the rounds on several blogs and news sites, and while it was published in a UK tabloid well-known for several anti-medication stories in the past, I think the message it sends is an important one.

Benzodiazepines, or “benzos” (which include Valium, Xanax, Klonopin, and Ativan), are some of the most widely prescribed drugs in the US and Great Britain, and among the most addictive.  Tolerance to the anxiolytic effects of benzos develops very rapidly, so people often request higher doses; but overdose can be deadly due to respiratory depression, and the withdrawal syndrome—which can include seizures and delirium—can also be life-threatening.

Benzos have been popular since the 1960s, when they largely replaced the barbiturates as the “Mother’s Little Helper” that the Rolling Stones sang about back in 1966.  Their rapid onset and calming effect—much like that of alcohol—and their ability to potentiate the effects of other drugs, like opiates, often lead to use, abuse, and addiction.

[Not to get too tangential here, but last week’s episode of “Real Housewives of Beverly Hills” (hey, it’s one of my wife’s favorite shows, and we have only one TV) featured Brandi in a Xanax-and-alcohol-fueled daze, enjoying a mai tai with her girlfriends at a Lanai resort.  Oh, and she had trouble keeping her right nipple in her cocktail dress.  Is it any wonder why people request benzos by name???]

Anyway, to get back on track:  Benzos are effective drugs.  And their utility and versatility—not to mention their street value—give them a cachet that’s hard to exaggerate.  More importantly, the potential dangers, which are compounded in patients with a high tolerance, mean that they really should be prescribed for very short intervals, if at all.

But the responsible use of benzos requires effort on the part of the prescriber.  It takes time to explain to the patient the risks of tolerance and withdrawal.  It also takes time to teach other methods of managing anxiety.  Doctors (and, increasingly, patients) just don’t have that kind of time—or don’t want to find it.  Moreover, they (we) find it difficult to say “no” to patients when they describe something working so well.

Hence, it’s not uncommon for doctors to see patients taking 4 mg of Xanax or 8 mg of Klonopin daily, and still complaining of anxiety or restlessness or “jitteriness” and asking for more.  Patients on these regimens rarely want to stop them (even when told of the long-term dangers), and when they do, the withdrawal process is not one to be taken lightly.  (The Ashton Manual—available online—is the authoritative resource for managing benzo withdrawal.)

Do I believe it’s fair to sue doctors who turn their patients into “benzo addicts”?  That’s a difficult question, particularly because of the tricky nature of the word “addict.”  If we instead talk about making patients physically dependent on benzos, then the question can be reframed as:  Should we blame doctors for creating a physiological state in a patient which has the potential to be life-threatening if not managed properly?

Before answering “Hell yes!” it must be understood that just about everything we do in psychopharmacology (if not all of medicine) “has the potential to be life-threatening if not managed properly.”  The real issue is, how likely is an adverse outcome, and how well does the doctor manage it?  Of course, there’s also the question about whether the patient bears any responsibility in the overuse or abuse of the drug.  But even if a patient knowingly takes more than what is recommended and the doctor knows this, it is the doctor’s responsibility to respond accordingly.

In my book, there’s no excuse for the indiscriminate prescribing of benzodiazepines.  There’s also no excuse for abruptly discharging a heavy benzo user from one’s practice, or “dumping” him on a public clinic or detox facility.  (Trust me, this happens A LOT.)  Whether a doc should be sued for this is not my area of expertise.  However, I think it is good that attention is being drawn to what is, in the end, just bad medicine.  Hopefully the systems in place that foster this sort of care—inadequate medical education, poor reimbursement for therapy, emphasis on medication management, and arbitrary insurance-company regulations that limit access to more effective treatment—can be changed soon.

But I’m not holding my breath.  I’m calling my attorney.


Biomarker Envy VI: Therapygenetics

January 4, 2012

I enjoy learning about new developments in psychiatry just as much as the next guy.  In particular, developments that promise to make treatment more effective or “individualized.”  So I was intrigued by the title of a recent paper in Molecular Psychiatry, which seemed to herald the rise of a new use for genetic testing.  But not for a biological therapy.  No, this new use for genotyping is to predict which type of psychotherapy is best for a patient.

It even has a snappy new name:  “therapygenetics.”  The term was coined by Thalia Eley and her colleagues at King’s College London, authors of the study.  Basically, the study suggests that variation in a particular gene sequence might predict patients’ responses to a psychotherapeutic intervention.  And according to a recent editorial (pdf here) in Trends in Cognitive Sciences, this might be the first step in a new era of “personalized psychotherapy.”

“Personalized psychotherapy?”  A colleague of mine happened to see the editorial on my desk.  After looking at that phrase for a moment, puzzled, he asked, “Isn’t psychotherapy personalized already?”  Good question.  After all, psychotherapy is the quintessential personalized medicine, isn’t it?  Haven’t we been criticizing “biomarker” studies because they try to personalize treatment by measuring chemicals in the blood, scanning people’s brains, and doing genetic tests?  Basically everything except talking to the person???

Not so fast.  Before long, your psychotherapist might ask you for a cheek swab or blood sample on your first visit.  And who knows—you just might thank him for it.

In this prospective, observational study, Eley’s group studied 359 children in the UK and Australia who enrolled in a cognitive behavior therapy clinic for treatment of an anxiety disorder.  Before starting therapy, the children underwent genomic testing, specifically for their genotype at the 5-HTTLPR, a polymorphism in the promoter region of the serotonin transporter gene.  They found that children with the SS genotype (i.e., two copies of the “S” allele) were more likely to respond to cognitive behavior therapy (CBT) than the other children.  It wasn’t an absolute benefit, but the data looked pretty good, particularly after six months of follow-up.

These results may be reminiscent of some earlier work.  The 5-HTTLPR has become the “workhorse” of psychiatric pharmacogenetics, ever since Caspi’s 2003 publication (pdf here) reporting that the S allele predisposes people to depression if they also experience stressful life events.  That result was challenged in 2009, however, by larger studies showing no effect.  Then, this result was overturned again by an even larger analysis (pdf here) showing that, indeed, the S allele might in fact moderate the stress-depression relationship.  Confused yet?  I agree, it’s enough to make one’s head spin.  Or, at least, to make us conclude that we don’t really know whether it makes people more susceptible to depression.

But maybe Eley’s finding can lead us in a slightly different direction.  Maybe the 5-HTTLPR genotype makes one more responsive to treatment.  Maybe the S allele makes a person not just more sensitive to stressful life events (and therefore more likely to become depressed or anxious), but also more able to overcome them through therapy?

It’s an intriguing suggestion, but easy to dismiss.  After all, who’s to say that another group won’t overturn this result and lead us right back to square one?  And does this mean that we should test all our patients before subjecting them to CBT?  Who’s going to do that?  (And pay for it?)

Personally, I don’t believe Eley’s paper should be casually tossed aside.  First of all, anything that improves patient outcomes (yes, even pharmacogenetics) deserves study.  And while it’s probably premature to use genetic tests to assign people to psychotherapy interventions, it is encouraging to see this work.  Specifically, this is a first attempt to take an endophenotype and use it to enhance treatment response.

An endophenotype is a heritable feature—biochemical, anatomical, psychological—that is simpler than a “diagnosis” like depression or an anxiety disorder, but which can be readily observed and measured.  (See excellent review here.)  In this case, the S allele of 5-HTTLPR might bias one’s attention toward emotional stimuli, an endophenotype that has been found in other research.  (Note: in a previous naturalistic study of bulimic patients, the S allele was correlated with greater novelty seeking and insecure attachment.)  If such an endophenotype can be found in other subgroups of depressed and anxious patients, then it makes sense that we might be able to employ treatment strategies that exploit this psychological feature.

This is purely theoretical, of course, but the beauty is that this theory is entirely testable, and the Eley paper is the first attempt to do so in a non-pharmacological setting.  Behavioral and psychological endophenotypes offer a perfect opportunity to test the efficacy of psychosocial approaches, which, by definition, target patients’ behavior and psychology.  Biological phenotypes can also be tested (e.g., with different pharmacological interventions), but there are always several steps between a change in biology and a person’s subjective report of effect—this is the bane of psychiatry.

In other words, using psychobehavioral endophenotypes to enhance treatment offers face validity.  It just makes sense to both the clinician and patient.  Most patients would willingly submit to a genetic test (or, for that matter, a battery of psychological tests — maybe a “psychomarker” is on the horizon?) to match them with a psychotherapeutic treatment.  However, using a CYP450 genotype, or brain scan, or quantitative EEG, to predict which drug is best for them, just seems, well, weird.

In conclusion, I’ll quote a passage from the Trends in Cognitive Sciences editorial:  “Genetic variation can (and should) be incorporated into psychosocial treatment research…. Doing so promises to deliver a fuller, more nuanced understanding of psychopathology which, in turn, could enhance the ability to tailor treatments to individuals based on genetic profile, increase the effectiveness of psychosocial treatments, and ultimately alleviate substantial suffering associated with psychiatric illness.”

Hopefully, this will turn out to be true.  It seems like the best way to harness the inevitable (and money-driven) push toward genotyping: using such data to maximize patient response, rather than simply making our treatment even more automated and algorithmic (the psychiatric industry’s version of “personalized”) than it already appears to be.

