Sleeping Pills Are Deadly? Says Who, Exactly?

March 1, 2012

As most readers know, we’re paying more attention than ever before to conflicts of interest in medicine.   If an individual physician, researcher, speaker, or author is known to have a financial relationship with a drug company, we publicize it.  It’s actually federal law now.  The idea is that doctors might be biased by drug companies who “pay” them (either directly—through gifts, meals, or cash—or indirectly, through research or educational grants) to say or write things that are favorable to their drug.

A recent article on the relationship between sedative/hypnotics and mortality, published this week in BMJ Open (the online version of the British Medical Journal) and widely publicized, raises additional questions about the conflicts and biases that individual researchers bring to their work.

Co-authors Daniel Kripke, of UC San Diego, and Robert Langer, of the Jackson Hole Center for Preventive Medicine, reviewed the electronic charts of over 30,000 patients in a rural Pennsylvania health plan.  Approximately 30% of those patients received at least one prescription for a hypnotic (a benzodiazepine like Klonopin or Restoril, or a sleeping agent like Lunesta or Ambien) during the five-year study period, and there was a strong relationship between hypnotics and risk of death.  The more prescriptions one received, the greater the likelihood that one would die during the study period.  There was also a specifically increased risk of cancer in groups receiving the largest number of hypnotic prescriptions.

The results have received wide media attention.  Mainstream media networks, major newspapers, popular websites, and other outlets have run with sensational headlines like “Higher Death Risk With Sleeping Pills” and “Sleeping Pills Can Bring On the Big Sleep.”

But the study has received widespread criticism, too.  Many critics have pointed out that concurrent psychiatric diagnoses were not addressed, so mortality may have been related more to suicide or substance abuse.  Others point out the likelihood of Berkson’s Bias—the fact that the cases (those who received hypnotic prescriptions) may have been far sicker than controls, despite attempts to match them.  The study also failed to report other medications patients received (like opioids, which can be dangerous when given with sedative/hypnotics) or to control for socioeconomic status.

What has not received a lot of attention, however, is the philosophical (and financial) bias of the authors.  Lead author Daniel Kripke has been, for many years, an outspoken critic of the sleeping pill industry.  He has also widely criticized the conventional wisdom that people need 8 or more hours of sleep per night.  He has written books about it, and was even featured on the popular Showtime TV show “Penn & Teller: Bullshit!” railing against drug companies (and doctors) who profit by prescribing sleep meds.  Kripke is also one of the pioneers of “bright light therapy” (using high-intensity light to affect circadian rhythms)—first in the area of depression, and, most recently, to improve sleep.  To the best of my knowledge, he has no financial ties to the makers of light boxes.  Then again, light boxes are technically not medical devices and, therefore, are not regulated by the FDA, so he may not be required to report any affiliation.  Nevertheless, he clearly has had a decades-long professional interest in promoting light therapy and demonizing sleeping pills.

Kripke’s co-author, Robert Langer, is an epidemiologist, a past site coordinator of the Women’s Health Initiative, and a staunch advocate of preventive medicine.  More importantly, though (and advertised prominently on his website), he is an expert witness in litigation involving hormone replacement therapy (HRT), and also in cancer malpractice cases.  Like Kripke, he has also found a place in the media spotlight; he will be featured in “Hot Flash Havoc,” a movie about HRT in menopausal women, to be released later this month.

[Interestingly, Kripke and Langer also collaborated on a 2011 study showing that sleep times >6.5 hrs or <5 hrs were associated with increased mortality.  One figure looked virtually identical to figure 1 in their BMJ paper (see below).  It would be interesting to know whether mortality in the current study is indeed due to sedative prescriptions or, if the results of their earlier paper are correct, simply due to the fact that the people requesting sedative prescriptions in the first place are the ones with compromised sleep and, therefore, increased mortality.  In other words, maybe the sedative is simply a marker for something else causing mortality—the same argument raised above.]

Do the authors’ backgrounds bias their results?  If Kripke and Langer were receiving grants and speakers’ fees from Forest Labs, and published an article extolling the benefits of Viibryd, Forest’s new antidepressant, how would we respond?  Might we dig a little deeper?  Approach the paper with more skepticism?  Is the media publicizing this study (largely uncritically) because its conclusion resonates with the “politically correct” idea that psychotropic medications are bad?  Michael Thase (a long-time pharma-sponsored researcher and U Penn professor) was put in the hot seat on “60 Minutes” a few weeks ago about whether antidepressants provide any benefit, but Kripke and Langer—two equally prominent researchers—seem to be getting a free ride, as far as the media are concerned.

I’m not trying to defend the drug industry, and I’m certainly not defending sedatives.  My own bias is that I prefer to minimize my use of hypnotics in my patients—although my opposition is not so much because of their cancer or mortality risk, but rather the risk of abuse, dependence, and their effect on other psychiatric and medical symptoms.  The bottom line is, I want to believe the BMJ study.  But more importantly, I want the medical literature to be objective, fair, and unbiased.

Unfortunately, it’s hard—if not impossible—to avoid bias, particularly when you’ve worked in a field for many years (like Kripke and Langer) and have a strong belief about why things are the way they are.  In such a case, it seems almost natural that you’d want to publish research providing evidence in support of your belief.  But when does a strongly held belief become a conflict of interest?  Does it contribute to a bias in the same way that a psychopharmacologist’s financial affiliation with a drug company might?

These are just a few questions that we’ll need to pay closer attention to, as we continue to disclose conflicts of interest among medical professionals.  Sometimes bias is obvious and driven by one’s pocketbook, other times it is more subtle and rooted in one’s beliefs and experience.  But we should always be wary of the ways in which it can compromise scientific objectivity or lead us to question what’s really true.


Big Brother Is Watching You (Sort Of)

February 17, 2012

I practice in California, which, like most (but not all) states, has a service by which I can review my patients’ controlled-substance prescriptions.  “Controlled” substances are those drugs with a high potential for abuse, such as narcotic pain meds (e.g., Vicodin, Norco, OxyContin) or benzodiazepines (e.g., Xanax, Valium, Klonopin).  The thinking is that if we can follow patients who use high amounts of these drugs, we can prevent substance abuse or the illicit sale of these medications on the street or black market.

Unfortunately, California’s program may be on the chopping block.  Due to budget constraints, Governor Jerry Brown is threatening to close the Bureau of Narcotic Enforcement (BNE), the agency which tracks pharmacy data.  At present, the program is being supported by grant money—which could run out at any time—and there’s only one full-time staff member managing it.  Thus, while other states (even Florida, despite the opposition of Governor Rick Scott) are scrambling to implement programs like this one, it’s a travesty that we in California might lose ours.

Physicians (and the DEA) argue that these programs are valuable for detecting “doctor shoppers”—i.e., those who go from office to office trying to obtain Rx’es for powerful opioids with street value or addictive potential.  Some have even argued that there should be a nationwide database, which could help us identify people involved in interstate drug-smuggling rings like the famous “OxyContin Express” between rural Appalachia and Florida.

But I would say that the drug-monitoring programs should be preserved for an entirely different reason: namely, that they help to improve patient care.  I frequently check the prescription histories of my patients.  I’m not “playing detective,” seeking to bust a patient who might be abusing or selling their pills.  Rather, I do it to get a more accurate picture of a patient’s recent history.  Patients may come to me, for example, with complaints of anxiety while the database shows they’re already taking large amounts of Xanax or Ativan, occasionally from multiple providers.  Similarly, I might see high doses of pain medications, which (if prescribed & taken legitimately) clues me in to the possibility that pain management may be an important aspect of treating their psychiatric concerns, or vice versa.

I see no reason whatsoever that this system couldn’t be extended to non-controlled medications.  In fact, it’s just a logical extension of what’s already possible.  Most of my patients don’t recognize that I can call every single pharmacy in town and ask for a list of all their medications.  All I need is the patient’s name and birthdate.  Of course, there’s no way in the world I would do this, because I don’t have enough time to call every pharmacy in town.  So instead, I rely largely on what the patient tells me.  But sometimes there’s a huge discrepancy between what patients say they’re taking and what the pharmacy actually dispenses, owing to confusion, forgetfulness, language barriers, or deliberate obfuscation.

So why don’t we have a centralized, comprehensive database of patient med lists?

Some would argue it’s a matter of privacy.  Patients might not want to disclose that they’re taking Viagra or Propecia or an STD treatment (or methadone—for some reason patients frequently omit that opioid).  But that argument doesn’t hold much water, because in practice, as I wrote above, I could, in theory, call every pharmacy in one’s town (or state) and find that out.

Another argument is that it would be too complicated to gather data from multiple pharmacies and correlate medication lists with patient names.  I don’t buy this argument either.  Consider “data mining.”  This widespread practice allows pharmaceutical companies to get incredibly detailed descriptions of all medications prescribed by each licensed doctor.  The key difference here, of course, is that the data are linked to doctors, not to patients, so patient privacy is not a concern.  (The privacy of patients is sacred, that of doctors, not so much; the Supreme Court even said so.)  Nevertheless, when my Latuda representative knows exactly how much Abilify, Seroquel, and Zyprexa I’ve prescribed in the last 6 months, and knows more about my practice than I do (unless I’ve decided to opt out of this system), then a comprehensive database is clearly feasible.

Finally, some would argue that a database would be far too expensive, given the costs of collecting data, hiring people to manage it, etc.  Maybe if it’s run by government bureaucrats, yes, but I believe this argument is out of touch with the times.  Why can’t we find some out-of-work Silicon Valley engineers, give them a small grant, and ask them to build a database that would collect info from pharmacy chains across the state, along with patient names & birthdates, which could be searched through an online portal by any verified physician?  And set it up so that it’s updated in real time.  Maintenance would probably require just a few people, tops.
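In fact, the core of what those engineers would build is almost trivially small.  Here’s a toy sketch of the idea—pharmacies push each fill into a shared table, and a verified physician queries by name and birthdate.  Every table name, field, and sample row below is hypothetical, and a real system would obviously need provider authentication, audit logging, and serious privacy safeguards on top of it:

```python
import sqlite3

# Toy sketch of the proposed statewide dispensation database.
# All table and field names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE dispensations (
        patient_name TEXT NOT NULL,
        patient_dob  TEXT NOT NULL,   -- ISO 8601 date
        drug         TEXT NOT NULL,
        pharmacy     TEXT NOT NULL,
        filled_on    TEXT NOT NULL    -- pharmacies push rows as fills occur
    )
""")

# Fills pushed by two different pharmacy chains for the same patient
conn.executemany(
    "INSERT INTO dispensations VALUES (?, ?, ?, ?, ?)",
    [
        ("Jane Doe", "1970-05-01", "alprazolam 1 mg", "Pharmacy A", "2012-01-03"),
        ("Jane Doe", "1970-05-01", "lorazepam 2 mg",  "Pharmacy B", "2012-01-20"),
    ],
)

def medication_history(name, dob):
    """What a physician's portal query might return: every fill,
    across every pharmacy, in chronological order."""
    return conn.execute(
        "SELECT drug, pharmacy, filled_on FROM dispensations "
        "WHERE patient_name = ? AND patient_dob = ? ORDER BY filled_on",
        (name, dob),
    ).fetchall()

history = medication_history("Jane Doe", "1970-05-01")
for drug, pharmacy, filled_on in history:
    print(f"{filled_on}: {drug} ({pharmacy})")
```

The hard part, of course, isn’t the query—it’s getting every pharmacy chain to feed the same table, and deciding who gets to run that query.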

Not only does such a proposal sound eminently doable, it actually sounds like it might be easy (and maybe even fun) to create.  If a group of code warriors & college dropouts can set up microblogging platforms, social networking sites, and online payment sites, fueled by nothing more than enthusiasm and Mountain Dew, then a statewide prescription database could be a piece of cake.

Alas, there are just too many hurdles to overcome.  Although it may seem easy to an IT professional, and may seem like just plain good medicine to a doc like me, history has a way of showing that what makes the best sense just doesn’t happen (especially when government agencies are involved).  Until this changes, I’ll keep bothering my local pharmacists by phone to get the information that would be nice to have at my fingertips already.


ADHD: A Modest Proposal

February 1, 2012

I’m reluctant to write a post about ADHD.  It just seems like treacherous ground.  Judging by comments I’ve read online and in magazines, and my own personal experience, expressing an opinion about this diagnosis—or just about anything in child psychiatry—will be met with criticism from one side or another.  But after reading L. Alan Sroufe’s article (“Ritalin Gone Wild”) in this weekend’s New York Times, I feel compelled to write.

If you have not read the article, I encourage you to do so.  Personally, I agree with every word (well, except for the comment about “children born into poverty therefore [being] more vulnerable to behavior problems”—I would remind Dr Sroufe that correlation does not equal causation).  In fact, I wish I had written it.  Unfortunately, it seems that only outsiders or retired psychiatrists can write such stuff about this profession. The rest of us might need to look for jobs someday.

Predictably, the article has attracted numerous online detractors.  For starters, check out this response from the NYT “Motherlode” blog, condemning Dr Sroufe for “blaming parents” for ADHD.  In my reading of the original article, Dr Sroufe did nothing of the sort.  Rather, he pointed out that ADHD symptoms may not entirely (or at all) arise from an inborn neurological defect (or “chemical imbalance”), but rather that environmental influences may be more important.  He also remarked that, yes, ADHD drugs do work; children (and adults, for that matter) do perform better on them, but those successes decline over time, possibly because a drug solution “does nothing to change [environmental] conditions … in the first place.”

I couldn’t agree more.  To be honest, I think this statement holds true for much of what we treat in psychiatry, but it’s particularly relevant in children and adolescents.  Children are exposed to an enormous number of influences as they try to navigate their way in the world, not to mention the fact that their brains—and bodies—continue to develop rapidly and are highly vulnerable.  “Environmental influences” are almost limitless.

I have a radical proposal which will probably never, ever, be implemented, but which might help resolve the problems raised by the NYT article.  Read on.

First of all, you’ll note that I referred to “ADHD symptoms” above, not “ADHD.”  This isn’t a typo.  In fact, this is a crucial distinction.  As with anything else in psychiatry, diagnosing ADHD relies on documentation of symptoms.  ADHD-like symptoms are extremely common, particularly in child-age populations.  (To review the official ADHD diagnostic criteria from the DSM-IV, click here.)  To be sure, a diagnosis of ADHD requires that these symptoms be “maladaptive and inconsistent with developmental level.”  Even so, I’ve often joked with my colleagues that I can diagnose just about any child with ADHD just by asking the right questions in the right way.  That’s not entirely a joke.  Try it yourself.  Look at the criteria, and then imagine you have a child in your office whose parent complains that he’s doing poorly in school, or gets in fights, or refuses to do homework, or daydreams a lot, etc.  When the ADHD criteria are on your mind—remember, you have to think like a psychiatrist here!—you’re likely to ask leading questions, and I guarantee you’ll get positive responses.

That’s a lousy way of making a diagnosis, of course, but it’s what happens in psychiatrists’ and pediatricians’ offices every day.  There are more “valid” ways to diagnose ADHD:  rating scales like the Conners or Vanderbilt surveys, extensive neuropsychiatric assessment, or (possibly) expensive imaging tests.  However, in practice, we often let subthreshold scores on those surveys “slide” and prescribe ADHD medications anyway (I’ve seen it plenty); neuropsychiatric assessments are often wishy-washy (“auditory processing score in the 60th percentile,” etc); and, as Dr Sroufe correctly points out, children with poor motivation or “an underdeveloped capacity to regulate their behavior” will most likely have “anomalous” brain scans.  That doesn’t necessarily mean they have a disorder.

So what’s my proposal?  My proposal is to get rid of the diagnosis of ADHD altogether.  Now, before you crucify me or accuse me of being unfit to practice medicine (as one reader—who’s also the author of a book on ADHD—did when I floated this idea on David Allen’s blog last week), allow me to elaborate.

First, if we eliminate the diagnosis of ADHD, we can still do what we’ve been doing.  We can still evaluate children with attention or concentration problems, or hyperactivity, and we can still use stimulant medications (of course, they’d be off-label now) to provide relief—as long as we’ve obtained the same informed consent that we’ve done all along.  We do this all the time in medicine.  If you complain of constant toe and ankle pain, I don’t immediately diagnose you with gout; instead, I might do a focused physical exam of the area and recommend a trial of NSAIDs.  If the pain returns, or doesn’t improve, or you have other features associated with gout, I may want to check uric acid levels, do a synovial fluid analysis, or prescribe allopurinol.

That’s what medicine is all about:  we see symptoms that suggest a diagnosis, and we provide an intervention to help alleviate the symptoms while paying attention to the natural course of the illness, refining the diagnosis over time, and continually modifying the therapy to treat the underlying diagnosis and/or eliminate risk factors.  With the ultimate goal, of course, of minimizing dangerous or expensive interventions and achieving some degree of meaningful recovery.

This is precisely what we don’t do in most cases of ADHD.  Or in most of psychiatry.  While exceptions definitely exist, often the diagnosis of ADHD—and the prescription of a drug that, in many cases, works surprisingly well—is the end of the story.  Child gets a diagnosis, child takes medication, child does better with peers or in school, parents are satisfied, everyone’s happy.  But what caused the symptoms in the first place?  Can (or should) that be fixed?  When can (or should) treatment be stopped?  How can we prevent long-term harm from the medication?

If, on the other hand, we don’t make a diagnosis of ADHD, but instead document that the child has “problems in focusing” or “inattention” or “hyperactivity” (i.e., we describe the specific symptoms), then it behooves us to continue looking for the causes of those symptoms.  For some children, it may be a chaotic home environment.  For others, it may be a history of neglect, or ongoing substance abuse.  For others, it may be a parenting style or interaction which is not ideal for that child’s social or biological makeup (I hesitate to write “poor parenting” because then I’ll really get hate mail!).  For still others, there may indeed be a biological abnormality—maybe a smaller dorsolateral prefrontal cortex (hey! the DLPFC!) or delayed brain maturation.

ADHD offers a unique platform upon which to try this open-minded, non-DSM-biased approach.  Dropping the diagnosis of “ADHD” would have a number of advantages.  It would encourage us to search more deeply for root causes; it would allow us to be more eclectic in our treatment; it would prevent patients, parents, doctors, teachers, and others from using it as a label or as an “excuse” for one’s behavior; and it would require us to provide truly individualized care.  Sure, there will be those who simply ask for the psychostimulants “because they work” for their symptoms of inattentiveness or distractibility (and those who deliberately fake ADHD symptoms because they want to abuse the stimulant or because they want to get into Harvard), but hey, that’s already happening now!  My proposal would create a glut of “false negative” ADHD diagnoses, but it would also reduce the above “false positives,” which, in my opinion, are more damaging to our field’s already tenuous nosology.

A strategy like this could—and probably should—be extended to other conditions in psychiatry, too.  I believe that some of what we call “ADHD” is truly a disorder—probably multiple disorders, as noted above; the same is probably true with “major depression,” “bipolar disorder,” and just about everything else.  But when these labels start being used indiscriminately (and unfortunately DSM-5 doesn’t look to offer any improvement), the diagnoses become fixed labels and lock us into an approach that may, at best, completely miss the point, and at worst, cause significant harm.  Maybe we should rethink this.


Whatever Works?

January 29, 2012

My iPhone’s Clock Radio app wakes me each day to the live stream of National Public Radio.  Last Monday morning, I emerged from my post-weekend slumber to hear Alix Spiegel’s piece on the serotonin theory of depression.  In my confused, half-awake state, I heard Joseph Coyle, professor of psychiatry at Harvard, remark: “the ‘chemical imbalance’ is sort of last-century thinking; it’s much more complicated than that.”

Was I dreaming?  It was, admittedly, a surreal experience.  It’s not every day that I wake up to the voice of an Ivy League professor lecturing me in psychiatry (those days are long over, thank god).  Nor did I ever expect a national news program to challenge existing psychiatric dogma.  As I cleared my eyes, though, I realized, this is the real deal.  And it was refreshing, because this is what many of us have been thinking all along.  The serotonin hypothesis of depression is kaput.

Understandably, this story has received lots of attention (see here and here and here and here and here).  I don’t want to jump on the “I-told-you-so” bandwagon, but instead to offer a slightly different perspective.

A few disclaimers:  first and foremost, I agree that the “chemical imbalance” theory has overrun our profession and has commandeered the public’s understanding of mental illness—so much so that the iconic image of the synaptic cleft containing its neurotransmitters has become ensconced in the national psyche.  Secondly, I do prescribe SSRIs (serotonin-reuptake inhibitors), plus lots of other psychiatric medications, which occasionally do work.  (And, in the interest of full disclosure, I’ve taken three of them myself.  They did nothing for me.)

To the extent that psychiatrists talk about “chemical imbalances,” I can see why this could be misconstrued as “lying” to patients.  Ronald Pies’ eloquent article in Psychiatric Times last summer describes the chemical-imbalance theory as “a kind of urban legend,” which no “knowledgeable, well-trained psychiatrist” would ever believe.  But that doesn’t matter.  Thanks to us, the word is out there.  The damage has already been done.  So why, then, do psychiatrists (even the “knowledgeable, well-trained” ones) continue to prescribe SSRI antidepressants to patients?

Because they work.

Okay, maybe not 100% of the time.  Maybe not even 40% of the time, according to antidepressant drug trials like STAR*D.  Experience shows, however, that they work often enough for patients to come back for more.  In fact, when discussed in the right context, their potential side effects described in detail, and prescribed by a compassionate and apparently intelligent and trusted professional, antidepressants probably “work” far more than they do in the drug trials.

But does that make it right to prescribe them?  Ah, that’s an entirely different question.  Consider the following:  I may not agree with the serotonin theory, but if I prescribe an SSRI to a patient with depression, and they report a benefit, experience no obvious side effects, pay only $4/month for the medication, and (say) $50 for a monthly visit with me, is there anything wrong with that?  Plenty of doctors would say, no, not at all.  But what if my patient (justifiably so) doesn’t believe in the serotonin hypothesis and I prescribe anyway?  What if my patient experiences horrible side effects from the drug?  What if the drug costs $400/month instead of $4?  What if I charge the patient $300 instead of $50 for each return visit?  What if I decide to “augment” my patient’s SSRI with yet another serotonin agent, or an atypical antipsychotic, costing hundreds of dollars more, and potentially causing yet more side effects?  Those are the aspects that we don’t often think of, and they constitute the unfortunate “collateral damage” of the chemical-imbalance theory.

Indeed, something’s “working” when a patient reports success with an antidepressant; exactly why and how it “works” is less certain.  In my practice, I tell my patients that, at individual synapses, SSRIs probably increase extracellular serotonin levels (at least in the short-term), but we don’t know what that means for your whole brain, much less for your thoughts or behavior.  Some other psychiatrists say that “a serotonin boost might help your depression” or “this drug might act on pathways important for depression.”   Are those lies?  Jeffrey Lacasse and Jonathan Leo write that “telling a falsehood to patients … is a serious violation of informed consent.”  But the same could be said for psychotherapy, religion, tai chi, ECT, rTMS, Reiki, qigong, numerology, orthomolecular psychiatry, somatic re-experiencing, EMDR, self-help groups, AA, yoga, acupuncture, transcendental meditation, and Deplin.  We recommend these things all the time, not knowing exactly how they “work.”

Most of those examples are rather harmless and inexpensive (except for Deplin—it’s expensive), but, like antidepressants, all rest on shaky ground.  So maybe psychiatry’s problem is not the “falsehood” itself, but the consequences of that falsehood.  This faulty hypothesis has spawned an entire industry capitalizing on nothing more than an educated guess, costing our health care system untold millions of dollars, saddling huge numbers of patients with bothersome side effects (or possibly worse), and—most distressingly to me—giving people an incorrect and ultimately dehumanizing solution to their emotional problems.  (What’s dehumanizing about getting better, you might ask?  Well, nothing, except when “getting better” means giving up one’s own ability to manage his/her situation and instead attribute their success to a pill.)

Dr Pies’ article in Psychiatric Times closed with an admonition from psychiatrist Nassir Ghaemi:  “We must not be drawn into a haze of promiscuous eclecticism in our treatment; rather, we must be guided by well-designed studies and the best available evidence.”  That’s debatable.  If we wait for “evidence” for all sorts of interventions that, in many people, do help, we’ll never get anywhere.  A lack of “evidence” certainly hasn’t eliminated religion—or, for that matter, psychoanalysis—from the face of the earth.

Thus, faulty theory or not, there’s still a place for SSRI medications in psychiatry, because some patients swear by them.  Furthermore—and with all due respect to Dr Ghaemi—maybe we should be a bit more promiscuous in our eclecticism.  Medication therapy should be offered side-by-side with competent psychosocial treatments including, but not limited to, psychotherapy, group therapy, holistic-medicine approaches, family interventions, and job training and other social supports.  Patients’ preferences should always be respected, along with safeguards to protect patient safety and prevent against excessive cost.  We may not have good scientific evidence for certain selections on this smorgasbord of options, but if patients keep coming back, report successful outcomes, and enter into meaningful and lasting recovery, that might be all the evidence we need.


(Mis)informed Consent

December 20, 2011

Over the years, the practice of medicine has become less of an art, and more a process of crossing T’s and dotting I’s.  “Treating the chart” has become, in many ways, more important than treating the patient, and it seems that the pen—or, rather, the electronic medical record—has emerged as a more valuable tool than the stethoscope or reflex hammer.

For psychiatrists, one of the pesky little details of any office visit is obtaining “informed consent.”  Most commonly, this is the document—signed by the patient—stating that he/she has been fully informed of the reason they’re being prescribed a medication, the potential risks of taking said medication, and any possible alternatives.  Most private insurers and hospitals, and all Medicaid programs, require this documentation in the charts of patients seeing mental health specialists, and (at least in my experience) these documents are frequently sought in chart audits.

What do I mean by “pesky”?  Put briefly, the process of obtaining informed consent can be time-consuming, and some doctors worry that it might actually interfere with treatment.  In a 2004 survey, for instance, 44% of psychiatrists reported that “informed consent … increases patients’ anxiety.”  With respect to antipsychotics, nearly 20% of psychiatrists in the same study admitted “it is good practice to withhold information about tardive dyskinesia from some patients.”  As a result, patients are often poorly informed about the meds they take.  In a 2001 study of psychiatric inpatients in Scotland, fewer than half knew the reason they were receiving medication, the side effects of those medications, or even remembered getting an explanation from staff.  (But, according to the survey, far more than half were “happy to take all medications”!!)

I was recently asked for some suggestions on how to improve the medication-consent process in my outpatient clinic.  I must admit, the current process is atrocious.  Our forms are 10+ years old, with general descriptions of each class of medication (and, of course, they lack any drug introduced in the last decade); and they have that “photocopy of a photocopy” appearance, with faded margins and text at a crooked angle.  But hey, no big deal—they’re just papers to sign and stick in the chart, basically.  In the community clinic where I work part-time, the process is even more rudimentary: we have one generic form with no drug names or descriptions; the front-desk staff asks each patient to sign the form before each visit, and afterward I simply write in the name of the medication(s) I’ve prescribed.

In thinking of ways to improve the process, I’ve come to realize that it may provide an opportunity for some meaningful change in our treatment approach.

First of all, there’s no excuse for not describing the potential adverse effects of the drugs we use, but we must be cautious not to trivialize this process.  Most psychiatrists I know, for example, have a readymade “speech” about the potential for rash with Lamictal, or weight gain with Zyprexa, or sedation with Seroquel.  (See this post at Shrink Rap—and its comments—for more on this perspective.)  But if the patient hears this as just a “speech,” it’s less likely to be meaningful, just like the pre-flight safety lectures you hear on airplanes.  I advise my students and residents to pretend they’re prescribing to their spouse, parent, or child, and give all the information they would want to hear about each new drug.  (This includes how to stop the medication, too.)

Second, just as important as the potential adverse effects, I believe that patients need to hear more specific explanations of how the drug might actually provide some benefit.  All too often we give a feeble explanation like “this Prozac should make you feel better in a few weeks” or “Valium might calm your nerves a bit” or “since you haven’t responded to your antidepressant, here’s some Abilify to help it along.”  We owe it to our patients (and to ourselves) to provide more detailed explanations.  To be sure, most patients don’t need to hear a molecular mechanism, complete with pKa values or details of CYP450 metabolism, but we ought to have this information in our heads, and we must know how we’re using this information to treat the patient in front of us.  When a patient asks how an antipsychotic might help their depression, or why an anticonvulsant might help stabilize their mood, we must give an answer.  (And if no good answer is possible, we need to rethink our treatment plan.)

Third, it is equally important to discuss treatment options with a patient.   When patients ask “is there anything else I can do or take?” the ensuing discussion might extend the appointment by a few minutes, but it always leads to a more collaborative dialogue (unless, of course, the patient is fishing for a Xanax prescription or a month’s supply of Seroquel to sell for cash).  A discussion of alternatives often gives an indication of what the patient wants, what the patient values, and how we can best promote the patient’s recovery.

Finally, the informed consent process really should be extended to non-psychiatrists who prescribe these agents.  Primary-care docs routinely prescribe antidepressants, benzodiazepines, psychostimulants, and mood stabilizers (and, of course, my personal favorite, “Seroquel for sleep”), without a discussion of risks, benefits, and alternatives, or (in most cases) a signed consent form.  Heck, even gastroenterologists prescribe Reglan, which is as likely to cause tardive dyskinesia as many of the antipsychotics we use in psychiatry, and pain specialists are fond of Cymbalta (an SNRI with some potentially nasty withdrawal effects) for “chronic pain.”  These providers should recognize the potential risks (and mechanisms) of psychotropics, just as psychiatrists do, and share them with their patients.

So even though we might look at obtaining informed consent as a “necessary evil,” we should instead look at it as a way to enhance treatment.  If nothing else, this would force us to think about what we do and why we do it.  It would enable us to honestly evaluate the true benefits and risks of what we prescribe, and maybe steer us in a different—and healthier—direction.


How Abilify Works, And Why It Matters

September 13, 2011

One lament of many in the mental health profession (psychiatrists and pharmascolds alike) is that we really don’t know enough about how our drugs work.  Sure, we have hypothetical mechanisms, like serotonin reuptake inhibition or NMDA receptor antagonism, which we can observe in a cell culture dish or (sometimes) in a PET study, but how these mechanisms translate into therapeutic effect remains essentially unknown.

As a clinician, I have noticed certain medications being used more frequently over the past few years.  One of these is Abilify (aripiprazole).  I’ve used Abilify for its approved indications—psychosis, acute mania, maintenance treatment of bipolar disorder, and adjunctive treatment of depression.  It frequently (but not always) works.  But I’ve also seen Abilify prescribed for a panoply of off-label indications: “anxiety,” “obsessive-compulsive behavior,” “anger,” “irritability,” and so forth.  Can one medication really do so much?  And if so, what does this say about psychiatry?

From a patient’s perspective, the Abilify phenomenon might best be explained by what it does not do.  If you ask patients, they’ll say that—in general—they tolerate Abilify better than other atypical antipsychotics.  It’s not as sedating as Seroquel, it doesn’t cause the same degree of weight gain as Zyprexa, and the risk of developing uncomfortable movement disorders or elevated prolactin is lower than that of Risperdal.  To be sure, many people do experience side effects of Abilify, but as far as I can tell, it’s an acceptable drug to most people who take it.

Abilify is a unique pharmacological animal.  Like other atypical antipsychotics, it binds to several different neurotransmitter receptors; this “signature” theoretically accounts for its therapeutic efficacy and side effect profile.  But unlike others in its class, it doesn’t block dopamine (specifically, dopamine D2) or serotonin (specifically, 5-HT1A) receptors.  Rather, it’s a partial agonist at those receptors.  It can activate those receptors, but not to the full biological effect.  In lay terms, then, it can both enhance dopamine and serotonin signaling where those transmitters are deficient, and inhibit signaling where they’re in excess.

Admittedly, that’s a crude oversimplification of Abilify’s effects, and an inadequate description of how a “partial agonist” works.  Nevertheless, it’s the convenient shorthand that most psychiatrists carry around in their heads:  with respect to dopamine and serotonin (the two neurotransmitters which, at least in the current vernacular, are responsible for a significant proportion of pathological behavior and psychiatric symptomatology), Abilify is not an all-or-none drug.  It’s not an on-off switch. It’s more of a “stabilizer,” or, in the words of Stephen Stahl, a “Goldilocks drug.”
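For readers who like to see the shorthand made concrete, here’s a toy model of that “stabilizer” idea.  This is purely my illustration—not Stahl’s model, and not anything from the Abilify literature—using a simple competitive-occupancy equation with made-up binding constants and an assumed intrinsic activity of 0.3 for the partial agonist:

```python
def net_activation(endogenous, drug, kd_endo=1.0, kd_drug=1.0, alpha=0.3):
    """Fractional receptor signaling (0..1) when a partial agonist competes
    with an endogenous full agonist (e.g., dopamine) for the same receptor.

    endogenous, drug -- concentrations in arbitrary units (hypothetical)
    kd_endo, kd_drug -- dissociation constants (made-up round numbers)
    alpha            -- intrinsic activity of the partial agonist
                        (1.0 would be a full agonist; 0 a pure antagonist)
    """
    denom = 1 + endogenous / kd_endo + drug / kd_drug
    occ_endo = (endogenous / kd_endo) / denom  # fraction bound by transmitter
    occ_drug = (drug / kd_drug) / denom        # fraction bound by the drug
    return occ_endo * 1.0 + occ_drug * alpha

# Where transmitter is deficient, adding the drug RAISES net signaling:
print(net_activation(endogenous=0.1, drug=10), ">", net_activation(0.1, 0))

# Where transmitter is in excess, the same drug LOWERS net signaling:
print(net_activation(endogenous=10, drug=10), "<", net_activation(10, 0))
```

In this sketch, high doses of the drug pull the output toward an intermediate level (near alpha) regardless of where it started—which is exactly the “Goldilocks” intuition, in its crudest possible form.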

Thus, Abilify can be seen, at the same time, as both an antipsychotic, and not an antipsychotic.  It’s both an antidepressant, and not an antidepressant.  And when you have a drug that is (a) generally well tolerated, (b) seems to work by “stabilizing” two neurotransmitter systems, and (c) resists conventional classification in this way, it opens the floodgates for all sorts of potential uses in psychiatry.

Consider the following conditions, all of which are subjects of Abilify clinical trials currently in progress (thanks to clinicaltrials.gov):  psychotic depression; alcohol dependence; “aggression”; improvement of insulin sensitivity; antipsychotic-induced hyperprolactinemia; cocaine dependence; Tourette’s disorder; postpartum depression; methamphetamine dependence; obsessive-compulsive disorder (OCD); late-life bipolar disorder; post-traumatic stress disorder (PTSD); cognitive deficits in schizophrenia; autism spectrum disorders; fragile X syndrome; tardive dyskinesia; “subsyndromal bipolar disorder” (whatever that is) in children; conduct disorder; ADHD; prodromal schizophrenia; “refractory anxiety”; psychosis in Parkinson’s disease; anorexia nervosa; substance-induced psychosis; trichotillomania; and Alzheimer’s-related psychosis.

Remember, these are the existing clinical trials of Abilify.  Each one has earned IRB approval and funding support.  In other words, they’re not simply the fantasies of a few rogue psychiatrists; they’re supported by at least some preliminary evidence, or by a very plausible hypothesis.  The conclusion one might draw from this is that Abilify is truly a wonder drug, showing promise in nearly all of the conditions we treat as psychiatrists.  We’ll have to wait for the clinical trial results, but what we can say at this point is that a drug which works as a “stabilizer” of two very important neurotransmitter systems can be postulated to work in virtually any way a psychopharmacologist might want.

But even if these trials are negative, my prediction is that this won’t stop doctors from prescribing Abilify for each of the above conditions.  Why?  Because the mechanism of Abilify allows for such elegant explanations of pathology (“we need to tune down the dopamine signal to get rid of those flashbacks” or “the serotonin 1A effect might help with your anxiety” – yes, I’ve heard both of these in the last week) that it would be anathema, at least to current psychiatric practice, not to use it in this regard.

This fact alone should lead us to ask what this says about psychiatry as a whole.  The fact that one drug is prescribed so widely—owing to its relatively nonspecific effects and a good deal of creative psychopharmacology on the part of doctors like me—and is so broadly accepted by patients, should call into question our hypotheses about the pathophysiology of mental illness, and how psychiatric disorders are distinguished from one another.  It should challenge our theories of neurotransmitters and receptors and how their interactions underlie specific symptoms.  And it should give us reason to question whether the “stories” we tell ourselves and our patients carry more weight than the medications we prescribe.


Psychopharm R&D Cutbacks II: A Response to Stahl

August 28, 2011

A lively discussion has emerged on the NEI Global blog and on Daniel Carlat’s psychiatry blog about a recent post by Stephen Stahl, NEI chairman, pop(ular) psychiatrist, and promoter of psychopharmaceuticals.  The post pertains to the exodus of pharmaceutical companies from neuroscience research (something I’ve blogged about too), and the changing face of psychiatry in the process.

Dr Stahl’s post is subtitled “Be Careful What You Ask For… You Just Might Get It” and, as one might imagine, it reads as a scathing (some might say “ranting”) reaction against several of psychiatry’s detractors: the “anti-psychiatry” crowd, the recent rules restricting pharmaceutical marketing to doctors, and those who complain about Big Pharma funding medical education.  He singles out Dr Carlat, in particular, as an antipsychiatrist, implying that Carlat believes mental illnesses are inventions of the drug industry, medications are “diabolical,” and drugs exist solely to enrich pharmaceutical companies.  [Not quite Carlat’s point of view, as  a careful reading of his book, his psychopharmacology newsletter, and, yes, his blog, would prove.]

While I do not profess to have the credentials of Stahl or Carlat, I have expressed my own opinions on this matter in my blog, and wanted to enter my opinion on the NEI post.

With respect to Dr Stahl (and I do respect him immensely), I think he must re-evaluate his influence on our profession.  It is huge, and not always in a productive way.  Case in point: for the last two months I have worked in a teaching hospital, and I can say that Stahl is seen as something of a psychiatry “god.”  He has an enormous wealth of knowledge, his writing is clear and persuasive, and the materials produced by NEI present difficult concepts in a clear way.  Stahl’s books are directly quoted—unflinchingly—by students, residents, and faculty.

But there’s the rub.  Stahl has done such a good job of presenting his (i.e., the psychopharmacology industry’s) view of things that it is rarely challenged or questioned.  The “pathways” he suggests for depression, anxiety, psychosis, cognition, insomnia, obsessions, drug addiction, medication side effects—basically everything we treat in psychiatry—are accompanied by theoretical models for how some new pharmacological agent might (or will) affect these pathways, when in fact the underlying premises or the proposed drug mechanisms—or both—may be entirely wrong.  (BTW, this is not a criticism of Stahl; it’s simply a statement of fact: psychiatry as a neuroscience is decidedly still in its infancy.)

When you combine Stahl’s talent with his extensive relationships with drug companies, it makes for a potentially dangerous combination.  To cite just two examples, Stahl has written articles (in widely distributed “throwaway” journals) making compelling arguments for the use of low-dose doxepin (Silenor) and L-methylfolate (Deplin) in insomnia and depression, respectively, when the actual data suggest that their generic (or OTC) equivalents are just as effective.  Many similar Stahl productions are included as references or handouts in drug companies’ promotional materials or websites.

How can this be “dangerous”?  Isn’t Stahl just making hypotheses and letting doctors decide what to do with them?  Well, not really.  In my experience, if Stahl says something, it’s no longer a hypothesis; it becomes the truth.

I can’t tell you how many times a student (or even a professor of mine) has explained to me “Well, Stahl says drug A works this way, so it will probably work for symptom B in patient C.”  Unfortunately, we don’t have the follow-up discussion when drug A doesn’t treat symptom B; or patient C experiences some unexpected side effect (which was not predicted by Stahl’s model); or the patient improves in some way potentially unrelated to the medication.  And when we don’t get the outcome we want, we invoke yet another Stahl pathway to explain it, or to justify the addition of another agent.  And so on and so on, until something “works.”  Hey, a broken clock is still correct twice a day.

I don’t begrudge Stahl for writing his articles and books; they’re very well written, and the colorful pictures are fun to look at; it makes psychiatry almost as easy as painting by numbers.  I also (unlike Carlat) don’t get annoyed when doctors do speaking gigs to promote new drugs.  (When these paid speakers are also responsible for teaching students in an academic setting, however, that’s another issue.)  Furthermore, I accept the fact that drug companies will try to increase their profits by expanding market share and promoting their drugs aggressively to me (after all, they’re companies—what do we expect them to do??), or by showing “good will” by underwriting CME, as long as it’s independently confirmed to be without bias.

The problem, however, is that doctors often don’t ask for the data.  We don’t ask whether Steve Stahl’s models might be wrong (or biased).  We don’t look closely at what we’re presented (either in a CME lesson or by a drug rep) to see whether it’s free from commercial influence.  And, perhaps most distressingly, we don’t listen enough to our patients to determine whether our medications actually do what Stahl tells us they’ll do.

Furthermore, our ignorance is reinforced by a diagnostic tool (the DSM) which requires us to pigeonhole patients into a small number of diagnoses that may have no biological validity; a reimbursement system that encourages a knee-jerk treatment (usually a drug) for each such diagnosis; an FDA approval process that gives the illusion that diagnoses are homogeneous and that all patients will respond the same way; and only the most basic understanding of what causes mental illness.  It creates the perfect opportunity for an authority like Stahl to come in and tell us what we need to know.  (No wonder he’s a consultant for so many pharmaceutical companies.)

As Stahl writes, the departure of Big Pharma from neuroscience research is unfortunate, as our existing medications are FAR from perfect (despite Stahl’s texts making them sound pretty darn effective).  However, this “breather” might allow us to pay more attention to our patients and think about what else—besides drugs—we can use to nurse them back to health.  Moreover, refocusing our research efforts on the underlying psychology and biology of mental illness (i.e., research untainted by the need to show a clinical drug response or to get FDA approval) might open new avenues for future drug development.

Stahl might be right that the anti-pharma pendulum has swung too far, but that doesn’t mean we can’t use this opportunity to make great strides forward in patient care.  The paychecks of some docs might suffer.  Hopefully our patients won’t.


Do Antipsychotics Treat PTSD?

August 23, 2011

Do antipsychotics treat PTSD?  It depends.  That seems to be the best response I can give, based on the results of two recent studies on this complex disorder.  A better question, though, might be: what do antipsychotics treat in PTSD?

One of these reports, a controlled, double-blinded study of the atypical antipsychotic risperidone (Risperdal) for the treatment of “military service-related PTSD,” was featured in a New York Times article earlier this month.  The NYT headline proclaimed, somewhat unceremoniously:  “Antipsychotic Use is Questioned for Combat Stress.”  And indeed, the actual study, published in the Journal of the American Medical Association (JAMA), demonstrated that a six-month trial of risperidone did not improve patients’ scores on a scale of PTSD symptoms, when compared to placebo.

But almost simultaneously, another paper was published in the online journal BMC Psychiatry, stating that Abilify—a different atypical antipsychotic—actually did help patients with “military-related PTSD with major depression.”

So what are we to conclude?  Even though there are some key differences between the studies (which I’ll mention below), a brief survey of the headlines might leave the impression that the two reports “cancel each other out.”  In reality, I think it’s safe to say that neither study contributes very much to our treatment of PTSD.  But it’s not because of the equivocal results.  Instead, it’s a consequence of the premises upon which the two studies were based.

PTSD, or post-traumatic stress disorder, is an incredibly complicated condition.  The diagnosis was first given to Vietnam veterans who, for years after their service, experienced symptoms of increased physiological arousal, avoidance of stimuli associated with their wartime experience, and continual re-experiencing (in the form of nightmares or flashbacks) of the trauma they experienced or observed.  It’s essentially a re-formulation of conditions that were, in earlier years, labeled “shell shock” or “combat fatigue.”

Since the introduction of this disorder in 1980 (in DSM-III), the diagnostic umbrella of PTSD has grown to include victims of sexual and physical abuse, traumatic accidents, natural disasters, terrorist attacks (like those of September 11, 2001), and other criminal acts.  Some have even argued that poverty or unfortunate psychosocial circumstances may also qualify as the “traumatic” event.

Not only are the types of stressors that cause PTSD widely variable, but so are the symptoms that ultimately develop.  Some patients complain of minor but persistent symptoms, while others experience infrequent but intense exacerbations.  Similarly, the neurobiology of PTSD is still poorly understood, and may vary from person to person.  And we’ve only just begun to understand protective factors for PTSD, such as the concept of “resilience.”

Does it even make sense to say that one drug can (or cannot) treat such a complex disorder?  Take, for instance, the scale used in the JAMA article to measure patients’ PTSD symptoms.  The PTSD score they used as the outcome measure was the Clinician-Administered PTSD Scale, or CAPS, considered the “gold standard” for PTSD diagnosis.  But the CAPS includes 30 items, ranging from sleep disturbances to concentration difficulties to “survivor guilt.”

It doesn’t take a cognitive psychologist or neuroscientist to recognize that these 30 domains—all features of what we consider “clinical” PTSD—could be explained by just as many, if not more, neural pathways, and may be experienced in entirely different ways, depending upon one’s psychological makeup and the nature of one’s past trauma.

In other words, saying that Risperdal is “not effective” for PTSD is like saying that acupuncture is not effective for chronic pain, or that a low-carb diet is not an effective way to lose weight.  Statistically speaking, these interventions might not help most patients, but in some, they may indeed play a crucial role.  We just don’t understand the disorders well enough.

[By the way, what about the other study, which reported that Abilify was helpful?  Well, this study was a retrospective review of patients who were prescribed Abilify, not a randomized, placebo-controlled trial.  And it did not use the CAPS, but the PCL-M, a shorter survey of PTSD symptoms.  Moreover, it only included 27 of the 123 veterans who agreed to take Abilify, and I cannot, for the life of me, figure out why the other 96 were excluded from their analysis.]

Anyway, the bottom line is this:  PTSD is a complicated, multifaceted disorder—probably a combination of disorders, similar to much of what we see in psychiatry.  To say that one medication “works” or another “doesn’t work” oversimplifies the condition almost to the point of absurdity.  And for the New York Times to publicize such a finding only gives more credence to the misconception that a prescription medication is (or has the potential to be) the treatment of choice for all patients with a given diagnosis.

What we need is not another drug trial for PTSD, but rather a better understanding of the psychological and neurobiological underpinnings of the disease, a comprehensive analysis of which symptoms respond to which drug, which aspects of the disorder are not amenable to medication management, and how individuals differ in their experience of the disorder and in the tools (pharmacological and otherwise) they can use to overcome their despair.  Anything else is a failure to recognize the human aspects of the disease, and an issuance of false hope to those who suffer.


Critical Thinking and Drug Advertising

August 14, 2011

One of the advantages of teaching medical students is that I can keep abreast of changes in medical education.  It’s far too easy for a doctor (even just a few years out of training) to become complacent and oblivious to changes in the modern medical curriculum.  So I was pleasantly surprised earlier this week when a fourth-year medical student told me that his recent licensing examination included a vignette which tested his ability to interpret data from a pharmaceutical company advertisement.  Given that most patients (and, indeed, most doctors) now get their information from such sources, it was nice to see that this is now part of a medical student’s education.

For those of you unfamiliar with the process, the US Medical Licensing Examination (USMLE) is a three-step examination that all medical students must take in order to obtain a medical license in the United States.  Most students take steps 1 and 2 during medical school, while step 3 is taken during residency.

Effective this month, the drug-ad questions will appear in the Step 2 examination.  Obviously, I don’t have access to the particular ad that my med student saw, but here’s a sample item taken from the USMLE website.


It’s attractive and seems concise.  It’s certainly easier to read—some might even say more “fun”—than a dry, boring journal article or data table.  But is it informative?  What would a doctor need to know to confidently prescribe this new drug?  That’s the emphasis of this new type of test question.  Specifically, the two questions pertaining to this item ask the student (1) to identify which statement is most strongly supported by information in the ad, and (2) which type of research design would give the best data in support of using this drug.

It’s good to know that students are being encouraged to ask such questions of themselves (and, more importantly, one would hope, of the people presenting them with such information).  For comparison, here are two “real-world” examples of promotional advertising I have received for recently launched psychiatric drugs.


Again, nice to look at.  But essentially devoid of information.  Okay, maybe that’s unfair:  Latuda was found to be effective in “two studies for each dose,” and the Oleptro ad claims that “an eight-week study showed that depression symptoms improved for many people taking Oleptro.”  But what does “effective” mean?  What does “improved” mean?  Where’s the data?  How do these drugs compare to medications we’ve been using for years?  Those are the questions that we need to ask, not only to save costs (new drugs are expensive) but also to prevent exposing our patients to adverse effects that only emerge after a period of time on a drug.

(To be fair, it is quite easy to obtain this information on the drug company’s web sites, or by asking the respective drug reps.  But first impressions count for a lot, and how many providers actually ask for the info?  Or can understand it once they do get it??)

The issue of drug advertising and its influence on doctors has received a good degree of attention lately.  An article in PLoS Medicine last year found that exposure to pharmaceutical company information was frequently (although not always) associated with more prescriptions, higher health care costs, or lower prescribing quality.  Similarly, a report last May in the Archives of Otolaryngology evaluated 50 drug ads in otolaryngology (ENT) journals and found that only 14 of them (28%) made claims based on “strong evidence.”  And the journal Emergency Medicine Australasia went one step further last February and banned all drug company advertising, claiming that “marketing of drugs by the pharmaceutical industry, whose prime aim is to bias readers towards prescribing a particular product, is fundamentally at odds with the mission of medical journals.”

The authors of the PLoS article even wrote the editors of the Lancet, one of the world’s top medical journals, to ask if they’d be willing to ban drug ads, too.  Unfortunately, banning drug advertising may not solve the problem either.  As discussed in an excellent article by Harriet Washington in this summer’s American Scholar, drug companies have great influence over the research that gets funded, carried out, and published, regardless of advertising.  Washington writes: “there exist many ways to subvert the clinical-trial process for marketing purposes, and the pharmaceutical industry seems to have found them all.”

As I’ve written before, I have no philosophical—or practical—opposition to pharmaceutical companies, commercial R&D, or drug advertising.  But I am opposed to the blind acceptance of messages that are the direct product of corporate marketing departments, Madison Avenue hucksters, and drug-company shills.  It’s nice to know that the doctors of tomorrow are being taught to ask the right questions, to become aware of bias, and to develop stronger critical thinking skills.  Hopefully this will help them to make better decisions for their patients, rather than serve as unwitting conduits for big pharma’s more wasteful wares.


Antidepressants: The New Candy?

August 9, 2011

It should come as no surprise to anyone paying attention to health care (not to mention modern American society) that antidepressants are very heavily prescribed.  They are, in fact, the second most widely prescribed class of medicine in America, with 253 million prescriptions written in 2010 alone.  Whether this means we are suffering from an epidemic of depression is another question.  In fact, a recent article questions whether we’re suffering from much of anything at all.

In the August issue of Health Affairs, Ramin Mojtabai and Mark Olfson present evidence that doctors are prescribing antidepressants at ever-higher rates.  Over a ten-year period (1996-2007), the percentage of all office visits to non-psychiatrists that included an antidepressant prescription rose from 4.1% to 8.8%.  The rates were even higher for primary care providers: from 6.2% to 11.5%.

But there’s more.  The investigators also found that in the majority of cases, antidepressants were given even in the absence of a psychiatric diagnosis.  In 1996, 59.5% of the antidepressant recipients lacked a psychiatric diagnosis.  In 2007, this number had increased to 72.7%.

In other words, nearly 3 out of 4 patients who visited a nonpsychiatrist and received a prescription for an antidepressant were not given a psychiatric diagnosis by that doctor.  Why might this be the case?  Well, as the authors point out, antidepressants are used off-label for a variety of conditions—fatigue, pain, headaches, PMS, irritability.  None of which have any good data supporting their use, mind you.
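The arithmetic behind those trends is worth pausing on.  A quick back-of-the-envelope check (figures as reported in the Health Affairs paper cited above):

```python
# Figures from Mojtabai & Olfson, Health Affairs (2011), as cited above.
all_visits_1996, all_visits_2007 = 4.1, 8.8    # % of non-psychiatrist visits with an antidepressant Rx
primary_1996, primary_2007 = 6.2, 11.5         # same, primary care providers only
no_dx_1996, no_dx_2007 = 59.5, 72.7            # % of antidepressant recipients with NO psychiatric diagnosis

# Over the decade, the overall prescribing rate more than doubled...
print(round(all_visits_2007 / all_visits_1996, 2))   # ~2.15x

# ...and primary care nearly doubled as well.
print(round(primary_2007 / primary_1996, 2))         # ~1.85x

# 72.7% of recipients lacked a diagnosis: hence "nearly 3 out of 4."
print(round(no_dx_2007 / 100 * 4, 1))                # ~2.9 of every 4 recipients
```

Nothing fancy here, but seeing the ratios makes the point: prescribing didn’t just creep upward, it roughly doubled, and the share of prescriptions written without any diagnosis grew at the same time.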

It’s possible that nonpsychiatrists might add an antidepressant to someone’s medication regimen because they “seem” depressed or anxious.  It is also true that primary care providers do manage mental illness sometimes, particularly in areas where psychiatrists are in short supply.  But remember, in the majority of cases the doctors did not even give a psychiatric diagnosis, which suggests that even if they did a “psychiatric evaluation,” the evaluation was likely quick and haphazard.

And then, of course, there were probably some cases in which the primary care docs just continued medications that were originally prescribed by a psychiatrist—in which case perhaps they simply didn’t report a diagnosis.

But is any of this okay?  Some, like a psychiatrist quoted in a Wall Street Journal article on this report, argue that antidepressants are safe.  They’re unlikely to be abused, often effective (if only as a placebo), and dirt cheap (well, at least the generic SSRIs and TCAs are).  But others have had very real problems discontinuing them, or have suffered particularly troublesome side effects.

The increasingly indiscriminate use of antidepressants might also open the door to the (ab)use of other, more costly drugs with potentially more devastating side effects.  I continue to be amazed, for example, by the number of primary care docs who prescribe Seroquel (an antipsychotic) for insomnia, when multiple other pharmacologic and nonpharmacologic options are ignored.  In my experience, in the vast majority of these cases, the (well-known) risks of increased appetite and elevated blood sugar were never discussed with the patient.  And then there are other antipsychotics like Abilify and Seroquel XR, which are increasingly being used in primary care as drugs to “augment” antidepressants and will probably be prescribed as freely as the antidepressants themselves.  (Case in point: a senior medical student was shocked when I told her a few days ago that Abilify is an antipsychotic.  “I always thought it was an antidepressant,” she remarked, “after seeing all those TV commercials.”)

For better or for worse, the increased use of antidepressants in primary care may prove to be yet another blow to the foundation of biological psychiatry.  Doctors prescribe—and continue to prescribe—these drugs because they “work.”  It’s probably more accurate, however, to say that doctors and patients think they work.  And this may have nothing to do with biology.  As the saying goes, it’s the thought that counts.

Anyway, if this is true—and you consider the fact that these drugs are prescribed on the basis of a rudimentary workup (remember, no diagnosis was given 72.7% of the time)—then the use of an antidepressant probably has no more justification than the addition of a multivitamin, the admonition to eat less red meat, or the suggestion to “get more fresh air.”

The bottom line: If we’re going to give out antidepressants like candy, then let’s treat them as such.  Too much candy can be a bad thing—something that primary care doctors can certainly understand.  So if our patients ask for candy, then we need to find a substitute—something equally soothing and comforting—or provide them instead with a healthy diet of interventions to address the real issues, rather than masking those problems with a treat to satisfy their sweet tooth and bring them back for more.

