Here’s A Disease. Do You Have It?

March 29, 2011

I serve as a consultant to a student organization at a nearby university.  These enterprising students produce patient-education materials (brochures, posters, handouts, etc.) for several chronic diseases, and their mission—a noble one—is to distribute these materials to free clinics in underserved communities, with the goal of raising awareness of these conditions and educating patients about their proper management.

Because I work part-time in a community mental health clinic, I was, naturally, quite receptive to their offer to distribute some of their handiwork to my patients.  The group sent me several professional-looking flyers and brochures describing the key features of anxiety disorders, depression, PTSD, schizophrenia, and insomnia, and suggested that I distribute these materials to patients in my waiting room.

They do an excellent job of demystifying (and destigmatizing) mental illness, and describe, in layman’s terms, symptoms that may be suggestive of a significant psychiatric disorder (quoting from one, for example: “Certain neurotransmitters are out of balance when people are depressed.  They often feel sad, hopeless, helpless, lack energy, … If you think you may be depressed, talk to a doctor.”).  But just as I was about to print a stack of brochures and place them at the front door, I thought to myself: what exactly is our goal?

Experiencing symptoms of anxiety, depression, or insomnia doesn’t necessarily indicate mental illness or a need for medications or therapy; they might reflect a stressful period in one’s life or a difficult transition for which one might simply need some support or encouragement.  I feared that the questions posed in these materials may lead people to believe that there might be something “wrong” with them, when they are actually quite healthy.  (The target audience needs to be considered, too, but I’ll write more about that later.)

It led me to the question: when does “raising awareness” become “disease mongering”?

“Disease-mongering,” if you haven’t heard of it, is the (pejorative) term used to describe efforts to lead people to believe they have a disease when they most likely do not, or when the “disease” in question is so poorly defined as to be questionable in and of itself.  Accusations of disease-mongering have been made regarding bipolar disorder, fibromyalgia, restless legs syndrome, female sexual arousal disorder, “low testosterone,” and many others, and have mainly been directed toward pharmaceutical companies with a vested interest in getting people on their drugs.  (See this special issue of PLoS Medicine for several articles on this topic.)

Psychiatric disorders are ripe for disease-mongering because they are essentially defined by subjective symptoms, rather than objective signs and tests.  In other words, if I simply recite the symptoms of depression to my doctor, he’ll probably prescribe me an antidepressant; but if I tell him I have an infection, he’ll check my temperature, my WBC count, maybe palpate some lymph nodes, and if all seems normal he probably won’t write me a script for an antibiotic.

It’s true that some patients might deliberately falsify or exaggerate symptoms in order to obtain a particular medication or diagnosis.  What’s far more likely, though, is that they are (unconsciously) led to believe they have some illness, simply on the basis of experiencing some symptoms that are, more or less, a slight deviation from “normal.”  This is problematic for a number of reasons.  Obviously, an improper diagnosis leads to the prescription of unnecessary medications (and to their undesirable side effects), driving up the cost of health care.  It may also harm the patient in other ways; it may prevent the patient from getting health insurance or a job, or—even more insidiously—lead them to believe they have less control over their thoughts or behaviors than they actually do.

When we educate the public about mental illness, and encourage people to seek help if they think they need it, we walk a fine line.  Some people who may truly benefit from professional help will ignore the message, saying they “feel fine,” while others with very minor symptoms which are simply part of everyday life may be drawn in.  (Here is another example, a flyer for childhood bipolar disorder, produced by the NIH; how many parents & kids might be “caught”?)  Mental health providers should never turn away someone who presents for an evaluation or assessment, but we also have an obligation to provide a fair and unbiased opinion of whether a person needs treatment or not.  After all, isn’t that our responsibility as professionals?  To provide our honest input as to whether someone is healthy or unhealthy?

I almost used the words “normal” and “abnormal” in the last sentence.  I try not to use these words (what’s “normal” anyway?), but keeping them in mind helps us to see things from the patient’s perspective.  When she hears constant messages touting “If you have symptom X then you might have disorder Y—talk to your doctor!” she goes to the doctor seeking guidance, not necessarily a diagnosis.

The democratization of medical and scientific knowledge is, in my opinion, a good thing.  Information about what we know (and what we don’t know) about mental illness should indeed be shared with the public.  But it should not be undertaken with the goal of prescribing more of a certain medication, bringing more patients into one’s practice, or doling out more diagnoses.  Prospective patients often can’t tell what the motives are behind the messages they see—magazine ads, internet sites, and waiting-room brochures may be produced by just about anyone—and this is where the responsibility and ethics of the professional are of utmost importance.

Because if the patient can’t trust us to tell them they’re okay, then are we really protecting and ensuring the public good?

(Thanks to altmentalities for the childhood bipolar flyer.)


Stress, Illness, and Biological Determinism

March 27, 2011

Two interesting articles caught my attention this week, on the important subject of “stress” and its relationship to human disease—both psychological and physical.  Each offers some promising ways to prevent stress-related disease, but they also point out some potential biases in precisely how we might go about doing so.

A piece by Paul Tough in the New Yorker profiled Nadine Burke, a San Francisco pediatrician (the article is here, but it’s subscription-only; another link might be here).  Burke works in SF’s poverty-stricken Bayview-Hunters Point neighborhood, where health problems are rampant.  She recognized that in this population, the precursors of disease are not just the usual suspects like poor access to health care, diet/lifestyle, education, and high rates of substance use, but also the impact of “adverse childhood experiences” or ACEs.

Drawing upon research by Vincent Felitti and Robert Anda, Burke found that patients who were subjected to more ACEs (such as parental divorce, physical abuse, emotional neglect, being raised by a family member with a drug problem, etc.) had worse outcomes as adults.  These early traumatic experiences had an effect on the development of illnesses such as cancer, heart disease, respiratory illness, and addiction.

The implication for public health, obviously, is that we must either limit exposure to stressful events in childhood, or decrease their propensity to cause long-term adverse outcomes.  The New Yorker article briefly covers some biological research in the latter area, such as how early stress affects DNA methylation in rats, and how inflammatory markers like C-reactive protein are elevated in people who were mistreated as children.  Burke is quoted as saying, “In many cases, what looks like a social situation is actually a neurochemical situation.”  And a Harvard professor claims, “this is a very exciting opportunity to bring biology into early-childhood policy.”

With words like “neurochemical” and “biology” (not to mention “exciting”) being used this way, it doesn’t take much reading-between-the-lines to assume that the stage is being set for a neurochemical intervention, possibly even a “revolution.”  One can almost hear the wheels turning in the minds of academics and pharmaceutical execs, who are undoubtedly anticipating an enormous market for endocrine modulators, demethylating agents, and good old-fashioned antidepressants as ways to prevent physical disease in the children of Hunters Point.

To its credit, the article stops short of proposing that all kids be put on drugs to eliminate the effects of stress.  The author emphasizes that Burke’s clinic engages in biofeedback, child-parent therapy, and other non-pharmacological interventions to promote secure attachment between child and caregiver.  But in a society that tends to favor the “promises” of neuropharmacology—not to mention patients who might prefer the magic elixir of a pill—is this simply window-dressing?  A way to appease patients and give the impression of doing good, until the “real” therapies, medications, become available?

More importantly, are we expecting drugs to reverse the effects of social inequities, cultural disenfranchisement, and personal irresponsibility?


The other paper is a study published this month in the Journal of Epidemiology and Community Health.  In this paper, researchers from Sweden measured “psychological distress” and its effects on long-term disability in more than 17,000 “average” Swedish adults.  The subjects were given a baseline questionnaire in 2002, and researchers followed them over a five-year period to see how many received new disability benefits for medical or psychiatric illness.

Not surprisingly, there was a direct correlation between high “psychological distress” and high rates of disability.  It is, of course, quite possible that people who had high baseline distress were distressed about a chronic and disabling health condition, which worsened over the next five years.  But the study also found that even low levels of psychological distress at baseline were significantly correlated with the likelihood of receiving a long-term disability benefit, for both medical and psychiatric illness.

The questionnaire used by the researchers was the General Health Questionnaire (GHQ-12), a deceptively simple 12-question survey of psychological distress, scored on a 12-point scale.  (A typical question is “Have you recently felt like you were under constant strain?” with four possible answers, from “not at all” up to “much more than usual.”)  Interestingly, people who scored only 1 point out of 12 were twice as likely to receive a disability award as those who scored zero, and the rates only went up from there.
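The General Health Questionnaire’s 12-point scale comes from its conventional “bimodal” (0-0-1-1) scoring method, in which the two lowest response options on each item count 0 and the two highest count 1, for a total of 0 to 12.  Here is a minimal sketch in Python; the two middle response labels and the helper function are illustrative assumptions, not the published instrument:

```python
# Sketch of the conventional bimodal (0-0-1-1) scoring of the GHQ-12:
# each item's four response options map to 0, 0, 1, 1, and the twelve
# item scores are summed to a 0-12 total.  The exact response labels
# and this helper are illustrative, not the published form.

RESPONSES = ["not at all", "no more than usual",
             "rather more than usual", "much more than usual"]

def ghq12_score(answers):
    """answers: list of 12 response strings; returns a total score of 0-12."""
    if len(answers) != 12:
        raise ValueError("GHQ-12 requires exactly 12 answers")
    total = 0
    for a in answers:
        idx = RESPONSES.index(a)        # 0..3
        total += 0 if idx < 2 else 1    # bimodal 0-0-1-1 scoring
    return total

# One elevated answer with everything else at the floor scores 1 of 12.
answers = ["not at all"] * 11 + ["rather more than usual"]
print(ghq12_score(answers))  # -> 1
```

Note how coarse the instrument is: a single “rather more than usual” response produces the score of 1 that, in the Swedish study, already doubled the likelihood of a later disability award.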

I won’t delve into other details of the results here, but as Sweden resembles the US in its high rates of psychiatric “disability” (between 1990 and 2007, the percentage of disability awards due to psychiatric illness rose from ~15% to over 40%), the implication is clear: even mild psychological “distress” is a risk factor for future illness—both physical and mental—and to reverse this trend, the effects of this distress must be treated or prevented in some way.


Both of these articles—from different parts of the world, using different measurement instruments, and looking at somewhat different outcomes—nevertheless reach the same conclusion:  early life stress is a risk factor for future disease.  This is a long-recognized phenomenon (for an easily accessible exploration of the topic, read Why Zebras Don’t Get Ulcers, by Stanford’s Robert Sapolsky, a former mentor of mine).

But what do we do with this knowledge?  My fear is that, rather than looking at ways to minimize “stress” in the first place (through social programs, education, and other efforts to raise awareness of the detrimental effects of stress), we as a society are instead conditioned to think about how we can intervene with a drug or some other way to modulate the “neurochemical situation,” as Nadine Burke put it.  In other words, we’re less inclined to act than to react, and our reactions are essentially chemical in nature.

As a psychiatrist who has worked with an inner-city population for many years, I’m already called upon to make diagnoses and prescribe medications not for what are obviously (to me) clear-cut cases of significant and disabling mental illness, but, rather, the accumulated effects of stress and trauma.  (I’ll write more about this fascinating interface of society and biology in the future.)   True, sometimes the diagnoses do “fit,” and indeed sometimes the medications work.  But I am doing nothing to prevent the initial trauma, nor do I feel that I am helping people cope with their stress by telling them to take a pill once or twice a day.

We as a society need to make sure we don’t perpetuate the false promises of biological determinism.  I applaud Nadine Burke and I’m glad epidemiologists (and the New Yorker) are asking serious questions about precursors of disease.  But let’s think about what really helps, rather than looking solely to biology as our savior.

(Thanks to Michael at The Trusting Heart for leading me to the New Yorker article.)

The Dangerous Duality of “Dual Diagnosis”

March 23, 2011

When psychiatric illness coexists with a substance use disorder, we refer to this as a “dual diagnosis.” This term makes clear that we’re talking about two conditions in the same person, which could exist independently of each other (hence they’re also sometimes called “co-occurring disorders”), rather than one disorder causing the other—as seen, for example, in cases of a methamphetamine-induced psychotic reaction or an alcohol-induced depression.

Of course, no two conditions in medicine ever exist truly independently of each other, particularly in psychiatry, and the high prevalence of “dual diagnosis” patients (more than a third of alcoholics, for example, have a co-occurring mental illness, and at least 20% of persons with a mood disorder have a drug use problem) suggests that there’s something about mental illness that makes people more susceptible to addictive disorders, and vice versa.

A “dual diagnosis” label should, theoretically, draw attention to the special concerns these patients face, and to the need for specialized and integrated treatment.  Unfortunately, in practice, this rarely occurs.  Instead, this knowledge often results in compartmentalized care, which may have unfortunate consequences for the dually diagnosed.

How so?  Consider an inpatient psychiatric ward.  Patients are admitted to these units for brief “acute stabilization,” when they are actively symptomatic, often with psychosis, thoughts of suicide, or other poorly controlled symptoms.  Because these hospitalizations are very short, there’s little or no opportunity to engage in meaningful addiction treatment.  Even when the immediate precipitant of the patient’s acute episode is identified as the abuse of a drug or alcohol, we often discharge patients with little more than a written instruction to “go to AA” or “consider rehab” (or my personal favorite, shown above, which would be funny if it weren’t real).  Similarly, in the psychiatrist’s office—particularly when the visits are only 10 or 15 minutes long—there’s usually no time to discuss the addiction; at best, the patient might get something along the lines of, “oh, and be sure to try to cut down on your drinking, too.”

Even though this is commonplace, it sends a powerful yet dangerous message to the addict:  it says that his addiction is less important than the mental disorder, less worthy of treatment, or, perhaps, impossible to treat.  It might signal to the addict that his psychiatrist is unwilling or unable to talk about the addiction, which may be (subconsciously) interpreted as a tacit approval of the addictive behavior.  (If you think I’m exaggerating, then you’ve probably never experienced the overwhelming power of addictive thinking, and its unique ability to twist people’s judgment and common sense in extreme ways.)

It’s also just bad medicine.  As any ER psychiatrist can attest, substance-induced exacerbations of mental illness are rampant and a major cause of hospital admissions (not to mention medication noncompliance, aggression, criminal activity, and other unwanted outcomes).  Ignoring this fact and simply stabilizing the patient with the admonition to “consider” substance use treatment is unlikely to improve the long-term outcome.

In the drug or alcohol treatment setting, the situation is often quite similar.  Sometimes a therapist may not be aware of a patient’s mental health history or active symptoms, in which case he or she might have unrealistically high expectations about the patient’s progress. On the other hand, if the patient is known to carry a psychiatric diagnosis, a therapist might incorrectly attribute even the slightest resistance—and addicts show a lot of it—to that mental illness (even when the symptoms are well-controlled) and miss the opportunity to make substantial inroads in treatment.  Neither alternative “meets the addict where he is,” challenging him with demands that are appropriate for his capabilities and his level of understanding.

True “dual diagnosis” treatment, where it exists, involves close interaction among addiction therapists, rehab counselors, psychiatrists, and others involved in the mental, physical, social, and spiritual well-being of each patient.  Some psychiatrists are well-versed in the nature of addiction (those who have first-hand experience of addiction and recovery are often well positioned to understand the demands on the recovering addict), and, similarly, some addiction experts are adept at identifying and managing symptoms of mental illness.  With this combination, patients can benefit from individualized treatment and are given fewer opportunities to fly beneath the proverbial radar.

However, for most patients this is the exception rather than the rule.  “Addiction psychiatrists” are sometimes little more than prescribers of medications like Suboxone or naltrexone, and rehab programs often include mental health treatment “at a distance”—i.e., sending clients to a 15-minute visit with a psychiatrist who’s not involved in the day-to-day challenges of the recovering individual.  Addicts need more than this, and I’ll return to this topic in later posts.

Any discussion about improving real-world psychiatric treatment must address the dual-diagnosis issue.  We desperately need more psychiatrists who are knowledgeable about substance abuse disorders and the interplay between addictions and mental illness, and not just the latest “anticraving” drugs or substitution therapies.  We also need to educate other addiction treatment providers about the manifestations of mental illness and the medications and other therapies available.  Providing compartmentalized or lopsided care—even when well-intentioned—does no service to a struggling patient, and may in the long run do more harm than good.

Thank You, Somaxon Pharmaceuticals!

March 18, 2011

One year ago today, the pharmaceutical company Somaxon introduced Silenor, a new medication for the treatment of insomnia, and today I wish to say “thanks.”  Not because I think Silenor represented a bold new entry into the insomnia marketplace, but because Somaxon’s R&D and marketing departments have successfully re-introduced doctors to a cheap, old medication for a very common clinical complaint.

You see, Silenor is a branded version of a generic drug, doxepin, which has been around since the 1970s.  Doxepin is classified as a tricyclic antidepressant, even though it’s not used much for depression anymore because of its side effects, mainly sedation.  Of course, in psychiatry we sometimes exploit the side effects of certain drugs to treat entirely different conditions, so it’s not surprising that doxepin—which has been generic (i.e., cheap) for the last few decades—has been used occasionally for the treatment of insomnia.  However, this is an “off-label” use, and while that doesn’t prevent doctors from prescribing it, it may make us less likely to consider its use.

Somaxon spent several years, and millions of dollars, developing Silenor, a low-dose formulation of doxepin.  Stephen Stahl (paid by Somaxon) even publicized low-dose doxepin in his CNS Spectrums column.  Generic doxepin is currently available in comparatively high doses (10, 25, 50, 75, 100, and 150 mg), but Somaxon found that lower doses (6 and 3 mg, even 1 mg) can be used to treat insomnia.  Silenor is sold at 3 and 6 mg per tablet.

The obvious question here (to expert and layman alike) is, what’s so special about the 3 or 6 mg dose?  Why can’t I just take a generic 10 mg pill and cut it in half (or thereabouts), for a homemade 5-mg dose?  Well, for one thing, the 10 mg dose is a capsule, so it can’t be split.  (There is a generic 10 mg/ml doxepin solution available, which would allow for very accurate dosing, but I’ll ignore that for now.)

Okay, so there’s the practical issue: pill vs. capsule.  But is 6 mg any better than 10 mg?  For any drug, there’s always variability in how people will respond.  The relative difference between 6 and 10 is large, but when you consider that people have been taking doses of up to 300 mg/day (the maximum approved dose for depression) for decades, it becomes relatively meaningless.  So what gives?

It’s natural to ask these questions.  Maybe Somaxon was hoping that doctors and patients would simply assume that they’ve done all the necessary studies to prove that, no, doxepin is an entirely different drug at lower doses, and far more effective for sleep at 3 or 6 mg than at any other dose, even 10 mg.  Indeed, a few papers have been published (by authors affiliated with Somaxon) showing that 3 and 6 mg are both effective doses.  But they still don’t answer the question:  how are those doses different from higher ones?

I contacted the Medical Affairs department at Somaxon and asked this very question.  How is 3 or 6 mg different from 10 mg or higher?  The woman on the other end of the line, who (one would think) must have heard this question before, politely responded, “Doxepin’s not approved for insomnia at doses of 10 mg or higher, and the 3 and 6 mg doses are available in tablet form, not capsule.”

I knew that already; it’s on their web site.  I would like to think that no psychiatrist asking my question would settle for this answer.  So I asked if she had some additional information.  She sent me a six-page document entitled “Is the 10 mg Doxepin Capsule a Suitable Substitute for the Silenor® 6 mg tablet?”  (If you’re interested in reading it, please email me.)

After reading the document, my response to this question is, “well, yes, it probably is.”  The document explains that doxepin hasn’t been studied as an insomnia agent at higher doses (in other words, nobody has tried to get FDA approval for doxepin in insomnia), and the contents of the tablet are absorbed at a different rate than the capsule.

But what really caught my eye was the following figure, which traces plasma concentration of doxepin over a 12-hour period.  The lower curve is for 6 mg of Silenor.  The higher curve is for “estimated” 10 mg doxepin.

Huh?  “Estimated”?  Yes, that’s right, the upper curve was actually obtained by giving people 50 mg doxepin capsules and then “estimating” the plasma concentrations that would result if the person had actually been given 10 mg capsules.  (I know, I had to read it twice myself.)  I don’t know how they did the estimation.  Did they divide the plasma concentration by 5?  Use some other equation involving fancy things like logarithms or integrals?  I don’t know, and they don’t say.  Which only raises the question: why didn’t they just use 10 mg capsules for this study???
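For what it’s worth, the simplest way to produce such an “estimate” is to assume linear (dose-proportional) pharmacokinetics, under which the 10 mg curve is just the 50 mg curve multiplied by 10/50 (that is, divided by 5, exactly one of the guesses above).  Here is a minimal sketch using a one-compartment oral-absorption (Bateman) model; every parameter value is a hypothetical round number, not a measured doxepin value:

```python
import math

# Sketch of dose-proportional scaling in a one-compartment oral-absorption
# (Bateman) model.  All parameters below are hypothetical round numbers,
# not measured doxepin values; the point is only that under linear
# kinetics the 10 mg curve equals the 50 mg curve multiplied by 10/50.

def concentration(dose_mg, t_hr, ka=1.0, ke=0.05, vd_l=100.0, f=0.3):
    """Plasma concentration (mg/L) at time t after a single oral dose."""
    return (f * dose_mg * ka) / (vd_l * (ka - ke)) * (
        math.exp(-ke * t_hr) - math.exp(-ka * t_hr))

for t in (1, 4, 8, 12):
    c50 = concentration(50, t)
    c10 = concentration(10, t)
    # Under linearity the ratio is the dose ratio at every time point:
    print(t, round(c10 / c50, 3))  # -> 0.2 at each t
```

Whether Somaxon used this kind of dose-ratio scaling or fit a more elaborate model, the document doesn’t say; and linearity itself is an assumption that can fail if absorption or metabolism saturates, which is precisely why measuring the 10 mg dose directly would have been more convincing.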

That seems a little fishy.  At any rate, their take-home message is that with the lower dose, there’s less doxepin left over in the bloodstream after 8 hours, so there’s less of a “hangover” effect the next morning.  But this just raises even more questions for me.  If this is the case, then what about all those people who took 75-150 mg each day for depression?  Wouldn’t that result in a constant “hangover” effect?  I didn’t practice in the 1970s and 1980s, but I’m guessing that depressed people on doxepin weren’t in bed 24 hours a day, 7 days a week.  (I know, the serotonergic and noradrenergic effects “kick in” at higher doses, but the histamine and alpha-1 adrenergic receptors are already saturated.)  A related question is, what plasma concentration of doxepin is required to induce sleep anyway?  What plasma concentration accounts for a “hangover” effect?  0.5?  1.0?  2.0?  Does anyone know?

The Somaxon document states that “clinical trials demonstrated only a modest increase in mean sleep maintenance efficacy when the dose is increased from 3 mg to 6 mg.”  But according to the graph above, the 3 mg curve would be expected to look quite different, as it’s a 50% reduction in dose.  (And I can’t even begin to think what the 1 mg curve would look like, but that apparently works, too.)

We all know (or should know) that tables, charts, and graphs can be used to convey just about any message.  To draw any conclusions, it’s important to look not only at which data are presented, but also at which data are not.  We must also ask what the data actually mean (i.e., what does a given plasma concentration actually mean for a clinical effect?).  In the end, Somaxon’s “explanation” seems like a pretty flimsy justification for prescribing a very expensive name-brand drug.

That said, I do have to say “thank you” to Somaxon for reminding me of yet another medication that I can use to help treat insomnia.  Not Silenor, but low-dose generic doxepin (10 mg).  It’s a shame they had to spend the millions of dollars on R&D, clinical trials, and marketing, to convince me to prescribe the generic version of their drug, which costs only pennies a pill, but then again, you pays your money, you takes your chances.

(Postscript:  Speaking of money, Somaxon stock closed at $2.70/share today, down from a high of $10.01 on the day Silenor was approved, a loss of $258 million of market capitalization.  Imagine all the soft pillows, soothing music CDs, and OTC sleep aids that money could have bought…)

The Perils of Checklist Psychiatry

March 16, 2011

It’s no secret that doctors in all specialties spend less and less time with patients these days.  Last Sunday’s NY Times cover article (which I wrote about here and here) gave a fairly stark example of how reimbursement incentives have given modern psychiatry a sort of assembly-line mentality:  “Come in, state your problems, and here’s your script.  Next in line!!”  Unfortunately, all the trappings of modern medicine—shrinking reimbursements, electronic medical record systems which favor checklists over narratives, and patients who frequently want a “quick fix”—contribute directly to this sort of practice.

To be fair, there are many psychiatrists who don’t work this way.  But this usually comes with a higher price tag, which insurance companies often refuse to pay.  Why?  Well, to use the common yet frustrating phrase, it’s not “evidence-based medicine.”  As it turns out, the only available evidence is for the measurement of specific symptoms (measured by a checklist) and the prescription of pills over (short) periods of time.  Paradoxically, psychiatry—which should know better—no longer sees patients as people with interesting backgrounds and multiple ongoing social and psychological dynamics, but as collections of symptoms (anywhere in the world!) which respond to drugs.

The embodiment of this mentality, of course, is the DSM-IV, the “diagnostic manual” of psychiatry, which is basically a collection of symptom checklists designed to make a psychiatric diagnosis.  Now, I know that’s a gross oversimplification, and I’m also aware that sophisticated interviewing skills can help to determine the difference between a minor disturbance in a patient’s mood or behavior and a pathological condition (i.e., between a symptom and a syndrome).  But often the time, or those skills, simply aren’t available, and a diagnosis is made on the basis of what’s on the list.  As a result, psychiatric diagnoses have become “diagnoses of inclusion”:  you say you have a symptom, you’ll get a diagnosis.

To make matters worse, the checklist mentality, aided by the Internet, has spawned a small industry of “diagnostic tools,” freely available to clinicians and to patients, and published in books, magazines, and web sites.  (The bestselling book The Checklist Manifesto may have contributed, too.  In it, author-surgeon Atul Gawande explains how simple checklists are useful in complex situations in which lives are on the line.  He has received much praise, but the checklists he describes help to narrow our focus, when in psychiatry it should be broadened.  In other words, checklists are great for preparing an OR for surgery, or a jetliner for takeoff, but not for identifying the underlying causes of an individual’s suffering.)

Anyway, a quick Google search for any mental health condition (or even a personality trait like shyness, irritability, or anger) will reveal dozens of free questionnaires, surveys, and checklists designed to make a tentative diagnosis.  Most give the disclaimer “this is not meant to be a diagnostic tool—please consult your physician.”

But why?  If the patient has already answered all the questions that the doctor will ask anyway in the 10 to 15 minutes allotted for their appointment, why can’t the patient just email the questionnaire directly to a doc in another state (or another country) from the convenience of their own home, enter their credit card information, and wait for a prescription in the mail?  Heck, why not eliminate the middleman and submit the questionnaire directly to the drug company for a supply of pills?

I realize I’m exaggerating here.  Questionnaires and checklists can be extremely helpful—when used responsibly—as a way to obtain a “snapshot” of a patient’s progress or of his/her active symptoms, and to suggest topics for discussion in a more thorough interview.  Also, people have an innate desire to know how they “score” on some measure—the exercise can even be entertaining—and their results can sometimes reveal things they didn’t know about themselves.

But what makes psychiatry and psychology fascinating is the discovery of alternate, more parsimonious (or potentially more serious) explanations for a patient’s traits and behaviors; or, conversely, informing a patient that his or her “high score” is actually nothing to be worried about.  That’s where the expert comes in.  Unfortunately, experts can behave like Internet surveys, too, and when we do, the “rush to judgment” can be shortsighted, unfair, and wrong.

Off-Label Meds: Caveat Prescriptor

March 13, 2011

In medicine we say that a drug is “indicated” for a given disorder when it has gone through rigorous testing for that condition. Typically, a drug company will perform clinical trials in which they select patients with the condition, give them the new drug, and compare them with similar patients who are given a placebo (or an established drug which is already used to treat the disease). In the US, when the FDA approves a drug, the drug company is then permitted to advertise it in magazines, journals, TV, the internet, and directly to doctors, but they must specify its “approved” use.

In the past few years, several drug companies have found themselves in trouble after accusations of marketing their drugs for off-label indications. Total fines have reached into the billions, and many companies have vowed to change their marketing practices in response.

It should be emphasized, however, that doctors use drugs off-label very frequently. This is particularly true in psychiatry, where an estimated 31% of all prescriptions are off-label. Some familiar examples include trazodone (an antidepressant) for insomnia or beta blockers (originally approved for hypertension and heart failure) for anxiety. Furthermore, some very common symptoms and conditions, such as personality disorders, impulsivity, nightmares, eating disorders, and PTSD, have no (or few) “indicated” medications, and yet we often treat them pharmacologically, sometimes with great success. And since the FDA restricts its approvals to medications and devices, even psychotherapy—something we routinely recommend and “prescribe” to patients—is, technically, off-label.

One colleague took this one step further and explained that virtually any psychiatric drug which has been prescribed for more than 8 or 12 weeks is being used “off-label” since the studies to obtain FDA approval are generally no longer than that. Admittedly, that’s nitpicking, but it does demonstrate how the FDA approval process works with a very limited amount of clinical data.

Drug companies that deliberately market their drugs for off-label indications are indeed guilty of misrepresenting their products and deceiving doctors and consumers. But to blame them for bad patient outcomes conveniently ignores the one missing link in the process: the doctor who decided to prescribe the drug in the first place. Whether we like it or not, drug companies are businesses, they sell products, and as with everything else in our consumerist society, the buyer (in this case the doctor) must beware.

Here’s an example. A new drug came to market in February called Latuda, which has been FDA approved for the treatment of schizophrenia. Until a few months ago, most community psychiatrists (like me) knew absolutely nothing about this drug.

If a sales rep visits my office tomorrow and tells me that it’s approved for schizophrenia and for bipolar disorder, she is obviously giving me false information. This is not good. But how I choose to use the drug is up to me. It’s my responsibility—and my duty, frankly—to look at the data for schizophrenia (which exists, and which is available on the Latuda web site and in a few articles in the literature). If I look for data on bipolar disorder, I’ll find that it doesn’t exist.

That’s just due diligence. After reviewing the data, I may conclude that Latuda looks like a lousy drug for schizophrenia (I’ll save those comments for later). However, I might find that it may have some benefit in bipolar disorder, maybe on particular symptoms or in a certain subgroup of patients. Or, I might find some completely unrelated condition in which it might be effective. If so, I should be able to go ahead and use it—assuming I’ve exhausted the established, accepted, and less costly treatments already. Convincing my patient’s insurance company to pay for it would be another story… but I digress.

I don’t mean to imply that marketing has no place in medicine and that all decisions should be made by the physician with the “purity” of data alone. In fact, for a new drug like Latuda, sales reps and advertising materials are effective vehicles for disseminating information to physicians, and most of the time it is done responsibly. I just think doctors need to evaluate the messages more critically (isn’t that something we all learned to do in med school?). Fortunately, most sales reps are willing to engage doctors in that dialogue and help us to obtain hard data if we request it.

The bottom line is this: psychiatric disorders are complicated entities, and medications may have potential far beyond their “approved” indications. While I agree that pharmaceutical marketing should stick to proven data and not anecdotal evidence or hearsay, doctors should be permitted to use drugs in the ways they see fit, regardless of marketing. But—and this is critical—doctors have a responsibility to evaluate the data for both unapproved and approved indications, and should be able to defend their treatment decisions. Pleading ignorance, or crying “the rep told me so,” is just thoughtless medicine.

Are Your Thoughts Still Racing, Jiefang?

March 10, 2011

A recent Vanity Fair article described the trend by American pharmaceutical companies to conduct more clinical trials outside of the United States and Western Europe.  The writer and bioethicist Carl Elliott also detailed this trend in his book White Coat, Black Hat, and it has recently received increasing scrutiny in the media.  While much attention has focused on the ethical concerns of overseas clinical trials, I’m avoiding that hot topic for now and arguing that we should pay some attention to questions of clinical relevance.

This is no small matter.  The VF article reports that one-third of clinical trials by the 20 largest US-based pharmaceutical companies are conducted exclusively at foreign sites, and medications destined for use in the U.S. have been tested in almost 60,000 clinical trials in 173 countries since 2000.  The reasons for “outsourcing” clinical trials are not surprising:  cheaper costs, less restrictive regulations, more accessible subjects, and patients who are less likely to have taken other medications in the past, thus yielding a more “pure” population and, hopefully, more useful data.

At first glance, overseas clinical trials really shouldn’t be much of a problem.  The underlying biology of a disease should have nothing to do with where the diseased person lives.  Hypertension and hepatitis are probably quite similar, if not identical, whether the patient is in Boston or Bangalore.  An article in this month’s Archives of General Psychiatry appears to reinforce this concept, showing that rates of bipolar disorder—as well as its “severity” and “impact”—are similar in a variety of different international settings.  Hence, if you were to ask me where I’d do a clinical trial for a new bipolar medication, I’d probably go where it would cost less to do so (i.e., overseas), too.

But is this appropriate?  Just because we can find “bipolar disorder” in the U.S. and in Uganda, does this mean we should treat it the same way?  Over at the blog 1boringoldman, Mickey has uncovered data showing that trials of Seroquel (an atypical antipsychotic) for bipolar depression are being conducted in 11 Chinese provinces.  You can search the data yourself at ClinicalTrials.gov (a truly fantastic tool, BTW) and find that many other psychiatric drugs are being tested worldwide, for a wide range of indications.

To a lowly community psychiatrist like me, this raises a few red flags.  As I learned in my transcultural psychiatry lectures in med school and residency, the manifestations of disease—and the recommended treatment approaches—can vary dramatically based on the culture in which the disease appears.  Even in my own practice, “bipolar disorder” varies greatly from person to person:  a bipolar patient from a wealthy San Francisco suburb experiences her disease very differently from the patient from the poverty-stricken neighborhoods of East Oakland.  A good psychiatrist must respect these differences.  Or so I was taught.

In his book Crazy Like Us, author Ethan Watters gives numerous examples of this phenomenon on a much larger scale.  He argues that the cultural dimensions that frame a disease have a profound impact on how a patient experiences and interprets his or her symptoms.  He also describes how patients’ expectations of treatments (drugs, “talk” therapy) differ from culture to culture, and can determine the success or failure of a treatment.

Let’s say you asked me to treat Jiefang, a young peasant woman with bipolar disorder from Guangdong Province.  Before doing so, I would want to read up on her community’s attitudes towards mental illness (and try to understand what “bipolar disorder” itself means in her community, if anything), learn about the belief systems in place regarding her signs and symptoms, and understand her goals for treatment.  Before prescribing Seroquel (or any other drug, for that matter), I’d like to know how she feels about using a chemical substance which might affect her feelings, emotions, and behavior.  I imagine it would take me a while before Jiefang and I felt comfortable proceeding with this approach.

There’s just something fishy about scientists from a multinational Contract Research Organization hired by AstraZeneca, flying into Guangdong with their white coats and clipboards, recruiting a bunch of folks with (western-defined) bipolar disorder just like Jiefang, giving them various doses of Seroquel, measuring their responses to bipolar rating scales (developed by westerners, of course), and submitting those data for FDA approval.

I sure hope I’m oversimplifying things.  Then again, maybe not.  When the next me-too drug is “FDA approved” for schizophrenia or bipolar depression (or, gasp, fibromyalgia), how can I be sure that it was tested on patients like the ones in my practice?  Or tested at all on patients who know what those diagnoses even mean?  There’s no way to tell anymore.

The “pathoplastic” features of disease—what Watters calls the “coloring and content”—make psychiatry fascinating.  But they’re often more than just details; they include the ways in which patients are influenced by public beliefs and cultural ideas, the forces to which they attribute their symptoms, and the faith (or lack thereof) they put into medications.  These factors must be considered in any attempt to define and treat mental illness.

Clinical trials have never resembled the “real world.”  But designing clinical trials that resemble our target patients even less—simply for the sake of bringing a drug to market quickly and more cheaply—is not just unreal, but deceptive and potentially dangerous.
