Do Antipsychotics Treat PTSD?

August 23, 2011

Do antipsychotics treat PTSD?  It depends.  That seems to be the best response I can give, based on the results of two recent studies on this complex disorder.  A better question, though, might be: what do antipsychotics treat in PTSD?

One of these reports, a controlled, double-blinded study of the atypical antipsychotic risperidone (Risperdal) for the treatment of “military service-related PTSD,” was featured in a New York Times article earlier this month.  The NYT headline proclaimed, somewhat unceremoniously:  “Antipsychotic Use is Questioned for Combat Stress.”  And indeed, the actual study, published in the Journal of the American Medical Association (JAMA), demonstrated that a six-month trial of risperidone did not improve patients’ scores on a scale of PTSD symptoms when compared to placebo.

But almost simultaneously, another paper was published in the online journal BMC Psychiatry, stating that Abilify—a different atypical antipsychotic—actually did help patients with “military-related PTSD with major depression.”

So what are we to conclude?  Even though there are some key differences between the studies (which I’ll mention below), a brief survey of the headlines might leave the impression that the two reports “cancel each other out.”  In reality, I think it’s safe to say that neither study contributes very much to our treatment of PTSD.  But it’s not because of the equivocal results.  Instead, it’s a consequence of the premises upon which the two studies were based.

PTSD, or post-traumatic stress disorder, is an incredibly complicated condition.  The diagnosis was first given to Vietnam veterans who, for years after their service, experienced symptoms of increased physiological arousal, avoidance of stimuli associated with their wartime experience, and continual re-experiencing (in the form of nightmares or flashbacks) of the trauma they experienced or observed.  It’s essentially a re-formulation of conditions that were, in earlier years, labeled “shell shock” or “combat fatigue.”

Since the introduction of this disorder in 1980 (in DSM-III), the diagnostic umbrella of PTSD has grown to include victims of sexual and physical abuse, traumatic accidents, natural disasters, terrorist attacks (like the September 11 massacre), and other criminal acts.  Some have even argued that poverty or unfortunate psychosocial circumstances may also qualify as the “traumatic” event.

Not only are the types of stressors that cause PTSD widely variable, but so are the symptoms that ultimately develop.  Some patients complain of minor but persistent symptoms, while others experience infrequent but intense exacerbations.  Similarly, the neurobiology of PTSD is still poorly understood, and may vary from person to person.  And we’ve only just begun to understand protective factors for PTSD, such as the concept of “resilience.”

Does it even make sense to say that one drug can (or cannot) treat such a complex disorder?  Take, for instance, the scale used in the JAMA article to measure patients’ PTSD symptoms.  The PTSD score they used as the outcome measure was the Clinician-Administered PTSD Scale, or CAPS, considered the “gold standard” for PTSD diagnosis.  But the CAPS includes 30 items, ranging from sleep disturbances to concentration difficulties to “survivor guilt.”

It doesn’t take a cognitive psychologist or neuroscientist to recognize that these 30 domains—all features of what we consider “clinical” PTSD—could be explained by just as many, if not more, neural pathways, and may be experienced in entirely different ways, depending on one’s psychological makeup and the nature of one’s past trauma.

In other words, saying that Risperdal is “not effective” for PTSD is like saying that acupuncture is not effective for chronic pain, or that a low-carb diet is not an effective way to lose weight.  Statistically speaking, these interventions might not help most patients, but in some, they may indeed play a crucial role.  We just don’t understand the disorders well enough.

[By the way, what about the other study, which reported that Abilify was helpful?  Well, this study was a retrospective review of patients who were prescribed Abilify, not a randomized, placebo-controlled trial.  And it did not use the CAPS, but the PCL-M, a shorter survey of PTSD symptoms.  Moreover, it only included 27 of the 123 veterans who agreed to take Abilify, and I cannot, for the life of me, figure out why the other 96 were excluded from their analysis.]

Anyway, the bottom line is this:  PTSD is a complicated, multifaceted disorder—probably a combination of disorders, similar to much of what we see in psychiatry.  To say that one medication “works” or another “doesn’t work” oversimplifies the condition almost to the point of absurdity.  And for the New York Times to publicize such a finding only gives more credence to the misconception that a prescription medication is (or has the potential to be) the treatment of choice for all patients with a given diagnosis.

What we need is not another drug trial for PTSD, but rather a better understanding of the psychological and neurobiological underpinnings of the disease, a comprehensive analysis of which symptoms respond to which drug, which aspects of the disorder are not amenable to medication management, and how individuals differ in their experience of the disorder and in the tools (pharmacological and otherwise) they can use to overcome their despair.  Anything else is a failure to recognize the human aspects of the disease, and an issuance of false hope to those who suffer.


How To Get Rich In Psychiatry

August 17, 2011

Doctors choose to be doctors for many reasons.  Sure, they “want to help people,” they “enjoy the science of medicine,” and they give several other predictable (and sometimes honest) explanations in their med school interviews.  But let’s be honest.  Historically, becoming a doctor has been a surefire way to ensure prestige, respect, and a very comfortable income.

Nowadays, in the era of shrinking insurance reimbursements and increasing overhead costs, this is no longer the case.  If personal riches are the goal, doctors must graze other pastures.  Fortunately, in psychiatry, several such options exist.  Let’s consider a few.

One way to make a lot of money is simply by seeing more patients.  If you earn a set amount per patient—and you’re not interested in the quality of your work—this might be for you.  Consider the following, recently posted by a community psychiatrist to an online mental health discussion group:

Our county mental health department pays my clinic $170 for an initial evaluation and $80 for a follow-up.  Of that, the doctor is paid $70 or $35, respectively, for each visit.  There is a wide range of patients/hour since different doctors have different financial requirements and philosophies of care.  The range is 3 patients/hour to 6 patients/hour.

This payment schedule incentivizes output.  A doctor who sees three patients an hour makes $105/hr and spends 20 minutes with each patient.  A doctor who sees 6 patients an hour spends 10 minutes with each patient and makes $210.  One “outlier” doctor in our clinic saw, on average, 7 patients an hour, spending roughly 8 minutes with each patient and earning $270/hr.  His clinical notes reflected his rapid pace…. [but] Despite his shoddy care of patients, he was tolerated at the clinic because he earned a lot of money for the organization.

If this isn’t quite your cup of tea, you can always consider working in a more “legit” capacity, like the Department of Corrections.  You may recall the Bloomberg report last month about the prison psychiatrist who raked in over $800,000 in one year—making him the highest-paid state employee in California.  As it turns out, that was a “data entry error.”  (Bloomberg issued a correction.)  Nevertheless, the cat was out of the bag: prison psychiatrists make big bucks (largely for prescribing Seroquel and benzos).  With seniority and “merit-based increases,” one prison shrink in California was able to earn over $600,000—and that’s for a shrink who was found to be “incompetent.”  Maybe they pay the competent ones even more?

Another option is to be a paid drug speaker.  I’m not referring to the small-time local doc who gives bland PowerPoint lectures to his colleagues over a catered lunch of even blander ham-and-cheese sandwiches.  No sir.  I’m talking about the psychiatrists hired to fly all around the country to give talks at the nicest five-star restaurants in the nation’s biggest drug-market cities.  The advantage here is that you don’t even have to be a great doc.  You just have to own a suit, follow a script, speak well, and enjoy good food and wine.

As most readers of this blog know, ProPublica recently published a list of the sums paid by pharmaceutical companies to doctors for these “educational programs.”  Some docs walked away with checks worth tens—or hundreds—of thousands of dollars.  And, not surprisingly, psychiatrists were the biggest offenders (er, earners).  I guess there is gold in explaining the dopamine hypothesis or the mechanism of neurotransmitter reuptake inhibition to yet another doctor.

Which brings me to perhaps the most tried-and-true way to convert one’s medical education into cash:  become an entrepreneur.  Discovering a new drug or unraveling a new disease process might revolutionize medical care and improve the lives of millions.  And throughout the history of medicine, numerous physician-researchers have converted their groundbreaking discoveries (or luck) into handsome profits.

Unfortunately, in psychiatry, paradigm shifts of the same magnitude have been few and far between.  Instead, the road to riches has been paved by the following formula: (1) “Buy in” to the prevailing disease model (regardless of its biological validity); (2) Develop a drug that “fits” into the model; (3) Find some way to get the FDA to approve it; (4) Promote it ruthlessly; (5) Profit.

In my residency program, for example, several faculty members founded a biotech company whose sole product was a glucocorticoid receptor antagonist which, they believed, might treat psychotic depression (you know, the elevated stress hormones seen in depression, and so on).  The drug didn’t work (rendering their stock options worth only millions instead of tens of millions).  But that didn’t stop them.  They simply searched for other ways to make their compound relevant.  As I write, they’re looking at it as a treatment for Cushing’s syndrome (a more logical—if far less profitable—indication).

The psychiatry blogger 1boringoldman has written a great deal about the legions of esteemed academic psychiatrists who have gotten caught up in the same sort of rush (no pun intended) to bring new drugs to market.  His posts are definitely worth a read.  Frankly, I see no problem with psychiatrists lending their expertise to a commercial enterprise in the hopes of capturing some of the windfall from a new blockbuster drug.  Everyone else in medicine does it, why not us?

The problem, as mentioned above, is that most of our recent psychiatric meds are not blockbusters.  Or, to be more accurate, they don’t represent major improvements in how we treat (or even understand) mental illness.  They’re largely copycat solutions to puzzles that may have very little to do with the actual pathology—not to mention psychology—of the conditions we treat.

To make matters worse, when huge investments in new drugs don’t pay off, investigators (including the psychiatrists expecting huge dividends) look for back-door ways to capture market share, rather than going back to the drawing board to refine their initial hypotheses.  Take, for instance, RCT Logic, a company whose board includes the ubiquitous Stephen Stahl and Maurizio Fava, two psychiatrists with extensive experience in clinical drug trials.  But the stated purpose of this company is not to develop novel treatments for mental illness; they have no labs, no clinics, no scanners, and no patients.  Instead, their mission is to develop clinical trial designs that “reduce the detrimental impact of the placebo response.”

Yes, that’s right: the new way to make money in psychiatry is not to find better ways to treat people, but to find ways to make relatively useless interventions look good.

It’s almost embarrassing that we’ve come to this point.  Nevertheless, as someone who has decidedly not profited (far from it!) from what I consider to be a dedicated, intelligent, and compassionate approach to my patients, I’m not surprised that docs who are “in it for the money” have exploited these alternate paths.  I just hope that patients and third-party payers wake up to the shenanigans played by my colleagues who are just looking for the easiest payoff.

But I’m not holding my breath.

Footnote: For even more ways to get rich in psychiatry, see this post by The Last Psychiatrist.


Critical Thinking and Drug Advertising

August 14, 2011

One of the advantages of teaching medical students is that I can keep abreast of changes in medical education.  It’s far too easy for a doctor (even just a few years out of training) to become complacent and oblivious to changes in the modern medical curriculum.  So I was pleasantly surprised earlier this week when a fourth-year medical student told me that his recent licensing examination included a vignette which tested his ability to interpret data from a pharmaceutical company advertisement.  Given that most patients (and, indeed, most doctors) now get their information from such sources, it was nice to see that this is now part of a medical student’s education.

For those of you unfamiliar with the process, the US Medical Licensing Examination (USMLE) is a three-step examination that all medical students must take in order to obtain a medical license in the United States.  Most students take steps 1 and 2 during medical school, while step 3 is taken during residency.

Effective this month, the drug-ad questions will appear in the Step 2 examination.  Obviously, I don’t have access to the particular ad that my med student saw, but the USMLE website provides a sample item.


It’s attractive and seems concise.  It’s certainly easier to read—some might even say more “fun”—than a dry, boring journal article or data table.  But is it informative?  What would a doctor need to know to confidently prescribe this new drug?  That’s the emphasis of this new type of test question.  Specifically, the two questions pertaining to this item ask the student (1) to identify which statement is most strongly supported by information in the ad, and (2) to identify which type of research design would give the best data in support of using this drug.

It’s good to know that students are being encouraged to ask such questions of themselves (and, more importantly, one would hope, of the people presenting them with such information).  For comparison, here are two “real-world” examples of promotional advertising I have received for two recently launched psychiatric drugs, Latuda and Oleptro.


Again, nice to look at.  But essentially devoid of information.  Okay, maybe that’s unfair:  Latuda was found to be effective in “two studies for each dose,” and the Oleptro ad claims that “an eight-week study showed that depression symptoms improved for many people taking Oleptro.”  But what does “effective” mean?  What does “improved” mean?  Where’s the data?  How do these drugs compare to medications we’ve been using for years?  Those are the questions that we need to ask, not only to save costs (new drugs are expensive) but also to prevent exposing our patients to adverse effects that only emerge after a period of time on a drug.

(To be fair, it is quite easy to obtain this information on the drug companies’ websites, or by asking the respective drug reps.  But first impressions count for a lot, and how many providers actually ask for the info?  Or can they understand it once they do get it??)

The issue of drug advertising and its influence on doctors has received a good degree of attention lately.  An article in PLoS Medicine last year found that exposure to pharmaceutical company information was frequently (although not always) associated with more prescriptions, higher health care costs, or lower prescribing quality.  Similarly, a report last May in the Archives of Otolaryngology evaluated 50 drug ads in otolaryngology (ENT) journals and found that only 14 of the 50 (28%) were based on “strong evidence.”  And the journal Emergency Medicine Australasia went one step further last February and banned all drug company advertising, claiming that “marketing of drugs by the pharmaceutical industry, whose prime aim is to bias readers towards prescribing a particular product, is fundamentally at odds with the mission of medical journals.”

The authors of the PLoS article even wrote the editors of the Lancet, one of the world’s top medical journals, to ask if they’d be willing to ban drug ads, too.  Unfortunately, banning drug advertising may not solve the problem either.  As discussed in an excellent article by Harriet Washington in this summer’s American Scholar, drug companies have great influence over the research that gets funded, carried out, and published, regardless of advertising.  Washington writes: “there exist many ways to subvert the clinical-trial process for marketing purposes, and the pharmaceutical industry seems to have found them all.”

As I’ve written before, I have no philosophical—or practical—opposition to pharmaceutical companies, commercial R&D, or drug advertising.  But I am opposed to the blind acceptance of messages that are the direct product of corporate marketing departments, Madison Avenue hucksters, and drug-company shills.  It’s nice to know that the doctors of tomorrow are being taught to ask the right questions, to become aware of bias, and to develop stronger critical thinking skills.  Hopefully this will help them to make better decisions for their patients, rather than serve as unwitting conduits for big pharma’s more wasteful wares.


Antidepressants: The New Candy?

August 9, 2011

It should come as no surprise to anyone paying attention to health care (not to mention modern American society) that antidepressants are very heavily prescribed.  They are, in fact, the second most widely prescribed class of medicine in America, with 253 million prescriptions written in 2010 alone.  Whether this means we are suffering from an epidemic of depression is another thing.  In fact, a recent article questions whether we’re suffering from much of anything at all.

In the August issue of Health Affairs, Ramin Mojtabai and Mark Olfson present evidence that doctors are prescribing antidepressants at ever-higher rates.  Over a ten-year period (1996-2007), the percentage of all office visits to non-psychiatrists that included an antidepressant prescription rose from 4.1% to 8.8%.  The rates were even higher for primary care providers: from 6.2% to 11.5%.

But there’s more.  The investigators also found that in the majority of cases, antidepressants were given even in the absence of a psychiatric diagnosis.  In 1996, 59.5% of the antidepressant recipients lacked a psychiatric diagnosis.  In 2007, this number had increased to 72.7%.

In other words, nearly 3 out of 4 patients who visited a nonpsychiatrist and received a prescription for an antidepressant were not given a psychiatric diagnosis by that doctor.  Why might this be the case?  Well, as the authors point out, antidepressants are used off-label for a variety of conditions—fatigue, pain, headaches, PMS, irritability.  None of which have any good data supporting their use, mind you.

It’s possible that nonpsychiatrists might add an antidepressant to someone’s medication regimen because they “seem” depressed or anxious.  It is also true that primary care providers do manage mental illness sometimes, particularly in areas where psychiatrists are in short supply.  But remember, in the majority of cases the doctors did not even give a psychiatric diagnosis, which suggests that even if they did a “psychiatric evaluation,” the evaluation was likely quick and haphazard.

And then, of course, there were probably some cases in which the primary care docs just continued medications that were originally prescribed by a psychiatrist—in which case perhaps they simply didn’t report a diagnosis.

But is any of this okay?  Some, like a psychiatrist quoted in a Wall Street Journal article on this report, argue that antidepressants are safe.  They’re unlikely to be abused, often effective (if only as a placebo), and dirt cheap (well, at least the generic SSRIs and TCAs are).  But others have had very real problems discontinuing them, or have suffered particularly troublesome side effects.

The increasingly indiscriminate use of antidepressants might also open the door to the (ab)use of other, more costly drugs with potentially more devastating side effects.  I continue to be amazed, for example, by the number of primary care docs who prescribe Seroquel (an antipsychotic) for insomnia, when multiple other pharmacologic and nonpharmacologic options are ignored.  In my experience, in the vast majority of these cases, the (well-known) risks of increased appetite and blood sugar were never discussed with the patient.  And then there are other antipsychotics like Abilify and Seroquel XR, which are increasingly being used in primary care as drugs to “augment” antidepressants and will probably be prescribed as freely as the antidepressants themselves.  (Case in point: a senior medical student was shocked when I told her a few days ago that Abilify is an antipsychotic.  “I always thought it was an antidepressant,” she remarked, “after seeing all those TV commercials.”)

For better or for worse, the increased use of antidepressants in primary care may prove to be yet another blow to the foundation of biological psychiatry.  Doctors prescribe—and continue to prescribe—these drugs because they “work.”  It’s probably more accurate, however, to say that doctors and patients think they work.  And this may have nothing to do with biology.  As the saying goes, it’s the thought that counts.

Anyway, if this is true—and you consider the fact that these drugs are prescribed on the basis of a rudimentary workup (remember, no diagnosis was given 72.7% of the time)—then the use of an antidepressant probably has no more justification than the addition of a multivitamin, the admonition to eat less red meat, or the suggestion to “get more fresh air.”

The bottom line: If we’re going to give out antidepressants like candy, then let’s treat them as such.  Too much candy can be a bad thing—something that primary care doctors can certainly understand.  So if our patients ask for candy, then we need to find a substitute—something equally soothing and comforting—or provide them instead with a healthy diet of interventions to address the real issues, rather than masking those problems with a treat to satisfy their sweet tooth and bring them back for more.


Google Is My New Hippocampus

August 6, 2011

A few days ago, upon awakening but before my brain was fully alert, I was reviewing the events of the previous few days in preparation for the new one.  At one point I tried to remember a conversation I had had with a colleague about three days prior, but I could not quite remember the specifics of our discussion.  “No big deal,” I thought to myself, “I’ll just Google it.”

Almost immediately, I recognized the folly of this thought.  Obviously, there is no way to “Google” the events of our personal lives.  But while impractical, the solution was a logical one.  If I want to know any fact or piece of information, I Google it online.  If I want to find a file on my computer, I use Google Desktop.  All of my email conversations for the last five years are archived in my Google Mail account, so I can quickly find correspondence (and people, and account numbers, and emailed passwords, etc) at the click of the “Search” button.  No wonder I immediately thought of Googling my own memory.

A recent article in Science claims that the permeation of Google and other search engines into our lives—and now onto our smartphones and other portable gadgets—has not only made it easier for us to retrieve information, but it has also changed the way we remember.  In their experiments, three cognitive psychologists from Columbia, Harvard, and UW-Madison demonstrated that we are more likely to forget information if we know that we can access it (e.g., by a search engine) in the future.  Moreover, even for simple data, we’re more likely to remember where we store pieces of information than the subject matter itself.

The implication here is that the process of memory storage & retrieval is rapidly changing in the Online Age.  Humans no longer need to memorize anything (who was the 18th president?  What’s the capital of Australia?  When was the Six-Day War?), but instead just need to know how to access it.

Is this simply a variation of the old statement that “intelligence is not necessarily knowing everything but instead where to find it”?  Perhaps.  An optimist might look at this evolution in human memory as presenting an opportunity to use more brain power for processing complex pieces of information that can’t be readily stored.  In my work, for instance, I’m glad I don’t need to recall precise drug mechanisms, drug-drug interactions, or specific diagnostic criteria (I can look them up quite easily), but can instead pay closer attention to the process of listening to my patients and attending to more subtle concerns.  (Which often does more good in the long run anyway.)

The difference, however, is that I was trained in an era in which I did have to memorize all of this information without the advantage of an external online memory bank.  Along the way, I was able to make my own connections among sets of seemingly unrelated facts.  I was able to weed out those that were irrelevant, and retain those that truly made a difference in my daily work.  This resulted, in my opinion, in a much richer understanding of my field.

While I’ve seen no studies of this issue, I wonder whether students in medicine (or, for that matter, other fields requiring mastery of a large body of information) are developing different sets of skills in the Google Era.  Knowing that one can always “look something up” might make a student more careless or lazy.  On the other hand, it might help one to develop a whole new set of clinical skills that previous generations simply didn’t have time for.

Unfortunately, those skills are not the things that are rewarded in our day-to-day work.    We value information and facts, rather than substance and process.  In general, patients want to know drug doses, mechanisms, and side effects, rather than developing a “therapeutic relationship” with their doctor.  Third-party payers don’t care about the insights or breakthroughs that might happen during therapy, but instead that the proper diagnoses and billing codes are given, and that patients improve on some objective measurement.  And when my charts are reviewed by an auditor (or a lawyer), what matters is not the quality of the doctor-patient interaction, but instead the documentation, the informed consent, the checklists, the precise drug dosing, details in the treatment plan, and so on.

I think immediate access to information is a wonderful thing.  Perhaps I rely on it too much.  (My fiancé has already reprimanded me for looking up actors or plot twists on IMDB while we’re watching movies.)  But now that we know it’s changing the way we store information and—I don’t think this is too much of a stretch—the way we think, we should look for ways to use information more efficiently, creatively, and productively.  The human brain has immense potential; now that our collective memories are external (and our likelihood of forgetting is essentially nil), let’s tap that potential to do some special and unique things that computers can’t do.  Yet.


Maybe Stuart Smalley Was Right All Along

July 31, 2011

To many people, the self-help movement—with its positive self-talk, daily feel-good affirmations, and emphasis on vague concepts like “gratitude” and “acceptance”—seems like cheesy psychobabble.  Take, for instance, Al Franken’s fictional early-1990s SNL character Stuart Smalley: a perennially cheerful, cardigan-clad “member of several 12-step groups but not a licensed therapist,” whose annoyingly positive attitude mocked the idea that personal suffering could be overcome with absurdly simple affirmative self-talk.

Stuart Smalley was clearly a caricature of the 12-step movement (in fact, many of his “catchphrases” came directly from 12-step principles), but there’s little doubt that the strategies he espoused have worked for many patients in their efforts to overcome alcoholism, drug addiction, and other types of mental illness.

Twenty years later, we now realize Stuart may have been onto something.

A review by Kristin Layous and her colleagues, published in this month’s Journal of Alternative and Complementary Medicine, shows evidence that daily affirmations and other “positive activity interventions” (PAIs) may have a place in the treatment of depression.  They summarize recent studies examining such interventions, including two randomized controlled studies in patients with mild clinical depression, which show that PAIs do, in fact, have a significant (and rapid) effect on reducing depressive symptoms.

What exactly is a PAI?  The authors offer some examples:  “writing letters of gratitude, counting one’s blessings, practicing optimism, performing acts of kindness, meditation on positive feelings toward others, and using one’s signature strengths.”  They argue that when a depressed person engages in any of these activities, he or she not only overcomes depressed feelings (if only transiently) but can also use this to “move past the point of simply ‘not feeling depressed’ to the point of flourishing.”

Layous and her colleagues even summarize results of clinical trials of self-administered PAIs.  They report that PAIs had effect sizes of 0.31 for depressive symptoms in a community sample, and 0.24 and 0.23 in two studies specifically with depressed patients.  By comparison, psychotherapy has an average effect size of approximately 0.32, and psychotropic medications (although there is some controversy) have roughly the same effect.

[BTW, an “effect size” is a standardized measure of the magnitude of an observed effect.  An effect size of 0.00 means the intervention has no impact at all; an effect size of 1.00 means the intervention causes an average change (measured across the whole group) equivalent to one standard deviation of the baseline measurement in that group.  An effect size of 0.5 means the average change is half the standard deviation, and so forth.  By the conventions usually cited for standardized mean differences, an effect size of 0.2 is considered to be “small,” 0.5 is “medium,” and 0.8 is a “large” effect.  For more information, see this excellent summary.]
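To make the arithmetic concrete, here is a minimal sketch in Python of how such a number is computed.  The scores below are hypothetical, invented purely for illustration—they are not data from the Layous review or from any trial mentioned above.  The average improvement on a symptom scale is simply divided by the standard deviation of the baseline scores.

```python
import numpy as np

# Hypothetical depression-scale scores, before and after a "positive activity
# intervention" -- invented numbers for illustration only.
baseline = np.array([24, 30, 18, 27, 22, 29, 25, 21])
followup = np.array([21, 27, 17, 23, 20, 26, 24, 19])

# Effect size = mean improvement divided by the baseline standard deviation,
# i.e., the change expressed in "standard deviation units."
effect_size = (baseline.mean() - followup.mean()) / baseline.std(ddof=1)
print(round(effect_size, 2))
```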

So if PAIs work about as well as medications or psychotherapy, then why don’t we use them more often in our depressed patients?   Well, there are a number of reasons.  First of all, until recently, no one has taken such an approach very seriously.  Despite its enormous common-sense appeal, “positive psychology” has only been a field of legitimate scientific study for the last ten years or so (one of its major proponents, Sonja Lyubomirsky, is a co-author on this review) and therefore has not received the sort of scientific scrutiny demanded by “evidence-based” medicine.

A related explanation may be that people just don’t think that “positive thinking” can cure what they feel must be a disease.  As Albert Einstein once said, “You cannot solve a problem from the same consciousness that created it.”  The implication is that one must seek outside help—a drug, a therapist, some expert—to treat one’s illness.  But the reality is that for most cases of depression, “positive thinking” is outside help.  It’s something that—almost by definition—depressed people don’t do.  If they were to try it, they may reap great benefits, while simultaneously changing neural pathways responsible for the depression in the first place.

Which brings me to the final two reasons why “positive thinking” isn’t part of our treatment repertoire.  For one thing, there’s little financial incentive (to people like me) to do it.  If my patients can overcome their depression by “counting their blessings” for 30 minutes each day, or acting kindly towards strangers ten times a week, then they’ll be less likely to pay me for psychotherapy or for a refill of their antidepressant prescription.  Thus, psychiatrists and psychologists have a vested interest in patients believing that their expert skills and knowledge (of esoteric neural pathways) are vital for a full recovery, when, in fact, they may not be.

Finally, the “positive thinking” concept may itself become too “medicalized,” which may ruin an otherwise very good idea.  The Layous article, for example, tries to give a neuroanatomical explanation for why PAIs are effective.  They write that PAIs “might be linked to downregulation of the hyperactivated amygdala response” or might cause “activation in the left frontal region” and lower activity in the right frontal region.  Okay, these explanations might be true, but the real question is: does it matter?  Is it necessary to identify a mechanism for everything, even interventions that are (a) non-invasive, (b) cheap, (c) easy, (d) safe, and (e) effective?   In our great desire to identify neural mechanisms or “pathways” of PAIs, we might end up finding nothing;  it would be a shame if this result (or, more accurately, the lack thereof) leads us to the conclusion that it’s all “pseudoscience,” hocus-pocus, psychobabble stuff, and not worthy of our time or resources.

At any rate, it’s great to see that alternative methods of treating depression are receiving some attention.  I just hope that their “alternative-ness” doesn’t earn them immediate rejection by the medical community.  On the contrary, we need to identify those for whom such approaches are beneficial; engaging in “positive activities” to treat depression is an obvious idea whose time has come.


Mental Illness IS Real After All… So What Was I Treating Before?

July 26, 2011

I recently started working part-time on an inpatient psychiatric unit at a large county medical center.  The last time I worked in inpatient psychiatry was six years ago, and in the meantime I’ve worked in various office settings—community mental health, private practice, residential drug/alcohol treatment, and research.  I’m glad I’m back, but it’s really making me rethink my ideas about mental illness.

An inpatient psychiatry unit is not just a locked version of an outpatient clinic.  The key difference—which would be apparent to any observer—is the intensity of patients’ suffering.  Of course, this should have been obvious to me, having treated patients like these before.  But I’ll admit, I wasn’t prepared for the abrupt transition.  Indeed, the experience has reminded me how severe mental illness can be, and has proven to be a “wake-up” call at this point in my career, before I develop the conceited (yet naïve) belief that “I’ve seen it all.”

Patients are hospitalized when they simply cannot take care of themselves—or may be a danger to themselves or others—as a result of their psychiatric symptoms.  These individuals are in severe emotional or psychological distress, have immense difficulty grasping reality, or are at imminent risk of self-harm, or worse.  In contrast to the clinic, the illnesses I see on the inpatient unit are more incapacitating, more palpable, and—for lack of a better word—more “medical.”

Perhaps this is because they also seem to respond better to our interventions.  Medications are never 100% effective, but they can have a profound impact on quelling the most distressing and debilitating symptoms of the psychiatric inpatient.  In the outpatient setting, medications—and even psychotherapy—are confounded by so many other factors in the typical patient’s life.  When I’m seeing a patient every month, for instance—or even every week—I often wonder whether my effort is doing any good.  When a patient assures me it is, I think it’s because I try to be a nice, friendly guy.  Not because I feel like I’m practicing any medicine.  (By the way, that’s not humility; I see it as healthy skepticism.)

Does this mean that the patient who sees her psychiatrist every four weeks and who has never been hospitalized is not suffering?  Or that we should just do away with psychiatric outpatient care because these patients don’t have “diseases”?  Of course not.  Discharged patients need outpatient follow-up, and sometimes outpatient care is vital to prevent hospitalization in the first place.  Moreover, people do suffer and do benefit from coming to see doctors like me in the outpatient setting.

But I think it’s important to look at the differences between who gets hospitalized and who does not, as this may inform our thinking about the nature of mental illness and help us to deliver treatment accordingly.  At the risk of oversimplifying things (and of offending many in my profession—and maybe even some patients), perhaps the more severe cases are the true psychiatric “diseases” with clear neurochemical or anatomic foundations, and which will respond robustly to the right pharmacological or neurosurgical cure (once we find it), while the outpatient cases are not “diseases” at all, but simply maladaptive strategies to cope with what is (unfortunately) a chaotic, unfair, and challenging world.

Some will argue that these two things are one and the same.  Some will argue that one may lead to the other.  In part, the distinction hinges upon what we call a “disease.”  At any rate, it’s an interesting nosological dilemma.  But in the meantime, we should be careful not to rush to the conclusion that the conditions we see in acutely incapacitated and severely disturbed hospital patients are the same as those we see in our office practices, just “more extreme versions.”  In fact, they may be entirely different entities altogether, and may respond to entirely different interventions (i.e., not just higher doses of the same drug).

The trick is where to draw the distinction between the “true” disease and its “outpatient-only” counterpart.  Perhaps this is where biomarkers like genotypes or blood tests might prove useful.  In my opinion, this would be a fruitful area of research, as it would help us better understand the biology of disease, design more suitable treatments (pharmacological or otherwise), and dedicate treatment resources more fairly.  It would also lead us to provide more humane and thoughtful care to people on both sides of the double-locked doors—something we seem to do less and less of these days.


Psychiatry, Homeostasis, and Regression to the Mean

July 20, 2011

Are atypical antipsychotics overprescribed?  This question was raised in a recent article on the Al Jazeera English website, and has been debated back and forth for quite some time on various blogs, including this one.  Not surprisingly, the article’s conclusion was that, yes, these medications are indeed overused—and, moreover, that the pharmaceutical industry is responsible for getting patients “hooked” on these drugs via inappropriate advertising and off-label promotion of these agents.

However, I don’t know if this is an entirely fair characterization.

First of all, let’s just be up front with what should be obvious.  Pharmaceutical companies are businesses.  They’re not interested in human health or disease, except insofar as they can exploit people’s fears of disease (sometimes legitimately, sometimes not) to make money.  Anyone who believes that a publicly traded drugmaker might forego their bottom line to treat malaria in Africa “because it’s the right thing to do” is sorely mistaken.  The mission of companies like AstraZeneca, Pfizer, and BMS is to get doctors to prescribe as much Seroquel, Geodon, and Abilify (respectively) as possible.  Period.

In reality, pharmaceutical company revenues would be zero if doctors (OK, and nurse practitioners and—at least in some states—psychologists) didn’t prescribe their drugs.  So it’s doctors who have made antipsychotics one of the most prescribed classes of drugs in America, not the drug companies.  Why is this?  Has there been an epidemic of schizophrenia?  (NB:  most cases of schizophrenia do not fully respond to these drugs.)  Are we particularly susceptible to drug marketing?  Do we believe in the clear and indisputable efficacy of these drugs in the many psychiatric conditions for which they’ve been approved (and those for which they haven’t)?

No, I like to think of it instead as our collective failure to appreciate that patients are more resilient and adaptive than we give them credit for, not to mention our infatuation with the concept of biological psychiatry.  In fact, much of what we attribute to our drugs may in fact be the result of something else entirely.

For an example of what I mean, take a look at the following figure:

This figure has nothing to do with psychiatry.  It shows the average body temperature of two groups of patients with fever—one who received intravenous Tylenol, and the other who received an intravenous placebo.  As you can easily see, Tylenol cut the fever short by a good 30-60 minutes.  But both groups of patients eventually reestablished a normal body temperature.

This is a concept called homeostasis.  It’s the innate ability of a living creature to keep things constant.  When you have a fever, you naturally perspire to give off heat.  When you have an infection, you naturally mobilize your immune system to fight it.  (BTW, prescribing antibiotics for viral respiratory infections is wasteful:  the illness resolves itself “naturally” but the use of a drug leads us to believe that the drug is responsible.)  When you’re sad and hopeless, lethargic and fatigued, you naturally engage in activities to pull yourself out of this “rut.”  All too often, when we doctors see these symptoms, we jump at a diagnosis and a treatment, neglecting the very real human capacity—evolutionarily programmed!!—to naturally overcome these transient blows to our psychological stability and well-being.

There’s another concept—this one from statistics—that we often fail to recognize.  It’s called “regression to the mean.”  If I survey a large number of people on some state of their psychological function (such as mood, or irritability, or distractibility, or anxiety, etc), those with an extreme score on their first evaluation will most likely have a more “normal” score on their next evaluation, whether that first score was extremely high or extremely low, even in the absence of any intervention.  In other words, if you’re having a particularly bad day today, you’re more likely to be having a better day the next time I see you.
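For readers who like to see this play out, here is a minimal simulation sketch (all numbers are hypothetical and chosen only for illustration).  Each imaginary person has a stable underlying mood plus day-to-day noise; the people who look most extreme at the first evaluation are, on average, noticeably closer to the group mean at the second evaluation, with no intervention whatsoever.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
trait = rng.normal(50, 10, n)           # stable underlying "mood" for each person
visit1 = trait + rng.normal(0, 10, n)   # score at the first evaluation (trait + noise)
visit2 = trait + rng.normal(0, 10, n)   # score at the second evaluation (trait + new noise)

worst_day = visit1 > np.percentile(visit1, 95)   # people having a "particularly bad day"
print(visit1[worst_day].mean())   # well above the overall mean of ~50
print(visit2[worst_day].mean())   # pulled back toward the mean, with no treatment given
```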

This is perhaps the best argument for why it takes multiple sessions with a patient—or, at the very least, a very thorough psychiatric history—to make a confident psychiatric diagnosis and to follow response to treatment.  Symptoms—especially mild ones—come and go.  But in our rush to judgment (not to mention the pressures of modern medicine to determine a diagnosis ASAP for billing purposes), endorsement of a few symptoms is often sufficient to justify the prescription of a drug.

Homeostasis and regression to the mean are not the same.  One is a biological process; the other is a statistical consequence of natural, semi-random variation.  But both of these concepts should be considered as explanations for our patients “getting better.”  When these changes occur in the context of taking a medication (particularly one like an atypical antipsychotic, with so many uses for multiple nonspecific diagnoses), we like to think the medication is doing the trick, when the clinical response may be due to something else altogether.

Al Jazeera was right: the pharmaceutical companies have done a fantastic job in placing atypical antipsychotics into every psychiatrist’s armamentarium.  And yes, we use them, and people improve.  The point, though, is that the two are sometimes not connected.  Until and unless we find some way to recognize this—and figure out what really works—Big Pharma will continue smiling all the way to the bank.


Addiction Medicine: A New Specialty Or More Of The Same?

July 14, 2011

In an attempt to address a significant—and unmet—need in contemporary health care, the American Board of Addiction Medicine (ABAM) has accredited ten new residency programs in “addiction medicine.”  Details can be found in this article in the July 10 New York Times.  This new initiative will permit young doctors who have completed medical school and an initial internship year to spend an additional year learning about the management of addictive disease.

To be sure, there’s a definite need for trained addiction specialists.  Nora Volkow, director of the National Institute on Drug Abuse (NIDA), says that the lack of knowledge about substance abuse among physicians is “a very serious problem,” and I have certainly found this to be true.  Addictions to drugs and alcohol are devastating (and often life-threatening) conditions that many doctors are ill-prepared to understand—much less treat—and such disorders frequently complicate the management of many medical and psychiatric conditions.

Having worked in the addiction field, however (and having had my own personal experiences in the recovery process), I’m concerned about the precedent that these programs might set for future generations of physicians treating addictive illness.

As much as I respect addiction scientists and agree that the neurochemical basis of addiction deserves greater study, I disagree (in part) with the countless experts who have pronounced for the last 10-20 years that addiction is “a brain disease.”  In my opinion, addiction is a brain disease in the same way that “love” is a rush of dopamine or “anxiety” is a limbic system abnormality.  In other words: yes, addiction clearly does involve the brain, but overcoming one’s addiction (which means different things to different people) is a process that transcends the process of simply taking a pill, correcting one’s biochemistry, or fixing a mutant gene.  In some cases it requires hard work and immense will power; in other cases, a grim recognition of one’s circumstances (“hitting bottom”) and a desire to change; and in still other cases, a “spiritual awakening.”  None of these can be prescribed by a doctor.

In fact, the best argument against the idea of addiction as a biological illness is simple experience.  Each of us has heard of the alcoholic who got sober by going to meetings; or the heroin addict who successfully quit “cold turkey”; or the hard-core cocaine user who stopped after a serious financial setback or the threat of losing his job, marriage, or both.  In fact, these stories are actually quite common.  By comparison, no one overcomes diabetes after experiencing “one too many episodes of ketoacidosis,” and no one resolves their hypertension by establishing a relationship with a Higher Power.

That’s not to say that pharmacological remedies have no place in the treatment of addiction.  Methadone and buprenorphine (Suboxone) are legal, prescription substitutes for heroin and other opioids, and they have allowed addicts to live respectable, “functional” lives.  Drugs like naltrexone or Topamax might curb craving for alcohol in at least some alcoholic patients (of course, when you’re talking about the difference between 18 beers/day and 13 beers/day, you might correctly ask, “what’s the point?”), and other pharmaceuticals might do the same for such nasty things as cocaine, nicotine, gambling, or sugar & flour.

But we in medicine tend to overemphasize the pharmacological solution.  My own specialty of psychiatry is the best example of this:  we have taken extremely rich, complicated, and variable human experiences and phenotypes and distilled them into a bland, clinical lexicon replete with “symptoms” and “disorders,” and prescribe drugs that supposedly treat those disorders—on the basis of studies that rarely resemble the real world—while at the same time frequently ignoring the very real personal struggles that each patient endures.  (Okay, time to get off my soapbox.)

A medical specialty focusing on addictions is a fantastic idea and holds tremendous promise for those who suffer from these absolutely catastrophic conditions.  But ONLY if it transcends the “medical” mindset and instead sees these conditions as complex psychological, spiritual, motivational, social, (mal)adaptive, life-defining—and, yes, biochemical—phenomena that deserve comprehensive and multifaceted care.  As with much in psychiatry, there will be some patients whose symptoms or “brain lesions” are well defined and who respond well to a simple medication approach (a la the “medical model”), but the majority of patients will have vastly more complicated reasons for using, and an equally vast number of potential solutions they can pursue.

Whether this can be taught in a one-year Addiction Medicine residency remains to be seen.  Some physicians, for example, call themselves “addiction specialists” simply by completing an 8-hour-long online training course to prescribe Suboxone to heroin and Oxycontin abusers.  (By the way, Reckitt Benckiser, the manufacturer of Suboxone, is not primarily a drug company; it is better known for its other major products:  Lysol, Mop & Glo, Sani Flush, French’s mustard, and Durex condoms.)  Hopefully, an Addiction Medicine residency will be more than a year-long infomercial for the latest substitution and “anti-craving” agents from multi-national conglomerates.

Nevertheless, the idea that new generations of young doctors will be trained specifically in the diagnosis and management of addictive disorders is a very welcome one indeed.  The physicians who choose this specialty will probably do so for a very particular reason, perhaps—even though this is by no means essential—due to their own personal experience or the experience of a loved one.  I simply hope that their teachers remind them that addiction is incredibly complicated, no two patients become “addicted” for the same reasons, and successful treatment often relies upon ignoring the obvious and digging more deeply into one’s needs, worries, concerns, anxieties, and much, much more.  This has certainly been my experience in psychiatry, and I’d hate to think that TWO medical specialties might be corrupted by an aggressive focus on a medication-centric, “one-size-fits-all” approach to the complexity of human nature.


The Virtual Clinic Is Open And Ready For Business

July 9, 2011

Being an expert clinician requires mastery of an immense body of knowledge, aptitude in physical examination and differential diagnosis, and an ability to assimilate all information about a patient in order to institute the most appropriate and effective treatment.

Unfortunately, in many practice settings these days, such expertise is not highly valued.  In fact, these age-old skills are being shoved to the side in favor of more expedient, “checklist”-type medicine, often done by non-skilled providers or in a hurried fashion.  If the “ideal” doctor’s visit is a four-course meal at a highly rated restaurant, today’s medical appointments are more like dining at the Olive Garden, if not McDonald’s or Burger King.

At the rate we’re going, it’s only a matter of time before medical care becomes available for take-out or delivery.  Instead of a comprehensive evaluation, your visit may be an online questionnaire followed by the shipment of your medications directly to your door.

Well, that time is now.  Enter “Virtuwell.”

The Virtuwell web site describes itself as “the simplest and most convenient way to solve the most common medical conditions that can get in the way of your busy life.”  It is, quite simply, an online site where (for the low cost of $40) you can answer a few questions about your symptoms and get a “customized Treatment Plan” reviewed and written by a nurse practitioner.  If necessary, you’ll also get a prescription written to your pharmacy.  No appointments, no waiting, no insurance hassles.  And no embarrassing hospital gowns.

As you might expect, some doctors are upset at what they perceive as a travesty of our profession.  (For example, some comments posted on an online discussion group for MDs: “the public will have to learn the hard way that you get what you pay for”; “they have no idea what they don’t know—order a bunch of tests and antibiotics and call it ‘treated'”; and “I think this is horrible and totally undermines our profession.”)  But then again, isn’t this what we have been doing for quite a while already?  Isn’t this what a lot of medicine has become, with retail clinics, “doc-in-a-box” offices in major shopping centers, urgent-care walk-in sites, 15-minute office visits, and managed care?

When I worked in community mental health, some of my fellow MDs saw 30-40 patients per day, and their interviews might just as well have been done over the telephone or online.  It wasn’t ideal, but most patients did just fine, and few complained about it.  (Well, if they did, their complaints carried very little weight, sadly.)  Maybe it’s true that much of what we do does not require 8+ years of specialty education and the immense knowledge that most physicians possess, and many conditions are fairly easy to treat.  Virtuwell is simply capitalizing on that reality.

With the advent of social media, the internet, and services like Virtuwell, the role of the doctor will further be called into question, and new ways of delivering medical care will develop.  For example, this week also saw the introduction of the “Skin Scan,” an iPhone app which allows you to follow the growth of your moles and uses a “proprietary algorithm” to estimate whether they might be malignant.  Good idea?  If it catches a melanoma early, I think the answer is yes.

In psychiatry—a specialty in which treatment decisions are largely based on what the patient says, rather than a physical exam finding—the implications of web-based “office visits” are particularly significant.  It’s not too much of a stretch to envision an HMO providing online evaluations for patients with straightforward complaints of depression or anxiety or ADHD-like symptoms, or even a pharmaceutical company selling its drugs directly to patients based on an online “mood questionnaire.”  Sure, there might be some issues with state Medical Boards or the DEA, but nothing that a little political pressure couldn’t fix.  Would this represent a decline in patient care, or would it simply be business as usual?  Perhaps it would backfire, and prove that a face-to-face visit with a psychiatrist is a vital ingredient in the mental well-being of our patients.  Or it might demonstrate that we simply get in the way.

These are questions we must consider for the future of this field, as in all of medicine.  One might argue that psychiatry is particularly well positioned to adapt to these changes in health care delivery systems, since so many of the conditions we treat are influenced and defined (for better or for worse) by the very cultural and societal trends that lead our patients to seek help in these new ways.

The bottom line is, we can’t just stubbornly stand by outdated notions of psychiatric care (or, for that matter, by our notions of “disease” and “treatment”), because cultural influences are already changing what it means to be healthy or sick, and the ways in which our patients get better.  To stay relevant, we need to embrace sites like Virtuwell, and use these new technologies when we can.  When we cannot, we must demonstrate why, and prove how we can do better.

[Credit goes to Neuroskeptic for the computer-screen psychiatrist.  Classic!]