Maybe Stuart Smalley Was Right All Along

July 31, 2011

To many people, the self-help movement—with its positive self-talk, daily feel-good affirmations, and emphasis on vague concepts like “gratitude” and “acceptance”—seems like cheesy psychobabble.  Take, for instance, Al Franken’s fictional early-1990s SNL character Stuart Smalley: a perennially cheerful, cardigan-clad “member of several 12-step groups but not a licensed therapist,” whose annoyingly positive attitude mocked the idea that personal suffering could be overcome with absurdly simple affirmative self-talk.

Stuart Smalley was clearly a caricature of the 12-step movement (in fact, many of his “catchphrases” came directly from 12-step principles), but there’s little doubt that the strategies he espoused have worked for many patients in their efforts to overcome alcoholism, drug addiction, and other types of mental illness.

Twenty years later, we now realize Stuart may have been onto something.

A review by Kristin Layous and her colleagues, published in this month’s Journal of Alternative and Complementary Medicine, shows evidence that daily affirmations and other “positive activity interventions” (PAIs) may have a place in the treatment of depression.  They summarize recent studies examining such interventions, including two randomized controlled studies in patients with mild clinical depression, which show that PAIs do, in fact, have a significant (and rapid) effect on reducing depressive symptoms.

What exactly is a PAI?  The authors offer some examples:  “writing letters of gratitude, counting one’s blessings, practicing optimism, performing acts of kindness, meditation on positive feelings toward others, and using one’s signature strengths.”  They argue that when a depressed person engages in any of these activities, he or she not only overcomes depressed feelings (if only transiently) but can also use these activities to “move past the point of simply ‘not feeling depressed’ to the point of flourishing.”

Layous and her colleagues even summarize results of clinical trials of self-administered PAIs.  They report that PAIs had effect sizes of 0.31 for depressive symptoms in a community sample, and 0.24 and 0.23 in two studies specifically with depressed patients.  By comparison, psychotherapy has an average effect size of approximately 0.32, and psychotropic medications (although there is some controversy) have roughly the same effect.

[BTW, an “effect size” is a standardized measure of the magnitude of an observed effect.  An effect size of 0.00 means the intervention has no impact at all; an effect size of 1.00 means the intervention causes an average change (measured across the whole group) equivalent to one standard deviation of the baseline measurement in that group.  An effect size of 0.5 means the average change is half the standard deviation, and so forth.  By Cohen’s widely used benchmarks, an effect size of 0.20 is considered “small,” 0.50 is “medium,” and 0.80 is a “large” effect.  For more information, see this excellent summary.]
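To make the arithmetic concrete, here is a toy calculation (with made-up numbers, not data from the Layous review or any actual trial) of an effect size computed as the mean change divided by the baseline standard deviation:

```python
# Hypothetical depression-scale scores, before and after some intervention.
import statistics

baseline = [22, 25, 19, 28, 24, 21, 26, 23]   # made-up pre-treatment scores
followup = [18, 22, 17, 25, 21, 19, 23, 20]   # made-up post-treatment scores

# Average improvement across the group.
mean_change = statistics.mean(b - f for b, f in zip(baseline, followup))

# Spread of the baseline measurements.
baseline_sd = statistics.stdev(baseline)

# Effect size: average change expressed in baseline standard deviations.
effect_size = mean_change / baseline_sd
print(f"effect size = {effect_size:.2f}")
```

With these invented numbers the average change happens to be roughly one baseline standard deviation, i.e., an effect size near 1.0; the published PAI figures of 0.23-0.31 correspond to much smaller average changes relative to the spread of scores.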

So if PAIs work about as well as medications or psychotherapy, then why don’t we use them more often in our depressed patients?   Well, there are a number of reasons.  First of all, until recently, no one has taken such an approach very seriously.  Despite its enormous common-sense appeal, “positive psychology” has only been a field of legitimate scientific study for the last ten years or so (one of its major proponents, Sonja Lyubomirsky, is a co-author on this review) and therefore has not received the sort of scientific scrutiny demanded by “evidence-based” medicine.

A related explanation may be that people just don’t think that “positive thinking” can cure what they feel must be a disease.  As Albert Einstein once said, “You cannot solve a problem from the same consciousness that created it.”  The implication is that one must seek outside help—a drug, a therapist, some expert—to treat one’s illness.  But the reality is that for most cases of depression, “positive thinking” is outside help.  It’s something that—almost by definition—depressed people don’t do.  If they were to try it, they may reap great benefits, while simultaneously changing neural pathways responsible for the depression in the first place.

Which brings me to the final two reasons why “positive thinking” isn’t part of our treatment repertoire.  For one thing, there’s little financial incentive (to people like me) to do it.  If my patients can overcome their depression by “counting their blessings” for 30 minutes each day, or acting kindly towards strangers ten times a week, then they’ll be less likely to pay me for psychotherapy or for a refill of their antidepressant prescription.  Thus, psychiatrists and psychologists have a vested interest in patients believing that their expert skills and knowledge (of esoteric neural pathways) are vital for a full recovery, when, in fact, they may not be.

Finally, the “positive thinking” concept may itself become too “medicalized,” which may ruin an otherwise very good idea.  The Layous article, for example, tries to give a neuroanatomical explanation for why PAIs are effective.  They write that PAIs “might be linked to downregulation of the hyperactivated amygdala response” or might cause “activation in the left frontal region” and lower activity in the right frontal region.  Okay, these explanations might be true, but the real question is: does it matter?  Is it necessary to identify a mechanism for everything, even interventions that are (a) non-invasive, (b) cheap, (c) easy, (d) safe, and (e) effective?   In our great desire to identify neural mechanisms or “pathways” of PAIs, we might end up finding nothing;  it would be a shame if this result (or, more accurately, the lack thereof) leads us to the conclusion that it’s all “pseudoscience,” hocus-pocus, psychobabble stuff, and not worthy of our time or resources.

At any rate, it’s great to see that alternative methods of treating depression are receiving some attention.  I just hope that their “alternative-ness” doesn’t earn immediate rejection by the medical community.  On the contrary, we need to identify those for whom such approaches are beneficial; engaging in “positive activities” to treat depression is an obvious idea whose time has come.


Mental Illness IS Real After All… So What Was I Treating Before?

July 26, 2011

I recently started working part-time on an inpatient psychiatric unit at a large county medical center.  The last time I worked in inpatient psychiatry was six years ago, and in the meantime I’ve worked in various office settings—community mental health, private practice, residential drug/alcohol treatment, and research.  I’m glad I’m back, but it’s really making me rethink my ideas about mental illness.

An inpatient psychiatry unit is not just a locked version of an outpatient clinic.  The key difference—which would be apparent to any observer—is the intensity of patients’ suffering.  Of course, this should have been obvious to me, having treated patients like these before.  But I’ll admit, I wasn’t prepared for the abrupt transition.  Indeed, the experience has reminded me how severe mental illness can be, and has proven to be a “wake-up” call at this point in my career, before I settle into the conceited (yet naïve) belief that “I’ve seen it all.”

Patients are hospitalized when they simply cannot take care of themselves—or may be a danger to themselves or others—as a result of their psychiatric symptoms.  These individuals are in severe emotional or psychological distress, have immense difficulty grasping reality, or are at imminent risk of self-harm, or worse.  In contrast to the clinic, the illnesses I see on the inpatient unit are more incapacitating, more palpable, and—for lack of a better word—more “medical.”

Perhaps this is because they also seem to respond better to our interventions.  Medications are never 100% effective, but they can have a profound impact on quelling the most distressing and debilitating symptoms of the psychiatric inpatient.  In the outpatient setting, medications—and even psychotherapy—are confounded by so many other factors in the typical patient’s life.  When I’m seeing a patient every month, for instance—or even every week—I often wonder whether my effort is doing any good.  When a patient assures me it is, I think it’s because I try to be a nice, friendly guy.  Not because I feel like I’m practicing any medicine.  (By the way, that’s not humility; I see it as healthy skepticism.)

Does this mean that the patient who sees her psychiatrist every four weeks and who has never been hospitalized is not suffering?  Or that we should just do away with psychiatric outpatient care because these patients don’t have “diseases”?  Of course not.  Discharged patients need outpatient follow-up, and sometimes outpatient care is vital to prevent hospitalization in the first place.  Moreover, people do suffer and do benefit from coming to see doctors like me in the outpatient setting.

But I think it’s important to look at the differences between who gets hospitalized and who does not, as this may inform our thinking about the nature of mental illness and help us to deliver treatment accordingly.  At the risk of oversimplifying things (and of offending many in my profession—and maybe even some patients), perhaps the more severe cases are the true psychiatric “diseases” with clear neurochemical or anatomic foundations, and which will respond robustly to the right pharmacological or neurosurgical cure (once we find it), while the outpatient cases are not “diseases” at all, but simply maladaptive strategies to cope with what is (unfortunately) a chaotic, unfair, and challenging world.

Some will argue that these two things are one and the same.  Some will argue that one may lead to the other.  In part, the distinction hinges upon what we call a “disease.”  At any rate, it’s an interesting nosological dilemma.  But in the meantime, we should be careful not to rush to the conclusion that the conditions we see in acutely incapacitated and severely disturbed hospital patients are the same as those we see in our office practices, just “more extreme versions.”  In fact, they may be entirely different entities altogether, and may respond to entirely different interventions (i.e., not just higher doses of the same drug).

The trick is where to draw the distinction between the “true” disease and its “outpatient-only” counterpart.  Perhaps this is where biomarkers like genotypes or blood tests might prove useful.  In my opinion, this would be a fruitful area of research, as it would help us better understand the biology of disease, design more suitable treatments (pharmacological or otherwise), and dedicate treatment resources more fairly.  It would also lead us to provide more humane and thoughtful care to people on both sides of the double-locked doors—something we seem to do less and less of these days.


Psychiatry, Homeostasis, and Regression to the Mean

July 20, 2011

Are atypical antipsychotics overprescribed?  This question was raised in a recent article on the Al Jazeera English website, and has been debated back and forth for quite some time on various blogs, including this one.  Not surprisingly, the article’s conclusion was that, yes, these medications are indeed overused—and, moreover, that the pharmaceutical industry is responsible for getting patients “hooked” on these drugs via inappropriate advertising and off-label promotion of these agents.

However, I don’t know if this is an entirely fair characterization.

First of all, let’s just be up front with what should be obvious.  Pharmaceutical companies are businesses.  They’re not interested in human health or disease, except insofar as they can exploit people’s fears of disease (sometimes legitimately, sometimes not) to make money.  Anyone who believes that a publicly traded drugmaker might forego their bottom line to treat malaria in Africa “because it’s the right thing to do” is sorely mistaken.  The mission of companies like AstraZeneca, Pfizer, and BMS is to get doctors to prescribe as much Seroquel, Geodon, and Abilify (respectively) as possible.  Period.

In reality, pharmaceutical company revenues would be zero if doctors (OK, and nurse practitioners and—at least in some states—psychologists) didn’t prescribe their drugs.  So it’s doctors who have made antipsychotics one of the most prescribed classes of drugs in America, not the drug companies.  Why is this?  Has there been an epidemic of schizophrenia?  (NB:  most cases of schizophrenia do not fully respond to these drugs.)  Are we particularly susceptible to drug marketing?  Do we believe in the clear and indisputable efficacy of these drugs in the many psychiatric conditions for which they’ve been approved (and those for which they haven’t)?

No, I like to think of it instead as our collective failure to appreciate that patients are more resilient and adaptive than we give them credit for, not to mention our infatuation with the concept of biological psychiatry.  Much of what we attribute to our drugs may be the result of something else entirely.

For an example of what I mean, consider the following figure:

The figure has nothing to do with psychiatry.  It plots the average body temperature of two groups of patients with fever—one who received intravenous Tylenol, and the other who received an intravenous placebo.  Tylenol cut the fever short by a good 30-60 minutes, but both groups of patients eventually reestablished a normal body temperature.

This is a concept called homeostasis.  It’s the innate ability of a living creature to keep things constant.  When you have a fever, you naturally perspire to give off heat.  When you have an infection, you naturally mobilize your immune system to fight it.  (BTW, prescribing antibiotics for viral respiratory infections is wasteful:  the illness resolves itself “naturally” but the use of a drug leads us to believe that the drug is responsible.)  When you’re sad and hopeless, lethargic and fatigued, you naturally engage in activities to pull yourself out of this “rut.”  All too often, when we doctors see these symptoms, we jump at a diagnosis and a treatment, neglecting the very real human capacity—evolutionarily programmed!!—to naturally overcome these transient blows to our psychological stability and well-being.
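The homeostasis idea can be sketched as a toy negative-feedback loop (the numbers are purely illustrative, not a physiological model): a value perturbed away from its set point drifts back on its own, with no outside intervention.

```python
# Toy negative-feedback loop: a "body temperature" perturbed by fever
# returns to its set point without any drug.
set_point = 37.0     # normal temperature, degrees C
temp = 39.5          # feverish starting point
gain = 0.3           # strength of the corrective response per time step

history = [temp]
for _ in range(20):
    # Each step, the system corrects by a fraction of the current error.
    temp += gain * (set_point - temp)
    history.append(temp)

print(f"start: {history[0]:.1f}  ->  end: {history[-1]:.1f}")
```

The error shrinks geometrically each step, so the curve looks just like the placebo arm of the fever figure: the system rights itself, and any pill given along the way would be tempting to credit for the recovery.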

There’s another concept—this one from statistics—that we often fail to recognize.  It’s called “regression to the mean.”  If I survey a large number of people on some state of their psychological function (such as mood, or irritability, or distractibility, or anxiety, etc), those with an extreme score on their first evaluation will most likely have a more “normal” score on their next evaluation, and vice versa, even in the absence of any intervention.  In other words, if you’re having a particularly bad day today, you’re more likely to be having a better day the next time I see you.
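A minimal simulation makes regression to the mean visible (all numbers hypothetical, not from any study): give each person a stable "trait" mood plus day-to-day noise, select the worst scorers on day one, and re-measure them on day two with no intervention at all.

```python
# Regression to the mean: the worst scorers on day 1 look better on
# day 2 with NO intervention, simply because extreme scores are partly noise.
import random

random.seed(42)
n = 10_000
trait = [random.gauss(50, 5) for _ in range(n)]    # stable component per person
day1 = [t + random.gauss(0, 10) for t in trait]    # trait + day-1 noise
day2 = [t + random.gauss(0, 10) for t in trait]    # same trait, fresh noise

# Select the 5% of people with the lowest (worst) day-1 scores.
worst = sorted(range(n), key=lambda i: day1[i])[: n // 20]

mean_day1 = sum(day1[i] for i in worst) / len(worst)
mean_day2 = sum(day2[i] for i in worst) / len(worst)
print(f"day 1 (selected group): {mean_day1:.1f}")   # far below the mean of 50
print(f"day 2 (same people):    {mean_day2:.1f}")   # drifts back toward 50
```

The selected group improves substantially between measurements even though nothing was done to them; had they been started on a drug after day one, the drug would get the credit.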

This is perhaps the best argument for why it takes multiple sessions with a patient—or, at the very least, a very thorough psychiatric history—to make a confident psychiatric diagnosis and to follow response to treatment.  Symptoms—especially mild ones—come and go.  But in our rush to judgment (not to mention the pressures of modern medicine to determine a diagnosis ASAP for billing purposes), endorsement of a few symptoms is often sufficient to justify the prescription of a drug.

Homeostasis and regression to the mean are not the same: one is a biological process, the other a consequence of natural, semi-random variation.  But both of these concepts should be considered as explanations for our patients “getting better.”  When these changes occur in the context of taking a medication (particularly one like an atypical antipsychotic, with so many uses for multiple nonspecific diagnoses), we like to think the medication is doing the trick, when the clinical response may be due to something else altogether.

Al Jazeera was right: the pharmaceutical companies have done a fantastic job in placing atypical antipsychotics into every psychiatrist’s armamentarium.  And yes, we use them, and people improve.  The point, though, is that the two are sometimes not connected.  Until and unless we find some way to recognize this—and figure out what really works—Big Pharma will continue smiling all the way to the bank.


Addiction Medicine: A New Specialty Or More Of The Same?

July 14, 2011

In an attempt to address a significant—and unmet—need in contemporary health care, the American Board of Addiction Medicine (ABAM) has accredited ten new residency programs in “addiction medicine.”  Details can be found in this article in the July 10 New York Times.  This new initiative will permit young doctors who have completed medical school and an initial internship year to spend an additional year learning about the management of addictive disease.

To be sure, there’s a definite need for trained addiction specialists.  Nora Volkow, director of the National Institute on Drug Abuse (NIDA), says that the lack of knowledge about substance abuse among physicians is “a very serious problem,” and I have certainly found this to be true.  Addictions to drugs and alcohol are devastating (and often life-threatening) conditions that many doctors are ill-prepared to understand—much less treat—and such disorders frequently complicate the management of many medical and psychiatric conditions.

Having worked in the addiction field, however (and having had my own personal experiences in the recovery process), I’m concerned about the precedent that these programs might set for future generations of physicians treating addictive illness.

As much as I respect addiction scientists and agree that the neurochemical basis of addiction deserves greater study, I disagree (in part) with the countless experts who have pronounced for the last 10-20 years that addiction is “a brain disease.”  In my opinion, addiction is a brain disease in the same way that “love” is a rush of dopamine or “anxiety” is a limbic system abnormality.  In other words: yes, addiction clearly does involve the brain, but overcoming one’s addiction (which means different things to different people) is a process that goes beyond simply taking a pill, correcting one’s biochemistry, or fixing a mutant gene.  In some cases it requires hard work and immense willpower; in other cases, a grim recognition of one’s circumstances (“hitting bottom”) and a desire to change; and in still other cases, a “spiritual awakening.”  None of these can be prescribed by a doctor.

In fact, the best argument against the idea of addiction as a biological illness is simple experience.  Each of us has heard of the alcoholic who got sober by going to meetings; or the heroin addict who successfully quit “cold turkey”; or the hard-core cocaine user who stopped after a serious financial setback or the threat of losing his job, marriage, or both.  In fact, these stories are actually quite common.  By comparison, no one overcomes diabetes after experiencing “one too many episodes of ketoacidosis,” and no one resolves their hypertension by establishing a relationship with a Higher Power.

That’s not to say that pharmacological remedies have no place in the treatment of addiction.  Methadone and buprenorphine (Suboxone) are legal, prescription substitutes for heroin and other opioids, and they have allowed addicts to live respectable, “functional” lives.  Drugs like naltrexone or Topamax might curb craving for alcohol in at least some alcoholic patients (of course, when you’re talking about the difference between 18 beers/day and 13 beers/day, you might correctly ask, “what’s the point?”), and other pharmaceuticals might do the same for such nasty things as cocaine, nicotine, gambling, or sugar & flour.

But we in medicine tend to overemphasize the pharmacological solution.  My own specialty of psychiatry is the best example of this:  we have taken extremely rich, complicated, and variable human experiences and phenotypes and distilled them into a bland, clinical lexicon replete with “symptoms” and “disorders,” and prescribe drugs that supposedly treat those disorders—on the basis of studies that rarely resemble the real world—while at the same time frequently ignoring the very real personal struggles that each patient endures.  (Okay, time to get off my soapbox.)

A medical specialty focusing on addictions is a fantastic idea and holds tremendous promise for those who suffer from these absolutely catastrophic conditions.  But ONLY if it transcends the “medical” mindset and instead sees these conditions as complex psychological, spiritual, motivational, social, (mal)adaptive, life-defining—and, yes, biochemical—phenomena that deserve comprehensive and multifaceted care.  As with much in psychiatry, there will be some patients whose symptoms or “brain lesions” are well defined and who respond well to a simple medication approach (a la the “medical model”), but the majority of patients will have vastly more complicated reasons for using, and an equally vast number of potential solutions they can pursue.

Whether this can be taught in a one-year Addiction Medicine residency remains to be seen.  Some physicians, for example, call themselves “addiction specialists” simply by completing an 8-hour-long online training course to prescribe Suboxone to heroin and Oxycontin abusers.  (By the way, Reckitt Benckiser, the manufacturer of Suboxone, is not a drug company, but is better known by its other major products:  Lysol, Mop & Glo, Sani Flush, French’s mustard, and Durex condoms.)  Hopefully, an Addiction Medicine residency will be more than a year-long infomercial for the latest substitution and “anti-craving” agents from multi-national conglomerates.

Nevertheless, the idea that new generations of young doctors will be trained specifically in the diagnosis and management of addictive disorders is a very welcome one indeed.  The physicians who choose this specialty will probably do so for a very particular reason, perhaps—even though this is by no means essential—due to their own personal experience or the experience of a loved one.  I simply hope that their teachers remind them that addiction is incredibly complicated, no two patients become “addicted” for the same reasons, and successful treatment often relies upon ignoring the obvious and digging more deeply into one’s needs, worries, concerns, anxieties, and much, much more.  This has certainly been my experience in psychiatry, and I’d hate to think that TWO medical specialties might be corrupted by an aggressive focus on a medication-centric, “one-size-fits-all” approach to the complexity of human nature.


The Virtual Clinic Is Open And Ready For Business

July 9, 2011

Being an expert clinician requires mastery of an immense body of knowledge, aptitude in physical examination and differential diagnosis, and an ability to assimilate all information about a patient in order to institute the most appropriate and effective treatment.

Unfortunately, in many practice settings these days, such expertise is not highly valued.  In fact, these age-old skills are being shoved to the side in favor of more expedient, “checklist”-type medicine, often done by non-skilled providers or in a hurried fashion.  If the “ideal” doctor’s visit is a four-course meal at a highly rated restaurant, today’s medical appointments are more like dining at the Olive Garden, if not McDonald’s or Burger King.

At the rate we’re going, it’s only a matter of time before medical care becomes available for take-out or delivery.  Instead of a comprehensive evaluation, your visit may be an online questionnaire followed by the shipment of your medications directly to your door.

Well, that time is now.  Enter “Virtuwell.”

The Virtuwell web site describes itself as “the simplest and most convenient way to solve the most common medical conditions that can get in the way of your busy life.”  It is, quite simply, an online site where (for the low cost of $40) you can answer a few questions about your symptoms and get a “customized Treatment Plan” reviewed and written by a nurse practitioner.  If necessary, you’ll also get a prescription written to your pharmacy.  No appointments, no waiting, no insurance hassles.  And no embarrassing hospital gowns.

As you might expect, some doctors are upset at what they perceive as a travesty of our profession.  (For example, some comments posted on an online discussion group for MDs: “the public will have to learn the hard way that you get what you pay for”; “they have no idea what they don’t know—order a bunch of tests and antibiotics and call it ‘treated’”; and “I think this is horrible and totally undermines our profession.”)  But then again, isn’t this what we have been doing for quite a while already?  Isn’t this what a lot of medicine has become, with retail clinics, “doc-in-a-box” offices in major shopping centers, urgent-care walk-in sites, 15-minute office visits, and managed care?

When I worked in community mental health, some of my fellow MDs saw 30-40 patients per day, and their interviews might just as well have been done over the telephone or online.  It wasn’t ideal, but most patients did just fine, and few complained about it.  (Well, if they did, their complaints carried very little weight, sadly.)  Maybe it’s true that much of what we do does not require 8+ years of specialty education and the immense knowledge that most physicians possess, and many conditions are fairly easy to treat.  Virtuwell is simply capitalizing on that reality.

With the advent of social media, the internet, and services like Virtuwell, the role of the doctor will further be called into question, and new ways of delivering medical care will develop.  For example, this week also saw the introduction of the “Skin Scan,” an iPhone app which allows you to follow the growth of your moles and uses a “proprietary algorithm” to determine whether they’re malignant.  Good idea?  If it saves you from a diagnosis of melanoma, I think the answer is yes.

In psychiatry—a specialty in which treatment decisions are largely based on what the patient says, rather than a physical exam finding—the implications of web-based “office visits” are particularly significant.  It’s not too much of a stretch to envision an HMO providing online evaluations for patients with straightforward complaints of depression or anxiety or ADHD-like symptoms, or even a pharmaceutical company selling its drugs directly to patients based on an online “mood questionnaire.”  Sure, there might be some issues with state Medical Boards or the DEA, but nothing that a little political pressure couldn’t fix.  Would this represent a decline in patient care, or would it simply be business as usual?  Perhaps it would backfire, and prove that a face-to-face visit with a psychiatrist is a vital ingredient in the mental well-being of our patients.  Or it might demonstrate that we simply get in the way.

These are questions we must consider for the future of this field, as in all of medicine.  One might argue that psychiatry is particularly well positioned to adapt to these changes in health care delivery systems, since so many of the conditions we treat are influenced and defined (for better or for worse) by the very cultural and societal trends that lead our patients to seek help in these new ways.

The bottom line is, we can’t just stubbornly stand by outdated notions of psychiatric care (or, for that matter, by our notions of “disease” and “treatment”), because cultural influences are already changing what it means to be healthy or sick, and the ways in which our patients get better.  To stay relevant, we need to embrace sites like Virtuwell, and use these new technologies when we can.  When we cannot, we must demonstrate why, and prove how we can do better.

[Credit goes to Neuroskeptic for the computer-screen psychiatrist.  Classic!]


When A Comorbidity Isn’t “Comorbid” At All

July 7, 2011

When medical professionals speak of the burden of illness, we use the term “morbidity.”  This can refer either to the impact of an illness on a patient’s quality of life, or to the overall impact of a disease on a defined community.  We also speak of “co-morbidities,” which, as you might expect, are two concurrent conditions, both of which must be treated in order for a patient to experience optimal health.

Comorbidities can be entirely unrelated, as in the case of a tooth abscess and fecal incontinence (at least I hope those are unrelated!).  Alternatively, they can be intimately connected, like CHF and coronary artery disease.  They may also represent seemingly discrete phenomena which, upon closer inspection, might be related after all—at least in some patients—like schizophrenia and obesity, depression and HIV, or chronic fatigue syndrome and XMRV (oops, scratch that last one!).  The idea is that it’s most parsimonious to find the connections between and among these comorbidities (when they exist) and treat both disorders simultaneously in order to achieve the best outcomes for patients.

I was recently asked to write an article on the comorbidity of alcoholism and anxiety disorders, and how best to manage these conditions when they co-occur.  Being the good (and modest—ha!) researcher that I am, I scoured the literature and textbooks for clinical trials, and found several studies of treatment interventions for combined anxiety and alcoholism.  Some addressed the disorders sequentially, some in parallel, some in an integrated fashion.  I looked at drug trials and therapy trials, in a variety of settings and for various lengths of time.

I quickly found that there’s no “magic bullet” to treat anxiety and alcoholism.  No big surprise.  But when I started to think about how these conditions appear in the real world (in other words, not in a clinical trial), I began to understand why.

You see, there’s great overlap among most psychiatric diagnoses—think of “anxious depression” or “bipolar with psychotic features.”  As a result, psychiatrists in practice more often treat symptoms than diseases.  And nowhere is this more the case than in the diagnosis and treatment of addictions.

Addictions are incredibly complex phenomena.  While we like to think of addictions like alcoholism as “diseases,” I’m starting to think they really are not.  Instead, an addiction like alcoholism is a manifestation or an epiphenomenon of some underlying disorder, some underlying pain or deficiency, or some sense of helplessness or powerlessness (for a more elaborate description, see Lance Dodes’ book The Heart of Addiction).  In other words, people drink not because of a dopamine receptor mutation, or a deficiency in some “reward chemical,” or some “sensation-seeking” genotype, but because of anxiety, depression, or other painful emotional states.  They could just as easily be “addicted” to gambling, running, bike riding, cooking (and yes, sex) as ways of coping with these emotions.  Incidentally, what’s “problematic” differs from person to person and from substance to substance.  (And it is notable, for instance, that mainlining heroin = “bad” and running marathons = “good.”  Who made that rule?)

“But wait,” you might say, “there’s your comorbidity right there… you said that people drink because they’re anxious.”  Okay, so what is that “anxiety”?  Panic disorder?  Post-traumatic stress disorder?  Social phobia?  Yes, there are certainly some alcoholics with those “pre-existing conditions” who use alcohol as a way of coping with them, but they are a small minority.  (And even within that minority, I’m sure there are those whose drinking has been a remarkably helpful coping mechanism, despite the fact that it would be far more supportive of our treatment paradigm if they just took a pill that we prescribed to them.)

For the great majority of people, however, the use of alcohol (or another addictive behavior) is a way to deal with a vastly more complicated set of anxieties and deficiencies, and with an inability to confront the here and now more directly.  And that’s not necessarily a bad thing.  In fact, it can be quite adaptive.

Unfortunately, when we psychiatrists hear that word “anxiety,” we immediately think of the anxiety disorders as defined in the DSM-IV and assume that all anxious alcoholics have a clear “dual diagnosis” which—if we diagnose correctly—can be treated according to some formula.  Instead, we ought to think about anxiety in a more diffuse and idiosyncratic way:  i.e., the cognitive, emotional, behavioral, and existential phenomena that uniquely affect each of our patients.  (I’m tempted to venture into psychodynamic territory and describe the tensions between unconscious drives and the patient’s ego, but I’m afraid that might be too quaint for the sensibilities of the 21st-century mind.)

Thus, I predict that the rigorous, controlled (and expensive, and time-consuming) studies of medications and other interventions for “comorbid” anxiety disorders and alcoholism are doomed to fail.  This is because alcoholism and anxiety are not comorbid in the sense that black and white combine to form the stripes of a zebra.  Rather, they blend into various shades of grey.  Some greys are painful and everlasting, while others are easier to erase.  By simplifying them into black and white and treating them accordingly, we miss the point that people are what matter, and that the “grey areas” are key to understanding each patient’s anxieties, insecurities, and motivations—in other words, to figuring out how each patient is unique.


I Just Don’t Know What (Or Whom) To Believe Anymore

July 2, 2011

de-lu-sion [dih-loo-zhuhn] Noun.  1. An idiosyncratic belief or impression that is firmly maintained despite being contradicted by what is generally accepted as reality, typically a symptom of mental disorder.

The announcement this week of disciplinary action against three Harvard Medical School psychiatrists (which you can read about here and here and here and here) for violating that institution’s conflict-of-interest policy comes at a pivotal time for psychiatry.  Or at least for my own perceptions of it.

As readers of this blog know, I can be cynical, critical, and skeptical about the medicine I practice on a daily basis.  This arises from two biases that have defined my approach to medicine from Day One:  (1) a respect for the patient’s point of view (which, in many ways, arose out of my own personal experiences), and (2) a need to see and understand the evidence (probably a consequence of my years of graduate work in basic molecular neuroscience before becoming a psychiatrist).

Surprisingly, I have found these attributes to be in short supply among many psychiatrists—even among the people we consider to be our leaders in the field.  And Harvard’s action against Biederman, Spencer, and Wilens might unfortunately just be the tip of the iceberg.

I entered medical school in the late 1990s.  I recall one of my preclinical lectures at Cornell, in which the chairman of our psychiatry department, Jack Barchas, spoke with breathless enthusiasm about the future of psychiatry.  He expounded passionately about how the coming era would bring deeper knowledge of the biological mechanisms of mental illness and new, safer, more effective medications that would vastly improve our patients’ lives.

My other teachers and mentors were just as optimistic.  The literature at the time was filled with studies of new pharmaceuticals (the atypical antipsychotics, primarily), molecular and neuroimaging discoveries, and novel research into genetic markers of illness.  As a student, it was hard not to be caught up in the excitement of the coming revolution in biological psychiatry.

But I now wonder whether we may have been deluding ourselves.  I have no reason to think that Dr Barchas was lying to us in that lecture at Cornell, but those who did the research about which he pontificated may not have been giving us the whole story.  In fact, we’re now learning that those “revolutionary” new drugs were not quite as revolutionary as they appeared.  Drug companies routinely hid negative results and designed their studies to make the new drugs appear more effective.  They glossed over data about side effects, and frequently drug companies would ghostwrite books and articles that appeared to come from their (supposedly unbiased) academic colleagues.

This went on for a long time.  And for all those years, these same academics taught the current generation of psychiatrists like me, and lectured widely (for pay, of course) to psychiatrists in the community.

In my residency years in the mid-2000s, for instance, each one of my faculty members (with only one exception that I’m aware of) spoke for drug companies or was being paid to do research on drugs that we were actively prescribing in the clinic and on the wards.  (I didn’t know this at the time, of course; I learned this afterward.)  And this was undoubtedly the case in other top-tier academic centers throughout the country, having a trickle-down effect on the practice of psychiatry worldwide.

Now, there’s nothing wrong with academics doing research or being paid to do it.  For me, the problem is that those two “pillars” by which I practice medicine (i.e., respect for the patient’s well-being, and a desire for hard evidence) were not the priorities of much of this clinical research.  Patients weren’t always getting better with these new drugs (certainly not in the long run), and the data were finessed and refined in ways that embellished the main message.  This was, of course, exacerbated by the big paychecks many of my academic mentors received.  Money has a remarkable way of influencing what people say and how (and how often) they say it.

But how is a student—or a practicing doc in the community who is several decades out of medical school—supposed to know this?  In my opinion, those who teach medical students and psychiatry residents probably should not be on a pharma payroll or give promotional talks for drugs.  These “academic leaders” are supposed to be fair, neutral, thoughtful authorities who make recommendations based on patient outcome data and nothing else.  Isn’t that why we have academic medical centers in the first place?  (Hey, at least we know that drug reps are paid handsome salaries & bonuses by drug companies… But don’t we expect university professors to be different?)

Just as a series of little white lies can snowball into an enormous unintended deception, I’m afraid that the last 10-20 years of cumulative tainted messages (sometimes deliberate, sometimes not) about the “promises” of psychiatry have created a widespread shared delusion about what we can offer our patients.  And if that’s too much of an exaggeration, then we might at least agree that our field now suffers a crisis of confidence in our leaders.  As Daniel Carlat commented in a story about the Harvard action: “When I get on the phone now and talk to a colleague about a study… [I ask] ‘was this industry funded, and can we trust the study?’”

It may be too late to avoid irreparable damage to this field or our confidence in it.  But at least some of this is coming to light.  If nothing else, we’re taking a cue from our area of clinical expertise, and challenging the delusional thought processes that have driven our actions for many, many years.