“Trainwrecks”

May 15, 2012

One of the highlights of the American Psychiatric Association (APA) Annual Meeting is the Exhibit Hall.  Here, under bright lights and fancy multimedia displays, sponsors get to show off their new wares.  If anyone doubts that modern psychiatry is all about psychopharmacology, one visit to the APA Exhibit Hall would set them straight.  Far and away, the biggest and glitziest displays are those of Big Pharma, promising satisfaction and success—and legions of grateful patients—for prescribing their products.

At the 2012 Annual Meeting last week, I checked out most of the Pharma exhibits, mainly just to see what was in the pipeline.  (Not much, it turns out.)  I didn’t partake of any of the refreshments—lest I be reported to the Feds as the recipient of a $2 cappuccino or a $4 smoothie—but still felt somewhat like an awestruck Charlie Bucket in Willy Wonka’s miraculous Chocolate Factory.

One memorable exchange was at the Nuedexta booth.  Nuedexta, as readers of this blog may recall from a 2011 post, is a combination of dextromethorphan and quinidine, sold by Avanir Pharmaceuticals and approved for the treatment of “pseudobulbar affect,” or PBA.  PBA is a neurological condition, found in patients with multiple sclerosis or stroke, and characterized by uncontrollable laughing and crying.  While PBA can be a devastating condition, treatment options do exist.  In my blog post I wrote that “a number of medications, including SSRIs like citalopram, and tricyclic antidepressants (TCAs), are effective in managing the symptoms of PBA.”  One year later, Nuedexta still has not been approved by the FDA for any indication other than PBA.

In my discussion with the Avanir salesman, I asked the same question I posed to the Avanir rep one year ago:  “If I had a patient in whom I suspected PBA, I’d probably refer him to his neurologist for management of that condition—so why, as a psychiatrist, would I use this medication?”  The rep’s answer, delivered in that cool, convincing way that can only emerge from the salesman’s anima, was a disturbing insight into the practice of psychiatry in the 21st century:

“Well, you probably have some patients who are real trainwrecks, with ten things going on.  Chances are, there might be some PBA in there, so why not try some Nuedexta and see if it makes a difference?”

I nodded, thanked him, and politely excused myself.  (I also promptly tweeted about the exchange.)  I don’t know whether his words constituted an official Nuedexta sales pitch, but the ease with which he shared it (no wink-wink, nudge-nudge here) suggested that it had proven successful in the past.  Quite frankly, it’s also somewhat ugly.

First of all, I refuse to refer to any of my patients as “trainwrecks.”  Doctors and medical students sometimes use this term for patients who have multiple problems and who are, as a result, difficult to care for.  We’ve all used it, myself included.  But the more I empathize with my patients and try to understand their unique needs and wishes, the more I realize how condescending it is.  (Some might refer to me as a “trainwreck,” too, given certain aspects of my past.)  Furthermore, many of the patients with this label have probably—and unfortunately—earned it as a direct result of psychiatric “treatment.”

Secondly, as any good scientist will tell you, the way to figure out the inner workings of a complicated system is to take it apart and analyze its core features.  If a person presents an unclear diagnostic picture, clouded by a half-dozen medications and no clear treatment goals, the best approach is to take things away and see what remains, not to add something else to the mix and “see if it makes a difference.”

Third, the words of the Avanir rep demonstrate precisely what is wrong with our modern era of biological psychopharmacology.  Because the syndromes and “disorders” we treat are so vague, and because many symptoms can be found in multiple conditions—not to mention everyday life—virtually anything a patient reports could be construed as an indication for a drug, with a neurobiological mechanism to “explain” it.  This is, of course, exactly what I predicted for Nuedexta when I referred to it as a “pipeline in a pill” (a phrase that originally came from Avanir’s CEO).  But the same could be said for just about any drug a psychiatrist prescribes for an “emotional” or “behavioral” problem.  When ordinary complaints can be explained by tenuous biological pathways, it becomes far easier to rationalize the use of a drug, regardless of whether data exist to support it.

Finally, the strategy of “throw a medication into the mix and see if it works” is far too commonplace in psychiatry.  It is completely mindless and ignores any understanding of the underlying biology (if there is such a thing) of the illnesses we treat.  And yet it has become an accepted treatment paradigm.  Consider, for instance, the use of atypical antipsychotics in the treatment of depression.  The manufacturers of Abilify and Seroquel XR have never explained how a dopamine partial agonist or antagonist (respectively) might help treat depression; worse, consider the way they use the results of STAR*D to promote their products.  STAR*D, as you might recall, was a large-scale, multi-step study comparing multiple antidepressants, which found that no single antidepressant was any better than any other.  (All were pretty poor, actually.)  The antipsychotic manufacturers want us to use their products not because they performed well in STAR*D (they weren’t even in STAR*D!) but because nothing else seemed to work very well.

If the most convincing argument we can make for a drug therapy is “well, nothing else has worked, so let’s try it,” this doesn’t bode well for the future of our field.  This strategy is mindless and sloppy, not to mention potentially dangerous.  It opens the floodgates for expensive and relatively unproven treatments which, in all fairness, may work in some patients, but add to the iatrogenic burden—and diagnostic confusion—of others.  It also permits Pharma (and the APA’s key opinion leaders) to maintain the false promise of a neurochemical solution for the human, personal suffering of those who seek our help.

This, in my opinion, is the real “trainwreck” that awaits modern psychiatry.  And only psychiatrists can keep us on the tracks.


Is The Joke On Me?

May 12, 2012

I recently returned from the American Psychiatric Association (APA) Annual Meeting in Philadelphia.  I had the pleasure of participating on a panel discussing “psychiatrists and the new media” with the bloggers/authors from Shrink Rap, and Bob Hsiung of dr-bob.org.  The panel discussion was a success.  Some other parts of the conference, however, left me with a sense of doubt and unease.  I enjoy being a psychiatrist, but whenever I attend these psychiatric meetings, I sometimes find myself questioning the nature of what I do.  At times I wonder whether everyone else knows something I don’t.  Sometimes I even ask myself:  is the joke on me?

Here’s an example of what I mean.  On Sunday, David Kupfer of the University of Pittsburgh (and task force chair of the forthcoming DSM-5) gave a talk on “Rethinking Bipolar Disorder.”  The room—a cavernous hall at the Pennsylvania Convention Center—was packed.  Every chair was filled, while scores of attendees stood in the back or sat on the floor, listening with rapt attention.  The talk itself was a discussion of “where we need to go” in the management of bipolar disorder in the future.  Dr Kupfer described a new view of bipolar disorder as a chronic, multifactorial condition involving not just mood lability and extremes of behavior, but also endocrine, inflammatory, neurophysiologic, and metabolic processes that deserve our attention.  He emphasized that in between mood episodes, and even before they develop, there is a range of “dysfunctional symptom domains”—involving emotions, cognition, sleep, and physical symptoms, among others—that we psychiatrists should be aware of.  He also introduced a potential way to “stage” the development of bipolar disorder (similar to the way doctors stage tumors), suggesting that people at early stages might benefit from prophylactic psychiatric intervention.

Basically, the take-home message (for me, at least) was that in the future, psychiatrists will be responsible for treating manifestations of bipolar disorder other than those we currently attend to.  We will also need to look for subthreshold symptoms in people who might have a “prodrome” of bipolar disorder.

A sympathetic observer might say that Kupfer is simply asking us to practice good medicine: to care for the entire person rather than just his or her symptoms, and to prevent the development or recurrence of bipolar illness.  On the other hand, a cynic might look at these pronouncements as a sort of disease-mongering, encouraging us to uncover signs of “disease” where they might not exist.  But both of these conclusions overlook a much more fundamental question that, to me, remains unanswered: what exactly is bipolar disorder, anyway?

I realize that’s an extraordinarily embarrassing question for a psychiatrist to ask.  And in all fairness, I do know what bipolar disorder is (or, at least, what the textbooks and the DSM-IV say it is).  I have seen examples of manic episodes in my own practice, and in my personal life, and have seen how they respond to medications, psychotherapy, or the passage of time.  But such cases are the minority.  Over the years (although my career is still relatively young), I have also seen dozens, if not hundreds, of people given the diagnosis of “bipolar disorder” without a clear history of a manic episode—the defining feature of bipolar disorder, according to the DSM.

As I looked around the room at everyone concentrating on Dr Kupfer’s every word, I wondered to myself, am I the only one with this dilemma?  Are my patients “special” or “unique”?  Maybe I’m a bad psychiatrist; maybe I don’t ask the right questions.  Or maybe everyone else is playing a joke on me.  That’s unlikely; others do see the same sorts of patients I do (I know this for a fact, from my own discussions with other psychiatrists).  But nobody seems to have the same crisis of confidence that I do.  It makes me wonder whether we have reached a point in psychiatry when psychiatrists can listen to a talk like this one (or see patients each day) and accept diagnostic categories without paying any attention to the fact that our nosology says virtually nothing at all about the unique nature of each person’s suffering.  It seems that we accept the words of our authority figures without asking the fundamental question of whether they have any basis in reality.  Or maybe I’m just missing out on the joke.

As far as I’m concerned, no two “bipolar” patients are alike, and no two “bipolar” patients have the same treatment goals.  The same can be said for almost everything else we treat, from “depression” to “borderline personality disorder” to addiction.  In my opinion, lumping all those people together and assuming they’re all alike for the purposes of a talk (or, even worse, for a clinical trial) makes it difficult—and quite foolish—to draw any conclusions about that group of individuals.

What we need to do is to figure out whether what we call “bipolar disorder” is a true disorder in the first place, rather than accept it uncritically and start looking for yet more symptom domains or biomarkers as new targets of treatment.  To accept the assumption that everyone who currently carries the “bipolar” label indeed has the same disorder (or any disorder at all) makes a mockery of the diagnostic process and destroys the meaning of the word.  Some would argue this has already happened.

But then again, maybe I’m the only one who sees it this way.  No one at Kupfer’s talk seemed to demonstrate any bewilderment or concern that we might be heading towards a new era of disease management without really knowing what “disease” we’re treating in the first place.  If this is the case, I sure would appreciate it if someone would let me in on the joke.


What’s the Proper Place of Science in Psychiatry and Medicine?

April 29, 2012

On the pages of this blog I have frequently written about the “scientific” aspects of psychiatry and questioned how truly scientific they are.   And I’m certainly not alone.  With the growing outcry against psychiatry for its medicalization of human behavior and the use of powerful drugs to treat what’s essentially normal variability in our everyday existence, it seems as if everyone is challenging the evidence base behind what we do—except most of us who do it on a daily basis.

Psychiatrists are unique among medical professionals, because we need to play two roles at once.  On the one hand, we must be scientists—determining whether there’s a biological basis for a patient’s symptoms.  On the other hand, we must identify environmental or psychological precursors to a patient’s complaints and help to “fix” those, too.  However, today’s psychiatrists often eschew the latter approach, brushing off their patients’ internal or interpersonal dynamics and ignoring environmental and social influences, rushing instead to play the “doctor” card:  labeling, diagnosing, and prescribing.

Why do we do this?  We all know the obvious reasons:  shrinking appointment lengths, the influence of drug companies, psychiatrists’ increasing desire to see themselves as “clinical neuroscientists,” and so on.

But there’s another, less obvious reason, one which affects all doctors.  Medical training is all about science.  There’s a reason why pre-meds have to take a year of calculus, organic chemistry, and physics to get into medical school.  It’s not because doctors solve differential equations and perform redox reactions all day.  It’s because medicine is a science (or so we tell ourselves), and, as such, we demand a scientific, mechanistic explanation for everything from a broken toe to a myocardial infarction to a manic episode.  We do “med checks,” as much as we might not want to, because that’s what we’ve been trained to do.  And the same holds true for other medical specialties, too.  Little emphasis is placed on talking and listening.  Instead, it’s all about data, numbers, mechanisms, outcomes, and the right drugs for the job.

Perhaps it’s time to rethink the whole “medical science” enterprise.  In much of medicine, paying more and more attention to biological measures—and the scientific evidence—hasn’t really improved outcomes.  “Evidence-based medicine,” in fact, is really just a way for payers and the government to create guidelines to reduce costs, not a way to improve individual patients’ care. Moreover, we see examples all the time—in all medical disciplines—of the corruption of scientific data (often fueled by drug company greed) and very little improvement in patient outcomes.  Statins, for instance, are effective drugs for high cholesterol, but their widespread use in people with no other risk factors seems to confer no additional benefit.  Decades of research into understanding appetite and metabolism hasn’t eradicated obesity in our society.  A full-scale effort to elucidate the brain’s “reward pathways” hasn’t made a dent in the prevalence of drug and alcohol addiction.

Psychiatry suffers under the same scientific determinism.  Everything we call a “disease” in psychiatry could just as easily be called something else.  I’ve seen lots of depressed people in my office, but I can’t say for sure whether I’ve ever seen one with a biological illness called “Major Depressive Disorder.”  But that’s what I write in the chart.  If a patient in my med-management clinic tells me he feels better after six weeks on an antidepressant, I have no way of knowing whether it was due to the drug.  But that’s what I tell myself—and that’s usually what he believes, too.  My training encourages me to see my patients as objects, as collections of symptoms, and to interpret my “biological” interventions as having a far greater impact on my patients’ health than the hundreds or thousands of other phenomena my patients experience in between appointments with me.  Is this fair?

(This may explain some of the extreme animosity from the anti-psychiatry crowd—and others—against some very well-meaning psychiatrists.  With few exceptions, the psychiatrists I know are thoughtful, compassionate people who entered this field with a true desire to alleviate suffering.  Unfortunately, by virtue of their training, many have become uncritical supporters of the scientific model, making them easy targets for those who have been hurt by that very same model.)

My colleague Daniel Carlat, in his book Unhinged, asks the question: “Why do [psychiatrists] go to medical school? How do months of intensive training in surgery, internal medicine, radiology, etc., help psychiatrists treat mental illness?”  He lays out several alternatives for the future of psychiatric training.  One option is a hybrid approach that combines a few years of biomedical training with a few years of rigorous exposure to psychological techniques and theories.  Whether this would be acceptable to psychiatrists—many of whom wear their MD degrees as scientific badges of honor—or to psychologists—who might feel that their turf is being threatened—is anyone’s guess.

I see yet another alternative.  Rather than taking future psychiatrists out of medical school and teaching them an abbreviated version of medicine, let’s change medical school itself.  Let’s take some of the science out of medicine and replace it with what really matters: learning how to think critically and communicate with patients (and each other), and to think about our patients in a greater societal context.  Soon the Medical College Admission Test (MCAT) will include more questions about cultural studies and ethics.  Medical education should go one step further and offer more exposure to economics, politics, management, health-care policy, decision-making skills, communication techniques, multicultural issues, patient advocacy, and, of course, how to interpret and critique the science that does exist.

We doctors will need a scientific background to interpret the data we see on a regular basis, but we must also acknowledge that our day-to-day clinical work requires very little science at all.  (In fact, all the biochemistry, physiology, pharmacology, and anatomy we learned in medical school is either (a) irrelevant, or (b) readily available on our iPhones or by a quick search of Wikipedia.)  We need to be cautious not to bring science into a clinical scenario simply because it’s easy or “it’s what we know,” especially when it provides no benefit to the patient.

So we don’t need to take psychiatry out of medicine.  Instead, we should bring a more enlightened, patient-centered approach to all of medicine, starting with formal medical training itself.  This would help all medical professionals to offer care that focuses on the person, rather than an MRI or CT scan, receptor profile or genetic polymorphism, or lab value or score on a checklist.  It would help us to be more accepting of our patients’ diversity and less likely to rush to a diagnosis.  It might even restore some respect for the psychiatric profession, both within and outside of medicine.  Sure, it might mean that fewer patients are labeled with “mental illnesses” (translating into less of a need for psychiatrists), but for the good of our patients—and for the future of our profession—it’s a sacrifice that we ought to be willing to make.


Depression Tests: When “Basic” Research Becomes “Applied”

April 22, 2012

Anyone with an understanding of the scientific process can appreciate the difference between “basic” and “applied” research.  Basic research, often considered “pure” science, is the pursuit of knowledge for its own sake, motivated by curiosity and a desire to understand.  General questions and theories are tested, often without any obvious practical application.  On the other hand, “applied” research is usually done for a specific reason—to solve a real-world problem or to develop a new product: a better mousetrap, a faster computer, or a more effective way to diagnose illness.

In psychiatric research, the distinction between “basic” and “applied” research is often blurred.  Two recent articles (and the accompanying media attention they’ve received) provide very good examples of this phenomenon.  Both stories involve blood tests to diagnose depression.  Both are intriguing, novel studies.  Both may revolutionize our understanding of mental illness.  But responses to both have also been blown way out of proportion, seeking to “apply” what is clearly only at the “basic” stage.

The first study, by George Papakostas and his colleagues at Massachusetts General Hospital and Ridge Diagnostics, was published last December in the journal Molecular Psychiatry.  They developed a technique to measure nine proteins in the blood, plug those values into a fancy (although proprietary—i.e., unknown) algorithm, and calculate an “MDDScore” which, supposedly, diagnoses depression.  In their paper, they compared 70 depressed patients with 43 non-depressed people and showed that their assay identifies depression with a specificity of 81% and a sensitivity of 91%.
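
Those accuracy figures deserve a second look, because sensitivity and specificity are properties of the test itself, not of the population in which it is used.  Here is a quick back-of-the-envelope check (a sketch in Python; the 7% community prevalence below is my own assumption, roughly the 12-month rate of major depression in US adults, and not a figure from the paper):

    def ppv(sensitivity, specificity, prevalence):
        """Positive predictive value, via Bayes' rule."""
        true_pos = sensitivity * prevalence
        false_pos = (1 - specificity) * (1 - prevalence)
        return true_pos / (true_pos + false_pos)

    # In the case-control sample itself (70 depressed, 43 controls),
    # the "prevalence" is an artificial 62%, and the test looks strong:
    print(ppv(0.91, 0.81, 70 / 113))   # about 0.89

    # Screening a community sample at an assumed 7% prevalence is
    # another story entirely:
    print(ppv(0.91, 0.81, 0.07))       # about 0.26

In other words, even taking the published figures at face value, roughly three out of every four positive results in a general screening population could be false positives.  All the more reason to treat this as “basic” science rather than a ready-to-use diagnostic.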

The other study, published two weeks ago in Translational Psychiatry by Eve Redei and her colleagues at Northwestern University, purports to diagnose depression in adolescents.  They didn’t measure proteins in patients’ blood, but rather levels of RNA.  (As a quick aside, RNA is the “messenger” molecule inside each cell that tells the cell which proteins to make.)  They studied a smaller number of patients—only 14 depressed teenagers, compared with 14 non-depressed controls—and identified 11 RNA molecules which were expressed differently between the two groups.  These were selected from a much larger number of RNA transcripts on the basis of an animal model of depression: specifically, a rat strain that was bred to show “depressive-like” behavior.

If we look at each of these studies as “basic” science, they offer some potentially tantalizing insights into what might be happening in the bodies of depressed people (or rats).  Even though some of us argue that no two “depressed” people are alike—and we should look instead at person-centered factors that might explain how they are unique—these studies nevertheless might have something to say about the common underlying biology of depression—if such a thing exists.  At the very least, further investigation into why proteins with no logical connection to depression (such as apolipoprotein CIII or myeloperoxidase), or RNA transcripts for genes like toll-like receptor 1 or the S-phase cyclin-A-associated protein, differ in depressed patients might help us, someday, to develop more effective treatments than the often ineffective SSRIs that are the current standard of care.

Surprisingly, though, this is not how these articles have been greeted.  Take the Redei article, for instance.  Since its publication, there have been dozens of media mentions, with such headlines as “Depression Blood Test for Teens May Lead To Less Stigma” and “Depression Researchers May Have Developed First Blood Test For Teens.”  To the everyday reader, it seems as if we’ve gone straight from the bench to the bedside.  Granted, each story mentions that the test is not quite “ready for prime time,” but headlines draw readers’ attention.  Even the APA’s official Twitter feed mentioned it (“Blood test for early-onset #depression promising,” along with the tags “#childrenshealth” and “#fightstigma”), giving it a certain degree of legitimacy among doctors and patients alike.

(I should point out that one of Redei’s co-authors, Bill Gardner, emphasized—correctly—on his own blog that their study was NOT to be seen as a test for depression, and that it required refinement and replication before it could be used clinically.  He also acknowledged that their study population—adolescents—are often targets for unnecessary pharmacological intervention, demanding even further caution in interpreting their results.)

As for the Papakostas article, there was a similar flurry of media coverage when preliminary results were presented last year.  Like Redei’s research, it’s an interesting study that could change the way we diagnose depression.  However, unlike Redei’s study, it was funded by a private, self-proclaimed “neurodiagnostics” company.  (That company, Ridge Diagnostics, has not revealed the algorithm by which it calculates the “MDDScore,” essentially preventing any independent group from trying to replicate its findings.)

Incidentally, the Chairman of the Board of Ridge Diagnostics is David Hale, who also founded—and is Chairman of—Somaxon Pharmaceuticals, a company I wrote about last year when it tried to bring low-dose doxepin to the market as a sleep aid, and then used its patent muscle to issue cease-and-desist letters to people who suggested using the ultra-cheap generic version instead of Somaxon’s name-brand drug.

Ridge Diagnostics has apparently decided not to wait for replication of its findings, and instead is taking its MDDScore to the masses, complete with a Twitter feed, a Facebook Page, and a series of videos selling the MDDScore (priced at a low, low $745!), aimed directly at patients.  At this rate, it’s only a matter of time before the MDDScore is featured on the “Dr Oz Show” or “The Doctors.”  Take a look at this professionally produced video, for instance, posted last month on YouTube:

[Embedded video]

(Interesting—the host hardly even mentions the word “depression.”  A focus group must have told them that it detracted from his sales pitch.)

I think it’s great that scientists are investigating the basic biology of depression.  I also have no problem when private companies try to get in on the act.  However, when research that is obviously at the “basic” stage (and, yes, not ready for prime time) becomes the focus of a viral video marketing campaign or a major story on the Huffington Post, one must wonder why we’ve been so quick to cross the line from “basic” research into the “applied” uses of those preliminary findings.  Okay, okay, I know the answer is money.  But who has the authority—and the voice—to say, “not so fast” and preserve some integrity in the field of psychiatric research?  Where’s the money in that?


Yes, We Still Need Psychiatrists, But For What?

April 15, 2012

If anyone’s looking for a brief primer on the popular perception of psychiatry, and on the animosity of those who feel hurt or scarred by this (my) profession, a good place to start would be a recent post by Steven Moffic entitled “Why We Still Need Psychiatrists!” on Robert Whitaker’s site, Mad In America.

Moffic, a psychiatrist at the Medical College of Wisconsin, is a published author, a regular contributor to Psychiatric Times, and a member of the Group for the Advancement of Psychiatry.  Whitaker is a journalist best known for his books Mad in America and Anatomy of an Epidemic, both of which have challenged modern psychiatric practice.

Moffic’s thesis is that we still “need” psychiatrists, particularly to help engineer necessary changes in the delivery of psychiatric care (for example, integrating psychiatry into primary care, incorporating therapeutic communities and other psychosocial treatments into the psychiatric mainstream, etc).  He argues that we are best equipped to do so by virtue of our extensive training, our knowledge of the brain, and our “dedication to the patient.”

The reaction by readers was, predictably, swift and furious.  While Whitaker’s readers are not exactly a representative sample (one reader, for example, commented that “the search for a good psychiatrist can begin in the obituary column” – a comment which was later deleted by Mr Whitaker), their comments—and Moffic’s responses—reinforce the idea that, despite our best intentions, psychiatrists are still not on the same page as many of the people we intend to serve.

As I read the comments, I find myself sympathetic to many of Moffic’s critics.  There’s still a lot we don’t know about mental illness, and much of what we do might legitimately be called “pseudoscience.”  However, I am also keenly aware of one uncomfortable fact:  For every patient who argues that psychiatric diagnoses are fallacies and that medications “harm” or “kill” people, there are dozens—if not hundreds—of others who not only disagree, but who INSIST that they DO have these disorders and who don’t just accept but REQUEST drug treatment.

For instance, consider this response to Moffic’s post:

Stop chemically lobotomizing adults, teens, children, and infants for your imaginary psychiatric ‘brain diseases.’  Stop spreading lies to the world about these ‘chronic’ (fake) brain illnesses, telling people they can only hope to manage them with ‘appropriate’ (as defined by you and yours) ‘treatments,’ so that they are made to falsely believe in non-existent illnesses and deficiencies that would have them ‘disabled’ for a lifetime and too demoralized about it to give a damn.

I don’t know how Moffic would respond to such criticism.  If he’s like most psychiatrists I know, he may just shrug it off as a “fringe” argument.  But that’s a dangerous move, because despite the commenter’s tone, his/her arguments are worthy of scientific investigation.

Let’s assume this commenter’s points are entirely correct.  That still doesn’t change the fact that lots of people have already “bought in” to the psychiatric model.  In my practice, I routinely see patients who want to believe that they have a “brain disease.”  They ask me for the “appropriate treatment”—often a specific medication they’ve seen on TV or have taken from a friend—and they don’t want to hear about the side effects or how it’s not indicated for their condition.  (It takes more energy to say “no” than to say “yes.”)  They often appreciate the fact that there’s a “chemical deficiency” or “imbalance” to explain their behavior or their moods.  (Incidentally, family members, the criminal justice system, and countless social service agencies also appreciate this “explanation.”)  Finally, as I’ve written about before, many patients don’t see “disability” as such a bad thing; in fact, they actively pursue it—sometimes even demanding this label—despite my attempts to convince them otherwise.

In short, I agree with many of the critics on Whitaker’s site—and Whitaker himself—that psychiatry has far overstepped its bounds and has mislabeled and mistreated countless people.  (I can’t tell you how many times I’ve been asked to prescribe a drug for which I think to myself “what in the world is this going to do????”)  But what the critics fail to realize is that this “delusion” of psychiatry is not just in psychiatrists’ minds.  It’s part of society.  Families, the legal system, Social Security, Medicaid/Medicare, Big Pharma, Madison Avenue, insurance companies, and employers of psychiatrists (and, increasingly, non-psychiatrists) like me—all of them see psychiatry the same way: as a way to label and “pathologize” behaviors that are, oftentimes, only slight variants of “normal” (whatever that is) and to “treat” them, usually with chemicals.

Any attempt to challenge this status quo (this “shared delusion,” as I wrote in my response to Moffic’s post) is met with resistance, as illustrated by the case of Loren Mosher, whom Moffic discusses briefly.  The influence of the APA and drug companies on popular thought—not to mention legislation and allocation of health-care resources—is far more deeply entrenched than most people realize.

But the good thing is that Moffic’s arguments for why we need psychiatrists can just as easily be used as arguments for why psychiatrists are uniquely positioned to change this state of affairs.  Only psychiatrists—with their years of scientific education—can dig through the muck (as one commenter wrote, “to find nuggets in the sewage”) and appropriately evaluate the medical literature.  Psychiatrists should have a commanding knowledge of the evidence for all sorts of treatments (not just “biological” ones, even though one commenter lamented that she knew more about meds than her psychiatrist!) and argue for their inclusion and reimbursement in the services we provide.

Psychiatrists can (or should) also have the communication skills to explain to patients how they can overcome “illnesses” or, indeed, to educate them that their complaints are not even “illnesses” in the first place.  Finally, psychiatrists should command the requisite authority and respect amongst policymakers to challenge the broken “disability” system, a system which, I agree, does make people “too demoralized to give a damn.”

This is an uphill battle.  It’s particularly difficult when psychiatrists tenaciously hold on to a status quo which, unfortunately, is also foisted upon them by their employers.  (And I fear that Obamacare, should it come to pass, is only going to intensify the overdiagnosis and ultrarapid biological management of patients—more likely by providers with even less education than the psychiatrist).  But it’s a battle we must fight, not just for the sake of our jobs, but—as Whitaker’s readers emphasize—for the long-term well-being of millions of patients, and, quite possibly, for the well-being of our society as a whole.


Skin In The Game

April 8, 2012

We’ve all heard the saying “you get what you pay for.”  But in medicine, where the laws of economics don’t work like they do everywhere else, this maxim is essentially meaningless.  Thanks to our national health-insurance mess, some people pay very little (or nothing) out-of-pocket for a wide range of services, while others have to fork over huge sums of money for even the most basic of care.

Good arguments have been made for health insurance to become more like automobile or homeowners insurance.  Car insurance doesn’t cover oil changes and replacement tires, but it does pay for collisions and mishaps that may result if you don’t perform routine maintenance.  Homeowners insurance doesn’t pay the plumber, but might reimburse you for a flood that results from a blown valve on your water heater.

In medicine, we’ve never really seen this type of arrangement, apart from the occasional high-deductible plan or health savings account.  If you have a typical employer-sponsored health plan, not only do you pay little or nothing for your basic, routine care, but your insurance company has probably added even more services (massage, discounted gym memberships, “healthy eating” classes) in the name of preventive medicine and wellness.  (It’s almost as if your auto insurance paid for exactly what you’d do if you wanted to hang on to your car for 500,000 miles.)  When faced with this smorgasbord of free options, it’s easy to ignore the true underlying cost.  One way to reverse this trend is to ask patients to put some “skin in the game.”

This might happen in Medicaid, the insurance plan for low-income persons.  California Governor Jerry Brown, for instance, proposed that patients receiving Medi-Cal (the California version of Medicaid) should pay higher co-pay amounts for care which is currently free (or nearly so).  A $5 co-payment for an office visit, or a $50 co-pay for an emergency room visit might sound hefty, but it’s a bargain—even for a poor family—if it means the difference between life and death… or even just sickness and health.

Unfortunately, California’s proposal was shot down in February by the Obama administration on legal grounds: the co-pays “are neither temporary nor targeted at a specific population.”  There are other legitimate questions, too, about its feasibility.  Would people forgo routine checkups or neglect to fill prescriptions to save a few dollars, only to cost the system more money down the road?  Would doctors and hospitals even bother to bill people (or send accounts to collections) for such low sums?  Is it fair to charge people money for what some people think is a right and should be free to all?

Without commenting on the moral and political arguments for or against this plan, I believe that this is a proposal worth testing—and psychiatry may be precisely the specialty in which it holds the greatest promise.

Psychiatric illnesses are unique among medical conditions.  Effective treatment involves more than just taking a pill or subjecting oneself to a biological intervention.  It involves the patient wanting to get better and believing in the path he or she is taking to achieve that outcome (even if it violates what the provider thinks is best).  Call it placebo effect, call it “transference,” call it insight, call it what you will—the psychological aspect of the patient’s “buying in” (pardon the pun) to treatment is just as important to successful psychiatric care as the biological effect of the drugs we prescribe, and perhaps more so.

Like it or not, part of that “wanting” and “believing” also involves “paying.”  Payment needn’t be extreme, but it should be enough to be noticeable, because only when someone has “skin in the game” does he or she feel motivated to change.  (Incidentally, this doesn’t have to be money; it could be one’s time as well: agreeing to attend an hour of weekly psychotherapy, going to self-help groups 2 or 3 times a week, or simply driving or taking the bus to the doctor’s office can mean a great deal for one’s recovery.)  It’s more than merely symbolic.

In my own life, I’ll admit, I took medical care for granted.  I was fortunate enough to be a healthy child, and had parents with good jobs that provided excellent health insurance.  It wasn’t until my mid-20s that I actually had to pay for medical care—even my co-payments seemed shocking, since I had never really had to pay anything before then.  Over the years, as I struggled with my own mental health needs (which were, unfortunately, not covered by my insurance), I had to pay ever-larger amounts out of my own pocket.  I honestly believe that this was a major contributor to my successful recovery—for starters, I wanted to get to a point where it didn’t take such a huge bite out of my bank account!

The absence of a “buy-in” is most stark precisely where Governor Brown wants to change it:  in Medicaid patients.  In the community clinics where I have worked, patients can visit the office with zero co-payment (and no penalties for no-shows).  This includes medication and therapy visits.  Prescriptions are often free as well; some patients take 4 or 5 (or more) medications—at zero out-of-pocket cost—which can set the government back hundreds of dollars a month.  At the same time, patients with no health insurance (or even with insurance, like me) can’t access the same drugs because of their prohibitive price tag or byzantine insurance restrictions.  It’s nowhere near a level playing field.

To make matters worse, patients on Medicaid generally tend to be more medically ill and, almost by definition, face significant environmental stressors that detrimentally affect their physical and mental well-being.  In these patients, we give psychiatric diagnoses far too liberally (often simply to give patients the opportunity to keep coming to see us, not because we truly believe there’s a diagnosable “mental illness”), and allow them to keep coming in—for free—to get various forms filled out and to refill medications that cost a fortune and don’t treat anything, perpetuating their dependence on an already overburdened health care system.  In fact, these patients would be much better served if we expected (and helped) them to obtain—and yes, even pay for—counseling or social-work assistance to overcome their environmental stressors, or measures to promote physical and mental wellness.

In the end, the solution seems like common sense.  When you own something—whether a home, an automobile, a major appliance, whatever—you tend to invest much more time and money in it than if you were just renting or borrowing.  The same could be said for your own health.  I don’t think it’s unreasonable to ask people to pony up an investment—even a small one—in their psychological and physical well-being.  Not only does it make good fiscal sense, but the psychological effect of taking responsibility for one’s own health may result in even greater future returns on that investment.  For everyone.


Did The APA Miss A Defining Moment?

April 1, 2012

Sometimes an organization or individual facing a potential public-relations disaster can use the incident to send a powerful message and to change the way that organization or individual is perceived.  I wonder whether the American Psychiatric Association (APA) may have missed its opportunity to do exactly that.

Several weeks ago, the CBS news program 60 Minutes ran a story with the provocative argument that antidepressants are no better than placebo.  Reporter Lesley Stahl highlighted the work of Irving Kirsch, a psychologist who has studied the placebo effect for decades.  He has concluded that most, and maybe all, of the benefit of antidepressants can be attributed to placebo.  Simply put, they work because patients (and their doctors) expect them to work.

Since then, the psychiatric establishment has offered several counterarguments.  All have placed psychiatry squarely on the defensive.  One psychiatrist (Michael Thase), interviewed on the CBS program, defended antidepressants, arguing that Kirsch “is confusing the results of studies with what goes on in practice.”  Alan Schatzberg, past APA president and former Stanford chairman, said at a conference last weekend (where he spoke about “new antidepressants”) that the APA executive committee was “outraged” at the story, glibly remarking, “In this nation, if you can attack a psychiatrist, you win a medal.”  The leadership of the APA has mounted an aggressive defense, too.  Incoming APA president and Columbia chairman Jeffrey Lieberman called Kirsch “mistaken and confused, … ideologically based, [and] … just plain wrong.”  Similarly, current APA president John Oldham called the story “irresponsible and dangerous [and] … at odds with common clinical experience.”

These are indeed strong words.  But they raise one very important question: who or what exactly are these spokesmen defending?  Patients?  Psychiatrists?  Drugs?  It would seem to me that the leadership of a professional medical organization should be defending good patient care, or at the very least, greater opportunities for its members to provide good patient care.  The arguments put forth by APA leadership, however, seem to be defending none of the above.  Instead, they seem to be defending antidepressants.

For the purposes of this post, I won’t weigh in on the question of whether antidepressants work or not.  It’s a complicated issue with no easy answer (we’ll offer some insight in the May issue of the Carlat Psychiatry Report).  However, let’s just assume that the general public now has good reason to believe that current antidepressants are essentially worthless, thanks to the 60 Minutes story (not to mention—just a few weeks earlier—a report on NPR’s “Morning Edition,” as well as a two-part series by Marcia Angell in the New York Review of Books last summer).  Justifiably or not, our patients will be skeptical of psychopharmacology going forward.  If we psychiatrists are hell-bent on defending antidepressants, we’d better have even stronger reasons for doing so than simply “we know they work.”

But why are psychiatrists defending antidepressants in the first place?  If anyone should be defending antidepressants, it should be the drug companies, not psychiatrists.  Why didn’t 60 Minutes interview a Lilly medical expert to explain how they did the initial studies of Prozac, or a Pfizer scientist to explain why patients should be put on Pristiq?  (Now that would have been fun!!)  I would have loved to hear Michael Thase—or anyone from the psychiatric establishment—say to Lesley Stahl:

“You know, Dr. Kirsch might just be onto something.  His research is telling us that maybe antidepressants really don’t work as well as we once thought.  As a result, we psychiatrists want drug companies to do better studies on their drugs before approval, and stop marketing their drugs so aggressively to us—and to our patients—until they can show us better data.  In the meantime we want to get paid to provide therapy along with—or instead of—medications, and we hope that the APA puts more of an emphasis on non-biological treatments for depression in the future.”

Wouldn’t that have been great?  For those of us (like me) who think the essence of depression is far more than faulty biology to be corrected with a pill, it would have been very refreshing to hear.  Moreover, it would help our field to reclaim some of the “territory” we’ve been abdicating to others (therapists, psychologists, social workers)—territory that may ultimately be shown to be more relevant for most patients than drugs.  (By the way, I don’t mean to drive a wedge between psychiatry and these other specialties, as I truly believe we can coexist and complement each other.  But as I wrote in my last post, psychiatry really needs to stand up for something, and this would have been a perfect opportunity to do exactly that.)

To his credit, Dr. Oldham wrote an editorial two weeks ago in Psychiatric News (the APA’s weekly newsletter) explaining that he was asked to contribute to the 60 Minutes piece, but CBS canceled his interview at the last minute.  He wrote a response but CBS refused to post it on its website (the official APA response can be found here).  Interestingly, he went on to acknowledge that “good care” (i.e., whatever works) is what our patients need, and also conceded that, at least for “milder forms of depression,” the “nonspecific [placebo] effect dwarfs the specific [drug] effect.”

I think the APA would have a pretty powerful argument if it emphasized this message (i.e., that the placebo effect might be much greater than we believe, and that we should study this more closely—maybe even harness it for the sake of our patients) over what sounds like a knee-jerk defense of drugs.  It’s a message that would demand better science, prioritize our patients’ well-being, and perhaps even reduce treatment costs in the long run.  If, instead, we call “foul” on anyone who criticizes medications, not only do we send the message that we put our faith in only one form of therapy (out of many), but we also become de facto spokespersons for the pharmaceutical industry.  If the APA wants to change that perception among the general public, this would be a great place to start.


The Problem With Organized Psychiatry

March 27, 2012

Well, it happened again.  I attended yet another professional conference this weekend (specifically, the annual meeting of my regional psychiatric society), and—along with all the talks, exhibits, and networking opportunities—came the call I’ve heard over and over again in venues like this one:  We must get psychiatrists involved in organized medicine.  We must stand up for what’s important to our profession and make our voices heard!!

Is this just a way for the organization to make money?  One would be forgiven for drawing this conclusion.  Annual dues are not trivial: membership in the society costs up to $290 per person, and also requires APA membership, which ranges from $205 to $565 per year.  But setting the money aside, the society firmly believes that we must protect ourselves and our profession.  Furthermore, the best way to do so is to recruit as many members as possible, and encourage members to stand up for our interests.

This raises one important question:  what exactly are we standing up for?  I think most psychiatrists would agree that we’d like to keep our jobs, and we’d like to get paid well, too.  (Oh, and benefits would be nice.)  But that’s about all the common ground that comes to mind.  The fact that we work in so many different settings makes it impossible for us to speak as a single voice or even (gasp!) to unionize.

Consider the following:  the conference featured a panel discussion by five early-career psychiatrists:  an academic psychiatrist; a county mental health psychiatrist; a jail psychiatrist; an HMO psychiatrist; and a cash-only private-practice psychiatrist.  What might all of those psychiatrists have in common?  As it turns out, not much.  The HMO psychiatrist has a 9-to-5 job, a stable income, and extraordinary benefits, but a restricted range of services, a very limited medication formulary and not much flexibility in what she can provide.  The private-practice guy, on the other hand, can do (and charge) essentially whatever he wants (a lot, as it turns out); but he also has to pay his own overhead.  The county psychiatrist wants his patients to have access to additional services (therapy, case management, housing, vocational training, etc) that might be irrelevant—or wasteful—in other settings.  The academic psychiatrist is concerned about his ability to obtain research funding, to keep his inpatient unit afloat, and to satisfy his department chair.  The jail psychiatrist wants access to substance abuse treatment and other vital services, and to help inmates make the transition back into their community safely.

Even within a given practice setting, different psychiatrists might disagree on what they want:  Some might want to see more patients, while delegating services like psychotherapy and case management to other providers.  On the other hand, some might want to spend more time with fewer patients and to be paid to provide these services themselves.  Some might want a more generous medication formulary, while others might argue that the benefits of medication are too exaggerated and want patients to have access to other types of treatment.  Finally, some might lobby for greater access to pharmaceutical companies and the benefits they provide (samples, coupons, lectures, meals, etc), while others might argue that pharmaceutical promotion has corrupted our field.

For most of the history of modern medicine, doctors have had a hard time “organizing” because there has been no entity worth organizing against.  Today, we have some definite targets: the Affordable Care Act, big insurance companies, hospital employers, pharmacy benefits managers, state and local governments, malpractice attorneys, etc.  But not all doctors see those threats equally.  (Many, in fact, welcome the Affordable Care Act with open arms.)  So even though there are, for instance, several unanswered questions as to how the ACA (aka “Obamacare”) might change the health-care-delivery landscape, the ramifications are, in the eyes of most doctors, too far-removed from the day-to-day aspects of patient care for any of us to worry about.  Just like everything else in the above list, we shrug them off as nuisances—the costs of doing business—and try to devote attention to our patients instead of agitating for change.

In psychiatry, the conflicts are particularly wide-ranging, and the stakes more poorly defined than elsewhere in medicine, making the targets of our discontent less clear.  One of the panelists put it best when she said: “there’s a lot of white noise in psychiatry.”  In other words, we really can’t figure out where we’re headed—or even where we want to head.  At one extreme, for instance, are those psychiatrists who argue (sometimes convincingly) that all psychiatry is a farce, that diagnoses are socially constructed entities with no external validity, and that “treatment” produces more harm than good.  At the other extreme are the DSM promoters and their ilk, arguing for greater access to effective treatment, the medicalization of human behavior, and the early recognition and treatment of mental illness—sometimes even before it develops.

Until we psychiatrists determine what we want the future of psychiatric care to look like, it will be difficult for us to jump on any common bandwagon.  In the meantime, the future of our field will be determined by those who do have a well-formed agenda and who can rally around a common goal.  At present, that includes the APA, insurance companies, Big Pharma, and government.  As for the rest of us, we’ll just pick up whatever scraps are left over, and “organize” after we’ve finished our charts, returned our calls, completed the prior authorizations, filed the disability paperwork, paid our bills, and said good-night to our kids.


The Well Person

March 21, 2012

What does it mean to be “normal”?  We’re all unique, aren’t we?  We differ from each other in so many ways.  So what does it mean to say someone is “normal,” while someone else has a “disorder”?

This is, of course, the age-old question of psychiatric diagnosis.  The authors of the DSM-5, in fact, are grappling with this very question right now.  Take grieving, for example.  As I and others have written, grieving is “normal,” although its duration and intensity vary from person to person.  At some point, a line may be crossed, beyond which a person’s grief is no longer adaptive but dangerous.  Where that line falls, however, cannot be determined by a book or by a committee.

Psychiatrists ought to know who’s healthy and who’s not.  After all, we call ourselves experts in “mental health,” don’t we?  Surprisingly, I don’t think we’re very good at this.  We are acutely sensitive to disorder but have trouble identifying wellness.  We can recognize patients’ difficulties in dealing with other people but are hard-pressed to describe healthy interpersonal skills.  We admit that someone might be able to live with auditory hallucinations but we still feel an urge to increase the antipsychotic dose when a patient says she still hears “those voices.”   We are quick to point out how a patient’s alcohol or marijuana use might be a problem, but we can’t describe how he might use these substances in moderation.  I could go on and on.

Part of the reason for this might lie in how we’re trained.  In medical school we learn basic psychopathology and drug mechanisms (and, by the way, there are no drugs whose mechanism “maintains normality”—they all fix something that’s broken).  We learn how to do a mental status exam, complete with full descriptions of the behavior of manic, psychotic, depressed, and anxious people—but not “normals.”  Then, in our postgraduate training, our early years are spent with the most ill patients—those in hospitals, locked facilities, or emergency settings.  It’s not until much later in training that we get to see more functional individuals in an office or clinic.  But by that time, we’re already tuned in to deficits and symptoms, and not to personal strengths, abilities, or resilience-promoting factors.

In a recent discussion with a colleague about how psychiatrists might best serve a large population of patients (e.g., in a “medical home” model), I suggested that perhaps each psychiatrist could be responsible for a handful of people (say, 300 or 400 individuals).  Our job would be to see each of these 300-400 people at least once a year, regardless of whether they have a psychiatric diagnosis or not.  Those who have emotional or psychiatric complaints, or who have a clear mental illness, could be seen more frequently; the others would get their annual checkup and their clean bill of (mental) health.  It would be sort of like an annual medical visit or a “well-baby visit” in pediatrics: a way for a person to be seen by a doctor, receive preventive care, and undergo screening to make sure no significant problems go unaddressed.
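
For what it’s worth, the arithmetic of such a panel is not outlandish.  Here is a rough sketch (again in Python; every number except the 300-400 panel size is an assumption of mine, purely for illustration):

    PANEL_SIZE = 350          # middle of the suggested 300-400 range
    WELL_FRACTION = 0.8       # assumed: seen once a year for a checkup
    ILL_VISITS_PER_YEAR = 6   # assumed follow-up rate for everyone else
    VISIT_HOURS = 0.5         # assumed 30-minute visits
    WORK_WEEKS = 48

    well_visits = PANEL_SIZE * WELL_FRACTION * 1
    ill_visits = PANEL_SIZE * (1 - WELL_FRACTION) * ILL_VISITS_PER_YEAR
    hours_per_week = (well_visits + ill_visits) * VISIT_HOURS / WORK_WEEKS

    print(round(well_visits + ill_visits), "visits per year")  # 700 visits per year
    print(round(hours_per_week, 1), "hours per week")          # 7.3 hours per week

Under those (admittedly invented) assumptions, the entire panel amounts to roughly seven clinic hours per week.  The obstacle, in other words, is not logistics.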

Alas, this would never fly in psychiatry.  Why not?  Because we’re too accustomed to seeing illness.  We’re too quick to interpret “sadness” as “depression”; to interpret “anxiety” or “nerves” as a cue for a benzodiazepine prescription; or to interpret “inattention” or poor work/school performance as ADHD.  I’ve even experienced this myself.  It is difficult to tell a person “you’re really doing just fine; there’s no need for you to see me, but if you want to come back, just call.”  For one thing, in many settings, I wouldn’t get paid for the visit if I said this.  But another concern, of course, is the fear of missing something:  Maybe this person really is bipolar [or whatever] and if I don’t keep seeing him, there will be a bad outcome and I’ll be responsible.

There’s also the fact that psychiatry is not a primary care specialty: insurance plans don’t pay for an annual “well-person visit” with a psychiatrist.  Patients who come to a psychiatrist’s office are usually there for a reason.  Maybe the patient deliberately sought out the psychiatrist to ask for help.  Maybe their primary care provider saw something wrong and wanted the psychiatrist’s input.  In the former case, telling the person he or she is “okay” risks losing their trust (“but I just know something’s wrong, doc!”).  In the latter, it risks losing a referral source or professional relationship.

So how do we fix this?  I think we psychiatrists need to spend more time learning what “normal” really is.  There are, after all, no classes or textbooks on “Normal Adults.”  For starters, we can remind ourselves that the “normal” people we’ve lived among all our lives may in fact have features that we might otherwise see as a disorder.  Learning to accept these quirks, foibles, and idiosyncrasies may help us to accept them in our patients.

In terms of using the DSM, we need to become more willing to use the V71.09 code, which means, essentially, “No diagnosis or condition.”  Many psychiatrists don’t even know this code exists.  Instead, we give “NOS” diagnoses (“not otherwise specified”) or “rule-outs,” which eventually become de facto diagnoses because we never actually take the time to rule them out!  A V71.09 should be seen as a perfectly valid (and reimbursable) diagnosis—a statement that a person has, in fact, a clean bill of mental health.  Now we just need to figure out what that means.

It is said that when Pope Julius II asked Michelangelo how he sculpted David out of a marble slab, he replied: “I just removed the parts that weren’t David.”  In psychiatry, we spend too much time thinking about what’s not David and relentlessly chipping away.  We spend too little time thinking about the healthy figure that may already be standing right in front of our eyes.


Is The Criticism of DSM-5 Misguided? Part II

March 14, 2012

A few months ago, I wrote about how critics of the DSM-5 (led by Allen Frances, editor of the DSM-IV) might be barking up the wrong tree.  I argued that many of the problems the critics predict are not the fault of the book, but rather how people might use it.  Admittedly, this sounds a lot like the “guns don’t kill people, people do” argument against gun control (as one of my commenters pointed out), or a way for me to shift responsibility to someone else (as another commenter wrote).  But it’s a side of the issue that no one seems to be addressing.

The issue emerges again with the ongoing controversy over the “bereavement exclusion” in the DSM-IV.  Briefly, our current DSM says that grieving over a loved one does not constitute major depression (as long as it doesn’t last more than two months) and, as such, should not be treated.  However, some have argued that this exclusion should be removed in DSM-5.  According to Sidney Zisook, a UCSD psychiatrist, if we fail to recognize and treat clinical depression simply because it occurs in the two-month bereavement period, we do those people a “disservice.”  Likewise, David Kupfer, chair of the DSM-5 task force, defends the removal of the bereavement exclusion because “if patients … want help, they should not be prevented from getting [it] because somebody tells them that this is what everybody has when they have a loss.”

The NPR news program “Talk of the Nation” featured a discussion of this topic on Tuesday’s broadcast, but the guests and callers described the issue in a more nuanced (translation: “real-world”) fashion.  Michael Craig Miller, Editor of the Harvard Mental Health Letter, referred to the grieving process by saying: “The reality is that there is no firm line, and it is always a judgment call…. labels tend not to matter as much as the practical concern, that people shouldn’t feel a sense of shame.  If they feel they need some help to get through something, then they should ask for it.”  The need for treatment during bereavement, therefore, is not a yes/no, either/or proposition, but something determined individually.

This sentiment was echoed in a February 19 editorial in The Lancet by the psychiatrist/anthropologist Arthur Kleinman, who wrote that the experience of loss “is always framed by meanings and values, which themselves are affected by all sorts of things like one’s age, health, financial and work conditions, and what is happening in one’s life and in the wider world.”  Everyone seems to be saying pretty much the same thing:  people grieve in different ways, but those who are suffering should have access to treatment.

So why the controversy?  I can only surmise it’s because the critics of DSM-5 believe that mental health clinicians are unable to determine who needs help and, therefore, must rely on a book to do so.  Listening to the arguments of Allen Frances et al., one would think that we have no ability to collaborate, empathize, and relate with our patients.  I think that attitude is objectionable to anyone who has made it his or her life’s work to treat the emotional suffering of others, and it underestimates the effort that many of us devote to the people we serve.

But in some cases the critics are right.  Sometimes clinicians do get answers from the book, or from some senseless protocol (usually written by a non-clinician).  One caller to the NPR program said she was handed an antidepressant prescription upon her discharge from the hospital after a stillbirth at 8 months of pregnancy.  Was she grieving?  Absolutely.  Did she need the antidepressant?  No one even bothered to figure that out.  It’s like the clinicians who see “bipolar” in everyone who has anger problems; “PTSD” in everyone who was raised in a turbulent household; or “ADHD” in every child who does poorly in school.

If a clinician observes a symptom and makes a diagnosis simply on the basis of a checklist from a book, or from a single statement by a patient, and not on the basis of his or her full understanding, experience, and clinical assessment of that patient, then the clinician (and not the book) deserves to take full responsibility for any negative outcome of that treatment.  [And if this counts as acceptable practice, then we might as well fire all the psychiatrists and hire high-school interns—or computers!—at a mere fraction of the cost, because they could do this job just as well.]

Could the new DSM-5 be misused?  Yes.  Drug companies could (and probably will) exploit it to develop expensive and potentially harmful drugs.  Researchers will use it to design clinical trials on patients who, regrettably, may not resemble those in the “real world.”  Unskilled clinicians will use it to make imperfect diagnoses and give inappropriate labels to their patients.  Insurance companies will use the labels to approve or deny treatment.  Government agencies will use it to determine everything from who’s “disabled” to who gets access to special services in preschool.  And, of course, the American Psychiatric Association will use it as its largest revenue-generating tool, written by authors with extensive drug-industry ties.

To me, those are the places where critics should focus their rage.  But remember: to most good clinicians, it’s just a book—a field guide that helps us identify potential concerns and guide future research into mental illness and its treatment.  What we choose to do with such information depends upon our clinical acumen and our relationship with our patients.  To assume that clinicians will blindly use it to slap the “depression” label and force antidepressants on anyone whose spouse or parent just died, “because the book said so,” is insulting to those of us who actually care about our patients, and about what we do to improve their lives.