What’s the Proper Place of Science in Psychiatry and Medicine?

April 29, 2012

On the pages of this blog I have frequently written about the “scientific” aspects of psychiatry and questioned how truly scientific they are.   And I’m certainly not alone.  With the growing outcry against psychiatry for its medicalization of human behavior and the use of powerful drugs to treat what’s essentially normal variability in our everyday existence, it seems as if everyone is challenging the evidence base behind what we do—except most of us who do it on a daily basis.

Psychiatrists are unique among medical professionals, because we need to play two roles at once.  On the one hand, we must be scientists—determining whether there’s a biological basis for a patient’s symptoms.  On the other hand, we must identify environmental or psychological precursors to a patient’s complaints and help to “fix” those, too.  However, today’s psychiatrists often eschew the latter approach, brushing off their patients’ internal or interpersonal dynamics and ignoring environmental and social influences, rushing instead to play the “doctor” card:  labeling, diagnosing, and prescribing.

Why do we do this?  We all know the obvious reasons:  shrinking appointment lengths, the influence of drug companies, psychiatrists’ increasing desire to see themselves as “clinical neuroscientists,” and so on.

But there’s another, less obvious reason, one which affects all doctors.  Medical training is all about science.  There’s a reason why pre-meds have to take a year of calculus, organic chemistry, and physics to get into medical school.  It’s not because doctors solve differential equations and perform redox reactions all day.  It’s because medicine is a science (or so we tell ourselves), and, as such, we demand a scientific, mechanistic explanation for everything from a broken toe to a myocardial infarction to a manic episode.  We do “med checks,” as much as we might not want to, because that’s what we’ve been trained to do.  And the same holds true for other medical specialties, too.  Little emphasis is placed on talking and listening.  Instead, it’s all about data, numbers, mechanisms, outcomes, and the right drugs for the job.

Perhaps it’s time to rethink the whole “medical science” enterprise.  In much of medicine, paying more and more attention to biological measures—and the scientific evidence—hasn’t really improved outcomes.  “Evidence-based medicine,” in fact, is really just a way for payers and the government to create guidelines to reduce costs, not a way to improve individual patients’ care. Moreover, we see examples all the time—in all medical disciplines—of the corruption of scientific data (often fueled by drug company greed) and very little improvement in patient outcomes.  Statins, for instance, are effective drugs for high cholesterol, but their widespread use in people with no other risk factors seems to confer no additional benefit.  Decades of research into understanding appetite and metabolism hasn’t eradicated obesity in our society.  A full-scale effort to elucidate the brain’s “reward pathways” hasn’t made a dent in the prevalence of drug and alcohol addiction.

Psychiatry suffers under the same scientific determinism.  Everything we call a “disease” in psychiatry could just as easily be called something else.  I’ve seen lots of depressed people in my office, but I can’t say for sure whether I’ve ever seen one with a biological illness called “Major Depressive Disorder.”  But that’s what I write in the chart.  If a patient in my med-management clinic tells me he feels better after six weeks on an antidepressant, I have no way of knowing whether it was due to the drug.  But that’s what I tell myself—and that’s usually what he believes, too.  My training encourages me to see my patients as objects, as collections of symptoms, and to interpret my “biological” interventions as having a far greater impact on my patients’ health than the hundreds or thousands of other phenomena my patients experience between appointments with me.  Is this fair?

(This may explain some of the extreme animosity from the anti-psychiatry crowd—and others—against some very well-meaning psychiatrists.  With few exceptions, the psychiatrists I know are thoughtful, compassionate people who entered this field with a true desire to alleviate suffering.  Unfortunately, by virtue of their training, many have become uncritical supporters of the scientific model, making them easy targets for those who have been hurt by that very same model.)

My colleague Daniel Carlat, in his book Unhinged, asks the question: “Why do [psychiatrists] go to medical school? How do months of intensive training in surgery, internal medicine, radiology, etc., help psychiatrists treat mental illness?”  He lays out several alternatives for the future of psychiatric training.  One option is a hybrid approach that combines a few years of biomedical training with a few years of rigorous exposure to psychological techniques and theories.  Whether this would be acceptable to psychiatrists—many of whom wear their MD degrees as scientific badges of honor—or to psychologists—who might feel that their turf is being threatened—is anyone’s guess.

I see yet another alternative.  Rather than taking future psychiatrists out of medical school and teaching them an abbreviated version of medicine, let’s change medical school itself.  Let’s take some of the science out of medicine and replace it with what really matters: learning how to think critically and communicate with patients (and each other), and to think about our patients in a greater societal context.  Soon the Medical College Admissions Test (MCAT) will include more questions about cultural studies and ethics.  Medical education should go one step further and offer more exposure to economics, politics, management, health-care policy, decision-making skills, communication techniques, multicultural issues, patient advocacy, and, of course, how to interpret and critique the science that does exist.

We doctors will need a scientific background to interpret the data we see on a regular basis, but we must also acknowledge that our day-to-day clinical work requires very little science at all.  (In fact, all the biochemistry, physiology, pharmacology, and anatomy we learned in medical school is either (a) irrelevant, or (b) readily available on our iPhones or by a quick search of Wikipedia.)  We need to be cautious not to bring science into a clinical scenario simply because it’s easy or “it’s what we know,” especially when it provides no benefit to the patient.

So we don’t need to take psychiatry out of medicine.  Instead, we should bring a more enlightened, patient-centered approach to all of medicine, starting with formal medical training itself.  This would help all medical professionals to offer care that focuses on the person, rather than an MRI or CT scan, receptor profile or genetic polymorphism, or lab value or score on a checklist.  It would help us to be more accepting of our patients’ diversity and less likely to rush to a diagnosis.  It might even restore some respect for the psychiatric profession, both within and outside of medicine.  Sure, it might mean that fewer patients are labeled with “mental illnesses” (translating into less of a need for psychiatrists), but for the good of our patients—and for the future of our profession—it’s a sacrifice that we ought to be willing to make.


Depression Tests: When “Basic” Research Becomes “Applied”

April 22, 2012

Anyone with an understanding of the scientific process can appreciate the difference between “basic” and “applied” research.  Basic research, often considered “pure” science, is the study of science for its own sake, motivated by curiosity and a desire to understand.  General questions and theories are tested, often without any obvious practical application.  On the other hand, “applied” research is usually done for a specific reason: to solve a real-world problem or to develop a new product, such as a better mousetrap, a faster computer, or a more effective way to diagnose illness.

In psychiatric research, the distinction between “basic” and “applied” research is often blurred.  Two recent articles (and the accompanying media attention they’ve received) provide very good examples of this phenomenon.  Both stories involve blood tests to diagnose depression.  Both are intriguing, novel studies.  Both may revolutionize our understanding of mental illness.  But responses to both have also been blown way out of proportion, seeking to “apply” what is clearly only at the “basic” stage.

The first study, by George Papakostas and his colleagues at Massachusetts General Hospital and Ridge Diagnostics, was published last December in the journal Molecular Psychiatry.  They developed a technique to measure nine proteins in the blood, plug those values into a fancy (although proprietary—i.e., unknown) algorithm, and calculate an “MDDScore” which, supposedly, diagnoses depression.  In their paper, they compared 70 depressed patients with 43 non-depressed people and showed that their assay identifies depression with a specificity of 81% and a sensitivity of 91%.
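The scoring algorithm itself is proprietary, so the only numbers available for outside scrutiny are that reported sensitivity and specificity.  As a rough illustration of what those two figures actually mean, here is a minimal sketch in Python; the confusion-matrix counts are approximations I back-calculated from the reported sample sizes, not numbers taken from the paper, and the nine-protein algorithm itself is not represented at all:

```python
# Rough illustration of sensitivity and specificity. The counts below are
# back-calculated approximations from the reported sample sizes (70 depressed,
# 43 non-depressed) and the published 91% / 81% figures; they are NOT data
# from the paper, and the proprietary nine-protein algorithm is not modeled.

true_positives  = 64   # depressed patients the MDDScore flagged as depressed
false_negatives = 6    # depressed patients the test missed (64 + 6 = 70)
true_negatives  = 35   # non-depressed controls correctly ruled out
false_positives = 8    # controls incorrectly flagged (35 + 8 = 43)

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)

print(f"sensitivity = {sensitivity:.0%}")   # ~91%: share of true cases detected
print(f"specificity = {specificity:.0%}")   # ~81%: share of non-cases correctly cleared
```

It’s also worth remembering that these figures come from a sample in which well over half the subjects were depressed.  In a screening population where depression is far less common, an 81% specificity would generate many false positives for every true case detected.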

The other study, published two weeks ago in Translational Psychiatry by Eve Redei and her colleagues at Northwestern University, purports to diagnose depression in adolescents.  They didn’t measure proteins in patients’ blood, but rather levels of RNA.  (As a quick aside, RNA is the “messenger” molecule inside each cell that tells the cell which proteins to make.)  They studied a smaller number of patients—only 14 depressed teenagers, compared with 14 non-depressed controls—and identified 11 RNA molecules which were expressed differently between the two groups.  These were selected from a much larger number of RNA transcripts on the basis of an animal model of depression: specifically, a rat strain that was bred to show “depressive-like” behavior.
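For readers unfamiliar with how a short list of transcripts gets picked out of a much larger pool, the generic logic is a transcript-by-transcript comparison between the two groups.  The sketch below is purely illustrative, with simulated data and an ordinary t-test; whether it resembles the authors’ actual pipeline, which started from the rat-model candidate panel, is an assumption on my part:

```python
# Purely illustrative sketch of "differential expression": compare each
# transcript's levels between a depressed group and a control group, and
# keep those that differ. All data here are simulated; this is NOT the
# authors' pipeline.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
n_transcripts, n_per_group = 26, 14        # e.g., 14 depressed vs. 14 controls

depressed = rng.normal(size=(n_transcripts, n_per_group))
controls  = rng.normal(size=(n_transcripts, n_per_group))
depressed[:5] += 1.5                       # pretend five transcripts truly differ

for i in range(n_transcripts):
    t_stat, p_value = stats.ttest_ind(depressed[i], controls[i])
    if p_value < 0.05:                     # note: no multiple-testing correction
        print(f"transcript {i}: p = {p_value:.3f}")
```

With only 14 subjects per group, and with no correction for multiple comparisons in this toy version, some “hits” will appear by chance alone, which is one reason findings like these demand replication before anyone calls them a diagnostic test.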

If we look at each of these studies as “basic” science, they offer some potentially tantalizing insights into what might be happening in the bodies of depressed people (or rats).  Even though some of us argue that no two “depressed” people are alike—and we should look instead at person-centered factors that might explain how they are unique—these studies nevertheless might have something to say about the common underlying biology of depression—if such a thing exists.  At the very least, further investigation might explain why proteins that have no logical connection with depression (such as apolipoprotein CIII or myeloperoxidase) or RNA transcripts (for genes like toll-like-receptor-1 or S-phase-cyclin-A-associated protein) might help us, someday, to develop more effective treatments than the often ineffective SSRIs that are the current standard of care.

Surprisingly, though, this is not how these articles have been greeted.  Take the Redei article, for instance.  Since its publication, there have been dozens of media mentions, with such headlines as “Depression Blood Test for Teens May Lead To Less Stigma” and “Depression Researchers May Have Developed First Blood Test For Teens.”  To the everyday reader, it seems as if we’ve gone straight from the bench to the bedside.  Granted, each story mentions that the test is not quite “ready for prime time,” but headlines draw readers’ attention.  Even the APA’s official Twitter feed mentioned it (“Blood test for early-onset #depression promising,” along with the tags “#childrenshealth” and “#fightstigma”), giving it a certain degree of legitimacy among doctors and patients alike.

(I should point out that one of Redei’s co-authors, Bill Gardner, emphasized—correctly—on his own blog that their study was NOT to be seen as a test for depression, and that it required refinement and replication before it could be used clinically.  He also acknowledged that their study population—adolescents—is one that is often targeted for unnecessary pharmacological intervention, demanding even further caution in interpreting their results.)

As for the Papakostas article, there was a similar flurry of articles about it when preliminary results were presented last year.  Like Redei’s research, it’s an interesting study that could change the way we diagnose depression.  However, unlike Redei’s study, it was funded by a private, self-proclaimed “neurodiagnostics” company.  (That company, Ridge Diagnostics, has not revealed the algorithm by which they calculate their “MDDScore,” essentially preventing any independent group from trying to replicate their findings.)

Incidentally, the Chairman of the Board of Ridge Diagnostics is David Hale, who also founded—and is Chairman of—Somaxon Pharmaceuticals, a company I wrote about last year when it tried to bring low-dose doxepin to the market as a sleep aid, and then used its patent muscle to issue cease-and-desist letters to people who suggested using the ultra-cheap generic version instead of Somaxon’s name-brand drug.

Ridge Diagnostics has apparently decided not to wait for replication of its findings, and instead is taking its MDDScore to the masses, complete with a Twitter feed, a Facebook Page, and a series of videos selling the MDDScore (priced at a low, low $745!), aimed directly at patients.  At this rate, it’s only a matter of time before the MDDScore is featured on the “Dr Oz Show” or “The Doctors.”  Take a look at this professionally produced video, for instance, posted last month on YouTube:


(Interesting—the host hardly even mentions the word “depression.”  A focus group must have told them that it detracted from his sales pitch.)

I think it’s great that scientists are investigating the basic biology of depression.  I also have no problem when private companies try to get in on the act.  However, when research that is obviously at the “basic” stage (and, yes, not ready for prime time) becomes the focus of a viral video marketing campaign or a major story on the Huffington Post, one must wonder why we’ve been so quick to cross the line from “basic” research into the “applied” uses of those preliminary findings.  Okay, okay, I know the answer is money.  But who has the authority—and the voice—to say, “not so fast” and preserve some integrity in the field of psychiatric research?  Where’s the money in that?


Yes, We Still Need Psychiatrists, But For What?

April 15, 2012

If anyone’s looking for a brief primer on the popular perception of psychiatry and the animosity felt by those who feel hurt or scarred by this (my) profession, a good place to start would be a recent post by Steven Moffic entitled “Why We Still Need Psychiatrists!” on Robert Whitaker’s site, Mad In America.

Moffic, a psychiatrist at the Medical College of Wisconsin, is a published author, a regular contributor to Psychiatric Times, and a member of the Group for the Advancement of Psychiatry.  Whitaker is a journalist best known for his books Mad in America and Anatomy of an Epidemic, both of which have challenged modern psychiatric practice.

Moffic’s thesis is that we still “need” psychiatrists, particularly to help engineer necessary changes in the delivery of psychiatric care (for example, integration of psychiatry into primary care, incorporating therapeutic communities and other psychosocial treatments into the psychiatric mainstream, etc).  He argues that we are the best to do so by virtue of our extensive training, our knowledge of the brain, and our “dedication to the patient.”

The reaction by readers was, predictably, swift and furious.  While Whitaker’s readers are not exactly a representative sample (one reader, for example, commented that “the search for a good psychiatrist can begin in the obituary column” – a comment which was later deleted by Mr Whitaker), their comments—and Moffic’s responses—reinforce the idea that, despite our best intentions, psychiatrists are still not on the same page as many of the people we intend to serve.

As I read the comments, I find myself sympathetic to many of Moffic’s critics.  There’s still a lot we don’t know about mental illness, and much of what we do might legitimately be called “pseudoscience.”  However, I am also keenly aware of one uncomfortable fact:  For every patient who argues that psychiatric diagnoses are fallacies and that medications “harm” or “kill” people, there are dozens—if not hundreds—of others who not only disagree, but who INSIST that they DO have these disorders and who don’t just accept but REQUEST drug treatment.

For instance, consider this response to Moffic’s post:

Stop chemically lobotomizing adults, teens, children, and infants for your imaginary psychiatric ‘brain diseases.’  Stop spreading lies to the world about these ‘chronic’ (fake) brain illnesses, telling people they can only hope to manage them with ‘appropriate’ (as defined by you and yours) ‘treatments,’ so that they are made to falsely believe in non-existent illnesses and deficiencies that would have them ‘disabled’ for a lifetime and too demoralized about it to give a damn.

I don’t know how Moffic would respond to such criticism.  If he’s like most psychiatrists I know, he may just shrug it off as a “fringe” argument.  But that’s a dangerous move, because despite the commenter’s tone, his/her arguments are worthy of scientific investigation.

Let’s assume this commenter’s points are entirely correct.  That still doesn’t change the fact that lots of people have already “bought in” to the psychiatric model.  In my practice, I routinely see patients who want to believe that they have a “brain disease.”  They ask me for the “appropriate treatment”—often a specific medication they’ve seen on TV, or have taken from a friend, and don’t want to hear about the side effects or how it’s not indicated for their condition.  (It takes more energy to say “no” than to say “yes.”)  They often appreciate the fact that there’s a “chemical deficiency” or “imbalance” to explain their behavior or their moods.  (Incidentally, family members, the criminal justice system, and countless social service agencies also appreciate this “explanation.”)  Finally, as I’ve written about before, many patients don’t see “disability” as such a bad thing; in fact, they actively pursue it—sometimes even demanding this label—despite my attempts to convince them otherwise.

In short, I agree with many of the critics on Whitaker’s site—and Whitaker himself—that psychiatry has far overstepped its bounds and has mislabeled and mistreated countless people.  (I can’t tell you how many times I’ve been asked to prescribe a drug for which I think to myself “what in the world is this going to do????”)  But what the critics fail to realize is that this “delusion” of psychiatry is not just in psychiatrists’ minds.  It’s part of society.  Families, the legal system, Social Security, Medicaid/Medicare, Big Pharma, Madison Avenue, insurance companies, and employers of psychiatrists (and, increasingly, non-psychiatrists) like me—all of them see psychiatry the same way:  as a way to label and “pathologize” behaviors that are, oftentimes, only slight variants of “normal” (whatever that is) and seek to “treat” them, usually with chemicals.

Any attempt to challenge this status quo (this “shared delusion,” as I wrote in my response to Moffic’s post) is met with resistance, as illustrated by the case of Loren Mosher, whom Moffic discusses briefly.  The influence of the APA and drug companies on popular thought—not to mention legislation and allocation of health-care resources—is far more deeply entrenched than most people realize.

But the good thing is that Moffic’s arguments for why we need psychiatrists can just as easily be used as arguments for why psychiatrists are uniquely positioned to change this state of affairs.  Only psychiatrists—with their years of scientific education—can dig through the muck (as one commenter wrote, “to find nuggets in the sewage”) and appropriately evaluate the medical literature.  Psychiatrists should have a commanding knowledge of the evidence for all sorts of treatments (not just “biological” ones, even though one commenter lamented that she knew more about meds than her psychiatrist!) and argue for their inclusion and reimbursement in the services we provide.

Psychiatrists can (or should) also have the communication skills to explain to patients how they can overcome “illnesses” or, indeed, to educate them that their complaints are not even “illnesses” in the first place.  Finally, psychiatrists should command the requisite authority and respect amongst policymakers to challenge the broken “disability” system, a system which, I agree, does make people “too demoralized to give a damn.”

This is an uphill battle.  It’s particularly difficult when psychiatrists tenaciously hold on to a status quo which, unfortunately, is also foisted upon them by their employers.  (And I fear that Obamacare, should it come to pass, is only going to intensify the overdiagnosis and ultrarapid biological management of patients—more likely by providers with even less education than the psychiatrist).  But it’s a battle we must fight, not just for the sake of our jobs, but—as Whitaker’s readers emphasize—for the long-term well-being of millions of patients, and, quite possibly, for the well-being of our society as a whole.


Skin In The Game

April 8, 2012

We’ve all heard the saying “you get what you pay for.”  But in medicine, where the laws of economics don’t work like they do everywhere else, this maxim is essentially meaningless.  Thanks to our national health-insurance mess, some people pay very little (or nothing) out-of-pocket for a wide range of services, while others have to fork over huge sums of money for even the most basic of care.

Good arguments have been made for health insurance to become more like automobile or homeowners insurance.  Car insurance doesn’t cover oil changes and replacement tires, but it does pay for collisions and mishaps that may result if you don’t perform routine maintenance.  Homeowners insurance doesn’t pay the plumber, but might reimburse you for a flood that results from a blown valve on your water heater.

In medicine, we’ve never really seen this type of arrangement, apart from occasional high-deductible plans and health savings accounts.  If you have a typical employer-sponsored health plan, not only do you pay little or nothing for your basic, routine care, but your insurance company has probably added even more services (massage, discounted gym memberships, “healthy eating” classes) in the name of preventive medicine and wellness.  (It’s almost as if your auto insurance paid for exactly what you’d do if you wanted to hang on to your car for 500,000 miles.)  When faced with this smorgasbord of free options, it’s easy to ignore the true underlying cost.  One way to reverse this trend is to ask patients to put some “skin in the game.”

This might happen in Medicaid, the insurance plan for low-income persons.  California Governor Jerry Brown, for instance, proposed that patients receiving Medi-Cal (the California version of Medicaid) should pay higher co-pay amounts for care which is currently free (or nearly so).  A $5 co-payment for an office visit, or a $50 co-pay for an emergency room visit might sound hefty, but it’s a bargain—even for a poor family—if it means the difference between life and death… or even just sickness and health.

Unfortunately, California’s proposal was shot down in February by the Obama administration on legal grounds: the co-pays “are neither temporary nor targeted at a specific population.”  There are other legitimate questions, too, about its feasibility.  Would people forgo routine checkups or neglect to fill prescriptions to save a few dollars, only to cost the system more money down the road?  Would doctors and hospitals even bother to bill people (or send accounts to collections) for such low sums?  Is it fair to charge people money for what some people think is a right and should be free to all?

Without commenting on the moral and political arguments for or against this plan, I believe that this is a proposal worth testing—and psychiatry may be precisely the specialty in which it holds the greatest promise.

Psychiatric illnesses are unique among medical conditions.  Effective treatment involves more than just taking a pill or subjecting oneself to a biological intervention.  It involves the patient wanting to get better and believing in the path he or she is taking to achieve that outcome (even if it violates what the provider thinks is best).  Call it placebo effect, call it “transference,” call it insight, call it what you will—the psychological aspect of the patient’s “buying in” (pardon the pun) to treatment is an important part of successful psychiatric care, just as important—perhaps more so—as the biological effect of the drugs we prescribe.

Like it or not, part of that “wanting” and “believing” also involves “paying.”  Payment needn’t be extreme, but it should be enough to be noticeable.  Because only when someone has “skin in the game” does he or she feel motivated to change.  (Incidentally, this doesn’t have to be money; it could be one’s time as well:  agreeing to attend an hour of weekly psychotherapy, going to self-help groups 2 or 3 times a week, or simply driving or taking the bus to the doctor’s office can mean a great deal for one’s recovery.)  It’s more than symbolic; it can mean a lot.

In my own life, I’ll admit, I took medical care for granted.  I was fortunate enough to be a healthy child, and had parents with good jobs that provided excellent health insurance.  It wasn’t until my mid-20s that I actually had to pay for medical care—even my co-payments seemed shocking, since I had never really had to pay anything before then.  Over the years, as I struggled with my own mental health needs (which were, unfortunately, not covered by my insurance), I had to pay ever-larger amounts out of my own pocket.  I honestly believe that this was a major contributor to my successful recovery—for starters, I wanted to get to a point where it didn’t take such a huge bite out of my bank account!

The absence of a “buy-in” is most stark precisely where Governor Brown wants to change it:  in Medicaid patients.  In the community clinics where I have worked, patients can visit the office with zero co-payment (and no penalties for no-shows).  This includes medication and therapy visits.  Prescriptions are often free as well; some patients take 4 or 5 (or more) medications—at zero out-of-pocket cost—which can set the government back hundreds of dollars a month.  At the same time, patients with no health insurance (or even with insurance, like me) can’t access the same drugs because of their prohibitive price tag or byzantine insurance restrictions.  It’s nowhere near a level playing field.

To make matters worse, patients on Medicaid tend to be more medically ill and, almost by definition, face significant environmental stressors that detrimentally affect their physical and mental well-being.  In these patients, we give psychiatric diagnoses far too liberally (often simply to give patients the opportunity to keep coming to see us, not because we truly believe there’s a diagnosable “mental illness”), and allow them to keep coming in—for free—to get various forms filled out and to refill medications that cost a fortune and don’t treat anything, perpetuating their dependence on an already overburdened health care system.  In fact, these patients would be much better served if we expected (and helped) them to obtain—and yes, even pay for—counseling or social-work assistance to overcome their environmental stressors, or measures to promote physical and mental wellness.

In the end, the solution seems like common sense.  When you own something—whether a home, an automobile, a major appliance, whatever—you tend to invest much more time and money in it than if you were just renting or borrowing.  The same could be said for your own health.  I don’t think it’s unreasonable to ask people to pony up an investment—even a small one—in their psychological and physical well-being.  Not only does it make good fiscal sense, but the psychological effect of taking responsibility for one’s own health may result in even greater future returns on that investment.  For everyone.


Did The APA Miss A Defining Moment?

April 1, 2012

Sometimes an organization or individual facing a potential public-relations disaster can use the incident as a way to send a powerful message, as well as change the way that organization or individual is perceived.   I wonder whether the American Psychiatric Association (APA) may have missed its opportunity to do exactly that.

Several weeks ago, the CBS news program 60 Minutes ran a story with the provocative argument that antidepressants are no better than placebo.  Reporter Lesley Stahl highlighted the work of Irving Kirsch, a psychologist who has studied the placebo effect for decades.  He has concluded that most, and maybe all, of the benefit of antidepressants can be attributed to placebo.  Simply put, they work because patients (and their doctors) expect them to work.

Since then, the psychiatric establishment has offered several counterarguments.  All have placed psychiatry squarely on the defensive.  One psychiatrist (Michael Thase), interviewed on the CBS program, defended antidepressants, arguing that Kirsch “is confusing the results of studies with what goes on in practice.”  Alan Schatzberg, past APA president and former Stanford chairman, said at a conference last weekend (where he spoke about “new antidepressants”) that the APA executive committee was “outraged” at the story, glibly remarking, “In this nation, if you can attack a psychiatrist, you win a medal.”  The leadership of the APA has mounted an aggressive defense, too.  Incoming APA president and Columbia chairman Jeffrey Lieberman called Kirsch “mistaken and confused, … ideologically based, [and] … just plain wrong.”  Similarly, current APA president John Oldham called the story “irresponsible and dangerous [and] … at odds with common clinical experience.”

These are indeed strong words.  But they raise one very important question:  who or what exactly are these spokesmen defending?  Patients?  Psychiatrists?  Drugs?  It would seem to me that the leadership of a professional medical organization should be defending good patient care, or at the very least, greater opportunities for its members to provide good patient care.  The arguments put forth by APA leadership, however, seem to be defending none of the above.  Instead, they seem to be defending antidepressants.

For the purposes of this post, I won’t weigh in on the question of whether antidepressants work or not.  It’s a complicated issue with no easy answer (we’ll offer some insight in the May issue of the Carlat Psychiatry Report).  However, let’s just assume that the general public now has good reason to believe that current antidepressants are essentially worthless, thanks to the 60 Minutes story (not to mention—just a few weeks earlier—a report on NPR’s “Morning Edition,” as well as a two-part series by Marcia Angell in the New York Review of Books last summer).  Justifiably or not, our patients will be skeptical of psychopharmacology going forward.  If we psychiatrists are hell-bent on defending antidepressants, we’d better have even stronger reasons for doing so than simply “we know they work.”

But why are psychiatrists defending antidepressants in the first place?  If anyone should be defending antidepressants, it should be the drug companies, not psychiatrists.  Why didn’t 60 Minutes interview a Lilly medical expert to explain how they did the initial studies of Prozac, or a Pfizer scientist to explain why patients should be put on Pristiq?  (Now that would have been fun!!)  I would have loved to hear Michael Thase—or anyone from the psychiatric establishment—say to Lesley Stahl:

“You know, Dr. Kirsch might just be onto something.  His research is telling us that maybe antidepressants really don’t work as well as we once thought.  As a result, we psychiatrists want drug companies to do better studies on their drugs before approval, and stop marketing their drugs so aggressively to us—and to our patients—until they can show us better data.  In the meantime we want to get paid to provide therapy along with—or instead of—medications, and we hope that the APA puts more of an emphasis on non-biological treatments for depression in the future.”

Wouldn’t that have been great?  For those of us (like me) who think the essence of depression is far more than faulty biology to be corrected with a pill, it would have been very refreshing to hear.  Moreover, it would help our field to reclaim some of the “territory” we’ve been abdicating to others (therapists, psychologists, social workers)—territory that may ultimately be shown to be more relevant for most patients than drugs.  (By the way, I don’t mean to drive a wedge between psychiatry and these other specialties, as I truly believe we can coexist and complement each other.  But as I wrote in my last post, psychiatry really needs to stand up for something, and this would have been a perfect opportunity to do exactly that.)

To his credit, Dr. Oldham wrote an editorial two weeks ago in Psychiatric News (the APA’s weekly newsletter) explaining that he was asked to contribute to the 60 Minutes piece, but CBS canceled his interview at the last minute.  He wrote a response but CBS refused to post it on its website (the official APA response can be found here).  Interestingly, he went on to acknowledge that “good care” (i.e., whatever works) is what our patients need, and also conceded that, at least for “milder forms of depression,” the “nonspecific [placebo] effect dwarfs the specific [drug] effect.”

I think the APA would have a pretty powerful argument if it emphasized this message (i.e., that the placebo effect might be much greater than we believe, and that we should study this more closely—maybe even harness it for the sake of our patients) over what sounds like a knee-jerk defense of drugs.  It’s a message that would demand better science, prioritize our patients’ well-being, and, perhaps even reduce treatment costs in the long run.  If, instead, we call “foul” on anyone who criticizes medications, not only do we send the message that we put our faith in only one form of therapy (out of many), but we also become de facto spokespersons for the pharmaceutical industry.  If the APA wants to change that perception among the general public, this would be a great place to start.

