Is Clinical Psychopharmacology a Pseudoscience?

October 24, 2011

I know I write a lot about my disillusionment with modern psychiatry.  I have lamented the overriding psychopharmacological imperative, the emphasis on rapid diagnosis and medication management, at the expense of understanding the whole patient and developing truly “personalized” treatments.  But at the risk of sounding like even more of a heretic, I’ve noticed that not only do psychopharmacologists really believe in what they’re doing, but they often believe it even in the face of evidence to the contrary.

It all makes me wonder whether we’re practicing a sort of pseudoscience.

For those of you unfamiliar with the term, check out Wikipedia, which defines “pseudoscience” as:  “a claim, belief, or practice which is presented as scientific, but which does not adhere to a valid scientific method, lacks supporting evidence or plausibility, cannot be reliably tested…. [is] often characterized by the use of vague, exaggerated or unprovable claims [and] an over-reliance on confirmation rather than rigorous attempts at refutation…”

Among the medical-scientific community (of which I am a part, by virtue of my training), the label of “pseudoscience” is often reserved for practices like acupuncture, naturopathy, and chiropractic.  Each may have its own adherents, its own scientific language or approach, and even its own curative power, but taken as a whole, their claims are frequently “vague or exaggerated,” and they fail to generate hypotheses which can then be proven or (even better) refuted in an attempt to refine disease models.

Does clinical psychopharmacology fit in the same category?

Before going further, I should emphasize I’m referring to clinical psychopharmacology: namely, the practice of prescribing medications (or combinations thereof) to actual patients, in an attempt to treat illness.  I’m not referring to the type of psychopharmacology practiced in research laboratories or even in clinical research settings, where there is an accepted scientific method, and an attempt to test hypotheses (even though some premises, like DSM diagnoses or biological mechanisms, may be erroneous) according to established scientific principles.

The scientific method consists of: (1) observing a phenomenon; (2) developing a hypothesis; (3) making a prediction based on that hypothesis; (4) collecting data to attempt to refute that hypothesis; and (5) determining whether the hypothesis is supported or not, based on the data collected.

In psychiatry, we are not very good at this.  Sure, we may ask questions and listen to our patients’ answers (“observation”), come up with a diagnosis (a “hypothesis”) and a treatment plan (a “prediction”), and evaluate our patients’ response to medications (“data collection”).  But is this only a charade?

First of all, the diagnoses we give are not based on a valid understanding of disease.  As the current controversy over DSM-5 demonstrates, even experts find it hard to agree on what they’re describing.  Maybe if we viewed DSM diagnoses as “suggestions” or “prototypes” rather than concrete diagnoses, we’d be better off.  But clinical psychopharmacology does the exact opposite: it puts far too much emphasis on the diagnosis, which predicts the treatment, when in fact a diagnosis does not necessarily reflect biological reality but rather a “best guess.”  It’s subject to change at any time, as are the fluctuating symptoms that real patients present with.  (Will biomarkers help?  I’m not holding my breath.)

Second, our predictions (i.e., the medications we choose for our patients) are always based on assumptions that have never been proven.  What do I mean by this?  Well, we have “animal models” of depression and theories of errant dopamine pathways in schizophrenia, but for “real world” patients—the patients in our offices—if you truly listen to what they say, the diagnosis is rarely clear.  Instead, we try to “make the patients fit the diagnosis” (which becomes easier to do as appointment lengths shorten), and then concoct treatment plans which perfectly fit the biochemical pathways that our textbooks, drug reps, and anointed leaders lay out for us, but which may have absolutely nothing to do with what’s really happening in the bodies and minds of our patients.

Finally, the whole idea of falsifiability is absent in clinical psychopharmacology.  If I prescribe an antidepressant or even an anxiolytic or sedative drug to my patient, and he returns two weeks later saying that he “feels much better” (or is “less anxious” or is “sleeping better”), how do I know it was the medication?  Unless all other variables are held strictly constant—which is impossible to do even in a well-designed placebo-controlled trial, much less the real world—I can make no assumption about the effect of the drug in my patient’s body.

It gets even more absurd when one listens to a so-called “expert psychopharmacologist,” who uses complicated combinations of 4, 5, or 6 medications at a time to achieve “just the right response,” or who constantly tweaks medication doses to address a specific issue or complaint (e.g., acne, thinning hair, frequent cough, yawning, and so on), using sophisticated-sounding pathways or models that have not been proven to play a role in the symptom under consideration.  Even if it’s complete guesswork (which it often is), the patient may improve 33% of the time (“Success! My explanation was right!”), get worse 33% of the time (“I didn’t increase the dose quite enough!”), and stay the same 33% of the time (“Are any other symptoms bothering you?”).
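To see how easily guesswork can masquerade as expertise, here is a toy simulation (purely illustrative; the outcome categories and numbers are invented, not data): if each patient’s outcome is determined by chance alone, the three outcomes land near one-third each, and a clinician with a ready explanation for every result will appear vindicated every time.

```python
import random
from collections import Counter

def simulate_outcomes(n_patients: int, seed: int = 0) -> Counter:
    """Assign each patient one of three equally likely outcomes at random,
    standing in for 'the drug worked', 'it made things worse', 'no change'."""
    rng = random.Random(seed)
    outcomes = [rng.choice(["improved", "worse", "unchanged"])
                for _ in range(n_patients)]
    return Counter(outcomes)

counts = simulate_outcomes(10_000)
for outcome, count in sorted(counts.items()):
    # each fraction hovers near 33%, with no drug effect at all
    print(f"{outcome}: {count / 10_000:.1%}")
```

The point of the sketch is simply that a post-hoc story for every branch is unfalsifiable: no possible outcome could ever count against the explanation.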

Of course, if you’re paying good money to see an “expert psychopharmacologist,” who has diplomas on her wall and who explains complicated neurochemical pathways to you using big words and colorful pictures of the brain, you’ve already increased your odds of being in the first 33%.  And this is the main reason psychopharmacology is acceptable to most patients: not only does our society value the biological explanation, but psychopharmacology is practiced by people who sound so intelligent and … well, rational.  Even though the mind is still a relatively impenetrable black box and no two patients are alike in how they experience the world.  In other words, psychopharmacology has capitalized on the placebo response (and the ignorance & faith of patients) to its benefit.

Psychopharmacology is not always bad.  Sometimes psychotropic medication can work wonders, and often very simple interventions provide patients with the support they need to learn new skills (or, in rare cases, to stay alive).  In other words, it is still a worthwhile endeavor, but our expectations and our beliefs unfortunately grow faster than the evidence base to support them.

Similarly, “pseudoscience” can give results.  It can heal, too: some health-care plans willingly pay for acupuncture, and some patients swear by Ayurvedic medicine or Reiki.  And who knows, there might still be a valid scientific basis for the benefits professed by advocates of these practices.

In the end, though, we need to stand back and remind ourselves what we don’t know.  Particularly at a time when clinical psychopharmacology has come to dominate the national psyche—and command a significant portion of the nation’s enormous health care budget—we need to be extra critical and ask for more persuasive evidence of its successes.  And we should not bring to the mainstream something that might more legitimately belong in the fringe.


Biomarker Envy V: BDNF and Cocaine Relapse

October 18, 2011

The future of psychiatric diagnosis and treatment lies in the discovery and development of “biomarkers” of pathological processes.  A biomarker, as I’ve written before, is something that can be measured or quantified, usually from a biological specimen like a blood sample, which helps to diagnose a disease or predict response to a treatment.

Biomarkers are the embodiment of the new “personalized medicine”:  instead of wasting time talking to a patient, asking questions, and possibly drawing incorrect conclusions, the holy grail of a biomarker allows the clinician to order a simple blood test (or brain scan, or genotype) and make a decision about that specific patient’s case.  But “holy grail” status is elusive, and a recent study from the Yale University Department of Psychiatry, published this month in the journal Biological Psychiatry, provides yet another example of a biomarker which is not quite there—at least not yet.

The Yale group, led by Rajita Sinha, PhD, was interested in the question of what makes newly abstinent cocaine addicts relapse, and set out to identify a biological marker for relapse potential.  If such a biomarker exists, they argue, then it could not only tell us more about the biology of cocaine dependence, craving, and relapse, but it might also be used clinically, as a way to identify patients who might need more aggressive treatment or other measures to maintain their abstinence.

The researchers chose BDNF, or brain-derived neurotrophic factor, as their biomarker.  In studies of cocaine-dependent animals forced into prolonged abstinence, the animals show elevations in BDNF when exposed to a stressor; moreover, cocaine-seeking is associated with BDNF elevations, and BDNF injections can promote cocaine-seeking behavior in these same abstinent animals.  In their recent study, Sinha’s group took 35 cocaine-dependent (human) patients and admitted them to the hospital for 4 weeks.  After three weeks of NO cocaine, they measured blood levels of BDNF and compared these numbers to the levels measured in “healthy controls.”  Then they followed all 35 cocaine users for the next 90 days to determine which of them would relapse during this three-month period.

The results showed that the abstinent cocaine users generally had higher BDNF levels than the healthy controls (see figure below, A).  However, when the researchers looked at the patients who relapsed on cocaine during the 3-month follow-up (n = 23), and compared them to those who stayed clean (n = 12), they found that the relapsers, on average, had higher BDNF levels than the non-relapsers (see figure, B).  Their conclusion is that high levels of BDNF may predict relapse.
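For the statistically inclined, the kind of two-group comparison behind these findings can be sketched with a Welch’s t-statistic (which does not assume equal variances).  The BDNF values below are invented for illustration only; they are not the study’s data.

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t-statistic for two independent samples with unequal variances."""
    m1, m2 = mean(a), mean(b)
    v1, v2 = variance(a), variance(b)  # sample variances (n-1 denominator)
    return (m1 - m2) / math.sqrt(v1 / len(a) + v2 / len(b))

# Hypothetical serum BDNF levels (ng/mL) -- illustrative numbers, not the paper's
relapsers     = [28.1, 31.4, 26.9, 30.2, 29.5, 27.8, 32.0, 28.7]
non_relapsers = [22.3, 24.1, 21.8, 23.5, 22.9, 25.0]

t = welch_t(relapsers, non_relapsers)
print(f"Welch t = {t:.2f}")  # a large positive t suggests higher BDNF in relapsers
```

Of course, a group-level difference like this says nothing about how well the marker classifies any individual patient, which is exactly the clinical question.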

These results are intriguing, and Dr Sinha presented her findings at the California Society of Addiction Medicine (CSAM) annual conference last week.  Audience members—all of whom treat drug and alcohol addiction—asked about how they might measure BDNF levels in their patients, and whether the same BDNF elevations might be found in dependence on other drugs.

But one question really got to what I think is the heart of the matter.  Someone asked Dr Sinha: “Looking back at the 35 patients during their four weeks in the hospital, were there any characteristics that separated the high BDNF patients from those with low BDNF?”  In other words, were there any behavioral or psychological features that might, in retrospect, be correlated with elevated BDNF?  Dr Sinha responded, “The patients in the hospital who seemed to be experiencing the most stress or who seemed to be depressed had higher BDNF levels.”

Wait—you mean that the patients at high risk for relapse could be identified by talking to them?  Dr Sinha’s answer shows why biomarkers have little place in clinical medicine, at least at this point.  Sure, her group showed correlations of BDNF with relapse, but nowhere in their paper did they describe personal features of the patients (psychological test scores, psychiatric complaints, or even responses to a checklist of symptoms).  So those who seemed “stressed or depressed” had higher BDNF levels, and—as one might predict—relapsed.  Did this (clinical) observation really require a BDNF blood test?

Dr Sinha’s results (and the results of others who study BDNF and addiction) make a strong case for the role of BDNF in relapse or in recovery from addiction.  But as a clinical tool, not only is it not ready for prime time, but it distracts us from what really matters.  Had Dr Sinha’s group spent four weeks interviewing, analyzing, or just plain talking with their 35 patients instead of simply drawing blood on day 21, they might have come up with some psychological measures which would be just as predictive of relapse—and, more importantly, which might help us develop truly “personalized” treatments that have nothing to do with BDNF or any biochemical feature.

But I wouldn’t hold my breath.  As Dr Sinha’s disclosures indicate, she is on the Scientific Advisory Board of Embera NeuroTherapeutics, a small biotech company working to develop a compound called EMB-001.  EMB-001 is a combination of oxazepam (a benzodiazepine) and metyrapone.  Metyrapone inhibits the synthesis of cortisol, the primary stress hormone in humans.  Dr Sinha, therefore, is probably more interested in the stress responses of her patients (which would include BDNF and other stress-related proteins and hormones) than in whether they say they feel like using cocaine or not.

That’s not necessarily a bad thing.  Science must proceed this way.  If EMB-001 (or a treatment based on BDNF) turns out to be an effective therapy for addiction, it may save hundreds or thousands of lives.  But until science gets to that point, we clinicians must always remember that our patients are not just lab values, blood samples, or brain scans.  They are living, thinking, and speaking beings, and sometimes the best biomarker of all is our skilled assessment and deep understanding of the patient who comes to us for help.


Playing The Role

October 16, 2011

One of the most time-honored pedagogical tools in medicine is the “role play.”  The concept is simple:  one individual plays the part of another person (usually a patient) while a trainee examines or questions him or her, for the purposes of learning ways to diagnose, treat, and communicate more effectively.

Last week I had the privilege of attending a motivational interviewing training seminar.  Motivational interviewing (or MI) is a therapeutic technique in which the clinician helps “motivate” the patient into making healthy lifestyle choices through the use of open-ended questions, acknowledging and “rolling with” the patient’s resistance, and eliciting the patient’s own commitment to change.  The goal is to help the patient make a decision for himself, rather than requiring the clinician to provide a directive or an “order” to change a behavior.

MI is an effective and widely employed strategy, frequently used in the treatment of addictions.  Despite its apparent simplicity, however, it is important to practice one’s skills in order to develop proficiency.  Here, simulations like role-playing exercises can be valuable.  As part of my seminar, I engaged in such an exercise, in which our trainer played the part of a methamphetamine addict while a trainee served as the clinician.

The discussion went something like this:

Clinician:  “How would you like things to be different in your life?”
Patient:  “Well, I think I might be using too much meth.”
Clinician:  “So you think you’re using too much methamphetamine.”
Patient:  “Yeah, my friends are urging me to cut back.”
Clinician:  “How important is it for you to decrease your use?”
Patient:  “Oh, it would really make things easier for me.”
Clinician:  “How confident are you that you could cut back?”
Patient:  “Well, it would be tough.”
Clinician:  “What would make you even more confident?”
Patient:  “If I had some support from other people.”
Clinician:  “Who could provide you with that support?”
Patient:  “Hmm… I do have some friends who don’t use meth.”
Clinician:  “I see.  Can you think of some ways to spend more time with those friends?”
Patient:  “I do know that they go swimming on Thursday nights.  Maybe I can ask if I can join them.”
Clinician:  “I think this would be a good decision.  Can I help you to do this by giving you a telephone call on Wednesday?”
Patient:  “Yes, thank you.”

Of course, I’m paraphrasing somewhat.  But the bottom line is that the whole exercise lasted about ten minutes, and in that ten-minute span, the trainee had taken an ambivalent methamphetamine addict and convinced him to spend an evening with some non-meth-using friends, all through the magic of motivational interviewing.

In real clinical practice, nothing is quite so simple.  And none of us in the room (I hope) were so naïve as to think that this would happen in real life.  But the strategies we employed were so basic (right “out of the book,” so to speak) that we could have used this time—and the expertise of our trainer—to practice our skills in a more difficult (i.e., real-world) situation.

It reminded me of a similar exercise in a class during my psychiatry residency, in which our teacher, a psychiatrist in private practice in our community, asked me to role-play a difficult patient, while he would act as therapist and demonstrate his skills in front of our class.  The patient I chose was a particularly challenging one—especially to a novice therapist like myself—who had a habit of repeating back my questions word-for-word with a sarcastic smile on her face, and openly questioning my abilities as a therapist.

During the role-play, I played the part quite well (I thought), giving him the uncomfortable looks and critical comments that my patient routinely gave me.  But this didn’t sit well with him.  He got visibly angry, and after just a few minutes he abruptly stood up and told me to leave the class.  Later that day I received a very nasty email from him accusing me of “sabotaging” his class and “making [him] look like a fool.”  He called my actions “insubordination” and asked me not to return to the class, also suggesting that my actions were “grounds for dismissal from the residency.”

[He also went off on a tangent about some perfectly reasonable—even amiable—emails we had exchanged several weeks earlier, accusing me now of having used “too many quotation marks” which, he said, seemed “unprofessional” and “inappropriate” and demanded an apology!!  He also wrote that in the several weeks of class I had shown him a “tangible tone of disrespect,” even though he had never said anything to me before.  While I believe his paranoid stance may have betrayed some underlying mental instability, I must admit I have not spoken to him since, although he continues to teach and to supervise residents.]

Anyway, these experiences—and others over the years—have led me to question the true meaning of a role-playing exercise.  In its ideal form, a simulation provides the novice with an opportunity to observe a skilled clinician practicing his or her craft, even under challenging circumstances, or provides a safe environment for the novice to try new approaches—and to make mistakes doing so.  But more often than not, a role-playing exercise is a staged production, in which the role-player is trying to make a point.  In actual practice, no patient is a “staged” patient (and those who do give rehearsed answers often have some ulterior motive).  Real patients have a nearly infinite variety of histories, concerns, and personal idiosyncrasies for which no “role playing” exercise can ever prepare a therapist.

I’m probably being too harsh.  Role-plays and simulations will always be part of a clinician’s training, and I do recognize their value in learning the basic tools of therapy.  The take-home message, however, is that we should never expect real patients to act as if they’re reading off a script from our textbooks.  And as a corollary, we should use caution when taking our patients’ words and making them fit our own preconceived script.  By doing so, we may be fooling ourselves, and we might miss what the patient really wants us to hear.


Talk Is Cheap

October 9, 2011

I work part-time in a hospital psychiatry unit, overseeing residents and medical students on their inpatient psychiatry rotations.  They are responsible for three to six patients at any given time, directing and coordinating the patients’ care while they are admitted to our hospital.

To an outsider, this may seem like a generous ratio: one resident taking care of only 3-6 patients.  One would think that this should allow for over an hour of direct patient contact per day, resulting in truly “personalized” medicine.  But instead, the absolute opposite is true: sometimes doctors only see patients for minutes at a time, and develop only a limited understanding of patients for whom they are responsible.  I noticed this in my own residency training, when halfway through my first year I realized the unfortunate fact that even though I was “taking care” of patients and getting my work done satisfactorily, I couldn’t tell you whether my patients felt they were getting better, whether they appreciated my efforts, or whether they had entirely different needs that I had been ignoring.

In truth, much of the workload in a residency program (in any medical specialty) is related to non-patient-care concerns:  lectures, reading, research projects, faculty supervision, etc.  But even outside of the training environment, doctors spend less and less time with patients, creating a disturbing precedent for the future of medicine.  In psychiatry in particular, the shrinking “therapy hour” has received much attention, most recently in a New York Times front-page article (which I blogged about here and here).  The responses to the article echoed a common (and growing) lament among most psychiatrists:  therapy has been replaced with symptom checklists, rapid-fire questioning, and knee-jerk prescribing.

In my case, I don’t mean to be simply one more voice among the chorus of psychiatrists yearning for the “glory days” of psychiatry, when prolonged psychotherapy and hour-long visits were the norm.  I didn’t practice in those days, anyway.  Nevertheless, I do believe that we lose something important by distancing ourselves from our patients.

Consider the inpatient unit again.  My students and residents sometimes spend hours looking up background information, old charts, and lab results, calling family members and other providers, and discussing differential diagnosis and possible treatment plans, before ever seeing their patient.  While their efforts are laudable, the fact remains that a face-to-face interaction with a patient can be remarkably informative, sometimes even immediately diagnostic to the skilled eye.  In an era where we’re trying to reduce our reliance on expensive technology and wasteful tests, patient contact should be prioritized over the hours upon hours that trainees spend hunched over computer workstations.

In the outpatient setting, direct patient-care time has been largely replaced by “busy work” (writing notes; debugging EMRs; calling pharmacies to inquire about prescriptions; completing prior-authorization forms; and performing any number of “quality-control,” credentialing, or other mandatory “compliance” exercises required by our institutions).  Some of this is important, but at the same time, an extra ten or fifteen minutes with a patient may go a long way to determining that patient’s treatment goals (which may disagree with the doctor’s), improving their motivation for change, or addressing unresolved underlying issues—matters that may truly make a difference and cut long-term costs.

The future direction of psychiatry doesn’t look promising, as this vanishing emphasis on the patient’s words and deeds is likely to make treatment even less cost-effective.  For example, there is a growing effort to develop biomarkers for diagnosis of mental illness and to predict medication response.  In my opinion, the science is just not there yet (partly because the DSM is still a poor guide by which to make valid diagnoses… what are depression and schizophrenia anyway?).  And even if the biomarker strategy were a reliable one, there’s still nothing that could be learned in a $745+ blood test that couldn’t be uncovered in a good, thorough clinical examination by a talented diagnostician, not to mention the fact that the examination would also uncover a large amount of other information—and establish valuable rapport—which would likely improve the quality of care.

The blog “1boringoldman” recently featured a post called “Ask them about their lives…” in which a particularly illustrative case was discussed.  I’ll refer you there for the details, but I’ll repost the author’s summary comments here:

I fantasize an article in the American Journal of Psychiatry entitled “Ask them about their lives!” Psychiatrists give drugs. Therapists apply therapies. Who the hell interviews patients beyond logging in a symptom list? I’m being dead serious about that…

I share Mickey’s concern, as this is a vital question for the future of psychiatry.  Personally, I chose psychiatry over other branches of medicine because I enjoy talking to people, asking about their lives, and helping them develop goals and achieve their dreams.  I want to help them overcome the obstacles put in their way by catastrophic relationships, behavioral missteps, poor insight, harmful impulsivity, addiction, emotional dysregulation, and—yes—mental illness.

However, if I don’t have the opportunity to talk to my patients (still my most useful diagnostic and therapeutic tool), I must instead rely on other ways to explain their suffering:  a score on a symptom list, a lab value, or a diagnosis that’s been stuck on the patient’s chart over several years without anyone taking the time to ask whether it’s relevant.  Not only do our patients deserve more than that, they usually want more than that, too; the most common complaint I hear from a patient is that “Dr So-And-So didn’t listen to me, he just prescribed drugs.”

This is not the psychiatry of my forefathers.  This is not Philippe Pinel’s “moral treatment,” nor Emil Kraepelin’s meticulous attention to symptoms and patterns thereof, nor Aaron Beck’s cognitive re-strategizing.  No, it’s the psychiatry of HMOs, Wall Street, and an over-medicalized society, and in this brave new world, the patient is nowhere to be found.


Latuda-Palooza: Marketing or Education?

October 2, 2011

In my last blog post, I wrote about an invitation I received to a symposium on Sunovion Pharmaceuticals’ new antipsychotic Latuda.  I was concerned that my attendance might be reported as a “payment” from Sunovion under the requirements of the Physicians Payment Sunshine Act.  I found it a bit unfair that I might be seen as a recipient of “drug money” (and all the assumptions that go along with that) when, in fact, all I wanted to do was learn about a new pharmaceutical agent.

As it turns out, Sunovion confirmed that my participation would NOT be reported (they start reporting to the feds on 1/1/12), so I was free to experience a five-hour Latuda extravaganza yesterday in San Francisco.  I was prepared for a marketing bonanza of epic proportion—à la the Viagra launch scene in “Love And Other Drugs.”  And in some ways, I got what I expected:  two outstanding and engaging speakers (Dr Stephen Stahl of NEI and Dr Jonathan Meyer of UCSD); a charismatic “emcee” (Richard Davis of Arbor Scientia); an interactive “clicker” system which allowed participants to answer questions throughout the session and check our responses in real time; full lunch & breakfast, coffee and snacks; all in a posh downtown hotel.  (No pens or mugs, though.)

The educational program consisted of a plenary lecture by Dr Stahl, followed by workshops in which we broke up into “teams” and participated in three separate activities:  first, a set of computer games (modeled after “Pyramid” and “Wheel Of Fortune”) in which we competed to answer questions about Latuda and earn points for our team; second, a “scavenger hunt” in which we had 5 minutes to find answers from posters describing Latuda’s clinical trials (sample question: “In Study 4 (229), what proportion of subjects withdrew from the Latuda 40 mg/d treatment arm due to lack of efficacy?”); and finally, a series of case studies presented by Dr Meyer which used the interactive clicker system to assess our comfort level in prescribing Latuda for a series of sample patients.  My team came in second place.

I must admit, the format was an incredibly effective way for Sunovion to teach doctors about its newest drug.  It reinforced my existing knowledge—and introduced me to a few new facts—while it was also equally accessible to physicians who had never even heard about Latuda.

Moreover, the information was presented in an unbiased fashion.  Unbiased?, you may ask.  But wasn’t the entire presentation sponsored by Sunovion?  Yes, it was, but in my opinion the symposium achieved its stated goals:  it summarized the existing data on Latuda (although see here for some valid criticism of that data); presented it in a straightforward, effective (and, at times, fun) way; and allowed us doctors to make our own decisions.  (Stahl did hint that the 20-mg dose is being studied for bipolar depression, not an FDA-approved indication, but that’s also publicly available on the clinicaltrials.gov website.)  No one told us to prescribe Latuda; no one said it was better than any other existing antipsychotic; no one taught us how to get insurance companies to cover it; and—in case any “pharmascold” is still wondering—no one promised us any kickbacks for writing prescriptions.

(Note:  I did speak with Dr Stahl personally after his lecture.  I asked him about efforts to identify patient-specific factors that might predict a more favorable response to Latuda than to other antipsychotics.  He spoke about current research in genetic testing, biomarkers, and fMRI to identify responders, but he also admitted that it’s all guesswork at this point.  “I might be entirely wrong,” he admitted, about drug mechanisms and how they correlate to clinical response, and he even remarked “I don’t believe most of what’s in my book.”  A refreshing—and surprising—revelation.)

In all honesty, I’m no more likely to prescribe Latuda today than I was last week.  But I do feel more confident in my knowledge about it.  It is as if I had spent five hours yesterday studying the Latuda clinical trials and the published Prescribing Information, except that I did it in a far more engaging forum.  As I mentioned to a few people (including Mr Davis), if all drug companies were to hold events like this when they launch new agents, rather than letting doctors decipher glossy drug ads in journals or from their drug reps, doctors would be far better educated than they are now when new drugs hit the market.

But this is a very slippery slope.  In fact, I can’t help but wonder if we may be too far down that slope already.  For better or for worse, Steve Stahl’s books have become de facto “standard” psychiatry texts, replacing classics like Kaplan & Sadock, the Oxford Textbook, and the American Psychiatric Press books.  Stahl’s concepts are easy to grasp and provide the paradigm under which most psychiatry is practiced today (despite his own misgivings—see above).  However, his industry ties are vast, and his “education” company, Neuroscience Education Institute (NEI), has close connections with medical communications companies who are basically paid mouthpieces for the pharmaceutical industry.  Case in point: Arbor Scientia, which was hired by Sunovion to organize yesterday’s symposium—and similar ones in other cities—shares its headquarters with NEI in Carlsbad, CA, and Mr Davis sits on NEI’s Board.

We may have already reached a point in psychiatry where the majority of what we consider “education” might better be described as marketing.  But where do we draw the line between the two?  And even after we answer that question, we must ask, (when) is this a bad thing?  Yesterday’s Sunovion symposium may have been an infomercial, but I still felt there was a much greater emphasis on the “info-” part than the “-mercial.”  (And it’s unfortunate that I’d be reported as a recipient of pharmaceutical money if I had attended the conference after January 1, 2012, but that’s for another blog post.)  The question is, who’s out there to make sure it stays that way?

I’ve written before that I don’t know whom to trust anymore in this field.  Seemingly “objective” sources—like lectures from my teachers in med school and residency—can be heavily biased, while “advertising” (like yesterday’s symposium) can, at times, be fair and informative.  The end result is a very awkward situation in modern psychiatry that is easy to overlook, difficult to resolve, and, unfortunately, still ripe for abuse.


“Dollars For Docs” – What It Really Means

September 25, 2011

A few weeks ago I received an invitation to an October 1 symposium on Latuda, a new antipsychotic from Sunovion (formerly known as Sepracor).  Latuda (lurasidone) was released about six months ago amidst much fanfare—and very aggressive marketing—as a new atypical antipsychotic with, among other advantages, pro-cognitive properties.

I have prescribed Latuda to only three patients, so my experience with it is limited.  (In case you’re wondering:  one success, one failure, one equivocal.)  However, I have read several papers about Latuda, and I am interested in learning more about it.  The symposium’s plenary speaker is Stephen Stahl of the Neuroscience Education Institute.  Dr Stahl has received money from Sunovion (which is obvious from his publications and disclosures), but he is also a very knowledgeable neuroscientist.  I figured he would be able to describe the differences between Latuda and the other atypical antipsychotics currently on the market.  So I accepted the invitation.

However, upon further thought, I wondered whether my attendance might represent a “payment” from the Sunovion Corporation.  I was not offered any money from Sunovion to attend this event (in fact, you can see my invitation here: page1, page2).  Nevertheless, according to the Physician Payment Sunshine Act, which was passed as part of PPACA (i.e., “Obamacare”), all pharmaceutical companies and medical device manufacturers, as of 2013, are required to report payments to physicians, including direct compensation as well as “food, entertainment, research funding, education or conference funding,” and so forth.

Despite the mandatory 2013 reporting date, several companies have already started reporting.  Major drug firms to self-report thus far include AstraZeneca, Eli Lilly, Merck, and Pfizer.  Their reports have been widely publicized at sites such as “Dollars For Docs,” which “allows the public to search for individual physicians to see whether they’ve been on pharma’s payroll.”  Several other sites encourage patients to use this tool to ask, “Does your doc get money from drug companies?”

A quick search of my own name reveals that I received $306 from Pfizer in the year 2010.  Wow!  I had no idea!  What exactly does this mean?  Am I a Pfizer slave?  Did my Pfizer rep walk up to me on 12/31/10, hand me a personal check for $306 and say, “Thank you, Dr. Balt, for prescribing Geodon and Pristiq this year—here’s $306 for your work, and we look forward to more in 2011”?

The answer is no.  I received no money from Pfizer (and, to be frank, I didn’t prescribe any Pristiq last year, because it’s essentially Effexor).  As it happens, during 2010 I worked part-time at a community mental health clinic.  The clinic permitted drug reps to come to the office, bring lunch, and distribute information about their products.  We had lunches 1-2 days out of the week, consisting of modest fare:  Panera sandwiches, trays of Chinese food, or barbecued ribs.  Most of the doctors didn’t have time to eat—or if we did, we scarfed it down in between patients—but we would often talk to the reps, ask questions about their drugs, and accept product literature (which virtually always went straight into the trash), reprints, and educational materials from them.

We were visited by most of the major drug companies in 2010.  (BTW, this continued into 2011, but we are no longer allowed—under our contract with the County mental health department—to accept free samples, and we no longer accept lunches.  Interestingly, my Pfizer rep told us that payments would be reported only as of 1/1/11 and NOT earlier; obviously that was untrue.)  The lunches were all generally the same, consisting of inexpensive, modest food, mainly consumed by the clinic staff—secretaries, administrators, assistants—since the doctors were actually working through the lunch hour.  I have since learned that the formula for calculating doctors’ payouts was to take the full cost of the lunch (including all staff members, remember), divide it by the number of doctors in the office, and report that amount.  That’s where you get my $306.00.
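For the curious, that payout arithmetic is simple enough to sketch in a few lines of Python.  Every dollar figure and head count below is hypothetical, chosen only to show how a year of modest lunches can turn into a number like my $306:

```python
# How a clinic's free lunches become a per-doctor "payment."
# The full cost of each lunch (staff portions included) is divided
# evenly among the office's doctors, whether or not they ate.
# All figures below are hypothetical, for illustration only.

lunch_costs = [150.00] * 51    # roughly one modest catered lunch per week
n_doctors = 25                 # prescribers in the office

reported_per_doctor = sum(lunch_costs) / n_doctors
print(f"${reported_per_doctor:.2f}")   # prints "$306.00"
```

Note that the divisor is the number of doctors, not the number of people who actually ate—which is exactly why the reported figure overstates what any individual doctor consumed.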

[In the interest of full disclosure, in my four years of practice post-residency, I have only been offered one “material” non-food gift: about three years ago, Janssen gave me a $100 voucher for a textbook; I used it to purchase Glen Gabbard’s psychodynamic psychotherapy text.]

Anyway, back to the Latuda symposium.  Knowing what I now know about drug companies, I wouldn’t be surprised if Sunovion reports a $1000+ payout to me if I attend this half-day symposium.  (Facility rental + A/V costs + Xeroxing/handouts + coffee service + refreshments, all divided by the # of docs in attendance.)  I frankly don’t want my future patients searching my name on Dollars For Docs and finding I received a huge “payment” from Sunovion in Q3 2011.  On the other hand, I would like to learn more about Latuda and whether/how it differs from other antipsychotics on the market (including generic first-generation agents).  If possible, I would also like to question Steve Stahl directly about some of what he’s written about this drug (including his Sunovion-funded articles).  What better forum to do this than in a public symposium??

[Note: please see ADDENDUM below.]  I have contacted two different Sunovion sales reps to ask whether my attendance will be “reported” as a payment, and if so, how much.  I have not received a response.  I also called the RSVP number for the symposium.  The registration is being managed by Arbor Scientia, a medical communications company contracted by Sunovion to manage these events.  I was directed to Heather of Arbor Scientia; I left her a message but have not yet received a return call.

So at this point, I am looking forward to attending an event to learn more about a new drug—and the opportunity to challenge the experts on the advantages (if any) of this drug over others—but in doing so, I might also be reported as having “received” a large payment from Sunovion, perhaps even larger than what Pfizer reported they paid me in 2010.

Patients should recognize that sometimes the only way for their doctors to learn about new drugs is to attend such events (assuming they can remain objective, which can be hard when the wine is freely flowing!).  Admittedly, there are doctors who accept much larger sums as speakers or “key opinion leaders,” but organizations like ProPublica should differentiate those doctors (with whom I, personally, have an ethical gripe) from those who are simply workaday folks like me who want to get as much information as they can, provide effective and cost-efficient care—and maybe inhale a free sandwich every once in a while.

ADDENDUM Sept. 26:  Today I received a phone call from Arbor Scientia (from a number that is actually registered as NEI’s main number—as it turns out, they are located in the same building) to assure me that Sunovion adheres to the Physician Payment Sunshine Act provision: namely, that they’ll report “payments” to doctors only after January 1, 2012.  (See also here.)  Interestingly, my local Sunovion rep had told me 1/1/11.  (This is only somewhat reassuring: my Pfizer rep had told me they would start reporting as of 1/1/11, but clearly my “payments” from 2010 were reported.)


Rosenhan Redux

September 20, 2011

“If sanity and insanity exist, how shall we know them?”

Those are the opening words of a classic paper in the history of psychology, David Rosenhan’s famous “pseudopatient” study (pdf), published in the prestigious journal Science in 1973.  In his experiment, Rosenhan and seven other people—none of whom had a mental illness—went to 12 different hospitals and complained of “hearing voices.”  They explained to hospital staff that the voices said “empty,” “hollow,” and “thud.”  They reported no other symptoms.

Surprisingly, all of the pseudopatients were admitted.  And even though, upon admission, they denied hearing voices any longer, they all received antipsychotic medication (Rosenhan had instructed his pseudopatients to “cheek” their meds and spit them out later) and were hospitalized for anywhere from 7 to 52 days (average = 19 days).  They behaved normally, yet all of their behaviors—for example, writing notes in a notebook—were interpreted by staff as manifestations of their disease.  All were discharged with a diagnosis of “schizophrenia in remission.”

Rosenhan’s experiment was a landmark study not only for its elegance and simplicity, but for its remarkable conclusions.  Specifically, that psychiatric diagnosis often rests solely upon a patient’s words, and, conversely, that “the normal are not detectably sane.”

Would a similar experiment performed today yield different results?  Personally, I think not.  (Well, actually, admission to a psychiatric hospital these days is determined more by the availability of beds, a patient’s insurance status, and the patient’s imminent dangerousness to self or others, than by the severity or persistence of the symptoms a patient reports, so maybe we’d be a bit less likely to admit these folks.)  At any rate, I’m not so sure that our diagnostic tools are any better today, nearly 40 years later.

In a very controversial book, Opening Skinner’s Box, published in 2003, journalist Lauren Slater claimed to have replicated Rosenhan’s study by visiting nine psychiatric emergency rooms and reporting a single symptom: hearing the word “thud.”  She wrote that “almost every time” she was given a diagnosis of psychotic depression, and that she was prescribed a total of 60 antidepressants and 25 antipsychotics (an average of 9.4 medications per visit!).  But her report was widely criticized by the scientific community, and Slater eventually confessed, in the November 2005 Journal of Nervous and Mental Disease, that “I never did such a study: it simply does not exist.”

While I’m deeply disturbed by the dishonesty exhibited by Slater, whose words had great power to change the public perception of psychiatry (and I am offended, as a professional, by the attitude she demonstrated in her response to her critics… BTW, if you want a copy of her response—for entertainment purposes only, of course—email me), I think she may have been onto something.  In fact, I would invite Slater to repeat her study.  For real, this time.

Here’s what I would like Slater to do.  Instead of visiting psychiatric ERs, I invite her to schedule appointments with a number of outpatient psychiatrists.  I would encourage her to cast a wide net:  private, cash-only practices; clinics in academic medical centers; community mental health clinics; and, if accessible, VA and HMO psychiatrists.  Perhaps she can visit a few family practice docs or internists, for good measure.

When she arrives for her appointment, she should report one of the following chief complaints:  “I feel depressed.”  “I’m under too much stress.”  “I see shadows out of the corner of my eyes sometimes.”  “My mood is constantly going from one extreme to the other, like one minute I’m okay, the next minute I’m all hyper.”  “My nerves are shot.” “I feel like lashing out at people sometimes.”  “I can’t pay attention at work [or school].” “I sometimes drink [or use drugs] to feel better.”  Or anything similar.

She will most certainly be asked some follow-up questions.  Maybe some family history.  Maybe a mental status exam.  She will, most likely, be asked whether she’s suicidal or whether she hears voices.  I encourage her to respond honestly, sticking to her initial, vague symptom, but without reporting anything else significant.

In the vast majority of cases, she will probably receive a diagnosis, most likely an “NOS” diagnosis (NOS = “not otherwise specified,” or psychiatric shorthand for “well, it’s sort of like this disorder, but I’m not sure”).  She is also likely to be offered a prescription.  Depending on her chief complaint, it may be an antidepressant, an atypical antipsychotic, or a benzodiazepine.

I don’t encourage otherwise healthy people to play games with psychiatrists, and I don’t promote dishonesty in the examination room.  I also don’t mean to suggest that all psychiatrists arrive at diagnoses from a single statement.  But the reality is that in many practice settings, the tendency is to make a diagnosis and prescribe a drug, even if the doctor is unconvinced of the seriousness of the patient’s reported symptoms.  Sometimes the clinic can’t bill for the service without a diagnosis code, or the psychiatrist can’t keep seeing a patient unless he or she is prescribing medication.  There’s also the liability that comes with potentially “missing” a diagnosis, even if everything else seems normal.

And on the patient’s side, too, the forces are often in favor of receiving a diagnosis.  Sure, there are some patients who report symptoms solely because they seek a Xanax Rx or their Seroquel fix, and other patients who are trying to strengthen a disability case.  But an even greater number of patients are frustrated by very real stressors in their lives and/or just trying to make sense out of difficult situations in which they find themselves.  For many, it’s a relief to know that one’s troubles can be explained by a psychiatric diagnosis, and that a medication might make at least some aspect of their lives a little easier.

As Rosenhan demonstrated, doctors (and patients, often) see things through lenses that are colored by the diagnostic paradigm.  In today’s era, that’s the DSM-IV.  But even more so today than in 1973, other factors—like the pharmaceutical industry, the realities of insurance billing, shorter appointment times, and electronic medical records—all encourage us to read much more into a patient’s words and draw conclusions much more rapidly than might be appropriate.  It’s just as nonsensical as it was 40 years ago, but, unfortunately, it’s the way psychiatry works.


How Abilify Works, And Why It Matters

September 13, 2011

One lament of many in the mental health profession (psychiatrists and pharmascolds alike) is that we really don’t know enough about how our drugs work.  Sure, we have hypothetical mechanisms, like serotonin reuptake inhibition or NMDA receptor antagonism, which we can observe in a cell culture dish or (sometimes) in a PET study, but how these mechanisms translate into therapeutic effect remains essentially unknown.

As a clinician, I have noticed certain medications being used more frequently over the past few years.  One of these is Abilify (aripiprazole).  I’ve used Abilify for its approved indications—psychosis, acute mania, maintenance treatment of bipolar disorder, and adjunctive treatment of depression.  It frequently (but not always) works.  But I’ve also seen Abilify prescribed for a panoply of off-label indications: “anxiety,” “obsessive-compulsive behavior,” “anger,” “irritability,” and so forth.  Can one medication really do so much?  And if so, what does this say about psychiatry?

From a patient’s perspective, the Abilify phenomenon might best be explained by what it does not do.  If you ask patients, they’ll say that—in general—they tolerate Abilify better than other atypical antipsychotics.  It’s not as sedating as Seroquel, it doesn’t cause the same degree of weight gain as Zyprexa, and the risk of developing uncomfortable movement disorders or elevated prolactin is lower than with Risperdal.  To be sure, many people do experience side effects on Abilify, but as far as I can tell, it’s an acceptable drug to most people who take it.

Abilify is a unique pharmacological animal.  Like other atypical antipsychotics, it binds to several different neurotransmitter receptors; this “signature” theoretically accounts for its therapeutic efficacy and side effect profile.  But unlike others in its class, it doesn’t block dopamine (specifically, dopamine D2) or serotonin (specifically, 5-HT1A) receptors.  Rather, it’s a partial agonist at those receptors.  It can activate those receptors, but not to the full biological effect.  In lay terms, then, it can both enhance dopamine and serotonin signaling where those transmitters are deficient, and inhibit signaling where they’re in excess.

Admittedly, that’s a crude oversimplification of Abilify’s effects, and an inadequate description of how a “partial agonist” works.  Nevertheless, it’s the convenient shorthand that most psychiatrists carry around in their heads:  with respect to dopamine and serotonin (the two neurotransmitters which, at least in the current vernacular, are responsible for a significant proportion of pathological behavior and psychiatric symptomatology), Abilify is not an all-or-none drug.  It’s not an on-off switch. It’s more of a “stabilizer,” or, in the words of Stephen Stahl, a “Goldilocks drug.”
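To see why a partial agonist behaves as a “stabilizer,” it can help to work through a toy model.  The sketch below assumes a single receptor site with simple competitive binding and an invented intrinsic activity of 0.3 for the drug; none of the constants reflect real aripiprazole pharmacology, but the qualitative behavior is the point:

```python
# Toy competitive-binding model of partial agonism at one receptor type.
# Every constant here is invented for illustration; this is NOT real
# aripiprazole pharmacology.

def net_signal(ligands):
    """Net receptor signaling: each ligand's fractional occupancy
    (one-site competitive binding) weighted by its intrinsic activity
    (1.0 = full agonist, 0.0 = pure antagonist).
    ligands: list of (concentration, Kd, intrinsic_activity) tuples."""
    denom = 1.0 + sum(conc / kd for conc, kd, _ in ligands)
    return sum((conc / kd) / denom * alpha for conc, kd, alpha in ligands)

def dopamine(conc):
    # endogenous transmitter: a full agonist with an arbitrary Kd of 1.0
    return (conc, 1.0, 1.0)

drug = (10.0, 1.0, 0.3)  # hypothetical partial agonist: alpha = 0.3

low_alone  = net_signal([dopamine(0.1)])         # transmitter deficit, no drug
low_with   = net_signal([dopamine(0.1), drug])   # deficit + drug: signal rises
high_alone = net_signal([dopamine(10.0)])        # transmitter excess, no drug
high_with  = net_signal([dopamine(10.0), drug])  # excess + drug: signal falls
```

With these made-up numbers, the drug raises net signaling from about 0.09 to 0.28 in the low-dopamine case and lowers it from about 0.91 to 0.62 in the high-dopamine case: enhancement where the transmitter is deficient, inhibition where it is in excess.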

Thus, Abilify can be seen, at the same time, as both an antipsychotic, and not an antipsychotic.  It’s both an antidepressant, and not an antidepressant.  And when you have a drug that is (a) generally well tolerated, (b) seems to work by “stabilizing” two neurotransmitter systems, and (c) resists conventional classification in this way, it opens the floodgates for all sorts of potential uses in psychiatry.

Consider the following conditions, all of which are subjects of Abilify clinical trials currently in progress (thanks to clinicaltrials.gov):  psychotic depression; alcohol dependence; “aggression”; improvement of insulin sensitivity; antipsychotic-induced hyperprolactinemia; cocaine dependence; Tourette’s disorder; postpartum depression; methamphetamine dependence; obsessive-compulsive disorder (OCD); late-life bipolar disorder; post-traumatic stress disorder (PTSD); cognitive deficits in schizophrenia; autism spectrum disorders; fragile X syndrome; tardive dyskinesia; “subsyndromal bipolar disorder” (whatever that is) in children; conduct disorder; ADHD; prodromal schizophrenia; “refractory anxiety”; psychosis in Parkinson’s disease; anorexia nervosa; substance-induced psychosis; trichotillomania; and Alzheimer’s-related psychosis.

Remember, these are the existing clinical trials of Abilify.  Each one has earned IRB approval and funding support.  In other words, they’re not simply the fantasies of a few rogue psychiatrists; they’re supported by at least some preliminary evidence, or at least a very plausible hypothesis.  The conclusion one might draw from this is that Abilify is truly a wonder drug, showing promise in nearly all of the conditions we treat as psychiatrists.  We’ll have to wait for the clinical trial results, but what we can say at this point is that a drug which works as a “stabilizer” of two very important neurotransmitter systems can be postulated to work in virtually any way a psychopharmacologist might want.

But even if these trials are negative, my prediction is that this won’t stop doctors from prescribing Abilify for each of the above conditions.  Why?  Because the mechanism of Abilify allows for such elegant explanations of pathology (“we need to tune down the dopamine signal to get rid of those flashbacks” or “the serotonin 1A effect might help with your anxiety” – yes, I’ve heard both of these in the last week), that it would be anathema, at least to current psychiatric practice, not to use it in this regard.

This fact alone should lead us to ask what this says about psychiatry as a whole.  The fact that one drug is prescribed so widely—owing to its relatively nonspecific effects and a good deal of creative psychopharmacology on the part of doctors like me—and is so broadly accepted by patients, should call into question our hypotheses about the pathophysiology of mental illness, and how psychiatric disorders are distinguished from one another.  It should challenge our theories of neurotransmitters and receptors and how their interactions underlie specific symptoms.  And it should give us reason to question whether the “stories” we tell ourselves and our patients carry more weight than the medications we prescribe.


How To Retire At Age 27

September 4, 2011

A doctor’s primary responsibility is to heal, and all of our efforts and resources should be devoted to that goal.  At times, it is impossible to restore a patient to perfect health and he or she must unfortunately deal with some degree of chronic disability.  Still other times, though, the line between “perfect health” and “disability” is blurred, and nowhere (in my opinion) is this more problematic than in psychiatry.

To illustrate, consider the following example from my practice:

Keisha (not her real name), a 27 year-old resident of a particularly impoverished and crime-ridden section of a large city, came to my office for a psychiatric intake appointment.  I reviewed her intake questionnaire; under the question “Why are you seeking help at this time?” she wrote: “bipolar schizophrenia depression mood swings bad anxiety ADHD panic attacks.”  Under “past medications,” she listed six different psychiatric drugs (from several different categories).  She had never been hospitalized.

When I first saw her, she appeared overweight but otherwise in no distress.  An interview revealed no obvious thought disorder, no evidence of hallucinations or delusions, and no complaints of significant mood symptoms.  During the interview, she told me, “I just got my SSDI so I’m retired now.”  I asked her to elaborate.  “I’m retired now,” she said.  “I get my check every month, I just have to keep seeing a doctor.”  When I asked why she was on disability, she replied, “I don’t know, whatever they wrote, bipolar, mood swings, panic attacks, stuff like that.”  She had been off medications for over two months (with no apparent symptoms); she said she really “didn’t notice” any effect of the drugs, except the Valium 20 mg per day, which “helped me settle down and relax.”

Keisha is a generally healthy 27 year-old.  She graduated high school (something rare in this community, actually) and took some nursing-assistant classes at a local vocational school.  She dropped out, however, because “I got stressed out.”  She tried looking for other work but then found out from a family member that she could “apply for disability.”  She applied and was denied, but then called a lawyer who specialized in disability appeals and, after about a year of resubmissions, received the good news that she could get Social Security Disability, ensuring a monthly check.

How is Keisha “disabled”?  She’s disabled because she went to see a doctor and, presumably, told that doctor that she couldn’t work because of “stress.”  That doctor probably asked her a series of questions like “Are you unable to work because of your depressed mood?” and “Do you find it hard to cope in social situations because of your mood swings?”, and she answered them in the affirmative.  I’ve seen dozens—if not hundreds—of disability questionnaires that ask the same questions.

I have no doubt that Keisha lives a stressful life.  I’ve driven through her part of town.  I’ve read about the turf wars being waged by the gangs there.  I know that her city has one of the highest murder rates in America, unemployment is high, schools are bad, and drug abuse and criminal activity are widespread.  I would be surprised if anyone from her neighborhood was not anxious, depressed, moody, irritable, or paranoid.

But I am not convinced that Keisha has a mental illness.

Lest you think that I don’t care about Keisha’s plight, I do.  Keisha may very well be struggling, but whether this is “major depression,” a true “anxiety disorder,” or simply a reaction to her stressful situation is unclear.  Unfortunately, psychiatry uses simple questions to arrive at a diagnosis—and there are no objective tests for mental illness—so a careless (or unscrupulous) provider can easily apply a label, designating Keisha’s situation as a legitimate medical problem.  When combined with the law firms eager to help people get “the government money they deserve,” and the very real fact that money and housing actually do help people like Keisha, we’ve created the illusion that mental illness is a direct consequence of poverty, and the way to treat it is to give out monthly checks.

As a physician, I see this as counter-therapeutic for a number of reasons.  With patients like Keisha, I often wonder, what exactly am I “treating”?  What constitutes success?  An improvement in symptoms?  (What symptoms?)  Or successfully getting her on the government dole?  And when a patient comes to me, already on disability after receiving a diagnosis of MDD (296.34) or panic disorder (300.21) from some other doctor or clinic, I can’t just say, “I’m sorry about your situation, but let’s see what we can do to overcome it together,” because there’s no incentive to overcome it.  (This is from someone who dealt with severe 307.51 for sixteen years, but who also had the promise of a bright future to help overcome it.)

Moreover, making diagnoses where there is no true pathology artificially inflates disease prevalence, further enlarging state and county mental health bureaucracies.  It enables massive over-prescription of expensive (e.g., atypical antipsychotics like Seroquel and Zyprexa), addictive (like stimulants and benzodiazepines), or simply ineffective (like SSRIs) medications.  And far from helping the downtrodden who claim to be its “victims,” this situation instead rewards drug companies and doctors, some of whom prefer serving this population because of the assembly-line nature of this sort of practice:  see the patient, make the diagnosis, write the script, and see them again in 3-6 months.

The bottom line is, here in America we’ve got thousands (perhaps millions?) of able-bodied people who, for one socioeconomic (i.e., not psychiatric) reason or another, can’t find work and have fallen upon psychiatric “disability” as their savior.  I’d love to help them, but, almost by definition, I cannot.  And neither can any other doctor.  Sure, they struggle and suffer, but their suffering is relieved by a steady job, financial support, and yes, direct government assistance.  These are not part of the psychiatric armamentarium.  It’s not medicine.

Psychiatry should not be a tool for social justice.  (We’ve tried that before.  It failed.)  Using psychiatric labels to help patients obtain taxpayers’ money, unless absolutely necessary and legitimate, is wasteful and dishonest.  More importantly, it harms the very souls we have pledged an oath to protect.


Psychopharm R&D Cutbacks II: A Response to Stahl

August 28, 2011

A lively discussion has emerged on the NEI Global blog and on Daniel Carlat’s psychiatry blog about a recent post by Stephen Stahl, NEI chairman, pop(ular) psychiatrist, and promoter of psychopharmaceuticals.  The post pertains to the exodus of pharmaceutical companies from neuroscience research (something I’ve blogged about too), and the changing face of psychiatry in the process.

Dr Stahl’s post is subtitled “Be Careful What You Ask For… You Just Might Get It” and, as one might imagine, it reads as a scathing (some might say “ranting”) reaction against several of psychiatry’s detractors: the “anti-psychiatry” crowd, the recent rules restricting pharmaceutical marketing to doctors, and those who complain about Big Pharma funding medical education.  He singles out Dr Carlat, in particular, as an antipsychiatrist, implying that Carlat believes mental illnesses are inventions of the drug industry, medications are “diabolical,” and drugs exist solely to enrich pharmaceutical companies.  [Not quite Carlat’s point of view, as a careful reading of his book, his psychopharmacology newsletter, and, yes, his blog, would prove.]

While I do not profess to have the credentials of Stahl or Carlat, I have expressed my own opinions on this matter in my blog, and wanted to enter my opinion on the NEI post.

With respect to Dr Stahl (and I do respect him immensely), I think he must re-evaluate his influence on our profession.  It is huge, and not always in a productive way.  Case in point: for the last two months I have worked in a teaching hospital, and I can say that Stahl is seen as something of a psychiatry “god.”  He has an enormous wealth of knowledge, his writing is clear and persuasive, and the materials produced by NEI present difficult concepts in a clear way.  Stahl’s books are directly quoted—unflinchingly—by students, residents, and faculty.

But there’s the rub.  Stahl has done such a good job of presenting his (i.e., the psychopharmacology industry’s) view of things that it is rarely challenged or questioned.  The “pathways” he suggests for depression, anxiety, psychosis, cognition, insomnia, obsessions, drug addiction, medication side effects—basically everything we treat in psychiatry—are accompanied by theoretical models for how some new pharmacological agent might (or will) affect these pathways, when in fact the underlying premises or the proposed drug mechanisms—or both—may be entirely wrong.  (BTW, this is not a criticism of Stahl, this is simply a statement of fact; psychiatry as a neuroscience is decidedly still in its infancy.)

When you combine Stahl’s talent with his extensive relationships with drug companies, it makes for a potentially dangerous combination.  To cite just two examples, Stahl has written articles (in widely distributed “throwaway” journals) making compelling arguments for the use of low-dose doxepin (Silenor) and L-methylfolate (Deplin) in insomnia and depression, respectively, when the actual data suggest that their generic (or OTC) equivalents are just as effective.  Many similar Stahl productions are included as references or handouts in drug companies’ promotional materials or websites.

How can this be “dangerous”?  Isn’t Stahl just making hypotheses and letting doctors decide what to do with them?  Well, not really.  In my experience, if Stahl says something, it’s no longer a hypothesis, it becomes the truth.

I can’t tell you how many times a student (or even a professor of mine) has explained to me “Well, Stahl says drug A works this way, so it will probably work for symptom B in patient C.”  Unfortunately, we don’t have the follow-up discussion when drug A doesn’t treat symptom B; or patient C experiences some unexpected side effect (which was not predicted by Stahl’s model); or the patient improves in some way potentially unrelated to the medication.  And when we don’t get the outcome we want, we invoke yet another Stahl pathway to explain it, or to justify the addition of another agent.  And so on and so on, until something “works.”  Hey, a broken clock is still correct twice a day.

I don’t begrudge Stahl for writing his articles and books; they’re very well written, and the colorful pictures are fun to look at.  It makes psychiatry almost as easy as painting by numbers.  I also (unlike Carlat) don’t get annoyed when doctors do speaking gigs to promote new drugs.  (When these paid speakers are also responsible for teaching students in an academic setting, however, that’s another issue.)  Furthermore, I accept the fact that drug companies will try to increase their profits by expanding market share and promoting their drugs aggressively to me (after all, they’re companies—what do we expect them to do??), or by showing “good will” by underwriting CME, as long as it’s independently confirmed to be without bias.

The problem, however, is that doctors often don’t ask for the data.  We don’t ask whether Steve Stahl’s models might be wrong (or biased).  We don’t look closely at what we’re presented (either in a CME lesson or by a drug rep) to see whether it’s free from commercial influence.  And, perhaps most distressingly, we don’t listen enough to our patients to determine whether our medications actually do what Stahl tells us they’ll do.

Furthermore, our ignorance is reinforced by a diagnostic tool (the DSM) which requires us to pigeonhole patients into a small number of diagnoses that may have no biological validity; a reimbursement system that encourages a knee-jerk treatment (usually a drug) for each such diagnosis; an FDA approval process that gives the illusion that diagnoses are homogeneous and that all patients will respond the same way; and only the most basic understanding of what causes mental illness.  It creates the perfect opportunity for an authority like Stahl to come in and tell us what we need to know.  (No wonder he’s a consultant for so many pharmaceutical companies.)

As Stahl writes, the departure of Big Pharma from neuroscience research is unfortunate, as our existing medications are FAR from perfect (despite Stahl’s texts making them sound pretty darn effective).  However, this “breather” might allow us to pay more attention to our patients and think about what else—besides drugs—we can use to nurse them back to health.  Moreover, refocusing our research efforts on the underlying psychology and biology of mental illness (i.e., research untainted by the need to show a clinical drug response or to get FDA approval) might open new avenues for future drug development.

Stahl might be right that the anti-pharma pendulum has swung too far, but that doesn’t mean we can’t use this opportunity to make great strides forward in patient care.  The paychecks of some docs might suffer.  Hopefully our patients won’t.