Depression in Family Practice

October 31, 2011

Which medical professional is best equipped to diagnose and treat depression?  Speaking as a psychiatrist, I am (of course) inclined to respond that only those with a sophisticated understanding of psychological assessment, DSM-IV criteria, and differential diagnosis can accurately diagnose depression in their patients.  In fact, I’ve written about how the overreliance on checklists and rapid interviewing (which may happen more frequently in primary-care than in mental health settings) may inflate rates of depression, mislead patients, and increase health care costs.

Maybe it’s time for me to reconsider that notion.  In a fascinating article published this month in the journal Family Practice, authors from the Munich Technical University reviewed 13 studies surveying 239 primary-care providers about their experience in diagnosing depression.  The paper sheds some much-needed light on this topic, on which I have (apparently) been rather misinformed.

The take-home message from this study is one that should have been obvious to me from the start:  namely, that family practice doctors, almost by definition, are perfectly positioned to recognize depressive symptoms in their patients, simply by virtue of their long-term knowledge of their patients’ history and “baseline functioning.”  As the authors write:  “A relationship that has developed over the years [helped to] reveal symptoms of depression… When clinicians were not familiar with the patient, they acknowledged that the patient was less likely to share personal information, making diagnosis more difficult.”  In other words, how a patient’s symptoms fit into the “big picture” of his or her life is far more informative than the “snapshot” offered by a single point in time.

Similarly, I was encouraged by the fact that most family practitioners (“FPs”) don’t rely on checklists or lists of symptoms—or even the DSM-IV—to make their diagnoses.  In fact, the authors write that FPs, rightly, “express doubts about the validity of the diagnostic concept of depressive disorders” and that they “consider depressive disorders to be syndromes … where etiological and contextual thinking are more relevant than symptom counts.”  They see depression “as a problem they are faced with in their everyday work rather than as an objective diagnostic category.”  Finally, FPs typically rely on a “rule-out” strategy using a “wait and see [or] watchful waiting” approach, rather than a need to ask about a list of specific symptoms.

Wow!  Here’s a challenge:  spend an hour or two reading the DSM-IV (or any of the dozens of papers on STAR*D, or any clinical trial of a new antidepressant), then read the above statements.  Which strategy most accurately captures the phenomenology, the reality, the experience, of a patient with depression?  Yeah, I thought so, too.

It seems to me that, if we are to trust these authors and their conclusions, then family practice docs really do understand their patients, and can recognize the subtle emergence of depressive symptoms over time, more so than the specialists (like me) who might be asked for an opinion long after the disease has taken hold.

But then there’s reality.

For one thing, the depiction of the FP as “being aware of their patients’ everyday lives” might have been true for the Marcus Welbys of the 60s and 70s, but (unfortunately) not for the HMO drones, the docs-in-a-box, or the clinic rotators of today, who are saddled with more bureaucratic paperwork—or, excuse me, EMR work—than anything resembling patient care.  Family practice docs (at least in the US) are more likely to be 9-to-5 employees than true advocates for the patients on their roster.

Secondly, FPs may not be diagnosing the same “depression” that my colleagues and I see in the psychiatric setting.  The article states that FPs see depression “mainly as a reaction to emotionally draining circumstances such as other illnesses, situation at work or social factors,” and that “only a minority of FPs saw depression as … a biochemical imbalance.”  People can indeed become “depressed” after breaking up with a girlfriend, losing a job, or being diagnosed with cancer.  But is this a “chemical imbalance” that we should treat with Prozac and Abilify?  Eh, maybe not.

In a similar vein, the authors admit that “primary care patients with depressive disorders tend to be less severely depressed [and] experience a milder course of illness.”  As it turns out, those are precisely the patients who tend not to respond to antidepressant therapy.  Furthermore, the authors write that these patients have “more complaints of fatigue and somatic symptoms, and are more likely to have accompanying physical complaints.”  This seems like exactly what one might expect:  patients with fatigue, exhaustion, or any somatic complaint (nausea, diarrhea, constipation, headache, sexual dysfunction, acid reflux, congestion, abdominal tenderness, morning stiffness, sciatica, you name it…) are the most likely to say that they feel generally pretty crappy.  Or, to use the authors’ words—and the psychiatric vernacular—“less severely depressed.”

So family practice docs—those on the “front lines”—may be the most qualified to diagnose depression (or any mental illness, for that matter) because of their experience and knowledge of their patients.  But this is only true if they actually KNOW their patients (which is less likely in this age of mangled managed care), and if these same doctors recognize the difference between clinical depression and environmental triggers that, for lack of a better word, just plain suck.

Given the above caveats, can (or should) depression be diagnosed and treated in the family practice setting?  Perhaps “depression” can, while “Depression” may still be within the realm of the psychiatric professional.  Until we determine the best way to distinguish between the two, let’s make sure we attend to patients’ symptoms in context:  in the context of their long-term history, environmental triggers, and life events.  These may be precisely the situations in which the family practitioner knows best.

[If any reader would like a PDF copy of the article referenced here, I’d be happy to send it to you.  Please email me.]

Is Clinical Psychopharmacology a Pseudoscience?

October 24, 2011

I know I write a lot about my disillusionment with modern psychiatry.  I have lamented the overriding psychopharmacological imperative, the emphasis on rapid diagnosis and medication management, at the expense of understanding the whole patient and developing truly “personalized” treatments.  But at the risk of sounding like even more of a heretic, I’ve noticed that not only do psychopharmacologists really believe in what they’re doing, but they often believe it even in the face of evidence to the contrary.

It all makes me wonder whether we’re practicing a sort of pseudoscience.

For those of you unfamiliar with the term, check out Wikipedia, which defines “pseudoscience” as:  “a claim, belief, or practice which is presented as scientific, but which does not adhere to a valid scientific method, lacks supporting evidence or plausibility, cannot be reliably tested…. [is] often characterized by the use of vague, exaggerated or unprovable claims [and] an over-reliance on confirmation rather than rigorous attempts at refutation…”

Among the medical-scientific community (of which I am a part, by virtue of my training), the label of “pseudoscience” is often reserved for practices like acupuncture, naturopathy, and chiropractic.  Each may have its own adherents, its own scientific language or approach, and even its own curative power, but taken as a whole, their claims are frequently “vague or exaggerated,” and they fail to generate hypotheses which can then be proven or (even better) refuted in an attempt to refine disease models.

Does clinical psychopharmacology fit in the same category?

Before going further, I should emphasize I’m referring to clinical psychopharmacology: namely, the practice of prescribing medications (or combinations thereof) to actual patients, in an attempt to treat illness.  I’m not referring to the type of psychopharmacology practiced in research laboratories or even in clinical research settings, where there is an accepted scientific method, and an attempt to test hypotheses (even though some premises, like DSM diagnoses or biological mechanisms, may be erroneous) according to established scientific principles.

The scientific method consists of: (1) observing a phenomenon; (2) developing a hypothesis; (3) making a prediction based on that hypothesis; (4) collecting data to attempt to refute that hypothesis; and (5) determining whether the hypothesis is supported or not, based on the data collected.

In psychiatry, we are not very good at this.  Sure, we may ask questions and listen to our patients’ answers (“observation”), come up with a diagnosis (a “hypothesis”) and a treatment plan (a “prediction”), and evaluate our patients’ response to medications (“data collection”).  But is this only a charade?

First of all, the diagnoses we give are not based on a valid understanding of disease.  As the current controversy over DSM-5 demonstrates, even experts find it hard to agree on what they’re describing.  Maybe if we viewed DSM diagnoses as “suggestions” or “prototypes” rather than concrete diagnoses, we’d be better off.  But clinical psychopharmacology does the exact opposite: it puts far too much emphasis on the diagnosis, which predicts the treatment, when in fact a diagnosis does not necessarily reflect biological reality but rather a “best guess.”  It’s subject to change at any time, as are the fluctuating symptoms that real patients present with.  (Will biomarkers help?  I’m not holding my breath.)

Second, our predictions (i.e., the medications we choose for our patients) are always based on assumptions that have never been proven.  What do I mean by this?  Well, we have “animal models” of depression and theories of errant dopamine pathways in schizophrenia, but for “real world” patients—the patients in our offices—if you truly listen to what they say, the diagnosis is rarely clear.  Instead, we try to “make the patients fit the diagnosis” (which becomes easier to do as appointment lengths shorten), and then concoct treatment plans which perfectly fit the biochemical pathways that our textbooks, drug reps, and anointed leaders lay out for us, but which may have absolutely nothing to do with what’s really happening in the bodies and minds of our patients.

Finally, the whole idea of falsifiability is absent in clinical psychopharmacology.  If I prescribe an antidepressant or even an anxiolytic or sedative drug to my patient, and he returns two weeks later saying that he “feels much better” (or is “less anxious” or is “sleeping better”), how do I know it was the medication?  Unless all other variables are held strictly constant—which is impossible to do even in a well-designed placebo-controlled trial, much less the real world—I can make no assumption about the effect of the drug in my patient’s body.

It gets even more absurd when one listens to a so-called “expert psychopharmacologist,” who uses complicated combinations of 4, 5, or 6 medications at a time to achieve “just the right response,” or who constantly tweaks medication doses to address a specific issue or complaint (e.g., acne, thinning hair, frequent cough, yawning, etc, etc), using sophisticated-sounding pathways or models that have not been proven to play a role in the symptom under consideration.  Even if it’s complete guesswork (which it often is), the patient may improve 33% of the time (“Success! My explanation was right!”), get worse 33% of the time (“I didn’t increase the dose quite enough!”), and stay the same 33% of the time (“Are any other symptoms bothering you?”).

Of course, if you’re paying good money to see an “expert psychopharmacologist,” who has diplomas on her wall and who explains complicated neurochemical pathways to you using big words and colorful pictures of the brain, you’ve already increased your odds of being in the first 33%.  And this is the main reason psychopharmacology is acceptable to most patients: not only does our society value the biological explanation, but psychopharmacology is practiced by people who sound so intelligent and … well, rational.  Even though the mind is still a relatively impenetrable black box and no two patients are alike in how they experience the world.  In other words, psychopharmacology has capitalized on the placebo response (and the ignorance & faith of patients) to its benefit.

Psychopharmacology is not always bad.  Sometimes psychotropic medication can work wonders, and often very simple interventions provide patients with the support they need to learn new skills (or, in rare cases, to stay alive).  In other words, it is still a worthwhile endeavor, but our expectations and our beliefs unfortunately grow faster than the evidence base to support them.

Similarly, “pseudoscience” can give results.  It can heal, too: some health-care plans willingly pay for acupuncture, and some patients swear by Ayurvedic medicine or Reiki.  And who knows, there might still be a valid scientific basis for the benefits professed by advocates of these practices.

In the end, though, we need to stand back and remind ourselves what we don’t know.  Particularly at a time when clinical psychopharmacology has come to dominate the national psyche—and command a significant portion of the nation’s enormous health care budget—we need to be extra critical and ask for more persuasive evidence of its successes.  And we should not bring to the mainstream something that might more legitimately belong in the fringe.

Biomarker Envy V: BDNF and Cocaine Relapse

October 18, 2011

The future of psychiatric diagnosis and treatment lies in the discovery and development of “biomarkers” of pathological processes.  A biomarker, as I’ve written before, is something that can be measured or quantified, usually from a biological specimen like a blood sample, which helps to diagnose a disease or predict response to a treatment.

Biomarkers are the embodiment of the new “personalized medicine”:  instead of wasting time talking to a patient, asking questions, and possibly drawing incorrect conclusions, the holy grail of a biomarker allows the clinician to order a simple blood test (or brain scan, or genotype) and make a decision about that specific patient’s case.  But “holy grail” status is elusive, and a recent study from the Yale University Department of Psychiatry, published this month in the journal Biological Psychiatry, provides yet another example of a biomarker which is not quite there—at least not yet.

The Yale group, led by Rajita Sinha, PhD, was interested in the question of what makes newly abstinent cocaine addicts relapse, and set out to identify a biological marker for relapse potential.  If such a biomarker exists, they argue, then it could not only tell us more about the biology of cocaine dependence, craving, and relapse, but it might also be used clinically, as a way to identify patients who might need more aggressive treatment or other measures to maintain their abstinence.

The researchers chose BDNF, or brain-derived neurotrophic factor, as their biomarker.  In studies of cocaine-dependent animals forced into prolonged abstinence, the animals show elevations in BDNF when exposed to a stressor; moreover, cocaine-seeking is associated with BDNF elevations, and BDNF injections can promote cocaine-seeking behavior in these same abstinent animals.  In their recent study, Sinha’s group took 35 cocaine-dependent (human) patients and admitted them to the hospital for 4 weeks.  After three weeks of NO cocaine, they measured blood levels of BDNF and compared these numbers to the levels measured in “healthy controls.”  Then they followed all 35 cocaine users for the next 90 days to determine which of them would relapse during this three-month period.

The results showed that the abstinent cocaine users generally had higher BDNF levels than the healthy controls (see figure below, A).  However, when the researchers looked at the patients who relapsed on cocaine during the 3-month follow-up (n = 23), and compared them to those who stayed clean (n = 12), they found that the relapsers, on average, had higher BDNF levels than the non-relapsers (see figure, B).  Their conclusion is that high levels of BDNF may predict relapse.

These results are intriguing, and Dr Sinha presented her findings at the California Society of Addiction Medicine (CSAM) annual conference last week.  Audience members—all of whom treat drug and alcohol addiction—asked about how they might measure BDNF levels in their patients, and whether the same BDNF elevations might be found in dependence on other drugs.

But one question really got to what I think is the heart of the matter.  Someone asked Dr Sinha: “Looking back at the 35 patients during their four weeks in the hospital, were there any characteristics that separated the high BDNF patients from those with low BDNF?”  In other words, were there any behavioral or psychological features that might, in retrospect, be correlated with elevated BDNF?  Dr Sinha responded, “The patients in the hospital who seemed to be experiencing the most stress or who seemed to be depressed had higher BDNF levels.”

Wait—you mean that the patients at high risk for relapse could be identified by talking to them?  Dr Sinha’s answer shows why biomarkers have little place in clinical medicine, at least at this point.  Sure, her group showed correlations of BDNF with relapse, but nowhere in their paper did they describe personal features of the patients (psychological test scores, psychiatric complaints, or even responses to a checklist of symptoms).  So those who seemed “stressed or depressed” had higher BDNF levels, and—as one might predict—relapsed.  Did this (clinical) observation really require a BDNF blood test?

Dr Sinha’s results (and the results of others who study BDNF and addiction) make a strong case for the role of BDNF in relapse or in recovery from addiction.  But as a clinical tool, not only is it not ready for prime time, but it distracts us from what really matters.  Had Dr Sinha’s group spent four weeks interviewing, analyzing, or just plain talking with their 35 patients instead of simply drawing blood on day 21, they might have come up with some psychological measures which would be just as predictive of relapse—and, more importantly, which might help us develop truly “personalized” treatments that have nothing to do with BDNF or any biochemical feature.

But I wouldn’t hold my breath.  As Dr Sinha’s disclosures indicate, she is on the Scientific Advisory Board of Embera NeuroTherapeutics, a small biotech company working to develop a compound called EMB-001.  EMB-001 is a combination of oxazepam (a benzodiazepine) and metyrapone.  Metyrapone inhibits the synthesis of cortisol, the primary stress hormone in humans.  Dr Sinha, therefore, is probably more interested in the stress responses of her patients (which would include BDNF and other stress-related proteins and hormones) than in whether they say they feel like using cocaine or not.

That’s not necessarily a bad thing.  Science must proceed this way.  If EMB-001 (or a treatment based on BDNF) turns out to be an effective therapy for addiction, it may save hundreds or thousands of lives.  But until science gets to that point, we clinicians must always remember that our patients are not just lab values, blood samples, or brain scans.  They are living, thinking, and speaking beings, and sometimes the best biomarker of all is our skilled assessment and deep understanding of the patient who comes to us for help.

Playing The Role

October 16, 2011

One of the most time-honored pedagogical tools in medicine is the “role play.”  The concept is simple:  one individual plays the part of another person (usually a patient) while a trainee examines or questions him or her, for the purposes of learning ways to diagnose, treat, and communicate more effectively.

Last week I had the privilege of attending a motivational interviewing training seminar.  Motivational interviewing (or MI) is a therapeutic technique in which the clinician helps “motivate” the patient into making healthy lifestyle choices through the use of open-ended questions, acknowledging and “rolling with” the patient’s resistance, and eliciting the patient’s own commitment to change.  The goal is to help the patient make a decision for himself, rather than requiring the clinician to provide a directive or an “order” to change a behavior.

MI is an effective and widely employed strategy, frequently used in the treatment of addictions.  Despite its apparent simplicity, however, it is important to practice one’s skills in order to develop proficiency.  Here, simulations like role-playing exercises can be valuable.  As part of my seminar, I engaged in such an exercise, in which our trainer played the part of a methamphetamine addict while a trainee served as the clinician.

The discussion went something like this:

Clinician:  “How would you like things to be different in your life?”
Patient:  “Well, I think I might be using too much meth.”
Clinician:  “So you think you’re using too much methamphetamine.”
Patient:  “Yeah, my friends are urging me to cut back.”
Clinician:  “How important is it for you to decrease your use?”
Patient:  “Oh, it would really make things easier for me.”
Clinician:  “How confident are you that you could cut back?”
Patient:  “Well, it would be tough.”
Clinician:  “What would make you even more confident?”
Patient:  “If I had some support from other people.”
Clinician:  “Who could provide you with that support?”
Patient:  “Hmm… I do have some friends who don’t use meth.”
Clinician:  “I see.  Can you think of some ways to spend more time with those friends?”
Patient:  “I do know that they go swimming on Thursday nights.  Maybe I can ask if I can join them.”
Clinician:  “I think this would be a good decision.  Can I help you to do this by giving you a telephone call on Wednesday?”
Patient:  “Yes, thank you.”

Of course, I’m paraphrasing somewhat.  But the bottom line is that the whole exercise lasted about ten minutes, and in that ten-minute span, the trainee had taken an ambivalent methamphetamine addict and convinced him to spend an evening with some non-meth-using friends, all through the magic of motivational interviewing.

In real clinical practice, nothing is quite so simple.  And none of us in the room (I hope) were so naïve as to think that this would happen in real life.  But the strategies we employed were so basic (right “out of the book,” so to speak) that we could have used this time—and the expertise of our trainer—to practice our skills in a more difficult (i.e., real-world) situation.

It reminded me of a similar exercise in a class during my psychiatry residency, in which our teacher, a psychiatrist in private practice in our community, asked me to role-play a difficult patient, while he would act as therapist and demonstrate his skills in front of our class.  The patient I chose was a particularly challenging one—especially to a novice therapist like myself—who had a habit of repeating back my questions word-for-word with a sarcastic smile on her face, and openly questioning my abilities as a therapist.

During the role-play, I played the part quite well (I thought), giving him the uncomfortable looks and critical comments that my patient routinely gave me.  But this didn’t sit well with him.  He got visibly angry, and after just a few minutes he abruptly stood up and told me to leave the class.  Later that day I received a very nasty email from him accusing me of “sabotaging” his class and “making [him] look like a fool.”  He called my actions “insubordination” and asked me not to return to the class, also suggesting that my actions were “grounds for dismissal from the residency.”

[He also went off on a tangent about some perfectly reasonable—even amiable—emails we had exchanged several weeks earlier, accusing me now of having used “too many quotation marks” which, he said, seemed “unprofessional” and “inappropriate” and demanded an apology!!  He also wrote that in the several weeks of class I had shown him a “tangible tone of disrespect,” even though he had never said anything to me before.  While I believe his paranoid stance may have betrayed some underlying mental instability, I must admit I have not spoken to him since, although he continues to teach and to supervise residents.]

Anyway, these experiences—and others over the years—have led me to question the true meaning of a role-playing exercise.  In its ideal form, a simulation provides the novice with an opportunity to observe a skilled clinician practicing his or her craft, even under challenging circumstances, or provides a safe environment for the novice to try new approaches—and to make mistakes doing so.  But more often than not, a role-playing exercise is a staged production, in which the role-player is trying to make a point.  In actual practice, no patient is a “staged” patient (and those who do give rehearsed answers often have some ulterior motive).  Real patients have a nearly infinite variety of histories, concerns, and personal idiosyncrasies for which no “role playing” exercise can ever prepare a therapist.

I’m probably being too harsh.  Role-plays and simulations will always be part of a clinician’s training, and I do recognize their value in learning the basic tools of therapy.  The take-home message, however, is that we should never expect real patients to act as if they’re reading off a script from our textbooks.  And as a corollary, we should use caution when taking our patients’ words and making them fit our own preconceived script.  By doing so, we may be fooling ourselves, and we might miss what the patient really wants us to hear.

Talk Is Cheap

October 9, 2011

I work part-time in a hospital psychiatry unit, overseeing residents and medical students on their inpatient psychiatry rotations.  They are responsible for three to six patients at any given time, directing and coordinating the patients’ care while they are admitted to our hospital.

To an outsider, this may seem like a generous ratio: one resident taking care of only 3-6 patients.  One would think that this should allow for over an hour of direct patient contact per day, resulting in truly “personalized” medicine.  But instead, the absolute opposite is true: sometimes doctors only see patients for minutes at a time, and develop only a limited understanding of patients for whom they are responsible.  I noticed this in my own residency training, when halfway through my first year I realized the unfortunate fact that even though I was “taking care” of patients and getting my work done satisfactorily, I couldn’t tell you whether my patients felt they were getting better, whether they appreciated my efforts, or whether they had entirely different needs that I had been ignoring.

In truth, much of the workload in a residency program (in any medical specialty) is related to non-patient-care concerns:  lectures, reading, research projects, faculty supervision, etc.  But even outside of the training environment, doctors spend less and less time with patients, creating a disturbing precedent for the future of medicine.  In psychiatry in particular, the shrinking “therapy hour” has received much attention, most recently in a New York Times front-page article (which I blogged about here and here).  The responses to the article echoed a common (and growing) lament among most psychiatrists:  therapy has been replaced with symptom checklists, rapid-fire questioning, and knee-jerk prescribing.

In my case, I don’t mean to be simply one more voice among the chorus of psychiatrists yearning for the “glory days” of psychiatry, when prolonged psychotherapy and hour-long visits were the norm.  I didn’t practice in those days, anyway.  Nevertheless, I do believe that we lose something important by distancing ourselves from our patients.

Consider the inpatient unit again.  My students and residents sometimes spend hours looking up background information, old charts, and lab results, calling family members and other providers, and discussing differential diagnosis and possible treatment plans, before ever seeing their patient.  While their efforts are laudable, the fact remains that a face-to-face interaction with a patient can be remarkably informative, sometimes even immediately diagnostic to the skilled eye.  In an era where we’re trying to reduce our reliance on expensive technology and wasteful tests, patient contact should be prioritized over the hours upon hours that trainees spend hunched over computer workstations.

In the outpatient setting, direct patient-care time has been largely replaced by “busy work” (writing notes; debugging EMRs; calling pharmacies to inquire about prescriptions; completing prior-authorization forms; and performing any number of “quality-control,” credentialing, or other mandatory “compliance” exercises required by our institutions).  Some of this is important, but at the same time, an extra ten or fifteen minutes with a patient may go a long way to determining that patient’s treatment goals (which may disagree with the doctor’s), improving their motivation for change, or addressing unresolved underlying issues– matters that may truly make a difference and cut long-term costs.

The future direction of psychiatry doesn’t look promising, as this vanishing emphasis on the patient’s words and deeds is likely to make treatment even less cost-effective.  For example, there is a growing effort to develop biomarkers for diagnosis of mental illness and to predict medication response.  In my opinion, the science is just not there yet (partly because the DSM is still a poor guide by which to make valid diagnoses… what are depression and schizophrenia anyway?).  And even if the biomarker strategy were a reliable one, there’s still nothing that could be learned in a $745+ blood test that couldn’t be uncovered in a good, thorough clinical examination by a talented diagnostician, not to mention the fact that the examination would also uncover a large amount of other information– and establish valuable rapport– which would likely improve the quality of care.

The blog “1boringoldman” recently featured a post called “Ask them about their lives…” in which a particularly illustrative case was discussed.  I’ll refer you there for the details, but I’ll repost the author’s summary comments here:

I fantasize an article in the American Journal of Psychiatry entitled “Ask them about their lives!” Psychiatrists give drugs. Therapists apply therapies. Who the hell interviews patients beyond logging in a symptom list? I’m being dead serious about that…

I share Mickey’s concern, as this is a vital question for the future of psychiatry.  Personally, I chose psychiatry over other branches of medicine because I enjoy talking to people, asking about their lives, and helping them develop goals and achieve their dreams.  I want to help them overcome the obstacles put in their way by catastrophic relationships, behavioral missteps, poor insight, harmful impulsivity, addiction, emotional dysregulation, and– yes– mental illness.

However, if I don’t have the opportunity to talk to my patients (still my most useful diagnostic and therapeutic tool), I must instead rely on other ways to explain their suffering:  a score on a symptom list, a lab value, or a diagnosis that’s been stuck on the patient’s chart over several years without anyone taking the time to ask whether it’s relevant.  Not only do our patients deserve more than that, they usually want more than that, too; the most common complaint I hear from a patient is that “Dr So-And-So didn’t listen to me, he just prescribed drugs.”

This is not the psychiatry of my forefathers.  It is not Philippe Pinel’s “moral treatment,” Emil Kraepelin’s meticulous attention to symptoms and their patterns, or Aaron Beck’s cognitive restructuring.  No, it’s the psychiatry of HMOs, Wall Street, and an over-medicalized society, and in this brave new world, the patient is nowhere to be found.

Latuda-Palooza: Marketing or Education?

October 2, 2011

In my last blog post, I wrote about an invitation I received to a symposium on Sunovion Pharmaceuticals’ new antipsychotic Latuda.  I was concerned that my attendance might be reported as a “payment” from Sunovion under the requirements of the Physicians Payment Sunshine Act.  I found it a bit unfair that I might be seen as a recipient of “drug money” (and all the assumptions that go along with that) when, in fact, all I wanted to do was learn about a new pharmaceutical agent.

As it turns out, Sunovion confirmed that my participation would NOT be reported (they start reporting to the feds on 1/1/12), so I was free to experience a five-hour Latuda extravaganza yesterday in San Francisco.  I was prepared for a marketing bonanza of epic proportions, à la the Viagra launch scene in “Love And Other Drugs.”  And in some ways, I got what I expected:  two outstanding and engaging speakers (Dr Stephen Stahl of NEI and Dr Jonathan Meyer of UCSD); a charismatic “emcee” (Richard Davis of Arbor Scientia); an interactive “clicker” system that allowed us to answer questions throughout the session and see our responses in real time; full lunch and breakfast, coffee, and snacks; all in a posh downtown hotel.  (No pens or mugs, though.)

The educational program consisted of a plenary lecture by Dr Stahl, followed by workshops in which we broke up into “teams” and participated in three separate activities:  first, a set of computer games (modeled after “Pyramid” and “Wheel Of Fortune”) in which we competed to answer questions about Latuda and earn points for our team; second, a “scavenger hunt” in which we had 5 minutes to find answers from posters describing Latuda’s clinical trials (sample question: “In Study 4 (229), what proportion of subjects withdrew from the Latuda 40 mg/d treatment arm due to lack of efficacy?”); and finally, a series of case studies presented by Dr Meyer which used the interactive clicker system to assess our comfort level in prescribing Latuda for a series of sample patients.  My team came in second place.

I must admit, the format was an incredibly effective way for Sunovion to teach doctors about its newest drug.  It reinforced my existing knowledge—and introduced me to a few new facts—while remaining fully accessible to physicians who had never even heard of Latuda.

Moreover, the information was presented in an unbiased fashion.  Unbiased? you may ask.  But wasn’t the entire presentation sponsored by Sunovion?  Yes, it was, but in my opinion the symposium achieved its stated goals:  it summarized the existing data on Latuda (although see here for some valid criticism of that data); presented it in a straightforward, effective (and, at times, fun) way; and allowed us doctors to make our own decisions.  (Stahl did hint that the 20-mg dose is being studied for bipolar depression, not an FDA-approved indication, but that information is also publicly available on the website.)  No one told us to prescribe Latuda; no one said it was better than any other existing antipsychotic; no one taught us how to get insurance companies to cover it; and—in case any “pharmascold” is still wondering—no one promised us any kickbacks for writing prescriptions.

(Note:  I did speak with Dr Stahl personally after his lecture.  I asked him about efforts to identify patient-specific factors that might predict a more favorable response to Latuda than to other antipsychotics.  He spoke about current research in genetic testing, biomarkers, and fMRI to identify responders, but he also admitted that it’s all guesswork at this point.  “I might be entirely wrong,” he admitted, about drug mechanisms and how they correlate to clinical response, and he even remarked “I don’t believe most of what’s in my book.”  A refreshing—and surprising—revelation.)

In all honesty, I’m no more likely to prescribe Latuda today than I was last week.  But I do feel more confident in my knowledge about it.  It is as if I had spent five hours yesterday studying the Latuda clinical trials and the published Prescribing Information, except that I did it in a far more engaging forum.  As I mentioned to a few people (including Mr Davis), if all drug companies were to hold events like this when they launch new agents, rather than letting doctors decipher glossy drug ads in journals or from their drug reps, doctors would be far better educated than they are now when new drugs hit the market.

But this is a very slippery slope.  In fact, I can’t help but wonder if we may be too far down that slope already.  For better or for worse, Steve Stahl’s books have become de facto “standard” psychiatry texts, replacing classics like Kaplan & Sadock, the Oxford Textbook, and the American Psychiatric Press books.  Stahl’s concepts are easy to grasp and provide the paradigm under which most psychiatry is practiced today (despite his own misgivings—see above).  However, his industry ties are vast, and his “education” company, Neuroscience Education Institute (NEI), has close connections with medical communications companies who are basically paid mouthpieces for the pharmaceutical industry.  Case in point: Arbor Scientia, which was hired by Sunovion to organize yesterday’s symposium—and similar ones in other cities—shares its headquarters with NEI in Carlsbad, CA, and Mr Davis sits on NEI’s Board.

We may have already reached a point in psychiatry where the majority of what we consider “education” might better be described as marketing.  But where do we draw the line between the two?  And even after we answer that question, we must ask: when, if ever, is this a bad thing?  Yesterday’s Sunovion symposium may have been an infomercial, but I still felt there was a much greater emphasis on the “info-” part than the “-mercial.”  (And it’s unfortunate that I’d be reported as a recipient of pharmaceutical money if I had attended the conference after January 1, 2012, but that’s for another blog post.)  The question is, who’s out there to make sure it stays that way?

I’ve written before that I don’t know whom to trust anymore in this field.  Seemingly “objective” sources—like lectures from my teachers in med school and residency—can be heavily biased, while “advertising” (like yesterday’s symposium) can, at times, be fair and informative.  The end result is a very awkward situation in modern psychiatry that is easy to overlook, difficult to resolve, and, unfortunately, still ripe for abuse.
