The Problem With Organized Psychiatry

March 27, 2012

Well, it happened again.  I attended yet another professional conference this weekend (specifically, the annual meeting of my regional psychiatric society), and—along with all the talks, exhibits, and networking opportunities—came the call I’ve heard over and over again in venues like this one:  We must get psychiatrists involved in organized medicine.  We must stand up for what’s important to our profession and make our voices heard!!

Is this just a way for the organization to make money?  One would be forgiven for drawing this conclusion.  Annual dues are not trivial: membership in the society costs up to $290 per person, and also requires APA membership, which ranges from $205 to $565 per year.  But setting the money aside, the society firmly believes that we must protect ourselves and our profession.  Furthermore, the best way to do so is to recruit as many members as possible, and encourage members to stand up for our interests.

This raises one important question:  what exactly are we standing up for?  I think most psychiatrists would agree that we’d like to keep our jobs, and we’d like to get paid well, too.  (Oh, and benefits would be nice.)  But that’s about all the common ground that comes to mind.  The fact that we work in so many different settings makes it impossible for us to speak as a single voice or even (gasp!) to unionize.

Consider the following:  the conference featured a panel discussion by five early-career psychiatrists:  an academic psychiatrist; a county mental health psychiatrist; a jail psychiatrist; an HMO psychiatrist; and a cash-only private-practice psychiatrist.  What might all of those psychiatrists have in common?  As it turns out, not much.  The HMO psychiatrist has a 9-to-5 job, a stable income, and extraordinary benefits, but a restricted range of services, a very limited medication formulary and not much flexibility in what she can provide.  The private-practice guy, on the other hand, can do (and charge) essentially whatever he wants (a lot, as it turns out); but he also has to pay his own overhead.  The county psychiatrist wants his patients to have access to additional services (therapy, case management, housing, vocational training, etc) that might be irrelevant—or wasteful—in other settings.  The academic psychiatrist is concerned about his ability to obtain research funding, to keep his inpatient unit afloat, and to satisfy his department chair.  The jail psychiatrist wants access to substance abuse treatment and other vital services, and to help inmates make the transition back into their community safely.

Even within a given practice setting, different psychiatrists might disagree on what they want:  Some might want to see more patients, while delegating services like psychotherapy and case management to other providers.  On the other hand, some might want to spend more time with fewer patients and to be paid to provide these services themselves.  Some might want a more generous medication formulary, while others might argue that the benefits of medication are exaggerated and want patients to have access to other types of treatment.  Finally, some might lobby for greater access to pharmaceutical companies and the benefits they provide (samples, coupons, lectures, meals, etc.), while others might argue that pharmaceutical promotion has corrupted our field.

For most of the history of modern medicine, doctors have had a hard time “organizing” because there has been no entity worth organizing against.  Today, we have some definite targets: the Affordable Care Act, big insurance companies, hospital employers, pharmacy benefits managers, state and local governments, malpractice attorneys, etc.  But not all doctors see those threats equally.  (Many, in fact, welcome the Affordable Care Act with open arms.)  So even though there are, for instance, several unanswered questions as to how the ACA (aka “Obamacare”) might change the health-care-delivery landscape, its ramifications are, in the eyes of most doctors, too far removed from the day-to-day aspects of patient care to worry about.  Like everything else on that list, we shrug these things off as nuisances—the costs of doing business—and try to devote attention to our patients instead of agitating for change.

In psychiatry, the conflicts are particularly wide-ranging, and the stakes more poorly defined than elsewhere in medicine, making the targets of our discontent less clear.  One of the panelists put it best when she said: “There’s a lot of white noise in psychiatry.”  In other words, we really can’t figure out where we’re headed—or even where we want to head.  At one extreme, for instance, are those psychiatrists who argue (sometimes convincingly) that all psychiatry is a farce, that diagnoses are socially constructed entities with no external validity, and that “treatment” produces more harm than good.  At the other extreme are the DSM promoters and their ilk, arguing for greater access to effective treatment, the medicalization of human behavior, and the early recognition and treatment of mental illness—sometimes even before it develops.

Until we psychiatrists determine what we want the future of psychiatric care to look like, it will be difficult for us to jump on any common bandwagon.  In the meantime, the future of our field will be determined by those who do have a well-formed agenda and who can rally around a common goal.  At present, that includes the APA, insurance companies, Big Pharma, and government.  As for the rest of us, we’ll just pick up whatever scraps are left over, and “organize” after we’ve finished our charts, returned our calls, completed the prior authorizations, filed the disability paperwork, paid our bills, and said good-night to our kids.


The Well Person

March 21, 2012

What does it mean to be “normal”?  We’re all unique, aren’t we?  We differ from each other in so many ways.  So what does it mean to say someone is “normal,” while someone else has a “disorder”?

This is, of course, the age-old question of psychiatric diagnosis.  The authors of the DSM-5, in fact, are grappling with this very question right now.  Take grieving, for example.  As I and others have written, grieving is “normal,” although its duration and intensity vary from person to person.  At some point, a line may be crossed, beyond which a person’s grief is no longer adaptive but dangerous.  Where that line falls, however, cannot be determined by a book or by a committee.

Psychiatrists ought to know who’s healthy and who’s not.  After all, we call ourselves experts in “mental health,” don’t we?  Surprisingly, I don’t think we’re very good at this.  We are acutely sensitive to disorder but have trouble identifying wellness.  We can recognize patients’ difficulties in dealing with other people but are hard-pressed to describe healthy interpersonal skills.  We admit that someone might be able to live with auditory hallucinations but we still feel an urge to increase the antipsychotic dose when a patient says she still hears “those voices.”   We are quick to point out how a patient’s alcohol or marijuana use might be a problem, but we can’t describe how he might use these substances in moderation.  I could go on and on.

Part of the reason for this might lie in how we’re trained.  In medical school we learn basic psychopathology and drug mechanisms (and, by the way, there are no drugs whose mechanism “maintains normality”—they all fix something that’s broken).  We learn how to do a mental status exam, complete with full descriptions of the behavior of manic, psychotic, depressed, and anxious people—but not “normals.”  Then, in our postgraduate training, our early years are spent with the most ill patients—those in hospitals, locked facilities, or emergency settings.  It’s not until much later in training that we get to see relatively more functional individuals in an office or clinic.  But by that time, we’re already tuned in to deficits and symptoms, and not to personal strengths, abilities, or resilience-promoting factors.

In a recent discussion with a colleague about how psychiatrists might best serve a large population of patients (e.g., in a “medical home” model), I suggested that perhaps each psychiatrist could be responsible for a defined panel of people (say, 300 or 400 individuals).  Our job would be to see each of these 300-400 people at least once a year, regardless of whether they have a psychiatric diagnosis or not.  Those who have emotional or psychiatric complaints, or who have a clear mental illness, could be seen more frequently; the others would get their annual checkup and their clean bill of (mental) health.  It would be sort of like your annual medical visit or a “well-baby visit” in pediatrics:  a way for a person to be seen by a doctor, receive preventive care, and undergo screening to make sure no significant problems go unaddressed.

Alas, this would never fly in psychiatry.  Why not?  Because we’re too accustomed to seeing illness.  We’re too quick to interpret “sadness” as “depression”; to interpret “anxiety” or “nerves” as a cue for a benzodiazepine prescription; or to interpret “inattention” or poor work/school performance as ADHD.  I’ve even experienced this myself.  It is difficult to tell a person “you’re really doing just fine; there’s no need for you to see me, but if you want to come back, just call.”  For one thing, in many settings, I wouldn’t get paid for the visit if I said this.  But another concern, of course, is the fear of missing something:  Maybe this person really is bipolar [or whatever] and if I don’t keep seeing him, there will be a bad outcome and I’ll be responsible.

There’s also the fact that psychiatry is not a primary care specialty:  insurance plans don’t pay for an annual “well-person visit” with a psychiatrist.  Patients who come to a psychiatrist’s office are usually there for a reason.  Maybe the patient deliberately sought out the psychiatrist to ask for help.  Maybe their primary care provider saw something wrong and wanted the psychiatrist’s input.  In the former case, telling the person he or she is “okay” risks losing their trust (“but I just know something’s wrong, doc!”).  In the latter, it risks losing a referral source or professional relationship.

So how do we fix this?  I think we psychiatrists need to spend more time learning what “normal” really is.  There are no classes or textbooks on “Normal Adults.”  For starters, we can remind ourselves that the “normal” people we’ve been living among all our lives may in fact have features that we might otherwise see as a disorder.  Learning to accept these quirks, foibles, and idiosyncrasies may help us to accept them in our patients.

In terms of using the DSM, we need to become more willing to use the V71.09 code, which means, essentially, “No diagnosis or condition.”  Many psychiatrists don’t even know this code exists.  Instead, we give “NOS” diagnoses (“not otherwise specified”) or “rule-outs,” which eventually become de facto diagnoses because we never actually take the time to rule them out!  A V71.09 should be seen as a perfectly valid (and reimbursable) diagnosis—a statement that a person has, in fact, a clean bill of mental health.  Now we just need to figure out what that means.

It is said that when Pope Julius II asked Michelangelo how he sculpted David out of a marble slab, he replied: “I just removed the parts that weren’t David.”  In psychiatry, we spend too much time thinking about what’s not David and relentlessly chipping away.  We spend too little time thinking about the healthy figure that may already be standing right in front of our eyes.


Is The Criticism of DSM-5 Misguided? Part II

March 14, 2012

A few months ago, I wrote about how critics of the DSM-5 (led by Allen Frances, chair of the DSM-IV task force) might be barking up the wrong tree.  I argued that many of the problems the critics predict are not the fault of the book itself, but of how people might use it.  Admittedly, this sounds a lot like the “guns don’t kill people, people do” argument against gun control (as one of my commenters pointed out), or a way for me to shift responsibility to someone else (as another commenter wrote).  But it’s a side of the issue that no one seems to be addressing.

The issue emerges again with the ongoing controversy over the “bereavement exclusion” in the DSM-IV.  Briefly, our current DSM says that grieving over the loss of a loved one does not constitute major depression (as long as it doesn’t last more than two months) and, as such, should not be treated.  However, some have argued that this exclusion should be removed in DSM-5.  According to Sidney Zisook, a UCSD psychiatrist, if we fail to recognize and treat clinical depression simply because it occurs in the two-month bereavement period, we do those people a “disservice.”  Likewise, David Kupfer, chair of the DSM-5 task force, defends the removal of the bereavement exclusion because “if patients … want help, they should not be prevented from getting [it] because somebody tells them that this is what everybody has when they have a loss.”

The NPR news program “Talk of the Nation” featured a discussion of this topic on Tuesday’s broadcast, but the guests and callers described the issue in a more nuanced (translation: “real-world”) fashion.  Michael Craig Miller, Editor of the Harvard Mental Health Letter, referred to the grieving process by saying: “The reality is that there is no firm line, and it is always a judgment call…. labels tend not to matter as much as the practical concern, that people shouldn’t feel a sense of shame.  If they feel they need some help to get through something, then they should ask for it.”  The need for treatment during bereavement, therefore, is not a yes/no, either/or proposition, but something determined individually.

This sentiment was echoed in a February 19 editorial in The Lancet by the psychiatrist/anthropologist Arthur Kleinman, who wrote that the experience of loss “is always framed by meanings and values, which themselves are affected by all sorts of things like one’s age, health, financial and work conditions, and what is happening in one’s life and in the wider world.”  Everyone seems to be saying pretty much the same thing:  people grieve in different ways, but those who are suffering should have access to treatment.

So why the controversy?  I can only surmise it’s because the critics of DSM-5 believe that mental health clinicians are unable to determine who needs help and, therefore, have to rely on a book to do so.  Listening to the arguments of Allen Frances et al, one would think that we have no ability to collaborate with, empathize with, and relate to our patients.  I think that attitude is objectionable to anyone who has made it his or her life’s work to treat the emotional suffering of others, and it underestimates the effort that many of us devote to the people we serve.

But in some cases the critics are right.  Sometimes clinicians do get answers from the book, or from some senseless protocol (usually written by a non-clinician).  One caller to the NPR program said she was handed an antidepressant prescription upon her discharge from the hospital after a stillbirth at 8 months of pregnancy.  Was she grieving?  Absolutely.  Did she need the antidepressant?  No one even bothered to figure that out.  It’s like the clinicians who see “bipolar” in everyone who has anger problems; “PTSD” in everyone who was raised in a turbulent household; or “ADHD” in every child who does poorly in school.

If a clinician observes a symptom and makes a diagnosis simply on the basis of a checklist from a book, or from a single statement by a patient, and not on the basis of his or her full understanding, experience, and clinical assessment of that patient, then the clinician (and not the book) deserves to take full responsibility for any negative outcome of that treatment.  [And if this counts as acceptable practice, then we might as well fire all the psychiatrists and hire high-school interns—or computers!—at a mere fraction of the cost, because they could do this job just as well.]

Could the new DSM-5 be misused?  Yes.  Drug companies could (and probably will) exploit it to develop expensive and potentially harmful drugs.  Researchers will use it to design clinical trials on patients who, regrettably, may not resemble those in the “real world.”  Unskilled clinicians will use it to make imperfect diagnoses and give inappropriate labels to their patients.  Insurance companies will use the labels to approve or deny treatment.  Government agencies will use it to determine everything from who’s “disabled” to who gets access to special services in preschool.  And, of course, the American Psychiatric Association will use it as its largest revenue-generating tool, written by authors with extensive drug-industry ties.

To me, those are the places where critics should focus their rage.  But remember, to most good clinicians, it’s just a book—a field guide that helps us identify potential concerns and guide future research into mental illness and its treatment.  What we choose to do with such information depends upon our clinical acumen and our relationship with our patients.  To assume that clinicians will blindly use it to slap the “depression” label and force antidepressants on anyone whose spouse or parent just died “because the book said so” is insulting to those of us who actually care about our patients, and about what we do to improve their lives.


Two Psychiatries

March 12, 2012

A common—and ever-increasing—complaint of physicians is that so many variables interfere with our ability to diagnose and treat disease:  many patients have little or no access to preventive services; lots of people are uninsured; insurance plans routinely deny necessary care; drug formularies are needlessly restrictive; paperwork never ends; and the list goes on and on.  Beneath the frustration (and, perhaps, part of the source of it) is the fact that medical illness, for the most part, has absolutely nothing to do with these external burdens or socioeconomic inequalities.  Whether a patient is rich or poor, black or white, insured or uninsured—a disease is a disease, and everyone deserves the same care.

I’m not so sure whether the same can be said for psychiatry.  Over the last four years, I’ve spent at least part of my time working in community mental health (and have written about it here and here).  Before that, though—and for the majority of my training—I worked in a private, academic hospital setting.  I saw patients who had good health insurance, or who could pay for health care out of pocket.  I encountered very few restrictions in terms of access to medications or other services (including multiple types of psychotherapy, partial hospitalization programs, ECT, rTMS, clinical trials of new treatments, etc).  I was fortunate enough to see patients in specialty referral clinics, where I saw fascinating “textbook” cases of individuals who had failed to respond to years of intensive treatment.  It was exciting, stimulating, thought-provoking, and—for lack of a better word—academic.  (Perhaps it’s not surprising that this is the environment in which textbooks, and the DSM, are written.)

When I started working in community psychiatry, I tried to approach patients with the same curiosity and to employ the same diagnostic strategies and treatment approach.  It didn’t take long for me to learn, however, that these patients had few of the resources I had taken for granted elsewhere.  For instance, psychotherapy was difficult to arrange, and I was not reimbursed for doing it myself.  Access to medications depended upon capricious, unpredictable (and illogical) formularies.  Patients found it difficult to get to regular appointments or to come up with the co-payment, not to mention pay the electric bill or deal with crime in their neighborhood.  It was often hard to obtain a coherent and reliable history, and records obtained from elsewhere were often spotty and unhelpful.

It all made for a very challenging place in which to practice what I (self-righteously) called “true” psychiatry.  But maybe community psychiatry needs to be redefined.  Maybe the social stressors encountered by community psych patients—not the least of which is substandard access to “quality” medical and psychiatric services—result in an entirely different type of mental distress, and demand an entirely different type of intervention.

(I should point out that I did see, at times, examples of the same sort of mental illness I saw in the private hospital, and which did respond to the same interventions that the textbooks predicted.  While this reaffirmed my hope in the validity of [at least some] mental illnesses, this was a small fraction of the patients I saw.)

Should we alter our perceptions and definitions of illness—and of “psychiatry” itself—in public mental health?  Given the obstacles found in community psychiatry settings (absurdly brief appointment times; limited psychotherapy; poor prescription drug coverage; high rates of nonadherence and substance abuse; reliance on ERs for non-emergency care, often resulting in complicated medication regimens, like dangerous combinations of narcotics and benzodiazepines), should we take an entirely different approach?  Does it even make sense to diagnose the same disorders—not to mention put someone on “disability” for these disorders—when there are so many confounding factors involved?

One of my colleagues suggested: just give everyone an “adjustment disorder” diagnosis until you figure everything out.  Good idea, but you won’t get paid for diagnosing “adjustment disorder.”  So a more “severe” diagnosis must be given, followed closely thereafter by a medication (because many systems won’t let a psychiatrist continue seeing a patient unless a drug is prescribed).  Thus, in a matter of one or two office visits (totaling less than an hour in most cases), a Medicaid or uninsured patient might end up with a major Axis I diagnosis and medication(s), while the dozens of stressors that may have contributed to the office visit in the first place go unattended.

Can this change?  I sure hope so.  I firmly believe that everyone deserves access to mental health care.  (I must also point out that questionable diagnoses and inappropriate medication regimens can be found in any demographic.)  But we psychiatrists who work in community settings must not delude ourselves into thinking that what’s written in the textbooks or tested on our Board exams always holds true for the patients we see.  It’s almost as if we’re practicing a “different psychiatry,” one that requires its own diagnostic system, different criteria for “disability” determinations, a different philosophy of “psychotherapy,” and a much more conservative approach to medications.  (It might also help to perform clinical trials with subjects representative of those seen in community psychiatry, but given the complexity of these patients, that seems highly unlikely.)

Fortunately, a new emphasis on the concept of “recovery” is taking hold in many community mental health settings.  This involves patient empowerment, self-direction, and peer support, rather than a narrow focus on diagnosis and treatment.  For better or for worse, such an approach relies less on the psychiatrist and more on peers and the patient him- or herself.  It also just seems much more rational, emphasizing what patients want and what helps them to succeed.

Whether psychiatrists—and community mental health as a whole—are able to follow this trend remains to be seen.  Unless we do, however, I fear that we may continue to mislead ourselves into believing that we’re doing good, when in fact we’re perpetuating a cycle of invalid diagnoses, potentially harmful treatment, and, worst of all, over-reliance on a system designed for a distinctly different type of “care” than what these individuals need and deserve.


How To Think Like A Psychiatrist

March 4, 2012

The cornerstone of any medical intervention is a sound diagnosis.  Accurate diagnosis guides the proper treatment, while an incorrect diagnosis might subject a patient to unnecessary procedures or excessive pharmacotherapy, and it may further obscure the patient’s true underlying condition.  This is true for all medical specialties—including psychiatry.  It behooves us, then, to examine the practice of clinical decision-making, how we do it, and where we might go wrong, particularly in the area of psychiatric diagnosis.

According to Pat Croskerry, a physician at Dalhousie University in Canada, the foundation of clinical cognition is the “dual process model,” first described by the Greek philosophers (and reviewed here).  This model proposes that people solve problems using one of two “processes”:  Type 1 processes involve intuition and are largely automatic, fast, and unconscious (e.g., recognizing a friend’s face).  Type 2 processes are more deliberate, analytical, and systematic (e.g., planning the best route for an upcoming trip).  Doctors use both types when making a diagnosis, but the relative emphasis varies with the setting.  In the ED, quick action based on pattern recognition (i.e., a Type 1 process) is crucial.  Sometimes, however, it may be wrong, particularly if other conditions aren’t evaluated and ruled out (i.e., a Type 2 process).  For instance, a patient with flank pain, nausea, vomiting, and hematuria demonstrates the “pattern” of a kidney stone (common), but may in fact have a dissecting aortic aneurysm (uncommon).

This model is valuable for understanding how we arrive at psychiatric diagnoses (a figure in a 2009 article by Croskerry illustrates it).  When evaluating a patient for the first time, a psychiatrist often looks at “the big picture”:  Does this person appear to have a mood disorder, psychosis, anxiety, a personality disorder?  Have I seen this type of patient before?  What’s my general impression of this person?  In other words, the assessment relies heavily on Type 1 processes, using heuristics and “Gestalt” impressions.  But Type 2 processes are also important.  We must inquire about specific symptoms, treatment history, and social background; we might order tests or review old records, which may change our initial perception.

Sound clinical decision-making, therefore, requires both processes.  Unfortunately, both are highly prone to error.  In fact, Croskerry identifies at least 40 cognitive biases, which occur when the processes are not adapted to the specific task at hand.  For instance, we tend to use Type 1 processes more frequently than we should.  Many psychiatrists, particularly those seeing a large volume of patients for short periods of time, often see patterns earlier than is warranted, and rush to diagnoses without fully considering all possibilities.  In other words, they fall victim to what psychologist Keith Stanovich calls “dysrationalia,” or the inability to think or act rationally despite adequate intelligence.  In the dual process model, dysrationalia lets a Type 1 hunch “override” Type 2 processes (“I don’t need to do a complete social history, I just know this patient has major depression”), leading to diagnostic failure.

Croskerry calls this the “cognitive miser” function: we rely on processes that consume fewer cognitive resources because we’re cognitively lazy.  The alternative would be to switch to a Type 2 process—a more detailed evaluation, using deductive, analytic reasoning.  But this takes great effort and time.  Moreover, when a psychiatrist switches to a “Type 2” mode, he or she asks questions that are nonspecific in nature (largely owing to the unreliability of some DSM-IV diagnoses), or questions that merely confirm the initial “Type 1” hunch.  In other words, we end up finding what we expect to find.

The contrast between Type 1 and Type 2 processes is most apparent when we observe people operating at either end of the spectrum.  Some psychiatrists see patterns in every patient (e.g., “I could tell he was bipolar as soon as he walked into my office”—a classic error called the representativeness heuristic), even though they rarely ask about specific symptoms, let alone test alternative hypotheses.  On the other hand, medical students and young clinicians often work exclusively in Type 2 mode; they ask very thorough questions, covering every conceivable alternative and every symptom in the DSM-IV (even irrelevant ones).  As a result, they get frustrated when they can’t determine a precise diagnosis or, alternatively, they come up with a diagnosis that might “fit” the data but completely misses the mark regarding the underlying essence of the patient’s suffering.

Croskerry writes that the most accurate clinical decision-making occurs when a physician can switch between Type 1 and Type 2 processes as needed, a capacity known as metacognition.  Metacognition requires a certain degree of humility, a willingness to re-examine one’s decisions in light of new information.  It also demands that the doctor be able to recognize when he or she is not performing well and be willing to self-monitor and self-criticize.  To do this, Croskerry recommends that we develop “cognitive forcing strategies,” deliberate interventions that force us to think more consciously and deliberately about the problem at hand.  This may help us to be more accurate in our assessments:  in other words, to see both the trees for the forest, and the forest for the trees.
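To make the interplay concrete, here is a toy sketch in Python.  It is not Croskerry’s actual formalism and certainly not a validated diagnostic tool; every rule, label, and threshold below is invented purely for illustration.  The idea is simply that a fast pattern-match produces a provisional label, and a cognitive forcing rule refuses to accept a low-confidence hunch until a slower, criteria-based review has been run.

    # Toy illustration only: invented names and thresholds, not Croskerry's
    # model or any validated diagnostic instrument.

    def type1_impression(presentation):
        """Fast, automatic pattern-match: return (provisional label, confidence)."""
        if {"tearful", "low energy"} <= presentation:
            return "major depression", 0.8
        if "irritable" in presentation:
            return "bipolar disorder", 0.5   # the classic premature pattern-match
        return "unclear", 0.2

    def type2_review(label, presentation):
        """Slow, deliberate check: does the label survive a criteria review?"""
        required = {
            "major depression": {"low mood >= 2 weeks", "functional impairment"},
            "bipolar disorder": {"discrete manic or hypomanic episode"},
        }
        return required.get(label, set()) <= presentation

    def assess(presentation, forcing_threshold=0.9):
        """Cognitive forcing strategy: accept a Type 1 hunch outright only if
        confidence is very high; otherwise force a Type 2 review first."""
        label, confidence = type1_impression(presentation)
        if confidence >= forcing_threshold:
            return label
        if type2_review(label, presentation):
            return label
        return "defer: gather more history, collateral, and old records"

    # The hunch ("bipolar disorder") never survives the deliberate check,
    # so the forcing rule defers rather than committing to a label.
    print(assess({"irritable", "poor sleep"}))

The point is not the particular rules, which are deliberately simplistic, but the structure: the slow path exists precisely to veto the fast one when the fast one is underdetermined.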

This could be a hard sell.  Doctors can be a stubborn bunch.  Clinicians who insist on practicing Type 2, “checklist”-style medicine (e.g., in a clinical trial) may be unwilling to consider the larger context in which specific symptoms arise, or they may not have sufficient understanding of that context to see how it might impact a patient.  On the other hand, clinicians who rush to judgment based on first impressions (a Type 1 process) may be annoyed by any suggestion that they should slow down and be more thorough or methodical.  Not to mention the fact that being more thorough takes more time.  And as we all know, time is money.

I believe that all psychiatrists should heed the dual-process model and ask how it influences their practice.  Are you too quick to label and diagnose, owing to your “dysrational” (Type 1) impulses?  On the other hand, if you use established diagnostic criteria (Type 2), are you measuring anything useful?  Should you use a cognitive forcing strategy to avoid over-reliance on one type of decision-making?  If you continue to rely on pattern recognition (Type 1 process), then what other data (Type 2) should you collect?  Treatment history?  A questionnaire?  Biomarkers?  A comprehensive assessment of social context?  And ultimately, how do you use this information to diagnose a “disorder” in a given individual?

These are just a few questions that the dual process model raises.  There are no easy answers, but anything that challenges us to be better physicians and avoid clinical errors, in my opinion, is well worth our time, attention, and thought.


Sleeping Pills Are Deadly? Says Who, Exactly?

March 1, 2012

As most readers know, we’re paying more attention than ever before to conflicts of interest in medicine.   If an individual physician, researcher, speaker, or author is known to have a financial relationship with a drug company, we publicize it.  It’s actually federal law now.  The idea is that doctors might be biased by drug companies who “pay” them (either directly—through gifts, meals, or cash—or indirectly, through research or educational grants) to say or write things that are favorable to their drug.

A recent article on the relationship between sedative/hypnotics and mortality, published this week in BMJ Open (an open-access journal from the BMJ Group) and widely publicized, raises additional questions about the conflicts and biases that individual researchers bring to their work.

Co-authors Daniel Kripke, of UC San Diego, and Robert Langer, of the Jackson Hole Center for Preventive Medicine, reviewed the electronic charts of over 30,000 patients in a rural Pennsylvania health plan.  Approximately 30% of those patients received at least one prescription for a hypnotic (a benzodiazepine like Klonopin or Restoril, or a sleeping agent like Lunesta or Ambien) during the five-year study period, and there was a strong association between hypnotic prescriptions and risk of death.  The more prescriptions one received, the greater the likelihood that one would die during the study period.  There was also a specific increase in cancer risk among the groups receiving the largest numbers of hypnotic prescriptions.

The results have received wide media attention.  Mainstream media networks, major newspapers, popular websites, and other outlets have run with sensational headlines like “Higher Death Risk With Sleeping Pills” and “Sleeping Pills Can Bring On the Big Sleep.”

But the study has received widespread criticism, too.  Many critics have pointed out that concurrent psychiatric diagnoses were not addressed, so mortality may have been related more to suicide or substance abuse.  Others have pointed to the likelihood of Berkson’s bias—that the cases (those who received hypnotic prescriptions) may have been far sicker than controls, despite attempts to match them.  The study also failed to report other medications patients received (like opioids, which can be dangerous when combined with sedative/hypnotics) or to control for socioeconomic status.
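To see why that criticism matters, consider a minimal simulation, purely illustrative and with invented numbers that have nothing to do with the actual BMJ data: if an unmeasured “severity of illness” variable drives both who gets a hypnotic prescription and who dies, the drug can show a strong mortality association even when it has no effect at all.

    # Purely illustrative: invented numbers, not the Kripke/Langer data.
    # Shows how confounding by indication can produce a drug-mortality
    # association when the drug itself has no causal effect on death.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 30_000

    # Unmeasured severity of illness: sicker patients are more likely both
    # to receive a hypnotic prescription and to die during follow-up.
    severity = rng.normal(size=n)

    p_rx = 1 / (1 + np.exp(-(severity - 1.0)))      # sicker -> more prescriptions
    got_hypnotic = rng.random(n) < p_rx

    p_death = 1 / (1 + np.exp(-(severity - 3.0)))   # sicker -> higher mortality
    died = rng.random(n) < p_death                  # note: no drug term anywhere

    exposed_rate = died[got_hypnotic].mean()
    unexposed_rate = died[~got_hypnotic].mean()
    print(f"Death rate, exposed:   {exposed_rate:.3f}")
    print(f"Death rate, unexposed: {unexposed_rate:.3f}")
    print(f"Apparent risk ratio:   {exposed_rate / unexposed_rate:.1f} (true effect: none)")

Matching and adjustment remove this artifact only to the extent that severity is actually measured; whatever remains unmeasured keeps inflating the apparent risk, which is exactly the critics’ concern.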

What has not received a lot of attention, however, is the philosophical (and financial) bias of the authors.  Lead author Daniel Kripke has been, for many years, an outspoken critic of the sleeping pill industry.  He has also widely criticized the conventional wisdom that people need 8 or more hours of sleep per night.  He has written books about it, and was even featured on the popular Showtime TV show “Penn & Teller: Bullshit!” railing against the drug companies (and doctors) who profit from sleeping pills.  Kripke is also one of the pioneers of “bright light therapy” (using high-intensity light to affect circadian rhythms)—first in the area of depression, and, most recently, to improve sleep.  To the best of my knowledge, he has no financial ties to the makers of light boxes.  Then again, light boxes are technically not medical devices and, therefore, are not regulated by the FDA, so he may not be required to report any affiliation.  Nevertheless, he clearly has had a decades-long professional interest in promoting light therapy and demonizing sleeping pills.

Kripke’s co-author, Robert Langer, is an epidemiologist, a past site coordinator of the Women’s Health Initiative, and a staunch advocate of preventive medicine.  More importantly, though (and advertised prominently on his website), he is an expert witness in litigation involving hormone replacement therapy (HRT), and also in cancer malpractice cases.  Like Kripke, he has also found a place in the media spotlight; he will be featured in “Hot Flash Havoc,” a movie about HRT in menopausal women, to be released later this month.

[Interestingly, Kripke and Langer also collaborated on a 2011 study showing that sleep times >6.5 hrs or <5 hrs were associated with increased mortality.  One figure in that paper looked virtually identical to figure 1 in their BMJ paper.  It would be interesting to know whether mortality in the current study is indeed due to sedative prescriptions or, if the results of their earlier paper are correct, simply due to the fact that the people requesting sedative prescriptions in the first place are the ones with compromised sleep and, therefore, increased mortality.  In other words, maybe the sedative is simply a marker for something else causing mortality—the same argument raised above.]

Do the authors’ backgrounds bias their results?  If Kripke and Langer were receiving grants and speakers’ fees from Forest Labs, and published an article extolling the benefits of Viibryd, Forest’s new antidepressant, how would we respond?  Might we dig a little deeper?  Approach the paper with more skepticism?  Are the media publicizing this study (largely uncritically) because its conclusion resonates with the “politically correct” idea that psychotropic medications are bad?  Michael Thase (a long-time pharma-sponsored researcher and U Penn professor) was put in the hot seat on “60 Minutes” a few weeks ago about whether antidepressants provide any benefit, but Kripke and Langer—two equally prominent researchers—seem to be getting a free ride, as far as the media are concerned.

I’m not trying to defend the drug industry, and I’m certainly not defending sedatives.  My own bias is that I prefer to minimize the use of hypnotics in my patients—though my reluctance stems not so much from their cancer or mortality risk as from the risk of abuse and dependence and their effects on other psychiatric and medical symptoms.  The bottom line is, I want to believe the BMJ study.  But more importantly, I want the medical literature to be objective, fair, and unbiased.

Unfortunately, it’s hard—if not impossible—to avoid bias, particularly when you’ve worked in a field for many years (like Kripke and Langer) and have a strong belief about why things are the way they are.  In such a case, it seems almost natural that you’d want to publish research providing evidence in support of your belief.  But when does a strongly held belief become a conflict of interest?  Does it contribute to a bias in the same way that a psychopharmacologist’s financial affiliation with a drug company might?

These are just a few of the questions we’ll need to pay closer attention to as we continue to disclose conflicts of interest among medical professionals.  Sometimes bias is obvious and driven by one’s pocketbook; other times it is more subtle and rooted in one’s beliefs and experience.  But we should always be wary of the ways in which it can compromise scientific objectivity and obscure what’s really true.

