 

Sleeping Pills Are Deadly? Says Who, Exactly?

March 1, 2012

As most readers know, we’re paying more attention than ever before to conflicts of interest in medicine.   If an individual physician, researcher, speaker, or author is known to have a financial relationship with a drug company, we publicize it.  It’s actually federal law now.  The idea is that doctors might be biased by drug companies who “pay” them (either directly—through gifts, meals, or cash—or indirectly, through research or educational grants) to say or write things that are favorable to their drug.

A recent article on the relationship between sedative/hypnotics and mortality, published this week in BMJ Open (the online version of the British Medical Journal) and widely publicized, raises additional questions about the conflicts and biases that individual researchers bring to their work.

Co-authors Daniel Kripke, of UC San Diego, and Robert Langer, of the Jackson Hole Center for Preventive Medicine, reviewed the electronic charts of over 30,000 patients in a rural Pennsylvania health plan.  Approximately 30% of those patients received at least one prescription for a hypnotic (a benzodiazepine like Klonopin or Restoril, or a sleeping agent like Lunesta or Ambien) during the five-year study period, and there was a strong relationship between hypnotics and risk of death.  The more prescriptions one received, the greater the likelihood that one would die during the study period.  There was also a specifically increased risk of cancer in groups receiving the largest number of hypnotic prescriptions.

The results have received wide media attention.  Mainstream media networks, major newspapers, popular websites, and other outlets have run with sensational headlines like “Higher Death Risk With Sleeping Pills” and “Sleeping Pills Can Bring On the Big Sleep.”

But the study has received widespread criticism, too.  Many critics have pointed out that concurrent psychiatric diagnoses were not addressed, so mortality may have been related more to suicide or substance abuse.  Others point out the likelihood of Berkson’s Bias—the fact that the cases (those who received hypnotic prescriptions) may have been far sicker than controls, despite attempts to match them.  The study also failed to report other medications patients received (like opioids, which can be dangerous when given with sedative/hypnotics) or to control for socioeconomic status.
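To see why these critiques matter, consider a purely hypothetical simulation (a minimal Python sketch with invented numbers, not the authors’ data): if sicker patients are both more likely to receive hypnotic prescriptions and more likely to die, a convincing dose-response gradient emerges even when the drug itself contributes nothing to mortality.

```python
import random

random.seed(0)

# Hypothetical scenario: unmeasured illness severity drives BOTH hypnotic
# prescribing AND mortality; the simulated drug has zero effect on death.
N = 30_000
deaths_by_group = {"none": [0, 0], "some": [0, 0], "many": [0, 0]}  # [deaths, total]

for _ in range(N):
    severity = random.random()                     # unmeasured illness severity, 0..1
    # Sicker patients are more likely to be prescribed a hypnotic, and get more refills.
    n_scripts = 0
    if random.random() < 0.1 + 0.4 * severity:
        n_scripts = 1 + int(severity * 10 * random.random())
    # Mortality depends ONLY on severity, not on the prescriptions.
    died = random.random() < 0.01 + 0.10 * severity

    group = "none" if n_scripts == 0 else ("some" if n_scripts <= 3 else "many")
    deaths_by_group[group][0] += died
    deaths_by_group[group][1] += 1

for group, (d, n) in deaths_by_group.items():
    print(f"{group:>5}: {100 * d / n:.1f}% died (n={n})")
# Typical output shows mortality rising with prescription count even though the
# simulated drug is harmless; the apparent dose-response is driven by severity.
```

The point is not that this is what happened in the BMJ study, only that an unmeasured marker of illness severity can manufacture exactly the kind of gradient the authors report.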

What has not received a lot of attention, however, is the philosophical (and financial) bias of the authors.  Lead author Daniel Kripke has been, for many years, an outspoken critic of the sleeping pill industry.  He has also widely criticized the conventional wisdom that people need 8 or more hours of sleep per night.  He has written books about it, and was even featured on the popular Showtime TV show “Penn & Teller: Bullshit!” railing against drug companies (and doctors) who profit by prescribing sleep meds.  Kripke is also one of the pioneers of “bright light therapy” (using high-intensity light to affect circadian rhythms)—first in the area of depression, and, most recently, to improve sleep.  To the best of my knowledge, he has no financial ties to the makers of light boxes.  Then again, light boxes are technically not medical devices and, therefore, are not regulated by the FDA, so he may not be required to report any affiliation.  Nevertheless, he clearly has had a decades-long professional interest in promoting light therapy and demonizing sleeping pills.

Kripke’s co-author, Robert Langer, is an epidemiologist, a past site coordinator of the Women’s Health Initiative, and a staunch advocate of preventive medicine.  More importantly, though (and advertised prominently on his website), he is an expert witness in litigation involving hormone replacement therapy (HRT), and also in cancer malpractice cases.  Like Kripke, he has also found a place in the media spotlight; he will be featured in “Hot Flash Havoc,” a movie about HRT in menopausal women, to be released later this month.

[Interestingly, Kripke and Langer also collaborated on a 2011 study showing that sleep times >6.5 hrs or <5 hrs were associated with increased mortality.  One figure looked virtually identical to figure 1 in their BMJ paper (see below).  It would be interesting to know whether mortality in the current study is indeed due to sedative prescriptions or, if the results of their earlier paper are correct, simply due to the fact that the people requesting sedative prescriptions in the first place are the ones with compromised sleep and, therefore, increased mortality.  In other words, maybe the sedative is simply a marker for something else causing mortality—the same argument raised above.]

Do the authors’ backgrounds bias their results?  If Kripke and Langer were receiving grants and speakers’ fees from Forest Labs, and published an article extolling the benefits of Viibryd, Forest’s new antidepressant, how would we respond?  Might we dig a little deeper?  Approach the paper with more skepticism?  Are the media publicizing this study (largely uncritically) because its conclusion resonates with the “politically correct” idea that psychotropic medications are bad?  Michael Thase (a long-time pharma-sponsored researcher and U Penn professor) was put in the hot seat on “60 Minutes” a few weeks ago about whether antidepressants provide any benefit, but Kripke and Langer—two equally prominent researchers—seem to be getting a free ride, as far as the media are concerned.

I’m not trying to defend the drug industry, and I’m certainly not defending sedatives.  My own bias is that I prefer to minimize my use of hypnotics in my patients—although my reluctance stems not so much from their cancer or mortality risk as from the risk of abuse, dependence, and their effect on other psychiatric and medical symptoms.  The bottom line is, I want to believe the BMJ study.  But more importantly, I want the medical literature to be objective, fair, and unbiased.

Unfortunately, it’s hard—if not impossible—to avoid bias, particularly when you’ve worked in a field for many years (like Kripke and Langer) and have a strong belief about why things are the way they are.  In such a case, it seems almost natural that you’d want to publish research providing evidence in support of your belief.  But when does a strongly held belief become a conflict of interest?  Does it contribute to a bias in the same way that a psychopharmacologist’s financial affiliation with a drug company might?

These are just a few questions that we’ll need to pay closer attention to, as we continue to disclose conflicts of interest among medical professionals.  Sometimes bias is obvious and driven by one’s pocketbook; other times it is more subtle and rooted in one’s beliefs and experience.  But we should always be wary of the ways in which it can compromise scientific objectivity or lead us to question what’s really true.


Mental Illness and Social Realities

May 2, 2011

Does the definition of “mental illness” differ from place to place?  Is there a difference between “depression” in a poor individual and in one of means?  Are the symptoms identical?  What about the neurobiology?  The very concept of a psychiatric “disease” implies that certain core features of one’s illness transcend the specifics of a person’s social or cultural background.  Nevertheless, we know that disorders look quite different, depending on the setting in which they arise.  This is why people practice psychiatry, not computers or checklists.  (Not yet, at least.)

However, sometimes a person’s environment can elicit reactions and behaviors that might appear—even to a trained observer—as mental illness.  If unchecked, this may create an epidemic of “disease” where true disease does not exist.  And the consequences could be serious.

—–

For the last three years, I have had the pleasure of working part-time in a community mental health setting.  Our clinic primarily serves patients on Medicaid and Medicare, in a gritty, crime-ridden expanse of a major city.  Our patients are, for the most part, impoverished, poorly educated, have little or no access to primary care services, and live in communities ravaged by substance abuse, crime, unemployment, familial strife, and a deep, pervasive sense of hopelessness.

Even though our resources are extremely limited, I can honestly say that I have made a difference in the lives of hundreds, if not thousands, of individuals.  But the experience has led me to question whether we are too quick to make psychiatric diagnoses for the sake of convenience and expediency, rather than on the basis of a fair, objective, and thorough evaluation.

Almost predictably, patients routinely present with certain common complaints:  anxiety, “stress,” insomnia, hopelessness, fear, worry, poor concentration, cognitive deficits, etc.  Each of these could be considered a feature of a deeper underlying disorder, such as an anxiety disorder, major depression, psychosis, thought disorder, or ADHD.  Alternatively, they might also simply reflect the nature of the environment in which the patients live, or the direct effects of other stressors that are unfortunately too familiar in this population.

Given the limitations of time, personnel, and money, we don’t usually have the opportunity for a thorough evaluation, collaborative care with other professionals, and frequent follow-up.  But psychiatric diagnostic criteria are vague, and virtually everyone who walks into my office endorses symptoms for which it would be easy to justify a diagnosis.  The “path of least resistance” is often to do precisely that, and move to the next person in the long waiting-room queue.

This tendency to “knee-jerk” diagnosis is even greater when patients have already had some interaction—however brief—with the mental health system:  for example, a patient who visited a local crisis clinic and was given a diagnosis of “bipolar disorder” (on the basis of a 5-minute evaluation) and a 14-day supply of Zyprexa, and told to “go see a psychiatrist”; or the patient who mentioned “anxiety” to the ER doc in our county hospital (note: he has no primary care MD), was diagnosed with panic disorder, and prescribed PRN Ativan.

We all learned in our training (if not from a careful reading of the DSM-IV) that a psychiatric diagnosis should be made only when other explanations for symptoms can be ruled out.  Psychiatric treatment, moreover, should be implemented in the safest possible manner, and include close follow-up to monitor patients’ response to these interventions.

But in my experience, once a patient has received a diagnosis, it tends to stick.  I frequently feel an urge to un-diagnose patients, or, at the very least, to have a discussion with them about their complaints and develop a course of treatment—which might involve withholding medications and implementing lifestyle changes or other measures.  Alas, this takes time (and money—at least in the short run).  Furthermore, if a person already believes she has a disorder (even if it’s just “my mother says I must be bipolar because I have mood swings all the time!!!”), or has experienced the sedative, “calming,” “relaxing” effect of Seroquel or Klonopin, it’s difficult to say “no.”

There are consequences of a psychiatric diagnosis.  It can send a powerful message.  It might absolve a person of his responsibility to make changes in his life—changes which he might indeed have the power to make.  Moreover, while some see a diagnosis as stigmatizing, others may see it as a free ticket to powerful (and potentially addictive) medications, as well as a variety of social services, from a discounted annual bus pass, to in-home support services, to a lifetime of Social Security disability benefits.  Very few people consciously abuse the system for their own personal gain, but the system is set up to keep this cycle going.  For many, “successful” treatment means staying in that cycle for the rest of their lives.

—–

The patients who seek help in a community mental health setting are, almost without exception, suffering in many ways.  That’s why they come to see us.  Some clinics do provide a wide assortment of services, including psychotherapy, case management, day programs, and the like.  For the truly mentally ill, these can be a godsend.

For many who seek our services, however, the solutions that would more directly address their suffering—like safer streets, better schools, affordable housing, stable families, less access to illicit drugs, etc.—are difficult or costly to implement, and entirely out of our hands.  In cases such as these, it’s unfortunately easier to diagnose a disease, prescribe a drug which (in the words of one of my colleagues) “allows them to get through just one more night,” and make poor, unfortunate souls even more dependent on a system which sees them as hopeless and unable to emerge from the chaos of their environment.

In my opinion, that’s not psychiatry.  But it’s being practiced every day.


Obesity-Related Anxiety: A Me-Too Disease?

April 15, 2011

Psychiatry seems to have a strange fascination with labels.  (I would say it has an obsession with labels, but then it would be labeled OCD.)  We’re so concerned with what we call something that we sometimes ignore the real phenomena staring us in the face every day.

Consider social anxiety disorder (SAD).  Some have argued that this is simply a technical, high-falutin’ label for general shyness, which even “normal” people experience in varying degrees.  There are indeed cases in which someone’s shyness can be horribly incapacitating—and these cases usually benefit from specialized treatment—but there also exists a broad gradient of social anxiety that we all experience.  If I spend too much time worrying about whether the shy patient in my office meets specific criteria for SAD, I might lose sight of why he came to my office in the first place.

So a news story this week caught my eye, with the headline “Obese People Can Suffer From Social Anxiety Due to Weight Alone.”  To a non-psychiatrist, this statement probably seems self-evident: people who are overweight or obese (just like people with any other aspect of their physical appearance that makes them appear “different from normal”) might be anxious or uncomfortable in social settings, simply because of their weight.

This discomfort doesn’t meet criteria for a DSM-IV diagnosis, though.  (At this point, you might ask, but who cares?  Good question—I’ll get to that below.)  The DSM-IV specifies that the symptoms of social anxiety must be unrelated to any medical condition (of which obesity could be considered one).  So if you’re overly self-conscious in social situations due to your weight, or due to an unsightly mole on your face, or due to a psoriasis flare-up, or because you’re a dwarf, sorry, you don’t “qualify” as SAD.

Apparently some researchers want to change this.  In a study to be published this month in the journal Depression and Anxiety, researchers at Brown University and Rhode Island Hospital investigated a large number of obese individuals and found that some of them have social anxiety due to their weight and nothing else, resulting in “greater impairment in social life and greater distress about their social anxiety” than those obese patients who had been diagnosed with (non-obesity-related) SAD earlier in life.  They argue that we should expand the diagnostic criteria in the upcoming DSM-5 to include these folks.  (Indeed, the subtitle of the article in question is “Implications for a Proposed Change in DSM-5.”)

An investigation of their methods, though, reveals that their key finding may have been a foregone conclusion from the start.  Here’s what they did: They interviewed 1,800 people who were being evaluated for weight loss surgery.  (A pre-op comprehensive psychiatric evaluation is often a requirement for bariatric surgery.)  Of those, 616 had no psychiatric history whatsoever, while 135 had been diagnosed with SAD at some point in their lives.  But then they found 40 additional people whom they labeled as having something they called “modified SAD,” or “clinically significant social anxiety … only related to weight concerns.”  The paper demonstrates that this “modified SAD” group had psychosocial characteristics (like work/social impairment, past/current social functioning, etc.) which were strikingly similar to patients with SAD.

But wait a minute… they admit they “labeled” a subset of patients with something that resembled SAD.  So in other words, they pre-selected people with SAD-like symptoms, and then did the analysis to show that, sure enough, they looked like they have SAD!  It’s sort of like taking all the green M&Ms out of a bowl and then performing a series of chemical and physical tests to prove that they are green.  OK, maybe I shouldn’t have used a food analogy, but you get my point…
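For the programming-minded, the circularity reduces to a toy sketch (invented numbers and labels, nothing from the actual dataset): define a subgroup by the very feature you intend to measure, and the comparison is decided before the analysis begins.

```python
import random

random.seed(1)

# Toy cohort of 1,800 hypothetical impairment scores on an arbitrary 0-100-ish scale.
cohort = [random.gauss(50, 15) for _ in range(1800)]

# Step 1: label a "modified SAD" subgroup using the very feature of interest.
modified_sad = [score for score in cohort if score > 75]

# Step 2: "demonstrate" that the subgroup looks impaired.
print("cohort mean impairment:        ", round(sum(cohort) / len(cohort), 1))
print("'modified SAD' mean impairment:", round(sum(modified_sad) / len(modified_sad), 1))
# The subgroup scores high because that is how it was selected,
# not because the analysis uncovered anything new about it.
```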

I don’t mean to weigh too heavily (no pun intended) on the study’s authors (for one thing, the lead author shared a draft of the article with me prior to publication).  I know why articles like this are written; I’m aware that the medical exclusion has made it impossible for us to diagnose SAD in many people who actually have debilitating anxiety due to some obvious cause, like obesity or stuttering.  And this is relevant because we have to give a DSM code in order to be paid for the services we provide.  As with much in life, it’s often all about the money.

But if that’s the only reason we’re squabbling over whether obesity-related anxiety deserves the DSM seal of approval, then I’m sorry, but it’s another example of psychiatrists and psychologists missing the point.  Whether we call something SAD—or depression, or panic disorder, or ADHD, or bipolar disorder, or whatever—means less to the patient than what he or she actually experiences.  Admittedly, we do have to give a “diagnosis” at some point, but we need to ensure our diagnoses don’t become so homogenized that we end up looking at all of our patients through the same lens.

The 40 obese Rhode Islanders who are socially distressed due to their weight probably don’t care whether they’re labeled “SAD,” “modified SAD,” or anything else; they just want help.  They want to feel better, and we owe it to them to get our heads out of our DSMs and back into the therapeutic setting where they belong.


Stress, Illness, and Biological Determinism

March 27, 2011

Two interesting articles caught my attention this week, on the important subject of “stress” and its relationship to human disease—both psychological and physical.  Each offers some promising ways to prevent stress-related disease, but they also point out some potential biases in precisely how we might go about doing so.

A piece by Paul Tough in the New Yorker profiled Nadine Burke, a San Francisco pediatrician (the article is here, but it’s subscription-only; another link might be here).  Burke works in SF’s poverty-stricken Bayview-Hunters Point neighborhood, where health problems are rampant.  She recognized that in this population, the precursors of disease are not just the usual suspects like poor access to health care, diet/lifestyle, education, and high rates of substance use, but also the impact of “adverse childhood experiences” or ACEs.

Drawing upon research by Vincent Felitti and Robert Anda, Burke found that patients who were subjected to more ACEs (such as parental divorce, physical abuse, emotional neglect, being raised by a family member with a drug problem, etc.) had worse outcomes as adults.  These early traumatic experiences had an effect on the development of illnesses such as cancer, heart disease, respiratory illness, and addiction.

The implication for public health, obviously, is that we must either limit exposure to stressful events in childhood, or decrease their propensity to cause long-term adverse outcomes.  The New Yorker article briefly covers some biological research in the latter area, such as how early stress affects DNA methylation in rats, and how inflammatory markers like C-reactive protein are elevated in people who were mistreated as children.  Burke is quoted as saying, “In many cases, what looks like a social situation is actually a neurochemical situation.”  And a Harvard professor claims, “this is a very exciting opportunity to bring biology into early-childhood policy.”

With words like “neurochemical” and “biology” (not to mention “exciting”) being used this way, it doesn’t take much reading-between-the-lines to assume that the stage is being set for a neurochemical intervention, possibly even a “revolution.”  One can almost hear the wheels turning in the minds of academics and pharmaceutical execs, who are undoubtedly anticipating an enormous market for endocrine modulators, demethylating agents, and good old-fashioned antidepressants as ways to prevent physical disease in the children of Hunters Point.

To its credit, the article stops short of proposing that all kids be put on drugs to eliminate the effects of stress.  The author emphasizes that Burke’s clinic engages in biofeedback, child-parent therapy, and other non-pharmacological interventions to promote secure attachment between child and caregiver.  But in a society that tends to favor the “promises” of neuropharmacology—not to mention patients who might prefer the magic elixir of a pill—is this simply window-dressing?  A way to appease patients and give the impression of doing good, until the “real” therapies, medications, become available?

More importantly, are we expecting drugs to reverse the effects of social inequities, cultural disenfranchisement, and personal irresponsibility?

***

The other paper is a study published this month in the Journal of Epidemiology and Community Health.  In this paper, researchers from Sweden measured “psychological distress” and its effects on long-term disability in more than 17,000 “average” Swedish adults.  The subjects were given a baseline questionnaire in 2002, and researchers followed them over a five-year period to see how many received new disability benefits for medical or psychiatric illness.

Not surprisingly, there was a direct correlation between high “psychological distress” and high rates of disability.  It is, of course, quite possible that people who had high baseline distress were distressed about a chronic and disabling health condition, which worsened over the next five years.  But the study also found that even low levels of psychological distress at baseline were significantly correlated with the likelihood of receiving a long-term disability benefit, for both medical and psychiatric illness.

The questionnaire used by the researchers was the General Health Questionnaire, a deceptively simple 12-question survey of psychological distress scored on a 12-point scale (a typical question is “Have you recently felt like you were under constant strain?” with four possible answers, from “not at all” up to “much more than usual”).  Interestingly, people who scored only 1 point out of 12 were twice as likely to receive a disability award as those who scored zero, and the rates only went up from there.
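For readers unfamiliar with the instrument, here is a minimal sketch of the conventional GHQ-12 binary scoring as I understand it (the study may have used a different variant): each item’s four response options collapse to 0-0-1-1, so twelve items yield a total between 0 and 12.  The item wording and response labels below are illustrative rather than verbatim.

```python
# Conventional GHQ-12 binary scoring (0-0-1-1): the first two response options
# score 0 and the last two score 1, for a total between 0 and 12.
RESPONSES = ["not at all", "no more than usual",
             "rather more than usual", "much more than usual"]

def ghq12_binary_score(answers):
    """answers: a list of 12 responses, each drawn from RESPONSES."""
    assert len(answers) == 12
    return sum(1 for a in answers if RESPONSES.index(a) >= 2)

# Example: endorsing a single item "rather more than usual" yields the
# "1 point out of 12" group that the study found twice as likely to end up on disability.
answers = ["no more than usual"] * 11 + ["rather more than usual"]
print(ghq12_binary_score(answers))   # -> 1
```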

I won’t delve into other details of the results here, but as Sweden resembles the US in its high rates of psychiatric “disability” (between 1990 and 2007, the percentage of disability awards due to psychiatric illness rose from ~15% to over 40%), the implication is clear: even mild psychological “distress” is a risk factor for future illness—both physical and mental—and to reverse this trend, the effects of this distress must be treated or prevented in some way.

***

Both of these articles—from different parts of the world, using different measurement instruments, and looking at somewhat different outcomes—nevertheless reach the same conclusion:  early life stress is a risk factor for future disease.  This is a long-recognized phenomenon (for an easily accessible exploration of the topic, read Why Zebras Don’t Get Ulcers, by Stanford’s Robert Sapolsky, a former mentor of mine).

But what do we do with this knowledge?  My fear is that, rather than looking at ways to minimize “stress” in the first place (through social programs, education, and other efforts to raise awareness of the detrimental effects of stress), we as a society are instead conditioned to think about how we can intervene with a drug or some other way to modulate the “neurochemical situation,” as Nadine Burke put it.  In other words, we’re less inclined to act than to react, and our reactions are essentially chemical in nature.

As a psychiatrist who has worked with an inner-city population for many years, I’m already called upon to make diagnoses and prescribe medications not for what are obviously (to me) clear-cut cases of significant and disabling mental illness, but, rather, for the accumulated effects of stress and trauma.  (I’ll write more about this fascinating interface of society and biology in the future.)  True, sometimes the diagnoses do “fit,” and indeed sometimes the medications work.  But I am doing nothing to prevent the initial trauma, nor do I feel that I am helping people cope with their stress by telling them to take a pill once or twice a day.

We as a society need to make sure we don’t perpetuate the false promises of biological determinism.  I applaud Nadine Burke and I’m glad epidemiologists (and the New Yorker) are asking serious questions about precursors of disease.  But let’s think about what really helps, rather than looking solely to biology as our savior.

(Thanks to Michael at The Trusting Heart for leading me to the New Yorker article.)


Kids gaming pathologically

January 19, 2011

Today’s New York Times “Well” blog shares the results of a recent study suggesting that video games may contribute to depression in teenagers.  Briefly, the study found that grade-school and middle-school students who were “more impulsive and less comfortable with other children” spent more time playing video games than other teens.  Two years later, these same students were more likely to suffer from depression, anxiety, and social phobias.  The authors are careful to say that there’s no evidence the games caused depression, but there’s a strong correlation.

I pulled up the original article, and the authors’ objectives were to “measure the prevalence…of pathological video gaming, …to identify risk and protective factors, …and to identify outcomes for individuals who become pathological gamers.”  They didn’t use the word “addiction” in their paper (well, actually, they did, but they put it in quotes), but of course the take-home message from the NY Times story is quite clear:  kids can be addicted to video game playing, and this could lead to depression.

As with any extreme activity, I would not be surprised to learn that there are some kids who play games compulsively, who sacrifice food, sleep, hygiene, and other responsibilities for long periods of time.  But to use words like ‘addiction’—or even the less loaded and more clinical-sounding ‘pathological gaming’—risks labeling a potentially harmless behavior as a problem, and may have little to do with the underlying motives.

What’s so pathological, anyway, about pathological gaming?  Is the kid who plays video games for 30 hours a week playing more “pathologically” than the one who plays for only 10?  Does the kid with lots of friends, who gets plenty of fresh air and is active in extracurriculars, face a more promising future than the one who would prefer to sit at home on the XBOX360 and sometimes forgets to do his homework?  Which friends are more valuable in life—the Facebook friends or the “real” friends?  We know the intuitive answer to these questions, but where are the data to back up these assumptions?

The behavior itself is not the most important factor.  I know some “workaholics” who work 80-plus-hour weeks; they are absolutely committed to their work but they also have rich, fulfilling personal lives and are extremely well-adjusted.  I’ve also met some substance abusers who have never been arrested, never lost a job, and who seem to control their use (they often describe themselves as “functional” addicts) but who nonetheless have all the psychological and emotional hallmarks of a hard-core addict and desperately need rehabilitation.


I have no problem with researchers looking at a widespread activity like video game playing and asking whether it is changing how kids socialize, or whether it may affect learning styles or family dynamics.  But when we take an activity that some kids do “a lot” and label it as pathological or an “addiction,” without defining what those terms mean, or asking what benefit these kids might derive from it, we are, at best, imposing our own standards of acceptable behavior on a generation that sees things much differently, or, at worst, creating a whole new generation of addicts that we now must treat.

