Community Psychiatry And Its Unintended Consequences

May 10, 2011

What impact can psychiatry have on the health of a community?

For three years, I have worked part-time in a non-profit psychopharmacology clinic, treating a wide range of individuals from a poor, underserved urban area.  In a recent post, I wrote that many of the complaints endorsed by patients from this population may be perceived as symptoms of a mental illness.  At one point or another (if not chronically), people complain of “anxiety,” “depression,” “insomnia,” “hopelessness,” etc.—even if these complaints simply reflect their response to environmental stressors, and not an underlying mental illness.

However, because diagnostic criteria are so nonspecific, these complaints can easily lead to a psychiatric diagnosis, especially when the diagnostic evaluation is limited to a self-report questionnaire and a 15- or 20-minute intake appointment.

Personally, I struggle with two opposing biases:  On the one hand, I want to believe that mental illness is a discrete entity, a pathological deviation from “normal,” which presents differently (longer duration, greater intensity, etc.) from one’s expected reaction to a situation, however distressing that situation may be.  On the other hand, if I take people’s complaints literally, everyone who walks into my office can be diagnosed as mentally ill.

Where do we draw the line?  The question is an important one.  The obvious answer is to use clinical judgment and experience to distinguish “illness” from “health.”  But this boundary is vague, even under ideal circumstances.  It breaks down entirely when patients have complicated, confusing, chaotic histories (or can’t provide one) and our institutions are designed for the rapid diagnosis and treatment of symptoms rather than the whole person.  As a result, patients may be given a diagnosis where a true disorder doesn’t exist.

This isn’t always detrimental.  Sometimes it gives patients access to interventions from which they truly benefit—even if it’s just a visit with a clinician every couple of months and an opportunity to talk.  Often, however, our tendency to diagnose and to pathologize creates new problems, unintended diversions, and potentially dire consequences.

The first consequence is the overuse of powerful (and expensive) medications which, at best, may provide no advantage over a placebo and, at worst, may cause devastating side effects, not to mention extreme costs to our overburdened health care system.  Because Medicaid and Medicare reimbursements are better for medication management than for other non-pharmacological interventions, “treatment” often consists of brief “med check” visits every one to six months, with little time for follow-up or exploring alternative approaches.  I have observed colleagues seeing 30 or 40 patients in a day, sometimes prescribing multiple antipsychotics with little justification, frequently in combination with benzodiazepines or other sedatives, and asking for follow-up appointments at six-month intervals.  How this is supposed to improve one’s health, I cannot fathom.

Second, overdiagnosis and overtreatment divert resources from where they are truly needed.  For instance, the number of patients who can access a mental health clinic like ours but who do not have a primary care physician is staggering.  Moreover, patients with severe, persistent mental illness (who might be a danger to themselves or others when not treated) often don’t have access to assertive, multidisciplinary treatment.  Instead, we’re spending money on medications and office visits for large numbers of patients for whom the diagnoses are inaccurate and the medications provide dubious benefit.

Third, this overdiagnosis results in a massive population of “disabled,” causing further strain on scarce resources.  The increasing number of patients on disability due to mental illness has long been chronicled.  Some argue that the disability is itself a consequence of medication.  It is also possible that some people may abuse the system to obtain certain resources.  More commonly, however, I believe that the failure of the system (i.e., we clinicians) to perform an adequate evaluation—and our inclination to jump to a diagnosis—has swollen the disability ranks to an unsustainably high level.

Finally—and perhaps most distressing of all—there is the false hope that a psychiatric diagnosis communicates to a patient.  I believe it can be quite disempowering for a person to hear that his normal response to an admittedly dire situation represents a mental illness.  A diagnosis may provide a transient sense of relief (or, at the very least, alleviate one’s guilt), but it also tells a person that he is powerless to change his situation, and that a medication can do it for him.  Worse, it makes him dependent upon a “system” whose underlying motives aren’t necessarily in the interest of empowering the neediest, weakest members of our society.  I agree with a quote in a recent BBC Health story that a lifetime on disability “means that many people lose their sense of self-worth, identity, and esteem.”  Again, not what I set out to do as a psychiatrist.

With these consequences, why does the status quo persist?  For any observer of the American health care system, the answer seems clear:  vested interests, institutional inertia, and a clear lack of creative thought.  To make matters worse, none of the examples described above constitutes malpractice; they are, rather, the standard of practice.

As a lone clinician, I am powerless to reverse this trend.  That’s not to say I haven’t tried.  Unfortunately, my attempts to change the way we practice have been met with resistance at many levels:  county mental health administrators who have not returned detailed letters and emails asking to discuss more cost-effective strategies for care; fellow clinicians who have looked with suspicion (if not derision) upon my suggestions to rethink our approach; and—most painfully to me—supervisors who have labeled me a bigot for wanting to deprive people of the diagnoses and medications our (largely minority) patients “need.”

The truth is, my goal is not to deprive anyone.  Rather, it is to encourage, to motivate, and to empower.  Diagnosing illness where it doesn’t exist, prescribing medications for convenience and expediency, and believing that we are “helping” simply because we have little else to offer, unfortunately do none of the above.


Mental Illness and Social Realities

May 2, 2011

Does the definition of “mental illness” differ from place to place?  Is there a difference between “depression” in a poor individual and in one of means?  Are the symptoms identical?  What about the neurobiology?  The very concept of a psychiatric “disease” implies that certain core features of one’s illness transcend the specifics of a person’s social or cultural background.  Nevertheless, we know that disorders look quite different depending on the setting in which they arise.  This is why psychiatry is practiced by people, not by computers or checklists.  (Not yet, at least.)

However, sometimes a person’s environment can elicit reactions and behaviors that might appear—even to a trained observer—as mental illness.  If unchecked, this may create an epidemic of “disease” where true disease does not exist.  And the consequences could be serious.

—–

For the last three years, I have had the pleasure of working part-time in a community mental health setting.  Our clinic primarily serves patients on Medicaid and Medicare, in a gritty, crime-ridden expanse of a major city.  Our patients are, for the most part, impoverished, poorly educated, have little or no access to primary care services, and live in communities ravaged by substance abuse, crime, unemployment, familial strife, and a deep, pervasive sense of hopelessness.

Even though our resources are extremely limited, I can honestly say that I have made a difference in the lives of hundreds, if not thousands, of individuals.  But the experience has led me to question whether we are too quick to make psychiatric diagnoses for the sake of convenience and expediency, rather than on the basis of a fair, objective, and thorough evaluation.

Almost predictably, patients routinely present with certain common complaints:  anxiety, “stress,” insomnia, hopelessness, fear, worry, poor concentration, cognitive deficits, etc.  Each of these could be considered a feature of a deeper underlying disorder, such as an anxiety disorder, major depression, psychosis, thought disorder, or ADHD.  Alternatively, they might also simply reflect the nature of the environment in which the patients live, or the direct effects of other stressors that are unfortunately too familiar in this population.

Given the limitations of time, personnel, and money, we don’t usually have the opportunity for a thorough evaluation, collaborative care with other professionals, or frequent follow-up.  But psychiatric diagnostic criteria are vague, and virtually everyone who walks into my office endorses symptoms for which it would be easy to justify a diagnosis.  The “path of least resistance” is often to do precisely that, and move on to the next person in the long waiting-room queue.

This tendency to “knee-jerk” diagnosis is even greater when patients have already had some interaction—however brief—with the mental health system:  for example, a patient who visited a local crisis clinic and was given a diagnosis of “bipolar disorder” (on the basis of a 5-minute evaluation) and a 14-day supply of Zyprexa, and told to “go see a psychiatrist”; or the patient who mentioned “anxiety” to the ER doc in our county hospital (note: he has no primary care MD), was diagnosed with panic disorder, and prescribed PRN Ativan.

We all learned in our training (if not from a careful reading of the DSM-IV) that a psychiatric diagnosis should be made only when other explanations for symptoms can be ruled out.  Psychiatric treatment, moreover, should be implemented in the safest possible manner, and include close follow-up to monitor patients’ response to these interventions.

But in my experience, once a patient has received a diagnosis, it tends to stick.  I frequently feel an urge to un-diagnose patients, or, at the very least, to have a discussion with them about their complaints and develop a course of treatment—which might involve withholding medications and implementing lifestyle changes or other measures.  Alas, this takes time (and money—at least in the short run).  Furthermore, if a person already believes she has a disorder (even if it’s just “my mother says I must be bipolar because I have mood swings all the time!!!”), or has experienced the sedative, “calming,” “relaxing” effect of Seroquel or Klonopin, it’s difficult to say “no.”

There are consequences of a psychiatric diagnosis.  It can send a powerful message.  It might absolve a person of his responsibility to make changes in his life—changes which he might indeed have the power to make.  Moreover, while some see a diagnosis as stigmatizing, others may see it as a free ticket to powerful (and potentially addictive) medications, as well as a variety of social services, from a discounted annual bus pass, to in-home support services, to a lifetime of Social Security disability benefits.  Very few people consciously abuse the system for their own personal gain, but the system is set up to keep this cycle going.  For many, “successful” treatment means staying in that cycle for the rest of their lives.

—–

The patients who seek help in a community mental health setting are, almost without exception, suffering in many ways.  That’s why they come to see us.  Some clinics do provide a wide assortment of services, including psychotherapy, case management, day programs, and the like.  For the truly mentally ill, these can be a godsend.

For many who seek our services, however, the solutions that would more directly address their suffering—like safer streets, better schools, affordable housing, stable families, less access to illicit drugs, etc.—are difficult or costly to implement, and entirely out of our hands.  In cases such as these, it’s unfortunately easier to diagnose a disease, prescribe a drug which (in the words of one of my colleagues) “allows them to get through just one more night,” and make poor, unfortunate souls even more dependent on a system which sees them as hopeless and unable to emerge from the chaos of their environment.

In my opinion, that’s not psychiatry.  But it’s being practiced every day.


The Power Of No

April 3, 2011

Why is it that when someone tells us we can’t have something, we just want it more?  Marketers (those masters of neuropsychology) use this to their great advantage.  “Call now!  Offer expires in ten minutes!”  “Only one more available at this price!”  “Limited edition—Act now!”  Talk about incentive salience!!!

This phenomenon is known as the Scarcity Effect—the psychological principle that we value an item more when it is scarce, particularly when we believe we cannot have it.  We’ve all experienced this in our personal lives.  Tight budgets often invite wasteful expenditures.  Obsession over “forbidden foods” has ruined many a diet.  Saying “no” to a child frequently triggers constant begging and pleading.

Given the apparent universality of this concept, it’s surprising that we fall victim to it in medicine as often as we do, particularly at times when we want to motivate behavior change.  Saying “no” to a patient usually doesn’t work—it’s human nature.  In fact, if anything, the outcome is usually the opposite.  Reciting the dangers of cigarette smoking or obesity, for example, or admonishing a patient for these behaviors, rarely eliminates them.  The patient instead experiences shame or guilt that, paradoxically, strengthens his resistance to change.

But if we understand the Scarcity Effect, we doctors can outsmart it and use it to our advantage.  This can be important when we prescribe medications which are likely to be misused or abused, like sleep medications or benzodiazepines (Valium, Xanax, and others).  These drugs are remarkably effective for management of insomnia and anxiety, but their overuse has led to great morbidity, mortality, and increased health costs.  Similarly, narcotic pain medications are also effective but may be used excessively, with unfortunate results.  We discourage excessive use of these drugs because of side effects, the development of physical dependence, and something I call “psychological dependence”: the self-defeating belief I see in many patients that taking a pill is absolutely necessary to do what the patient should be able to do by him- or herself.

If I give a patient a prescription and say something like “Here’s a script for 15 pills, but I’m not giving you a refill until next month,” I’m almost inviting failure.  Just as expected by the Scarcity Effect, the patient’s first thought is usually “but what if I need 16?”

(I’ve worked extensively in addiction medicine, and the same principle is at work here, too.  When an alcoholic in early recovery is told that he can never have a drink again, he immediately starts to crave one.  Now I know that most alcoholics in early recovery are not in the position to say “no” to a drink, but this is the ultimate goal.  Their ability and willingness to say “no” is far more effective for long-term sobriety than someone else saying “no” for them.)

So why exactly does inaccessibility lead to craving?  Because even when it’s clear that we cannot have something, our repeated efforts to get it sometimes pay off.  And here’s where another psychological principle—that of intermittent reinforcement—comes into play.  People who play the lottery are victims of this.  They know (most of them!) that the odds of winning are vanishingly low.  Most people never win, and those who play regularly are almost always losers.  However, every once in a while they’ll get lucky and win a $5 scratcher (and see the news stories about the $80 million jackpot winner just like them!), and this is incredibly reinforcing.

Similarly, if a doctor tells a patient that she should use only 10 Ambien tablets in 30 days, and that no refills will be allowed, but she calls the doctor on day #12 and asks for a refill anyway, getting the refill is incredibly reinforcing.  In the drug and alcohol treatment center where I used to work, if someone’s withdrawal symptoms did not require an additional Valium according to a very clear detox protocol, he might beg a nurse or staff member, and occasionally get one—precisely what we do not want to do to an addict trying to get clean.

The danger is not so much in the reinforcement per se, but in the fact that the patient is led to believe (for very therapeutic reasons) that there will be no reinforcement, and yet he or she receives it anyway.  This, in my view, potentially undermines the whole therapeutic alliance.  It permits the patient’s unhealthy behaviors to prevail over the strict limits that were originally set, despite great efforts (by patient and doctor alike) to adhere to those limits.  As a result, the unhealthy behaviors override conscious, healthy decisions that the patient is often perfectly capable of making.

One solution is, paradoxically, to give more control back to the patient.  For example, prescribing 30 Ambien per month but encouraging the patient to use only 10.  If she uses 12 or 15, no big deal—but it’s fodder for discussion at the next visit.  Similarly, instead of declaring that “no narcotic refills will be given,” we can give some rough guidelines at the outset but let the patient know that requests will be evaluated if and when they occur.  Recovering addicts, too, need to know that relapses and cravings are not only common but expected, and that, rather than being failures of treatment (the big “no”), they are a natural part of recovery, worthy of discussion and understanding.

In medicine, as in all sciences dealing with human behavior, ambivalence is common.  Preserving and respecting the patient’s ability to make decisions, even those which might be unhealthy, may seem like giving in to weakness.  I disagree.  Instead, it teaches patients to make more thoughtful choices for themselves (both good and bad)—exactly what we want to encourage for optimal health.


The FDA Should Really Look Into This Drug

April 1, 2011

The Food and Drug Administration (FDA) regulates and approves medications for use in humans.  Its approval process is rigorous, and it conducts extensive monitoring of drugs it has already approved (“postmarketing surveillance”) to ensure their continued safety and efficacy.

As a psychiatrist, I see one particular drug used very frequently by my patients, usually prescribed by another physician for management of a physical or mental disorder.  In certain cases, however, I wonder whether it might actually worsen the symptoms I’m treating.  Furthermore, I’ve seen several recent references in the medical literature describing how this particular drug can increase the risk of psychotic symptoms and might even cause schizophrenia, a lifelong condition with high morbidity and mortality.

This month’s British Medical Journal, for instance, contains an article showing that users of this drug are more than twice as likely to have psychotic symptoms as nonusers; it “significantly increased the risk of psychotic experiences” over a 10-year period, and “adjustment for other psychiatric diagnoses did not change the results.”  Similarly, a meta-analysis published last month in the Archives of General Psychiatry showed that “the age at onset of psychosis for users [of this drug] was 2.70 years younger than for nonusers,” and that exposure to this drug “is associated with a decline in cognitive performance in young people.”  Finally, an article in the August 2010 American Journal of Psychiatry reported that use of this drug “is associated with an adverse course of psychotic symptoms, even after taking into account other clinical, substance use, and demographic variables.”

Given all this bad news, it’s surprising that this drug is still on the market—particularly in light of the FDA’s recent decisions to pull the plug on other medications with potentially dangerous side effects:  heart rhythm abnormalities (in the case of Darvon), coronary heart disease (Vioxx), increased risk of myocardial infarction (Avandia), fainting and non-cancerous ovarian cysts (Zelnorm), and so on.  So what gives?

And what is this horrible drug anyway?

It’s medical marijuana.

To be fair, asking why the FDA hasn’t withdrawn medical marijuana from the market is not exactly a reasonable question because, technically, it has never been on the market.  The DEA labels it a schedule I drug, which means, according to their definition, that it has a high potential for abuse, no recognized medical use, and no “accepted safety profile.”  However, several states (sixteen at last count, including my home state of California) have approved its use, and annual sales are around $1.7 billion, rivaling the annual sales of Viagra.

Let me point out that I have no official position on medical cannabis.  (See my previous post on the subject.)  I do not prescribe it, but that’s a professional decision, not a personal or moral one.  As noted above, I’ve seen some of my patients benefit from it, and others harmed by it.

Many other substances that are readily available to my patients—whether legal or not—have the potential for both benefit and harm.  People “self-medicate” in all kinds of ways, and we physicians have a responsibility to ensure that they’re doing so in a way that doesn’t cause long-term damage.  We encourage people to stop smoking cigarettes, for instance, and make sure that patients use alcohol in moderation.  (Alcohol and nicotine, of course, are legal but not “prescribed” as medications.)

But once we (i.e., the medical profession and the government) designate a drug as a “medication,” this should imply a whole new level of scientific rigor and safety.  This designation communicates to patients that the substance will provide some measurable benefit, with relatively few adverse effects, when used as prescribed.  I’m not sure medical marijuana passes this test.  Not only have its benefits not been rigorously proven (a fact that is probably due to the reluctance of the NIH to fund such research), but, as the research above demonstrates, it’s not really “safe” enough to meet criteria for an FDA-approved medication.

As a result, we have a situation where the “medical” label is being applied to a product that is used for “medicinal” purposes (although, in my experience, patients often use it for purely recreational purposes), but which also has the potential to exacerbate existing conditions or cause new ones.  Medications shouldn’t do this.  Medical cannabis, if subjected to the FDA approval process in its current form, would go nowhere.

Don’t get me wrong:  I understand the potential benefit of cannabis and the compounds in the natural marijuana product, and I support any measure to bring more effective treatments to our patients.  But the current awkward “medicalization” of marijuana imposes too much cognitive dissonance on prescribers and users.  We believe intuitively that it may “help” but also know its potential risks, and that’s hard for any honest physician to endorse.

I see two solutions to this dilemma:  Perform the rigorous, controlled studies to prove its efficacy and safety; or just do away with the whole “medical” façade and legalize it already.


Here’s A Disease. Do You Have It?

March 29, 2011

I serve as a consultant to a student organization at a nearby university.  These enterprising students produce patient-education materials (brochures, posters, handouts, etc.) for several chronic diseases, and their mission—a noble one—is to distribute these materials to free clinics in underserved communities, with the goal of raising awareness of these conditions and educating patients on their proper management.

Because I work part-time in a community mental health clinic, I was, naturally, quite receptive to their offer to distribute some of their handiwork to my patients.  The group sent me several professional-looking flyers and brochures describing the key features of anxiety disorders, depression, PTSD, schizophrenia, and insomnia, and suggested that I distribute these materials to patients in my waiting room.

They do an excellent job of demystifying (and destigmatizing) mental illness, and they describe, in layman’s terms, symptoms that may be suggestive of a significant psychiatric disorder (quoting from one, for example: “Certain neurotransmitters are out of balance when people are depressed.  They often feel sad, hopeless, helpless, lack energy, … If you think you may be depressed, talk to a doctor.”)  But just as I was about to print a stack of brochures and place them at the front door, I thought to myself:  what exactly is our goal?

Symptoms of anxiety, depression, or insomnia don’t necessarily indicate mental illness or a need for medications or therapy; they might reflect a stressful period in one’s life or a difficult transition for which one might simply need some support or encouragement.  I feared that the questions posed in these materials might lead people to believe there is something “wrong” with them, when they are actually quite healthy.  (The target audience needs to be considered, too, but I’ll write more about that later.)

It led me to the question: when does “raising awareness” become “disease mongering”?

“Disease-mongering,” if you haven’t heard of it, is the (pejorative) term used to describe efforts to lead people to believe they have a disease when they most likely do not, or when the “disease” in question is so poorly defined as to be questionable in and of itself.  Accusations of disease-mongering have been made in the areas of bipolar disorder, fibromyalgia, restless legs syndrome, female sexual arousal disorder, “low testosterone,” and many others, and have mainly been directed toward pharmaceutical companies with a vested interest in getting people on their drugs.  (See this special issue of PLoS One for several articles on this topic.)

Psychiatric disorders are ripe for disease-mongering because they are essentially defined by subjective symptoms, rather than objective signs and tests.  In other words, if I simply recite the symptoms of depression to my doctor, he’ll probably prescribe me an antidepressant; but if I tell him I have an infection, he’ll check my temperature, my WBC count, maybe palpate some lymph nodes, and if all seems normal he probably won’t write me a script for an antibiotic.

It’s true that some patients might deliberately falsify or exaggerate symptoms in order to obtain a particular medication or diagnosis.  What’s far more likely, though, is that they are (unconsciously) led to believe they have some illness, simply on the basis of experiencing some symptoms that are, more or less, a slight deviation from “normal.”  This is problematic for a number of reasons.  Obviously, an improper diagnosis leads to the prescription of unnecessary medications (and to their undesirable side effects), driving up the cost of health care.  It may also harm the patient in other ways; it may prevent the patient from getting health insurance or a job, or—even more insidiously—lead them to believe they have less control over their thoughts or behaviors than they actually do.

When we educate the public about mental illness, and encourage people to seek help if they think they need it, we walk a fine line.  Some people who may truly benefit from professional help will ignore the message, saying they “feel fine,” while others with very minor symptoms which are simply part of everyday life may be drawn in.  (Here is another example, a flyer for childhood bipolar disorder, produced by the NIH; how many parents & kids might be “caught”?)  Mental health providers should never turn away someone who presents for an evaluation or assessment, but we also have an obligation to provide a fair and unbiased opinion of whether a person needs treatment or not.  After all, isn’t that our responsibility as professionals?  To provide our honest input as to whether someone is healthy or unhealthy?

I almost used the words “normal” and “abnormal” in the last sentence.  I try not to use these words (what’s “normal” anyway?), but keeping them in mind helps us to see things from the patient’s perspective.  When she hears constant messages touting “If you have symptom X then you might have disorder Y—talk to your doctor!” she goes to the doctor seeking guidance, not necessarily a diagnosis.

The democratization of medical and scientific knowledge is, in my opinion, a good thing.  Information about what we know (and what we don’t know) about mental illness should indeed be shared with the public.  But it should not be undertaken with the goal of prescribing more of a certain medication, bringing more patients into one’s practice, or doling out more diagnoses.  Prospective patients often can’t tell what the motives are behind the messages they see—magazine ads, internet sites, and waiting-room brochures may be produced by just about anyone—and this is where the responsibility and ethics of the professional are of utmost importance.

Because if the patient can’t trust us to tell them they’re okay, then are we really protecting and ensuring the public good?

(Thanks to altmentalities for the childhood bipolar flyer.)


Stress, Illness, and Biological Determinism

March 27, 2011

Two interesting articles caught my attention this week, on the important subject of “stress” and its relationship to human disease—both psychological and physical.  Each offers some promising ways to prevent stress-related disease, but they also point out some potential biases in precisely how we might go about doing so.

A piece by Paul Tough in the New Yorker profiled Nadine Burke, a San Francisco pediatrician (the article is here, but it’s subscription-only; another link might be here).  Burke works in SF’s poverty-stricken Bayview-Hunters Point neighborhood, where health problems are rampant.  She recognized that in this population, the precursors of disease are not just the usual suspects like poor access to health care, diet/lifestyle, education, and high rates of substance use, but also the impact of “adverse childhood experiences” or ACEs.

Drawing upon research by Vincent Felitti and Robert Anda, Burke found that patients who were subjected to more ACEs (such as parental divorce, physical abuse, emotional neglect, being raised by a family member with a drug problem, etc.) had worse outcomes as adults.  These early traumatic experiences had an effect on the development of illnesses such as cancer, heart disease, respiratory illness, and addiction.

The implication for public health, obviously, is that we must either limit exposure to stressful events in childhood, or decrease their propensity to cause long-term adverse outcomes.  The New Yorker article briefly covers some biological research in the latter area, such as how early stress affects DNA methylation in rats, and how inflammatory markers like C-reactive protein are elevated in people who were mistreated as children.  Burke is quoted as saying, “In many cases, what looks like a social situation is actually a neurochemical situation.”  And a Harvard professor claims, “this is a very exciting opportunity to bring biology into early-childhood policy.”

With words like “neurochemical” and “biology” (not to mention “exciting”) being used this way, it doesn’t take much reading-between-the-lines to assume that the stage is being set for a neurochemical intervention, possibly even a “revolution.”  One can almost hear the wheels turning in the minds of academics and pharmaceutical execs, who are undoubtedly anticipating an enormous market for endocrine modulators, demethylating agents, and good old-fashioned antidepressants as ways to prevent physical disease in the children of Hunters Point.

To its credit, the article stops short of proposing that all kids be put on drugs to eliminate the effects of stress.  The author emphasizes that Burke’s clinic engages in biofeedback, child-parent therapy, and other non-pharmacological interventions to promote secure attachment between child and caregiver.  But in a society that tends to favor the “promises” of neuropharmacology—not to mention patients who might prefer the magic elixir of a pill—is this simply window-dressing?  A way to appease patients and give the impression of doing good, until the “real” therapies, medications, become available?

More importantly, are we expecting drugs to reverse the effects of social inequities, cultural disenfranchisement, and personal irresponsibility?

***

The other paper is a study published this month in the Journal of Epidemiology and Community Health.  In this paper, researchers from Sweden measured “psychological distress” and its effects on long-term disability in more than 17,000 “average” Swedish adults.  The subjects were given a baseline questionnaire in 2002, and researchers followed them over a five-year period to see how many received new disability benefits for medical or psychiatric illness.

Not surprisingly, there was a direct correlation between high “psychological distress” and high rates of disability.  It is, of course, quite possible that people who had high baseline distress were distressed about a chronic and disabling health condition, which worsened over the next five years.  But the study also found that even low levels of psychological distress at baseline were significantly correlated with the likelihood of receiving a long-term disability benefit, for both medical and psychiatric illness.

The questionnaire used by the researchers was the General Health Questionnaire, a deceptively simple, 12-question survey of psychological distress (a typical question is “Have you recently felt like you were under constant strain?” with four possible answers, from “not at all” up to “much more than usual”), scored on a 12-point scale.  Interestingly, people who scored only 1 point out of 12 were twice as likely to receive a disability award as those who scored zero, and the rates only went up from there.
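(For readers wondering how a questionnaire with four response options per item ends up on a 12-point scale: the GHQ-12 is conventionally scored with the binary “GHQ method,” in which the two lower responses count as 0 and the two higher responses count as 1, so twelve items give a total of 0 to 12.  The sketch below is purely illustrative; the function and the paraphrased response wording are mine, not the instrument itself.)

```python
# Illustrative sketch of binary "GHQ method" scoring for a 12-item questionnaire.
# Response wording is paraphrased; this is not the licensed GHQ-12 text.

RESPONSES = ["not at all", "no more than usual",
             "rather more than usual", "much more than usual"]

def ghq_binary_score(answers):
    """Score 12 answers (each an index 0-3 into RESPONSES).

    Binary "GHQ method": the two lower responses score 0,
    the two higher responses score 1, giving a total of 0-12.
    """
    if len(answers) != 12:
        raise ValueError("GHQ-12 expects exactly 12 answers")
    return sum(1 for a in answers if a >= 2)

# Example: endorsing "rather more than usual" on a single item scores 1 of 12 --
# the group the study found was already at roughly double the risk of a later
# disability award compared with those who scored zero.
print(ghq_binary_score([0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0]))  # -> 1
```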

I won’t delve into other details of the results here, but as Sweden resembles the US in its high rates of psychiatric “disability” (between 1990 and 2007, the percentage of disability awards due to psychiatric illness rose from ~15% to over 40%), the implication is clear: even mild psychological “distress” is a risk factor for future illness—both physical and mental—and to reverse this trend, the effects of this distress must be treated or prevented in some way.

***

Both of these articles—from different parts of the world, using different measurement instruments, and looking at somewhat different outcomes—nevertheless reach the same conclusion:  early life stress is a risk factor for future disease.  This is a long-recognized phenomenon (for an easily accessible exploration of the topic, read Why Zebras Don’t Get Ulcers, by Stanford’s Robert Sapolsky, a former mentor of mine).

But what do we do with this knowledge?  My fear is that, rather than looking at ways to minimize “stress” in the first place (through social programs, education, and other efforts to raise awareness of the detrimental effects of stress), we as a society are instead conditioned to think about how we can intervene with a drug or some other way to modulate the “neurochemical situation,” as Nadine Burke put it.  In other words, we’re less inclined to act than to react, and our reactions are essentially chemical in nature.

As a psychiatrist who has worked with an inner-city population for many years, I’m already called upon to make diagnoses and prescribe medications not for what are obviously (to me) clear-cut cases of significant and disabling mental illness, but, rather, for the accumulated effects of stress and trauma.  (I’ll write more about this fascinating interface of society and biology in the future.)  True, sometimes the diagnoses do “fit,” and indeed sometimes the medications work.  But I am doing nothing to prevent the initial trauma, nor do I feel that I am helping people cope with their stress by telling them to take a pill once or twice a day.

We as a society need to make sure we don’t perpetuate the false promises of biological determinism.  I applaud Nadine Burke and I’m glad epidemiologists (and the New Yorker) are asking serious questions about precursors of disease.  But let’s think about what really helps, rather than looking solely to biology as our savior.

(Thanks to Michael at The Trusting Heart for leading me to the New Yorker article.)


The Perils of Checklist Psychiatry

March 16, 2011

It’s no secret that doctors in all specialties spend less and less time with patients these days.  Last Sunday’s NY Times cover article (which I wrote about here and here) gave a fairly stark example of how reimbursement incentives have given modern psychiatry a sort of assembly-line mentality:  “Come in, state your problems, and here’s your script.  Next in line!!”  Unfortunately, all the trappings of modern medicine—shrinking reimbursements, electronic medical record systems which favor checklists over narratives, and patients who frequently want a “quick fix”—contribute directly to this sort of practice.

To be fair, there are many psychiatrists who don’t work this way.  But this usually comes with a higher price tag, which insurance companies often refuse to pay.  Why?  Well, to use the common yet frustrating phrase, it’s not “evidence-based medicine.”  As it turns out, the only available evidence is for the measurement of specific symptoms (via a checklist) and the prescription of pills over (short) periods of time.  Paradoxically, psychiatry—which should know better—no longer sees patients as people with interesting backgrounds and multiple ongoing social and psychological dynamics, but as collections of symptoms (anywhere in the world!) that respond to drugs.

The embodiment of this mentality, of course, is the DSM-IV, the “diagnostic manual” of psychiatry, which is basically a collection of symptom checklists designed to make a psychiatric diagnosis.  Now, I know that’s a gross oversimplification, and I’m also aware that sophisticated interviewing skills can help to determine the difference between a minor disturbance in a patient’s mood or behavior and a pathological condition (i.e., between a symptom and a syndrome).  But often the time, or those skills, simply aren’t available, and a diagnosis is made on the basis of what’s on the list.  As a result, psychiatric diagnoses have become “diagnoses of inclusion”:  you say you have a symptom, you’ll get a diagnosis.
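To make the “checklist” point concrete, here is a deliberately crude sketch (mine, not anything taken from the DSM) of how such a symptom list reduces diagnosis to counting.  It paraphrases the well-known DSM-IV rule of thumb for a major depressive episode (at least five of nine symptoms for at least two weeks, one of which must be depressed mood or loss of interest) and nothing more; the symptom labels and the function are hypothetical, and this is meant to illustrate the reductionism, not to serve as a diagnostic tool.

```python
# A deliberately crude illustration of "checklist psychiatry," not a diagnostic tool.
# Symptom labels and the helper function are my own; the 5-of-9 / two-week rule
# paraphrases the familiar DSM-IV criteria for a major depressive episode.

DEPRESSION_CHECKLIST = {
    "depressed mood", "loss of interest or pleasure", "weight or appetite change",
    "sleep disturbance", "psychomotor changes", "fatigue",
    "worthlessness or guilt", "poor concentration", "thoughts of death",
}
CORE_SYMPTOMS = {"depressed mood", "loss of interest or pleasure"}

def checklist_diagnosis(endorsed, weeks):
    """Return a 'diagnosis' from nothing but endorsed symptoms and duration."""
    endorsed = set(endorsed) & DEPRESSION_CHECKLIST  # ignore anything off-list
    if weeks >= 2 and len(endorsed) >= 5 and endorsed & CORE_SYMPTOMS:
        return "major depressive episode"
    return "no diagnosis"

# Endorse enough boxes and the checklist obliges -- with no room for context:
# bereavement, poverty, medical illness, or anything else about the person.
print(checklist_diagnosis(
    {"depressed mood", "sleep disturbance", "fatigue",
     "poor concentration", "worthlessness or guilt"}, weeks=3))
# -> "major depressive episode"
```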

To make matters worse, the checklist mentality, aided by the Internet, has spawned a small industry of “diagnostic tools,” freely available to clinicians and patients alike, and published in books, magazines, and web sites.  (The bestselling book The Checklist Manifesto may have contributed, too.  In it, author-surgeon Atul Gawande explains how simple checklists are useful in complex situations in which lives are on the line.  He has received much praise, but the checklists he describes work by narrowing our focus, whereas in psychiatry our focus usually needs to be broadened.  In other words, checklists are great for preparing an OR for surgery, or a jetliner for takeoff, but not for identifying the underlying causes of an individual’s suffering.)

Anyway, a quick Google search for any mental health condition (or even a personality trait like shyness, irritability, or anger) will reveal dozens of free questionnaires, surveys, and checklists designed to make a tentative diagnosis.  Most give the disclaimer “this is not meant to be a diagnostic tool—please consult your physician.”

But why?  If the patient has already answered all the questions that the doctor will ask anyway in the 10 to 15 minutes allotted for their appointment, why can’t the patient just email the questionnaire directly to a doc in another state (or another country) from the convenience of their own home, enter their credit card information, and wait for a prescription in the mail?  Heck, why not eliminate the middleman and submit the questionnaire directly to the drug company for a supply of pills?

I realize I’m exaggerating here.  Questionnaires and checklists can be extremely helpful—when used responsibly—as a way to obtain a “snapshot” of a patient’s progress or of his/her active symptoms, and to suggest topics for discussion in a more thorough interview.  People also have an innate desire to know how they “score” on some measure—the exercise can even be entertaining—and their results can sometimes reveal things they didn’t know about themselves.

But what makes psychiatry and psychology fascinating is the discovery of alternate, more parsimonious (or potentially more serious) explanations for a patient’s traits and behaviors; or, conversely, informing a patient that his or her “high score” is actually nothing to be worried about.  That’s where the expert comes in.  Unfortunately, experts can behave like Internet surveys, too, and when we do, the “rush to judgment” can be shortsighted, unfair, and wrong.


Dr. Quickfix, Redux

March 7, 2011

Last weekend’s NY Times article, which I wrote about in my last post, has, predictably, resulted in a deluge of responses from many observers.  The comments posted to the NYT “Well” blog (over 160 as of this writing) seem to be equally critical of Dr Levin and of our health care reimbursement system, which, according to the article, forced him to make the Faustian bargain of sacrificing good patient care in favor of a comfortable retirement.  Other bloggers and critics have used this as an opportunity to champion the talents and skills of psychologists, psychotherapists, and nurse practitioners, none of whom, according to the article, face the same financial pressures—or selfishness—as psychiatrists like Dr Levin.

While the above observations are largely valid (although one colleague pointed out that psychologists and NPs can have financial pressures too!), I chose to consider the patients’ point of view.  In my post, I pointed out that many patients seem to be satisfied with the rapid, seemingly slapdash approach of modern psychopharmacology.  I wrote how, in one of my clinic settings, a community mental health center, I see upwards of 20-30 patients a day, often for no more than 10-15 minutes every few months.  Although there are clear exceptions, many patients appreciate the attention I give them, and say they like me.  The same is also true for patients with “good insurance” or for those who pay out-of-pocket:  a 15-minute visit seems to work just fine for a surprising number of folks.

I remarked to a friend yesterday that maybe there are two types of patients:  those who want hour-long, intense therapy sessions on an ongoing basis (with or without medications), and those who are satisfied with quick, in-and-out visits and medication management alone.  My argument was that our culture has encouraged this latter approach in an unfortunate, self-propagating feedback cycle:  Not only does our reimbursement process force doctors (and patients) to accept shorter sessions just to stay afloat, but our hyperactive, “manic” culture favors the quick visits, too; indeed, some patients just can’t stay seated in the therapist’s chair for more than ten minutes!

She responded that I was being too simplistic.  And she’s right.  While there are certainly examples of the two populations I describe above, the vast majority of patients accept quick visits because the only other option is no care at all.  (It’s like the 95% of people with health insurance who said during the health care reform debate that they were “satisfied” with their coverage; they said so because they feared the alternative.)  She pointed out that the majority of patients don’t know what good care looks like.  They don’t know what special skills a psychiatrist can bring to the table that a psychologist or other counselor cannot (and vice versa, for that matter).  They don’t know that 15 minutes is barely enough time to discuss the weather, much less reach a confident psychiatric diagnosis.  They don’t know that spending a little more money out of pocket for specialized therapy, coaching, acupuncture, Eastern meditation practice, a gym membership, or simply more face-time with a good doc, could result in treatment that is more inspiring and life-affirming than any antidepressant will ever be.

So while my colleagues all over the blogosphere whine about the loss of income wrought by the nasty HMOs and for-profit insurance companies (editorial comment: they are nasty) and the devolution of our once-noble profession into an army of pill pushers, I see this as a challenge to psychiatry.  We must make ourselves more relevant, and to do so we have to let patients know that what we can offer is much more than what they’re getting.  Patients should not settle for 10 minutes with a psychiatrist and a hastily written script.  But they’ll only stop settling if we convince them that they deserve more.

It’s time for psychiatrists to think beyond medications, beyond the DSM, and beyond the office visit.  Psychiatrists need to make patients active participants in their care, and challenge them to become better people, not just receptacles for pills.  Psychiatrists also need to be doctors, and help patients to understand the physical basis of mental symptoms, how mental illness can disrupt physical homeostasis, and what our drugs do to our bodies.

Patients need to look at psychiatrists as true shepherds of the mind, soul, and body, and, in turn, the psychiatrist’s responsibility is to give them reason to do so.  It may cost a little more in terms of money and time, but in the long run it could be money well spent, for patients and for society.

Psychiatrists are highly educated professionals who entered this field not primarily to make money, but to help others.  If we can do this more effectively than we do now, the money will surely follow, and all will be better served.


Dr. Quickfix Will See You Now

March 5, 2011

A cover story by Gardiner Harris in Sunday’s New York Times spotlights the changes in modern psychiatry, from extensive, psychotherapy-based interaction to brief, medication-oriented “psychopharm” practice.  The shift has transpired over the last decade or longer; it was brilliantly described in T.M. Luhrmann’s 2000 book Of Two Minds, and has been explored ad nauseam in the psychiatric literature, countless blogs (including this one), and previously in the New York Times itself.

The article shares nothing new, particularly to anyone who has paid any attention to the rapid evolution of the psychiatric profession over the last ten years (or who has been a patient over the same period).  While the article does a nice job of detailing the effect this shift has had on Donald Levin, the psychiatrist profiled in the article, I believe it’s equally important to consider the effect it has had on patients, which, in my opinion, is significant.

First, I should point out that I have been fortunate to work in a variety of psychiatric settings.  I worked for years in a long-term residential setting, which afforded me the opportunity to engage with patients about much more than just transient symptoms culminating in a quick med adjustment.  I have also chosen to combine psychotherapy with medication management in my current practice (which is financially feasible—at least for now).

However, I have also worked in a psychiatric hospital setting, as well as a busy community mental health center.  Both have responded to the rapid changes in the health care reimbursement system by requiring shorter visits, more rushed appointments, and an emphasis on medications—because that’s what the system will pay for.  This is clearly the direction of modern psychiatry, as demonstrated in the Times article.

My concern is that when a patient comes to a clinic knowing that he’ll only have 10 or 15 minutes with a doctor, the significance of his complaints gets minimized.  He is led to believe that his personal struggles—which may in reality be substantial—only deserve a few minutes of the doctor’s time, or can be cured with a pill.  To be sure, it is common practice to refer patients to therapists when significant lifestyle or psychosocial issues may underlie their suffering (and if they’re lucky, insurance might pay for it), but when this happens, the visit with the doctor is even more rushed.

I could make an argument here for greater reimbursement for psychiatrists doing therapy, or even for prescribing privileges for psychologists (who provide the more comprehensive psychotherapy).  But what’s shocking to me is that patients often seem to be okay with this hurried, fragmented, disconnected care.

Quoting from the article (emphasis mine):

[The patient] said she likes Dr. Levin and feels that he listens to her.

Dr. Levin expressed some astonishment that his patients admire him as much as they do.

“The sad thing is that I’m very important to them, but I barely know them,” he said. “I feel shame about that, but that’s probably because I was trained in a different era.”

It is sad.  I’ve received the same sort of praise and positive feedback from a surprising number of patients, even when I feel that I’ve just barely scratched the surface of their distress (and might have even forgotten their names since their last visit!), and believe that I’m simply pacifying them with a prescription.  At times, calling myself a “psychiatrist” seems unfair, because I feel instead like a prescription dispenser with a medical school diploma on the wall.

And yet people tell me that they like me, just as they like Dr. Levin.  They believe I’m really helping them by listening to them for a few minutes, nodding my head, and giving a pill.  Are the pills really that effective?  (Here I think the answer is clearly no, because treatment failures are widespread in psychiatry, and many are even starting to question the studies that got these drugs approved in the first place.)  Or do my words—as brief as they may be—really have such healing power?

I’ve written about the placebo effect, which can be defined either as the ability of a substance to exert a much more potent effect than would be anticipated, or as a person’s innate ability to heal him- or herself.  Perhaps what we’re seeing at work here is a different type of placebo effect—namely, the patient’s unconscious acceptance of this new way of doing things (i.e., spending less time trying to understand the origins of one’s suffering, and the belief that a pill will suffice) and, consequently, the efficacy of this type of ultra-rapid intervention, which goes against everything we were trained to do as psychiatrists and therapists.

In an era where a person’s deepest thoughts can be shared in a 140-character “tweet” or in a few lines on Facebook (and Charlie Sheen can be diagnosed in a five-minute Good Morning America interview), perhaps it’s not surprising that many Americans believe that depression, anxiety, mood swings, impulsivity, compulsions, addictions, eating disorders, personality disorders, and the rest of the gamut of human suffering can be treated in 12-minute office visits four months apart.

Either that, or health insurance and pharmaceutical companies have done a damn good job in training us that we’re much less complicated than we thought we were.


Getting Inside The Patient’s Mind

March 4, 2011

As a profession, medicine concerns itself with the treatment of individual human beings, but primarily through a scientific or “objective” lens.  What really counts is not so much a person’s feelings or attitudes (although we try to pay attention to the patient’s subjective experience), but instead the pathology that contributes to those feelings or that experience: the malignant lesion, the abnormal lab value, the broken bone, or the infected tissue.

In psychiatry, despite the impressive inroads of biology, pharmacology, and molecular genetics into our field—and despite the bold predictions that accurate molecular diagnosis is right around the corner—the reverse is true, at least from the patient’s perspective.  Patients (generally) don’t care about which molecules are responsible for their depression or anxiety; they do know that they’re depressed or anxious, and they want help.  Psychiatry is getting ever closer to ignoring this essential reality.

Lately I’ve come across a few great reminders of this principle.  My colleagues over at Shrink Rap recently posted an article about working with patients who are struggling with problems that resemble those the psychiatrist once experienced.  Indeed, a debate exists within the field as to whether providers should divulge details of their own personal experiences, or whether they should remain detached and objective.  Many psychiatrists see themselves in the latter group, simply offering themselves as a sounding board for the patient’s words and restricting their involvement to medications or other therapeutic interventions that have been planned and agreed to in advance.  This detachment may, however, prevent them from sharing information that could be vital in helping the patient make great progress.

A few weeks ago a friend sent me a link to this video produced by the Janssen pharmaceutical company (makers of Risperdal and Invega, two atypical antipsychotic medications).

The video purports to simulate the experience of a person experiencing psychotic symptoms.  While I can’t attest to its accuracy, it certainly is consistent with written accounts of psychotic experiences, and is (reassuringly!) compatible with what we screen for in the evaluation of a psychotic patient.  Almost like reading a narrative of someone with mental illness (like Andrew Solomon’s Noonday Demon, William Styron’s Darkness Visible, or An Unquiet Mind by Kay Redfield Jamison), videos and vignettes like this one may help psychiatrists to understand more deeply the personal aspect of what we treat.

I also stumbled upon an editorial in the January 2011 issue of Schizophrenia Bulletin by John Strauss, a Yale psychiatrist, entitled “Subjectivity and Severe Psychiatric Disorders.” In it, he argues that in order to practice psychiatry as a “human science” we must pay as much attention to a patient’s subjective experience as we do to the symptoms they report or the signs we observe.  But he also points out that our research tools and our descriptors—the terms we use to describe the dimensions of a person’s disease state—fail to do this.

Strauss argues that, as difficult as it sounds, we must divorce ourselves from the objective scientific tradition that we value so highly, and employ different approaches to understand and experience the subjective phenomena that our patients encounter—essentially to develop a “second kind of knowledge” (the first being the textbook knowledge that all doctors obtain through their training) that is immensely valuable in understanding a patient’s suffering.  He encourages role-playing, journaling, and other experiential tools to help physicians relate to the qualia of a patient’s suffering.

It’s hard to quantify subjective experiences for purposes of insurance billing, or for standardized outcome measurements like surveys or questionnaires, or for large clinical trials of new pharmaceutical agents.  And because these constitute the reality of today’s medical practice, it is hard for physicians to turn their attention to the subjective experience of patients.  Nevertheless, physicians—and particularly psychiatrists—should remind themselves every so often that they’re dealing with people, not diseases or symptoms, and should challenge themselves to know what that actually means.

By the same token, patients have a right to know that their thoughts and feelings are not just heard, but understood, by their providers.  While the degree of understanding will (obviously) not be precise, patients may truly benefit from a clinician who “knows” more than meets the eye.