Measuring The Immeasurable

February 9, 2012

Is psychiatry a quantitative science?  Should it be?

Some readers might say that this is a ridiculous question.  Of course it should be quantitative; that’s what medicine is all about.  Psychiatry’s problem, they argue, is that it’s not quantitative enough.  Psychoanalysis—that most qualitative of “sciences”—never did anyone any good, and most psychotherapy is, likewise, just a bunch of hocus pocus.  A patient saying he feels “depressed” means nothing unless we can measure how depressed he is.  What really counts is a number—a score on a screening tool or checklist, frequency of a given symptom, or the blood level of some biomarker—not some silly theory about motives, drives, or subconscious conflicts.

But sometimes measurement can mislead us.  If we’re going to measure anything, we need to make sure it’s something worth measuring.

By virtue of our training, physicians are fond of measuring things.  What we don’t realize is that the act of measurement itself leads to an almost immediate bias.  As we assign numerical values to our observations, we start to define values as “normal” or “abnormal.”  And medical science dictates that we should make things “normal.”  When I oversee junior psychiatry residents or medical students, their patient presentations are often filled with such statements as “Mr. A slept for 5 hours last night” or “Ms. B ate 80% of her meals,” or “Mrs. C has gone two days without endorsing suicidal ideation,” as if these are data points to be normalized, just as potassium levels and BUN/Cr ratios need to be normalized in internal medicine.

The problem is, they’re not potassium levels or BUN/Cr ratios.  When those numbers are “abnormal,” there’s usually some underlying pathology which we can discover and correct.  In psychiatry, what’s the pathology?  For a woman who attempted suicide two days ago, does it really matter how much she’s eating today?  Does it really matter whether an acutely psychotic patient (on a new medication, in a chaotic inpatient psych unit with nurses checking on him every hour) sleeps 4 hours or 8 hours each night?  Even the questions that we ask patients—“are you still hearing voices?”, “how many panic attacks do you have each week?” and the overly simplistic “can you rate your mood on a scale of 1 to 10, where 1 is sad and 10 is happy?”— attempt to distill a patient’s overall subjective experience into an elementary quantitative measurement or, even worse, into a binary “yes/no” response.

Clinical trials take measurement to an entirely new level.  In a clinical trial, often what matters is not a patient’s overall well-being or quality of life (although, to be fair, there are ways of measuring this, too, and investigators are starting to look at this outcome measure more closely), but rather a HAM-D score, a MADRS score, a PANSS score, a Y-BOCS score, a YMRS score, or any one of an enormous number of other assessment instruments.  Granted, if I had to choose, I’d take a HAM-D score of 4 over a score of 24 any day, but does a 10- or 15-point decline (typical in some “successful” antidepressant trials) really tell you anything about an individual’s overall state of mental health?  It’s hard to say.

One widely used instrument, the Clinical Global Impression scale, endeavors to measure the seemingly immeasurable.  Developed in 1976 and still in widespread use, the CGI scale has three parts:  the clinician evaluates (1) the severity of the patient’s illness relative to other patients with the same diagnosis (CGI-S); (2) how much the patient’s illness has improved relative to baseline (CGI-I); and (3) the efficacy of treatment.  (See here for a more detailed description.)  It is incredibly simple.  Basically, it’s just a way of asking, “So, doc, how do you think this patient is doing?” and assigning a number to it.  In other words, subjective assessment made objective.
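Just how simple?  Here’s a sketch of the two core ratings as a data structure (mine, not any official implementation); the third part, the efficacy index, is a therapeutic-effect-versus-side-effect grid and is omitted here:

```python
from dataclasses import dataclass

# A sketch of the CGI's two core ratings (hypothetical code, not an
# official instrument; the efficacy index is omitted).
@dataclass
class CGIRating:
    severity: int     # CGI-S: 1 = normal ... 7 = among the most extremely ill
    improvement: int  # CGI-I: 1 = very much improved, 4 = no change, 7 = very much worse

    def __post_init__(self):
        if not (1 <= self.severity <= 7 and 1 <= self.improvement <= 7):
            raise ValueError("each CGI rating is a single integer from 1 to 7")

# "So, doc, how do you think this patient is doing?" reduced to two numbers:
print(CGIRating(severity=3, improvement=2))
```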

The problem is, the CGI has been criticized precisely for that reason—it’s too subjective.  As such, it is almost never used as a primary outcome measure in clinical trials.  Any pharmaceutical company that tries to get a drug approved on the basis of CGI improvement alone would probably be laughed out of the halls of the FDA.  But what’s wrong with subjectivity?  Isn’t everything that counts subjective, when it really comes down to it?  Especially in psychiatry?  The depressed patient who emerges from a mood episode doesn’t describe himself as “80% improved,” he just feels “a lot better—thanks, doc!”  The psychotic patient doesn’t necessarily need the voices to disappear, she just needs a way to accept them and live with them, if at all possible.  The recovering addict doesn’t think in terms of “drinking days per month,” he talks instead of “enjoying a new life.”

Nevertheless, measurement is not a fad; it’s here to stay.  And as the old saying goes, resistance is futile.  Electronic medical records, smartphone apps to measure symptoms, online checklists—they all capitalize on the fact that numbers are easy to record and store, easy to communicate to others, and satisfying to the bean counters.  They enable pharmacy benefit managers to approve drugs (or not), they enable insurers to reimburse for services (or not), and they allow pharmaceutical companies to identify and exploit new markets.  And, best of all, they turn psychiatry into a quantitative, valid science, just like every other branch of medicine.

If this grand march towards increased quantification persists, the human science of psychiatry may cease to exist.  Unless we can replace these instruments with outcome measures that truly reflect patients’ abilities and strengths, rather than pathological symptoms, psychiatry may be replaced by an impersonal world of questionnaires, checklists, and knee-jerk treatments.  In some settings, that’s what we have now.  I don’t think it’s too late to salvage the human element of what we do.  A first step might be simply to use great caution when we’re asked to give a number, measure a symptom, or perform a calculation on something that is intrinsically a subjective phenomenon.  And to remind ourselves that numbers don’t capture everything.


Do What You’re Taught

February 5, 2012

In my mail yesterday was an invitation to an upcoming 6-hour seminar on the topic of “Trauma, Addiction, and Grief.”  The course description included topics such as “models of addiction and trauma/information processing” and using these models to plan treatment; recognizing “masked grief reactions” and manifestations of trauma in clients; and applying several psychotherapeutic techniques to help a patient through addiction and trauma recovery.

Sound relevant?  To any psychiatrist dealing with issues of addiction, trauma, grief, anxiety, and mood—which is pretty much all of us—and interested in integrative treatments for the above, this would seem to be an entirely valid topic to learn.  And, I was pleased to learn that the program offers “continuing education” credit, too.

But upon reading the fine print, credit is not available for psychiatrists.  Instead, you can get credit if you’re one of the following mental health workers:  counselor, social worker, MFT, psychologist, addiction counselor, alcoholism & drug abuse counselor, chaplain/clergy, nurse, nurse practitioner, nurse specialist, or someone seeking “certification in thanatology” (whatever that is).  But not a psychiatrist.  In other words, psychiatrists need not apply.

Well, okay, that’s not entirely correct: psychiatrists can certainly attend, and, particularly if the program is a good one, my guess is that they would clearly benefit from it.  They just won’t get credit for it.

It’s not the first time I’ve encountered this.  Why do I think this is a big deal?  Well, in all of medicine, “continuing medical education” credit, or CME, is a rough guide to what’s important in one’s specialty.  In psychiatry, the vast majority of available CME credit is in psychopharmacology.  (As it turns out, in the same batch of mail, I received two “throwaway” journals which contained offers of free CME credits for reading articles about treating metabolic syndrome in patients on antipsychotics, and managing sexual side effects of antidepressants.)  Some of the most popular upcoming CME events are the Harvard Psychopharmacology Master Class and the annual Nevada Psychopharmacology Update.  And, of course, the NEI Global Congress in October is a can’t-miss event.  Far more psychiatrists will attend these conferences than a day-long seminar on “trauma, addiction, and grief.”  But which will have the most beneficial impact on patients?

To me, a more important question is, which will have the most beneficial impact on the future of the psychiatrist?  H. Steven Moffic, MD, recently wrote an editorial in Psychiatric Times in which he complained openly that the classical “territory” of the psychiatrist—diagnosis of mental disorder, psychotherapy, and psychopharmacology—has been increasingly ceded to others.  Well, this is a perfect example: a seminar whose content is probably entirely applicable to most psychiatric patients, being marketed primarily to non-psychiatrists.

I’ve always maintained—on this blog and in my professional life—that psychiatrists should be just as concerned (if not more so) with the psychological, cultural, and social aspects of their patients’ experience as with their proper psychopharmacological management.  It’s also just good common sense, especially when viewed from the patient’s perspective.  But if psychiatrists (and our leadership) don’t advocate for the importance of this type of experience, then of course others will do this work instead of us.  We’re making ourselves irrelevant.

I’m currently experiencing this irony in my own personal life.  I’m studying for the American Board of Psychiatry and Neurology certification exam (the “psychiatry boards”), while looking for a new job at the same time.  On the one hand, while studying for the test I’m being forced to refresh my knowledge of human development, the history of psychiatry, the theory and practice of psychotherapy, the cognitive and psychological foundations of axis I disorders, theories of personality, and many other topics.  That’s the “core” subject matter of psychiatry, which is (appropriately) what I’ll be tested on.  Simultaneously, however, the majority of the jobs I’m finding require none of that.  I feel like I’m being hired instead for my prescription pad.

Psychiatry, as the study of human experience and the treatment of a vast range of human suffering, can still be a fascinating field, and one that can offer so much more to patients.  To be a psychiatrist in this classic sense of the word, it seems more and more like one has to blaze an independent trail: obtain one’s own specialized training, recruit patients outside of the conventional means, and—unless one wishes to live on a relatively miserly income—charge cash.  And because no one seriously promotes this version of psychiatry, this individual is rapidly becoming an endangered species.

Maybe I’ll get lucky and my profession’s leadership will advocate more for psychiatrists to be better trained in (and better paid for) psychotherapy, or, at the very least, encourage educators and continuing education providers to emphasize this aspect of our training as equally relevant.  But as long as rank-and-file psychiatrists sit back and accept that our primary responsibility is to diagnose and medicate, and rabidly defend that turf at the expense of all else, then perhaps we deserve the fate that we’re creating for ourselves.


How To Retire At Age 27

September 4, 2011

A doctor’s primary responsibility is to heal, and all of our efforts and resources should be devoted to that goal.  At times, it is impossible to restore a patient to perfect health and he or she must unfortunately deal with some degree of chronic disability.  Still other times, though, the line between “perfect health” and “disability” is blurred, and nowhere (in my opinion) is this more problematic than in psychiatry.

To illustrate, consider the following example from my practice:

Keisha (not her real name), a 27-year-old resident of a particularly impoverished and crime-ridden section of a large city, came to my office for a psychiatric intake appointment.  I reviewed her intake questionnaire; under the question “Why are you seeking help at this time?” she wrote: “bipolar schizophrenia depression mood swings bad anxiety ADHD panic attacks.”  Under “past medications,” she listed six different psychiatric drugs (from several different categories).  She had never been hospitalized.

When I first saw her, she appeared overweight but otherwise in no distress.  An interview revealed no obvious thought disorder, no evidence of hallucinations or delusions, nor did she complain of significant mood symptoms.  During the interview, she told me, “I just got my SSDI so I’m retired now.”  I asked her to elaborate.  “I’m retired now,” she said.  “I get my check every month, I just have to keep seeing a doctor.”  When I asked why she’s on disability, she replied, “I don’t know, whatever they wrote, bipolar, mood swings, panic attacks, stuff like that.”  She had been off medications for over two months (with no apparent symptoms); she said she really “didn’t notice” any effect of the drugs, except the Valium 20 mg per day, which “helped me settle down and relax.”

Keisha is a generally healthy 27-year-old.  She graduated high school (something rare in this community, actually) and took some nursing-assistant classes at a local vocational school.  She dropped out, however, because “I got stressed out.”  She tried looking for other work but then found out from a family member that she could “apply for disability.”  She applied and was denied, but then called a lawyer who specialized in disability appeals and, after about a year of resubmissions, received the good news that she could get Social Security Disability, ensuring a monthly check.

How is Keisha “disabled”?  She’s disabled because she went to see a doctor and, presumably, told that doctor that she can’t work because of “stress.”  That doctor probably asked her a series of questions like “Are you unable to work because of your depressed mood?” and “Do you find it hard to deal with social situations because of your mood swings?”, and she answered them in the affirmative.  I’ve seen dozens—if not hundreds—of disability questionnaires, which ask the same questions.

I have no doubt that Keisha lives a stressful life.  I’ve driven through her part of town.  I’ve read about the turf wars being waged by the gangs there.  I know that her city has one of the highest murder rates in America, unemployment is high, schools are bad, and drug abuse and criminal activity are widespread.  I would be surprised if anyone from her neighborhood was not anxious, depressed, moody, irritable, or paranoid.

But I am not convinced that Keisha has a mental illness.

Lest you think that I don’t care about Keisha’s plight, I do.  Keisha may very well be struggling, but whether this is “major depression,” a true “anxiety disorder,” or simply a reaction to her stressful situation is unclear.  Unfortunately, psychiatry uses simple questions to arrive at a diagnosis—and there are no objective tests for mental illness—so a careless (or unscrupulous) provider can easily apply a label, designating Keisha’s situation as a legitimate medical problem.  When combined with the law firms eager to help people get “the government money they deserve,” and the very real fact that money and housing actually do help people like Keisha, we’ve created the illusion that mental illness is a direct consequence of poverty, and the way to treat it is to give out monthly checks.

As a physician, I see this as counter-therapeutic for a number of reasons.  With patients like Keisha, I often wonder, what exactly am I “treating”?  What constitutes success?  An improvement in symptoms?  (What symptoms?)  Or successfully getting her on the government dole?  And when a patient comes to me, already on disability after receiving a diagnosis of MDD (296.34) or panic disorder (300.21) from some other doctor or clinic, I can’t just say, “I’m sorry about your situation, but let’s see what we can do to overcome it together,” because there’s no incentive to overcome it.  (This is from someone who dealt with severe 307.51 for sixteen years, but who also had the promise of a bright future to help overcome it.)

Moreover, making diagnoses where there is no true pathology artificially inflates disease prevalence, further enlarging state and county mental health bureaucracies.  It enables massive over-prescription of expensive (e.g., atypical antipsychotics like Seroquel and Zyprexa), addictive (like stimulants and benzodiazepines), or simply ineffective (like SSRIs) medications.  And far from helping the downtrodden who claim to be its “victims,” this situation instead rewards drug companies and doctors, some of whom prefer serving this population because of the assembly-line nature of this sort of practice:  see the patient, make the diagnosis, write the script, and see them again in 3-6 months.

The bottom line is, here in America we’ve got thousands (perhaps millions?) of able-bodied people who, for one socioeconomic (i.e., not psychiatric) reason or another, can’t find work and have fallen upon psychiatric “disability” as their savior.  I’d love to help them, but, almost by definition, I cannot.  And neither can any other doctor.  Sure, they struggle and suffer, but their suffering is relieved by a steady job, financial support, and yes, direct government assistance.  These are not part of the psychiatric armamentarium.  It’s not medicine.

Psychiatry should not be a tool for social justice.  (We’ve tried that before.  It failed.)  Using psychiatric labels to help patients obtain taxpayers’ money, unless absolutely necessary and legitimate, is wasteful and dishonest.  More importantly, it harms the very souls we have pledged an oath to protect.


How To Get Rich In Psychiatry

August 17, 2011

Doctors choose to be doctors for many reasons.  Sure, they “want to help people,” they “enjoy the science of medicine,” and they give several other predictable (and sometimes honest) explanations in their med school interviews.  But let’s be honest.  Historically, becoming a doctor has been a surefire way to ensure prestige, respect, and a very comfortable income.

Nowadays, in the era of shrinking insurance reimbursements and increasing overhead costs, this is no longer the case.  If personal riches are the goal, doctors must graze other pastures.  Fortunately, in psychiatry, several such options exist.  Let’s consider a few.

One way to make a lot of money is simply by seeing more patients.  If you earn a set amount per patient—and you’re not interested in the quality of your work—this might be for you.  Consider the following, recently posted by a community psychiatrist to an online mental health discussion group:

Our county mental health department pays my clinic $170 for an initial evaluation and $80 for a follow-up.  Of that, the doctor is paid $70 or $35, respectively, for each visit.  There is a wide range of patients/hour since different doctors have different financial requirements and philosophies of care.  The range is 3 patients/hour to 6 patients/hour.

This payment schedule incentivizes output.  A doctor who sees three patients an hour makes $105/hr and spends 20 minutes with each patient.  A doctor who sees 6 patients an hour spends 10 minutes with each patient and makes $210.  One “outlier” doctor in our clinic saw, on average, 7 patients an hour, spending roughly 8 minutes with each patient and earning $245/hr.  His clinical notes reflected his rapid pace…. [but] Despite his shoddy care of patients, he was tolerated at the clinic because he earned a lot of money for the organization.
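The arithmetic behind that quote is worth making explicit.  A quick back-of-the-envelope sketch (assuming, hypothetically, an all-follow-up caseload at the quoted $35 per visit to the doctor):

```python
# Back-of-the-envelope incentive math for the clinic quoted above, assuming
# (hypothetically) every visit is a follow-up paying the doctor $35.
FEE_PER_VISIT = 35  # dollars to the doctor per follow-up

for patients_per_hour in range(3, 8):  # 3-6 typical, 7 for the "outlier"
    minutes_each = 60 / patients_per_hour
    print(f"{patients_per_hour} patients/hr: "
          f"{minutes_each:4.1f} min per patient, ${patients_per_hour * FEE_PER_VISIT}/hr")
```

When every visit pays the same flat fee, the only variable left to optimize is speed.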

If this isn’t quite your cup of tea, you can always consider working in a more “legit” capacity, like the Department of Corrections.  You may recall the Bloomberg report last month about the prison psychiatrist who raked in over $800,000 in one year—making him the highest-paid state employee in California.  As it turns out, that was a “data entry error.”  (Bloomberg issued a correction.)  Nevertheless, the cat was out of the bag: prison psychiatrists make big bucks (largely for prescribing Seroquel and benzos).  With seniority and “merit-based increases,” one prison shrink in California was able to earn over $600,000—and that’s for a shrink who was found to be “incompetent.”  Maybe they pay the competent ones even more?

Another option is to be a paid drug speaker.  I’m not referring to the small-time local doc who gives bland PowerPoint lectures to his colleagues over a catered lunch of even blander ham-and-cheese sandwiches.  No sir.  I’m talking about the psychiatrists hired to fly all around the country to give talks at the nicest five-star restaurants in the nation’s biggest drug markets.  The advantage here is that you don’t even have to be a great doc.  You just have to own a suit, follow a script, speak well, and enjoy good food and wine.

As most readers of this blog know, ProPublica recently published a list of the sums paid by pharmaceutical companies to doctors for these “educational programs.”  Some docs walked away with checks worth tens—or hundreds—of thousands of dollars.  And, not surprisingly, psychiatrists were the biggest offenders, er, earners.  I guess there is gold in explaining the dopamine hypothesis or the mechanism of neurotransmitter reuptake inhibition to yet another doctor.

Which brings me to perhaps the most tried-and-true way to convert one’s medical education into cash:  become an entrepreneur.  Discovering a new drug or unraveling a new disease process might revolutionize medical care and improve the lives of millions.  And throughout the history of medicine, numerous physician-researchers have converted their groundbreaking discoveries (or luck) into handsome profits.

Unfortunately, in psychiatry, paradigm shifts of the same magnitude have been few and far between.  Instead, the road to riches has been paved by the following formula: (1) “Buy in” to the prevailing disease model (regardless of its biological validity); (2) Develop a drug that “fits” into the model; (3) Find some way to get the FDA to approve it; (4) Promote it ruthlessly; (5) Profit.

In my residency program, for example, several faculty members founded a biotech company whose sole product was a glucocorticoid receptor antagonist which, they believed, might treat psychotic depression (you know, elevated stress hormones in depression, etc.).  The drug didn’t work (rendering their stock options worth only millions instead of tens of millions).  But that didn’t stop them.  They simply searched for other ways to make their compound relevant.  As I write, they’re looking at it as a treatment for Cushing’s syndrome (a more logical—if far less profitable—indication).

The psychiatry blogger 1boringoldman has written a great deal about the legions of esteemed academic psychiatrists who have gotten caught up in the same sort of rush (no pun intended) to bring new drugs to market.  His posts are definitely worth a read.  Frankly, I see no problem with psychiatrists lending their expertise to a commercial enterprise in the hopes of capturing some of the windfall from a new blockbuster drug.  Everyone else in medicine does it, why not us?

The problem, as mentioned above, is that most of our recent psychiatric meds are not blockbusters.  Or, to be more accurate, they don’t represent major improvements in how we treat (or even understand) mental illness.  They’re largely copycat solutions to puzzles that may have very little to do with the actual pathology—not to mention psychology—of the conditions we treat.

To make matters worse, when huge investments in new drugs don’t pay off, investigators (including the psychiatrists expecting huge dividends) look for back-door ways to capture market share, rather than going back to the drawing board to refine their initial hypotheses.  Take, for instance, RCT Logic, a company whose board includes the ubiquitous Stephen Stahl and Maurizio Fava, two psychiatrists with extensive experience in clinical drug trials.  But the stated purpose of this company is not to develop novel treatments for mental illness; they have no labs, no clinics, no scanners, and no patients.  Instead, their mission is to develop clinical trial designs that “reduce the detrimental impact of the placebo response.”

Yes, that’s right: the new way to make money in psychiatry is not to find better ways to treat people, but to find ways to make relatively useless interventions look good.

It’s almost embarrassing that we’ve come to this point.  Nevertheless, as someone who has decidedly not profited (far from it!) from what I consider to be a dedicated, intelligent, and compassionate approach to my patients, I’m not surprised that docs who are “in it for the money” have exploited these alternate paths.  I just hope that patients and third-party payers wake up to the shenanigans played by my colleagues who are just looking for the easiest payoff.

But I’m not holding my breath.

Footnote: For even more ways to get rich in psychiatry, see this post by The Last Psychiatrist.


Antidepressants: The New Candy?

August 9, 2011

It should come as no surprise to anyone paying attention to health care (not to mention modern American society) that antidepressants are very heavily prescribed.  They are, in fact, the second most widely prescribed class of medicine in America, with 253 million prescriptions written in 2010 alone.  Whether this means we are suffering from an epidemic of depression is another thing.  In fact, a recent article questions whether we’re suffering from much of anything at all.

In the August issue of Health Affairs, Ramin Mojtabai and Mark Olfson present evidence that doctors are prescribing antidepressants at ever-higher rates.  Over a ten-year period (1996-2007), the percentage of all office visits to non-psychiatrists that included an antidepressant prescription rose from 4.1% to 8.8%.  The rates were even higher for primary care providers: from 6.2% to 11.5%.

But there’s more.  The investigators also found that in the majority of cases, antidepressants were given even in the absence of a psychiatric diagnosis.  In 1996, 59.5% of the antidepressant recipients lacked a psychiatric diagnosis.  In 2007, this number had increased to 72.7%.

In other words, nearly 3 out of 4 patients who visited a nonpsychiatrist and received a prescription for an antidepressant were not given a psychiatric diagnosis by that doctor.  Why might this be the case?  Well, as the authors point out, antidepressants are used off-label for a variety of conditions—fatigue, pain, headaches, PMS, irritability.  None of which have any good data supporting their use, mind you.

It’s possible that nonpsychiatrists might add an antidepressant to someone’s medication regimen because they “seem” depressed or anxious.  It is also true that primary care providers do manage mental illness sometimes, particularly in areas where psychiatrists are in short supply.  But remember, in the majority of cases the doctors did not even give a psychiatric diagnosis, which suggests that even if they did a “psychiatric evaluation,” the evaluation was likely quick and haphazard.

And then, of course, there were probably some cases in which the primary care docs just continued medications that were originally prescribed by a psychiatrist—in which case perhaps they simply didn’t report a diagnosis.

But is any of this okay?  Some, like a psychiatrist quoted in a Wall Street Journal article on this report, argue that antidepressants are safe.  They’re unlikely to be abused, often effective (if only as a placebo), and dirt cheap (well, at least the generic SSRIs and TCAs are).  But others have had very real problems discontinuing them, or have suffered particularly troublesome side effects.

The increasingly indiscriminate use of antidepressants might also open the door to the (ab)use of other, more costly drugs with potentially more devastating side effects.  I continue to be amazed, for example, by the number of primary care docs who prescribe Seroquel (an antipsychotic) for insomnia, when multiple other pharmacologic and nonpharmacologic options are ignored.  In my experience, in the vast majority of these cases, the (well-known) risks of increased appetite and blood sugar were never discussed with the patient.  And then there are other antipsychotics like Abilify and Seroquel XR, which are increasingly being used in primary care as drugs to “augment” antidepressants and will probably be prescribed as freely as the antidepressants themselves.  (Case in point: a senior medical student was shocked when I told her a few days ago that Abilify is an antipsychotic.  “I always thought it was an antidepressant,” she remarked, “after seeing all those TV commercials.”)

For better or for worse, the increased use of antidepressants in primary care may prove to be yet another blow to the foundation of biological psychiatry.  Doctors prescribe—and continue to prescribe—these drugs because they “work.”  It’s probably more accurate, however, to say that doctors and patients think they work.  And this may have nothing to do with biology.  As the saying goes, it’s the thought that counts.

Anyway, if this is true—and you consider the fact that these drugs are prescribed on the basis of a rudimentary workup (remember, no diagnosis was given 72.7% of the time)—then the use of an antidepressant probably has no more justification than the addition of a multivitamin, the admonition to eat less red meat, or the suggestion to “get more fresh air.”

The bottom line: If we’re going to give out antidepressants like candy, then let’s treat them as such.  Too much candy can be a bad thing—something that primary care doctors can certainly understand.  So if our patients ask for candy, then we need to find a substitute—something equally soothing and comforting—or provide them instead with a healthy diet of interventions to address the real issues, rather than masking those problems with a treat to satisfy their sweet tooth and bring them back for more.


Mental Illness IS Real After All… So What Was I Treating Before?

July 26, 2011

I recently started working part-time on an inpatient psychiatric unit at a large county medical center.  The last time I worked in inpatient psychiatry was six years ago, and in the meantime I’ve worked in various office settings—community mental health, private practice, residential drug/alcohol treatment, and research.  I’m glad I’m back, but it’s really making me rethink my ideas about mental illness.

An inpatient psychiatry unit is not just a locked version of an outpatient clinic.  The key difference—which would be apparent to any observer—is the intensity of patients’ suffering.  Of course, this should have been obvious to me, having treated patients like these before.  But I’ll admit, I wasn’t prepared for the abrupt transition.  Indeed, the experience has reminded me how severe mental illness can be, and has proven to be a “wake-up” call at this point in my career, before I get the conceited (yet naïve) belief that “I’ve seen it all.”

Patients are hospitalized when they simply cannot take care of themselves—or may be a danger to themselves or others—as a result of their psychiatric symptoms.  These individuals are in severe emotional or psychological distress, have immense difficulty grasping reality, or are at imminent risk of self-harm or worse.  In contrast to the clinic, the illnesses I see on the inpatient unit are more incapacitating, more palpable, and—for lack of a better word—more “medical.”

Perhaps this is because they also seem to respond better to our interventions.  Medications are never 100% effective, but they can have a profound impact on quelling the most distressing and debilitating symptoms of the psychiatric inpatient.  In the outpatient setting, medications—and even psychotherapy—are confounded by so many other factors in the typical patient’s life.  When I’m seeing a patient every month, for instance—or even every week—I often wonder whether my effort is doing any good.  When a patient assures me it is, I think it’s because I try to be a nice, friendly guy, not because I feel like I’m practicing any medicine.  (By the way, that’s not humility; I see it as healthy skepticism.)

Does this mean that the patient who sees her psychiatrist every four weeks and who has never been hospitalized is not suffering?  Or that we should just do away with psychiatric outpatient care because these patients don’t have “diseases”?  Of course not.  Discharged patients need outpatient follow-up, and sometimes outpatient care is vital to prevent hospitalization in the first place.  Moreover, people do suffer and do benefit from coming to see doctors like me in the outpatient setting.

But I think it’s important to look at the differences between who gets hospitalized and who does not, as this may inform our thinking about the nature of mental illness and help us to deliver treatment accordingly.  At the risk of oversimplifying things (and of offending many in my profession—and maybe even some patients), perhaps the more severe cases are the true psychiatric “diseases” with clear neurochemical or anatomic foundations, and which will respond robustly to the right pharmacological or neurosurgical cure (once we find it), while the outpatient cases are not “diseases” at all, but simply maladaptive strategies to cope with what is (unfortunately) a chaotic, unfair, and challenging world.

Some will argue that these two things are one and the same.  Some will argue that one may lead to the other.  In part, the distinction hinges upon what we call a “disease.”  At any rate, it’s an interesting nosological dilemma.  But in the meantime, we should be careful not to rush to the conclusion that the conditions we see in acutely incapacitated and severely disturbed hospital patients are the same as those we see in our office practices, just “more extreme versions.”  In fact, they may be entirely different entities altogether, and may respond to entirely different interventions (i.e., not just higher doses of the same drug).

The trick is where to draw the distinction between the “true” disease and its “outpatient-only” counterpart.  Perhaps this is where biomarkers like genotypes or blood tests might prove useful.  In my opinion, this would be a fruitful area of research, as it would help us better understand the biology of disease, design more suitable treatments (pharmacological or otherwise), and dedicate treatment resources more fairly.  It would also lead us to provide more humane and thoughtful care to people on both sides of the double-locked doors—something we seem to do less and less of these days.


Psychiatry, Homeostasis, and Regression to the Mean

July 20, 2011

Are atypical antipsychotics overprescribed?  This question was raised in a recent article on the Al Jazeera English website, and has been debated back and forth for quite some time on various blogs, including this one.  Not surprisingly, the article’s conclusion was that, yes, these medications are indeed overused—and, moreover, that the pharmaceutical industry is responsible for getting patients “hooked” on these drugs via inappropriate advertising and off-label promotion of these agents.

However, I don’t know if this is an entirely fair characterization.

First of all, let’s just be up front with what should be obvious.  Pharmaceutical companies are businesses.  They’re not interested in human health or disease, except insofar as they can exploit people’s fears of disease (sometimes legitimately, sometimes not) to make money.  Anyone who believes that a publicly traded drugmaker might forego their bottom line to treat malaria in Africa “because it’s the right thing to do” is sorely mistaken.  The mission of companies like AstraZeneca, Pfizer, and BMS is to get doctors to prescribe as much Seroquel, Geodon, and Abilify (respectively) as possible.  Period.

In reality, pharmaceutical company revenues would be zero if doctors (OK, and nurse practitioners and—at least in some states—psychologists) didn’t prescribe their drugs.  So it’s doctors who have made antipsychotics one of the most prescribed classes of drugs in America, not the drug companies.  Why is this?  Has there been an epidemic of schizophrenia?  (NB:  most cases of schizophrenia do not fully respond to these drugs.)  Are we particularly susceptible to drug marketing?  Do we believe in the clear and indisputable efficacy of these drugs in the many psychiatric conditions for which they’ve been approved (and those for which they haven’t)?

No, I like to think of it instead as our collective failure to appreciate that patients are more resilient and adaptive than we give them credit for, not to mention our infatuation with the concept of biological psychiatry.  In fact, much of what we attribute to our drugs may in fact be the result of something else entirely.

For an example of what I mean, take a look at the following figure:

This figure has nothing to do with psychiatry.  It shows the average body temperature of two groups of patients with fever—one who received intravenous Tylenol, and the other who received an intravenous placebo.  As you can easily see, Tylenol cut the fever short by a good 30-60 minutes.  But both groups of patients eventually reestablished a normal body temperature.

This is a concept called homeostasis.  It’s the innate ability of a living creature to keep things constant.  When you have a fever, you naturally perspire to give off heat.  When you have an infection, you naturally mobilize your immune system to fight it.  (BTW, prescribing antibiotics for viral respiratory infections is wasteful:  the illness resolves itself “naturally” but the use of a drug leads us to believe that the drug is responsible.)  When you’re sad and hopeless, lethargic and fatigued, you naturally engage in activities to pull yourself out of this “rut.”  All too often, when we doctors see these symptoms, we jump at a diagnosis and a treatment, neglecting the very real human capacity—evolutionarily programmed!!—to naturally overcome these transient blows to our psychological stability and well-being.
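To make the homeostasis point concrete, here’s a toy model (invented parameters, not data from the Tylenol trial): temperature relaxes exponentially back to its set point, and an antipyretic changes only the rate of return, never the destination:

```python
import math

# Toy homeostasis model (invented parameters, not the trial data): fever
# decays exponentially toward the 37.0 C set point; a drug changes the
# *rate* of return, not the endpoint.
SET_POINT = 37.0

def temp(t_hours, start=39.5, rate=0.4):
    """Body temperature t_hours after fever onset."""
    return SET_POINT + (start - SET_POINT) * math.exp(-rate * t_hours)

for t in (0, 2, 4, 8, 12):
    print(f"t={t:2d}h  untreated={temp(t, rate=0.4):.1f}C  "
          f"treated={temp(t, rate=0.8):.1f}C")  # drug doubles the decay rate
# Both columns converge on 37.0 C; the drug shortens the fever,
# but homeostasis determines where it ends up either way.
```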

There’s another concept—this one from statistics—that we often fail to recognize.  It’s called “regression to the mean.”  If I survey a large number of people on some aspect of their psychological functioning (such as mood, or irritability, or distractibility, or anxiety), those with an extreme score on their first evaluation will most likely have a more “normal” score on their next evaluation, and vice versa, even in the absence of any intervention.  In other words, if you’re having a particularly bad day today, you’re more likely to be having a better day the next time I see you.
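For the skeptical, a minimal simulation (illustrative numbers only, no clinical data) shows the effect with no treatment whatsoever:

```python
import random
from statistics import mean

# Regression to the mean with no intervention at all (illustrative numbers).
random.seed(0)
N = 10_000
true_mood = [random.gauss(5.0, 1.0) for _ in range(N)]  # each person's stable baseline

def rating(baseline):
    """A single day's self-rating: the baseline plus day-to-day noise."""
    return baseline + random.gauss(0.0, 2.0)

day1 = [rating(b) for b in true_mood]
day2 = [rating(b) for b in true_mood]

# Select the people who looked worst on day 1 (bottom ~10% of ratings).
cutoff = sorted(day1)[N // 10]
worst = [i for i in range(N) if day1[i] <= cutoff]

print(f"population mean rating:   {mean(day1):.2f}")
print(f"'worst' group on day 1:   {mean([day1[i] for i in worst]):.2f}")
print(f"same group on day 2:      {mean([day2[i] for i in worst]):.2f}")
# The "worst" group improves on day 2 without any treatment: extreme
# scores are partly noise, and the noise doesn't repeat.
```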

This is perhaps the best argument for why it takes multiple sessions with a patient—or, at the very least, a very thorough psychiatric history—to make a confident psychiatric diagnosis and to follow response to treatment.  Symptoms—especially mild ones—come and go.  But in our rush to judgment (not to mention the pressures of modern medicine to determine a diagnosis ASAP for billing purposes), endorsement of a few symptoms is often sufficient to justify the prescription of a drug.

Homeostasis and regression to the mean are not the same: one is a biological process, the other is natural, semi-random variation.  But both of these concepts should be considered as explanations for our patients “getting better.”  When these changes occur in the context of taking a medication (particularly one like an atypical antipsychotic, prescribed for so many nonspecific diagnoses), we like to think the medication is doing the trick, when the clinical response may be due to something else altogether.

Al Jazeera was right: the pharmaceutical companies have done a fantastic job in placing atypical antipsychotics into every psychiatrist’s armamentarium.  And yes, we use them, and people improve.  The point, though, is that the two are sometimes not connected.  Until and unless we find some way to recognize this—and figure out what really works—Big Pharma will continue smiling all the way to the bank.

