Turf Wars

July 6, 2012

The practice of medicine has changed enormously in just the last few years.  While the upcoming implementation of the Affordable Care Act promises even further—and more dramatic—change, one topic which has received little popular attention is the question of exactly who provides medical services.  Throughout medicine, physicians (i.e., those with MD or DO degrees) are being replaced by others, whenever possible, in an attempt to cut costs and improve access to care.

In psychiatry, non-physicians have long been a part of the treatment landscape.  Most commonly today, psychiatrists focus on “medication management” while psychologists, psychotherapists, and others perform “talk therapy.”  But even the med management jobs—the traditional domain of psychiatrists, with their extensive medical training—are gradually being transferred to so-called “midlevel” providers.

The term “midlevel” (not always a popular term, by the way) refers to someone whose training lies “mid-way” between that of a physician and another provider (like a nurse, psychologist, social worker, etc.) but who is still licensed to diagnose and treat patients.  Midlevel providers usually work under the supervision of a physician, although that supervision is often not direct.  In psychiatry, there are a number of such midlevel professionals, with designations like PMHNP, PMHCNS, RNP, and APRN, who have become increasingly involved in “med management” roles.  This is partly because they tend to demand lower salaries and are reimbursed at lower rates than physicians.  However, many physicians—and not just in psychiatry, by the way—have grown increasingly defensive (and, at times, downright angry, if some physician-only online communities are any indication) about this encroachment of “lesser-trained” practitioners onto their turf.

In my own experience, I’ve worked side-by-side with a few RNPs.  They performed their jobs quite competently.  However, their competence speaks less to the depth of their knowledge (which was impressive, incidentally) and more to the changing nature of psychiatry.  Indeed, psychiatry seems to have evolved to such a degree that the typical psychiatrist’s job—or “turf,” if you will—can be readily handled by someone with less (in some cases far less) training.  When you consider that most psychiatric visits comprise a quick interview and the prescription of a drug, it’s no surprise that someone with even just a rudimentary understanding of psychopharmacology and a friendly demeanor can do well 99% of the time.

This trend could spell (or hasten) the death of psychiatry.  More importantly, however, it could present an opportunity for psychiatry’s leaders to redefine and reinvigorate our field.

It’s easy to see how this trend could bring psychiatry to its knees.  Third-party payers obviously want to keep costs low, and with the passage of the ACA the role of the third-party payer—and “treatment guidelines” that can be followed more or less blindly—will be even stronger.  Patients, moreover, increasingly see psychiatry as a medication-oriented specialty, thanks to direct-to-consumer advertising and our medication-obsessed culture.  Taken together, this means that psychiatrists might be passed over in favor of cheaper workers whose main task will be to follow guidelines or protocols.  If so, most patients (unfortunately) wouldn’t even know the difference.

On the other hand, this trend could also present an opportunity for a revolution in psychiatry.  The predictions in the previous paragraph are based on two assumptions:  first, that psychiatric care requires medication, and second, that patients see the prescription of a drug as equivalent to a cure.  Psychiatry’s current leadership and the pharmaceutical industry have successfully convinced us that these statements are true.  But they need not be.  Instead, they merely represent one treatment paradigm—a paradigm that, for ever-increasing numbers of people, leaves much to be desired.

Preservation of psychiatry requires that psychiatrists find ways to differentiate themselves from midlevel providers in a meaningful fashion.  Psychiatrists frequently claim that they are already different from other mental health practitioners, because they have gone to medical school and, therefore, are “real doctors.”  But this is a specious (and arrogant) argument.  It doesn’t take a “real doctor” to do a psychiatric interview, to compare a patient’s complaints to what’s written in the DSM (or what’s in one’s own memory banks) and to prescribe medication according to a guideline or flowchart. Yet that’s what most psychiatric care is.  Sure, there are those cases in which successful treatment requires tapping the physician’s knowledge of pathophysiology, internal medicine, or even infectious disease, but these are rare—not to mention the fact that most treatment settings don’t even allow the psychiatrist to investigate these dimensions.

Thus, the sad reality is that today’s psychiatrists practice a type of medical “science” that others can grasp without four years of medical school and four years of psychiatric residency training.  So how, then, can psychiatrists provide something different—particularly when appointment lengths continue to dwindle and costs continue to rise?  To me, one answer is to revamp specialty training.  I received my training at two institutions with very different cultures and patient populations, but both shared a common emphasis on teaching medication management.  Did I need four years to learn how to prescribe drugs?  No.  In reality, practical psychopharmacology can be learned in a one-year (maybe even six-month) course—and in any case, the most valuable knowledge comes from years of experience, something that only real life (and not a training program) can provide.

Beyond psychopharmacology, psychiatry training programs need to beef up psychotherapy training, something that experts have encouraged for years.  But it goes further than that: psychiatry trainees need hands-on experience in the recovery model, community resources and their delivery, addictive illness and recovery concepts, behavioral therapies, case management, and, yes, how to truly integrate medical care into psychiatry.  Furthermore, it wouldn’t hurt to give psychiatrists lessons in communication and critical thinking skills, cognitive psychology principles, cultural sensitivity, economics, business management, and alternative medicine (much of which is “alternative” only because the mainstream says so).  And—my own pet peeve—trainees need greater exposure to the wide, natural variability among human beings in their intellectual, emotional, behavioral, perceptual, and physical characteristics and aptitudes, so we stop labeling everyone who walks in the door as “abnormal.”

One might argue that this sounds great, but that psychiatrists don’t get paid for those things.  True, we don’t.  At least not yet.  Nevertheless, a comprehensive approach to human wellness, taken by someone who has invested many years learning how to integrate these perspectives, is, in the long run, far more efficient than the current paradigm of discontinuous care, in which one person manages meds, another provides therapy, and another serves as case manager—roles which can change abruptly due to systemic constraints and turnover.

If we psychiatrists want to defend our “turf,” we can start by reclaiming some of the turf we’ve given away to others.  But more importantly, we must also identify new turf and make it our own—not to provide duplicate, wasteful care, but instead to create a new treatment paradigm in which the focus is on the patient and the context in which he or she presents, and treatment involves only what is necessary (and which is likely to work for that particular individual).  Only a professional with a well-rounded background can bring this paradigm to light, and psychiatrists—those who have invested the time, effort, expense, and hard work to devote their lives to the understanding and treatment of mental illness—are uniquely positioned to bring this perspective to the table and make it happen.


What’s the Proper Place of Science in Psychiatry and Medicine?

April 29, 2012

On the pages of this blog I have frequently written about the “scientific” aspects of psychiatry and questioned how truly scientific they are.   And I’m certainly not alone.  With the growing outcry against psychiatry for its medicalization of human behavior and the use of powerful drugs to treat what’s essentially normal variability in our everyday existence, it seems as if everyone is challenging the evidence base behind what we do—except most of us who do it on a daily basis.

Psychiatrists are unique among medical professionals, because we need to play two roles at once.  On the one hand, we must be scientists—determining whether there’s a biological basis for a patient’s symptoms.  On the other hand, we must identify environmental or psychological precursors to a patient’s complaints and help to “fix” those, too.  However, today’s psychiatrists often eschew the latter approach, brushing off their patients’ internal or interpersonal dynamics and ignoring environmental and social influences, rushing instead to play the “doctor” card:  labeling, diagnosing, and prescribing.

Why do we do this?  We all know the obvious reasons:  shrinking appointment lengths, the influence of drug companies, psychiatrists’ increasing desire to see themselves as “clinical neuroscientists,” and so on.

But there’s another, less obvious reason, one which affects all doctors.  Medical training is all about science.  There’s a reason why pre-meds have to take a year of calculus, organic chemistry, and physics to get into medical school.  It’s not because doctors solve differential equations and perform redox reactions all day.  It’s because medicine is a science (or so we tell ourselves), and, as such, we demand a scientific, mechanistic explanation for everything from a broken toe to a myocardial infarction to a manic episode.  We do “med checks,” as much as we might not want to, because that’s what we’ve been trained to do.  And the same holds true for other medical specialties, too.  Little emphasis is placed on talking and listening.  Instead, it’s all about data, numbers, mechanisms, outcomes, and the right drugs for the job.

Perhaps it’s time to rethink the whole “medical science” enterprise.  In much of medicine, paying more and more attention to biological measures—and the scientific evidence—hasn’t really improved outcomes.  “Evidence-based medicine,” in fact, is really just a way for payers and the government to create guidelines to reduce costs, not a way to improve individual patients’ care.  Moreover, we see examples all the time—in all medical disciplines—of the corruption of scientific data (often fueled by drug company greed) and very little improvement in patient outcomes.  Statins, for instance, are effective drugs for high cholesterol, but their widespread use in people with no other risk factors seems to confer no additional benefit.  Decades of research into appetite and metabolism haven’t eradicated obesity in our society.  A full-scale effort to elucidate the brain’s “reward pathways” hasn’t made a dent in the prevalence of drug and alcohol addiction.

Psychiatry suffers under the same scientific determinism.  Everything we call a “disease” in psychiatry could just as easily be called something else.  I’ve seen lots of depressed people in my office, but I can’t say for sure whether I’ve ever seen one with a biological illness called “Major Depressive Disorder.”  But that’s what I write in the chart.  If a patient in my med-management clinic tells me he feels better after six weeks on an antidepressant, I have no way of knowing whether it was due to the drug.  But that’s what I tell myself—and that’s usually what he believes, too.  My training encourages me to see my patients as objects, as collections of symptoms, and to interpret my “biological” interventions as having a far greater impact on my patients’ health than the hundreds or thousands of other phenomena they experience in between appointments with me.  Is this fair?

(This may explain some of the extreme animosity from the anti-psychiatry crowd—and others—against some very well-meaning psychiatrists.  With few exceptions, the psychiatrists I know are thoughtful, compassionate people who entered this field with a true desire to alleviate suffering.  Unfortunately, by virtue of their training, many have become uncritical supporters of the scientific model, making them easy targets for those who have been hurt by that very same model.)

My colleague Daniel Carlat, in his book Unhinged, asks the question: “Why do [psychiatrists] go to medical school? How do months of intensive training in surgery, internal medicine, radiology, etc., help psychiatrists treat mental illness?”  He lays out several alternatives for the future of psychiatric training.  One option is a hybrid approach that combines a few years of biomedical training with a few years of rigorous exposure to psychological techniques and theories.  Whether this would be acceptable to psychiatrists—many of whom wear their MD degrees as scientific badges of honor—or to psychologists—who might feel that their turf is being threatened—is anyone’s guess.

I see yet another alternative.  Rather than taking future psychiatrists out of medical school and teaching them an abbreviated version of medicine, let’s change medical school itself.  Let’s take some of the science out of medicine and replace it with what really matters: learning how to think critically, how to communicate with patients (and with each other), and how to see our patients in a greater societal context.  Soon the Medical College Admission Test (MCAT) will include more questions about cultural studies and ethics.  Medical education should go one step further and offer more exposure to economics, politics, management, health-care policy, decision-making skills, communication techniques, multicultural issues, patient advocacy, and, of course, how to interpret and critique the science that does exist.

We doctors will need a scientific background to interpret the data we see on a regular basis, but we must also acknowledge that our day-to-day clinical work requires very little science at all.  (In fact, all the biochemistry, physiology, pharmacology, and anatomy we learned in medical school is either (a) irrelevant, or (b) readily available on our iPhones or by a quick search of Wikipedia.)  We need to be cautious not to bring science into a clinical scenario simply because it’s easy or because “it’s what we know,” especially when it provides no benefit to the patient.

So we don’t need to take psychiatry out of medicine.  Instead, we should bring a more enlightened, patient-centered approach to all of medicine, starting with formal medical training itself.  This would help all medical professionals to offer care that focuses on the person, rather than an MRI or CT scan, receptor profile or genetic polymorphism, or lab value or score on a checklist.  It would help us to be more accepting of our patients’ diversity and less likely to rush to a diagnosis.  It might even restore some respect for the psychiatric profession, both within and outside of medicine.  Sure, it might mean that fewer patients are labeled with “mental illnesses” (translating into less of a need for psychiatrists), but for the good of our patients—and for the future of our profession—it’s a sacrifice that we ought to be willing to make.


Is Weiner Really Such A Bad Guy?

June 25, 2011

I don’t use this blog as a platform for political opinions or broad social commentary, but the Anthony Weiner “sexting” fiasco has raised some issues in my mind.  And I guess, in a roundabout way, it actually does pertain to psychiatry and medicine, so I figured I’d share these thoughts.

Unless you’ve been exiled to the Gulag for the last month, you probably know that Weiner, a Democratic New York congressman, was forced to resign from his post after the outcry over lewd photographs he sent to women from his Twitter account.  He left his office in disgrace and is apparently entering rehab.  (Maybe I’ll write about the wisdom of that move in a different post.)

The thing is, Weiner was a generally well-liked congressman and was reportedly a leading candidate to run for mayor of New York in 2013.  He had many supporters and, until the “Weinergate” scandal broke, was seen as a very capable politician.  One might argue, in fact, that his sexual exploits had no effect on his ability to legislate, despite the vociferous (and at times rabid) barbs levied upon him by pundits and critics after the scandal became public.

Now, don’t get me wrong.  I am not condoning his behavior.  I am not saying that we should ignore it because “he’s otherwise a good guy.”  In no way should we turn a blind eye to something that shows such poor taste, a profound lack of judgment, and a disregard for his relationship with his wife.

But does it require the sudden unraveling of an entire political career?  Weiner has done some bad things.  But do they make him a bad congressman?

Some of the same questions arose during the recent flurry of stories about doctors who speak for drug companies.  As ProPublica has written in its “Dollars for Docs” series, some doctors have earned tens of thousands of dollars speaking on behalf of companies when they are also expected to be fair and unbiased in their assessment of patients, or in their analysis and presentation of data from clinical trials.

This is, in my opinion, a clear conflict of interest.  However, some of the articles went one step further and pointed out that many of those doctors have been disciplined by their respective Medical Boards, or have had other blemishes on their record.  Are these conflicts of interest?  No.  To me, it seems more like muckraking.  It’s further ammunition with which critics can attack Big Pharma and the “bad” doctors who carry out its dirty work.

Now I don’t mean to say that every sin or transgression should be ignored.  If one of those doctors had been disciplined for excessive or inappropriate prescribing, or for prescription fraud, or for questionable business practices, then I can see why it might be an issue worthy of concern.  But to paint all these doctors with a broad brush and malign them even further because of past disciplinary action (and not simply on the basis of the rather obvious financial conflicts of interest) seems unfair.

The bottom line is, sometimes good people do bad things.  And unfortunately, even when those “bad things” are unrelated to the business at hand, we sometimes ruin lives and careers in our attempts to exact justice.  Whatever happened to rehabilitation and recovery?  A second chance?  Can we evaluate doctors (and politicians) by the quality of their work and their potential current conflicts, rather than something they did ten or twenty years ago?

(By the way, there are some bad—i.e., uninformed, irresponsible—doctors out there who have no disciplinary actions and no relationships with pharmaceutical companies.  Where are the journalists and patient-advocacy groups looking into their malfeasance?)

In our society, we are quick to judge—particularly those in positions of great power and responsibility.  And those judgments stick.  They become a lens through which we see a person, and those lenses rarely come off, regardless of how hard that person has worked to overcome those characterizations.  Ask any recovered alcoholic or drug addict.  Ask any ex-felon who has cleaned up his act.  Ask any “impaired professional.”  (In the interest of full disclosure, I am one of those professionals, whose “impairments” stemmed from a longstanding mental illness [now in remission] and affected none of my patients or colleagues, but which have introduced significant obstacles to my employability for the last five years.)  And ask any politician who has had to surrender an office due to a personal failing like Weiner’s.

Come to think of it, ask any patient who has been given a psychiatric diagnosis and whose words and actions will be interpreted by her friends, family, doctors, or boss as part of her “borderline personality” or “bipolar” or “psychosis.”  It’s hard to live that down.

When evaluations matter, we should strive to judge people by the criteria that count, instead of the criteria that strengthen our biases, confirm our misconceptions, and polarize us further.  If we are able to do so, we may make it easier for people to recover and emerge even stronger after making mistakes or missteps in their lives.  We also might get along with each other just a little better.


Is a Good Doctor Like a Good Teacher?

February 7, 2011

The Huffington Post published an interesting and thought-provoking article two weeks ago, entitled “What If We Treated Doctors The Way We Treat Teachers?”  The author, an assistant professor of education at Towson University, suggests that, since doctors and teachers both provide a vital service to society (and, importantly, to all members of society, not just those who care about whether they might develop diabetes in 30 years, or whether they can get into a good college), the two professions should be evaluated by similar measures.

In particular, he writes, doctors and others involved in patient care should be evaluated by their patient outcomes—for example, whether a doctor’s patients meet certain standards of general health, whether a community’s specific health care needs are being met, and whether medical schools produce competent physicians.  This emphasis on “outcomes” parallels the education system’s emphasis on measuring student performance as a way to assess the effectiveness of teachers.

Even though his article was not meant to be taken literally, I believe that most of his proposals are quite sound.  No one would deny that it is the responsibility of the medical profession to make sure that people are healthy, that underserved communities get the care they need, that hospitals are available to take care of the sick, and so forth.  And since we know the underlying causes of many diseases, and public health research has identified numerous strategies that can prevent or delay common conditions, one would think that we would welcome “outcome measures” as a way to demonstrate how effective our interventions are.

[One underlying message of the article, however, which I won’t detail here, is that the same cannot be said for education; there are widely divergent opinions on the “right” way to educate a child, and even if there were one “right” way, the educational system (much less an individual teacher) absolutely cannot control what happens in the child’s home that may have a profound impact on how he or she learns.]

So why don’t we evaluate doctors on these measures?  Well, for one thing, how do we measure “success” or “health”?  When people are sick, they have abnormalities or lesions that we can see, measure, and fix.  We can remove the tumor or bring the blood pressure back to normal, but is that the right measure of “health”?  Another reason doctors aren’t subject to outcome measures is that it’s far easier to assess us on other measures—measures that have little to do with patient care but serve some other special interest.  For instance, I’m evaluated by various parties on how many prescriptions I write, how many days my patients stay in the hospital, how completely I fill out the mental status exam form in my patient charts, how many buttons I click in my electronic medical record system, and so on.  Everything EXCEPT how well my patients do.

And then, of course, there’s the fact that many factors beyond the control of the physician (and usually outside of the patient’s control, too) prevent positive outcomes:  insurance companies refuse to cover the cost of effective drugs and other treatments; direct-to-consumer advertising leads patients to demand medications that may not be helpful (and which might actually cause harm); and the lack of accessible, affordable primary care—or of other services such as therapy or rehab—prevents patients from accessing vital components of effective care.

I’ll go on record to say that doctors ought to be evaluated on how healthy their patients are.  After all, that’s why we do what we do.  But before we start measuring patient outcomes, let’s first decide what we want to measure, and whether it’s valid.  Simple measurements like blood pressure or cholesterol level are a start, but don’t tell the whole story; neither do “patient satisfaction scores,” as sometimes the best medical advice is something patients don’t want to hear.  Second, let’s make sure patients and doctors have access to the resources that would promote positive outcomes.  We know the elements of wise, cost-effective, preventive care, so we should implement them.  Finally, if we are to measure patient outcomes, then let’s stop assessing and rewarding physicians on other measures that have nothing to do with patient care.

All doctors want to treat patients, just as all teachers want to educate students.  Measuring outcomes—i.e., how effectively we do what we set out to do—is one way to ensure good doctors and good teachers, but let’s make sure we’re measuring the right things, that we have access to the tools we need to do the job, and that we remove all the other obligations that interfere with the job we have undertaken.  Whether that can be done (in medicine or in education) is anybody’s guess.

