Disruptive Technology Vs. The Disruptive Physician

February 26, 2012

The technological advances of just the last decade—mobile computing, social networking, blogging, tablet computers—were never thought to be “essential” when first introduced.  But while they started as novelties, their advantages became apparent, and today these are all part of our daily lives.  These are commonly referred to as “disruptive technologies”:  upstart developments that originally found their place in niche markets outside of the mainstream, but gradually “disrupted” the conventional landscape (and conventional wisdom) to become the established ways of doing things.

In our capitalist economy, disruptive technology is considered a very good thing.  It has made our lives easier, more enjoyable, and more productive.  It has created no small number of multimillionaires.  Entrepreneurs worldwide are constantly looking for the next established technologies to disrupt, usurp, and overturn, in hopes of a very handsome payoff.

In medicine, when we talk about “disruption,” the implication is not quite as positive.  In fact, the term “disruptive physician” is an insult, a black mark on one’s record that can be very hard to overcome.  It refers to someone who doesn’t cooperate, doesn’t follow established protocols, yells at people, discriminates against others, may abuse drugs or alcohol, or is generally incompetent.  These are not good things.

Really?  Now, no one would argue that substance abuse, profanity, spreading rumors, degrading one’s peers, or incompetence are good.  But what about the physician who “expresses political views that are disagreeable to the hospital administration”?  How about the physician who speaks out about deficiencies in patient care or patient safety, or who (legitimately) points out the incompetence of others?  How about the physician who prioritizes his own financial and/or business objectives over those of the hospital (when that may in fact be the only way to protect his ability to practice)?  All of these have been considered “disruptive” behaviors, and all could be used by highly conservative medical staffs to discipline physicians and preserve the status quo.

Is this fair?  In modern psychiatry, with its shrinking appointment lengths, overreliance on the highly deficient DSM, excessive emphasis on pharmacological solutions, and practitioners’ increasing ignorance of developmental models and psychosocial interventions, maybe someone should stand up and express opinions that the “powers that be” might consider unacceptable.  Someone should speak out on behalf of patient safety.  Someone should point out egregious examples of waste, incompetence, or abuse of privilege.  Plenty of psych bloggers and a few renegade psychiatrists do express these opinions, but they (we?) are a minority.  I don’t know of any department chairmen or APA officers who are willing to be so “disruptive.”  As a result, we’re stuck with what we’ve got.

That’s not to say there aren’t any disruptive technologies in psychiatry.  What are they?  Well, medications, for instance.  Drug treatment “disrupted” psychoanalysis and psychotherapy, and it represents the foundation of most psychiatric treatment today.  Over the last 30 years, pharmaceutical companies (and prescribers) have earned millions of dollars from SSRIs, SNRIs, second-generation antipsychotics, psychostimulants, and many others.  But are people less mentally ill now than they were in the early 1980s?  Today—just in time for patent expirations!—we’re already seeing the next disruptive medication technologies, like those based on glutamate and glutamine signaling.  According to Stephen Stahl at the most recent NEI Global Congress, “we’ve beaten the monoamine horse sixteen ways to Sunday” (translation: we’ve milked everything we can out of the serotonin and dopamine stories), and glutamate is the next blockbuster drug target to disrupt the marketplace.

Another disruptive technology is the DSM.  I don’t have much to add to what’s already been written about the DSM-5 controversy except to point out what should be obvious:  We don’t need another DSM right now.  Practically speaking, a new DSM is absolutely unnecessary.  It will NOT help me treat patients any better.  But it’s coming, like it or not.  It will disrupt the way we have conducted our practices for the last 10 years (guided by the equally imperfect DSM-IV-TR), and it will put millions more dollars in the coffers of the APA.

And then, of course, there is the electronic medical record (EMR).  As with the DSM-5, I don’t need an EMR to practice psychiatry.  But some politicians in Washington, DC, decided that, as a component of the 2009 HITECH Act (and in preparation for truly nationalized health care), we should all use EMRs.  They even offered financial incentives to doctors to do so (and are levying penalties for not doing so).  And despite some isolated benefits (which are more theoretical than practical, frankly), EMRs are disruptive.  Just not in the right way.  They disrupt work flow, the doctor-patient relationship, and, sometimes, common sense.  But they’re here to stay.

Advances in records & database management, in general, are the new disruptive technologies in medicine.  Practice Fusion, a popular (and ad-supported) EMR, has raised tens of millions of dollars in venture capital funding and employs over 150 people.  And what does it do with the data from the 28 million patients it serves?  It sells it to others, of course.  (And it can tell you fun things like which cities are most “lovesick.”  How’s that for ROI?)

There are many other examples of companies competing for your health-care dollar, whose products are often only peripherally related to patient care but which represent that holy grail of the “disruptive technology.”  There are online appointment scheduling services, telepsychiatry services, educational sites heavily sponsored by drug companies, doctor-only message boards (which sell doctors’ opinions to corporations), drug databases (again, sponsored by drug companies), and others.

In the interest of full disclosure, I use some of the above services, and some are quite useful.  I believe telemedicine, in particular, has great potential.  But at the end of the day, these market-driven novelties ignore some of the bigger, more entrenched problems in medicine, which only practicing docs see.  In my opinion, the factors that would truly help psychiatrists take better care of patients are of a different nature entirely:  improving psychiatric training (of MDs and non-MD prescribers); emphasizing recovery and patient autonomy in our billing and reimbursement policies; eliminating heavily biased pharmaceutical advertising (both to patients and to providers); revealing the extensive and unstated conflicts of interest among our field’s “key opinion leaders”; reforming the “disability” system and disconnecting it from Medicaid, particularly among indigent patients; and reallocating health-care resources more equitably.  But, as a physician, if I were to go to my superiors with ideas to reform any of the above in my day-to-day work, I would run the risk of being labeled “disruptive.”  When, in fact, that would be my exact intent:  to disrupt some of the damaging, wasteful practices that occur in our field almost every day.

I agree that disruption in medicine can be a good thing, and can advance the quality and cost-effectiveness of care.  But when most of the “disruptions” come from individuals who are not actively in the trenches, and who don’t know where needs are the greatest, we may be doing absolutely nothing to improve care.  Even worse, when we fail to embrace the novel ideas of physicians—but instead discipline those physicians for being “disruptive”—we risk punishing creativity, destroying morale, and fostering a sense of helplessness that, in the end, serves no one.


Do I Want A Philosopher As My Surgeon?

February 20, 2012

I recently stumbled upon an article describing upcoming changes to the Medical College Admissions Test.  Also known as the MCAT, this is the exam that strikes fear into the hearts of pre-med students nationwide, due to its rigorous assessment of all the hard sciences that we despised in college.  The MCAT can make or break someone’s application to a prestigious medical school, and in a very real way, it can be the deciding factor as to whether someone even becomes a doctor at all.

According to the article, the AAMC—the organization which administers the MCAT—will “stop focusing solely on biology, physics, statistics, and chemistry, and also will begin asking questions on psychology, ethics, cultural studies, and philosophy.”  The article goes on to say that questions will ask about such topics as “behavior and behavior change, cultural and social differences that influence well-being, and socioeconomic factors, such as access to resources.”

Response has been understandably mixed.  On at least two online physician discussion groups, doctors are denouncing the change.  Medicine is based in science, they argue, and the proposed changes simply encourage mediocrity and “beat the drum for socialized medicine.”  Others express frustration that this shift rewards not those who can practice good medicine, but rather those who can increase “patient satisfaction” scores.  Still others believe the new MCAT is just a way to recruit a new generation of liberal-minded, government-employed docs (or, excuse me, “providers”) just in time for the roll-out of Obamacare.

I must admit that I can understand the resistance from the older generation of physicians.  In the interest of full disclosure, I was trained under the traditional medical model.  I learned anatomy, biochemistry, pathology, microbiology, etc., independently, and then had to synthesize the material myself, rather than through the “problem-based learning” format of today’s medical schools.  I also have an advanced degree in neuroscience, so I’m inclined to think mechanistically, to be critical of experimental designs, and always to search for alternate explanations of what I observe.

In spite of my own training, however, I think I might actually support the new MCAT format.  Medicine is different today.  Driven by factors that are beyond the control of the average physician, diagnostic tools are becoming more automated and treatment protocols more streamlined, even incorporated into our EMRs.  In today’s medicine, the doctor is no longer an independent, objective authority, but rather someone hired to follow a set of rules or guidelines.  We’re rapidly losing sight of (1) who the patient is, (2) what the patient wants, and (3) what unique skills we can provide to that patient.

Some examples:  The scientifically minded physician sees the middle-aged obese male with diabetes and hypertension as a guy with three separate diseases, each requiring its own treatment, often driven by guidelines that result in disorganized, fractured care.  He sees the 90-year-old woman with kidney failure, brittle osteoporosis, and congestive heart failure as a candidate for nephrology, orthopedics, and cardiology consults, driving up costs and the likelihood of iatrogenic injury.  In reality, the best care might come from, in the first case, a family doc with an emphasis on lifestyle change, and in the second, a geriatrician who understands the woman’s resources, needs, and support system.

Psychiatry presents its own unique challenges.  Personally, I believe we psychiatrists have been overzealous in our redefinition of the wide range of abnormal human behaviors as “illnesses” requiring treatment.  It would be refreshing to have an economist work in a community mental health clinic, helping to redirect scarce resources away from expensive antipsychotics or wasteful “disability” programs and towards job-training or housing services instead.  Maybe a sociologist would be less likely to see an HMO patient as “depressed” and needing meds, and more likely to see her as enduring complicated relationship problems amenable to therapy and to a reassessment of what she aspires to achieve in her life.

This may sound “touchy-feely” to some.  Trust me, ten years ago—at the peak of my enthusiasm for biological psychiatry—I would have said the same thing, and not in a kind way.  But I’ve since learned that psychiatry is touchy-feely.  And in their own unique ways, all specialties of medicine require a sophisticated understanding of human behavior, psychology, and the socioeconomic realities of the world in which we live and practice.  What medicine truly needs is that rare combination of someone who can not only describe Friedel-Crafts alkylation and define Hardy-Weinberg equilibrium, but who can also understand human learning and motivation and describe—even in a very rough way—what the heck “Obamacare” is all about anyway.

If I needed cardiac bypass surgery, would I want a philosophy major as my surgeon?  I honestly don’t care, as long as he or she has the requisite technical skill to put me under the knife.  But perhaps a philosopher would be just as well—or better—prepared to judge whether I needed the operation in the first place, how to evaluate my other options (if any), and—if I undergo the surgery—how to change my behavior so that I won’t need another one.  Better yet, maybe that philosopher would also want to change conditions so that fewer people suffer from coronary artery disease, or to determine a more equitable way to ensure that anyone who needs such a procedure can get it.

If we doctors continue to see ourselves as scientists first and foremost, we’ll be ordering tests and prescribing meds until we’re bankrupt.  At the other extreme, if we’re too people-friendly, patients will certainly like us, but we may have no impact on their long-term health.  Maybe the new MCAT is a way to encourage docs to bridge this gap, to make decisions based on everything that matters, even those factors that today’s medicine tends to ignore.  It’s not clear whether this will succeed, but it’s worth a try.


Big Brother Is Watching You (Sort Of)

February 17, 2012

I practice in California, which, like most (but not all) states, has a service by which I can review my patients’ controlled-substance prescriptions.  “Controlled” substances are those drugs with a high potential for abuse, such as narcotic pain meds (e.g., Vicodin, Norco, OxyContin) or benzodiazepines (e.g., Xanax, Valium, Klonopin).  The thinking is that if we can follow patients who use high amounts of these drugs, we can prevent substance abuse or the illicit sale of these medications on the street or black market.

Unfortunately, California’s program may be on the chopping block.  Due to budget constraints, Governor Jerry Brown is threatening to close the Bureau of Narcotic Enforcement (BNE), the agency which tracks pharmacy data.  At present, the program is being supported by grant money—which could run out at any time—and there’s only one full-time staff member managing it.  Thus, while other states (even Florida, despite the opposition of Governor Rick Scott) are scrambling to implement programs like this one, it’s a travesty that we in California might lose ours.

Physicians (and the DEA) argue that these programs are valuable for detecting “doctor shoppers”—i.e., those who go from office to office trying to obtain Rx’es for powerful opioids with street value or addictive potential.  Some have even argued that there should be a nationwide database, which could help us identify people involved in interstate drug-smuggling rings like the famous “OxyContin Express” between rural Appalachia and Florida.

But I would say that the drug-monitoring programs should be preserved for an entirely different reason: namely, that they help to improve patient care.  I frequently check the prescription histories of my patients.  I’m not “playing detective,” seeking to bust a patient who might be abusing or selling their pills.  Rather, I do it to get a more accurate picture of a patient’s recent history.  Patients may come to me, for example, with complaints of anxiety while the database shows they’re already taking large amounts of Xanax or Ativan, occasionally from multiple providers.  Similarly, I might see high doses of pain medications, which (if prescribed & taken legitimately) cues me in to the possibility that pain management may be an important aspect of treating their psychiatric concerns, or vice versa.

I see no reason whatsoever that this system couldn’t be extended to non-controlled medications.  In fact, it’s just a logical extension of what’s already possible.  Most of my patients don’t realize that I can call every single pharmacy in town and ask for a list of all their medications.  All I need is the patient’s name and birthdate.  Of course, I would never actually do this, simply because I don’t have the time to call every pharmacy in town.  So instead, I rely largely on what the patient tells me.  But sometimes there’s a huge discrepancy between what patients say they’re taking and what the pharmacy actually dispenses, owing to confusion, forgetfulness, language barriers, or deliberate obfuscation.

So why don’t we have a centralized, comprehensive database of patient med lists?

Some would argue it’s a matter of privacy.  Patients might not want to disclose that they’re taking Viagra or Propecia or an STD treatment (or methadone—for some reason, patients frequently omit that opioid).  But that argument doesn’t hold much water because, as I wrote above, I could in principle call every pharmacy in one’s town (or state) and find that out.

Another argument is that it would be too complicated to gather data from multiple pharmacies and correlate medication lists with patient names.  I don’t buy this argument either.  Consider “data mining.”  This widespread practice allows pharmaceutical companies to get incredibly detailed descriptions of all medications prescribed by each licensed doctor.  The key difference here, of course, is that the data are linked to doctors, not to patients, so patient privacy is not a concern.  (The privacy of patients is sacred, that of doctors, not so much; the Supreme Court even said so.)  Nevertheless, when my Latuda representative knows exactly how much Abilify, Seroquel, and Zyprexa I’ve prescribed in the last 6 months, and knows more about my practice than I do (unless I’ve decided to opt out of this system), then a comprehensive database is clearly feasible.

Finally, some would argue that a database would be far too expensive, given the costs of collecting data, hiring people to manage it, etc.  Maybe if it’s run by government bureaucrats, yes, but I believe this argument is out of touch with the times.  Why can’t we find some out-of-work Silicon Valley engineers, give them a small grant, and ask them to build a database that would collect info from pharmacy chains across the state, along with patient names & birthdates, which could be searched through an online portal by any verified physician?  And set it up so that it’s updated in real time.  Maintenance would probably require just a few people, tops.
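In fact, the core of it is almost trivial.  Here’s a bare-bones sketch of what I have in mind (hypothetical table and field names, with sqlite standing in for whatever production-grade store the real thing would use):

```python
# A bare-bones sketch of the proposed statewide dispense registry.
# Hypothetical schema; sqlite3 stands in for a production data store.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE dispenses (
        patient_name TEXT NOT NULL,
        birthdate    TEXT NOT NULL,  -- e.g., '1985-03-14'
        drug         TEXT NOT NULL,
        quantity     INTEGER NOT NULL,
        pharmacy     TEXT NOT NULL,
        dispensed_on TEXT NOT NULL   -- pharmacies push rows in real time
    )
""")

# Each pharmacy chain feeds in rows as prescriptions are filled:
conn.executemany(
    "INSERT INTO dispenses VALUES (?, ?, ?, ?, ?, ?)",
    [
        ("Jane Doe", "1985-03-14", "alprazolam 1 mg", 60, "Pharmacy A", "2012-01-05"),
        ("Jane Doe", "1985-03-14", "alprazolam 1 mg", 60, "Pharmacy B", "2012-01-19"),
    ],
)

def medication_history(name: str, birthdate: str) -> list:
    """The physician portal's core query: everything dispensed to one patient."""
    return conn.execute(
        "SELECT dispensed_on, drug, quantity, pharmacy FROM dispenses"
        " WHERE patient_name = ? AND birthdate = ? ORDER BY dispensed_on",
        (name, birthdate),
    ).fetchall()

# Two fills of the same benzodiazepine from two pharmacies in two weeks:
for row in medication_history("Jane Doe", "1985-03-14"):
    print(row)
```

The query itself is the easy part, of course; the real engineering lives in the pharmacy feeds, patient identity matching, and physician verification.  But none of that strikes me as insurmountable either.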

Not only does such a proposal sound eminently doable, it actually sounds like it might be easy (and maybe even fun) to create.  If a group of code warriors & college dropouts can set up microblogging platforms, social networking sites, and online payment sites, fueled by nothing more than enthusiasm and Mountain Dew, then a statewide prescription database could be a piece of cake.

Alas, there are just too many hurdles to overcome.  Although it may seem easy to an IT professional, and may seem like just plain good medicine to a doc like me, history has a way of showing that what makes the best sense just doesn’t happen (especially when government agencies are involved).  Until this changes, I’ll keep bothering my local pharmacists by phone to get the information that would be nice to have at my fingertips already.


The Second Law of Thermodynamics and The “Med Check”

February 12, 2012

On one of my recent posts, a psychiatrist made a very thought-provoking comment.  He or she had interviewed at a clinic where the psychiatrist saw 20 patients per day and made well over $300,000 per year.  At a different clinic, the psychiatrists saw far fewer patients (and, of course, made less money), but, the commenter opined, the patients probably received much better care.

This problem of “med checks” serving as the psychiatrist’s bread-and-butter has been discussed ad nauseam, particularly since the infamous New York Times “Talk Doesn’t Pay” article (see my comments here and here).  It’s almost universally accepted that this style of practice is cold, impersonal, and sometimes reckless, and that it often focuses on symptoms and medications rather than people.  I would add that this approach also makes patient care more disorderly and confusing.  Moreover, minimizing this confusion would require more time and energy than most psychiatric practices currently allow.

I work part-time in one setting where the 15-20 minute “med check” is the standard of care.  Because my own strategy is to minimize medication usage in general, I’ve been able to use this time, with most patients, to discuss lifestyle changes or offer brief supportive therapy, hopefully keeping a lid on irresponsible prescribing.  However, I frequently inherit patients from other docs or other clinics who come to me with complicated medication regimens or questionable diagnoses, and who almost universally complain that “my last doctor never talked to me, he just pushed drugs,” or “he just kept prescribing medications but never told me what they were for,” or “I had a side effect from one drug so he just added another one to take care of it,” or some combination of the above.

These patients present an interesting dilemma.  On the one hand, they are usually extraordinarily fascinating, often presenting tough diagnostic challenges or complicated biological conundrums that test my knowledge of psychopharmacology.  On the other hand, a 15- or 20-minute “med check” appointment offers me little time or flexibility to do the work necessary to improve their care.

Consider one patient I saw recently.  She’s in her mid-20s and carries diagnoses of “bipolar II” (more about that diagnosis in a future post, if I have the guts to write it) and Asperger syndrome.  She is intelligent, creative, and has a part-time job in an art studio.  She has a boyfriend and a (very) involved mother, but few other social contacts.  She was hospitalized once in her teens for suicidal ideation.  Her major struggles revolve around her limited social life and the associated anxiety.  She’s also on six psychiatric medications: two antipsychotics, two mood stabilizers, a benzodiazepine, and a PRN sleep agent (plus an oral contraceptive, whose efficacy is probably reduced by one of her mood stabilizers—something she says she was never warned about), and she complains of a handful of mild physical symptoms that are most likely medication side effects.  She (and her mother) told me that her last two doctors “never took the time” to answer their questions or engage in discussion; instead, “they just gave me drugs and kept asking me to come back in three months.”

What to do with such an individual?  My first wish would be to discontinue all medications, assess her baseline, help to redefine her treatment goals, and identify tools to achieve them.  But remember, I only have 20 minutes.  Even the simplest of maneuvers—e.g., starting a gradual taper of one of her medications—would require a detailed explanation of what to expect and how to deal with any difficulties that might arise.  And if I can’t see her for another 2-3 months—or if I have only 13 annual visits with her, as is the case in my Medicaid practice—then this option becomes far more difficult.

As a result, it’s easier to add stuff than to take it away.  It brings to mind the second law of thermodynamics in physics, which (very loosely) says that a system will always develop greater disorder (or randomness, or “entropy”) unless work is done on that system.  Stated from a clinical point of view:  unless we invest more time and energy in our patients, their care will become more scattered, disorganized, and chaotic.
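(For the curious, the textbook form of the law I’m loosely borrowing is simply:)

```latex
% Second law of thermodynamics: total entropy never decreases.
\Delta S_{\mathrm{total}}
  = \Delta S_{\mathrm{system}} + \Delta S_{\mathrm{surroundings}} \ge 0
```

A system’s own entropy can fall only if work is done on it and the disorder is exported somewhere else; that is exactly the clinical point, since order doesn’t come free.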

Some of that time and energy can come from a dedicated physician (which will, of course, require the additional investment of money in the form of greater out-of-pocket cost).  Other times, it can come from the patient him- or herself; there are an impressive—and growing—number of websites and books dedicated to helping patients understand their mental illness and what to expect from specific medications or from their discontinuation (for instance, here’s one to which I’ve referred several patients), often written by patients or ex-patients themselves.  But without some external input, I’m afraid the current status quo sets many patients adrift with little or no guidance, direction, or hope.

It’s disheartening to think that psychiatric care has a tendency to make patients’ lives more disorganized and unstable, particularly when most of us entered this field to do the exact opposite.  It’s also discouraging to know that for those patients who do benefit from mental health care, it’s often in spite of, not because of, the psychiatrist’s involvement (something I’ve written about here).  But if our training programs, health care system, and large financial interests like the pharmaceutical companies—not to mention the increasingly narrow expertise of today’s psychiatrists—continue to drive psychiatric care into brief med-management appointments (which, BTW, I find insulting to call “psychiatry,” but that’s an argument for another time), then we must also prepare for the explosion in diagnoses, the overprescription of largely useless (and often damaging) drugs, skyrocketing rates of psychiatric “disability,” and the bastardization that currently passes for psychiatric care.


Measuring The Immeasurable

February 9, 2012

Is psychiatry a quantitative science?  Should it be?

Some readers might say that this is a ridiculous question.  Of course it should be quantitative; that’s what medicine is all about.  Psychiatry’s problem, they argue, is that it’s not quantitative enough.  Psychoanalysis—that most qualitative of “sciences”—never did anyone any good, and most psychotherapy is, likewise, just a bunch of hocus pocus.  A patient saying he feels “depressed” means nothing unless we can measure how depressed he is.  What really counts is a number—a score on a screening tool or checklist, frequency of a given symptom, or the blood level of some biomarker—not some silly theory about motives, drives, or subconscious conflicts.

But sometimes measurement can mislead us.  If we’re going to measure anything, we need to make sure it’s something worth measuring.

By virtue of our training, physicians are fond of measuring things.  What we don’t realize is that the act of measurement itself introduces an almost immediate bias.  As we assign numerical values to our observations, we start to define values as “normal” or “abnormal.”  And medical science dictates that we should make things “normal.”  When I supervise junior psychiatry residents or medical students, their patient presentations are often filled with such statements as “Mr. A slept for 5 hours last night,” “Ms. B ate 80% of her meals,” or “Mrs. C has gone two days without endorsing suicidal ideation,” as if these were data points to be normalized, just as potassium levels and BUN/Cr ratios need to be normalized in internal medicine.

The problem is, they’re not potassium levels or BUN/Cr ratios.  When those numbers are “abnormal,” there’s usually some underlying pathology which we can discover and correct.  In psychiatry, what’s the pathology?  For a woman who attempted suicide two days ago, does it really matter how much she’s eating today?  Does it really matter whether an acutely psychotic patient (on a new medication, in a chaotic inpatient psych unit with nurses checking on him every hour) sleeps 4 hours or 8 hours each night?  Even the questions that we ask patients—“are you still hearing voices?”, “how many panic attacks do you have each week?” and the overly simplistic “can you rate your mood on a scale of 1 to 10, where 1 is sad and 10 is happy?”— attempt to distill a patient’s overall subjective experience into an elementary quantitative measurement or, even worse, into a binary “yes/no” response.

Clinical trials take measurement to an entirely new level.  In a clinical trial, often what matters is not a patient’s overall well-being or quality of life (although, to be fair, there are ways of measuring this, too, and investigators are starting to look at this outcome measure more closely), but rather a HAM-D score, a MADRS score, a PANSS score, a Y-BOCS score, a YMRS score, or any one of an enormous number of other assessment instruments.  Granted, if I had to choose, I’d take a HAM-D score of 4 over a score of 24 any day, but does a 10- or 15-point decline (typical in some “successful” antidepressant trials) really tell you anything about an individual’s overall state of mental health?  It’s hard to say.

One widely used instrument, the Clinical Global Impression scale, endeavors to measure the seemingly immeasurable.  Developed in 1976 and still in widespread use, the CGI scale has three parts:  the clinician evaluates (1) the severity of the patient’s illness relative to other patients with the same diagnosis (CGI-S); (2) how much the patient’s illness has improved relative to baseline (CGI-I); and (3) the efficacy of treatment.  (See here for a more detailed description.)  It is incredibly simple.  Basically, it’s just a way of asking, “So, doc, how do you think this patient is doing?” and assigning a number to it.  In other words, subjective assessment made objective.
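To see just how simple, here is the instrument’s core reduced to a few lines of illustrative code.  The 1-to-7 anchors are the standard published ones, but the class and field names are my own invention, and the third component (the efficacy index) is omitted:

```python
# Illustrative sketch of a CGI assessment: two clinician ratings,
# each a single integer on an anchored 1-7 scale.
from dataclasses import dataclass

CGI_S_ANCHORS = {  # "How ill is the patient, relative to others with the same diagnosis?"
    1: "normal, not at all ill", 2: "borderline mentally ill",
    3: "mildly ill", 4: "moderately ill", 5: "markedly ill",
    6: "severely ill", 7: "among the most extremely ill",
}

CGI_I_ANCHORS = {  # "How much has the patient changed since baseline?"
    1: "very much improved", 2: "much improved", 3: "minimally improved",
    4: "no change", 5: "minimally worse", 6: "much worse",
    7: "very much worse",
}

@dataclass
class CGIAssessment:
    severity: int     # CGI-S
    improvement: int  # CGI-I

    def describe(self) -> str:
        return (f"CGI-S {self.severity} ({CGI_S_ANCHORS[self.severity]}); "
                f"CGI-I {self.improvement} ({CGI_I_ANCHORS[self.improvement]})")

# "So, doc, how do you think this patient is doing?" becomes a number:
print(CGIAssessment(severity=4, improvement=2).describe())
# CGI-S 4 (moderately ill); CGI-I 2 (much improved)
```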

The problem is, the CGI has been criticized precisely for that reason—it’s too subjective.  As such, it is almost never used as a primary outcome measure in clinical trials.  Any pharmaceutical company that tries to get a drug approved on the basis of CGI improvement alone would probably be laughed out of the halls of the FDA.  But what’s wrong with subjectivity?  Isn’t everything that counts subjective, when it really comes down to it?  Especially in psychiatry?  The depressed patient who emerges from a mood episode doesn’t describe himself as “80% improved,” he just feels “a lot better—thanks, doc!”  The psychotic patient doesn’t necessarily need the voices to disappear, she just needs a way to accept them and live with them, if at all possible.  The recovering addict doesn’t think in terms of “drinking days per month,” he talks instead of “enjoying a new life.”

Nevertheless, measurement is not a fad; it’s here to stay.  And as the old saying goes, resistance is futile.  Electronic medical records, smartphone apps to measure symptoms, online checklists—they all capitalize on the fact that numbers are easy to record and store, easy to communicate to others, and satisfying to the bean counters.  They enable pharmacy benefit managers to approve drugs (or not), they enable insurers to reimburse for services (or not), and they allow pharmaceutical companies to identify and exploit new markets.  And, best of all, they turn psychiatry into a quantitative, valid science, just like every other branch of medicine.

If this grand march towards increased quantification persists, the human science of psychiatry may cease to exist.  Unless we can replace these instruments with outcome measures that truly reflect patients’ abilities and strengths, rather than pathological symptoms, psychiatry may be replaced by an impersonal world of questionnaires, checklists, and knee-jerk treatments.  In some settings, that’s what we have now.  I don’t think it’s too late to salvage the human element of what we do.  A first step might be simply to use great caution when we’re asked to give a number, measure a symptom, or perform a calculation on something that is intrinsically a subjective phenomenon.  And to remind ourselves that numbers don’t capture everything.


Do What You’re Taught

February 5, 2012

In my mail yesterday was an invitation to an upcoming 6-hour seminar on the topic of “Trauma, Addiction, and Grief.”  The course description included topics such as “models of addiction and trauma/information processing” and using these models to plan treatment; recognizing “masked grief reactions” and manifestations of trauma in clients; and applying several psychotherapeutic techniques to help a patient through addiction and trauma recovery.

Sound relevant?  To any psychiatrist dealing with issues of addiction, trauma, grief, anxiety, and mood—which is pretty much all of us—and interested in integrative treatments for the above, this would seem to be an entirely worthwhile topic to study.  And I was pleased to learn that the program offers “continuing education” credit, too.

But upon reading the fine print, I discovered that credit is not available for psychiatrists.  Instead, you can get credit if you’re one of the following mental health workers:  counselor, social worker, MFT, psychologist, addiction counselor, alcoholism & drug abuse counselor, chaplain/clergy, nurse, nurse practitioner, nurse specialist, or someone seeking “certification in thanatology” (whatever that is).  But not a psychiatrist.  In other words, psychiatrists need not apply.

Well, okay, that’s not entirely correct:  psychiatrists can certainly attend, and—particularly if the program is a good one—my guess is that they would clearly benefit from it.  They just won’t get credit for it.

It’s not the first time I’ve encountered this.  Why do I think this is a big deal?  Well, in all of medicine, “continuing medical education” credit, or CME, is a rough guide to what’s important in one’s specialty.  In psychiatry, the vast majority of available CME credit is in psychopharmacology.  (As it turns out, in the same batch of mail, I received two “throwaway” journals which contained offers of free CME credits for reading articles about treating metabolic syndrome in patients on antipsychotics, and managing sexual side effects of antidepressants.)  Some of the most popular upcoming CME events are the Harvard Psychopharmacology Master Class and the annual Nevada Psychopharmacology Update.  And, of course, the NEI Global Congress in October is a can’t-miss event.  Far more psychiatrists will attend these conferences than a day-long seminar on “trauma, addiction, and grief.”  But which will have the most beneficial impact on patients?

To me, a more important question is:  which will have the most beneficial impact on the future of the psychiatrist?  H. Steven Moffic, MD, recently wrote an editorial in Psychiatric Times in which he complained openly that the classical “territory” of the psychiatrist—diagnosis of mental disorder, psychotherapy, and psychopharmacology—has been increasingly ceded to others.  Well, this is a perfect example:  a seminar whose content is probably applicable to most psychiatric patients, marketed primarily to non-psychiatrists.

I’ve always maintained—on this blog and in my professional life—that psychiatrists should be just as (if not more) concerned about the psychological, cultural, and social aspects of their patients’ experience as about their proper psychopharmacological management.  It’s also just good common sense, especially when viewed from the patient’s perspective.  But if psychiatrists (and our leadership) don’t advocate for the importance of this type of experience, then of course others will do this work instead of us.  We’re making ourselves irrelevant.

I’m currently experiencing this irony in my own personal life.  I’m studying for the American Board of Psychiatry and Neurology certification exam (the “psychiatry boards”) while looking for a new job.  On the one hand, while studying for the test I’m being forced to refresh my knowledge of human development, the history of psychiatry, the theory and practice of psychotherapy, the cognitive and psychological foundations of Axis I disorders, theories of personality, and many other topics.  That’s the “core” subject matter of psychiatry, which is (appropriately) what I’ll be tested on.  Simultaneously, however, the majority of the jobs I’m finding require none of that.  I feel like I’m being hired instead for my prescription pad.

Psychiatry, as the study of human experience and the treatment of a vast range of human suffering, can still be a fascinating field, and one that can offer so much more to patients.  To be a psychiatrist in this classic sense of the word, it seems more and more like one has to blaze an independent trail: obtain one’s own specialized training, recruit patients outside of the conventional means, and—unless one wishes to live on a relatively miserly income—charge cash.  And because no one seriously promotes this version of psychiatry, this individual is rapidly becoming an endangered species.

Maybe I’ll get lucky and my profession’s leadership will advocate more for psychiatrists to be better trained in (and better paid for) psychotherapy, or, at the very least, encourage educators and continuing education providers to emphasize this aspect of our training as equally relevant.  But as long as rank-and-file psychiatrists sit back and accept that our primary responsibility is to diagnose and medicate, and rabidly defend that turf at the expense of all else, then perhaps we deserve the fate that we’re creating for ourselves.


ADHD: A Modest Proposal

February 1, 2012

I’m reluctant to write a post about ADHD.  It just seems like treacherous ground.  Judging by comments I’ve read online and in magazines, and by my own personal experience, expressing an opinion about this diagnosis—or just about anything in child psychiatry—will be met with criticism from one side or another.  But after reading L. Alan Sroufe’s article (“Ritalin Gone Wild”) in this weekend’s New York Times, I feel compelled to write.

If you have not read the article, I encourage you to do so.  Personally, I agree with every word (well, except for the comment about “children born into poverty therefore [being] more vulnerable to behavior problems”—I would remind Dr Sroufe that correlation does not equal causation).  In fact, I wish I had written it.  Unfortunately, it seems that only outsiders or retired psychiatrists can write such stuff about this profession. The rest of us might need to look for jobs someday.

Predictably, the article has attracted numerous online detractors.  For starters, check out this response from the NYT “Motherlode” blog, condemning Dr Sroufe for “blaming parents” for ADHD.  In my reading of the original article, Dr Sroufe did nothing of the sort.  Rather, he pointed out that ADHD symptoms may not entirely (or at all) arise from an inborn neurological defect (or “chemical imbalance”), but rather that environmental influences may be more important.  He also remarked that, yes, ADHD drugs do work; children (and adults, for that matter) do perform better on them, but those successes decline over time, possibly because a drug solution “does nothing to change [environmental] conditions … in the first place.”

I couldn’t agree more.  To be honest, I think this statement holds true for much of what we treat in psychiatry, but it’s particularly relevant in children and adolescents.  Children are exposed to an enormous number of influences as they try to navigate their way in the world, not to mention the fact that their brains—and bodies—continue to develop rapidly and are highly vulnerable.  “Environmental influences” are almost limitless.

I have a radical proposal which will probably never, ever, be implemented, but which might help resolve the problems raised by the NYT article.  Read on.

First of all, you’ll note that I referred to “ADHD symptoms” above, not “ADHD.”  This isn’t a typo.  In fact, this is a crucial distinction.  As with anything else in psychiatry, diagnosing ADHD relies on documentation of symptoms.  ADHD-like symptoms are extremely common, particularly in child-age populations.  (To review the official ADHD diagnostic criteria from the DSM-IV, click here.)  To be sure, a diagnosis of ADHD requires that these symptoms be “maladaptive and inconsistent with developmental level.”  Even so, I’ve often joked with my colleagues that I can diagnose just about any child with ADHD just by asking the right questions in the right way.  That’s not entirely a joke.  Try it yourself.  Look at the criteria, and then imagine you have a child in your office whose parent complains that he’s doing poorly in school, or gets in fights, or refuses to do homework, or daydreams a lot, etc.  When the ADHD criteria are on your mind—remember, you have to think like a psychiatrist here!—you’re likely to ask leading questions, and I guarantee you’ll get positive responses.
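In case the mechanics of that exercise aren’t obvious, the checklist logic reduces to counting endorsed symptoms against a threshold.  A deliberately crude sketch (symptom wording paraphrased from the DSM-IV lists; everything else here is hypothetical):

```python
# Checklist-style diagnosis, reduced to its essentials: count "yes"
# answers and compare to a cutoff. (Symptom wording paraphrased from
# DSM-IV Criterion A; illustrative only.)

INATTENTION = [
    "careless mistakes / poor attention to detail",
    "difficulty sustaining attention",
    "does not seem to listen when spoken to directly",
    "does not follow through on instructions",
    "difficulty organizing tasks",
    "avoids tasks requiring sustained mental effort",
    "loses things necessary for tasks",
    "easily distracted",
    "forgetful in daily activities",
]

HYPERACTIVITY_IMPULSIVITY = [
    "fidgets or squirms",
    "leaves seat when expected to remain seated",
    "runs about or climbs excessively",
    "difficulty playing quietly",
    "on the go, as if driven by a motor",
    "talks excessively",
    "blurts out answers",
    "difficulty awaiting turn",
    "interrupts or intrudes on others",
]

def meets_symptom_threshold(endorsed):
    """Six or more endorsed symptoms from either list 'counts.'
    Note what this ignores: duration, onset, impairment across
    settings, and (crucially) WHY the symptoms are present."""
    return (len(set(endorsed) & set(INATTENTION)) >= 6 or
            len(set(endorsed) & set(HYPERACTIVITY_IMPULSIVITY)) >= 6)

# Ask enough leading questions and the "yes" answers accumulate quickly:
parent_endorsed = INATTENTION[:6]  # six affirmative responses
print(meets_symptom_threshold(parent_endorsed))  # True
```

The point isn’t that clinicians literally run this code; it’s that a checklist plus leading questions is functionally equivalent to it.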

That’s a lousy way of making a diagnosis, of course, but it’s what happens in psychiatrists’ and pediatricians’ offices every day.  There are more “valid” ways to diagnose ADHD:  rating scales like the Conners or Vanderbilt surveys, extensive neuropsychiatric assessment, or (possibly) expensive imaging tests.  However, in practice, we often let subthreshold scores on those surveys “slide” and prescribe ADHD medications anyway (I’ve seen it plenty); neuropsychiatric assessments are often wishy-washy (“auditory processing score in the 60th percentile,” etc.); and, as Dr Sroufe correctly points out, children with poor motivation or “an underdeveloped capacity to regulate their behavior” will most likely have “anomalous” brain scans.  That doesn’t necessarily mean they have a disorder.

So what’s my proposal?  My proposal is to get rid of the diagnosis of ADHD altogether.  Now, before you crucify me or accuse me of being unfit to practice medicine (as one reader—who’s also the author of a book on ADHD—did when I floated this idea on David Allen’s blog last week), allow me to elaborate.

First, if we eliminate the diagnosis of ADHD, we can still do what we’ve been doing.  We can still evaluate children with attention or concentration problems, or hyperactivity, and we can still use stimulant medications (of course, they’d be off-label now) to provide relief—as long as we’ve obtained the same informed consent that we’ve done all along.  We do this all the time in medicine.  If you complain of constant toe and ankle pain, I don’t immediately diagnose you with gout; instead, I might do a focused physical exam of the area and recommend a trial of NSAIDs.  If the pain returns, or doesn’t improve, or you have other features associated with gout, I may want to check uric acid levels, do a synovial fluid analysis, or prescribe allopurinol.

That’s what medicine is all about:  we see symptoms that suggest a diagnosis, and we provide an intervention to help alleviate the symptoms while paying attention to the natural course of the illness, refining the diagnosis over time, and continually modifying the therapy to treat the underlying diagnosis and/or eliminate risk factors.  With the ultimate goal, of course, of minimizing dangerous or expensive interventions and achieving some degree of meaningful recovery.

This is precisely what we don’t do in most cases of ADHD.  Or in most of psychiatry.  While exceptions definitely exist, often the diagnosis of ADHD—and the prescription of a drug that, in many cases, works surprisingly well—is the end of the story.  Child gets a diagnosis, child takes medication, child does better with peers or in school, parents are satisfied, everyone’s happy.  But what caused the symptoms in the first place?  Can (or should) that be fixed?  When can (or should) treatment be stopped?  How can we prevent long-term harm from the medication?

If, on the other hand, we don’t make a diagnosis of ADHD, but instead document that the child has “problems in focusing” or “inattention” or “hyperactivity” (i.e., we describe the specific symptoms), then it behooves us to continue looking for the causes of those symptoms.  For some children, it may be a chaotic home environment.  For others, it may be a history of neglect, or ongoing substance abuse.  For others, it may be a parenting style or interaction which is not ideal for that child’s social or biological makeup (I hesitate to write “poor parenting” because then I’ll really get hate mail!).  For still others, there may indeed be a biological abnormality—maybe a smaller dorsolateral prefrontal cortex (hey! the DLPFC!) or delayed brain maturation.

ADHD offers a unique platform upon which to try this open-minded, non-DSM-biased approach.  Dropping the diagnosis of “ADHD” would have a number of advantages.  It would encourage us to search more deeply for root causes; it would allow us to be more eclectic in our treatment; it would prevent patients, parents, doctors, teachers, and others from using it as a label or as an “excuse” for one’s behavior; and it would require us to provide truly individualized care.  Sure, there will be those who simply ask for the psychostimulants “because they work” for their symptoms of inattentiveness or distractibility (and those who deliberately fake ADHD symptoms because they want to abuse the stimulant or because they want to get into Harvard), but hey, that’s already happening now!  My proposal would admittedly create a glut of “false negatives” (children whose symptoms never receive a formal label), but it would also reduce the “false positives” described above, which, in my opinion, are more damaging to our field’s already tenuous nosology.

A strategy like this could—and probably should—be extended to other conditions in psychiatry, too.  I believe that some of what we call “ADHD” is truly a disorder—probably multiple disorders, as noted above; the same is probably true of “major depression,” “bipolar disorder,” and just about everything else.  But when these labels start being used indiscriminately (and unfortunately DSM-5 doesn’t look likely to offer any improvement), the diagnoses become fixed labels and lock us into an approach that may, at best, completely miss the point, and at worst, cause significant harm.  Maybe we should rethink this.

