Two Psychiatries

March 12, 2012

A common—and ever-increasing—complaint of physicians is that so many variables interfere with our ability to diagnose and treat disease:  many patients have little or no access to preventive services; lots of people are uninsured; insurance plans routinely deny necessary care; drug formularies are needlessly restrictive; paperwork never ends; and the list goes on and on.  Beneath the frustration (and, perhaps, part of the source of it) is the fact that medical illness, for the most part, has absolutely nothing to do with these external burdens or socioeconomic inequalities.  Whether a patient is rich or poor, black or white, insured or uninsured—a disease is a disease, and everyone deserves the same care.

I’m not so sure whether the same can be said for psychiatry.  Over the last four years, I’ve spent at least part of my time working in community mental health (and have written about it here and here).  Before that, though—and for the majority of my training—I worked in a private, academic hospital setting.  I saw patients who had good health insurance, or who could pay for health care out of pocket.  I encountered very few restrictions in terms of access to medications or other services (including multiple types of psychotherapy, partial hospitalization programs, ECT, rTMS, clinical trials of new treatments, etc).  I was fortunate enough to work in specialty referral clinics, where I saw fascinating “textbook” cases of individuals who had failed to respond to years of intensive treatment.  It was exciting, stimulating, thought-provoking, and—for lack of a better word—academic.  (Perhaps it’s not surprising that this is the environment in which textbooks, and the DSM, are written.)

When I started working in community psychiatry, I tried to approach patients with the same curiosity and to employ the same diagnostic strategies and treatment approach.  It didn’t take long for me to learn, however, that these patients had few of the resources I had taken for granted elsewhere.  For instance, psychotherapy was difficult to arrange, and I was not reimbursed for doing it myself.  Access to medications depended upon capricious, unpredictable (and illogical) formularies.  Patients found it difficult to get to regular appointments or to come up with the co-payment, not to mention pay the electric bill or deal with crime in their neighborhood.  It was often hard to obtain a coherent and reliable history, and records obtained from elsewhere were often spotty and unhelpful.

It all made for a very challenging place in which to practice what I (self-righteously) called “true” psychiatry.  But maybe community psychiatry needs to be redefined.  Maybe the social stressors encountered by community psych patients—not the least of which is substandard access to “quality” medical and psychiatric services—result in an entirely different type of mental distress, and demand an entirely different type of intervention.

(I should point out that I did see, at times, the same sort of mental illness I had seen in the private hospital, and it did respond to the same interventions that the textbooks predicted.  While this reaffirmed my faith in the validity of [at least some] mental illnesses, such cases were a small fraction of the patients I saw.)

Should we alter our perceptions and definitions of illness—and of “psychiatry” itself—in public mental health?  Given the obstacles found in community psychiatry settings (absurdly brief appointment times; limited psychotherapy; poor prescription drug coverage; high rates of nonadherence and substance abuse; reliance on ERs for non-emergency care, often resulting in complicated medication regimens, like dangerous combinations of narcotics and benzodiazepines), should we take an entirely different approach?  Does it even make sense to diagnose the same disorders—not to mention put someone on “disability” for these disorders—when there are so many confounding factors involved?

One of my colleagues suggested: just give everyone an “adjustment disorder” diagnosis until you figure everything out.  Good idea, but you won’t get paid for diagnosing “adjustment disorder.”  So a more “severe” diagnosis must be given, followed closely thereafter by a medication (because many systems won’t let a psychiatrist continue seeing a patient unless a drug is prescribed).  Thus, in a matter of one or two office visits (totaling less than an hour in most cases), a Medicaid or uninsured patient might end up with a major Axis I diagnosis and medication(s), while the dozens of stressors that may have contributed to the office visit in the first place go unattended.

Can this change?  I sure hope so.  I firmly believe that everyone deserves access to mental health care.  (I must also point out that questionable diagnoses and inappropriate medication regimens can be found in any demographic.)  But we psychiatrists who work in community settings must not delude ourselves into thinking that what’s written in the textbooks or tested on our Board exams always holds true for the patients we see.  It’s almost as if we’re practicing a “different psychiatry,” one that requires its own diagnostic system, different criteria for “disability” determinations, a different philosophy of “psychotherapy,” and a far more conservative use of medications.  (It might also help to perform clinical trials with subjects representative of those seen in community psychiatry, but given the complexity of these patients, this is highly unlikely.)

Fortunately, a new emphasis on the concept of “recovery” is taking hold in many community mental health settings.  This involves patient empowerment, self-direction, and peer support, rather than a narrow focus on diagnosis and treatment.  For better or for worse, such an approach relies less on the psychiatrist and more on peers and the patient him- or herself.  It also just seems much more rational, emphasizing what patients want and what helps them to succeed.

Whether psychiatrists—and community mental health as a whole—are able to follow this trend remains to be seen.  Unless we do, however, I fear that we may continue to mislead ourselves into believing that we’re doing good, when in fact we’re perpetuating a cycle of invalid diagnoses, potentially harmful treatment, and, worst of all, over-reliance on a system designed for a distinctly different type of “care” than what these individuals need and deserve.


How To Think Like A Psychiatrist

March 4, 2012

The cornerstone of any medical intervention is a sound diagnosis.  Accurate diagnosis guides the proper treatment, while an incorrect diagnosis might subject a patient to unnecessary procedures or excessive pharmacotherapy, and it may further obscure the patient’s true underlying condition.  This is true for all medical specialties—including psychiatry.  It behooves us, then, to examine the practice of clinical decision-making, how we do it, and where we might go wrong, particularly in the area of psychiatric diagnosis.

According to Pat Croskerry, a physician at Dalhousie University in Canada, the foundation of clinical cognition is the “dual process model,” first described by the Greek philosophers (and reviewed here).  This model proposes that people solve problems using one of two “processes”:  Type 1 processes involve intuition and are largely automatic, fast, and unconscious (e.g., recognizing a friend’s face).  Type 2 processes are more deliberate, analytical, and systematic (e.g., planning the best route for an upcoming trip).  Doctors use both types when making a diagnosis, but the relative emphasis varies with the setting.  In the ED, quick action based on pattern recognition (i.e., a Type 1 process) is crucial.  Sometimes, however, it may be wrong, particularly if other conditions aren’t evaluated and ruled out (i.e., a Type 2 process).  For instance, a patient with flank pain, nausea, vomiting, and hematuria demonstrates the “pattern” of a kidney stone (common), but may in fact have a dissecting aortic aneurysm (uncommon).

This model is valuable for understanding how we arrive at psychiatric diagnoses (Croskerry lays out the model in a 2009 article).  When evaluating a patient for the first time, a psychiatrist often looks at “the big picture”:  Does this person appear to have a mood disorder, psychosis, anxiety, a personality disorder?  Have I seen this type of patient before?  What’s my general impression of this person?  In other words, the assessment relies heavily on Type 1 processes, using heuristics and “Gestalt” impressions.  But Type 2 processes are also important.  We must inquire about specific symptoms, treatment history, social background; we might order tests or review old records, which may change our initial perception.

Sound clinical decision-making, therefore, requires both processes.  Unfortunately, these are highly prone to error.  In fact, Croskerry identifies at least 40 cognitive biases, which occur when the processes are not adapted for the specific task at hand.  For instance, we tend to use Type 1 processes more frequently than we should.  Many psychiatrists, particularly those seeing a large volume of patients for short periods of time, often see patterns earlier than is warranted, and rush to diagnoses without fully considering all possibilities.  In other words, they fall victim to what psychologist Keith Stanovich calls “dysrationalia,” or the inability to think or act rationally despite adequate intelligence.  In the dual process model, this dysrational reliance on Type 1 can “override” Type 2 processes (“I don’t need to do a complete social history, I just know this patient has major depression”), leading to diagnostic failure.

Croskerry calls this the “cognitive miser” function: we rely on processes that consume fewer cognitive resources because we’re cognitively lazy.  The alternative would be to switch to a Type 2 process—a more detailed evaluation, using deductive, analytic reasoning.  But this takes great effort and time.  Moreover, when a psychiatrist switches to a “Type 2” mode, he or she asks questions that are nonspecific in nature (largely owing to the unreliability of some DSM-IV diagnoses), or questions that confirm the initial “Type 1” hunch.  In other words, we end up finding what we expect to find.

The contrast between Type 1 and Type 2 processes is most apparent when we observe people operating at either end of the spectrum.  Some psychiatrists see patterns in every patient (e.g., “I could tell he was bipolar as soon as he walked into my office”—a classic error called the representativeness heuristic), even though they rarely ask about specific symptoms, let alone test alternate hypotheses.  On the other hand, medical students and young clinicians often work exclusively in Type 2; they ask very thorough questions, covering every conceivable alternative, and every symptom in the DSM-IV (even irrelevant ones).  As a result, they get frustrated when they can’t determine a precise diagnosis or, alternately, they come up with a diagnosis that might “fit” the data but completely miss the mark regarding the underlying essence of the patient’s suffering.

Croskerry writes that the most accurate clinical decision-making occurs when a physician can switch between Type 1 and Type 2 processes  as needed, a process called metacognition.  Metacognition requires a certain degree of humility, a willingness to re-examine one’s decisions in light of new information.  It also demands that the doctor be able to recognize when he or she is not performing well and to be willing to self-monitor and self-criticize.  To do this, Croskerry recommends that we develop “cognitive forcing strategies,” deliberate interventions that force us to think more consciously and deliberately about the problem at hand.  This may help us to be more accurate in our assessments:  in other words, to see both the trees for the forest, and the forest for the trees.

This could be a hard sell.  Doctors can be a stubborn bunch.  Clinicians who insist on practicing Type 2,  “checklist”-style medicine (e.g., in a clinical trial) may be unwilling to consider the larger context in which specific symptoms arise, or they may not have sufficient understanding of that context to see how it might impact a patient.  On the other hand, clinicians who rush to judgment based on first impressions (a Type 1 process) may be annoyed by any suggestion that they should slow down and be more thorough or methodical.  Not to mention the fact that being more thorough takes more time. And as we all know, time is money.

I believe that all psychiatrists should heed the dual-process model and ask how it influences their practice.  Are you too quick to label and diagnose, owing to your “dysrational” (Type 1) impulses?  On the other hand, if you use established diagnostic criteria (Type 2), are you measuring anything useful?  Should you use a cognitive forcing strategy to avoid over-reliance on one type of decision-making?  If you continue to rely on pattern recognition (Type 1 process), then what other data (Type 2) should you collect?  Treatment history?  A questionnaire?  Biomarkers?  A comprehensive assessment of social context?  And ultimately, how do you use this information to diagnose a “disorder” in a given individual?

These are just a few questions that the dual process model raises.  There are no easy answers, but anything that challenges us to be better physicians and avoid clinical errors, in my opinion, is well worth our time, attention, and thought.


Sleeping Pills Are Deadly? Says Who, Exactly?

March 1, 2012

As most readers know, we’re paying more attention than ever before to conflicts of interest in medicine.   If an individual physician, researcher, speaker, or author is known to have a financial relationship with a drug company, we publicize it.  It’s actually federal law now.  The idea is that doctors might be biased by drug companies who “pay” them (either directly—through gifts, meals, or cash—or indirectly, through research or educational grants) to say or write things that are favorable to their drug.

A recent article on the relationship between sedative/hypnotics and mortality, published this week in BMJ Open (the online version of the British Medical Journal) and widely publicized, raises additional questions about the conflicts and biases that individual researchers bring to their work.

Co-authors Daniel Kripke, of UC San Diego, and Robert Langer, of the Jackson Hole Center for Preventive Medicine, reviewed the electronic charts of over 30,000 patients in a rural Pennsylvania health plan.  Approximately 30% of those patients received at least one prescription for a hypnotic (a benzodiazepine like Klonopin or Restoril, or a sleeping agent like Lunesta or Ambien) during the five-year study period, and there was a strong relationship between hypnotics and risk of death.  The more prescriptions one received, the greater the likelihood that one would die during the study period.  There was also a specifically increased risk of cancer in groups receiving the largest number of hypnotic prescriptions.

The results have received wide media attention.  Mainstream media networks, major newspapers, popular websites, and other outlets have run with sensational headlines like “Higher Death Risk With Sleeping Pills” and “Sleeping Pills Can Bring On the Big Sleep.”

But the study has received widespread criticism, too.  Many critics have pointed out that concurrent psychiatric diagnoses were not addressed, so mortality may have been related more to suicide or substance abuse.  Others point out the likelihood of Berkson’s Bias—the fact that the cases (those who received hypnotic prescriptions) may have been far sicker than controls, despite attempts to match them.  The study also failed to report other medications patients received (like opioids, which can be dangerous when given with sedative/hypnotics) or to control for socioeconomic status.

What has not received a lot of attention, however, is the philosophical (and financial) bias of the authors.  Lead author Daniel Kripke has been, for many years, an outspoken critic of the sleeping pill industry.  He has also widely criticized the conventional wisdom that people need 8 or more hours of sleep per night.  He has written books about it, and was even featured on the popular Showtime TV show “Penn & Teller: Bullshit!” railing against drug companies (and doctors) who profit by prescribing sleep meds.  Kripke is also one of the pioneers of “bright light therapy” (using high-intensity light to affect circadian rhythms)—first in the area of depression, and, most recently, to improve sleep.  To the best of my knowledge, he has no financial ties to the makers of light boxes.  Then again, light boxes are technically not medical devices and, therefore, are not regulated by the FDA, so he may not be required to report any affiliation.  Nevertheless, he clearly has had a decades-long professional interest in promoting light therapy and demonizing sleeping pills.

Kripke’s co-author, Robert Langer, is an epidemiologist, a past site coordinator of the Women’s Health Initiative, and a staunch advocate of preventive medicine.  More importantly, though (and advertised prominently on his website), he is an expert witness in litigation involving hormone replacement therapy (HRT), and also in cancer malpractice cases.  Like Kripke, he has also found a place in the media spotlight; he will be featured in “Hot Flash Havoc,” a movie about HRT in menopausal women, to be released later this month.

[Interestingly, Kripke and Langer also collaborated on a 2011 study showing that sleep times >6.5 hrs or <5 hrs were associated with increased mortality.  One figure looked virtually identical to figure 1 in their BMJ paper.  It would be interesting to know whether mortality in the current study is indeed due to sedative prescriptions or, if the results of their earlier paper are correct, simply due to the fact that the people requesting sedative prescriptions in the first place are the ones with compromised sleep and, therefore, increased mortality.  In other words, maybe the sedative is simply a marker for something else causing mortality—the same argument raised above.]

Do the authors’ backgrounds bias their results?  If Kripke and Langer were receiving grants and speakers’ fees from Forest Labs, and published an article extolling the benefits of Viibryd, Forest’s new antidepressant, how would we respond?  Might we dig a little deeper?  Approach the paper with more skepticism?  Is the media publicizing this study (largely uncritically) because its conclusion resonates with the “politically correct” idea that psychotropic medications are bad?  Michael Thase (a long-time pharma-sponsored researcher and U Penn professor) was put in the hot seat on “60 Minutes” a few weeks ago about whether antidepressants provide any benefit, but Kripke and Langer—two equally prominent researchers—seem to be getting a free ride, as far as the media are concerned.

I’m not trying to defend the drug industry, and I’m certainly not defending sedatives.  My own bias is that I prefer to minimize my use of hypnotics in my patients—although my opposition is not so much because of their cancer or mortality risk, but rather the risk of abuse, dependence, and their effect on other psychiatric and medical symptoms.  The bottom line is, I want to believe the BMJ study.  But more importantly, I want the medical literature to be objective, fair, and unbiased.

Unfortunately, it’s hard—if not impossible—to avoid bias, particularly when you’ve worked in a field for many years (like Kripke and Langer) and have a strong belief about why things are the way they are.  In such a case, it seems almost natural that you’d want to publish research providing evidence in support of your belief.  But when does a strongly held belief become a conflict of interest?  Does it contribute to a bias in the same way that a psychopharmacologist’s financial affiliation with a drug company might?

These are just a few questions that we’ll need to pay closer attention to, as we continue to disclose conflicts of interest among medical professionals.  Sometimes bias is obvious and driven by one’s pocketbook; other times it is more subtle and rooted in one’s beliefs and experience.  But we should always be wary of the ways in which it can compromise scientific objectivity or make it harder to know what’s really true.


Disruptive Technology Vs. The Disruptive Physician

February 26, 2012

The technological advances of just the last decade—mobile computing, social networking, blogging, tablet computers—were never thought to be “essential” when first introduced.  But while they started as novelties, their advantages became apparent, and today these are all part of our daily lives.  These are commonly referred to as “disruptive technologies”:  upstart developments that originally found their place in niche markets outside of the mainstream, but gradually “disrupted” the conventional landscape (and conventional wisdom) to become the established ways of doing things.

In our capitalist economy, disruptive technology is considered a very good thing.  It has made our lives easier, more enjoyable, and more productive.  It has created no small number of multimillionaires.  Entrepreneurs worldwide are constantly looking for the next established technologies to disrupt, usurp, and overturn, in hopes of a very handsome payoff.

In medicine, when we talk about “disruption,” the implication is not quite as positive.  In fact, the term “disruptive physician” is an insult, a black mark on one’s record that can be very hard to overcome.  It refers to someone who doesn’t cooperate, doesn’t follow established protocols, yells at people, discriminates against others, who might abuse drugs or alcohol, or who is generally incompetent.  These are not good things.

Really?  Now, no one would argue that substance abuse, profanity, spreading rumors, degrading one’s peers, or incompetence are good.  But what about the physician who “expresses political views that are disagreeable to the hospital administration”?  How about the physician who speaks out about deficiencies in patient care or patient safety, or who (legitimately) points out the incompetence of others?  How about the physician who prioritizes his own financial and/or business objectives over those of the hospital (when in fact it may be the only way to protect one’s ability to practice)?  All of these have been considered to be “disruptive” behaviors and could be used by highly conservative medical staffs to discipline physicians and preserve the status quo.

Is this fair?  In modern psychiatry, with its shrinking appointment lengths, overreliance on the highly deficient DSM, excessive emphasis on pharmacological solutions, and an increasing ignorance of developmental models and psychosocial interventions among practitioners, maybe someone should stand up and express opinions that the “powers that be” might consider unacceptable.  Someone should speak out on behalf of patient safety.  Someone should point out extravagant examples of waste, incompetence, or abuse of privilege.  Plenty of psych bloggers and a few renegade psychiatrists do express these opinions, but they (we?) are a minority.  I don’t know of any department chairmen or APA officers who are willing to be so “disruptive.”  As a result, we’re stuck with what we’ve got.

That’s not to say there aren’t any disruptive technologies in psychiatry.  What are they?  Well, medications, for instance.  Drug treatment “disrupted” psychoanalysis and psychotherapy, and represents the foundation of most psychiatric treatment today.  Over the last 30 years, pharmaceutical companies (and prescribers) have earned millions of dollars from SSRIs, SNRIs, second-generation antipsychotics, psychostimulants, and many others.  But are people less mentally ill now than they were in the early 1980s?  Today—just in time for patent expirations!—we’re already seeing the next disruptive medication technologies, like those based on glutamate and glutamine signaling.  According to Stephen Stahl at the most recent NEI Global Congress, “we’ve beaten the monoamine horse sixteen ways to Sunday” (translation: we’ve milked everything we can out of the serotonin and dopamine stories) and glutamate is the next blockbuster drug target to disrupt the marketplace.

Another disruptive technology is the DSM.  I don’t have much to add to what’s already been written about the DSM-5 controversy except to point out what should be obvious:  We don’t need another DSM right now.  Practically speaking, a new DSM is absolutely unnecessary.  It will NOT help me treat patients any better.  But it’s coming, like it or not.  It will disrupt the way we have conducted our practices for the last 10 years (guided by the equally imperfect DSM-IV-TR), and it will put millions more dollars in the coffers of the APA.

And then, of course, there is the electronic medical record (EMR).  As with the DSM-5, I don’t need to have an EMR to practice psychiatry.  But some politicians in Washington, DC, decided, via the HITECH Act of 2009 (and in preparation for truly nationalized health care), that we should all use EMRs.  They even offered a financial incentive to doctors to do so (and are levying penalties for not doing so).  And despite some isolated benefits (which are more theoretical than practical, frankly), EMRs are disruptive.  Just not in the right way.  They disrupt work flow, the doctor-patient relationship, and, sometimes, common sense.  But they’re here to stay.

Advances in records & database management, in general, are the new disruptive technologies in medicine.  Practice Fusion, a popular (and ad-supported) EMR, has raised tens of millions of dollars in venture capital funding and employs over 150 people.  And what does it do with the data from the 28 million patients it serves?  It sells it to others, of course.  (And it can tell you fun things like which cities are most “lovesick.”  How’s that for ROI?)

There are many other examples of companies competing for your health-care dollar, whose products are often only peripherally related to patient care but which represent that holy grail of the “disruptive technology.”  There are online appointment scheduling services, telepsychiatry services, educational sites heavily sponsored by drug companies, doctor-only message boards (which sell doctors’ opinions to corporations), drug databases (again, sponsored by drug companies), and others.

In the interest of full disclosure, I use some of the above services, and some are quite useful.  I believe telemedicine, in particular, has great potential.  But at the end of the day, these market-driven novelties ignore some of the bigger, more entrenched problems in medicine, which only practicing docs see.  In my opinion, the factors that would truly help psychiatrists take better care of patients are of a different nature entirely:  improving psychiatric training (of MDs and non-MD prescribers); emphasizing recovery and patient autonomy in our billing and reimbursement policies; eliminating heavily biased pharmaceutical advertising (both to patients and to providers); revealing the extensive and unstated conflicts of interest among our field’s “key opinion leaders”; reforming the “disability” system and disconnecting it from Medicaid, particularly among indigent patients; and reallocating health-care resources more equitably.  But, as a physician, if I were to go to my superiors with any ideas to reform the above in my day-to-day work, I would run the risk of being labeled “disruptive.”  When, in fact, that would be my exact intent:  to disrupt some of the damaging, wasteful practices that occur in our clinics almost every day.

I agree that disruption in medicine can be a good thing, and can advance the quality and cost-effectiveness of care.  But when most of the “disruptions” come from individuals who are not actively in the trenches, and who don’t know where needs are the greatest, we may be doing absolutely nothing to improve care.  Even worse, when we fail to embrace the novel ideas of physicians—but instead discipline those physicians for being “disruptive”—we risk punishing creativity, destroying morale, and fostering a sense of helplessness that, in the end, serves no one.


Do I Want A Philosopher As My Surgeon?

February 20, 2012

I recently stumbled upon an article describing upcoming changes to the Medical College Admissions Test.  Also known as the MCAT, this is the exam that strikes fear into the hearts of pre-med students nationwide, due to its rigorous assessment of all the hard sciences that we despised in college.  The MCAT can make or break someone’s application to a prestigious medical school, and in a very real way, it can be the deciding factor as to whether someone even becomes a doctor at all.

According to the article, the AAMC—the organization which administers the MCAT—will “stop focusing solely on biology, physics, statistics, and chemistry, and also will begin asking questions on psychology, ethics, cultural studies, and philosophy.”  The article goes on to say that questions will ask about such topics as “behavior and behavior change, cultural and social differences that influence well-being, and socioeconomic factors, such as access to resources.”

Response has been understandably mixed.  On at least two online physician discussion groups, doctors are denouncing the change.  Medicine is based in science, they argue, and the proposed changes simply encourage mediocrity and “beat the drum for socialized medicine.”  Others express frustration that this shift rewards not those who can practice good medicine, but rather those who can increase “patient satisfaction” scores.  Still others believe the new MCAT is just a way to recruit a new generation of liberal-minded, government-employed docs (or, excuse me, “providers”) just in time for the roll-out of Obamacare.

I must admit that I can understand the resistance from the older generation of physicians.  In the interest of full disclosure, I was trained under the traditional medical model.  I learned anatomy, biochemistry, pathology, microbiology, etc., independently, and then had to synthesize the material myself, rather than through the “problem-based learning” format of today’s medical schools.  I also have an advanced degree in neuroscience, so I’m inclined to think mechanistically, to be critical of experimental designs, and always to search for alternate explanations of what I observe.

In spite of my own training, however, I think I might actually support the new MCAT format.  Medicine is different today.  Driven by factors that are beyond the control of the average physician, diagnostic tools are becoming more automated and treatment protocols more streamlined, even incorporated into our EMRs.  In today’s medicine, the doctor is no longer an independent, objective authority, but rather someone hired to follow a set of rules or guidelines.  We’re rapidly losing sight of (1) who the patient is, (2) what the patient wants, and (3) what unique skills we can provide to that patient.

Some examples:  The scientifically minded physician sees the middle-aged obese male with diabetes and hypertension as a guy with three separate diseases, each requiring its own treatment, often driven by guidelines that result in disorganized, fractured care.  He sees the 90-year-old woman with kidney failure, brittle osteoporosis, and congestive heart failure as a candidate for nephrology, orthopedics, and cardiology consults, driving up costs and the likelihood of iatrogenic injury.  In reality, the best care might come from, in the first example, a family doc with an emphasis on lifestyle change, and in the second example, a geriatrician who understands the woman’s resources, needs, and support system.

Psychiatry presents its own unique challenges.  Personally, I believe we psychiatrists have been overzealous in our redefinition of the wide range of abnormal human behaviors as “illnesses” requiring treatment.  It would be refreshing to have an economist work in a community mental health clinic, helping to redirect scarce resources away from expensive antipsychotics or wasteful “disability” programs and towards job-training or housing services instead.  Maybe a sociologist would be less likely to see an HMO patient as “depressed” and needing meds, and more likely to see her as enduring complicated relationship problems amenable to therapy and to a reassessment of what she aspires to achieve in her life.

This may sound “touchy-feely” to some.  Trust me, ten years ago—at the peak of my enthusiasm for biological psychiatry—I would have said the same thing, and not in a kind way.  But I’ve since learned that psychiatry is touchy-feely.  And in their own unique ways, all specialties of medicine require a sophisticated understanding of human behavior, psychology, and the socioeconomic realities of the world in which we live and practice.  What medicine truly needs is that rare combination of someone who can not only describe Friedel-Crafts alkylation and define Hardy-Weinberg equilibrium, but who can also understand human learning and motivation or describe—even in a very rough way—what the heck “Obamacare” is all about anyway.

If I needed cardiac bypass surgery, would I want a philosophy major as my surgeon?  I honestly don’t care, as long as he or she has the requisite technical skill to put me under the knife.  But perhaps a philosopher would be just as well—or better—prepared to judge whether I needed the operation in the first place, how to evaluate my other options (if any), and—if I undergo the surgery—how to change my behavior so that I won’t need another one.  Better yet, maybe that philosopher would also want to change conditions so that fewer people suffer from coronary artery disease, or to determine a more equitable way to ensure that anyone who needs such a procedure can get it.

If we doctors continue to see ourselves as scientists first and foremost, we’ll be ordering tests and prescribing meds until we’re bankrupt.  At the other extreme, if we’re too people-friendly, patients will certainly like us, but we may have no impact on their long-term health.  Maybe the new MCAT is a way to encourage docs to bridge this gap, to make decisions based on everything that matters, even those factors that today’s medicine tends to ignore.  It’s not clear whether this will succeed, but it’s worth a try.


Big Brother Is Watching You (Sort Of)

February 17, 2012

I practice in California, which, like most (but not all) states has a service by which I can review my patients’ controlled-substance prescriptions.  “Controlled” substances are those drugs with a high potential for abuse, such as narcotic pain meds (e.g., Vicodin, Norco, OxyContin) or benzodiazepines (e.g., Xanax, Valium, Klonopin).  The thinking is that if we can follow patients who use high amounts of these drugs, we can prevent substance abuse or the illicit sale of these medications on the street or black market.

Unfortunately, California’s program may be on the chopping block.  Due to budget constraints, Governor Jerry Brown is threatening to close the Bureau of Narcotic Enforcement (BNE), the agency which tracks pharmacy data.  At present, the program is being supported by grant money—which could run out at any time—and there’s only one full-time staff member managing it.  Thus, while other states (even Florida, despite the opposition of Governor Rick Scott) are scrambling to implement programs like this one, it’s a travesty that we in California might lose ours.

Physicians (and the DEA) argue that these programs are valuable for detecting “doctor shoppers”—i.e., those who go from office to office trying to obtain Rx’es for powerful opioids with street value or addictive potential.  Some have even argued that there should be a nationwide database, which could help us identify people involved in interstate drug-smuggling rings like the famous “OxyContin Express” between rural Appalachia and Florida.

But I would say that the drug-monitoring programs should be preserved for an entirely different reason: namely, that they help to improve patient care.  I frequently check the prescription histories of my patients.  I’m not “playing detective,” seeking to bust a patient who might be abusing or selling their pills.  Rather, I do it to get a more accurate picture of a patient’s recent history.  Patients may come to me, for example, with complaints of anxiety while the database shows they’re already taking large amounts of Xanax or Ativan, occasionally from multiple providers.  Similarly, I might see high doses of pain medications, which (if prescribed & taken legitimately) cues me in to the possibility that pain management may be an important aspect of treating their psychiatric concerns, or vice versa.

I see no reason whatsoever that this system couldn’t be extended to non-controlled medications.  In fact, it’s just a logical extension of what’s already possible.  Most of my patients don’t recognize that I can call every single pharmacy in town and ask for a list of all their medications.  All I need is the patient’s name and birthdate.  Of course, there’s no way in the world I would do this, because I don’t have enough time to call every pharmacy in town.  So instead, I rely largely on what the patient tells me.  But sometimes there’s a huge discrepancy between what patients say they’re taking and what the pharmacy actually dispenses, owing to confusion, forgetfulness, language barriers, or deliberate obfuscation.

So why don’t we have a centralized, comprehensive database of patient med lists?

Some would argue it’s a matter of privacy.  Patients might not want to disclose that they’re taking Viagra or Propecia or an STD treatment (or methadone—for some reason patients frequently omit that opioid).  But that argument doesn’t hold much water, because, as I wrote above, I could already (at least in theory) call every pharmacy in a patient’s town (or state) and find that out.

Another argument is that it would be too complicated to gather data from multiple pharmacies and correlate medication lists with patient names.  I don’t buy this argument either.  Consider “data mining.”  This widespread practice allows pharmaceutical companies to get incredibly detailed descriptions of all medications prescribed by each licensed doctor.  The key difference here, of course, is that the data are linked to doctors, not to patients, so patient privacy is not a concern.  (The privacy of patients is sacred, that of doctors, not so much; the Supreme Court even said so.)  Nevertheless, when my Latuda representative knows exactly how much Abilify, Seroquel, and Zyprexa I’ve prescribed in the last 6 months, and knows more about my practice than I do (unless I’ve decided to opt out of this system), then a comprehensive database is clearly feasible.

Finally, some would argue that a database would be far too expensive, given the costs of collecting data, hiring people to manage it, etc.  Maybe if it’s run by government bureaucrats, yes, but I believe this argument is out of touch with the times.  Why can’t we find some out-of-work Silicon Valley engineers, give them a small grant, and ask them to build a database that would collect info from pharmacy chains across the state, along with patient names & birthdates, which could be searched through an online portal by any verified physician?  And set it up so that it’s updated in real time.  Maintenance would probably require just a few people, tops.
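
To make that concrete, here is a minimal sketch, in Python with SQLite standing in for a real data store, of what the core of such a lookup might be.  Everything in it is hypothetical and purely illustrative (the table layout, the field names, the find_prescriptions helper); it assumes pharmacies could push dispensing records into a shared table and that physician verification would happen elsewhere in the system.

    # Illustrative only: a statewide prescription registry reduced to one table,
    # keyed by patient name and birthdate, fed by pharmacy dispensing records.
    import sqlite3

    conn = sqlite3.connect("rx_registry.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS dispensings (
            patient_name TEXT NOT NULL,
            birthdate    TEXT NOT NULL,   -- e.g. '1975-04-02'
            drug         TEXT NOT NULL,
            quantity     INTEGER,
            prescriber   TEXT,
            pharmacy     TEXT,
            fill_date    TEXT NOT NULL
        )
    """)

    def find_prescriptions(name, birthdate):
        """Return a patient's dispensings, newest first.  In a real system this
        query would sit behind an authenticated portal open only to verified
        physicians; here it is just a lookup."""
        rows = conn.execute(
            "SELECT fill_date, drug, quantity, prescriber, pharmacy "
            "FROM dispensings WHERE patient_name = ? AND birthdate = ? "
            "ORDER BY fill_date DESC",
            (name, birthdate),
        )
        return rows.fetchall()

The queries are the easy part; the real work would be in the data feeds from pharmacy chains, patient matching, and access control, none of which is shown here.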

Not only does such a proposal sound eminently doable, it actually sounds like it might be easy (and maybe even fun) to create.  If a group of code warriors & college dropouts can set up microblogging platforms, social networking sites, and online payment sites, fueled by nothing more than enthusiasm and Mountain Dew, then a statewide prescription database could be a piece of cake.

Alas, there are just too many hurdles to overcome.  Although it may seem easy to an IT professional, and may seem like just plain good medicine to a doc like me, history has a way of showing that what makes the best sense just doesn’t happen (especially when government agencies are involved).  Until this changes, I’ll keep bothering my local pharmacists by phone to get the information that would be nice to have at my fingertips already.


The Second Law of Thermodynamics and The “Med Check”

February 12, 2012

On one of my recent posts, a psychiatrist made a very thought-provoking comment.  He or she described interviewing at a clinic where the psychiatrist saw 20 patients per day and made well over $300,000 per year.  At a different clinic, the psychiatrists saw far fewer patients (and, of course, made less money), but, the commenter opined, the patients probably received much better care.

This problem of “med checks” serving as the psychiatrist’s bread-and-butter has been discussed ad nauseam, particularly since the infamous New York Times “Talk Doesn’t Pay” article (see my comments here and here).  It’s almost universally accepted that this style of practice is cold, impersonal, sometimes reckless, and often focuses on symptoms and medications rather than people.  I would add that this approach also makes patient care more disorderly and confusing.  Moreover, minimizing this confusion would require more time and energy than most psychiatric practices currently allow.

I work part-time in one setting where the 15-20 minute “med check” is the standard of care.  Because my own personal strategy is to minimize medication usage in general, I’ve been able to use this time, with most patients, to discuss lifestyle changes or offer brief supportive therapy, keeping the lid (hopefully) on irresponsible prescribing.  However, I frequently get patients who have been seen by other docs, or from other clinics, who come to me with complicated medication regimens or questionable diagnoses, and who almost universally complain that “my last doctor never talked to me, he just pushed drugs,” or “he just kept prescribing medication but never told me what they were for,” or “I had a side effect from one drug so he just added another one to take care of it,” or some combination of the above.

These patients present an interesting dilemma.  On the one hand, they are usually extraordinarily fascinating, often presenting tough diagnostic challenges or complicated biological conundrums that test my knowledge of psychopharmacology.  On the other hand, a 15- or 20-minute “med check” appointment offers me little time or flexibility to do the work necessary to improve their care.

Consider one patient I saw recently.  She’s in her mid-20s and carries diagnoses of “bipolar II” (more about that diagnosis in a future post, if I have the guts to write it) and Asperger syndrome.  She is intelligent, creative, and has a part-time job in an art studio.  She has a boyfriend and a (very) involved mother, but few other social contacts.  She was hospitalized once in her teens for suicidal ideation.  Her major struggles revolve around her limited social life and the associated anxiety.  She’s also on six psychiatric medications: two antipsychotics, two mood stabilizers, a benzodiazepine, and a PRN sleep agent (and an oral contraceptive, whose efficacy is probably inhibited by one of her mood stabilizers—something that she says she was never warned about), and complains of a handful of mild physical symptoms that are most likely medication side effects.  She (and her mother) told me that her last two doctors “never took the time” to answer their questions or engage in discussion, instead “they just gave me drugs and kept asking me to come back in three months.”

What to do with such an individual?  My first wish would be to discontinue all medications, assess her baseline, help to redefine her treatment goals, and identify tools to achieve them.  But remember, I only have 20 minutes.  Even the simplest of maneuvers—e.g., start a gradual taper of one of her medications—would require a detailed explanation of what to expect and how to deal with any difficulties that might arise.  And if I can’t see her for another 2-3 months—or if I have only 13 annual visits with her, as is the case in my Medicaid practice—then this option becomes far more difficult.

As a result, it’s easier to add stuff than to take it away.  It brings to mind the second law of thermodynamics in physics, which (very loosely) says that a system will always develop greater disorder (or randomness, or “entropy”) unless work is done on that system.  Stated from a clinical point of view:  unless we invest more time and energy in our patients, their care will become more scattered, disorganized, and chaotic.
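
For reference, and only to make the analogy explicit, the textbook form of the law is that the entropy of an isolated system never decreases; it can fall locally only when work is supplied from outside:

    \Delta S_{\text{isolated}} \ge 0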

Some of that time and energy can come from a dedicated physician (which will, of course, require the additional investment of money in the form of greater out-of-pocket cost).  Other times, it can come from the patient him- or herself; there are an impressive—and growing—number of websites and books dedicated to helping patients understand their mental illness and what to expect from specific medications or from their discontinuation (for instance, here’s one to which I’ve referred several patients), often written by patients or ex-patients themselves.  But without some external input, I’m afraid the current status quo sets many patients adrift with little or no guidance, direction, or hope.

It’s disheartening to think that psychiatric care has a tendency to make patients’ lives more disorganized and unstable, particularly when most of us entered this field to do the exact opposite.  It’s also discouraging to know that for those patients who do benefit from mental health care, it’s often in spite of, not because of, the psychiatrist’s involvement (something I’ve written about here).  But if our training programs, health care system, and large financial interests like the pharmaceutical companies—not to mention the increasingly narrow expertise of today’s psychiatrists—continue to drive psychiatric care into brief med-management appointments (which, BTW, I find insulting to call “psychiatry,” but that’s an argument for another time), then we must also prepare for the explosion in diagnoses, the overprescription of largely useless (and often damaging) drugs, skyrocketing rates of psychiatric “disability,” and the bastardization that currently passes as psychiatric care.


Measuring The Immeasurable

February 9, 2012

Is psychiatry a quantitative science?  Should it be?

Some readers might say that this is a ridiculous question.  Of course it should be quantitative; that’s what medicine is all about.  Psychiatry’s problem, they argue, is that it’s not quantitative enough.  Psychoanalysis—that most qualitative of “sciences”—never did anyone any good, and most psychotherapy is, likewise, just a bunch of hocus pocus.  A patient saying he feels “depressed” means nothing unless we can measure how depressed he is.  What really counts is a number—a score on a screening tool or checklist, frequency of a given symptom, or the blood level of some biomarker—not some silly theory about motives, drives, or subconscious conflicts.

But sometimes measurement can mislead us.  If we’re going to measure anything, we need to make sure it’s something worth measuring.

By virtue of our training, physicians are fond of measuring things.  What we don’t realize is that the act of measurement itself leads to an almost immediate bias.  As we assign numerical values to our observations, we start to define values as “normal” or “abnormal.”  And medical science dictates that we should make things “normal.”  When I supervise junior psychiatry residents or medical students, I often hear patient presentations filled with statements such as “Mr. A slept for 5 hours last night,” “Ms. B ate 80% of her meals,” or “Mrs. C has gone two days without endorsing suicidal ideation,” as if these were data points to be normalized, just as potassium levels and BUN/Cr ratios need to be normalized in internal medicine.

The problem is, they’re not potassium levels or BUN/Cr ratios.  When those numbers are “abnormal,” there’s usually some underlying pathology which we can discover and correct.  In psychiatry, what’s the pathology?  For a woman who attempted suicide two days ago, does it really matter how much she’s eating today?  Does it really matter whether an acutely psychotic patient (on a new medication, in a chaotic inpatient psych unit with nurses checking on him every hour) sleeps 4 hours or 8 hours each night?  Even the questions that we ask patients—“are you still hearing voices?”, “how many panic attacks do you have each week?” and the overly simplistic “can you rate your mood on a scale of 1 to 10, where 1 is sad and 10 is happy?”— attempt to distill a patient’s overall subjective experience into an elementary quantitative measurement or, even worse, into a binary “yes/no” response.

Clinical trials take measurement to an entirely new level.  In a clinical trial, often what matters is not a patient’s overall well-being or quality of life (although, to be fair, there are ways of measuring this, too, and investigators are starting to look at this outcome measure more closely), but rather a HAM-D score, a MADRS score, a PANSS score, a Y-BOCS score, a YMRS score, or any one of an enormous number of other assessment instruments.  Granted, if I had to choose, I’d take a HAM-D score of 4 over a score of 24 any day, but does a 10- or 15-point decline (typical in some “successful” antidepressant trials) really tell you anything about an individual’s overall state of mental health?  It’s hard to say.

One widely used instrument, the Clinical Global Impression scale, endeavors to measure the seemingly immeasurable.  Developed in 1976 and still in widespread use, the CGI scale has three parts:  the clinician evaluates (1) the severity of the patient’s illness relative to other patients with the same diagnosis (CGI-S); (2) how much the patient’s illness has improved relative to baseline (CGI-I); and (3) the efficacy of treatment.  (See here for a more detailed description.)  It is incredibly simple.  Basically, it’s just a way of asking, “So, doc, how do you think this patient is doing?” and assigning a number to it.  In other words, subjective assessment made objective.
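
To make concrete just how little the instrument encodes, here is a toy sketch in Python; the field names are mine, and it assumes the standard 7-point ranges for the severity and improvement items (1 = normal or very much improved, 7 = among the most extremely ill or very much worse).

    # Toy illustration: a CGI assessment boils down to two 1-to-7 clinician
    # ratings plus a judgment about how well the treatment is working.
    from dataclasses import dataclass

    @dataclass
    class CGIRating:
        severity: int       # CGI-S: 1 (normal) to 7 (among the most extremely ill)
        improvement: int    # CGI-I: 1 (very much improved) to 7 (very much worse)
        efficacy_note: str  # stand-in for the scale's treatment-efficacy judgment

        def __post_init__(self):
            for label, value in (("severity", self.severity),
                                 ("improvement", self.improvement)):
                if not 1 <= value <= 7:
                    raise ValueError(f"{label} must be between 1 and 7")

That, more or less, is the whole instrument: a clinician’s judgment, stored as a couple of integers.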

The problem is, the CGI has been criticized precisely for that reason—it’s too subjective.  As such, it is almost never used as a primary outcome measure in clinical trials.  Any pharmaceutical company that tries to get a drug approved on the basis of CGI improvement alone would probably be laughed out of the halls of the FDA.  But what’s wrong with subjectivity?  Isn’t everything that counts subjective, when it really comes down to it?  Especially in psychiatry?  The depressed patient who emerges from a mood episode doesn’t describe himself as “80% improved,” he just feels “a lot better—thanks, doc!”  The psychotic patient doesn’t necessarily need the voices to disappear, she just needs a way to accept them and live with them, if at all possible.  The recovering addict doesn’t think in terms of “drinking days per month,” he talks instead of “enjoying a new life.”

Nevertheless, measurement is not a fad; it’s here to stay.  And as the old saying goes, resistance is futile.  Electronic medical records, smartphone apps to measure symptoms, online checklists—they all capitalize on the fact that numbers are easy to record and store, easy to communicate to others, and sure to satisfy the bean counters.  They enable pharmacy benefit managers to approve drugs (or not), they enable insurers to reimburse for services (or not), and they allow pharmaceutical companies to identify and exploit new markets.  And, best of all, they turn psychiatry into a quantitative, valid science, just like every other branch of medicine.

If this grand march towards increased quantification persists, the human science of psychiatry may cease to exist.  Unless we can replace these instruments with outcome measures that truly reflect patients’ abilities and strengths, rather than pathological symptoms, psychiatry may be replaced by an impersonal world of questionnaires, checklists, and knee-jerk treatments.  In some settings, that’s what we have now.  I don’t think it’s too late to salvage the human element of what we do.  A first step might be simply to use great caution when we’re asked to give a number, measure a symptom, or perform a calculation on something that is intrinsically a subjective phenomenon.  And to remind ourselves that numbers don’t capture everything.


Do What You’re Taught

February 5, 2012

In my mail yesterday was an invitation to an upcoming 6-hour seminar on the topic of “Trauma, Addiction, and Grief.”  The course description included topics such as “models of addiction and trauma/information processing” and using these models to plan treatment; recognizing “masked grief reactions” and manifestations of trauma in clients; and applying several psychotherapeutic techniques to help a patient through addiction and trauma recovery.

Sound relevant?  To any psychiatrist dealing with issues of addiction, trauma, grief, anxiety, and mood—which is pretty much all of us—and interested in integrative treatments for the above, this would seem to be an entirely valid topic to learn.  And, I was pleased to learn that the program offers “continuing education” credit, too.

But upon reading the fine print, I found that credit is not available for psychiatrists.  Instead, you can get credit if you’re one of the following mental health workers:  counselor, social worker, MFT, psychologist, addiction counselor, alcoholism & drug abuse counselor, chaplain/clergy, nurse, nurse practitioner, nurse specialist, or someone seeking “certification in thanatology” (whatever that is).  But not a psychiatrist.  In other words, psychiatrists need not apply.

Well, okay, that’s not entirely correct: psychiatrists can certainly attend, and—particularly if the program is a good one—my guess is that they would clearly benefit from it.  They just won’t get credit for it.

It’s not the first time I’ve encountered this.  Why do I think this is a big deal?  Well, in all of medicine, “continuing medical education” credit, or CME, is a rough guide to what’s important in one’s specialty.  In psychiatry, the vast majority of available CME credit is in psychopharmacology.  (As it turns out, in the same batch of mail, I received two “throwaway” journals which contained offers of free CME credits for reading articles about treating metabolic syndrome in patients on antipsychotics, and managing sexual side effects of antidepressants.)  Some of the most popular upcoming CME events are the Harvard Psychopharmacology Master Class and the annual Nevada Psychopharmacology Update.  And, of course, the NEI Global Congress in October is a can’t-miss event.  Far more psychiatrists will attend these conferences than a day-long seminar on “trauma, addiction, and grief.”  But which will have the most beneficial impact on patients?

To me, a more important question is, which will have the most beneficial impact on the future of the psychiatrist?  H. Steven Moffic, MD, recently wrote an editorial in Psychiatric Times in which he complained openly that the classical “territory” of the psychiatrist—diagnosis of mental disorder, psychotherapy, and psychopharmacology—has been increasingly ceded to others.  Well, this is a perfect example: a seminar whose content is probably entirely applicable to most psychiatric patients, being marketed primarily to non-psychiatrists.

I’ve always maintained—on this blog and in my professional life—that psychiatrists should be just as concerned (if not more so) about the psychological, cultural, and social aspects of their patients and their experience as about their psychopharmacological management.  It’s also just good common sense, especially when viewed from the patient’s perspective.  But if psychiatrists (and our leadership) don’t advocate for the importance of this type of experience, then of course others will do this work instead of us.  We’re making ourselves irrelevant.

I’m currently experiencing this irony in my own personal life.  I’m studying for the American Board of Psychiatry and Neurology certification exam (the “psychiatry boards”), while looking for a new job at the same time.  On the one hand, while studying for the test I’m being forced to refresh my knowledge of human development, the history of psychiatry, the theory and practice of psychotherapy, the cognitive and psychological foundations of axis I disorders, theories of personality, and many other topics.  That’s the “core” subject matter of psychiatry, which is (appropriately) what I’ll be tested on.  Simultaneously, however, the majority of the jobs I’m finding require none of that.  I feel like I’m being hired instead for my prescription pad.

Psychiatry, as the study of human experience and the treatment of a vast range of human suffering, can still be a fascinating field, and one that can offer so much more to patients.  But to be a psychiatrist in this classic sense of the word, it seems more and more that one has to blaze an independent trail: obtain one's own specialized training, recruit patients outside of conventional channels, and—unless one wishes to live on a relatively miserly income—charge cash.  And because no one seriously promotes this version of psychiatry, such a psychiatrist is rapidly becoming an endangered species.

Maybe I’ll get lucky and my profession’s leadership will advocate more for psychiatrists to be better trained in (and better paid for) psychotherapy, or, at the very least, encourage educators and continuing education providers to emphasize this aspect of our training as equally relevant.  But as long as rank-and-file psychiatrists sit back and accept that our primary responsibility is to diagnose and medicate, and rabidly defend that turf at the expense of all else, then perhaps we deserve the fate that we’re creating for ourselves.


ADHD: A Modest Proposal

February 1, 2012

I’m reluctant to write a post about ADHD.  It just seems like treacherous ground.  Judging by comments I’ve read online and in magazines, and my own personal experience, expressing an opinion about this diagnosis—or just about anything in child psychiatry—will be met with criticism from one side or another.  But after reading L. Alan Sroufe’s article (“Ritalin Gone Wild”) in this weekend’s New York Times, I feel compelled to write.

If you have not read the article, I encourage you to do so.  Personally, I agree with every word (well, except for the comment about “children born into poverty therefore [being] more vulnerable to behavior problems”—I would remind Dr Sroufe that correlation does not equal causation).  In fact, I wish I had written it.  Unfortunately, it seems that only outsiders or retired psychiatrists can write such stuff about this profession. The rest of us might need to look for jobs someday.

Predictably, the article has attracted numerous online detractors.  For starters, check out this response from the NYT “Motherlode” blog, condemning Dr Sroufe for “blaming parents” for ADHD.  In my reading of the original article, Dr Sroufe did nothing of the sort.  Rather, he pointed out that ADHD symptoms may not entirely (or at all) arise from an inborn neurological defect (or “chemical imbalance”), but rather that environmental influences may be more important.  He also remarked that, yes, ADHD drugs do work; children (and adults, for that matter) do perform better on them, but those successes decline over time, possibly because a drug solution “does nothing to change [environmental] conditions … in the first place.”

I couldn’t agree more.  To be honest, I think this statement holds true for much of what we treat in psychiatry, but it’s particularly relevant in children and adolescents.  Children are exposed to an enormous number of influences as they try to navigate their way in the world, not to mention the fact that their brains—and bodies—continue to develop rapidly and are highly vulnerable.  “Environmental influences” are almost limitless.

I have a radical proposal which will probably never, ever, be implemented, but which might help resolve the problems raised by the NYT article.  Read on.

First of all, you'll note that I referred to "ADHD symptoms" above, not "ADHD."  This isn't a typo.  In fact, this is a crucial distinction.  As with anything else in psychiatry, diagnosing ADHD relies on documentation of symptoms.  ADHD-like symptoms are extremely common, particularly among children.  (To review the official ADHD diagnostic criteria from the DSM-IV, click here.)  To be sure, a diagnosis of ADHD requires that these symptoms be "maladaptive and inconsistent with developmental level."  Even so, I've often joked with my colleagues that I can diagnose just about any child with ADHD just by asking the right questions in the right way.  That's not entirely a joke.  Try it yourself.  Look at the criteria, and then imagine you have a child in your office whose parent complains that he's doing poorly in school, or gets in fights, or refuses to do homework, or daydreams a lot, etc.  When the ADHD criteria are on your mind—remember, you have to think like a psychiatrist here!—you're likely to ask leading questions, and I guarantee you'll get positive responses.

That's a lousy way of making a diagnosis, of course, but it's what happens in psychiatrists' and pediatricians' offices every day.  There are more "valid" ways to diagnose ADHD:  rating scales like the Conners or Vanderbilt surveys, extensive neuropsychiatric assessment, or (possibly) expensive imaging tests.  However, in practice, we often let subthreshold scores on those surveys "slide" and prescribe ADHD medications anyway (I've seen it plenty); neuropsychiatric assessments are often wishy-washy ("auditory processing score in the 60th percentile," etc); and, as Dr Sroufe correctly points out, children with poor motivation or "an underdeveloped capacity to regulate their behavior" will most likely have "anomalous" brain scans.  That doesn't necessarily mean they have a disorder.

So what’s my proposal?  My proposal is to get rid of the diagnosis of ADHD altogether.  Now, before you crucify me or accuse me of being unfit to practice medicine (as one reader—who’s also the author of a book on ADHD—did when I floated this idea on David Allen’s blog last week), allow me to elaborate.

First, if we eliminate the diagnosis of ADHD, we can still do what we've been doing.  We can still evaluate children with attention or concentration problems, or hyperactivity, and we can still use stimulant medications (of course, they'd be off-label now) to provide relief—as long as we obtain the same informed consent that we always have.  We do this all the time in medicine.  If you complain of constant toe and ankle pain, I don't immediately diagnose you with gout; instead, I might do a focused physical exam of the area and recommend a trial of NSAIDs.  If the pain returns, or doesn't improve, or you develop other features associated with gout, I may want to check uric acid levels, do a synovial fluid analysis, or prescribe allopurinol.

That’s what medicine is all about:  we see symptoms that suggest a diagnosis, and we provide an intervention to help alleviate the symptoms while paying attention to the natural course of the illness, refining the diagnosis over time, and continually modifying the therapy to treat the underlying diagnosis and/or eliminate risk factors.  With the ultimate goal, of course, of minimizing dangerous or expensive interventions and achieving some degree of meaningful recovery.

This is precisely what we don’t do in most cases of ADHD.  Or in most of psychiatry.  While exceptions definitely exist, often the diagnosis of ADHD—and the prescription of a drug that, in many cases, works surprisingly well—is the end of the story.  Child gets a diagnosis, child takes medication, child does better with peers or in school, parents are satisfied, everyone’s happy.  But what caused the symptoms in the first place?  Can (or should) that be fixed?  When can (or should) treatment be stopped?  How can we prevent long-term harm from the medication?

If, on the other hand, we don’t make a diagnosis of ADHD, but instead document that the child has “problems in focusing” or “inattention” or “hyperactivity” (i.e., we describe the specific symptoms), then it behooves us to continue looking for the causes of those symptoms.  For some children, it may be a chaotic home environment.  For others, it may be a history of neglect, or ongoing substance abuse.  For others, it may be a parenting style or interaction which is not ideal for that child’s social or biological makeup (I hesitate to write “poor parenting” because then I’ll really get hate mail!).  For still others, there may indeed be a biological abnormality—maybe a smaller dorsolateral prefrontal cortex (hey! the DLPFC!) or delayed brain maturation.

ADHD offers a unique platform upon which to try this open-minded, non-DSM-biased approach.  Dropping the diagnosis of “ADHD” would have a number of advantages.  It would encourage us to search more deeply for root causes; it would allow us to be more eclectic in our treatment; it would prevent patients, parents, doctors, teachers, and others from using it as a label or as an “excuse” for one’s behavior; and it would require us to provide truly individualized care.  Sure, there will be those who simply ask for the psychostimulants “because they work” for their symptoms of inattentiveness or distractibility (and those who deliberately fake ADHD symptoms because they want to abuse the stimulant or because they want to get into Harvard), but hey, that’s already happening now!  My proposal would create a glut of “false negative” ADHD diagnoses, but it would also reduce the above “false positives,” which, in my opinion, are more damaging to our field’s already tenuous nosology.

A strategy like this could—and probably should—be extended to other conditions in psychiatry, too.  I believe that some of what we call "ADHD" is truly a disorder—probably multiple disorders, as noted above; the same is probably true of "major depression," "bipolar disorder," and just about everything else.  But when these labels start being used indiscriminately (and unfortunately DSM-5 doesn't look likely to offer any improvement), they become fixed entities that lock us into an approach that may, at best, completely miss the point, and at worst, cause significant harm.  Maybe we should rethink this.

