Is Clinical Psychopharmacology a Pseudoscience?

October 24, 2011

I know I write a lot about my disillusionment with modern psychiatry.  I have lamented the overriding psychopharmacological imperative, the emphasis on rapid diagnosis and medication management, at the expense of understanding the whole patient and developing truly “personalized” treatments.  But at the risk of sounding like even more of a heretic, I’ve noticed that not only do psychopharmacologists really believe in what they’re doing, but they often believe it even in the face of evidence to the contrary.

It all makes me wonder whether we’re practicing a sort of pseudoscience.

For those of you unfamiliar with the term, check out Wikipedia, which defines “pseudoscience” as:  “a claim, belief, or practice which is presented as scientific, but which does not adhere to a valid scientific method, lacks supporting evidence or plausibility, cannot be reliably tested…. [is] often characterized by the use of vague, exaggerated or unprovable claims [and] an over-reliance on confirmation rather than rigorous attempts at refutation…”

Among the medical-scientific community (of which I am a part, by virtue of my training), the label of “pseudoscience” is often reserved for practices like acupuncture, naturopathy, and chiropractic.  Each may have its own adherents, its own scientific language or approach, and even its own curative power, but taken as a whole, their claims are frequently “vague or exaggerated,” and they fail to generate hypotheses which can then be proven or (even better) refuted in an attempt to refine disease models.

Does clinical psychopharmacology fit in the same category?

Before going further, I should emphasize I’m referring to clinical psychopharmacology: namely, the practice of prescribing medications (or combinations thereof) to actual patients, in an attempt to treat illness.  I’m not referring to the type of psychopharmacology practiced in research laboratories or even in clinical research settings, where there is an accepted scientific method, and an attempt to test hypotheses (even though some premises, like DSM diagnoses or biological mechanisms, may be erroneous) according to established scientific principles.

The scientific method consists of: (1) observing a phenomenon; (2) developing a hypothesis; (3) making a prediction based on that hypothesis; (4) collecting data to attempt to refute that hypothesis; and (5) determining whether the hypothesis is supported or not, based on the data collected.

In psychiatry, we are not very good at this.  Sure, we may ask questions and listen to our patients’ answers (“observation”), come up with a diagnosis (a “hypothesis”) and a treatment plan (a “prediction”), and evaluate our patients’ response to medications (“data collection”).  But is this only a charade?

First of all, the diagnoses we give are not based on a valid understanding of disease.  As the current controversy over DSM-5 demonstrates, even experts find it hard to agree on what they’re describing.  Maybe if we viewed DSM diagnoses as “suggestions” or “prototypes” rather than concrete diagnoses, we’d be better off.  But clinical psychopharmacology does the exact opposite: it puts far too much emphasis on the diagnosis, which predicts the treatment, when in fact a diagnosis does not necessarily reflect biological reality but rather a “best guess.”  It’s subject to change at any time, as are the fluctuating symptoms that real patients present with.  (Will biomarkers help?  I’m not holding my breath.)

Second, our predictions (i.e., the medications we choose for our patients) are always based on assumptions that have never been proven.  What do I mean by this?  Well, we have “animal models” of depression and theories of errant dopamine pathways in schizophrenia, but for “real world” patients—the patients in our offices—if you truly listen to what they say, the diagnosis is rarely clear.  Instead, we try to “make the patients fit the diagnosis” (which becomes easier to do as appointment lengths shorten), and then concoct treatment plans which perfectly fit the biochemical pathways that our textbooks, drug reps, and anointed leaders lay out for us, but which may have absolutely nothing to do with what’s really happening in the bodies and minds of our patients.

Finally, the whole idea of falsifiability is absent in clinical psychopharmacology.  If I prescribe an antidepressant or even an anxiolytic or sedative drug to my patient, and he returns two weeks later saying that he “feels much better” (or is “less anxious” or is “sleeping better”), how do I know it was the medication?  Unless all other variables are held strictly constant—which is impossible to do even in a well-designed placebo-controlled trial, much less the real world—I can make no assumption about the effect of the drug in my patient’s body.

It gets even more absurd when one listens to a so-called “expert psychopharmacologist,” who uses complicated combinations of 4, 5, or 6 medications at a time to achieve “just the right response,” or who constantly tweaks medication doses to address a specific issue or complaint (e.g., acne, thinning hair, frequent cough, yawning, etc, etc), using sophisticated-sounding pathways or models that have not been proven to play a role in the symptom under consideration.  Even if it’s complete guesswork (which it often is), the patient may improve 33% of the time (“Success! My explanation was right!”), get worse 33% of the time (“I didn’t increase the dose quite enough!”), and stay the same 33% of the time (“Are any other symptoms bothering you?”).

Of course, if you’re paying good money to see an “expert psychopharmacologist,” who has diplomas on her wall and who explains complicated neurochemical pathways to you using big words and colorful pictures of the brain, you’ve already increased your odds of being in the first 33%.  And this is the main reason psychopharmacology is acceptable to most patients: not only does our society value the biological explanation, but psychopharmacology is practiced by people who sound so intelligent and … well, rational.  Even though the mind is still a relatively impenetrable black box and no two patients are alike in how they experience the world.  In other words, psychopharmacology has capitalized on the placebo response (and the ignorance & faith of patients) to its benefit.

Psychopharmacology is not always bad.  Sometimes psychotropic medication can work wonders, and often very simple interventions provide patients with the support they need to learn new skills (or, in rare cases, to stay alive).  In other words, it is still a worthwhile endeavor, but our expectations and our beliefs unfortunately grow faster than the evidence base to support them.

Similarly, “pseudoscience” can give results.  It can heal, too: some health-care plans willingly pay for acupuncture, and some patients swear by Ayurvedic medicine or Reiki.  And who knows, there might still be a valid scientific basis for the benefits professed by advocates of these practices.

In the end, though, we need to stand back and remind ourselves what we don’t know.  Particularly at a time when clinical psychopharmacology has come to dominate the national psyche—and command a significant portion of the nation’s enormous health care budget—we need to be extra critical and ask for more persuasive evidence of its successes.  And we should not bring to the mainstream something that might more legitimately belong in the fringe.


Rosenhan Redux

September 20, 2011

“If sanity and insanity exist, how shall we know them?”

Those are the opening words of a classic paper in the history of psychology, David Rosenhan’s famous “pseudopatient” study (pdf), published in the prestigious journal Science in 1973.  In his experiment, Rosenhan and seven other people—none of whom had a mental illness—went to 12 different hospitals and complained of “hearing voices.”  They explained to hospital staff that the voices said “empty,” “hollow,” and “thud.”  They reported no other symptoms.

Surprisingly, all patients were admitted.  And even though, upon admission, they denied hearing voices any longer, they all received antipsychotic medication (Rosenhan had instructed his pseudopatients to “cheek” their meds and spit them out later) and were hospitalized for anywhere from 7 to 52 days (average = 19 days).  They behaved normally, yet all of their behaviors—for example, writing notes in a notebook—were interpreted by staff as manifestations of their disease.  All were discharged with a diagnosis of “schizophrenia in remission.”

Rosenhan’s experiment was a landmark study not only for its elegance and simplicity, but for its remarkable conclusions.  Specifically, that psychiatric diagnosis often rests solely upon a patient’s words, and, conversely, that “the normal are not detectably sane.”

Would a similar experiment performed today yield different results?  Personally, I think not.  (Well, actually, admission to a psychiatric hospital these days is determined more by the availability of beds, a patient’s insurance status, and the patient’s imminent dangerousness to self or others, than by the severity or persistence of the symptoms a patient reports, so maybe we’d be a bit less likely to admit these folks.)  At any rate, I’m not so sure that our diagnostic tools are any better today, nearly 40 years later.

In a very controversial book, Opening Skinner’s Box, published in 2003, journalist Lauren Slater claimed to have replicated Rosenhan’s study by visiting nine psychiatric emergency rooms and reporting a single symptom: hearing the word “thud.”  She wrote that “almost every time” she was given a diagnosis of psychotic depression and was prescribed a total of 60 antidepressants and 25 antipsychotics (that’s an average of 9.4 medications per visit!).  But her report was widely criticized by the scientific community, and Slater even confessed, in the November 2005 Journal of Nervous and Mental Disease, that “I never did such a study: it simply does not exist.”

While I’m deeply disturbed by the dishonesty exhibited by Slater, whose words had great power to change the public perception of psychiatry (and I am offended, as a professional, by the attitude she demonstrated in her response to her critics… BTW, if you want a copy of her response—for entertainment purposes only, of course—email me), I think she may have been onto something.  In fact, I would invite Slater to repeat her study.  For real, this time.

Here’s what I would like Slater to do.  Instead of visiting psychiatric ERs, I invite her to schedule appointments with a number of outpatient psychiatrists.  I would encourage her to cast a wide net:  private, cash-only practices; clinics in academic medical centers; community mental health clinics; and, if accessible, VA and HMO psychiatrists.  Perhaps she can visit a few family practice docs or internists, for good measure.

When she arrives for her appointment, she should report one of the following chief complaints:  “I feel depressed.”  “I’m under too much stress.”  “I see shadows out of the corner of my eyes sometimes.”  “My mood is constantly going from one extreme to the other, like one minute I’m okay, the next minute I’m all hyper.”  “My nerves are shot.” “I feel like lashing out at people sometimes.”  “I can’t pay attention at work [or school].” “I sometimes drink [or use drugs] to feel better.”  Or anything similar.

She will most certainly be asked some follow-up questions.  Maybe some family history.  Maybe a mental status exam.  She will, most likely, be asked whether she’s suicidal or whether she hears voices.  I encourage her to respond honestly, sticking to her initial, vague symptom, but without reporting anything else significant.

In the vast majority of cases, she will probably receive a diagnosis, most likely an “NOS” diagnosis (NOS = “not otherwise specified,” or psychiatric shorthand for “well, it’s sort of like this disorder, but I’m not sure”).  She is also likely to be offered a prescription.  Depending on her chief complaint, it may be an antidepressant, an atypical antipsychotic, or a benzodiazepine.

I don’t encourage otherwise healthy people to play games with psychiatrists, and I don’t promote dishonesty in the examination room.  I also don’t mean to suggest that all psychiatrists arrive at diagnoses from a single statement.  But the reality is that in many practice settings, the tendency is to make a diagnosis and prescribe a drug, even if the doctor is unconvinced of the seriousness of the patient’s reported symptoms.  Sometimes the clinic can’t bill for the service without a diagnosis code, or the psychiatrist can’t keep seeing a patient unless he or she is prescribing medication.  There’s also the liability that comes with potentially “missing” a diagnosis, even if everything else seems normal.

And on the patient’s side, too, the forces are often in favor of receiving a diagnosis.  Sure, there are some patients who report symptoms solely because they seek a Xanax Rx or their Seroquel fix, and other patients who are trying to strengthen a disability case.  But an even greater number of patients are frustrated by very real stressors in their lives and/or just trying to make sense out of difficult situations in which they find themselves.  For many, it’s a relief to know that one’s troubles can be explained by a psychiatric diagnosis, and that a medication might make at least some aspect of their lives a little easier.

As Rosenhan demonstrated, doctors (and patients, often) see things through lenses that are colored by the diagnostic paradigm.  In today’s era, that’s the DSM-IV.  But even more so today than in 1973, other factors—like the pharmaceutical industry, the realities of insurance billing, shorter appointment times, and electronic medical records—all encourage us to read much more into a patient’s words and draw conclusions much more rapidly than might be appropriate.  It’s just as nonsensical as it was 40 years ago, but, unfortunately, it’s the way psychiatry works.


Psychopharmacology And The Educated Guess

May 6, 2011

Sometimes I feel like a hypocrite.

As a practicing psychiatrist, I have an obligation to understand the data supporting my use of prescription medication.  In my attempts to do so, I’ve found some examples of clinical research that, unfortunately, are possibly irrelevant or misleading.  Many other writers and bloggers have taken this field to task (far more aggressively than I have) for clinical data that, in their eyes, are incomplete, inconclusive, or downright fraudulent.

In fact, we all like to hold our clinical researchers to an exceedingly high standard, and we complain indignantly when they don’t achieve it.

At the same time, I’ll admit I don’t always do the same in my own day-to-day practice.  In other words, I demand precision in clinical trials, but several times a day I’ll use anecdotal evidence (or even a “gut feeling”) in my prescribing practices, completely violating the rigor that I expect from the companies that market their drugs to me.

Of all fields in medicine, psychopharmacology is the one where this is not only common, but the status quo.

“Evidence-based” practice is about making a sound diagnosis and using published clinical data to make a rational treatment decision.  Unfortunately, subjects in clinical trials of psychotropic drugs rarely—if ever—resemble “real” patients, and the real world often throws us curve balls that force us to improvise.  If an antipsychotic is only partially effective, what do we do?  If a patient doesn’t tolerate his antidepressant, then what?  What if a drug interferes with my patient’s sleep?  Or causes a nasty tremor?  There are no hard-and-fast rules for dealing with these types of situations, and the field of psychopharmacology offers wide latitude in how to handle them.

But then it gets really interesting.  Nearly all psychiatrists have encountered the occasional bizarre symptom, the unexpected physical finding, or the unexplained lab value (if labs are being checked, that is).  Psychopharmacologists like to look at these phenomena and, based on their knowledge of the drugs they prescribe, try to concoct an explanation of what might be happening.  In fact, I’ve always thought that the definition of an “expert psychopharmacologist” is someone who understands the properties of drugs well enough to make a plausible (albeit potentially wrong) molecular or neurochemical explanation of a complex human phenotype, and then prescribe a drug to fix it.

The psychiatric literature is filled with case studies of interesting encounters or “clinical pearls” that illustrate this principle at work.

For example, consider this case report in the Journal of Neuropsychiatry and Clinical Neurosciences, in which the authors describe a case of worsening mania during slow upward titration of a Seroquel dose and hypothesize that an intermediate metabolite of quetiapine might be responsible for the patient’s mania.  Here’s another one, in which Remeron is suggested as an aid to benzodiazepine withdrawal, partially due to its 5-HT3 antagonist properties.  And another small study purports to explain how nizatidine (Axid), an H2 blocker, might prevent Zyprexa-induced weight gain.  And, predictably, such “hints” have even made their way into drug marketing, as in the ads for the new antipsychotic Latuda which suggest that its 5-HT7 binding properties might be associated with improved cognition.

Of course, for “clinical pearls” par excellence, one need look no further than Stephen Stahl, particularly in his book Essential Psychopharmacology: The Prescriber’s Guide.  Nearly every page is filled with tips (and cute icons!) such as these:  “Lamictal may be useful as an adjunct to atypical antipsychotics for rapid onset of action in schizophrenia,” or “amoxapine may be the preferred tricyclic/tetracyclic antidepressant to combine with an MAOI in heroic cases due to its theoretically protective 5HT2A antagonist properties.”

These “pearls” or hypotheses are interesting suggestions, and might work, but have never been proven to be true.  At best, they are educated guesses.  In all honesty, no self-respecting psychopharmacologist would say that any of these “pearls” represents the absolute truth until we’ve replicated the findings (ideally in a proper controlled clinical trial).  But that has never stopped a psychopharmacologist from “trying it anyway.”

It has been said that “every time we prescribe a drug to a patient, we’re conducting an experiment, with n=1.”  It’s amazing how often we throw caution to the wind and, just because we think we know how a drug might work, and can visualize in our minds all the pathways and receptors that we think our drugs are affecting, we add a drug or change a dose and profess to know what it’s doing.  Unfortunately, when we enter the realm of polypharmacy (not to mention the enormous complexity of human physiology), all bets are usually off.

What’s most disturbing is how often our assumptions are wrong—and how little we admit it.  For every published case study like the ones mentioned above, there are dozens—if not hundreds—of failed “experiments.”  (Heck, the same could be said even when we’re using something appropriately “evidence-based,” like using a second-generation antipsychotic for schizophrenia.)  In psychopharmacology, we like to take pride in our successes (“I added a touch of cyproterone, and his compulsive masturbation ceased entirely!”)  but conveniently excuse our failures (“She didn’t respond to my addition of low-dose N-acetylcysteine because of flashbacks from her childhood trauma”).  In that way, we can always be right.

Psychopharmacology is a potentially dangerous playground.  It’s important that we follow some well-established rules—like demanding rigorous clinical trials—and if we’re going to veer from this path, it’s important that we exercise the right safeguards in doing so.  At the same time, we should exercise some humility, because sometimes we have to admit we just don’t know what we’re doing.


What Psychiatrists Treat and Why

February 20, 2011

Do we treat diseases or symptoms in psychiatry?  While this question might sound philosophical in nature, it’s actually a very practical one in terms of treatment strategies we espouse, medications and other interventions we employ, and, of course, how we pay for mental health care.  It’s also a question that lies at the heart of what psychiatry is all about.

Anyone who has been to medical school or who has watched an episode of House knows that a disease has (a) an underlying pathology, often hidden to the naked eye but which is shared by all patients with that diagnosis, and (b) signs and symptoms, which are readily apparent upon exam but which may differ in subtle ways from patient to patient.  An expert physician performing a comprehensive examination can often make a diagnosis simply on the basis of signs and symptoms.  In some cases, more sophisticated tools (lab tests, scans, etc) are required to confirm the diagnosis.  In the end, once a diagnosis is obtained, treatment can commence.

(To be sure, sometimes a diagnosis is not apparent, and a provisional or “rule-out” diagnosis is given.  The doctor may initiate treatment on an empiric basis but will refine the diagnosis on the basis of future observations, responses to treatment, and/or disease course.)

In psychiatry, which is recognized as a branch of medicine and which should subscribe to the same principles of diagnosis and treatment, the expectations are the same.  There are a number of diseases (or disorders) listed in the DSM-IV, each theoretically with its own underlying pathology and natural history, and each recognizable by a set of signs and symptoms.  A careful psychiatric evaluation and mental status exam will reveal the true diagnosis and suggest a treatment plan to the clinician.

It sounds simple, but it doesn’t always work out this way.  Psychiatrists may disagree about a given diagnosis, or make diagnoses based on “soft” signs.  Moreover, there are very few biological or biochemical tests to “rule in” a psychiatric diagnosis.  As a result, treatment plans for psychiatric patients often include multiple approaches that don’t make sense;  for example, using an antidepressant to treat bipolar disorder, or using antipsychotics to treat anxiety or insomnia symptoms in major depression.

The psychiatrist Nassir Ghaemi at Tufts has written about this before (click here for a very accessible version of his argument and here [registration required] for a more recent dialogue in which he argues his point further).  Ghaemi argues in favor of what he calls “Hippocratic psychopharmacology.” Specifically, we should understand and respect the normal course of a disease before initiating treatment.  He also emphasizes that we not treat symptoms, but rather the disease (this is also known as Osler’s Rule, in honor of Sir William Osler, the “founder of modern medicine”).  For example, Ghaemi makes a fairly compelling argument that bipolar disorder should be treated with a mood stabilizer alone, and not with an antidepressant, or an antipsychotic, or a sedative, because those drugs treat symptoms which should resolve as a person goes through the natural course of the disease.  In other words, we miss the diagnostic forest by focusing on the symptomatic trees.

The problem is, this is a compelling argument only if there is such a diagnosis as “bipolar disorder.”  Or, to be more specific, a clear, unitary entity with a distinct pathophysiological basis that gives rise to the symptoms that we see as mania and depression, and which all “bipolar” patients share.  And I don’t believe this assumption has been borne out.

My personal bias is that bipolar disorder does exist.  So do major depression, schizophrenia, panic disorder, anorexia nervosa, ADHD, and (almost) all the other diagnoses listed in the DSM-IV.  And a deeper understanding of the pathophysiology of each might help us to develop targeted treatments that will be far more effective than what we have now.  But we’re not there yet.  In the case of bipolar disorder, lithium is a very effective drug, but it doesn’t work in everyone with “bipolar.”  Why not?  Perhaps “bipolar disorder” is actually several different disorders.  Not just formes frustes of the same condition but separate entities altogether, with entirely different pathophysiologies which might appear roughly the same on the outside (sort of like obesity or alcoholism).  Of course, there are also many diagnosed with “bipolar” who might really have no pathology at all—so it is no surprise that they don’t respond to a mood stabilizer (I won’t elaborate on this possibility here, maybe some other time).

The committee in charge of writing the DSM-5 is almost certainly facing this conundrum.  One of the “holy grails” of 21st century psychiatry (which I wrote about here) is to identify biochemical or genetic markers that predict or diagnose psychiatric disease, and it was hoped that the next version of the DSM would include these markers amongst its diagnostic criteria.   Unfortunately, this isn’t happening, at least not with DSM-5.  In fact, what we’re likely to get is a reshuffling and expansion of diagnostic criteria.  Which just makes matters worse:  how can we follow Osler’s advice to treat the disease and not the symptom when the definition of disease will change with the publication of a new handbook?

As a practicing psychiatrist, I’d love to be able to make a sound and accurate diagnosis and to use this diagnosis to inform my treatment, practicing in the true Hippocratic tradition and following Osler’s Rule, which has benefited my colleagues in other fields of medicine.  I also recognize that this approach would respect Dr Ghaemi’s attempt at bringing some order and sensibility to psychiatric practice.  Unfortunately, this is hard to do because (a) we still don’t know the underlying cause(s) of psychiatric disorders, and (b) restricting myself to pathophysiology and diagnosis means ignoring the psychosocial and environmental factors that are (in many ways) even more important to patients than what “disease” they have.

It has frequently been said that medicine is an art, not a science, and psychiatry is probably the best example of this truism.  Let’s not stop searching for the biological basis of mental illness, but also be aware that it may not be easy to find.  Until then, whether we treat “diagnoses” or “symptoms” is a matter of style.  Yes, the insurance company wants a diagnosis in order to provide reimbursement, but the patient wants management of his or her symptoms in order to live a more satisfying life.


How is an antidepressant an antidepressant?

January 14, 2011

I recently had dinner with a fellow psychiatrist who remarked that he doesn’t use “antidepressants” anymore.  Not that he doesn’t prescribe them, but he doesn’t use the word; he has become aware of how calling something an “antidepressant” implies that it’s something it (frequently) is not.  I’ve thought about his comment for a while now, and I’ve been asking myself, what exactly is an antidepressant anyway?

At the risk of sounding facetious (but trust me, that is not my intent!), an antidepressant can be defined as “anything that makes you feel less depressed.”  Sounds simple enough.  Of course, it only raises the question of what it means to be “depressed.” I’ll return to that point at another time, but I think we can all intuitively agree that there are a number of substances/medications/drugs/activities/people/places which can have an “antidepressant” effect.  Each of us has felt depressed at some point in our lives, and each of us has been lifted from that place by something different:  the receipt of some good news, the smile of a loved one, the exhilaration from some physical activity, the pleasure of a good movie or favorite song, the intoxication from a drug, the peace and clarity of meditation or prayer, and so on.

The critical reader (and the smug clinician) will correctly argue, those are simply things that make someone feel good; what about the treatment of clinical depression?  Indeed, one aspect of clinical depression is that activities that used to be pleasurable are no longer so. This distinction between “sadness” and “depression” (similar, but not identical to, the distinction between “exogenous” and “endogenous” depression) is an important one, so how do we as mental health professionals determine the best way to help a patient who asks for our help?

It’s not easy.  For one thing, the diagnostic criteria for clinical depression are broad enough (and may get even more broad) that many patients who are experiencing “the blues” or are “stressed out” are diagnosed with depression, and are prescribed medications that do little, if anything.

So can we be more scientific?  Well, it would be intellectually satisfying to be able to say, “Clinical depression is characterized by a deficiency in compound X and the treatment replaces compound X,” much like we replace insulin in diabetes or we enhance dopamine in Parkinson’s disease.  Unfortunately, despite the oft-heard statement about “chemical imbalances,” there don’t appear to be any measurable imbalances.  The pretty pictures in the drug ads—and even in the scientific literature—show how (some) antidepressants increase levels of serotonin in the brain, but there’s not much evidence for this explanation of depression, as discussed in this review.  As the authors point out, saying that depression is a deficiency in serotonin because SSRIs help is like saying a headache is a deficiency in aspirin.

In fact, many “antidepressant” drugs affect different neurotransmitters, including norepinephrine and dopamine.  Additional medications that can benefit depression include mood stabilizers, stimulants, antipsychotics, glutamate antagonists, and thyroid hormone analogues.  Do you see a pattern?  I don’t.  Finally, there are still other interventions like electroconvulsive therapy (ECT), transcranial magnetic stimulation (TMS), vagal nerve stimulation (VNS), and others, that don’t directly affect neurotransmitters at all, but affect other structures and pathways in the brain that we’re just beginning to understand.

Each of these is a tested and “approved” therapy for depression (although the data are better for some interventions than for others), and for each intervention, there are indeed some patients who respond “miraculously.”  But there are also others who are not helped at all (and still others who are harmed); there’s little evidence to guide us in our treatment selection.

To a nonpsychiatrist, it all seems like a lot of hand-waving.  Oftentimes, it is.  But you would also think that psychiatrists, of all people, would be acutely aware that their emperor has no clothes.  Unfortunately, though, in my experience, they don’t.  With a few exceptions (like my dinner colleague, mentioned above), we psychiatrists buy the “chemical imbalance” theory and use it to guide our practice, even though it’s an inaccurate, decades-old map.  We can explain which receptors a drug is binding to, how quickly a drug is metabolized & eliminated from the body, even the target concentration of the drug in the bloodstream and cerebrospinal fluid.  The pop psychiatrist Stephen Stahl has created heuristic models of psychiatric drugs that encapsulate all these features, making prescription-writing as easy as painting by numbers.  But in the end, we still don’t know why these drugs do what they do.  (So it shouldn’t really surprise us, either, when the drugs don’t do what we want them to do.)

The great “promise” of the next era of psychiatry appears to be individualized care—in other words, performing genetic testing, imaging, or using other biological markers to predict treatment choices and improve outcomes.  Current efforts to employ such predictive techniques (like quantitative EEG) are costly, and give predictions that are not much better than chance.

Depression is indeed biological (as long as you agree that the brain has at least something to do with conscious thought, mood, and emotion!), but does it have recognizable chemical deficiencies or brain activation patterns that will respond in some predictable way to available therapies?  If so, then it bodes well for the future of our field.  But I’m afraid that too many psychiatrists are putting the cart before the horse, assuming that we know far more than we actually do, and suggesting treatments that “sound good,” but only according to a theoretical understanding of a disease that in no way reflects what’s really happening.

Unfortunately, all this attention on chemicals, receptors, and putative neural pathways takes the patient out of the equation.  Sometimes we forget that the nice meal, the good friend, the beautiful sunset, or the exhilarating hike can work far better than the prescription or the pill.


Childhood ADHD and Medicaid

December 31, 2010

A study out of UCLA shows that there is a need for significant improvement in the delivery of ADHD care to children on Medicaid.  The study was published in the Journal of the American Academy of Child and Adolescent Psychiatry and a summary can be found at Medscape.

The study followed over 500 children with ADHD.  All were on Medi-Cal (California’s Medicaid program) and were observed over a one-year period.  Some participated solely in primary care treatment, while others received “specialty care” in mental health clinics.  (Because this was an observational study, children were not randomized or assigned to each group, but were simply followed over their course of treatment.)  The study found that at the end of the year, both groups of children fared the same on measures of ADHD symptoms, functioning, academic achievement, family function, and other parameters.

How did primary care differ from “specialty” care?  For one thing, children in the primary care group received stimulant medication 85% of the time (nearly all of these children received a prescription for some medication), but that was about it:  they only followed up with their providers an average of 1 or 2 times over the entire one-year follow-up period, and their prescription refill rate was less than 40%.  (50% dropped out of care.)

On the other hand, over 90% of the children in the specialty care group received some sort of psychosocial treatment, and only 40% of these children received medication (30% received stimulants).  Office visits were far more frequent in this population, too, averaging over 5 per month for the duration of the one-year study.

So on the face of it, one might predict that specialty treatment would provide much better care: children had far more frequent contact with their providers, medications were used judiciously (one would assume), and psychosocial interventions were included.  However, the end result was that children did not fare differently between the two groups.  Academic scores and measures of clinical impairment and “parent distress” were similar in both groups.  Dropout rates and medication discontinuation rates were also similar.

One obvious limitation of this study, which the authors emphasize, is that it is not a randomized trial but rather an observational study of “real world” patients.  But then again, that’s what they wanted to do:  to observe whether mental health clinics provided better ADHD care.  Two unfortunate conclusions can be drawn.  First, primary care clinics do very little to treat childhood ADHD (cynically, one might look at the data and conclude that they simply “throw meds at the problem” with little to no follow-up).  Second, even when these clinics do refer children to a higher level of care, the outcomes aren’t much better (and the resource costs are undoubtedly much higher).

With the promised expansion of the Medicaid program under PPACA, more children will be receiving care, with mental health as a priority area.  Hopefully, studies like this one will prompt us not simply to provide more care to the increased number of children that will undoubtedly seek it, but to provide better care along the way.


A new take on placebos

December 29, 2010

It has long been known in medicine that placebos can be surprisingly effective for the treatment of a wide range of disorders.  A placebo, whose name is taken from the Latin for “I please,” is an inert substance, such as a sugar pill, that should have little to no effect on any physiological process.

A “placebo-controlled study” is considered the gold standard in medication trials; in such a study, half of the patients with a given condition are prescribed an active medication, while the other half are prescribed a placebo.  In virtually all studies, there is some improvement in the placebo group, and this improvement can, at times, be significant.  In trials of antidepressants, for example, it has been estimated that up to 75% of the antidepressant response may be due to a placebo effect, an observation that has received much popular press of late.
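To make the arithmetic behind that “75%” estimate concrete, here is a minimal sketch; the symptom-score numbers below are invented for illustration, not taken from any actual trial.

```python
# Illustrative arithmetic behind the "up to 75% of the antidepressant
# response may be placebo" estimate.  All numbers are hypothetical.

drug_arm_improvement = 10.0     # mean symptom-score improvement in the drug arm
placebo_arm_improvement = 7.5   # mean improvement in the placebo arm

# Share of the drug arm's improvement that is also seen on placebo
placebo_share = placebo_arm_improvement / drug_arm_improvement
print(f"Placebo accounts for {placebo_share:.0%} of the drug response")

# The drug's specific (pharmacological) contribution is the remainder
drug_specific = drug_arm_improvement - placebo_arm_improvement
print(f"Drug-specific improvement: {drug_specific} points")
```

With these invented numbers, placebo accounts for 75% of the measured response, leaving only 2.5 points of drug-specific improvement – which is why a drug can “work” in the clinic while adding little beyond placebo in a controlled trial.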

A research group at Harvard Medical School has taken this one step further.  They took a group of patients with irritable bowel syndrome (IBS), gave half of them a placebo, and the other half nothing.  (This would be, I guess, a “placebo-placebo controlled study”!)  More importantly, however, they even told the placebo group that they were getting a placebo!  Specifically, they told patients that they would get “placebo pills made of an inert substance, like sugar pills, that have been shown … to produce significant improvement in IBS-symptoms.”

In their report (available freely here), they showed that the placebo was more effective at treating IBS symptoms than nothing at all, and – even though they did not directly compare placebo to any active medication – they found that the rate of improvement was twice the success rate of the most powerful IBS medications.

While this raises several important questions about the utility of placebos in medicine, it also hits at the heart of a lot of what we do in psychiatry.  Most psychiatrists have had the experience of seeing a patient fail multiple medications but exhibit a positive response to yet another medication from the same class, for no obvious reason.  Or of giving two similar patients the same medication and finding that one responds while the other does not.

Modern biological psychiatry looks at situations like these and asks:  what are the interindividual biochemical or physiological differences that predict response to one agent over another?  Are there genetic or other biological markers that make one person a better candidate for medication X than for medication Y?

This study, however, raises new questions in situations such as these.  If patients’ symptoms can improve after taking an inert substance (and I’d be interested to see a repeat study in patients with a mental illness – although IBS itself is a “psychosomatic” illness with strong psychological features), this result cannot simply be ignored or ascribed to chance.  Something is working in this treatment, but exactly what?  Is it the way we talk to patients about treatment?  Something about patients’ expectations of treatment?  If patients don’t believe that their meds will work, does this prompt them to enact more effective behavioral changes in their lives?  It appears that patients have more of an ability to solve their problems than we often give them credit for, and this study should prompt us to look for those strengths, not serve as ammunition to attack the weaknesses of psychiatric medicine.