Misplaced Priorities in Addiction Treatment?

January 31, 2011

Can an addiction be treated with a drug?  Imagine: a simple pill to satisfy all of one’s cravings for drugs or alcohol, and to avoid the ravages of this disease.  It would revolutionize our treatment of addiction.  And since we’re constantly told that addiction is a brain disease, it only makes sense that, once we understand the underlying biology, we’ll be able to create just such a pill, right?  Countless researchers, labs, and pharmaceutical companies are indeed trying to do this, as we speak.

The addict struggling to get clean might scramble to be first in line to receive this magic pill.  The recovered addict, on the other hand, would probably argue that a chemical solution, a “drug to end all drugs,” so to speak, is far too simplistic.  Addictions are behavioral, psychological, social, and spiritual problems (and, yes, they have some underlying neurochemical factors, too).  A pill may treat withdrawal symptoms, help to reduce the complications of intoxication, or minimize craving, but even if that pill is 99% effective in reducing cravings or preventing the intoxicating effect of a drug, the addict will always look to achieve that 1%.  It’s how the disease works.

I mention this not only because I am familiar with the recovery process (including the twelve-step approach, which is decidedly not pharmacological but is probably the closest thing we have to an “effective treatment”), but also because I am familiar with how well-meaning professionals often trivialize addiction and recovery.  Our own biases sometimes keep us from recognizing what should be obvious.

A good example is in the January 2011 American Journal of Psychiatry, which contains a letter to the editor suggesting that disulfiram (commonly known as Antabuse) ought to be investigated for its “anticraving” properties.  The authors point out that disulfiram may increase levels of dopamine in the brain, and since dopamine is “involved” in reward (and addicts sometimes have decreased dopamine activity in the reward pathways), it may reduce craving for addictive drugs and behaviors.

For those of you who don’t know about Antabuse, it has been around since the 1940s and is known as an “aversive” agent.  When a person drinks alcohol while taking Antabuse, the drug blocks one of the key steps in alcohol metabolism, leading to a build-up of acetaldehyde in the blood, which causes sweating, nausea, vomiting, flushing, and headache.  By itself, Antabuse has no effect on drinking or the desire to drink, but when an alcoholic drinks on Antabuse, the reaction is so uncomfortable that the person learns to associate that discomfort with alcohol and avoids drinking in the future.  (Good old-fashioned classical conditioning at work.)

My reaction to the letter in the journal is not that the authors were factually incorrect, or that we shouldn’t study disulfiram and its properties, but that their argument misses the point.  Despite decades of experience with Antabuse, we still have alcoholism and other addictive behaviors, so obviously it’s not a magic bullet.  And people who take Antabuse still crave alcohol, so it doesn’t reduce craving to any meaningful degree (in fact, one of the arguments against using Antabuse is that people who want to drink– which is, unfortunately, most alcoholics– simply stop taking it).  The authors cite a case study in which a patient’s desire to gamble “disappeared completely” after taking Antabuse, but as with most everything in psychiatry, how do we know this had anything to do with the drug?

It’s quite naive to think that a simple pill can undo an addiction when addictions are far more complex entities.  It reminds me of the doctor who chooses Wellbutrin over a different antidepressant for a depressed patient “because she smokes” (the active compound in Wellbutrin, bupropion, also sold as Zyban, has been shown to be effective in smoking cessation).  Or the doctor who prescribes Suboxone for the daily Oxycontin and Vicodin addict.  Or the doctor who adds Topamax to the regimen of the obese bipolar patient (because some studies show a modest decrease in food craving).

These are not bad ideas (and yes, I’ve seen them all), but again they miss the point.  The depressed smoker isn’t going to give up nicotine because she’s all of a sudden taking Wellbutrin.  The opiate addict won’t unlearn his addictive behaviors and mindset because he’s now taking Suboxone.

If science continues to look at addictions through the lens of neurotransmitters and “reward pathways” in the brain, and to use animal models to study substance dependence (it goes without saying that a rat in a cage is quite different from the homeless crack-addicted prostitute, or the high-powered alcoholic CEO), then we will achieve nothing more than partial success in treating substance dependence.  The clinical trials for “anticraving” drugs like Campral and naltrexone show just how limited these agents are; they measure efficacy in terms of “number of drinking days” or “time until first heavy drinking day,” not in binary terms like “drinking” or “not drinking.”

I know that none of the experts in the addiction field would ever suggest that a medication will solve any individual’s (much less society’s) addiction problem.  But I’m concerned about the non-expert clinician, who has neither experienced nor witnessed true addiction.  I’m also concerned about the addict, who sees a news headline about some new anti-alcoholism or anti-obesity pill and believes that the wonders of modern science will cure his addiction (so he doesn’t have to look at his own problems).

We in the field also need to be careful about what we promise our patients, and understand the limits of our science.  Perhaps we should go one step further and scrap the science altogether, and instead focus on other ways to understand what drives our patients to drink or use drugs, and emphasize a more comprehensive approach to recovery– and yes, one that will require the addict to do a lot more than just take a pill.

“Decision Support” in Psychiatry

January 28, 2011

I’ve long believed that, just as no two psychiatric patients are identical, there is not– and never will be– a “one size fits all” approach to psychiatric care.  However, much work has been done in the last several years to develop “algorithms” to guide treatment and standardize care.  At the same time, the adoption of electronic health record (EHR) systems– which are emphasized in the new U.S. health care legislation– has introduced the possibility that computerized decision-support systems will help guide practitioners to make the right choices for their patients.  It is my opinion that such approaches will not improve psychiatric care and, in fact, will interfere with the human aspect that is the essence of good psychiatric practice.

“Clinical decision support,” or CDS, is the idea that an algorithm can help a provider give the right kind of care.  For a busy doctor, it makes sense that getting a quick reminder to prescribe aspirin to patients with coronary artery disease, or to give diet and exercise recommendations to patients at risk for obesity or diabetes, helps to ensure good care.  Several years ago, I actually helped to develop a CDS system designed to remind primary care doctors to avoid opiate painkillers (or use them with caution) in patients who had a history of substance abuse or other relative contraindications to narcotics.  At the time, I thought this was a great idea.  Why not harness the ability of a computer to gather all the data on a given patient– something that even the best doctor cannot do with absolute accuracy– and suggest the most advisable plan of action?
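To make the idea concrete, here is a toy sketch of how such a rule-based reminder system works under the hood.  Everything in it– the field names, the drug list, the rules– is made up for illustration; real CDS systems are vastly more elaborate and are wired into the EHR itself.

```python
# A toy, rule-based CDS in the spirit of the reminders described above.
# All field names, rules, and drug lists are hypothetical illustrations.

OPIATES = {"oxycodone", "hydrocodone", "morphine"}

def cds_alerts(patient, proposed_drug):
    """Return a list of advisory messages for a proposed prescription."""
    alerts = []
    # Caution with narcotics in patients with a substance-abuse history
    if proposed_drug in OPIATES and "substance_abuse" in patient["history"]:
        alerts.append("Caution: opiate requested for a patient with a "
                      "history of substance abuse.")
    # Reminder to consider aspirin in coronary artery disease
    if ("coronary_artery_disease" in patient["history"]
            and "aspirin" not in patient.get("meds", [])):
        alerts.append("Reminder: consider aspirin for coronary artery disease.")
    return alerts

# Example: the opiate rule fires for this (hypothetical) patient record.
patient = {"history": ["substance_abuse"], "meds": []}
print(cds_alerts(patient, "oxycodone"))
```

Note how the rules are blunt, binary, and blind to context– which is precisely the problem when the “right” treatment depends on a gestalt rather than a checklist.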

Now that I spend most of my time actually practicing medicine, and using two different EHR systems, I’m having second thoughts.  While I appreciate the ability to enter patient data (and my notes) into a system that is instantly accessible by any provider in my office at any time, and write prescriptions with a few clicks of my mouse, I’ve begun to resent the ways in which EHRs tell me how to practice, particularly when (a) they give recommendations that I would employ anyway (thereby wasting my time), or (b) they give recommendations that deviate from what I believe is right for the patient.

Obviously, the latter complaint is particularly relevant in psychiatry, where each patient presents a different background of symptoms, stressors, preferences, and personal history.  When anyone asks me “who is your ideal patient for drug X?” or “what is your first choice of drug for depression?” I find it hard to give an answer.  Treatment choices come down to a feeling, a gestalt, incorporating both observable data and intuition; it’s hard to describe and impossible to quantify.

One example of a psychiatric CDS is based on the Texas Medication Algorithm Project (TMAP).  The TMAP was developed to help providers determine what medications to use in the treatment of mood disorders; the first version of TMAP for depression was designed in 1999 and implemented in a computerized CDS in 2004.  A pilot study involving four primary care providers, published in 2009, showed that depression outcomes were slightly better (i.e., scores on the HAM-D were lower) in the group using the CDS.  (This may have been due to the setting; in a busy primary care clinic, any guidance to address depression symptoms may improve outcomes relative to no guidance at all.)  However, a follow-up study by the same group found that it was much harder to implement the CDS on a more widespread scale in mental health clinics, due to technical problems, poor IT support, billing & coding problems, formulary issues, recommendations that providers disagreed with, lack of time, and impact on workflow.

That may have been for the better.  A new study in this month’s Archives of Internal Medicine by Romano and Stafford shows that CDSs may just be a waste of time and money.  They evaluated over 330 million ambulatory care patient visits using EHRs from 2005 to 2007, 57% of which involved at least one CDS, and found that, on 20 quality-of-care indicators, using a CDS contributed to improvements in treatment (i.e., treatment concordant with established guidelines) on only one measure.  (Two measures involved psychiatric conditions– one was for the treatment of depression, and the other was to remind providers not to use benzodiazepines alone for depression treatment.  Neither of these measures showed improvement when a CDS was used, relative to no CDS.)

So despite all the resources devoted to electronic medical records and clinical decision support systems to improve care, the evidence seems to indicate that they don’t.  Either doctors ignore CDSs and provide “practice as usual” anyway, or the CDSs give recommendations that doctors already follow.

This may be good news for psychiatry, where treatment guidelines (thankfully) offer a great deal of latitude, but CDSs, by their very nature, may restrict our options.  In the future, then, when we believe that the patient sitting in front of us is a good candidate for Effexor, or Seroquel, or interpersonal therapy with no meds at all, we may no longer need to explain to a computer program why we’re ignoring its recommendation to try Prozac or Haldol first.

In my opinion, anything that preserves the integrity of the physician-patient interaction– and prevents the practice of medicine from turning into a checklist-and-formula-based recipe– preserves the identity of the patient, and improves the quality of care.

Addendum:  See also a related post today on 1boringoldman.com.

Viva Viibryd ?

January 25, 2011

Well, what do you know… I turn my back for one second and now the FDA has gone ahead and approved another antidepressant.

This new one is vilazodone, made by Massachusetts-based company Clinical Data, Inc., and will be sold under the name Viibryd (which I have absolutely no idea how to pronounce, but I’m sure someone will tell me soon).

At first glance, vilazodone seems promising. It’s not exactly a “me-too” drug, a molecule similar in structure and function to something that already exists. Instead, it’s a “dual-action” antidepressant, a selective serotonin reuptake inhibitor and partial agonist at serotonin 1A receptors. In other words, it does two things: it blocks the reuptake of serotonin into neurons (much like the existing SSRIs like Prozac, Zoloft, and Lexapro) and it acts as a partial agonist at a particular type of serotonin receptor called “1A.” A partial agonist is a molecule that binds to a receptor on a target cell and does not activate that cell fully but doesn’t entirely prevent its response, either.

(Note: don’t let the name fool you. “Dual-action” agents are not “twice as effective” as other agents, and sometimes work just the same.)

If you buy the serotonin hypothesis of depression (closely derived from the “monoamine hypothesis”), then depression is caused by a deficiency in serotonin.  SSRIs cause an increase in serotonin between two cells.  However, the higher levels of serotonin serve as “negative feedback” to the first-order cell in order to keep the system in balance.  (Our bodies do this all the time.  If I keep yelling at you for no clear reason, you’ll rapidly “downregulate” your attention so that you don’t listen to me anymore.  Neurons work this way, too.)  The idea behind a partial agonist is that it will only do “part” of the work that serotonin will do (actually, it will effectively block the negative feedback of serotonin) to increase serotonin release even more.

Remember– that’s only if you agree that low serotonin is responsible for depression. And there are plenty of respectable people who just don’t buy this. After all, no one has convincingly shown a serotonin deficit in depression, and when SSRIs do work (which they do, remarkably well sometimes), they may be acting by a totally different mechanism we just don’t understand yet. Nevertheless, vilazodone did show a significant effect as early as the first week, an effect that lasted for eight weeks.

Specifically, a phase III trial of 410 adults with depression showed decreases in MADRS and HAM-D scales relative to placebo, as well as on the CGI-I, CGI-S, and HAM-A scales, with a decrease in MADRS score from a mean of 30.8 at baseline to about 18 at the 8-week timepoint (the placebo group showed a decrease of about 10 points). A similar decrease was seen in the HAM-D. As is typical with these studies, the phase III trial did not compare vilazodone to an existing drug. However, unpublished phase II trials did compare it to fluoxetine (Prozac) and citalopram (Celexa), and to placebo, and results show that the drugs were comparable (and placebo response rates were high, as high as 40% in some trials). Incidentally, 9.3% of patients in the phase III trial dropped out due to adverse effects, mainly diarrhea.
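For the arithmetic-minded: a quick back-of-the-envelope calculation, using only the approximate figures quoted above (not the trial’s actual statistics), shows just how modest the placebo-adjusted benefit is.

```python
# Rough arithmetic on the approximate MADRS figures quoted above.
# This is illustration only, not a substitute for the trial's statistics.
baseline = 30.8        # mean MADRS at baseline (drug group)
endpoint = 18.0        # approximate mean MADRS at week 8 (drug group)
placebo_drop = 10.0    # approximate decrease in the placebo group

drug_drop = baseline - endpoint          # about 12.8 points on drug
adjusted = drug_drop - placebo_drop      # about 2.8 points over placebo
print(round(drug_drop, 1), round(adjusted, 1))
```

In other words, most of the improvement in the drug arm is matched by the placebo arm– a pattern familiar from nearly every modern antidepressant trial.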

So is a blockbuster in the works? Well, it’s not quite as “new” as one would think. SSRIs have been in widespread use for years, and there’s already a serotonin 1A partial agonist available called BuSpar (generic = buspirone) which is sort of a “ho-hum” drug– effective for some, but nothing to get too excited about. It seems that one could make “homemade” vilazodone by combining buspirone with an SSRI. (Kids, don’t try this at home. Please consult an expert.) This is a fairly common combination, although most psychiatrists have been underwhelmed by buspirone’s efficacy (one of my teachers called it “holy water”). Maybe vilazodone will convince me otherwise.

To go back to my original question, do we really need this? My gut reaction is no, as it seems too similar to what we already have available. There may be a small group of treatment-resistant depressed patients for whom vilazodone will be a wonder drug, a true lifesaver. In an attempt to discover this small group, the manufacturer is simultaneously studying “biomarkers that may predict treatment response.” In other words, they’re looking for genetic “fingerprints” that might predict patients who will respond to their drug (or who will get side effects). They have no “hits” yet (one of the markers they studied in phase III proved to have no predictive value in a follow-up trial), but it’s appealing to think that we might get more data on how to use– or avoid– this new drug more wisely.

While it’s good to have more tools in our toolkit, I sincerely hope this doesn’t turn into yet another in a long line of medications that we give to depressed patients in the trial-and-error process that unfortunately characterizes a lot of depression management.  What’s truly needed is not just another serotonin agent, but a tool (like a genetic test) to predict who’s likely to respond, or, better yet, a more sophisticated understanding of what’s happening in the minds of “depressed” patients.  (And the differences among depressed patients far outweigh their similarities.)  Until then, we’ll just be making incremental progress toward an elusive goal.

Levitra Ads Yanked From TV

January 25, 2011

Well, this is unfortunate news.  Drug maker GlaxoSmithKline is pulling its TV ads for Levitra.

No more awkward notices about “erections lasting longer than four hours” on the evening news.  No more of those suggestive glances between a couple out for a relaxing hike or enjoying a glass of wine on the back porch of their house (after their grown children have moved out, of course).  No more reminders of how a simple pill can add that “spark” back to your relationship when you’d otherwise be worrying about your retirement account, your mortgage, or which color of convertible to buy to satisfy your craving for youth.

In all seriousness, Glaxo sees erectile dysfunction as a “legitimate medical condition,” and while they’re committed to their product, they are pulling their direct-to-consumer Levitra ads “to be more respectful of patients.”  Says Deirdre Connelly, president of their North America pharmaceuticals unit, “when we walk into your home through television, we have to do it in a respectful way.”

That’s funny, when my friends walk into my home, they almost always ask me questions about my sex life and drop hints about whether I’m “ready.”

I have to admit, though, their German advertising is more entertaining:


LSD and the Happy Housewife

January 24, 2011

This video has been circulating on the internet over the last week or so; it’s from the mid-1950s and shows a “typical housewife” and her first experience with LSD.  It was part of a research study by Sidney Cohen at the Los Angeles VA Hospital, who was testing the effects of LSD on normal volunteers.  (hat tip to Huffington Post)

Lysergic acid diethylamide, or LSD, has a fascinating history, and if we can ignore the pervasive cultural bias against psychedelics and other potentially abusable substances, LSD must have appeared quite revolutionary in the mid-20th century.  Remember, this was before the advent of modern psychiatric medications, after World War II, and at a time when psychotherapy was called upon to handle everything from psychosis to the neurotic housewife like the one in the video.  (Okay, I don’t know if she was neurotic, I’m just assuming so!)  Psychedelics were introduced by people like Cohen, Timothy Leary, and Stanislav Grof, as ways to alter the fundamental personality structure of a patient, creating an “inner quietude,” breaking down psychological barriers to insight, or “enhancing creativity.”  In the absence of anything else even remotely similar, LSD must have held quite some promise for psychiatry.

(Actually, it still does.  A number of controlled studies on psychedelics are underway, including the study of MDMA (ecstasy) to treat PTSD, and psilocybin (mushrooms) to treat anxiety and pain in end-stage cancer.)

One area in which LSD was used in the past, with some isolated positive results, was in the treatment of alcoholism.  It was believed that LSD might cause the same sort of “spiritual awakening” that is thought to be so important in the 12-Step model of recovery.  Indeed, Bill Wilson, founder of Alcoholics Anonymous, received a series of LSD sessions from 1955 to 1959, and as a result of his chemically-induced “spiritual” experiences, he is reported to have approached AA’s leadership to ask them to consider endorsing LSD as a therapy for alcoholism.  Something tells me AA would have a much wider membership today if this had taken hold.

Bonus feature:  A video made for the 100th birthday of Albert Hofmann, the Sandoz Laboratories scientist who first synthesized LSD, depicting Hofmann’s first experience with the drug.  He took a dose in his laboratory but had to go home shortly thereafter because of acute anxiety and perceptual abnormalities.  Riding his bicycle home, he experienced feelings of paranoia, visual hallucinations and illusions, and the fear of imminent death.  This video re-creates the episode.

And finally, for good measure, if you like this, check out one of my favorite music videos, “Gronlandic Edit” by of Montreal.

Psychosomatic illness and the DSM-5

January 21, 2011

Among the most fascinating diagnoses in psychiatry are the somatoform disorders; these are characterized chiefly by physical symptoms without a clear medical or biological basis, but which instead are thought to arise from some deeper psychological source.  The field of “psychosomatic medicine” (not to mention many of the most classic cases in the history of psychiatry and psychoanalysis) illustrates the impact of mental factors on physical illness.  Indeed, most of us have experienced the effects of our moods, thoughts, and attitudes on physical symptoms.  For instance, our headaches intensify when we’re under a lot of stress at work, whereas we can usually ignore pain and fatigue when in the midst of intense and exhilarating competition.  Conversely, intense psychological trauma or prolonged deprivation can contribute to chronic physical disease, while a terminal illness can cause extreme psychological suffering.

The somatoform disorders as currently listed in the DSM-IV, the “Bible” of psychiatric diagnosis, are:

  • conversion disorder – unexplained neurological symptoms that are thought to arise in response to psychological conflicts
  • somatization disorder – more widespread physical symptoms (pain, gastrointestinal, sexual, neurological) before the age of 30 and with a chronic course
  • hypochondriasis – excessive preoccupation, worry, or fear about having a serious medical illness
  • body dysmorphic disorder – excessive concern and preoccupation with a perceived (but often nonexistent) physical defect
  • pain disorder – chronic pain in one or more areas, usually exacerbated by psychological factors
  • undifferentiated somatoform disorder – one or more unexplained physical symptoms, present for at least six months

The planning committee in charge of writing the DSM-5, the replacement to the DSM-IV, wants to scrap this category and create a new one called simply “Somatic Symptom Disorders.”  What makes a “Somatic Symptom Disorder” in the new classification?  According to the APA, “any somatic symptom or concern that is associated with significant distress or dysfunction,” combined with “anxiety” or “persistent concerns” about the symptoms.  Have a nasty, persistent cough?  Frequent headaches?  Concerned about it?  Congratulations, you may now have a mental illness as well.  They also propose a “complex somatic symptom disorder” (CSSD) category in which the symptom(s) is/are accompanied by “excessive or maladaptive response” to those symptoms.  What’s excessive or maladaptive?  As with anything in psychiatry, that’s for you (or, more accurately, your doctor) to decide.

(Specifically, most of the somatoform disorders will be lumped together into the “SSD” category.  They plan to move body dysmorphic disorder into the anxiety group, and the criteria for conversion disorder will be narrowed to describe simply an unexplained neurological symptom– none of the deeper psychological components are necessary for this diagnosis either).

Why would they do such a thing?  In the words of the APA, “clinicians find these diagnoses unclear” and “patients find them very objectionable.”  In other words, doctors just don’t use these diagnoses, and patients think their concerns aren’t being taken seriously.

Whether this justification seems appropriate is certainly debatable.  Maybe these diagnoses aren’t made because we’re just not looking for them.  Maybe we’re afraid of alienating patients.  Maybe it’s because no new drugs have been approved for use in somatoform disorders.  Or maybe it really is just a bogus category.  Nonetheless, the proposed solution may be just as bogus.  Indeed, it seems rather absurd to give a psychiatric diagnosis on the basis of a single unexplained bodily symptom and, of course, one complaint about this proposal is that it continues psychiatry’s gradual march towards pathologizing everyone.

To me, the greatest disappointment is that the richness and complexity of the various somatoform disorders will be disposed of, in favor of criteria that only require a physical symptom and “anxiety or concern” about the symptom.  It may sound condescending or objectionable to remark that an unexplained symptom is “all in one’s head,” but these more user-friendly diagnostic criteria may make clinicians even less likely to “look under the hood,” so to speak, and to uncover the mental and psychological factors that may have an overwhelming, yet hidden, influence on the patient’s body and his/her perceptions of bodily phenomena.

We are only beginning to understand the intricacies and wonders of the connections between mind and body.  Such understanding draws heavily on complementary approaches to human health and disease, alongside the findings of conventional medical science.  Hopefully, psychiatric practitioners will continue to pay attention to advances in this field in order to provide comprehensive, “holistic” care to patients, even if the DSM-5’s efforts at diagnostic expediency and simplicity portend otherwise.

Kids gaming pathologically

January 19, 2011

Today’s New York Times “Well” blog shares the results of a recent study suggesting that video games may contribute to depression in teenagers.  Briefly, the study found that grade-school and middle-school students who were “more impulsive and less comfortable with other children” spent more time playing video games than their peers.  Two years later, these same students were more likely to suffer from depression, anxiety, and social phobias.  The authors are careful to say that there’s no evidence the games caused depression, but there’s a strong correlation.

I pulled up the original article, and the authors’ objectives were to “measure the prevalence…of pathological video gaming, …to identify risk and protective factors, …and to identify outcomes for individuals who become pathological gamers.”  They didn’t use the word “addiction” in their paper (well, actually, they did, but they put it in quotes), but of course the take-home message from the NY Times story is quite clear:  kids can be addicted to video game playing, and this could lead to depression.

As with any extreme activity, I would not be surprised to learn that there are some kids who play games compulsively, who sacrifice food, sleep, hygiene, and other responsibilities for long periods of time.  But to use words like ‘addiction’– or even the less loaded and more clinical-sounding ‘pathological gaming’– risks labeling a potentially harmless behavior as a problem, and may have little to do with the underlying motives.

What’s so pathological, anyway, about pathological gaming?  Is the kid who plays video games for 30 hours a week playing more “pathologically” than the one who plays for only 10?  Does the kid with lots of friends, who gets plenty of fresh air and is active in extracurriculars, face a more promising future than the one who would prefer to sit at home on the XBOX360 and sometimes forgets to do his homework?  Which friends are more valuable in life– the Facebook friends or the “real” friends?  We know the intuitive answer to these questions, but where are the data to back up these assumptions?

The behavior itself is not the most important factor.  I know some “workaholics” who work 80-plus-hour weeks; they are absolutely committed to their work but they also have rich, fulfilling personal lives and are extremely well-adjusted.  I’ve also met some substance abusers who have never been arrested, never lost a job, and who seem to control their use (they often describe themselves as “functional” addicts) but who nonetheless have all the psychological and emotional hallmarks of a hard-core addict and desperately need rehabilitation.

I have no problem with researchers looking at a widespread activity like video game playing and asking whether it is changing how kids socialize, or whether it may affect learning styles or family dynamics.  But when we take an activity that some kids do “a lot” and label it as pathological or an “addiction,” without defining what those terms mean, or asking what benefit these kids might derive from it, we are, at best, imposing our own standards of acceptable behavior on a generation that sees things much differently, or, at worst, creating a whole new generation of addicts that we now must treat.

FDA approval of psych meds – an alternative

January 17, 2011

The FDA’s approval process for new psychiatric drugs is broken.  It is time-consuming and costly, and benefits no one—patients, physicians, pharmaceutical companies, managed care organizations, or other payers.

To bring a new compound to market, pharmaceutical companies and academic labs invest years (and millions of dollars) in basic research.  When a compound appears promising, it enters “Phase I” testing, to assess the drug’s basic properties and safety profile in healthy human subjects; this phase may take one to two years.  If successful, the drug enters “Phase II” testing, which measures responses to the drug in a small target population of patients.  After this step comes “Phase III” testing, usually the most expensive and prolonged phase, in which the drug is tested (usually against a placebo) to determine its safety and efficacy for a given indication.  This may take many more years, and many more millions of dollars, to complete.

For psychiatric drugs, this process is somewhat of an anachronism.  There is extensive overlap among psychiatric diagnoses (and the changes on the horizon with DSM-5 won’t make things any clearer), so it makes little sense to focus on a drug’s efficacy for a single indication (e.g., generalized anxiety disorder) when it could prove quite helpful in another (e.g., major depression).  The end result is that doctors think about patients in terms of diagnoses (and assign diagnoses that are sometimes inaccurate) rather than about the symptoms (or the patients) they are treating.  Managed care companies, too, force us to pigeonhole patients into a given diagnosis in order for them to pay for a medication.  Finally, pharmaceutical companies must conduct expensive, prolonged Phase III trials for each indication they wish to receive (driving up costs of all medications), and are subject to significant penalties when they even suggest that their drug might be used in a slightly different population.

Here is one way the drug approval process could be improved for all involved.  Rather than recruit a uniform population of subjects with a given diagnosis (which does not resemble the “real world” in any way), we could require drug companies to give the drug to a large number of subjects with a broad range of psychiatric conditions (as well as normal controls), perform a much more extensive battery of tests on each subject, release all the data, and then allow doctors to determine how to use the drug.

For instance, let’s say a company believes, on the basis of its research, that “olanzidone” might be an effective antipsychotic.  So they recruit several hundred subjects—some with schizophrenia, some with depression, some with bipolar disorder, some with a personality disorder, some with multiple disorders, and so on, and some with no psychiatric diagnosis at all—and subject them to a battery of baseline tests:  a physical exam; comprehensive laboratory measures; genetic screens; cognitive tests; personality tests; tests of anxiety, depression, OCD symptoms, panic symptoms, PTSD symptoms, and so on; as well as a full diagnostic clinical interview.  They administer olanzidone at a range of doses (determined to be safe on the basis of Phase I testing) and over a range of time periods, then perform the same battery of tests after the trial.  All results are then published and made available to clinicians.

The results might show that olanzidone is an effective antipsychotic, but only in patients with a concurrent mood disorder.  They might show that olanzidone worsens anxiety.  They might show that olanzidone causes weight gain, but only in patients with the HTR2C -759C/T polymorphism.  They might show that olanzidone worsens negative symptoms of psychosis, but improves cognitive abilities.  Get the picture?
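Pulling a subgroup finding like the weight-gain-by-genotype example out of such a pooled data set is, at its core, just a stratified analysis repeated over every test in the battery.  Here is a minimal sketch of the idea; the drug name, the genotype labels, the effect sizes, and the data are all invented for illustration:

```python
import random
import statistics

random.seed(0)

# Hypothetical post-trial records: one dict per subject, with genotype and
# one outcome drawn from the broad baseline/post-trial battery.  The
# "olanzidone" scenario and HTR2C -759C/T labels follow the example above.
subjects = []
for _ in range(200):
    genotype = random.choice(["-759C", "-759T"])
    # Invented effect: carriers of one allele gain more weight on the drug.
    base_gain = 4.0 if genotype == "-759C" else 0.5
    subjects.append({
        "genotype": genotype,
        "weight_change_kg": random.gauss(base_gain, 1.5),
    })

# Stratify the outcome by the covariate.  In the proposal above, this same
# step would be run for every outcome measure against every baseline test.
by_genotype = {}
for s in subjects:
    by_genotype.setdefault(s["genotype"], []).append(s["weight_change_kg"])

for genotype, changes in sorted(by_genotype.items()):
    print(f"{genotype}: mean weight change "
          f"{statistics.mean(changes):+.1f} kg (n={len(changes)})")
```

With all of the raw data released, any clinician or researcher could run this kind of stratification themselves, rather than relying on the handful of comparisons the sponsor chose to publish.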

It sounds, at first, like this alternative would be just as complex and time-consuming as the current way of doing things.  But I don’t think so.  For one thing, drug companies wouldn’t have to spend as much time and money finding the “perfect” subject population, and could test a drug’s safety profile in a diverse group of subjects.  Also, companies wouldn’t have to invest millions of R&D dollars to obtain each new indication.  Furthermore, they would be required to make all data public, preventing them from hiding data which don’t support a medication’s proposed indication.  Finally, this proposal would allow doctors to make medication decisions based on a much more extensive and accurate data set, rather than the information that is offered to them in glossy drug-company brochures.

The drawbacks?  We might end up with far more compounds on the market, some of questionable efficacy.  But drug companies would most likely invest their efforts in developing compounds that have some chance of improving what’s on the market (instead of just finding a new “niche” indication).  Drug companies may also fear the loss of market share or the costs of testing drugs on larger populations of patients.  But, in reality, this may actually create new markets for drugs and would obviate the need to push for new indications every few years.

This change would also make for more truthful (and informative) marketing material.  Instead of an ad proclaiming “Olanzidone newly approved for the treatment of schizophrenia!!” (which doesn’t mean very much, frankly), I might read an ad explaining “Olanzidone shows a 30% decrease in average PANSS score; no effect on mood symptoms; a significant improvement in executive function but not memory; a modest decrease in Beck Anxiety Inventory score; and a significant improvement in Pittsburgh Sleep Quality Index.”  Not quite as sexy, but certainly more helpful in my practice.

This will, of course, never happen, because there are simply too many vested interests in the status quo.  But now is the time to start thinking of ways to make the approval process more transparent to the public, and to help doctors (as well as patients and payers) make more informed decisions about the drugs we use.

How is an antidepressant an antidepressant?

January 14, 2011

I recently had dinner with a fellow psychiatrist who remarked that he doesn’t use “antidepressants” anymore.  Not that he doesn’t prescribe them, but he doesn’t use the word; he has become aware of how calling something an “antidepressant” implies that it’s something it (frequently) is not.  I’ve thought about his comment for a while now, and I’ve been asking myself, what exactly is an antidepressant anyway?

At the risk of sounding facetious (but trust me, that is not my intent!), an antidepressant can be defined as “anything that makes you feel less depressed.”  Sounds simple enough.  Of course, it only raises the question of what it means to be “depressed.” I’ll return to that point at another time, but I think we can all intuitively agree that there are a number of substances/medications/drugs/activities/people/places which can have an “antidepressant” effect.  Each of us has felt depressed at some point in our lives, and each of us has been lifted from that place by something different:  the receipt of some good news, the smile of a loved one, the exhilaration from some physical activity, the pleasure of a good movie or favorite song, the intoxication from a drug, the peace and clarity of meditation or prayer, and so on.

The critical reader (and the smug clinician) will correctly argue, those are simply things that make someone feel good; what about the treatment of clinical depression?  Indeed, one aspect of clinical depression is that activities that used to be pleasurable are no longer so. This distinction between “sadness” and “depression” (similar, but not identical to, the distinction between “exogenous” and “endogenous” depression) is an important one, so how do we, as mental health professionals, determine the best way to help a patient who asks us for it?

It’s not easy.  For one thing, the diagnostic criteria for clinical depression are broad enough (and may get even broader) that many patients who are experiencing “the blues” or are “stressed out” are diagnosed with depression, and are prescribed medications that do little, if anything.

So can we be more scientific?  Well, it would be intellectually satisfying to be able to say, “Clinical depression is characterized by a deficiency in compound X and the treatment replaces compound X,” much like we replace insulin in diabetes or we enhance dopamine in Parkinson’s disease.  Unfortunately, despite the oft-heard statement about “chemical imbalances,” there don’t appear to be any measurable imbalances.  The pretty pictures in the drug ads—and even in the scientific literature—show how (some) antidepressants increase levels of serotonin in the brain, but there’s not much evidence for this explanation for depression, as discussed in this review.  As the authors point out, saying that depression is a deficiency in serotonin because SSRIs help is like saying a headache is a deficiency in aspirin.

In fact, many “antidepressant” drugs affect different neurotransmitters, including norepinephrine and dopamine.  Additional medications that can benefit depression include mood stabilizers, stimulants, antipsychotics, glutamate antagonists, and thyroid hormone analogues.  Do you see a pattern?  I don’t.  Finally, there are still other interventions like electroconvulsive therapy (ECT), transcranial magnetic stimulation (TMS), vagal nerve stimulation (VNS), and others, that don’t directly affect neurotransmitters at all, but affect other structures and pathways in the brain that we’re just beginning to understand.

Each of these is a tested and “approved” therapy for depression (although the data are better for some interventions than for others), and for each intervention, there are indeed some patients who respond “miraculously.”  But there are also others who are not helped at all (and still others who are harmed); there’s little evidence to guide us in our treatment selection.

To a nonpsychiatrist, it all seems like a lot of hand-waving.  Oftentimes, it is.  But you would also think that psychiatrists, of all people, would be acutely aware that their emperor has no clothes.  Unfortunately, though, in my experience, they don’t.  With a few exceptions (like my dinner colleague, mentioned above), we psychiatrists buy the “chemical imbalance” theory and use it to guide our practice, even though it’s an inaccurate, decades-old map.  We can explain which receptors a drug is binding to, how quickly a drug is metabolized and eliminated from the body, even the target concentration of the drug in the bloodstream and cerebrospinal fluid.  The pop psychiatrist Stephen Stahl has created heuristic models of psychiatric drugs that encapsulate all these features, making prescription-writing as easy as painting by numbers.  But in the end, we still don’t know why these drugs do what they do.  (So it shouldn’t really surprise us, either, when the drugs don’t do what we want them to do.)

The great “promise” of the next era of psychiatry appears to be individualized care: in other words, performing genetic testing, imaging, or using other biological markers to predict treatment choices and improve outcomes.  Current efforts to employ such predictive techniques (like quantitative EEG) are costly, and give predictions that are not much better than chance.

Depression is indeed biological (as long as you agree that the brain has at least something to do with conscious thought, mood, and emotion!), but does it have recognizable chemical deficiencies or brain activation patterns that will respond in some predictable way to available therapies?  If so, then it bodes well for the future of our field.  But I’m afraid that too many psychiatrists are putting the cart before the horse, assuming that we know far more than we actually do, and suggesting treatments that “sound good,” but only according to a theoretical understanding of a disease that in no way reflects what’s really happening.

Unfortunately, all this attention on chemicals, receptors, and putative neural pathways takes the patient out of the equation.  Sometimes we forget that the nice meal, the good friend, the beautiful sunset, or the exhilarating hike can work far better than the prescription or the pill.

The Tucson massacre – preventable?

January 12, 2011

Last Saturday’s tragic events in Tucson have called attention to the behavior of Jared Loughner, the perpetrator, as well as the political climate in which this horrific event occurred.  While it is likely too early to determine to what degree, if any, Loughner’s political views may have motivated this unspeakable act, information has come to light regarding his unusual behavior and unorthodox opinions, raising questions regarding his mental state.

Without direct observation of Loughner and his behavior, it would be risky to posit a diagnosis at this time.  Even though details are emerging, I do not yet know whether Loughner had been diagnosed with an illness, or whether he was taking medications.  But if it is determined by forensic experts that Loughner did indeed suffer from a psychiatric illness, one question that is certain to arise is:  Can a person with an illness that gives rise to unconventional views and a potential for violence be forcibly treated, so that events like this can be prevented?

[Before answering this question, two things should be emphasized:  First, mental illness very rarely causes violent behavior; and as a consequence, the function of psychiatric medication is not to prevent violence (indeed, see my earlier post).  Antipsychotic drugs can, however, minimize delusional and paranoid thoughts, and improve a person’s ability to negotiate the difference between reality and fantasy, and some mood stabilizers and antidepressants may lessen impulsivity and aggression, but we cannot assume that medications could have prevented Loughner’s act.]

Several landmark cases addressing this very issue have said no; patients retain the right to refuse treatment.  Patients can, however, be involuntarily committed to a hospital, but only when immediate intervention is required to prevent death or serious harm to themselves or to another person, or to prevent deterioration of the patient’s clinical state.  In California, the relevant section of the law is section 5150 of the Welfare and Institutions Code.  This allows a law enforcement officer or a clinician to involuntarily confine a person to treatment for a 72-hour period.  The criteria for a 5150 hold require the presence of “symptoms of a mental disorder” prior to the hold.  (Thus, self-injurious behavior as a result of alcohol intoxication does not qualify a person for a legal hold.)  All states provide some comparable form of brief involuntary commitment for those suspected of danger to self or others, or grave disability, as a result of a mental illness.

Even after hospital admission, though, patients have the right to refuse medications.  Medications can only be given involuntarily if a court determines, based on evidence presented by doctors, that a patient lacks the capacity to give informed consent (in California, this process is called a Riese hearing).

But what about cases that are less acute?  If Loughner’s behavior arose from a psychotic disorder such as paranoid schizophrenia (and his behavior does indeed have hallmarks of such a diagnosis) but was not severe enough to require hospitalization, one might argue that adherence to an antipsychotic regimen may have prevented the extreme behavior we saw on Saturday.

He still could have refused.  A number of court decisions (discussed here) have established and affirmed this right.  Recent exceptions include Kendra’s Law in New York and Laura’s Law in California.  Kendra’s Law, enacted in 1999, allows courts to order seriously mentally ill individuals to accept treatment as a condition for living in the community.  It was originally designed to target those with a history of repeat hospitalizations that resulted from nonadherence to medications.  Patients can be ordered into assisted outpatient treatment if they are “unlikely to survive safely in the community without supervision” and have demonstrated either (a) acts of serious violent behavior toward self or others, or (b) at least two hospitalizations within the last 3 years, resulting from nonadherence to a treatment regimen.  Laura’s Law was signed into law in 2002 in California, although as of 2010 only two California counties have implemented it.  Studies reviewing the effects of these laws have found that patients in assisted outpatient treatment had fewer hospitalizations, fewer arrests and incarcerations, and were less likely to be homeless or to abuse alcohol or drugs.
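The two-prong standard described above can be read as a simple decision rule: the “survival” prong must hold, plus at least one of the two alternative history prongs.  Here is a sketch of that logic; the field and function names are my own shorthand, not statutory language, and real eligibility determinations of course involve far more judgment than booleans:

```python
from dataclasses import dataclass

@dataclass
class PatientHistory:
    # Shorthand for the Kendra's Law criteria quoted above, not legal terms.
    unlikely_to_survive_unsupervised: bool        # "survival" prong
    serious_violence_to_self_or_others: bool      # prong (a)
    hospitalizations_last_3_years_nonadherence: int  # prong (b)

def eligible_for_aot(p: PatientHistory) -> bool:
    """Rough reading of the two-prong assisted-outpatient-treatment test."""
    history_prong = (p.serious_violence_to_self_or_others
                     or p.hospitalizations_last_3_years_nonadherence >= 2)
    return p.unlikely_to_survive_unsupervised and history_prong

# Repeat hospitalizations from nonadherence, without violence, can qualify.
print(eligible_for_aot(PatientHistory(True, False, 2)))  # True
# But the "survival" prong is always required.
print(eligible_for_aot(PatientHistory(False, True, 0)))  # False
```

The structure makes the statute’s design visible: a dangerousness-or-revolving-door history alone is not enough without the finding about survival in the community, which is exactly the clause whose meaning, as noted below, is open to debate.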

If Loughner had made threats of violence while engaged in treatment, another related decision, the Tarasoff duty, could have been invoked.  In the 1976 case of Tarasoff vs Regents of University of California (and a second ruling in 1982), it was determined that a physician or therapist who has reason to believe that a patient may injure or kill someone must warn the potential victim (the 1982 ruling broadened the decision to include the duty to protect, as well).  Thus, if a patient makes a threat against another person—and the clinician perceives it to be credible—he or she must warn the targeted individual, notify law enforcement, or take any other steps that are “reasonably necessary.”

Clearly, there is a great deal of uncertainty and latitude in the above cases.  While the Tarasoff duty is clearly designed to prevent danger to others, it may potentially destroy the trust between doctor and patient and therefore hinder treatment; similarly, it is often difficult to determine whether a patient’s threats are credible.  In the case of a patient like Loughner, would antigovernment rhetoric prompt a warning?  What about threats to “politicians” in general?  The clinician’s responsibility is not always clear.

With regard to Kendra’s and Laura’s Laws, the meaning of “survival in the community” can be debated, and it is often arguable whether compliance with medications would prevent hospitalization.  Opponents argue that the best solution is more widespread (and more effective) voluntary outpatient treatment, rather than forced treatment.

As more information on this case comes to light, these issues are certain to be discussed and debated.  We must not rush to judgment, however, regarding motives and explanations for Loughner’s behavior and the steps we could take (or could have taken) to prevent it.
