What Adderall Can Teach Us About Medical Marijuana

June 19, 2012

An article in the New York Times last week described the increasing use of stimulant medications such as Adderall and Ritalin among high-school students.  Titled “The Risky Rise of the Good-Grade Pill,” the article discussed how 15 to 40 percent of students, competing for straight A’s and spots in elite colleges, use stimulants for an extra “edge,” regardless of whether they actually have ADHD.  I’ve written about ADHD on this blog before.  It’s a real condition, and medications can help tremendously, but the diagnostic criteria are quite vague.  As with much in psychiatry, anyone “saying the right thing” can get one of these drugs relatively easily, whether they need it or not.

Sure enough, the number of prescriptions for these drugs has risen 26% since 2007.  Does this mean that ADHD is now 26% more prevalent?  No.  In the Times article, some students admitted they “lie to [their] psychiatrists” in order to “get something good.”  In fact, some students “laughed at the ease with which they got some doctors to write prescriptions for ADHD.”  With no objective test (some computerized tests exist but are neither widely used nor well validated, and brain scans are similarly unproven) and with the diagnostic criteria readily accessible on the internet, anyone who wants a stimulant can basically get one.  And while psychiatric diagnosis is often an imperfect science, in many settings the methodology by which we assess and diagnose ADHD is particularly crude.

Many of my colleagues will disagree with (or hate) me for saying so, but in some sense, the prescription of stimulants has become just like any other type of cosmetic medicine.  Plastic surgeons and dermatologists, for instance, are trained to perform medically necessary procedures, but they often find that “cosmetic” procedures like facelifts and Botox injections are more lucrative.  Similarly, psychiatrists can build successful practices catering to ultra-competitive teens (and their parents) by giving out stimulants.  Who cares if there’s no real disease?  Psychiatry is all about enhancing patients’ lives, isn’t it?  As another blogger wrote last week, some respectable physicians have even argued that “anyone and everyone should have access to drugs that improve performance.”

When I think about “performance enhancement” in this manner, I can’t help but think about the controversy over medical marijuana.  This is another topic I’ve written about, mainly to question the “medical” label on something that is neither routinely accepted nor endorsed by the medical profession.  Proponents of medical cannabis, I wrote, have co-opted the “medical” label in order for patients to obtain an abusable psychoactive substance legally, under the guise of receiving “treatment.”

How is this different from the prescription of psychostimulants for ADHD?  The short answer is, it’s not.  If my fellow psychiatrists and I prescribe psychostimulants (which are abusable psychoactive substances in their own right, as described in the pages of the NYT) on the basis of simple patient complaints—and continue to do so simply because a patient reports a subjective benefit—then this isn’t very different from a medical marijuana provider writing a prescription (or “recommendation”) for medical cannabis.  In both cases, the conditions being treated are ill-defined (yes, in the case of ADHD, it’s detailed in the DSM, which gives it a certain validity, but that’s not saying much).  In both cases, the conditions affect patients’ quality of life but are rarely, if ever, life-threatening.  In both cases, psychoactive drugs are prescribed which could be abused but which most patients actually use quite responsibly.  Last but not least, in both cases, patients generally do well; they report satisfaction with treatment and often come back for more.

In fact, taken one step further, this analogy may turn out to be an argument in favor of medical marijuana.  As proponents of cannabis are all too eager to point out, marijuana is a natural substance, humans have used it for thousands of years, and it’s arguably safer than other abusable (but legal) substances like nicotine and alcohol.  Psychostimulants, on the other hand, are synthetic chemicals (not without adverse effects) and have been described as “gateway drugs” to more or less the same degree as marijuana.  Why one is legal and the other is not appears to be due simply to the psychiatric profession’s “seal of approval” on one but not the other.

If the psychiatric profession is gradually moving away from the assessment, diagnosis, and treatment of severe mental illness and, instead, treating “lifestyle” problems with drugs that could easily be abused, then I really don’t have a good argument for denying cannabis to patients who insist it helps their anxiety, insomnia, depression, or chronic pain.

Perhaps we should ask physicians to take a more rigorous approach to ADHD diagnosis, demanding interviews with parents and teachers, extensive neuropsychiatric testing, and (perhaps) neuroimaging before offering a script.  But in a world in which doctors’ reimbursements are dwindling and the time devoted to patient care is vanishing (not to mention a patient culture that demands a quick fix for the stresses of modern adolescence), it doesn’t surprise me one bit that some doctors cut corners and prescribe without a thorough workup, in much the same way that marijuana is provided in states where it’s legal.  If the loudest protests against such a practice come not from our leadership but from the pages of the New York Times, we have only ourselves to blame when things really get out of hand.


The Evidence of the Anecdote

June 8, 2012

The foundation of medical decision-making is “evidence-based medicine.”  As most readers know, this is the effort to use the best available evidence, gathered through the scientific method, to make decisions and recommendations about how to treat individual patients.  Evidence is typically graded on four levels.  Level 1 represents high-quality evidence, usually the results of randomized clinical trials, while level 4 typically consists of case studies, uncontrolled observations, and anecdotal reports.

Clinical guidelines and drug approvals typically rely more heavily (or even exclusively) on level-1 evidence.  Such evidence is thought to be more valid, more authoritative, and less affected by variations among individuals.  For example, knowing that an antidepressant works (i.e., that it shows a “statistically significant effect” vs placebo) in a large, controlled trial is more convincing to the average prescriber than knowing that it worked for a single depressed guy in Peoria.
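For readers who like to see the arithmetic, here is a minimal sketch in Python, with entirely invented numbers (not data from any real trial), of what a “statistically significant effect vs placebo” amounts to: simulate improvement scores for a drug arm and a placebo arm, then compare the group means with a t-test.

```python
# Minimal sketch of a drug-vs-placebo comparison. All numbers are invented
# for illustration; this is not data from any real trial.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical improvements on a symptom rating scale (higher = better)
drug = rng.normal(loc=8.0, scale=6.0, size=150)      # drug arm
placebo = rng.normal(loc=6.0, scale=6.0, size=150)   # placebo arm

t, p = stats.ttest_ind(drug, placebo)
print(f"mean difference = {drug.mean() - placebo.mean():.1f} points, p = {p:.3f}")

# With 150 patients per arm, even a ~2-point average difference can come
# out "significant" -- which says little about how much the drug helped
# (or didn't help) any single patient.
```

That, in a nutshell, is level-1 evidence: a difference in group means, certified by a p-value.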

But is it, really?  Not always (especially if you’re the one treating that depressed guy in Peoria).  Clinical trials can be misleading, even if their results are “significant.”  As most readers know, some investigators, after analyzing data from large industry-funded clinical trials, have concluded that antidepressants may not be effective at all—a story that has received extensive media coverage.  But lots of individuals insist that they do work, based on personal experience.  One such depression sufferer—who benefited greatly from antidepressants—wrote a recent post on the Atlantic Online, quoting Peter Kramer: “to give the impression that [antidepressants] are placebos is to cause needless suffering,” because many people do benefit from them.  Jonathan Leo, on the other hand, argues that this is a patently anti-scientific stance.  In a post this week on the website Mad In America, Leo points out (correctly) that there are people out there who will offer recommendations and anecdotes in support of just about anything.  That doesn’t mean those things work.

Both sides make some very good points.  We just need to find a way to reconcile them—i.e., to make the “science” more reflective of real-world cases, and use the wisdom of individual cases to influence our practice in a more scientifically valid way.  This is much easier said than done.

While psychiatrists often refer to the “art” of psychopharmacology, make no mistake:  they (we) take great pride in the fact that it’s supposedly grounded in hard science.  Drug doses, mechanisms, metabolites, serum levels, binding coefficients, polymorphisms, biomarkers, quantitative outcome measures—these are the calling cards of scientific investigation.  But when medications don’t work as planned (which is often), we improvise, and when we do, we quickly enter the world of personal experience and anecdote.  In fact, in the absence of objective disease markers (which we may never find, frankly), psychiatric treatment is built almost exclusively on anecdotes.  When a patient says a drug “worked” in some way that the data don’t support, or they experience a side effect that’s not listed in the PDR, that becomes the truth, and it happens far more frequently than we like to admit.

It’s even more apparent in psychotherapy.  When a therapist asks a question like “What went through your mind when that woman rejected you?,” the number of possible responses is infinite, unlike a serum lithium level or a blood pressure reading.  A good therapist follows the patient’s story and tailors treatment to the individual case (and only loosely to some theory or therapeutic modality).  The “proof” is the outcome with that particular patient.  Sure, the “N” is only 1, but it’s the only one that counts.

Is there any way to make the science look more like the anecdotal evidence we actually see in practice?  I think not.  Most of us don’t even stop to think about how UN-convincing the “evidence” truly is.  In his book Pharmageddon, David Healy describes the example of the parachute:  no one needs to do a randomized, controlled trial to show that a parachute works.  It just does.   By comparison, to show that antidepressants “work,” drug companies must perform large, expensive trials (and often multiple trials at that) and even then, prove their results through statistical measures or clever trial designs.  Given this complexity, it’s a wonder that we believe clinical trials at all.

On the other side of the coin, there’s really no way to subject the anecdotal report, or case study, to the scientific method.  By definition, including more patients and controls (i.e., increasing the “N”) automatically introduces heterogeneity.  Whatever factor(s) led a particular patient to respond to Paxil “overnight” or to develop a harsh cough on Abilify are probably unique to that individual.
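To see why pooling washes things out, consider a toy simulation (invented numbers again): suppose only 20 percent of patients are true responders to a drug.  Pool everyone together and the dramatic individual effect nearly disappears into a modest group average.

```python
# Toy illustration of heterogeneity: 20% of patients respond strongly,
# the rest not at all. All parameters are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 200
responder = rng.random(n) < 0.2                  # ~20% true responders
effect = np.where(responder, 10.0, 0.0)          # points of improvement
improvement = effect + rng.normal(0.0, 3.0, n)   # add individual noise

print(f"group mean improvement:       {improvement.mean():.1f} points")
print(f"responders' mean improvement: {improvement[responder].mean():.1f} points")

# The pooled N-of-200 average (~2 points) hides the responders' ~10-point
# effect -- the very effect the anecdote is reporting.
```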

But maybe we can start looking at anecdotes through a scientific lens.  When we observe a particular response or effect, we ought to look not just at the most obvious cause (e.g., a new medication) but at the context in which it occurred, and entertain any and all alternative hypotheses.  Similarly, when planning treatment, we need to think not just about FDA-approved drugs, but also patient expectations, treatment setting, home environment, costs, other comorbidities, the availability of alternative therapies, and other data points or “independent variables.”  To use a crude but common analogy, it is indeed true that every person becomes his or her own laboratory, and should be viewed as such.
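Here is what that might look like in practice, sketched as a hypothetical N-of-1 log (all values invented): track one patient’s outcome week by week, on and off the medication, alongside the context variables that might otherwise explain the change.

```python
# Hypothetical N-of-1 log for a single patient. Each row is one week;
# "med" marks whether the medication was taken. All values are invented.
weeks = [
    {"med": False, "sleep_hrs": 5, "stress": "high", "mood": 3},
    {"med": False, "sleep_hrs": 6, "stress": "high", "mood": 4},
    {"med": True,  "sleep_hrs": 7, "stress": "low",  "mood": 6},
    {"med": True,  "sleep_hrs": 7, "stress": "low",  "mood": 7},
    {"med": False, "sleep_hrs": 6, "stress": "low",  "mood": 6},
    {"med": True,  "sleep_hrs": 5, "stress": "high", "mood": 5},
]

def mean_mood(rows):
    return sum(r["mood"] for r in rows) / len(rows)

on_med = [w for w in weeks if w["med"]]
off_med = [w for w in weeks if not w["med"]]
print(f"mean mood on med: {mean_mood(on_med):.1f}, off med: {mean_mood(off_med):.1f}")

# Before crediting the drug, check the context: in this toy log the on-med
# weeks also had better sleep and lower stress -- exactly the alternative
# hypotheses we ought to entertain.
```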

The more we look at patients this way, the further we get from clinical trials, and the less relevant those trials become.  This is unfortunate, because—for better or for worse (I would vote for “worse”)—clinical trials have become the cornerstone of evidence-based psychiatry.  But a re-emphasis on anecdotes and individual cases is important.  Because in the end, it’s the individual who counts.  Each patient is an N of 1, not an average drawn from an N of 200, and that’s probably the most important evidence we need to keep in mind.