Kids gaming pathologically

January 19, 2011

Today’s New York Times “Well” blog shares the results of a recent study suggesting that video games may contribute to depression in teenagers.  Briefly, the study found that grade-school and middle-school students who were “more impulsive and less comfortable with other children” spent more time playing video games than other teens.  Two years later, these same students were more likely to suffer from depression, anxiety, and social phobias.  The authors are careful to say that there’s no evidence the games caused depression, but there’s a strong correlation.

I pulled up the original article, and the authors’ objectives were to “measure the prevalence…of pathological video gaming, …to identify risk and protective factors, …and to identify outcomes for individuals who become pathological gamers.”  They didn’t use the word “addiction” in their paper (well, actually, they did, but they put it in quotes), but of course the take-home message from the NY Times story is quite clear:  kids can be addicted to video game playing, and this could lead to depression.

As with any extreme activity, I would not be surprised to learn that there are some kids who play games compulsively, who sacrifice food, sleep, hygiene, and other responsibilities for long periods of time.  But to use words like ‘addiction’ – or even the less loaded and more clinical-sounding ‘pathological gaming’ – risks labeling a potentially harmless behavior as a problem, and may have little to do with the underlying motives.

What’s so pathological, anyway, about pathological gaming?  Is the kid who plays video games for 30 hours a week playing more “pathologically” than the one who plays for only 10?  Does the kid who gets plenty of fresh air, is active in extracurriculars, and has lots of friends face a more promising future than the one who would prefer to sit at home on the Xbox 360 and sometimes forgets to do his homework?  Which friends are more valuable in life—the Facebook friends or the “real” friends?  We know the intuitive answers to these questions, but where are the data to back up these assumptions?

The behavior itself is not the most important factor.  I know some “workaholics” who work 80-plus-hour weeks; they are absolutely committed to their work but they also have rich, fulfilling personal lives and are extremely well-adjusted.  I’ve also met some substance abusers who have never been arrested, never lost a job, and who seem to control their use (they often describe themselves as “functional” addicts) but who nonetheless have all the psychological and emotional hallmarks of a hard-core addict and desperately need rehabilitation.


I have no problem with researchers looking at a widespread activity like video game playing and asking whether it is changing how kids socialize, or whether it may affect learning styles or family dynamics.  But when we take an activity that some kids do “a lot” and label it as pathological or an “addiction,” without defining what those terms mean, or asking what benefit these kids might derive from it, we are, at best, imposing our own standards of acceptable behavior on a generation that sees things much differently, or, at worst, creating a whole new generation of addicts that we now must treat.


FDA approval of psych meds – an alternative

January 17, 2011

The FDA’s approval process for new psychiatric drugs is broken.  It is time-consuming and costly, and benefits no one—patients, physicians, pharmaceutical companies, managed care organizations, or other payers.

To bring a new compound to market, pharmaceutical companies and academic labs invest years (and millions of dollars) in basic research.  When a compound appears promising, it enters “Phase I” testing, to assess the drug’s basic properties and safety profile in healthy human subjects; this phase may take one to two years.  If successful, the drug enters “Phase II” testing, which measures responses to the drug in a small target population of patients.  After this step comes “Phase III” testing, usually the most expensive and prolonged phase, in which the drug is tested (usually against a placebo) to determine its safety and efficacy for a given indication.  This may take many more years, and many more millions of dollars, to complete.

For psychiatric drugs, this process is somewhat of an anachronism.  There is extensive overlap among psychiatric diagnoses (and the changes on the horizon with DSM-5 won’t make things any clearer), so it makes little sense to focus on a drug’s efficacy for a single indication (e.g., generalized anxiety disorder) when it could prove quite helpful in another (e.g., major depression).  The end result is that doctors think about patients in terms of diagnoses (and assign diagnoses that are sometimes inaccurate) rather than about the symptoms (or the patients) they are treating.  Managed care companies, too, force us to pigeonhole patients into a given diagnosis in order for them to pay for a medication.  Finally, pharmaceutical companies must conduct expensive, prolonged Phase III trials for each indication they wish to receive (driving up costs of all medications), and are subject to significant penalties when they even suggest that their drug might be used in a slightly different population.

Here is one way the drug approval process could be improved for all involved.  Rather than recruit a uniform population of subjects with a given diagnosis (which does not resemble the “real world” in any way), we could require drug companies to test the drug in a large number of subjects with a broad range of psychiatric conditions (as well as normal controls), perform a much more extensive battery of tests on each subject, release all the data, and then allow doctors to determine how to use the drugs.

For instance, let’s say a company believes, on the basis of its research, that “olanzidone” might be an effective antipsychotic.  So they recruit several hundred subjects—some with schizophrenia, some with depression, some with bipolar disorder, some with a personality disorder, some with multiple disorders, and so on, and some with no psychiatric diagnosis at all—and subject them to a battery of baseline tests:  a physical exam; comprehensive laboratory measures; genetic screens; cognitive tests; personality tests; tests of anxiety, depression, OCD symptoms, panic symptoms, PTSD symptoms, and so on; as well as a full diagnostic clinical interview.  They administer olanzidone at a range of doses (determined to be safe on the basis of phase I testing) and over a range of time periods, then perform the same battery of tests after the trial.  All results are then published and made available to clinicians.

The results might show that olanzidone is an effective antipsychotic, but only in patients with a concurrent mood disorder.  They might show that olanzidone worsens anxiety.  They might show that olanzidone causes weight gain, but only in patients with the HTR2C -759C/T polymorphism.  They might show that olanzidone worsens negative symptoms of psychosis, but improves cognitive abilities.  Get the picture?

It sounds, at first, like this alternative would be just as complex and time-consuming as the current way of doing things.  But I don’t think so.  For one thing, drug companies wouldn’t have to spend as much time and money finding the “perfect” subject population, and could test a drug’s safety profile in a diverse group of subjects.  Also, companies wouldn’t have to invest millions of R&D dollars to obtain each new indication.  Furthermore, they would be required to make all data public, preventing them from hiding data which don’t support a medication’s proposed indication.  Finally, this proposal would allow doctors to make medication decisions based on a much more extensive and accurate data set, rather than the information that is offered to them in glossy drug-company brochures.

The drawbacks?  We might end up with far more compounds on the market, some of questionable efficacy.  But drug companies would most likely invest their efforts in developing compounds that have some chance of improving what’s on the market (instead of just finding a new “niche” indication).  Drug companies may also fear the loss of market share or the costs of testing drugs on larger populations of patients.  But, in reality, this may actually create new markets for drugs and would obviate the need to push for new indications every few years.

This change would also make for more truthful (and informative) marketing material.  Instead of an ad proclaiming “Olanzidone newly approved for the treatment of schizophrenia!!” (which doesn’t mean very much, frankly), I might read an ad explaining “Olanzidone shows a 30% decrease in average PANSS score; no effect on mood symptoms; a significant improvement in executive function but not memory; a modest decrease in Beck Anxiety Inventory score; and a significant improvement in Pittsburgh Sleep Quality Index.”  Not quite as sexy, but certainly more helpful in my practice.

This will, of course, never happen, because there are simply too many vested interests in the status quo.  But now is the time to start thinking of ways to make the approval process more transparent to the public, and to help doctors (as well as patients and payers) make more informed decisions about the drugs we use.


How is an antidepressant an antidepressant?

January 14, 2011

I recently had dinner with a fellow psychiatrist who remarked that he doesn’t use “antidepressants” anymore.  Not that he doesn’t prescribe them, but he doesn’t use the word; he has become aware of how calling something an “antidepressant” implies that it’s something it (frequently) is not.  I’ve thought about his comment for a while now, and I’ve been asking myself, what exactly is an antidepressant anyway?

At the risk of sounding facetious (but trust me, that is not my intent!), an antidepressant can be defined as “anything that makes you feel less depressed.”  Sounds simple enough.  Of course, it raises the question of what it means to be “depressed.”  I’ll return to that point at another time, but I think we can all intuitively agree that there are a number of substances/medications/drugs/activities/people/places which can have an “antidepressant” effect.  Each of us has felt depressed at some point in our lives, and each of us has been lifted from that place by something different:  the receipt of some good news, the smile of a loved one, the exhilaration from some physical activity, the pleasure of a good movie or favorite song, the intoxication from a drug, the peace and clarity of meditation or prayer, and so on.

The critical reader (and the smug clinician) will correctly argue:  those are simply things that make someone feel good; what about the treatment of clinical depression?  Indeed, one aspect of clinical depression is that activities that used to be pleasurable are no longer so.  This distinction between “sadness” and “depression” (similar, but not identical to, the distinction between “exogenous” and “endogenous” depression) is an important one, so how do we as mental health professionals determine the best way to help a patient who asks us for help?

It’s not easy.  For one thing, the diagnostic criteria for clinical depression are broad enough (and may get even more broad) that many patients who are experiencing “the blues” or are “stressed out” are diagnosed with depression, and are prescribed medications that do little, if anything.

So can we be more scientific?  Well, it would be intellectually satisfying to be able to say, “Clinical depression is characterized by a deficiency in compound X and the treatment replaces compound X,” much like we replace insulin in diabetes or we enhance dopamine in Parkinson’s disease.  Unfortunately, despite the oft-heard statement about “chemical imbalances,” there don’t appear to be any measurable imbalances.  The pretty pictures in the drug ads— and even in the scientific literature—show how (some) antidepressants increase levels of serotonin in the brain, but there’s not much evidence for this explanation for depression, as discussed in this review.  As the authors point out, saying depression is a deficiency in serotonin because SSRIs help, is like saying a headache is a deficiency in aspirin.

In fact, many “antidepressant” drugs affect different neurotransmitters, including norepinephrine and dopamine.  Additional medications that can benefit depression include mood stabilizers, stimulants, antipsychotics, glutamate antagonists, and thyroid hormone analogues.  Do you see a pattern?  I don’t.  Finally, there are still other interventions like electroconvulsive therapy (ECT), transcranial magnetic stimulation (TMS), vagal nerve stimulation (VNS), and others, that don’t directly affect neurotransmitters at all, but affect other structures and pathways in the brain that we’re just beginning to understand.

Each of these is a tested and “approved” therapy for depression (although the data are better for some interventions than for others), and for each intervention, there are indeed some patients who respond “miraculously.”  But there are also others who are not helped at all (and still others who are harmed); there’s little evidence to guide us in our treatment selection.

To a nonpsychiatrist, it all seems like a lot of hand-waving.  Oftentimes, it is.  But you would also think that psychiatrists, of all people, would be acutely aware that their emperor has no clothes.  Unfortunately, though, in my experience, they don’t.  With a few exceptions (like my dinner colleague, mentioned above), we psychiatrists buy the “chemical imbalance” theory and use it to guide our practice, even though it’s an inaccurate, decades-old map.  We can explain which receptors a drug is binding to, how quickly a drug is metabolized & eliminated from the body, even the target concentration of the drug in the bloodstream and cerebrospinal fluid.  The pop psychiatrist Stephen Stahl has created heuristic models of psychiatric drugs that encapsulate all these features, making prescription-writing as easy as painting by numbers.  But in the end, we still don’t know why these drugs do what they do.  (So it shouldn’t really surprise us, either, when the drugs don’t do what we want them to do.)

The great “promise” of the next era of psychiatry appears to be individualized care– in other words, performing genetic testing, imaging, or using other biological markers to predict treatment choices and improve outcomes.  Current efforts to employ such predictive techniques (like quantitative EEG) are costly, and give predictions that are not much better than chance.

Depression is indeed biological (as long as you agree that the brain has at least something to do with conscious thought, mood, and emotion!), but does it have recognizable chemical deficiencies or brain activation patterns that will respond in some predictable way to available therapies?  If so, then it bodes well for the future of our field.  But I’m afraid that too many psychiatrists are putting the cart before the horse, assuming that we know far more than we actually do, and suggesting treatments that “sound good,” but only according to a theoretical understanding of a disease that in no way reflects what’s really happening.

Unfortunately, all this attention on chemicals, receptors, and putative neural pathways takes the patient out of the equation.  Sometimes we forget that the nice meal, the good friend, the beautiful sunset, or the exhilarating hike can work far better than the prescription or the pill.


The Tucson massacre – preventable?

January 12, 2011

Last Saturday’s tragic events in Tucson have called attention to the behavior of Jared Loughner, the perpetrator, as well as the political climate in which this horrific event occurred.  While it is likely too early to determine to what degree, if any, Loughner’s political views may have motivated this unspeakable act, information has come to light regarding his unusual behavior and unorthodox opinions, raising questions regarding his mental state.

Without direct observation of Loughner and his behavior, it would be risky to posit a diagnosis at this time.  Even though details are emerging, I do not yet know whether Loughner had been diagnosed with an illness, or whether he was taking medications.  But if it is determined by forensic experts that Loughner did indeed suffer from a psychiatric illness, one question that is certain to arise is:  Can a person with an illness that gives rise to unconventional views and a potential for violence be forcibly treated, so that events like this can be prevented?

[Before answering this question, two things should be emphasized:  First, mental illness very rarely causes violent behavior; as a consequence, the function of psychiatric medication is not to prevent violence (indeed, see my earlier post).  Antipsychotic drugs can, however, minimize delusional and paranoid thoughts and improve a person’s ability to distinguish reality from fantasy, and some mood stabilizers and antidepressants may lessen impulsivity and aggression.  But we cannot assume that medications could have prevented Loughner’s act.]

Several landmark cases addressing this very issue have said no; patients retain the right to refuse treatment.  Patients can, however, be involuntarily committed to a hospital, but only when immediate intervention is required to prevent death or serious harm to themselves or to another person, or to prevent deterioration of the patient’s clinical state.  In California, the relevant section of the law is section 5150 of the Welfare and Institutions Code.  This allows a law enforcement officer or a clinician to involuntarily confine a person to treatment for a 72-hour period.  The criteria for a 5150 hold require the presence of “symptoms of a mental disorder” prior to the hold.  (Thus, self-injurious behavior as a result of alcohol intoxication does not qualify a person for a legal hold.)  All states provide some comparable form of brief involuntary commitment for those suspected of danger to self or others, or grave disability, as a result of a mental illness.

Even after hospital admission, though, patients have the right to refuse medications.  Medications can only be given involuntarily if a court determines, based on evidence presented by doctors, that a patient lacks the capacity to give informed consent (in California, this process is called a Riese hearing).

But what about cases that are less acute?  If Loughner’s behavior arose from a psychotic disorder such as paranoid schizophrenia (and his behavior does indeed have hallmarks of such a diagnosis) but was not severe enough to require hospitalization, one might argue that adherence to an antipsychotic regimen might have prevented the extreme behavior we saw on Saturday.

He still could have refused.  A number of court decisions (discussed here) have established and affirmed this right.  Recent exceptions include Kendra’s Law in New York and Laura’s Law in California.  Kendra’s Law, enacted in 1999, allows courts to order seriously mentally ill individuals to accept treatment as a condition for living in the community.  It was originally designed to target those with a history of repeat hospitalizations that resulted from nonadherence to medications.  Patients can be ordered into assisted outpatient treatment if they are “unlikely to survive safely in the community without supervision” and have demonstrated either (a) acts of serious violent behavior toward self or others, or (b) at least two hospitalizations within the last 3 years resulting from nonadherence to a treatment regimen.  Laura’s Law was signed into law in 2002 in California, although as of 2010 only two California counties have implemented it.  Studies reviewing the effects of these laws have found that patients in assisted outpatient treatment had fewer hospitalizations, fewer arrests and incarcerations, and were less likely to be homeless or to abuse alcohol or drugs.

If Loughner had made threats of violence while engaged in treatment, another related decision, the Tarasoff duty, could have been invoked.  In the 1976 case of Tarasoff v. Regents of the University of California (and a second ruling in 1982), it was determined that a physician or therapist who has reason to believe that a patient may injure or kill someone must warn the potential victim (the 1982 ruling broadened the decision to include the duty to protect, as well).  Thus, if a patient makes a threat against another person—and the clinician perceives it to be credible—he or she must warn the targeted individual, notify law enforcement, or take any other steps that are “reasonably necessary.”

Clearly, there is a great deal of uncertainty and latitude in the above cases.  While the Tarasoff duty is clearly designed to prevent danger to others, it may potentially destroy the trust between doctor and patient and therefore hinder treatment; similarly, it is often difficult to determine whether a patient’s threats are credible.  In the case of a patient like Loughner, would antigovernment rhetoric prompt a warning?  What about threats to “politicians” in general?  The clinician’s responsibility is not always clear.

With regard to Kendra’s and Laura’s Laws, the meaning of “survival in the community” can be debated, and it is often arguable whether compliance with medications would prevent hospitalization.  Opponents argue that the best solution is more widespread (and more effective) voluntary outpatient treatment, rather than forced treatment.

As more information on this case comes to light, these issues are certain to be discussed and debated.  We must not rush to judgment, however, regarding motives and explanations for Loughner’s behavior and the steps we could take (or could have taken) to prevent it.


Medical marijuana and psychiatry

January 9, 2011

Is marijuana really medicine?  I’m not arguing against the potential for marijuana to treat illness, nor do I mean to imply that marijuana is simply a recreational drug that has no place in medicine.  Instead, I simply wish to point out how the “medical” label, I feel, has been misused and co-opted in a way that reveals what “medicine” really is (and is not).

Let me state, for the record, that I have no position on medical marijuana.  I practice in California, a state in which it is legal to use marijuana for medicinal purposes.  Even though I do not prescribe it, I do not judge those who do, nor those who use it.  I agree that it can be helpful in a wide range of illnesses– sometimes even in the place of established medicines.  It is unfortunate that controlled studies on THC and other cannabinoid compounds– studies that could lead to new therapies– have not been performed.

Medical care usually follows a well-established outline: a patient with a complaint undergoes an examination by a provider; a diagnosis is determined; potential courses of treatment are evaluated; and the optimal treatment is prescribed.  Afterward, the patient follows up with the provider to determine the efficacy of treatment, any potential side effects, and interactions with other medications or therapies.  The frequency of follow-up is determined by the severity of the illness, and therapy is discontinued after it is no longer necessary, or becomes detrimental to the patient.

Unfortunately, none of this describes how medical marijuana is practiced.  Any patient can undergo an examination; the vast majority of such patients have already been using marijuana and explain that they find it helpful, and the provider issues a card stating that they “advise” the use of medical marijuana.  Not a prescription, but a card – which permits the patient to buy virtually any amount of any type of cannabis desired.  Follow-up visits are typically yearly, not to evaluate response to treatment, but to issue a new card.

As a psychiatrist, I frequently see patients who tell me they have been prescribed marijuana for “anxiety” or “depression.”  Often, my evaluation confirms that they do indeed suffer from, say, a clinically relevant anxiety disorder or major depression.  However, when I know they are using another chemical to treat their symptoms (whether cannabis, alcohol, or a medication prescribed by another physician) it becomes my responsibility to determine whether it will interfere with treatment.  In most cases, it also makes sense to collaborate with the other provider to develop a treatment plan, much as a cardiologist might collaborate with a family physician to manage a patient’s coronary artery disease.  [Sometimes the treatment plan might be to continue marijuana because I believe psychiatric meds simply won’t have any effect.]

But efforts to communicate with marijuana prescribers often fail (and when I have been successful in communicating with such a prescriber, they’re usually surprised that I made the effort!).  Similarly, if I suggest to a patient that he or she consult with the marijuana prescriber to find a strain, or a delivery method, or a dosing interval, that would provide the best symptom relief, or the least interaction with conventional medications– they often react with shock.  “But I only see him once a year,” is the answer I receive.

Often I say to myself something like, “well, marijuana helps him, so I’ll let him continue using it; I’ll just ‘work around it’ unless it becomes a problem.”  The patient usually tells me that he wants to continue using marijuana “as needed,” but he also wishes to continue in treatment with me, taking the medications I prescribe and following through with any treatment I suggest.

It leads to an uncomfortable compartmentalization of care, in which I feel that I’m practicing “real” medicine, while simultaneously condoning his use of another substance, even though neither of us knows the true chemical content of this substance, doses might vary from day to day, and some might be shared with friends.  To top it all off, patients frequently report a greater response to marijuana than what I prescribe, and yet I ignore it?  This is not the way I was trained to practice medicine, and yet I do it almost every day.

The approval of “medical marijuana” has been, I believe, a successful campaign by proponents of marijuana legalization to take advantage of the fragmented and confused health care system to create a de facto social sanction of marijuana use, rather than (a) introducing it as a true “medicine” through the proper and accepted channels (clinical trials, FDA approval, etc) or (b) decriminalizing it into a legal drug, much like alcohol.  I can see the arguments in favor of either approach, but the “medical” label unfortunately undermines what we actually try to do in medicine.

On the other hand, if it works, maybe we ought to take a closer look at what we actually are trying to do in medicine.  If medicine worked all the time, there would be no need for medical marijuana, would there?


Violence, crime, and mental illness

January 7, 2011

Are people with mental illness more violent or aggressive, or more likely to commit crimes than those without mental illness?  Two recent papers investigate different aspects of this question.

In the January 2011 issue of Psychiatric Services, Fisher and colleagues analyzed data from the Massachusetts Department of Mental Health and found that people who had been diagnosed with a “severe and persistent psychiatric disorder” were two-thirds more likely than the general population to be arrested within a one-year period.  Arrest rates were significantly higher for all crimes, but particularly high for assault and battery on a police officer, a felony (odds ratio 5.96, or about 6 times more likely), and “crimes against public decency” (odds ratio 4.72).  While the data only reflect arrests (and not convictions, which would be fewer, since some charges were undoubtedly dropped), and say nothing about whether a person was actively involved in treatment at the time of his or her arrest, the findings do portray the severely mentally ill as more likely to engage with the criminal justice system.
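As a brief aside on the arithmetic: an odds ratio like the 5.96 above is the odds of arrest in the diagnosed group divided by the odds of arrest in the comparison group, and it only approximates “X times more likely” when the outcome is rare.  Here is a minimal sketch of the calculation, using made-up counts rather than the study’s actual data:

```python
def odds_ratio(exposed_events, exposed_non_events,
               control_events, control_non_events):
    """Odds of the outcome in the exposed group divided by
    the odds of the outcome in the control group."""
    exposed_odds = exposed_events / exposed_non_events
    control_odds = control_events / control_non_events
    return exposed_odds / control_odds

# Hypothetical 2x2 table (NOT the Fisher data): 10 of 100 people in the
# diagnosed group were arrested, vs. 2 of 100 in the comparison group.
or_value = odds_ratio(10, 90, 2, 98)
print(round(or_value, 2))  # 5.44
```

When the outcome is rare (as arrest is, population-wide), the non-event counts dominate and the odds ratio converges toward the risk ratio, which is why an OR of 5.96 can reasonably be glossed as “about 6 times more likely.”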

A separate study, published last month in PLOS One, examined reports of medication-related adverse events to the FDA.  The authors looked at 484 drugs and the rates with which they had been associated with “violence-related symptoms.”  All medications had some such symptoms reported, but certain classes of drugs were associated with more frequent violent events than would occur by chance alone.  In particular, varenicline (Chantix) was most frequently associated with reports of violence, with a PRR (proportional reporting ratio) of 18.0.  (This means that the proportion of violent events reported for patients on Chantix was 18 times greater than the proportion of violent events reported for all other drugs.)  Other medications associated with violent behavior included antidepressants (average PRR = 8.4) and psychostimulants (average PRR = 6.9).
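The PRR itself is simple arithmetic: the fraction of one drug’s reports that mention violence, divided by the same fraction across all other drugs.  A sketch with invented report counts (not the actual FDA figures):

```python
def proportional_reporting_ratio(drug_violent, drug_total,
                                 other_violent, other_total):
    """Proportion of violence-related reports for one drug,
    divided by the same proportion for all other drugs."""
    drug_proportion = drug_violent / drug_total
    other_proportion = other_violent / other_total
    return drug_proportion / other_proportion

# Invented counts: 90 violent events in 1,000 reports for the drug,
# vs. 500 violent events in 100,000 reports for everything else.
prr = proportional_reporting_ratio(90, 1000, 500, 100000)
print(round(prr, 1))  # 18.0
```

Note that a PRR says nothing about absolute risk; a drug with very few total reports can post a large ratio, which is one more reason these signals warrant caution rather than firm conclusions.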

It should be pointed out that the authors of the PLOS study have served as expert witnesses in criminal cases involving psychiatric drugs, and work for the Institute for Safe Medication Practices, a nonprofit group dedicated to exposing medication risks.  Furthermore, the reports of adverse events to the FDA may suffer from “attribution error”: when an adverse event like a violent act occurs, we search for possible causes, and medications – particularly psychiatric meds – are frequent culprits, when in fact there may be no causal relationship.  Nevertheless, the large numbers of events, and the relative consistency within drug classes, should give us reason for concern.

But even with these limitations, these studies unfortunately suggest that criminal activity may indeed be more common among the mentally ill, and that we should exercise caution when prescribing medications to patients at high risk of aggression, to avoid making matters worse.

 


Sometimes meds are not the answer

January 4, 2011

I wanted to share an article that brought a smile to my face.  Dr Miguel Rivera is my hero of the day!

Under the direction of psychiatrist Dr. Miguel Rivera, caregivers at the Pines have deployed such simple spa comforts as music, massage and calming colors to help reduce agitation. As a result, dosages of antipsychotic medications have dropped to less than half the state average for this most challenging patient population.

I have never worked in a long-term care facility, although I have treated many patients from such places.  I have great respect for those who can work on a sustained basis with people who have progressive neurological or developmental disorders and who cannot adequately report feelings, thoughts, or emotions the way other patients can.

Unfortunately, with few exceptions, when patients are brought to my office from a long-term care setting, the concern is not a disturbance in mood, recurrent psychotic ideation, problematic anxiety, or a need for acute hospitalization for imminent danger to self or others.  Instead, it is because the patient is “causing problems”– maybe yelling in the middle of the night or fighting with a staff member.  Perhaps he refuses to take his medications, or he’s throwing food.  He might wander off from the facility or accuse staff members of stealing from him.

Many chronically disabling conditions, including progressive dementia, can be associated with psychiatric symptoms such as delusional thoughts or agitation.  And it is true that many of our most potent medications can, when used judiciously, treat these symptoms (noting, of course, the black-box warning against using antipsychotics for the treatment of dementia-related psychosis).  However, as with most things in psychiatry, there is a fine line between treating a psychophysiological symptom that causes distress to a patient, and treating a behavioral phenomenon that causes distress to a caregiver.

The most common question I hear from attendants, family members, nursing staff, and others who accompany these patients to my office is, “Can you do something about his [insert troublesome symptom here]?”  And my answer is always the same:  Yes, if you mean you want me to sedate him or calm him with a powerful chemical.  But it’s more important, in my mind, to understand other ways to alleviate his suffering, while preserving his dignity and whatever autonomy he still retains.

It’s an extremely difficult process, but Dr. Miguel Rivera seems to have found a solution.  And it didn’t come from the pharmacopoeia, but from his ability to listen, to empathize, and to design therapies that appeal to each patient’s unique needs.  It’s a model we all ought to follow.


Bipolar in the eye of the beholder

January 4, 2011

 

So who’s the joke on here?

I found this video on one of the several blogs I subscribe to.
(Okay, I’ll admit it, I’m a sucker for these Xtranormal videos.)

It seems to be composed from the point of view of the jaded psychiatric patient, disturbed that her fairly unremarkable complaints are interpreted by her psychiatrist as symptoms of bipolar disorder, and that every problem’s solution seems to be a medication adjustment.

Indeed, most mental health conditions include, among their symptoms, common concerns like insomnia, poor attention/concentration, feelings of sadness, or (my personal favorite) “stress.”  But the truth is that bipolar disorder (the topic of this video) is a serious illness which can, at times, be incapacitating and threaten one’s livelihood or even one’s life.  Sleeplessness and “talking fast,” in and of themselves, do not make a bipolar diagnosis.

Watching the video as a psychiatrist, however, I’m reminded of the other side of the issue; namely, that patients will frequently come in with fairly ordinary complaints and profess that they must be “bipolar” or “depressed” or “anxious” and require medication.  Sometimes this self-assessment is accurate, but other times it’s more appropriate to exercise restraint.

The truth remains that, while in some physician-patient encounters the doctor tries to diagnose and treat on the basis of few symptoms, at other times the patient actually wants the diagnosis and/or the drug.  This gives rise to the age-old “slippery slope” in psychiatry, in which we deal with behaviors existing on a spectrum from normal to pathological.  Where does “wellness” end and “illness” begin?  And who makes this decision?


Childhood ADHD and Medicaid

December 31, 2010

A study out of UCLA shows that there is a need for significant improvement in the delivery of ADHD care to children on Medicaid.  The study was published in the Journal of the American Academy of Child and Adolescent Psychiatry and a summary can be found at Medscape.

The study followed over 500 children with ADHD.  All were on Medi-Cal (California’s Medicaid program) and were observed over a one-year period.  Some participated solely in primary care treatment, while others received “specialty care” in mental health clinics.  (Because this was an observational study, children were not randomized or assigned to each group, but were simply followed over their course of treatment.)  The study found that at the end of the year, both groups of children fared the same on measures of ADHD symptoms, functioning, academic achievement, family function, and other parameters.

How did primary care differ from “specialty” care?  For one thing, children in the primary care group received stimulant medication 85% of the time (nearly all of these children received a prescription for some medication), but that was about it:  they followed up with their providers an average of only 1 or 2 times over the entire one-year follow-up period, and their prescription refill rate was less than 40%.  (50% dropped out of care.)

On the other hand, over 90% of the children in the specialty care group received some sort of psychosocial treatment, and only 40% of these children received medication (30% received stimulants).  Office visits were far more frequent in this population, too, averaging over 5 per month for the duration of the one-year study.

So on the face of it, one might predict that specialty treatment would provide much better care:  children had far more frequent contact with their providers, medications were used judiciously (one would assume), and psychosocial interventions were included.  However, in the end the children in the two groups did not fare differently.  Academic scores and measures of clinical impairment and “parent distress” were similar in both groups, as were dropout rates and medication discontinuation rates.

One obvious limitation of this study, which the authors emphasize, is that it is not a randomized trial, but rather an observational study of “real world” patients.  But then again, that’s what they set out to do:  to observe whether mental health clinics provided better ADHD care.   Two unfortunate conclusions can be drawn.  First, primary care clinics do very little to treat childhood ADHD (cynically, one might look at the data and conclude that they simply “throw meds at the problem” with little to no follow-up).  Second, even when these clinics do refer children to a higher level of care, the outcomes aren’t much better (and the resource costs are undoubtedly much higher).

With the promised expansion of the Medicaid program under PPACA, more children will be receiving care, with mental health as a priority area.  Hopefully, studies like this one will prompt us not simply to provide more care to the increased number of children that will undoubtedly seek it, but to provide better care along the way.


Allen Frances and the DSM-5

December 30, 2010

There’s a great (and long) article in the January 2011 Wired magazine profiling Allen Frances, lead editor of the DSM-IV and an outspoken critic of the process by which the American Psychiatric Association (APA) is developing the next version, the DSM-5.  It’s worth a read and can be found here, as it provides a revealing look at a process that, according to the author (somewhat melodramatically, I might add) could make or break modern psychiatry.

I have many feelings about what’s written in the article, but one passage in particular caught my attention.  The author, Gary Greenberg, writes that he asked a psychiatrist (in fact, a “former president of the APA”) how he uses the DSM in his daily work.

He told me his secretary had just asked him for a diagnosis on a patient he’d been seeing for a couple of months so that she could bill the insurance company. “I hadn’t really formulated it,” he told me.  He consulted the DSM-IV and concluded that the patient had obsessive-compulsive disorder (OCD). 

“Did it change the way you treated her?” I asked, noting that he’d worked with her for quite a while without naming what she had.

“No.”

“So what would you say was the value of the diagnosis?”

“I got paid.”

I include this excerpt because the “hook” here—and the part that will most likely attract the most fervent anti-psychiatry folk—is the line about “getting paid.”  But this entirely misses the point.

See, the DSM-5 is easy to criticize because it seems like a catalogue of invented “syndromes,” from which any psychiatrist can pick out a few symptoms (some of which, I would venture to say, both you and I are experiencing right now), name a diagnosis, and prescribe a medication—and get paid by the insurance company because he believes he is confidently treating a “disease.”  But the truth of the matter, if you talk to any thoughtful psychiatrist, is that, more often than not, the book gets in the way.

In the example above, the doctor had seen his patient for several sessions but hadn’t yet come up with a firm diagnosis.  He settled upon OCD because he was required to write a diagnosis on some form or another.  Yes, ultimately to get paid, but I think we’d all agree that professionals deserve to be reimbursed for their time.  (And if he’s actually listening to his patient instead of comparing her symptoms to a list in a book, his patient would probably agree as well.)

Did this woman have OCD?  Judging by his hesitancy, it’s arguable that perhaps she didn’t have all of the symptoms of OCD.  But she was probably suffering nonetheless, and such presentations are typical of most psychiatric patients.  Nobody fits the DSM mold; we all have quirks and characteristics that present a very complicated picture.  I would argue that this psychiatrist was probably doing well by not rushing to a diagnosis, but instead getting to know this woman and developing a treatment plan that was most appropriate for her.

The article’s author writes that if the DSM-5 is a “disaster,” as some observers predict it will be, the APA will “lose its franchise on our psychic suffering, the naming rights to our pain.”  Quite frankly, this could turn out to be the best possible outcome for patients.  If we as a profession ditch the DSM, and stop looking at patients through the lens of ill-defined lists of symptoms, but instead see them as actual individuals, we can better alleviate their suffering.  Yes, a new system will need to be devised to ensure that we can prescribe the interventions that we believe are most appropriate (and yes, to get paid for them), but a patient-centered approach is preferable to a formula-based approach anytime.