The Curious Psychology of “Disability”

December 28, 2011

I’ll start this post with a brief clinical vignette:

I have been seeing Frank, a 44-year-old man, on a regular basis for about six months.  He first came to our community clinic with generalized, nonspecific complaints of “anxiety,” feeling “uncomfortable” in public, and getting “angry all the time,” especially toward people who disagreed with him.  His complaints never truly met official criteria for a DSM-IV disorder, but he was clearly dissatisfied with much in his life and he agreed to continue attending biweekly appointments.  Frank once requested Xanax, by name, but I did not prescribe any medication; I never felt it was appropriate for his symptoms, and besides, he responded well to a combined cognitive-interpersonal approach exploring his regret over past activities as a gang member (and related incarcerations), feelings of being a poor father to his four daughters, and efforts to improve his fragile self-esteem.  Even though Frank still has not met criteria for a specific disorder (he currently holds the imprecise and imperfect label of “anxiety NOS”), he has shown significant improvement and a desire to identify and reverse some of his self-defeating behaviors.

Some of the details (including his name) have been changed to preserve Frank’s privacy.  However, I think the general story still gets across:  a man with low self-worth, guilty feelings, and self-denigration stemming from his overidentification with past misdeeds came to me for help.  We’ve made progress, despite the lack of medications and the lack of a clear DSM-IV (or, most likely, DSM-5) diagnosis.  Not dramatic, not earth-shattering, but a success nonetheless.  Right?

Not so fast.

Shortly after our appointment last week, I received a request for Frank’s records from the Social Security Administration, along with a letter from a local law firm he hired to help him obtain benefits.  He had apparently applied for SSI disability and the reviewers needed to see my notes.

I should not have been surprised by this request.  After all, our clinic receives several of these requests each day.  In most cases, I don’t do anything; our clinic staff prints out the records, sends them to SSA, and the evaluation process proceeds generally without any further input from us (for a detailed description of the disability evaluation process, see this article).  But for some reason, this particular request was uniquely heartbreaking.  It made me wonder about the impact of the “disability” label on a man like Frank.

Before I go further, let me emphasize that I’m looking at Frank’s case from the viewpoint of a psychiatrist, a doctor, a healer.  I’m aware that Frank’s family is under some significant financial strain—as are many of my patients in this clinic (a topic about which I’ve written before)—and some sort of welfare or financial support, such as SSI disability income, would make his life somewhat easier.  It might even alleviate some of his anxiety.

However, in six months I have already seen a gradual improvement in Frank’s symptoms, an increase in his motivation to recover, and greater compassion for himself and others.  I do not see him as “disabled”; instead, I believe that with a little more effort, he may be able to handle his own affairs with competence, obtain some form of gainful employment, and raise his daughters as a capable father.  He, too, recognizes this and has expressed gratitude for the progress we have made.

There is no way, at this time, for me to know Frank’s motives for applying for disability.  Perhaps he simply saw it as a way to earn some supplementary income.  Perhaps he believes he truly is disabled (although I don’t think he would say this—and if he did, I wish he’d share it with me!).  I also have no evidence to suggest that Frank is trying to “game the system.”  He may be following the suggestions of a family member, a friend, or even another healthcare provider.  All of the above are worthwhile topics to discuss at our next appointment.

But once those records are sent, the evaluation process is out of my hands.  And even if Frank’s request is denied, I wonder about the psychological effect of the “disability” label on Frank’s desire to maintain the gains he has made.  Labels can mean a lot.  Psychiatric diagnoses, for instance, often needlessly and unfairly label people and lead to unnecessary treatment (and it doesn’t look like DSM-5 will offer much improvement).  Likewise, labels like “chronic,” “incurable,” and “disabled” can also have a detrimental impact, a sentiment expressed emphatically in the literature on “recovery” from mental illness.  The recovery movement, in fact, preaches that mental health services should promote self-direction, empowerment, and patient choice.  If, instead, we convey pessimism, hopelessness, and the stigma of “disability,” we may undermine those goals.

As a healer, I believe that my greatest responsibility and most difficult (although most rewarding) task is to instill hope and optimism in my patients.  Even though not all of them will be entirely “symptom-free” and able to function competently in every situation life hands them, and some may require life-long medication and/or psychosocial support (and, perhaps, disability income), I categorically refuse to believe that most are “disabled” in the sense that they will never be able to live productive, satisfying lives.

I would bet that most doctors and most patients agree with me.  With the proper supports and interventions, all patients (or “users” or “consumers,” if you prefer those terms) can have the opportunity to succeed, and potentially extricate themselves from the invisible chains of mental illness.  In Frank’s case, he was almost there.

But the fact that we as a society maintain an institution called “disability,” one that provides benefits to people with a psychiatric diagnosis, requires that they see a psychiatrist, and often requires that they take medication, sends a very powerful—and potentially unhealthy—psychological message to those who could otherwise overcome their disability.  To Frank, it directly contradicts the messages of hope and encouragement I try to offer at each visit.  It makes him dependent upon me, rather than upon himself and his own resources and abilities.  In other words, to a man like Frank, disability is anti-recovery.

I don’t have an easy answer to this problem.  For starters, changing the name of “disability” to something like “temporary psychological material support”—a substitute label, nothing more—might be helpful.  Rewarding recipients (e.g., not revoking their benefits) for meeting predetermined milestones of recovery (part-time work, independent housing, etc.) may also help.  But the more I think about the life-affirming and empowering potential of recovery, and about how we allocate our scarce resources, the more I believe that the recovery-based—as opposed to disability-based—practice of psychiatry has much more to offer the future of our patients, our profession, and our nation than the status quo.  For the sake of Frank’s recovery, and the recovery of countless other men and women like him, maybe it’s time to make that happen.


Biomarker Envy V: BDNF and Cocaine Relapse

October 18, 2011

The future of psychiatric diagnosis and treatment lies in the discovery and development of “biomarkers” of pathological processes.  A biomarker, as I’ve written before, is something that can be measured or quantified, usually from a biological specimen like a blood sample, which helps to diagnose a disease or predict response to a treatment.

Biomarkers are the embodiment of the new “personalized medicine”:  instead of wasting time talking to a patient, asking questions, and possibly drawing incorrect conclusions, the holy grail of a biomarker allows the clinician to order a simple blood test (or brain scan, or genotype) and make a decision about that specific patient’s case.  But “holy grail” status is elusive, and a recent study from the Yale University Department of Psychiatry, published this month in the journal Biological Psychiatry, provides yet another example of a biomarker which is not quite there—at least not yet.

The Yale group, led by Rajita Sinha, PhD, was interested in a basic question: what makes newly abstinent cocaine addicts relapse?  They set out to identify a biological marker of relapse potential.  If such a biomarker exists, they argue, it could not only tell us more about the biology of cocaine dependence, craving, and relapse, but might also be used clinically, as a way to identify patients who need more aggressive treatment or other measures to maintain their abstinence.

The researchers chose BDNF, or brain-derived neurotrophic factor, as their biomarker.  In studies of cocaine-dependent animals forced into prolonged abstinence, BDNF levels rise when the animals are exposed to a stressor; moreover, cocaine-seeking is associated with BDNF elevations, and BDNF injections can promote cocaine-seeking behavior in these same abstinent animals.  In their recent study, Sinha’s group took 35 cocaine-dependent (human) patients and admitted them to the hospital for four weeks.  After three weeks of NO cocaine, they measured blood levels of BDNF and compared these numbers to the levels measured in “healthy controls.”  Then they followed all 35 cocaine users for the next 90 days to determine which of them would relapse during this three-month period.

The results showed that the abstinent cocaine users generally had higher BDNF levels than the healthy controls (see figure below, A).  However, when the researchers looked at the patients who relapsed on cocaine during the 3-month follow-up (n = 23), and compared them to those who stayed clean (n = 12), they found that the relapsers, on average, had higher BDNF levels than the non-relapsers (see figure, B).  Their conclusion is that high levels of BDNF may predict relapse.

These results are intriguing, and Dr Sinha presented her findings at the California Society of Addiction Medicine (CSAM) annual conference last week.  Audience members—all of whom treat drug and alcohol addiction—asked about how they might measure BDNF levels in their patients, and whether the same BDNF elevations might be found in dependence on other drugs.

But one question really got to what I think is the heart of the matter.  Someone asked Dr Sinha: “Looking back at the 35 patients during their four weeks in the hospital, were there any characteristics that separated the high BDNF patients from those with low BDNF?”  In other words, were there any behavioral or psychological features that might, in retrospect, be correlated with elevated BDNF?  Dr Sinha responded, “The patients in the hospital who seemed to be experiencing the most stress or who seemed to be depressed had higher BDNF levels.”

Wait—you mean that the patients at high risk for relapse could be identified by talking to them?  Dr Sinha’s answer shows why biomarkers have little place in clinical medicine, at least at this point.  Sure, her group showed correlations of BDNF with relapse, but nowhere in their paper did they describe personal features of the patients (psychological test scores, psychiatric complaints, or even responses to a checklist of symptoms).  So those who seemed “stressed or depressed” had higher BDNF levels, and—as one might predict—relapsed.  Did this (clinical) observation really require a BDNF blood test?

Dr Sinha’s results (and the results of others who study BDNF and addiction) make a strong case for the role of BDNF in relapse or in recovery from addiction.  But as a clinical tool, not only is it not ready for prime time, but it distracts us from what really matters.  Had Dr Sinha’s group spent four weeks interviewing, analyzing, or just plain talking with their 35 patients instead of simply drawing blood on day 21, they might have come up with some psychological measures which would be just as predictive of relapse—and, more importantly, which might help us develop truly “personalized” treatments that have nothing to do with BDNF or any biochemical feature.

But I wouldn’t hold my breath.  As Dr Sinha’s disclosures indicate, she is on the Scientific Advisory Board of Embera NeuroTherapeutics, a small biotech company working to develop a compound called EMB-001.  EMB-001 is a combination of oxazepam (a benzodiazepine) and metyrapone.  Metyrapone inhibits the synthesis of cortisol, the primary stress hormone in humans.  Dr Sinha, therefore, is probably more interested in the stress responses of her patients (which would include BDNF and other stress-related proteins and hormones) than in whether they say they feel like using cocaine or not.

That’s not necessarily a bad thing.  Science must proceed this way.  If EMB-001 (or a treatment based on BDNF) turns out to be an effective therapy for addiction, it may save hundreds or thousands of lives.  But until science gets to that point, we clinicians must always remember that our patients are not just lab values, blood samples, or brain scans.  They are living, thinking, and speaking beings, and sometimes the best biomarker of all is our skilled assessment and deep understanding of the patient who comes to us for help.


Talk Is Cheap

October 9, 2011

I work part-time in a hospital psychiatry unit, overseeing residents and medical students on their inpatient psychiatry rotations.  They are responsible for three to six patients at any given time, directing and coordinating the patients’ care while they are admitted to our hospital.

To an outsider, this may seem like a generous ratio: one resident taking care of only 3-6 patients.  One would think that this should allow for over an hour of direct patient contact per day, resulting in truly “personalized” medicine.  But instead, the absolute opposite is true: doctors sometimes see patients for only minutes at a time, and develop only a limited understanding of the patients for whom they are responsible.  I noticed this in my own residency training: halfway through my first year, I realized that even though I was “taking care” of patients and getting my work done satisfactorily, I couldn’t say whether my patients felt they were getting better, whether they appreciated my efforts, or whether they had entirely different needs that I had been ignoring.

In truth, much of the workload in a residency program (in any medical specialty) is related to non-patient-care concerns:  lectures, reading, research projects, faculty supervision, etc.  But even outside of the training environment, doctors spend less and less time with patients, setting a disturbing precedent for the future of medicine.  In psychiatry in particular, the shrinking “therapy hour” has received much attention, most recently in a New York Times front-page article (which I blogged about here and here).  The responses to the article echoed a common (and growing) lament among psychiatrists:  therapy has been replaced with symptom checklists, rapid-fire questioning, and knee-jerk prescribing.

In my case, I don’t mean to be simply one more voice in the chorus of psychiatrists yearning for the “glory days” of psychiatry, when prolonged psychotherapy and hour-long visits were the norm.  I didn’t practice in those days, anyway.  Nevertheless, I do believe that we lose something important by distancing ourselves from our patients.

Consider the inpatient unit again.  My students and residents sometimes spend hours looking up background information, old charts, and lab results, calling family members and other providers, and discussing differential diagnosis and possible treatment plans, before ever seeing their patient.  While their efforts are laudable, the fact remains that a face-to-face interaction with a patient can be remarkably informative, sometimes even immediately diagnostic to the skilled eye.  In an era where we’re trying to reduce our reliance on expensive technology and wasteful tests, patient contact should be prioritized over the hours upon hours that trainees spend hunched over computer workstations.

In the outpatient setting, direct patient-care time has been largely replaced by “busy work” (writing notes; debugging EMRs; calling pharmacies to inquire about prescriptions; completing prior-authorization forms; and performing any number of “quality-control,” credentialing, or other mandatory “compliance” exercises required by our institutions).  Some of this is important, but an extra ten or fifteen minutes with a patient may go a long way toward determining that patient’s treatment goals (which may differ from the doctor’s), improving their motivation for change, or addressing unresolved underlying issues—matters that may truly make a difference and cut long-term costs.

The future direction of psychiatry doesn’t look promising, as this vanishing emphasis on the patient’s words and deeds is likely to make treatment even less cost-effective.  For example, there is a growing effort to develop biomarkers for the diagnosis of mental illness and for predicting medication response.  In my opinion, the science is just not there yet (partly because the DSM is still a poor guide by which to make valid diagnoses… what are depression and schizophrenia anyway?).  And even if the biomarker strategy were reliable, there’s still nothing that could be learned from a $745+ blood test that couldn’t be uncovered in a good, thorough clinical examination by a talented diagnostician—not to mention that the examination would also uncover a large amount of other information, and establish valuable rapport, which would likely improve the quality of care.

The blog “1boringoldman” recently featured a post called “Ask them about their lives…” in which a particularly illustrative case was discussed.  I’ll refer you there for the details, but I’ll repost the author’s summary comments here:

I fantasize an article in the American Journal of Psychiatry entitled “Ask them about their lives!” Psychiatrists give drugs. Therapists apply therapies. Who the hell interviews patients beyond logging in a symptom list? I’m being dead serious about that…

I share Mickey’s concern, as this is a vital question for the future of psychiatry.  Personally, I chose psychiatry over other branches of medicine because I enjoy talking to people, asking about their lives, and helping them develop goals and achieve their dreams.  I want to help them overcome the obstacles put in their way by catastrophic relationships, behavioral missteps, poor insight, harmful impulsivity, addiction, emotional dysregulation, and– yes– mental illness.

However, if I don’t have the opportunity to talk to my patients (still my most useful diagnostic and therapeutic tool), I must instead rely on other ways to explain their suffering:  a score on a symptom list, a lab value, or a diagnosis that’s been stuck on the patient’s chart over several years without anyone taking the time to ask whether it’s relevant.  Not only do our patients deserve more than that, they usually want more than that, too; the most common complaint I hear from a patient is that “Dr So-And-So didn’t listen to me, he just prescribed drugs.”

This is not the psychiatry of my forefathers.  It is neither Philippe Pinel’s “moral treatment,” nor Emil Kraepelin’s meticulous attention to symptoms and their patterns, nor Aaron Beck’s cognitive restructuring.  No, it’s the psychiatry of HMOs, Wall Street, and an over-medicalized society, and in this brave new world, the patient is nowhere to be found.


How To Get Rich In Psychiatry

August 17, 2011

Doctors choose to be doctors for many reasons.  Sure, they “want to help people,” they “enjoy the science of medicine,” and they give several other predictable (and sometimes honest) explanations in their med school interviews.  But let’s be honest.  Historically, becoming a doctor has been a surefire way to ensure prestige, respect, and a very comfortable income.

Nowadays, in the era of shrinking insurance reimbursements and increasing overhead costs, this is no longer the case.  If personal riches are the goal, doctors must graze other pastures.  Fortunately, in psychiatry, several such options exist.  Let’s consider a few.

One way to make a lot of money is simply by seeing more patients.  If you earn a set amount per patient—and you’re not interested in the quality of your work—this might be for you.  Consider the following, recently posted by a community psychiatrist to an online mental health discussion group:

Our county mental health department pays my clinic $170 for an initial evaluation and $80 for a follow-up.  Of that, the doctor is paid $70 or $35, respectively, for each visit.  There is a wide range of patients/hour since different doctors have different financial requirements and philosophies of care.  The range is 3 patients/hour to 6 patients/hour.

This payment schedule incentivizes output.  A doctor who sees three patients an hour makes $105/hr and spends 20 minutes with each patient.  A doctor who sees 6 patients an hour spends 10 minutes with each patient and makes $210.  One “outlier” doctor in our clinic saw, on average, 7 patients an hour, spending roughly 8 minutes with each patient and earning $270/hr.  His clinical notes reflected his rapid pace…. [but] Despite his shoddy care of patients, he was tolerated at the clinic because he earned a lot of money for the organization.

If this isn’t quite your cup of tea, you can always consider working in a more “legit” capacity, like the Department of Corrections.  You may recall the Bloomberg report last month about the prison psychiatrist who raked in over $800,000 in one year—making him the highest-paid state employee in California.  As it turns out, that was a “data entry error.”  (Bloomberg issued a correction.)  Nevertheless, the cat was out of the bag: prison psychiatrists make big bucks (largely for prescribing Seroquel and benzos).  With seniority and “merit-based increases,” one prison shrink in California was able to earn over $600,000—and that’s for a shrink who was found to be “incompetent.”  Maybe they pay the competent ones even more?

Another option is to be a paid drug speaker.  I’m not referring to the small-time local doc who gives bland PowerPoint lectures to his colleagues over a catered lunch of even blander ham-and-cheese sandwiches.  No sir.  I’m talking about the psychiatrists hired to fly all around the country to give talks at the nicest five-star restaurants in the nation’s biggest drug markets.  The advantage here is that you don’t even have to be a great doc.  You just have to own a suit, follow a script, speak well, and enjoy good food and wine.

As most readers of this blog know, ProPublica recently published a list of the sums paid by pharmaceutical companies to doctors for these “educational programs.”  Some docs walked away with checks worth tens—or hundreds—of thousands of dollars.  And, not surprisingly, psychiatrists were the biggest offenders (er, earners).  I guess there is gold in explaining the dopamine hypothesis or the mechanism of neurotransmitter reuptake inhibition to yet another doctor.

Which brings me to perhaps the most tried-and-true way to convert one’s medical education into cash:  become an entrepreneur.  Discovering a new drug or unraveling a new disease process might revolutionize medical care and improve the lives of millions.  And throughout the history of medicine, numerous physician-researchers have converted their groundbreaking discoveries (or luck) into handsome profits.

Unfortunately, in psychiatry, paradigm shifts of the same magnitude have been few and far between.  Instead, the road to riches has been paved by the following formula: (1) “Buy in” to the prevailing disease model (regardless of its biological validity); (2) Develop a drug that “fits” into the model; (3) Find some way to get the FDA to approve it; (4) Promote it ruthlessly; (5) Profit.

In my residency program, for example, several faculty members founded a biotech company whose sole product was a glucocorticoid receptor antagonist which, they believed, might treat psychotic depression (you know, with high stress hormones in depression, etc).  The drug didn’t work (rendering their stock options worth only millions instead of tens of millions).  But that didn’t stop them.  They simply searched for other ways to make their compound relevant.  As I write, they’re looking at it as a treatment for Cushing’s syndrome (a more logical—if far less profitable—indication).

The psychiatry blogger 1boringoldman has written a great deal about the legions of esteemed academic psychiatrists who have gotten caught up in the same sort of rush (no pun intended) to bring new drugs to market.  His posts are definitely worth a read.  Frankly, I see no problem with psychiatrists lending their expertise to a commercial enterprise in the hopes of capturing some of the windfall from a new blockbuster drug.  Everyone else in medicine does it, why not us?

The problem, as mentioned above, is that most of our recent psychiatric meds are not blockbusters.  Or, to be more accurate, they don’t represent major improvements in how we treat (or even understand) mental illness.  They’re largely copycat solutions to puzzles that may have very little to do with the actual pathology—not to mention psychology—of the conditions we treat.

To make matters worse, when huge investments in new drugs don’t pay off, investigators (including the psychiatrists expecting huge dividends) look for back-door ways to capture market share, rather than going back to the drawing board to refine their initial hypotheses.  Take, for instance, RCT Logic, a company whose board includes the ubiquitous Stephen Stahl and Maurizio Fava, two psychiatrists with extensive experience in clinical drug trials.  But the stated purpose of this company is not to develop novel treatments for mental illness; they have no labs, no clinics, no scanners, and no patients.  Instead, their mission is to develop clinical trial designs that “reduce the detrimental impact of the placebo response.”

Yes, that’s right: the new way to make money in psychiatry is not to find better ways to treat people, but to find ways to make relatively useless interventions look good.

It’s almost embarrassing that we’ve come to this point.  Nevertheless, as someone who has decidedly not profited (far from it!) from what I consider to be a dedicated, intelligent, and compassionate approach to my patients, I’m not surprised that docs who are “in it for the money” have exploited these alternate paths.  I just hope that patients and third-party payers wake up to the shenanigans played by my colleagues who are just looking for the easiest payoff.

But I’m not holding my breath.

Footnote:  For even more ways to get rich in psychiatry, see this post by The Last Psychiatrist.


Google Is My New Hippocampus

August 6, 2011

A few days ago, upon awakening but before my brain was fully alert, I was reviewing the events of the previous few days in preparation for the new one.  At one point I tried to remember a conversation I had had with a colleague about three days prior, but I could not quite remember the specifics of our discussion.  “No big deal,” I thought to myself, “I’ll just Google it.”

Almost immediately, I recognized the folly of this thought.  Obviously, there is no way to “Google” the events of our personal lives.  But while impractical, the solution was a logical one.  If I want to know any fact or piece of information, I Google it online.  If I want to find a file on my computer, I use Google Desktop.  All of my email conversations for the last five years are archived in my Google Mail account, so I can quickly find correspondence (and people, and account numbers, and emailed passwords, etc) at the click of the “Search” button.  No wonder I immediately thought of Googling myself.

A recent article in Science claims that the permeation of Google and other search engines into our lives—and now onto our smartphones and other portable gadgets—has not only made it easier for us to retrieve information, but it has also changed the way we remember.  In their experiments, three cognitive psychologists from Columbia, Harvard, and UW-Madison demonstrated that we are more likely to forget information if we know that we can access it (e.g., by a search engine) in the future.  Moreover, even for simple data, we’re more likely to remember where we store pieces of information than the subject matter itself.

The implication here is that the process of memory storage & retrieval is rapidly changing in the Online Age.  Humans no longer need to memorize anything (who was the 18th president?  What’s the capital of Australia?  When was the Six-Day War?), but instead just need to know how to access it.

Is this simply a variation of the old statement that “intelligence is not necessarily knowing everything but instead where to find it”?  Perhaps.  An optimist might look at this evolution in human memory as presenting an opportunity to use more brain power for processing complex pieces of information that can’t be readily stored.  In my work, for instance, I’m glad I don’t need to recall precise drug mechanisms, drug-drug interactions, or specific diagnostic criteria (I can look them up quite easily), but can instead pay closer attention to the process of listening to my patients and attending to more subtle concerns.  (Which often does more good in the long run anyway.)

The difference, however, is that I was trained in an era in which I did have to memorize all of this information without the advantage of an external online memory bank.  Along the way, I was able to make my own connections among sets of seemingly unrelated facts.  I was able to weed out those that were irrelevant, and retain those that truly made a difference in my daily work.  This resulted, in my opinion, in a much richer understanding of my field.

While I’ve seen no studies of this issue, I wonder whether students in medicine (or, for that matter, other fields requiring mastery of a large body of information) are developing different sets of skills in the Google Era.  Knowing that one can always “look something up” might make a student more careless or lazy.  On the other hand, it might help one to develop a whole new set of clinical skills that previous generations simply didn’t have time for.

Unfortunately, those skills are not the things that are rewarded in our day-to-day work.    We value information and facts, rather than substance and process.  In general, patients want to know drug doses, mechanisms, and side effects, rather than developing a “therapeutic relationship” with their doctor.  Third-party payers don’t care about the insights or breakthroughs that might happen during therapy, but instead that the proper diagnoses and billing codes are given, and that patients improve on some objective measurement.  And when my charts are reviewed by an auditor (or a lawyer), what matters is not the quality of the doctor-patient interaction, but instead the documentation, the informed consent, the checklists, the precise drug dosing, details in the treatment plan, and so on.

I think immediate access to information is a wonderful thing.  Perhaps I rely on it too much.  (My fiancé has already reprimanded me for looking up actors or plot twists on IMDB while we’re watching movies.)  But now that we know it’s changing the way we store information and—I don’t think this is too much of a stretch—the way we think, we should look for ways to use information more efficiently, creatively, and productively.  The human brain has immense potential; now that our collective memories are external (and our likelihood of forgetting is essentially nil), let’s tap that potential to do some special and unique things that computers can’t do.  Yet.


Maybe Stuart Smalley Was Right All Along

July 31, 2011

To many people, the self-help movement—with its positive self-talk, daily feel-good affirmations, and emphasis on vague concepts like “gratitude” and “acceptance”—seems like cheesy psychobabble.  Take, for instance, Al Franken’s fictional early-1990s SNL character Stuart Smalley: a perennially cheerful, cardigan-clad “member of several 12-step groups but not a licensed therapist,” whose annoyingly positive attitude mocked the idea that personal suffering could be overcome with absurdly simple affirmative self-talk.

Stuart Smalley was clearly a caricature of the 12-step movement (in fact, many of his “catchphrases” came directly from 12-step principles), but there’s little doubt that the strategies he espoused have worked for many patients in their efforts to overcome alcoholism, drug addiction, and other types of mental illness.

Twenty years later, we now realize Stuart may have been onto something.

A review by Kristin Layous and her colleagues, published in this month’s Journal of Alternative and Complementary Medicine, shows evidence that daily affirmations and other “positive activity interventions” (PAIs) may have a place in the treatment of depression.  They summarize recent studies examining such interventions, including two randomized controlled studies in patients with mild clinical depression, which show that PAIs do, in fact, have a significant (and rapid) effect on reducing depressive symptoms.

What exactly is a PAI?  The authors offer some examples:  “writing letters of gratitude, counting one’s blessings, practicing optimism, performing acts of kindness, meditation on positive feelings toward others, and using one’s signature strengths.”  They argue that when a depressed person engages in any of these activities, he or she not only overcomes depressed feelings (if only transiently) but can also use this to “move past the point of simply ‘not feeling depressed’ to the point of flourishing.”

Layous and her colleagues even summarize results of clinical trials of self-administered PAIs.  They report that PAIs had effect sizes of 0.31 for depressive symptoms in a community sample, and 0.24 and 0.23 in two studies specifically with depressed patients.  By comparison, psychotherapy has an average effect size of approximately 0.32, and psychotropic medications (although there is some controversy) have roughly the same effect.

[BTW, an “effect size” is a standardized measure of the magnitude of an observed effect.  An effect size of 0.00 means the intervention has no impact at all; an effect size of 1.00 means the intervention causes an average change (measured across the whole group) equivalent to one standard deviation of the baseline measurement in that group.  An effect size of 0.5 means the average change is half a standard deviation, and so forth.  By Cohen’s commonly cited conventions for standardized mean differences, an effect size of about 0.2 is considered “small,” 0.5 “medium,” and 0.8 “large.”  For more information, see this excellent summary.]
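
For readers who like to see the arithmetic, here is a minimal sketch (in Python, using made-up numbers that do not come from any study discussed here) of how a standardized effect size of this kind is typically computed: the difference between two group means, divided by their pooled standard deviation.

```python
import math
import statistics

def effect_size(group_a, group_b):
    """Standardized mean difference (Cohen's d): the difference in group
    means divided by the pooled standard deviation of the two groups."""
    n_a, n_b = len(group_a), len(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Hypothetical improvements in a depression-symptom score (toy numbers only):
pai_group = [6, 4, 5, 7, 3, 5, 6, 4]        # patients doing a positive activity intervention
control_group = [3, 2, 4, 3, 1, 4, 2, 3]    # patients receiving no intervention

print(round(effect_size(pai_group, control_group), 2))  # about 1.9 for these toy numbers: a "large" effect
```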

So if PAIs work about as well as medications or psychotherapy, then why don’t we use them more often in our depressed patients?   Well, there are a number of reasons.  First of all, until recently, no one has taken such an approach very seriously.  Despite its enormous common-sense appeal, “positive psychology” has only been a field of legitimate scientific study for the last ten years or so (one of its major proponents, Sonja Lyubomirsky, is a co-author on this review) and therefore has not received the sort of scientific scrutiny demanded by “evidence-based” medicine.

A related explanation may be that people just don’t think that “positive thinking” can cure what they feel must be a disease.  As Albert Einstein once said, “You cannot solve a problem from the same consciousness that created it.”  The implication is that one must seek outside help—a drug, a therapist, some expert—to treat one’s illness.  But the reality is that for most cases of depression, “positive thinking” is outside help.  It’s something that—almost by definition—depressed people don’t do.  If they were to try it, they may reap great benefits, while simultaneously changing neural pathways responsible for the depression in the first place.

Which brings me to the final two reasons why “positive thinking” isn’t part of our treatment repertoire.  For one thing, there’s little financial incentive (to people like me) to do it.  If my patients can overcome their depression by “counting their blessings” for 30 minutes each day, or acting kindly towards strangers ten times a week, then they’ll be less likely to pay me for psychotherapy or for a refill of their antidepressant prescription.  Thus, psychiatrists and psychologists have a vested interest in patients believing that their expert skills and knowledge (of esoteric neural pathways) are vital for a full recovery, when, in fact, they may not be.

Finally, the “positive thinking” concept may itself become too “medicalized,” which may ruin an otherwise very good idea.  The Layous article, for example, tries to give a neuroanatomical explanation for why PAIs are effective.  They write that PAIs “might be linked to downregulation of the hyperactivated amygdala response” or might cause “activation in the left frontal region” and lower activity in the right frontal region.  Okay, these explanations might be true, but the real question is: does it matter?  Is it necessary to identify a mechanism for everything, even interventions that are (a) non-invasive, (b) cheap, (c) easy, (d) safe, and (e) effective?   In our great desire to identify neural mechanisms or “pathways” of PAIs, we might end up finding nothing;  it would be a shame if this result (or, more accurately, the lack thereof) leads us to the conclusion that it’s all “pseudoscience,” hocus-pocus, psychobabble stuff, and not worthy of our time or resources.

At any rate, it’s great to see that alternative methods of treating depression are receiving some attention.  I just hope that their “alternative-ness” doesn’t earn immediate rejection by the medical community.  On the contrary, we need to identify those for whom such approaches are beneficial; engaging in “positive activities” to treat depression is an obvious idea whose time has come.


Mental Illness IS Real After All… So What Was I Treating Before?

July 26, 2011

I recently started working part-time on an inpatient psychiatric unit at a large county medical center.  The last time I worked in inpatient psychiatry was six years ago, and in the meantime I’ve worked in various office settings—community mental health, private practice, residential drug/alcohol treatment, and research.  I’m glad I’m back, but it’s really making me rethink my ideas about mental illness.

An inpatient psychiatry unit is not just a locked version of an outpatient clinic.  The key difference—which would be apparent to any observer—is the intensity of patients’ suffering.  Of course, this should have been obvious to me, having treated patients like these before.  But I’ll admit, I wasn’t prepared for the abrupt transition.  Indeed, the experience has reminded me how severe mental illness can be, and has proven to be a “wake-up” call at this point in my career, before I develop the conceited (yet naïve) belief that “I’ve seen it all.”

Patients are hospitalized when they simply cannot take care of themselves—or may be a danger to themselves or others—as a result of their psychiatric symptoms.  These individuals are in severe emotional or psychological distress, have immense difficulty grasping reality, or are at imminent risk of self-harm, or worse.  In contrast to the clinic, the illnesses I see on the inpatient unit are more incapacitating, more palpable, and—for lack of a better word—more “medical.”

Perhaps this is because they also seem to respond better to our interventions.  Medications are never 100% effective, but they can have a profound impact on quelling the most distressing and debilitating symptoms of the psychiatric inpatient.  In the outpatient setting, medications—and even psychotherapy—are confounded by so many other factors in the typical patient’s life.  When I’m seeing a patient every month, for instance—or even every week—I often wonder whether my effort is doing any good.  When a patient assures me it is, I think it’s because I try to be a nice, friendly guy, not because I feel like I’m practicing any medicine.  (By the way, that’s not humility; I see it as healthy skepticism.)

Does this mean that the patient who sees her psychiatrist every four weeks and who has never been hospitalized is not suffering?  Or that we should just do away with psychiatric outpatient care because these patients don’t have “diseases”?  Of course not.  Discharged patients need outpatient follow-up, and sometimes outpatient care is vital to prevent hospitalization in the first place.  Moreover, people do suffer and do benefit from coming to see doctors like me in the outpatient setting.

But I think it’s important to look at the differences between who gets hospitalized and who does not, as this may inform our thinking about the nature of mental illness and help us to deliver treatment accordingly.  At the risk of oversimplifying things (and of offending many in my profession—and maybe even some patients), perhaps the more severe cases are the true psychiatric “diseases” with clear neurochemical or anatomic foundations, and which will respond robustly to the right pharmacological or neurosurgical cure (once we find it), while the outpatient cases are not “diseases” at all, but simply maladaptive strategies to cope with what is (unfortunately) a chaotic, unfair, and challenging world.

Some will argue that these two things are one and the same.  Some will argue that one may lead to the other.  In part, the distinction hinges upon what we call a “disease.”  At any rate, it’s an interesting nosological dilemma.  But in the meantime, we should be careful not to rush to the conclusion that the conditions we see in acutely incapacitated and severely disturbed hospital patients are the same as those we see in our office practices, just “more extreme versions.”  In fact, they may be entirely different entities altogether, and may respond to entirely different interventions (i.e., not just higher doses of the same drug).

The trick is where to draw the distinction between the “true” disease and its “outpatient-only” counterpart.  Perhaps this is where biomarkers like genotypes or blood tests might prove useful.  In my opinion, this would be a fruitful area of research, as it would help us better understand the biology of disease, design more suitable treatments (pharmacological or otherwise), and dedicate treatment resources more fairly.  It would also lead us to provide more humane and thoughtful care to people on both sides of the double-locked doors—something we seem to do less and less of these days.


Psychiatry, Homeostasis, and Regression to the Mean

July 20, 2011

Are atypical antipsychotics overprescribed?  This question was raised in a recent article on the Al Jazeera English website, and has been debated back and forth for quite some time on various blogs, including this one.  Not surprisingly, the article’s conclusion was that, yes, these medications are indeed overused—and, moreover, that the pharmaceutical industry is responsible for getting patients “hooked” on these drugs via inappropriate advertising and off-label promotion.

However, I don’t know if this is an entirely fair characterization.

First of all, let’s just be up front with what should be obvious.  Pharmaceutical companies are businesses.  They’re not interested in human health or disease, except insofar as they can exploit people’s fears of disease (sometimes legitimately, sometimes not) to make money.  Anyone who believes that a publicly traded drugmaker might forgo its bottom line to treat malaria in Africa “because it’s the right thing to do” is sorely mistaken.  The mission of companies like AstraZeneca, Pfizer, and BMS is to get doctors to prescribe as much Seroquel, Geodon, and Abilify (respectively) as possible.  Period.

In reality, pharmaceutical company revenues would be zero if doctors (OK, and nurse practitioners and—at least in some states—psychologists) didn’t prescribe their drugs.  So it’s doctors who have made antipsychotics one of the most prescribed classes of drugs in America, not the drug companies.  Why is this?  Has there been an epidemic of schizophrenia?  (NB:  most cases of schizophrenia do not fully respond to these drugs.)  Are we particularly susceptible to drug marketing?  Do we believe in the clear and indisputable efficacy of these drugs in the many psychiatric conditions for which they’ve been approved (and those for which they haven’t)?

No, I like to think of it instead as our collective failure to appreciate that patients are more resilient and adaptive than we give them credit for, not to mention our infatuation with the concept of biological psychiatry.  In fact, much of what we attribute to our drugs may in fact be the result of something else entirely.

For an example of what I mean, take a look at the following figure:

This figure has nothing to do with psychiatry.  It shows the average body temperature of two groups of patients with fever—one who received intravenous Tylenol, and the other who received an intravenous placebo.  As you can easily see, Tylenol cut the fever short by a good 30-60 minutes.  But both groups of patients eventually reestablished a normal body temperature.

This is a concept called homeostasis.  It’s the innate ability of a living creature to keep things constant.  When you have a fever, you naturally perspire to give off heat.  When you have an infection, you naturally mobilize your immune system to fight it.  (BTW, prescribing antibiotics for viral respiratory infections is wasteful:  the illness resolves itself “naturally” but the use of a drug leads us to believe that the drug is responsible.)  When you’re sad and hopeless, lethargic and fatigued, you naturally engage in activities to pull yourself out of this “rut.”  All too often, when we doctors see these symptoms, we jump at a diagnosis and a treatment, neglecting the very real human capacity—evolutionarily programmed!!—to naturally overcome these transient blows to our psychological stability and well-being.

There’s another concept—this one from statistics—that we often fail to recognize.  It’s called “regression to the mean.”  If I survey a large number of people on some state of their psychological function (such as mood, or irritability, or distractibility, or anxiety, etc), those with an extreme score on their first evaluation will most likely have a more “normal” score on their next evaluation, and vice versa, even in the absence of any intervention.  In other words, if you’re having a particularly bad day today, you’re more likely to be having a better day the next time I see you.
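
To see how strong this effect can be, here is a minimal sketch (in Python, simulating entirely made-up “anxiety scores”): each person has a stable average level plus random day-to-day noise, there is no treatment and no real change between visits, yet the people who look worst at the first visit look substantially better at the second.

```python
import random

random.seed(1)

# Simulate "anxiety scores" for 10,000 people on two visits.
# Each person has a stable trait level plus random day-to-day noise;
# there is NO treatment and NO real change between visits.
people = [random.gauss(50, 10) for _ in range(10_000)]    # stable trait level
visit1 = [p + random.gauss(0, 10) for p in people]        # trait + noise at visit 1
visit2 = [p + random.gauss(0, 10) for p in people]        # trait + fresh noise at visit 2

# Look only at those with extreme scores at visit 1 (the "worst" 5%).
cutoff = sorted(visit1)[int(0.95 * len(visit1))]
extreme = [i for i, v in enumerate(visit1) if v >= cutoff]

mean_v1 = sum(visit1[i] for i in extreme) / len(extreme)
mean_v2 = sum(visit2[i] for i in extreme) / len(extreme)
print(f"Visit 1 (extreme group): {mean_v1:.1f}")
print(f"Visit 2 (same people):   {mean_v2:.1f}")  # noticeably closer to 50, with no intervention at all
```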

This is perhaps the best argument for why it takes multiple sessions with a patient—or, at the very least, a very thorough psychiatric history—to make a confident psychiatric diagnosis and to follow response to treatment.  Symptoms—especially mild ones—come and go.  But in our rush to judgment (not to mention the pressures of modern medicine to determine a diagnosis ASAP for billing purposes), endorsement of a few symptoms is often sufficient to justify the prescription of a drug.

Homeostasis and regression to the mean are not the same thing: one is a biological process, the other a matter of natural, semi-random variation.  But both should be considered as explanations for our patients “getting better.”  When these changes occur in the context of taking a medication (particularly one like an atypical antipsychotic, with so many uses for multiple nonspecific diagnoses), we like to think the medication is doing the trick, when the clinical response may be due to something else altogether.

Al Jazeera was right: the pharmaceutical companies have done a fantastic job in placing atypical antipsychotics into every psychiatrist’s armamentarium.  And yes, we use them, and people improve.  The point, though, is that the two are sometimes not connected.  Until and unless we find some way to recognize this—and figure out what really works—Big Pharma will continue smiling all the way to the bank.


When A Comorbidity Isn’t “Comorbid” At All

July 7, 2011

When medical professionals speak of the burden of illness, we use the term “morbidity.”  This can refer either to the impact of an illness on a patient’s quality of life, or to the overall impact of a disease on a defined community.  We also speak of “co-morbidities,” which, as you might expect, are two concurrent conditions, both of which must be treated in order for a patient to experience optimal health.

Comorbidities can be entirely unrelated, as in the case of a tooth abscess and fecal incontinence (at least I hope those are unrelated!).  Alternatively, they can be intimately connected, like CHF and coronary artery disease.  They may also represent seemingly discrete phenomena which, upon closer inspection, might be related after all—at least in some patients—like schizophrenia and obesity, depression and HIV, or chronic fatigue syndrome and XMRV (oops, scratch that last one!).  The idea is that it’s most parsimonious to find the connections between and among these comorbidities (when they exist) and treat both disorders simultaneously in order to achieve the best outcomes for patients.

I was recently asked to write an article on the comorbidity of alcoholism and anxiety disorders, and how best to manage these conditions when they co-occur.  Being the good (and modest—ha!) researcher that I am, I scoured the literature and textbooks for clinical trials, and found several studies of treatment interventions for combined anxiety and alcoholism.  Some addressed the disorders sequentially, some in parallel, some in an integrated fashion.  I looked at drug trials and therapy trials, in a variety of settings and for various lengths of time.

I quickly found that there’s no “magic bullet” to treat anxiety and alcoholism.  No big surprise.  But when I started to think about how these conditions appear in the real world (in other words, not in a clinical trial), I began to understand why.

You see, there’s great overlap among most psychiatric diagnoses—think of “anxious depression” or “bipolar with psychotic features.”  As a result, psychiatrists in practice more often treat symptoms than diseases.  And nowhere is this more the case than in the diagnosis and treatment of addictions.

Addictions are incredibly complex phenomena.  While we like to think of addictions like alcoholism as “diseases,” I’m starting to think they really are not.  Instead, an addiction like alcoholism is a manifestation or an epiphenomenon of some underlying disorder, some underlying pain or deficiency, or some sense of helplessness or powerlessness (for a more elaborate description, see Lance Dodes’ book The Heart of Addiction).  In other words, people drink not because of a dopamine receptor mutation, or a deficiency in some “reward chemical,” or some “sensation-seeking” genotype, but because of anxiety, depression, or other painful emotional states.  They could just as easily be “addicted” to gambling, running, bike riding, cooking (and yes, sex) as ways of coping with these emotions.  Incidentally, what’s “problematic” differs from person to person and from substance to substance.  (And it is notable, for instance, that mainlining heroin = “bad” and running marathons = “good.”  Who made that rule?)

“But wait,” you might say, “there’s your comorbidity right there… you said that people drink because they’re anxious.”  Okay, so what is that “anxiety”?  Panic disorder?  Post-traumatic stress disorder?  Social phobia?  Yes, there are certainly some alcoholics with those “pre-existing conditions” who use alcohol as a way of coping with them, but they are a small minority.  (And even within that minority, I’m sure there are those whose drinking has been a remarkably helpful coping mechanism, despite the fact that it would be far more supportive of our treatment paradigm if they just took a pill that we prescribed to them.)

For the great majority of people, however, the use of alcohol (or another addictive behavior) is a way to deal with a vastly more complicated set of anxieties, deficiencies, and an inability to deal with the here and now in a more direct way.  And that’s not necessarily a bad thing.  In fact, it can be quite adaptive.

Unfortunately, when we psychiatrists hear that word “anxiety,” we immediately think of the anxiety disorders as written in the DSM-IV and think that all anxious alcoholics have a clear “dual diagnosis” which—if we diagnose correctly—can be treated according to some formula.  Instead, we ought to think about anxiety in a more diffuse and idiosyncratic way:  i.e., the cognitive, emotional, behavioral, and existential phenomena that uniquely affect each of our patients.  (I’m tempted to venture into psychodynamic territory and describe the tensions between unconscious drives and the patient’s ego, but I’m afraid that might be too quaint for the sensibilities of the 21st century mind.)

Thus, I predict that the rigorous, controlled (and expensive, and time-consuming) studies of medications and other interventions for “comorbid” anxiety disorders and alcoholism are doomed to fail.  This is because alcoholism and anxiety are not comorbid in the sense that black and white combine to form the stripes of a zebra.  Rather, they make various shades of grey.  Some greys are painful and everlasting, while others are easier to erase.  By simplifying them as black+white and treating them accordingly, we miss the point that people are what matter, and that the “grey areas” are key to understanding each patient’s anxieties, insecurities, and motivations—in other words, to figuring out how each patient is unique.


I Just Don’t Know What (Or Whom) To Believe Anymore

July 2, 2011

de-lu-sion [dih-loo-zhuhn] Noun.  1. An idiosyncratic belief or impression that is firmly maintained despite being contradicted by what is generally accepted as reality, typically a symptom of mental disorder.

The announcement this week of disciplinary action against three Harvard Medical School psychiatrists (which you can read about here and here and here and here) for violating that institution’s conflict-of-interest policy comes at a pivotal time for psychiatry.  Or at least for my own perceptions of it.

As readers of this blog know, I can be cynical, critical, and skeptical about the medicine I practice on a daily basis.  This arises from two biases that have defined my approach to medicine from Day One:  (1) a respect for the patient’s point of view (which, in many ways, arose out of my own personal experiences), and (2) a need to see and understand the evidence (probably a consequence of my years of graduate work in basic molecular neuroscience before becoming a psychiatrist).

Surprisingly, I have found these attributes to be in short supply among many psychiatrists—even among the people we consider to be our leaders in the field.  And Harvard’s action against Biederman, Spencer, and Wilens might unfortunately just be the tip of the iceberg.

I entered medical school in the late 1990s.  I recall one of my preclinical lectures at Cornell, in which the chairman of our psychiatry department, Jack Barchas, spoke with breathless enthusiasm about the future of psychiatry.  He expounded passionately about how the coming era would bring deeper knowledge of the biological mechanisms of mental illness and new, safer, more effective medications that would vastly improve our patients’ lives.

My other teachers and mentors were just as optimistic.  The literature at the time was filled with studies of new pharmaceuticals (the atypical antipsychotics, primarily), molecular and neuroimaging discoveries, and novel research into genetic markers of illness.  As a student, it was hard not to be caught up in the excitement of the coming revolution in biological psychiatry.

But I now wonder whether we may have been deluding ourselves.  I have no reason to think that Dr Barchas was lying to us in that lecture at Cornell, but those who did the research about which he pontificated may not have been giving us the whole story.  In fact, we’re now learning that those “revolutionary” new drugs were not quite as revolutionary as they appeared.  Drug companies routinely hid negative results and designed their studies to make the new drugs appear more effective.  They glossed over data about side effects, and frequently ghostwrote books and articles that appeared to come from their (supposedly unbiased) academic colleagues.

This went on for a long time.  And for all those years, these same academics taught the current generation of psychiatrists like me, and lectured widely (for pay, of course) to psychiatrists in the community.

In my residency years in the mid-2000s, for instance, each one of my faculty members (with only one exception that I’m aware of) spoke for drug companies or was being paid to do research on drugs that we were actively prescribing in the clinic and on the wards.  (I didn’t know this at the time, of course; I learned this afterward.)  And this was undoubtedly the case in other top-tier academic centers throughout the country, having a trickle-down effect on the practice of psychiatry worldwide.

Now, there’s nothing wrong with academics doing research or being paid to do it.  For me, the problem is that those two “pillars” by which I practice medicine (i.e., respect for the patient’s well-being, and a desire for hard evidence) were not the priorities of much of this clinical research.  Patients weren’t always getting better with these new drugs (certainly not in the long run), and the data were finessed and refined in ways that embellished the main message.  This was, of course, exacerbated by the big paychecks many of my academic mentors received.  Money has a remarkable way of influencing what people say and how (and how often) they say it.

But how is a student—or a practicing doc in the community who is several decades out of medical school—supposed to know this?  In my opinion, those who teach medical students and psychiatry residents probably should not be on a pharma payroll or give promotional talks for drugs.  These “academic leaders” are supposed to be fair, neutral, thoughtful authorities who make recommendations based on patient outcomes data and nothing else.  Isn’t that why we have academic medical centers in the first place?  (Hey, at least we know that drug reps are paid handsome salaries & bonuses by drug companies… But don’t we expect university professors to be different?)

Just as a series of little white lies can snowball into an enormous unintended deception, I’m afraid that the last 10-20 years of cumulative tainted messages (sometimes deliberate, sometimes not) about the “promises” of psychiatry have created a widespread shared delusion about what we can offer our patients.  And if that’s too much of an exaggeration, then we might at least agree that our field now suffers a crisis of confidence in our leaders.  As Daniel Carlat commented in a story about the Harvard action: “When I get on the phone now and talk to a colleague about a study… [I ask] ‘was this industry funded, and can we trust the study?'”

It may be too late to avoid irreparable damage to this field or our confidence in it.  But at least some of this is coming to light.  If nothing else, we’re taking a cue from our area of clinical expertise, and challenging the delusional thought processes that have driven our actions for many, many years.

