
Biomarker Envy V: BDNF and Cocaine Relapse

October 18, 2011

The future of psychiatric diagnosis and treatment lies in the discovery and development of “biomarkers” of pathological processes.  A biomarker, as I’ve written before, is something that can be measured or quantified, usually from a biological specimen like a blood sample, which helps to diagnose a disease or predict response to a treatment.

Biomarkers are the embodiment of the new “personalized medicine”:  instead of wasting time talking to a patient, asking questions, and possibly drawing incorrect conclusions, the holy grail of a biomarker allows the clinician to order a simple blood test (or brain scan, or genotype) and make a decision about that specific patient’s case.  But “holy grail” status is elusive, and a recent study from the Yale University Department of Psychiatry, published this month in the journal Biological Psychiatry, provides yet another example of a biomarker which is not quite there—at least not yet.

The Yale group, led by Rajita Sinha, PhD, was interested in a simple question: what makes newly abstinent cocaine addicts relapse?  To answer it, they set out to identify a biological marker of relapse potential.  If such a biomarker exists, they argue, it could not only tell us more about the biology of cocaine dependence, craving, and relapse, but might also be used clinically, to identify patients who need more aggressive treatment or other measures to maintain their abstinence.

The researchers chose BDNF, or brain-derived neurotrophic factor, as their biomarker.  In studies of cocaine-dependent animals forced into prolonged abstinence, BDNF levels rise when the animals are exposed to a stressor; moreover, cocaine-seeking is associated with BDNF elevations, and BDNF injections can promote cocaine-seeking behavior in these same abstinent animals.  In their recent study, Sinha’s group took 35 cocaine-dependent (human) patients and admitted them to the hospital for 4 weeks.  After three weeks of NO cocaine, they measured blood levels of BDNF and compared them to the levels measured in “healthy controls.”  Then they followed all 35 cocaine users for the next 90 days to determine which of them would relapse during this three-month period.

The results showed that the abstinent cocaine users generally had higher BDNF levels than the healthy controls (see figure below, A).  However, when the researchers looked at the patients who relapsed on cocaine during the 3-month follow-up (n = 23), and compared them to those who stayed clean (n = 12), they found that the relapsers, on average, had higher BDNF levels than the non-relapsers (see figure, B).  Their conclusion is that high levels of BDNF may predict relapse.
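To make the group comparison concrete, here is a minimal sketch in Python of the kind of analysis described above, using entirely invented BDNF values (the paper’s raw data are not reproduced here, and the cutoff below is arbitrary): compare serum BDNF between relapsers and non-relapsers, then ask how well a single threshold would separate the two groups.

```python
# Hypothetical illustration only -- these values are invented, not Sinha et al.'s data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated serum BDNF (ng/mL) for 23 relapsers and 12 non-relapsers,
# with the relapsers drawn from a distribution with a higher mean.
bdnf_relapsers = rng.normal(loc=25.0, scale=6.0, size=23)
bdnf_abstinent = rng.normal(loc=18.0, scale=6.0, size=12)

# Group-level comparison, analogous to "relapsers had higher BDNF, on average."
t, p = stats.ttest_ind(bdnf_relapsers, bdnf_abstinent, equal_var=False)
print(f"Welch t-test: t = {t:.2f}, p = {p:.3f}")

# A crude "biomarker" decision rule: flag anyone above a chosen cutoff as high risk.
cutoff = 21.0  # arbitrary threshold, for illustration only
sensitivity = np.mean(bdnf_relapsers > cutoff)   # relapsers correctly flagged
specificity = np.mean(bdnf_abstinent <= cutoff)  # non-relapsers correctly cleared
print(f"Cutoff {cutoff} ng/mL: sensitivity {sensitivity:.0%}, specificity {specificity:.0%}")
```

Even in this toy version, the two distributions overlap considerably, which is why a statistically significant group difference does not automatically yield a useful test for the individual patient sitting in front of you.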

These results are intriguing, and Dr Sinha presented her findings at the California Society of Addiction Medicine (CSAM) annual conference last week.  Audience members—all of whom treat drug and alcohol addiction—asked about how they might measure BDNF levels in their patients, and whether the same BDNF elevations might be found in dependence on other drugs.

But one question really got to what I think is the heart of the matter.  Someone asked Dr Sinha: “Looking back at the 35 patients during their four weeks in the hospital, were there any characteristics that separated the high BDNF patients from those with low BDNF?”  In other words, were there any behavioral or psychological features that might, in retrospect, be correlated with elevated BDNF?  Dr Sinha responded, “The patients in the hospital who seemed to be experiencing the most stress or who seemed to be depressed had higher BDNF levels.”

Wait—you mean that the patients at high risk for relapse could be identified by talking to them?  Dr Sinha’s answer shows why biomarkers have little place in clinical medicine, at least at this point.  Sure, her group showed correlations of BDNF with relapse, but nowhere in their paper did they describe personal features of the patients (psychological test scores, psychiatric complaints, or even responses to a checklist of symptoms).  So those who seemed “stressed or depressed” had higher BDNF levels, and—as one might predict—relapsed.  Did this (clinical) observation really require a BDNF blood test?

Dr Sinha’s results (and the results of others who study BDNF and addiction) make a strong case for the role of BDNF in relapse or in recovery from addiction.  But as a clinical tool, not only is it not ready for prime time, but it distracts us from what really matters.  Had Dr Sinha’s group spent four weeks interviewing, analyzing, or just plain talking with their 35 patients instead of simply drawing blood on day 21, they might have come up with some psychological measures which would be just as predictive of relapse—and, more importantly, which might help us develop truly “personalized” treatments that have nothing to do with BDNF or any biochemical feature.

But I wouldn’t hold my breath.  As Dr Sinha’s disclosures indicate, she is on the Scientific Advisory Board of Embera NeuroTherapeutics, a small biotech company working to develop a compound called EMB-001.  EMB-001 is a combination of oxazepam (a benzodiazepine) and metyrapone.  Metyrapone inhibits the synthesis of cortisol, the primary stress hormone in humans.  Dr Sinha, therefore, is probably more interested in the stress responses of her patients (which would include BDNF and other stress-related proteins and hormones) than in whether they say they feel like using cocaine or not.

That’s not necessarily a bad thing.  Science must proceed this way.  If EMB-001 (or a treatment based on BDNF) turns out to be an effective therapy for addiction, it may save hundreds or thousands of lives.  But until science gets to that point, we clinicians must always remember that our patients are not just lab values, blood samples, or brain scans.  They are living, thinking, and speaking beings, and sometimes the best biomarker of all is our skilled assessment and deep understanding of the patient who comes to us for help.


How To Retire At Age 27

September 4, 2011

A doctor’s primary responsibility is to heal, and all of our efforts and resources should be devoted to that goal.  At times, it is impossible to restore a patient to perfect health and he or she must unfortunately deal with some degree of chronic disability.  Still other times, though, the line between “perfect health” and “disability” is blurred, and nowhere (in my opinion) is this more problematic than in psychiatry.

To illustrate, consider the following example from my practice:

Keisha (not her real name), a 27-year-old resident of a particularly impoverished and crime-ridden section of a large city, came to my office for a psychiatric intake appointment.  I reviewed her intake questionnaire; under the question “Why are you seeking help at this time?” she wrote: “bipolar schizophrenia depression mood swings bad anxiety ADHD panic attacks.”  Under “past medications,” she listed six different psychiatric drugs (from several different categories).  She had never been hospitalized.

When I first saw her, she appeared overweight but otherwise in no distress.  An interview revealed no obvious thought disorder, no evidence of hallucinations or delusions, and no complaints of significant mood symptoms.  During the interview, she told me, “I just got my SSDI so I’m retired now.”  I asked her to elaborate.  “I’m retired now,” she said.  “I get my check every month, I just have to keep seeing a doctor.”  When I asked why she was on disability, she replied, “I don’t know, whatever they wrote, bipolar, mood swings, panic attacks, stuff like that.”  She had been off medications for over two months (with no apparent symptoms); she said she really “didn’t notice” any effect of the drugs, except the Valium 20 mg per day, which “helped me settle down and relax.”

Keisha is a generally healthy 27-year-old.  She graduated from high school (something rare in her community, actually) and took some nursing-assistant classes at a local vocational school.  She dropped out, however, because “I got stressed out.”  She tried looking for other work but then found out from a family member that she could “apply for disability.”  She applied and was denied, but then called a lawyer who specialized in disability appeals and, after about a year of resubmissions, received the good news that she would get Social Security Disability, ensuring a monthly check.

How is Keisha “disabled”?  She’s disabled because she went to see a doctor and, presumably, told that doctor that she couldn’t work because of “stress.”  That doctor probably asked her a series of questions like “Are you unable to work because of your depressed mood?” and “Do you find it hard to function in social situations because of your mood swings?”, and she answered in the affirmative.  I’ve seen dozens—if not hundreds—of disability questionnaires, and they all ask the same questions.

I have no doubt that Keisha lives a stressful life.  I’ve driven through her part of town.  I’ve read about the turf wars being waged by the gangs there.  I know that her city has one of the highest murder rates in America, unemployment is high, schools are bad, and drug abuse and criminal activity are widespread.  I would be surprised if anyone from her neighborhood was not anxious, depressed, moody, irritable, or paranoid.

But I am not convinced that Keisha has a mental illness.

Lest you think that I don’t care about Keisha’s plight, I do.  Keisha may very well be struggling, but whether this is “major depression,” a true “anxiety disorder,” or simply a reaction to her stressful situation is unclear.  Unfortunately, psychiatry uses simple questions to arrive at a diagnosis—and there are no objective tests for mental illness—so a careless (or unscrupulous) provider can easily apply a label, designating Keisha’s situation as a legitimate medical problem.  Combine that with law firms eager to help people get “the government money they deserve,” and the very real fact that money and housing actually do help people like Keisha, and we’ve created the illusion that mental illness is a direct consequence of poverty and that the way to treat it is to give out monthly checks.

As a physician, I see this as counter-therapeutic for a number of reasons.  With patients like Keisha, I often wonder, what exactly am I “treating”?  What constitutes success?  An improvement in symptoms?  (What symptoms?)  Or successfully getting her on the government dole?  And when a patient comes to me, already on disability after receiving a diagnosis of MDD (296.34) or panic disorder (300.21) from some other doctor or clinic, I can’t just say, “I’m sorry about your situation, but let’s see what we can do to overcome it together,” because there’s no incentive to overcome it.  (This is from someone who dealt with severe 307.51 for sixteen years, but who also had the promise of a bright future to help overcome it.)

Moreover, making diagnoses where there is no true pathology artificially inflates disease prevalence, further enlarging state and county mental health bureaucracies.  It enables massive over-prescription of medications that are expensive (atypical antipsychotics like Seroquel and Zyprexa), addictive (stimulants and benzodiazepines), or simply ineffective (SSRIs).  And far from helping the downtrodden who claim to be its “victims,” this situation instead rewards drug companies and doctors, some of whom prefer serving this population because of the assembly-line nature of this sort of practice:  see the patient, make the diagnosis, write the script, and see them again in 3-6 months.

The bottom line is, here in America we’ve got thousands (perhaps millions?) of able-bodied people who, for one socioeconomic (i.e., not psychiatric) reason or another, can’t find work and have fallen upon psychiatric “disability” as their savior.  I’d love to help them, but, almost by definition, I cannot.  And neither can any other doctor.  Sure, they struggle and suffer, but their suffering is relieved by a steady job, financial support, and yes, direct government assistance.  These are not part of the psychiatric armamentarium.  It’s not medicine.

Psychiatry should not be a tool for social justice.  (We’ve tried that before.  It failed.)  Using psychiatric labels to help patients obtain taxpayers’ money, unless absolutely necessary and legitimate, is wasteful and dishonest.  More importantly, it harms the very souls we have pledged an oath to protect.


Antidepressants and “Stress” Revisited

April 13, 2011

If you have even the slightest interest in the biology of depression (or if you’ve spent any time treating depression), you’ve heard about the connection between stress and depressive illness.  There does seem to be a biological—maybe even a causative—link, and in many ways, this seems intuitive:  Stressful situations make us feel sad, hopeless, helpless, etc—many of the features of major depression—and the physiological changes associated with stress probably increase the likelihood that we will, in fact, become clinically depressed.

To cite a specific example, a steroid hormone called cortisol is elevated during stress, and—probably not coincidentally—is also usually elevated in depression.  Some researchers have attempted to treat depression by blocking the effects of cortisol in the brain.  Although we don’t (yet) treat depression this way, it is a tantalizing hypothesis, if for no reason other than the fact that it makes more intuitive sense than the “serotonin hypothesis” of depression, which has little evidence to back it up.

A recent article in Molecular Psychiatry (pdf here) adds another wrinkle to the stress hormone/depression story.  Researchers from King’s College London, led by Christoph Anacker, show that antidepressants actually promote the growth and development of new nerve cells in the hippocampus, and both processes depend on the stress hormone receptor (also known as the glucocorticoid receptor or GR).

Specifically, the group performed their experiments in a cell culture system using human hippocampal progenitor cells (this avoids some of the complications of doing such experiments in animals or humans).  They found that neither sertraline (Zoloft) alone, nor stress steroids (in this case, dexamethasone or DEX) alone, caused cells to proliferate, but when given together, proliferation occurred—in other words, the hippocampal progenitor cells started to divide rapidly.  [see figure above]

Furthermore, when they continued to incubate the cells with Zoloft, the cells “differentiated”—i.e., they turned into cells with all the characteristics of mature nerve cells.  But in this case, differentiation was inhibited by dexamethasone. [see figure at right]

To make matters more complicated, the differentiation process was also inhibited by RU486, a blocker of the receptor for dexamethasone (and other stress hormones).  What’s amazing is that RU486 prevented Zoloft-induced cell differentiation even in the absence of stress hormones.  (However, it did prevent the damaging effects of dexamethasone, consistent with what we might predict.) [see figure at left]

The take-home message is that both antidepressants and dexamethasone (i.e., stress hormones) are required for cell proliferation (first figure), whereas antidepressants alone drive cell differentiation and maturation (second figure).  Furthermore, both processes can be blocked by RU486, a stress hormone receptor antagonist (third figure).

All in all, this research makes antidepressants look “good.”  (Incidentally, the researchers also got the same results with amitriptyline and clomipramine, two tricyclic antidepressants, so the effect is not unique to SSRIs like Zoloft.)  However, it raises serious questions about the relationship between stress hormones and depression.  If antidepressants work by promoting the growth and development of hippocampal neurons, then this research also says that stress hormones (like dexamethasone) might be required, too—at least for part of this process (i.e., they’re required for growth/proliferation, but not for differentiation).

This also raises questions about the effects of RU486.  Readers may recall the enthusiasm surrounding RU486 a few years ago as a potential treatment for psychotic depression, promoted by Alan Schatzberg and his colleagues at Corcept Therapeutics.  Their argument (a convincing one, at the time) was that if we could block the effects of the unusually high cortisol levels seen in severe, psychotic depression, we might treat the disease more effectively.  However, clinical trials of their drug Corlux (= RU486) were unsuccessful.  The experiments in this paper suggest one possible explanation why:   Instead of simply blocking stress hormones, RU486 blocks the stress hormone receptor, which seems to be the key intermediary for the positive effects of antidepressants (see the third figure).

The Big Picture:   I’m well aware that this is how science progresses:  we continually refine our hypotheses as we collect new data, and sometimes we learn how medications work only after we’ve been using them successfully for many years.  (How long did it take to learn the precise mechanism of the salicylates, the family of compounds that gave us aspirin?  More than two millennia, at least.)  But here we have a case in which antidepressants seem to work in a fashion far different from what we originally thought (incidentally, the word “serotonin” is used only three times in their 13-page article!!).  Moreover, the new mechanism (making new brain cells!!) is quite significant.  And the involvement of stress hormones in this new mechanism doesn’t seem very intuitive or “clean” either.

It makes me wonder (yet again) what the heck these drugs are doing.  I’m not suggesting we call a moratorium on the further use of antidepressants until we learn exactly how they work, but I do suggest that we practice a bit of caution when using them.  At the very least, we need to change our “models” of depression.  Fast.

Overall, I’m glad this research is being done so that we can learn more about the mechanisms of antidepressant action (and develop new, more specific agents… maybe ones that target the glucocorticoid receptor).  In the meantime, we ought to pause and recognize that what we think we’re doing may be entirely wrong.  Practicing a little humility is good every once in a while, especially for a psychopharmacologist.


Stress, Illness, and Biological Determinism

March 27, 2011

Two interesting articles caught my attention this week, on the important subject of “stress” and its relationship to human disease—both psychological and physical.  Each offers some promising ways to prevent stress-related disease, but they also point out some potential biases in precisely how we might go about doing so.

A piece by Paul Tough in the New Yorker profiled Nadine Burke, a San Francisco pediatrician (the article is here, but it’s subscription-only; another link might be here).  Burke works in SF’s poverty-stricken Bayview-Hunters Point neighborhood, where health problems are rampant.  She recognized that in this population, the precursors of disease are not just the usual suspects like poor access to health care, diet/lifestyle, education, and high rates of substance use, but also the impact of “adverse childhood experiences” or ACEs.

Drawing upon research by Vincent Felitti and Robert Anda, Burke found that patients who were subjected to more ACEs (such as parental divorce, physical abuse, emotional neglect, being raised by a family member with a drug problem, etc.) had worse outcomes as adults.  These early traumatic experiences had an effect on the development of illnesses such as cancer, heart disease, respiratory illness, and addiction.

The implication for public health, obviously, is that we must either limit exposure to stressful events in childhood, or decrease their propensity to cause long-term adverse outcomes.  The New Yorker article briefly covers some biological research in the latter area, such as how early stress affects DNA methylation in rats, and how inflammatory markers like C-reactive protein are elevated in people who were mistreated as children.  Burke is quoted as saying, “In many cases, what looks like a social situation is actually a neurochemical situation.”  And a Harvard professor claims, “this is a very exciting opportunity to bring biology into early-childhood policy.”

With words like “neurochemical” and “biology” (not to mention “exciting”) being used this way, it doesn’t take much reading-between-the-lines to assume that the stage is being set for a neurochemical intervention, possibly even a “revolution.”  One can almost hear the wheels turning in the minds of academics and pharmaceutical execs, who are undoubtedly anticipating an enormous market for endocrine modulators, demethylating agents, and good old-fashioned antidepressants as ways to prevent physical disease in the children of Hunters Point.

To its credit, the article stops short of proposing that all kids be put on drugs to eliminate the effects of stress.  The author emphasizes that Burke’s clinic engages in biofeedback, child-parent therapy, and other non-pharmacological interventions to promote secure attachment between child and caregiver.  But in a society that tends to favor the “promises” of neuropharmacology—not to mention patients who might prefer the magic elixir of a pill—is this simply window-dressing?  A way to appease patients and give the impression of doing good, until the “real” therapies, medications, become available?

More importantly, are we expecting drugs to reverse the effects of social inequities, cultural disenfranchisement, and personal irresponsibility?

***

The other paper is a study published this month in the Journal of Epidemiology and Community Health.  In this paper, researchers from Sweden measured “psychological distress” and its effects on long-term disability in more than 17,000 “average” Swedish adults.  The subjects were given a baseline questionnaire in 2002, and researchers followed them over a five-year period to see how many received new disability benefits for medical or psychiatric illness.

Not surprisingly, there was a direct correlation between high “psychological distress” and high rates of disability.  It is, of course, quite possible that people who had high baseline distress were distressed about a chronic and disabling health condition, which worsened over the next five years.  But the study also found that even low levels of psychological stress at baseline were significantly correlated with the likelihood of receiving a long-term disability benefit, for both medical and psychiatric illness.

The questionnaire used by the researchers was the General Health Questionnaire (GHQ-12), a deceptively simple, 12-question survey of psychological distress (a typical question is “Have you recently felt like you were under constant strain?” with four possible answers, from “not at all” up to “much more than usual”), scored on a 12-point scale.  Interestingly, people who scored only 1 point out of 12 were twice as likely to receive a disability award as those who scored zero, and the rates only went up from there.
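For readers unfamiliar with the instrument, here is a minimal sketch of how that 0-to-12 total arises under the standard binary (“GHQ”) scoring method, in which each item’s two “healthier” responses count 0 and its two “distressed” responses count 1 (the response wording below is simplified for illustration):

```python
# Illustrative sketch of GHQ-12 binary (0-0-1-1) scoring, which yields a 0-12 total.
# Response labels are simplified; the actual questionnaire varies wording by item.
OPTION_SCORES = {
    "not at all": 0,
    "no more than usual": 0,
    "rather more than usual": 1,
    "much more than usual": 1,
}

def ghq12_score(responses):
    """Sum the binary item scores; higher totals indicate more psychological distress."""
    if len(responses) != 12:
        raise ValueError("GHQ-12 expects exactly 12 item responses")
    return sum(OPTION_SCORES[r] for r in responses)

# Endorsing 'constant strain' slightly more than usual, and nothing else, already
# yields a score of 1 -- the level at which the Swedish cohort showed roughly double
# the rate of subsequent disability awards compared with a score of 0.
responses = ["rather more than usual"] + ["no more than usual"] * 11
print(ghq12_score(responses))  # -> 1
```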

I won’t delve into other details of the results here, but as Sweden resembles the US in its high rates of psychiatric “disability” (between 1990 and 2007, the percentage of disability awards due to psychiatric illness rose from ~15% to over 40%), the implication is clear: even mild psychological “distress” is a risk factor for future illness—both physical and mental—and to reverse this trend, the effects of this distress must be treated or prevented in some way.

***

Both of these articles—from different parts of the world, using different measurement instruments, and looking at somewhat different outcomes—nevertheless reach the same conclusion:  early life stress is a risk factor for future disease.  This is a long-recognized phenomenon (for an easily accessible exploration of the topic, read Why Zebras Don’t Get Ulcers, by Stanford’s Robert Sapolsky, a former mentor of mine).

But what do we do with this knowledge?  My fear is that, rather than looking at ways to minimize “stress” in the first place (through social programs, education, and other efforts to raise awareness of the detrimental effects of stress), we as a society are instead conditioned to think about how we can intervene with a drug or some other way to modulate the “neurochemical situation,” as Nadine Burke put it.  In other words, we’re less inclined to act than to react, and our reactions are essentially chemical in nature.

As a psychiatrist who has worked with an inner-city population for many years, I’m already called upon to make diagnoses and prescribe medications not for what are obviously (to me) clear-cut cases of significant and disabling mental illness, but rather for the accumulated effects of stress and trauma.  (I’ll write more about this fascinating interface of society and biology in the future.)   True, sometimes the diagnoses do “fit,” and indeed sometimes the medications work.  But I am doing nothing to prevent the initial trauma, nor do I feel that I am helping people cope with their stress by telling them to take a pill once or twice a day.

We as a society need to make sure we don’t perpetuate the false promises of biological determinism.  I applaud Nadine Burke and I’m glad epidemiologists (and the New Yorker) are asking serious questions about precursors of disease.  But let’s think about what really helps, rather than looking solely to biology as our savior.

(Thanks to Michael at The Trusting Heart for leading me to the New Yorker article.)

