Measuring The Immeasurable

Is psychiatry a quantitative science?  Should it be?

Some readers might say that this is a ridiculous question.  Of course it should be quantitative; that’s what medicine is all about.  Psychiatry’s problem, they argue, is that it’s not quantitative enough.  Psychoanalysis—that most qualitative of “sciences”—never did anyone any good, and most psychotherapy is, likewise, just a bunch of hocus pocus.  A patient saying he feels “depressed” means nothing unless we can measure how depressed he is.  What really counts is a number—a score on a screening tool or checklist, frequency of a given symptom, or the blood level of some biomarker—not some silly theory about motives, drives, or subconscious conflicts.

But sometimes measurement can mislead us.  If we’re going to measure anything, we need to make sure it’s something worth measuring.

By virtue of our training, physicians are fond of measuring things.  What we don’t realize is that the act of measurement itself leads to an almost immediate bias.  As we assign numerical values to our observations, we start to define values as “normal” or “abnormal.”  And medical science dictates that we should make things “normal.”  When I oversee junior psychiatry residents or medical students, their patient presentations are often filled with such statements as “Mr. A slept for 5 hours last night” or “Ms. B ate 80% of her meals,” or “Mrs. C has gone two days without endorsing suicidal ideation,” as if these are data points to be normalized, just as potassium levels and BUN/Cr ratios need to be normalized in internal medicine.

The problem is, they’re not potassium levels or BUN/Cr ratios.  When those numbers are “abnormal,” there’s usually some underlying pathology which we can discover and correct.  In psychiatry, what’s the pathology?  For a woman who attempted suicide two days ago, does it really matter how much she’s eating today?  Does it really matter whether an acutely psychotic patient (on a new medication, in a chaotic inpatient psych unit with nurses checking on him every hour) sleeps 4 hours or 8 hours each night?  Even the questions that we ask patients—“are you still hearing voices?”, “how many panic attacks do you have each week?” and the overly simplistic “can you rate your mood on a scale of 1 to 10, where 1 is sad and 10 is happy?”— attempt to distill a patient’s overall subjective experience into an elementary quantitative measurement or, even worse, into a binary “yes/no” response.

Clinical trials take measurement to an entirely new level.  In a clinical trial, often what matters is not a patient’s overall well-being or quality of life (although, to be fair, there are ways of measuring this, too, and investigators are starting to look at this outcome measure more closely), but rather a HAM-D score, a MADRS score, a PANSS score, a Y-BOCS score, a YMRS score, or any one of an enormous number of other assessment instruments.  Granted, if I had to choose, I’d take a HAM-D score of 4 over a score of 24 any day, but does a 10- or 15-point decline (typical in some “successful” antidepressant trials) really tell you anything about an individual’s overall state of mental health?  It’s hard to say.

One widely used instrument, the Clinical Global Impression scale, endeavors to measure the seemingly immeasurable.  Developed in 1976 and still in widespread use, the CGI scale has three parts:  the clinician evaluates (1) the severity of the patient’s illness relative to other patients with the same diagnosis (CGI-S); (2) how much the patient’s illness has improved relative to baseline (CGI-I); and (3) the efficacy of treatment.  It is incredibly simple.  Basically, it’s just a way of asking, “So, doc, how do you think this patient is doing?” and assigning a number to it.  In other words, subjective assessment made objective.

The problem is, the CGI has been criticized precisely for that reason—it’s too subjective.  As such, it is almost never used as a primary outcome measure in clinical trials.  Any pharmaceutical company that tries to get a drug approved on the basis of CGI improvement alone would probably be laughed out of the halls of the FDA.  But what’s wrong with subjectivity?  Isn’t everything that counts subjective, when it really comes down to it?  Especially in psychiatry?  The depressed patient who emerges from a mood episode doesn’t describe himself as “80% improved,” he just feels “a lot better—thanks, doc!”  The psychotic patient doesn’t necessarily need the voices to disappear, she just needs a way to accept them and live with them, if at all possible.  The recovering addict doesn’t think in terms of “drinking days per month,” he talks instead of “enjoying a new life.”

Nevertheless, measurement is not a fad, it’s here to stay.  And as the old saying goes, resistance is futile.  Electronic medical records, smartphone apps to measure symptoms, online checklists—they all capitalize on the fact that numbers are easy to record and store, easy to communicate to others, and satisfy the bean counters.  They enable pharmacy benefit managers to approve drugs (or not), they enable insurers to reimburse for services (or not), and they allow pharmaceutical companies to identify and exploit new markets.  And, best of all, they turn psychiatry into a quantitative, valid science, just like every other branch of medicine.

If this grand march towards increased quantification persists, the human science of psychiatry may cease to exist.  Unless we can replace these instruments with outcome measures that truly reflect patients’ abilities and strengths, rather than pathological symptoms, psychiatry may be replaced by an impersonal world of questionnaires, checklists, and knee-jerk treatments.  In some settings, that’s what we have now.  I don’t think it’s too late to salvage the human element of what we do.  A first step might be simply to use great caution when we’re asked to give a number, measure a symptom, or perform a calculation on something that is intrinsically a subjective phenomenon.  And to remind ourselves that numbers don’t capture everything.

32 Responses to Measuring The Immeasurable

  1. Rob Lindeman says:

    “Unless we can replace these instruments with outcome measures that truly reflect patients’ abilities and strengths, rather than pathological symptoms, psychiatry may be replaced by an impersonal world of questionnaires, checklists, and knee-jerk treatments. In some settings, that’s what we have now.”

    SOME settings? You’re being generous.

    BTW, I was going to comment that the title to this post should have been “Measuring the UN-measurable”, which would have been more correct if you were simply writing about objective testing of un-testable parameters. But if you are alluding to the depth of human experience, that is beyond measure (immeasurable), you are spot-on.

  2. Ah, trying to be completely objective about subjectivity – a fool’s errand. And how good is measurement in any science when there are thousands and thousands of interacting variables that all feed into one another simultaneously? And we are always counting on the subjective observations of both patients and doctors/scientists whenever we do clinical trials in psychiatry.

    When will we ever be able to measure subjective experience with lab tests? Probably never. These facts are true of all social sciences.

  3. Nathan says:

    I think it is disingenuous to try to disentangle qualitative/quantitative differences as subjective and objective, and then criticize how quantitative/objective measures do not capture all depth of human experience. All measurement, whether qualitative or quantitative, is subjective and never will capture complete “depth” of experience. This is true in physical sciences as well as in human ones. I agree then that just trying to quantify everything for the sake of it is not necessarily all that helpful, but that doesn’t mean qualitative methods of inquiry are truer, more complete, better at capturing change, or more helpful.

    I think there is a continued misunderstanding and misuse of quantitative methods in analyzing outcomes. As mental health care continues down an evidence-based path (which I agree with), mental health researchers and professionals think that anything with a number attached to it becomes something of value or a meaningful outcome. So instead of actually identifying, pursuing, and evaluating outcomes, people identify, count, and draw illogical conclusions from outputs, because they are easier to quantify. Basically, what happens is that instead of measuring what is important, people measure what is easy, and then try to show that what they measured is important, when it often is not, or at least not by itself.

    This does not mean that outcomes cannot be assessed quantitatively, it just means that people have to be critical, thoughtful, clear, and creative in identifying outcomes (which in mental health are often very subjective) and how to demonstrate that they occurred and, hopefully, the mechanism of what led to them. Qualitative inquiries have a role in this, but that does not diminish or end the usefulness of quantitative methods in mental health. Actually, I find that a lot of research in medicine generally uses quantitative methods poorly, to answer poorly defined questions with data that is not really helpful. This kind of usage is of course even more meaningless, or even harmful, in psychiatry, where outcomes are often very complex.

    I agree that it is important to pursue research and practice that leads to benefits (outcomes) to patients. Quantitative methods can be very helpful in this process, as long as people are more sophisticated in actually identifying and pursuing outcomes, as opposed to trying to just thoughtlessly assign quantitative values to subjective experiences or try to make process outputs look like outcomes.

    • Nathan – I did not mean to imply that quantitative research was useless or that clinical trials should not take place. It is one more valuable source of information. I actually agree with everything you said.

      My problem is people who over-state what their quantitative results actually mean, and who also denigrate all clinical experience, no matter how widespread, as useless. Widespread clinical experience is not synonymous with “anecdotal.”

      • Nathan says:

        Hi Dr. Allen,

        I wasn’t attempting to say that I thought quantitative research was useless and apologize if I wrote it that way.

        I think even anecdotal data is still data and potentially useful, but like you say of folks who over-state the meaning of their quantitative results (which I agree happens all the time and to the detriment of advancing the field), many people overstate the meaning, value, and generalizability of their anecdotal data. I think we both agree that the goal of any mode of inquiry/treatment is to show meaningful and robust outcomes for the people who participate in it.

        I agree with you that clinical experience can be immensely valuable and really informative. Many experienced clinicians treat patients effectively just based on accumulated heuristics. I do think it is one of those places where systematic inquiry, even “quantifying clinicians’ subjective experience,” may be useful, because I also see that clinicians often attribute their treatment successes/failures to irrelevant factors, miss important aspects of patients’ experiences that could be helpful for treatment planning because they are accustomed to doing/thinking in a particular way, or just do something really well without much intention behind it that, if investigated more systematically, could meaningfully advance treatment for other people.

        Ultimately, I do believe that medicine/psychiatry/psychotherapy are interventions, that their effects should be demonstrable by scientific investigation, that it is only ethical to intervene if the benefits of an intervention outweigh the risks of intervening or not intervening, and that the burden of showing that those benefits outweigh the risks is on the interventionist. I know this is difficult in the human sciences, but it can be done. I believe the alternative is the pre-scientific understanding of medicine before the 20th century, which probably did as much harm as (if not more than) it helped.

  4. David K says:

    Loved this essay, despite being an empiricist, an advocate of measurement and precision! But, oh, are you correct, especially in this: “The problem is, they’re not potassium levels or BUN/Cr ratios. When those numbers are ‘abnormal,’ there’s usually some underlying pathology which we can discover and correct. In psychiatry, what’s the pathology?” The problem is precisely that we do not know very much about the underlying pathologies we deal with, as evidenced by CBT’s efficacy over most pharma solutions in trials. Obviously, you ask more questions, and speculate more, too, than you declare, state or answer, but that uncertainty is a real reflection of our ignorance. But, the empathy we so much need? You emphasized that beautifully!

    • Rob Lindeman says:

      Why do you say “we don’t know very much about the underlying pathologies that we deal with” when you mean “we don’t know anything about the underlying pathologies that we deal with”?

      Do we soften the language out of embarrassment? Or are we still deluding ourselves that we have found pathology that we have NOT found?

  5. David K says:

    I just thought of this Jonah Lehrer article in January’s WIRED, which is, to some degree, germane to this topic.

    http://www.wired.com/magazine/2011/12/ff_causation/all/1

  6. David K says:

    My primary concern with the use of quant. data as the ‘short form’ for what would have been clinical description is precisely that those data are used in lieu of (as a substitute for) clinical descriptions.

    A secondary concern (because it is less common) is that colleagues who frequently quote measurement outcome data, also tend to substitute the collection of that data (i.e., having a patient fill out or verbally respond to a measurement tool) for the sort of interaction with the patient we used to undertake (when we weren’t restricting appointment times to 15 mins.!).

    Neither of my concerns is an indictment of data collection and analysis, nor of their utility. Measurement and empathic interaction are not mutually exclusive, but often reflect each other.

    But, because data (and its collection process) can be ‘short hand’ for description (and interaction), we tend to use them (and misuse them). We like brevity. And, since the measures are so often weak or inaccurate, or (at best) imprecise, we then fail our patients.

    • David K. – You are exactly right. I can always find way more complete and accurate information (about a neurologically-intact human being) from a clinical interview than from any symptom checklist – or any other psychological tests for that matter. Without follow-up questions, simple answers to a self report instrument often mask as much information as they provide.

  7. Santa Diego, MD says:

    I love your site! I found it via a link from Dr Pho’s website. I agree with your comments, but I like to quantify/describe psychiatric states as accurately as possible. This was the goal of DSM-III and subsequent revisions. I like to liberally supplement my thinking with symptom measures. I learned about psychometrics at the University of Minnesota, where we were taught that the measures are simply structured samples of behavior. The clinician’s job is to figure out what they mean. I like to use symptom checklists for the purpose of helping me get more data to address in the clinical interview. There is a risk of the test replacing clinical judgment, but this does not have to happen. For example, the Beck Depression Inventory was initially designed as a structured interview, and later changed into a self-report measure. In the Rush et al. original text on CBT, the Beck Inventory is given at each visit and used as a basis for that week’s review of symptoms. It is tempting to use these measures as a replacement for detailed clinical interviewing, but it is never appropriate.

    I firmly believe we can measure subjective states. The science of self-report is alive and well. Psychiatry hasn’t been particularly quick to adopt it.

    • Altostrata says:

      Having been on the receiving end of those checklists and questionnaires, I can say they have a lot to do with my vow never, ever, ever to discuss my emotional reality with an MD again.

      If your situation doesn’t fit into a checklist — example: adverse drug events — the doctor will shoehorn it in to come up with a result he or she can easily comprehend, one that will put the onus on your “pathology” and won’t require addressing the shortcomings of pharmaceutical treatment.

      I firmly believe we cannot measure subjective states, and MDs, misled by such instruments into thinking they are omniscient, are becoming ever more incapable of interpreting patient self-reports in any way meaningful or constructive for the patient’s care.

    • “I firmly believe we can measure subjective states. The science of self-report is alive and well. Psychiatry hasn’t been particularly quick to adopt it.”

      Exactly so. Our current checklists and questionnaires don’t begin to approach this, which is why Altostrata and other thoughtful people recoil at the reductionism and dehumanization of these instruments. The report and measurement of subjective states was studied in detail by Charles Tart (check google or wikipedia) and others, and is largely an untapped area in Western thinking.

  8. Insomniac says:

    “Does it really matter whether an acutely psychotic patient (on a new medication, in a chaotic inpatient psych unit with nurses checking on him every hour) sleeps 4 hours or 8 hours each night?”

    I wouldn’t pay much attention to that data. All the nurse does is peek her head into the room and see if the patient is breathing. Often it’s pitch black, the patients’ backs are to the door, and their eyes are wide open. I knew every time the door opened because I was awake. You would be much better off listening to what the patient says about how they’re sleeping.

  9. jamzo says:

    see “The idolatry of the surrogate,” BMJ 2011;343:d7995, 28 December 2011

    “easier to measure surrogate outcomes are often used instead of patient important outcomes such as death, quality of life, or functional capacity when assessing treatments”

    “our obsession with surrogates is damaging patient care”

  10. Carol Levy says:

    The problem with many of these ‘tests’ is that the outcome can be changed by vagaries of the patient’s experience. As my anecdote: my trigeminal neuralgia was out of control, I also had anaesthesia dolorosa, and I was advised, at one point, that rational suicide was acceptable in my case.
    When Dr. Wm Sweet decided a mini frontal lobotomy (mesencephalic tractotomy, which I refused, of course) was in order (“you’ll still have the pain, you just won’t care that you do”), he insisted on IQ and psych testing prior to surgery, for comparison with after (it took some persuading on my part to convince him I was not agreeable to this op.).
    When I answered truthfully to questions such as ‘I have numbness in parts of my body,’ ‘my head hurts all the time,’ ‘I feel life is not worth living,’ and to social questions that did not take into consideration that I was virtually housebound and in a new city, so alone, the results came back with a psychiatric disorder. When I took it again later, on my own, and answered as someone without chronic facial pain, numbness, etc., the results were much different.
    Even clinical testing is ultimately subjective.

  11. Daniel Zigman says:

    Great article. Totally agree with the above that rating scales should not replace clinical diagnoses. But they can be helpful in monitoring response to treatment.

    I would argue that real craziness is trying to do meta-analyses with mean scores from rating scales like the HAM-D. On an individual patient level, if a patient improves from 25 to 10 on the HAM-D, then it likely reflects a significant improvement from severely depressed to mildly depressed. However, it is conceptually ridiculous to average out scores between different treatment arms in clinical trials and compare them as you would a real measurement (like weight or blood pressure).

    • Nathan says:

      In experiments, scores aren’t averaged between different groups, but group averages are compared to each other. This allows for statistical tests that let people make a reasoned estimate of the extent of an active treatment effect beyond placebo, for the population of people included in the study, on the variable assessed (as measured by hopefully validated and reliable indicators, like the HAM-D).

      The other way to use scales like the HAM-D in research is to tally percentages of people who reach clinically meaningful benchmarks. For research in depression, these are often considered “response” (participant score is reduced by half) and “remission” (participant score is <7). Instead of averaging, the percentages of people who respond and remit in the treatment and control arms are compared, showing if and how often people in a treatment show meaningful reduction of depressive symptoms.
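
      Just to make that arithmetic concrete, here is a minimal Python sketch (the scores and helper functions are made up for illustration, not taken from any real trial) contrasting a mean-change comparison with the response/remission tallies described above:

        # Toy numbers, purely for illustration -- not from any actual trial.

        def mean_change(baseline, endpoint):
            """Average point drop in one arm (the 'mean change' compared across arms)."""
            return sum(b - e for b, e in zip(baseline, endpoint)) / len(baseline)

        def response_and_remission(baseline, endpoint):
            """Percent 'response' (score cut in half) and 'remission' (endpoint < 7)."""
            n = len(baseline)
            responders = sum(1 for b, e in zip(baseline, endpoint) if e <= b / 2)
            remitters = sum(1 for e in endpoint if e < 7)
            return 100.0 * responders / n, 100.0 * remitters / n

        # hypothetical treatment and placebo arms
        drug_start, drug_end = [24, 26, 22, 25, 23, 27], [10, 14, 6, 20, 5, 12]
        plac_start, plac_end = [25, 23, 24, 26, 22, 24], [18, 12, 21, 24, 16, 11]

        print("mean change: drug %.1f vs placebo %.1f points"
              % (mean_change(drug_start, drug_end), mean_change(plac_start, plac_end)))
        print("drug:    response %.0f%%, remission %.0f%%" % response_and_remission(drug_start, drug_end))
        print("placebo: response %.0f%%, remission %.0f%%" % response_and_remission(plac_start, plac_end))

      The same hypothetical data can look quite different depending on which of these summaries you report, which is part of the point being made above about mean change versus responder/remitter percentages.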

      These uses differ from how assessment tools like the HAM-D are used in clinical practice. Using assessments like the HAM-D in practice is not to determine the efficacy of a kind of treatment or to build evidence to support its use, but simply to build an understanding of some discrete symptoms people experience and to see how these symptoms change or don't change over time. Patients don't come with control groups, and assessments can help patients/clinicians decide whether to change treatment or to focus on particular symptoms that patients care about more than others. It may show that while symptoms are reduced, patients are still distressed, allowing for a more informed dialogue about what else is distressing or impairing.

      Ultimately, clinical decision making should not solely rely on these quantitative measurement scales, but neither should treatment efficacy be determined by clinical impression, nor clinical treatment/diagnosis be based on the subjective understanding of a clinician. Remember, all measurement, even seemingly easily quantifiable measures like weight and blood pressure, is subjective (the act of measuring is subjective, and their meanings are subjective and contextual), but especially in research, when trying to support treatments (particularly drugs) that come with serious risks, having the methodological and mathematical tools to assess mechanisms and extent of change is important.

      • Altostrata says:

        This is an example of logic based on nonsense. True, if the HAM-D measured anything meaningful, averages would have meaning. An average of nonsense is nonsense. GIGO.

        In clinical use, the doctor might as well be using phrenology or Tarot to ascertain the patient’s condition.

      • Nathan says:

        I agree that this dilemma rests on whether or not scales like the HAM-D have any meaningful validity. I think I agree with most people here that the HAM-D or other scales should not be the sole or primary (or secondary) instrument for diagnoses, but it does give a language to describe some symptoms and gauge changes across many dimensions. Whether or not and to what extent these symptoms are important to people seeking support is more helpful in shaping treatment, and even more useful are the goals and desires that patients want to experience. This does not mean that having some way, and again, a hopefully valid and reliable way, to measure change in symptoms isn’t important for useful research or helpful in some elements of tracking treatment.

        I think this is different than using phrenology as a means for ascertaining someone’s condition. I would agree just asking is more useful and people are the experts on what they are feeling, but change doesn’t always happen smoothly, some things get better, some things get worse, sometimes things like not sleeping are more important to people than not feeling pleasure, and I have found that having assessment tools (not necessarily the HAM-D) can be useful in getting patients/clinicians on the same page and more informed when deciding to keep or alter a treatment (whether drugs, psychotherapy, physical activity, diet, or whatever). Assessments like the HAM-D should not at all be deterministic in themselves (because they aren’t) or be used as the only way to get information or make a diagnosis (they are useless out of context), but I still think measurement has tremendous value in researching benefits and harms of treatment options and can be a helpful tool for many folks in practice.

        Though I would prefer some better tools than the HAM-D for research, it is entrenched and difficult to replace as the benchmark measure because it has been used for so long that it allows statistical comparisons to be made. Again, if it isn’t so valid, then it’s not useful in the first place and must be changed. Personally, having been a patient and researcher, I find assessments like the HAM-D more useful in research, and personalized self-designed measures (which probably lack validity and reliability) more useful in treatment.

      • Daniel Zigman says:

        I guess that I did not make myself clear. My problem is not with using rating scales in clinical trials. My problem is with comparing _mean change_ between groups in clinical trials. The clinically meaningful outcome measure is the difference in % responders and % remitters, and not mean change.

        See: http://www.ncbi.nlm.nih.gov/pubmed/18621509

  12. Altostrata says:

    Nathan, it’s interesting that, as a researcher, you believe research and clinical practice occur in different worlds.

    In my observation as a patient:

    Bad research —> bad clinical practice —> patient injury

    Now, I guess you could say research is not responsible for practice, which is true, the individual doctor is technically responsible for what happens to his or her patients.

    But in the recent history of psychiatric drugs, you can readily see how research’s misinforming doctors has caused an untold amount of misery and real damage to patients.

    So, if you admit HAM-D is a poor instrument to assess patients in the real world, how can it be worthwhile in the research world? It sounds like you’re saying HAM-D and similar faux-quantifying instruments enable research — a laudable end in itself.

    But what kind of research do they enable? For 20 years, it’s been crap, and not just because of sloppy subject selection, where people with mild-to-moderate depression somehow get into studies on major depression because it’s convenient for the recruiter.

    If a researcher wants to express his or her bias in diagnosis or assessing improvement, phrenology or Tarot would work just as well and be a lot more fun.

    • DavidK says:

      This isn’t really a serious problem because most practitioners/clinicians do not read the literature in our field! (LOL)

    • Nathan says:

      I think research and practice most certainly do operate differently and often at the expense of the care of patients. I would say that responsible practice MUST be based on rigorous research, and I definitely agree that the research done on psychiatric drugs has been horrendous. The vast majority of psychiatric medication research in recent history has been funded by drug companies. Their research methodology on drugs is biased towards the intended drug effect, and even with that bias, the drugs often fail to meaningfully beat placebo and even in the short term come with a lot of risk (let alone the poorly understood long-term effects and the understudied, terrible withdrawal effects, which we are indebted to you for spotlighting).

      I don’t think it should be a surprise to anyone that drug companies do and want to make giant profits; it is actually mandated that they try to maximize profit for their shareholders. Knowing this is really helpful because it should force all of us to take drug company research with at least some grains of salt.

      To mitigate the exploitative effects of the unbridled profit motive of drug companies, I think that checks come from 3 primary sources: psychiatrists/independent medical researchers, the public/government, and patients/users.

      I hold psychiatrists, both in practice and academia, to be really complicit in the popularization of invalid research findings. Unlike the drug companies, who I expect to be in business to make money as the prime goal, I expect doctors/psychiatrists, who take oaths to do no harm, receive tremendous societal respect, are considered experts in health, and are trusted by their patients, to have their patients’ wellbeing as their primary professional concern. While I don’t presume to know every psychiatrist’s motive, they certainly went along with drug companies in popularizing false research, not checking really crappy research methodology, and telling people lies about medications (their effectiveness, why they work, etc.). Whether they did this because many doctors get poor research training, were swayed by drug company perks or the too-good-to-be-true hope in the meds that drug companies were selling, attributed improvement in their patients to drug effects while ignoring any harm that happened, already recognized that the data supporting psychodynamic psychotherapy (a hallmark of psychiatric training) were poor (or more accurately, good data were not available), or realized that they could make a lot more money running psychopharm practices, psychiatrists, psychiatrist-researchers, medical schools, residency programs, and continuing education programs all went along with this.

      While I do believe that the psychiatry establishment had the most to gain (besides the drug companies) in choosing to side with bad research, the public/government, which in our country does regulate claims of pharmaceutical companies for the public’s benefit, also failed. The FDA’s policies for getting indications for drug use are poor, especially in terms of longer-term effectiveness, longer-term risks, and discontinuation. The FDA also did not make the data it receives public, which hopefully would have tempered the enthusiasm for psychiatric medications starting in the late ’80s. I’m sure very many well-meaning and well-educated psychiatrists would have come to more nuanced conclusions about many of the drugs they prescribe if they had access to the very many negative trials drug companies conducted and not just the skewed positive ones that get published in medical journals (which also advertise drugs). Some of the FDA’s funding comes from drug companies themselves, which is a setup for disaster for a regulating agency. Additionally, the US is one of two countries that allow direct-to-consumer advertising, allowing drug companies to sell their products directly to consumers who are often just desperate for any relief, and their advertisements shape public awareness and the belief that psychiatric drugs could be helpful beyond what the data on them suggest.
      Ultimately, this really has hurt people who trusted their doctors and government to have their interests at heart when they sought help for distress. While I do believe people should be very involved in their treatment, people in distress go to experts precisely because they are distressed/afraid/impaired/overwhelmed, and experts say that they can help and that you should trust them. At this point, the cat is out of the bag, and patient advocates, like you, Alto, have actually been the most active check on drug company research, but I think we both agree that it certainly hasn’t been enough.

      At the end of the day, drug companies don’t do good research (many are now closing their psychiatric drug research departments because even their biased research cannot show effects that are profitable enough for them to invest in), psychiatrists both probably made a lot of money by supporting the transition to being more drug-centered and also believed (too wishfully) that these medications were helpful, medical researchers did not do enough independent research (many academic studies were funded by drug company grants and conducted by people who consult for drug companies), research done by non-psychiatrists was less respected, regulating agencies failed to have high enough standards to assess for drug effectiveness/risks and allowed drug companies to market their products to consumers themselves with their own messaging, and patients who were really in need were looking for anything that could help and were willing to trust their doctors and the FDA.
      So where does this leave us? Certainly with the need for better research. The problem is that, as this blog has talked about, we do not have a strong, unified theory of mental health from which to make reasonable hypotheses about what might be helpful for supporting well-being. A lot of things seem like common sense (making sure people have basic needs met, have meaningful work, and maintain physical fitness), but we are only coming to this research now, and frankly, many things that are helpful are not really in the bounds of psychiatry/mental health/medicine, and it makes sense that psychiatrists and other mental health researchers were not theorizing mental health or treatment of distress in ways that were not medical/psychotherapeutic. Also, human experience is difficult to assess, and if you do want quality research on potential treatment effects, including risks, having meaningful assessments is important, but they are hard to come by. Ultimately, doctors need to have some humility that while they do want to help people, they may not always have the tools, and should not just prescribe medicine or another intervention that lacks research support just because it is popular. Doctors should be doing research and pressuring researchers (independent and pharmaceutical alike) for better research that can actually show with more confidence the effects of treatments before they are widely utilized, and then also better evaluate their own practices. Patients, potential patients, and allies need to pressure the government for more and better-allocated funding for research done by people not affiliated with companies that sell the products/treatments being investigated, as well as popularize their own experiences in their own voices so that policymakers and doctors/mental health professionals learn what our issues are outside of the consulting room/hospital.

      That was a long rant, but I did also want to address your comment on mild/moderately depressed people in clinical trials. The DSM (also a flawed diagnostic system with no convincing validity and limited reliability) subtypes Major Depressive Disorder along many dimensions, including severity, so people can be diagnosed with MDD mild or moderate subtype. Clinicians chime in, but I believe this is based primarily on the number of different symptoms people present with, not just the severity of their distress (e.g., just having the 5 minimum symptoms to meet criteria is mild, while having more symptoms would be more severe). The HAM-D is a linearly scaled assessment, so classifications of severity of depression are determined at equally arbitrary cut-off points (e.g., <7 no depression, 8-14 mild depression, etc.). The HAM-D is more interested in globally rating the severity of depression symptoms, not the number of symptoms people present. This becomes a semantics issue because many people who meet criteria for a certain severity of Major Depressive Disorder in the DSM might show a different severity on the HAM-D. The DSM criteria are meant to diagnose, and the HAM-D is intended to measure changes in severity of depression in a way that can be statistically useful, but the terms mild/moderate/severe do not always align between the two. At least that is my understanding. I guess the point of this, though, is that at least the final stages of research on new treatments should be conducted with participants who are representative of the client/patient population seen in clinical practice. The FDA does not require this, and I believe it should. Some drug companies and non-company researchers investigate this after drugs are approved, but by then, most psychiatrists don’t seem to have access to or care much about the findings.
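
      As a purely illustrative sketch of how mechanical those cut-off points are: the “no depression” and “mild” bands below follow the <7 / 8-14 convention mentioned above, while the “moderate” and “severe” boundaries are assumed here for illustration only, since published conventions differ.

        def ham_d_severity(total_score):
            """Map a HAM-D total to a severity label.
            Illustrative cut-offs only; published conventions vary."""
            if total_score <= 7:
                return "no depression"
            elif total_score <= 14:
                return "mild"
            elif total_score <= 23:   # assumed boundary, for illustration
                return "moderate"
            else:                     # assumed boundary, for illustration
                return "severe"

        # A DSM "mild" diagnosis (few symptoms) can still land anywhere on this scale.
        print([ham_d_severity(s) for s in (5, 12, 19, 28)])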

      • Altostrata says:

        A most excellent rant.

        The lack of consistency of “depression” diagnosis across studies (and in practice) is indeed a problem. But one cannot dismiss it as merely a semantics issue since for decades it’s been *major* depression that’s the indication for medication, which Kirsch et al.’s studies revived.

        There has been a laissez-faire attitude by study sponsors towards lax recruiting practices. Instruments such as the HAM-D were only the beard. The elision of distinction in depression must have been intentional, at least in part — pharma could not have made vast billions off medicating MDD alone.

  13. […] then what other data (Type 2) should you collect?  Treatment history?  Questionnaires?  Biomarkers?  A comprehensive assessment of social context?  And ultimately, how do you use this information […]

  14. Altostrata says:

    Here’s an example of how useless these “instruments” are:

    http://www.bmj.com/content/344/bmj.e1566

    56 elderly (mean age 85.3 years), ill people with dementia in nursing homes were taken off antidepressants over a week. The Cornell scale of depression in dementia and the neuropsychiatric inventory (10 item version) were used to evaluate symptoms.

    Result: “….significantly more patients worsened in the discontinuation group than in the continuation group (32 (54%) v 17 (29%)….

    Conclusions Discontinuation of antidepressant treatment in patients with dementia and neuropsychiatric symptoms leads to an increase in depressive symptoms, compared with those patients who continue with treatment.”

    The study did not log withdrawal symptoms. Incredibly, the design had passed ethics reviews.

  15. leejcaroll says:

    Strange, I got this notice about a reply which sent me to this page:
    in response to Carol Levy:

    The problem with many of these ‘tests’ is that the outcome can be changed by vagaries of the patient’s experience. As my anecdote: my trigeminal neuralgia was out of control, I also had anaesthesia dolorosa, and I was advised, at one point, that rational suicide was acceptable in my case. When Dr. Wm Sweet decided a […]

    >It must be the only lab blog which has a dedicated setcion for Restaurant suggestions and Food.(sigh) well Bosco also commented that we have so much time to blog about what we eat etc. Did you read the posting about the lab’s eating habits?

    Bosco is a name I know from another blog, and I see no Gabriel here, nor this reply, but it definitely came from my post.

    Thought I’d let you know something is off.

    • stevebMD says:

      Thanks— I’ve been dealing with a sudden influx of spam over the last 72 hours or so. Something’s amiss. I’ll try to figure it out; in the meantime, enjoy the robots. 🙂

  16. leejcaroll says:

    Think it may be WordPress, because I am on another WordPress blog and that is where I know “Bosco”. The remark seems to me to be related to one of the posts from over there.
    (Lots of spammers out lately, everywhere it seems. You’d think they’d have better things to do with their time.)
    Thanks, Steve. You too. ((*_*))
