Psychopharm R&D Cutbacks II: A Response to Stahl

August 28, 2011

A lively discussion has emerged on the NEI Global blog and on Daniel Carlat’s psychiatry blog about a recent post by Stephen Stahl, NEI chairman, pop(ular) psychiatrist, and promoter of psychopharmaceuticals.  The post pertains to the exodus of pharmaceutical companies from neuroscience research (something I’ve blogged about too), and the changing face of psychiatry in the process.

Dr Stahl’s post is subtitled “Be Careful What You Ask For… You Just Might Get It” and, as one might imagine, it reads as a scathing (some might say “ranting”) reaction against psychiatry’s detractors and constraints: the “anti-psychiatry” crowd, the recent rules restricting pharmaceutical marketing to doctors, and those who complain about Big Pharma funding medical education.  He singles out Dr Carlat as an antipsychiatrist, implying that Carlat believes mental illnesses are inventions of the drug industry, medications are “diabolical,” and drugs exist solely to enrich pharmaceutical companies.  [Not quite Carlat’s point of view, as a careful reading of his book, his psychopharmacology newsletter, and, yes, his blog, would show.]

While I do not profess to have the credentials of Stahl or Carlat, I have expressed my own opinions on this matter on my blog, and I’d like to weigh in on the NEI post as well.

With respect to Dr Stahl (and I do respect him immensely), I think he must re-evaluate his influence on our profession.  It is huge, and not always in a productive way.  Case in point: for the last two months I have worked in a teaching hospital, and I can say that Stahl is seen as something of a psychiatry “god.”  He has an enormous wealth of knowledge, his writing is clear and persuasive, and the materials produced by NEI present difficult concepts in a clear way.  Stahl’s books are directly quoted—unflinchingly—by students, residents, and faculty.

But there’s the rub.  Stahl has done such a good job of presenting his (i.e., the psychopharmacology industry’s) view of things that it is rarely challenged or questioned.  The “pathways” he suggests for depression, anxiety, psychosis, cognition, insomnia, obsessions, drug addiction, medication side effects—basically everything we treat in psychiatry—are accompanied by theoretical models for how some new pharmacological agent might (or will) affect these pathways, when in fact the underlying premises or the proposed drug mechanisms—or both—may be entirely wrong.  (BTW, this is not a criticism of Stahl; it is simply a statement of fact.  Psychiatry as a neuroscience is decidedly still in its infancy.)

When you combine Stahl’s talent with his extensive relationships with drug companies, the result is potentially dangerous.  To cite just two examples, Stahl has written articles (in widely distributed “throwaway” journals) making compelling arguments for the use of low-dose doxepin (Silenor) and L-methylfolate (Deplin) in insomnia and depression, respectively, when the actual data suggest that their generic (or OTC) equivalents are just as effective.  Many similar Stahl productions are included as references or handouts in drug companies’ promotional materials or websites.

How can this be “dangerous”?  Isn’t Stahl just making hypotheses and letting doctors decide what to do with them?  Well, not really.  In my experience, if Stahl says something, it’s no longer a hypothesis, it becomes the truth.

I can’t tell you how many times a student (or even a professor of mine) has explained to me “Well, Stahl says drug A works this way, so it will probably work for symptom B in patient C.”  Unfortunately, we don’t have the follow-up discussion when drug A doesn’t treat symptom B; or patient C experiences some unexpected side effect (which was not predicted by Stahl’s model); or the patient improves in some way potentially unrelated to the medication.  And when we don’t get the outcome we want, we invoke yet another Stahl pathway to explain it, or to justify the addition of another agent.  And so on and so on, until something “works.”  Hey, a broken clock is still correct twice a day.

I don’t begrudge Stahl his articles and books; they’re very well written, and the colorful pictures are fun to look at, making psychiatry almost as easy as painting by numbers.  I also (unlike Carlat) don’t get annoyed when doctors do speaking gigs to promote new drugs.  (When these paid speakers are also responsible for teaching students in an academic setting, however, that’s another issue.)  Furthermore, I accept the fact that drug companies will try to increase their profits by expanding market share and promoting their drugs aggressively to me (after all, they’re companies—what do we expect them to do??), or by showing “good will” by underwriting CME, as long as it’s independently confirmed to be without bias.

The problem, however, is that doctors often don’t ask for the data.  We don’t ask whether Steve Stahl’s models might be wrong (or biased).  We don’t look closely at what we’re presented (either in a CME lesson or by a drug rep) to see whether it’s free from commercial influence.  And, perhaps most distressingly, we don’t listen enough to our patients to determine whether our medications actually do what Stahl tells us they’ll do.

Furthermore, our ignorance is reinforced by a diagnostic tool (the DSM) which requires us to pigeonhole patients into a small number of diagnoses that may have no biological validity; a reimbursement system that encourages a knee-jerk treatment (usually a drug) for each such diagnosis; an FDA approval process that gives the illusion that diagnoses are homogeneous and that all patients will respond the same way; and only the most basic understanding of what causes mental illness.  It creates the perfect opportunity for an authority like Stahl to come in and tell us what we need to know.  (No wonder he’s a consultant for so many pharmaceutical companies.)

As Stahl writes, the departure of Big Pharma from neuroscience research is unfortunate, as our existing medications are FAR from perfect (despite Stahl’s texts making them sound pretty darn effective).  However, this “breather” might allow us to pay more attention to our patients and think about what else—besides drugs—we can use to nurse them back to health.  Moreover, refocusing our research efforts on the underlying psychology and biology of mental illness (i.e., research untainted by the need to show a clinical drug response or to get FDA approval) might open new avenues for future drug development.

Stahl might be right that the anti-pharma pendulum has swung too far, but that doesn’t mean we can’t use this opportunity to make great strides forward in patient care.  The paychecks of some docs might suffer.  Hopefully our patients won’t.


Do Antipsychotics Treat PTSD?

August 23, 2011

Do antipsychotics treat PTSD?  It depends.  That seems to be the best response I can give, based on the results of two recent studies on this complex disorder.  A better question, though, might be: what do antipsychotics treat in PTSD?

One of these reports, a controlled, double-blinded study of the atypical antipsychotic risperidone (Risperdal) for the treatment of “military service-related PTSD,” was featured in a New York Times article earlier this month.  The NYT headline proclaimed, somewhat unceremoniously:  “Antipsychotic Use is Questioned for Combat Stress.”  And indeed, the actual study, published in the Journal of the American Medical Association (JAMA), demonstrated that a six-month trial of risperidone did not improve patients’ scores on a scale of PTSD symptoms, compared with placebo.

But almost simultaneously, another paper was published in the online journal BMC Psychiatry, stating that Abilify—a different atypical antipsychotic—actually did help patients with “military-related PTSD with major depression.”

So what are we to conclude?  Even though there are some key differences between the studies (which I’ll mention below), a brief survey of the headlines might leave the impression that the two reports “cancel each other out.”  In reality, I think it’s safe to say that neither study contributes very much to our treatment of PTSD.  But it’s not because of the equivocal results.  Instead, it’s a consequence of the premises upon which the two studies were based.

PTSD, or post-traumatic stress disorder, is an incredibly complicated condition.  The diagnosis was first given to Vietnam veterans who, for years after their service, experienced symptoms of increased physiological arousal, avoidance of stimuli associated with combat, and continual re-experiencing (in the form of nightmares or flashbacks) of the trauma they endured or witnessed.  It’s essentially a re-formulation of conditions that were, in earlier years, labeled “shell shock” or “combat fatigue.”

Since the introduction of this disorder in 1980 (in DSM-III), the diagnostic umbrella of PTSD has grown to include victims of sexual and physical abuse, traumatic accidents, natural disasters, terrorist attacks (like those of September 11), and other criminal acts.  Some have even argued that poverty or unfortunate psychosocial circumstances may qualify as the “traumatic” event.

Not only are the types of stressors that cause PTSD widely variable, but so are the symptoms that ultimately develop.  Some patients complain of minor but persistent symptoms, while others experience infrequent but intense exacerbations.  Similarly, the neurobiology of PTSD is still poorly understood, and may vary from person to person.  And we’ve only just begun to understand protective factors for PTSD, such as the concept of “resilience.”

Does it even make sense to say that one drug can (or cannot) treat such a complex disorder?  Take, for instance, the outcome measure used in the JAMA article: the Clinician-Administered PTSD Scale, or CAPS, considered the “gold standard” for PTSD diagnosis.  The CAPS includes 30 items, ranging from sleep disturbances to concentration difficulties to “survivor guilt.”

It doesn’t take a cognitive psychologist or neuroscientist to recognize that these 30 domains—all features of what we consider “clinical” PTSD—could be explained by just as many, if not more, neural pathways, and may be experienced in entirely different ways, depending on one’s psychological makeup and the nature of one’s past trauma.

In other words, saying that Risperdal is “not effective” for PTSD is like saying that acupuncture is not effective for chronic pain, or that a low-carb diet is not an effective way to lose weight.  Statistically speaking, these interventions might not help most patients, but in some, they may indeed play a crucial role.  We just don’t understand the disorders well enough.
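To make that point concrete, here’s a toy simulation (illustrative only; the numbers are invented and not drawn from either study) of how a drug that dramatically helps a minority of patients can still produce an unimpressive average result in a trial:

```python
import random

random.seed(0)

# Toy model: 1 in 5 patients is a true "responder" whose symptom score
# drops an extra 15 points on the drug; everyone else shows only
# placebo-level change.  (All numbers are invented, for illustration.)
N = 100                # patients per arm
PLACEBO_CHANGE = -5    # average improvement from placebo effect alone
RESPONDER_BONUS = -15  # extra improvement in the responsive subgroup

def trial_arm(on_drug: bool) -> float:
    changes = []
    for _ in range(N):
        change = PLACEBO_CHANGE + random.gauss(0, 10)  # noisy baseline change
        if on_drug and random.random() < 0.20:         # 20% truly respond
            change += RESPONDER_BONUS
        changes.append(change)
    return sum(changes) / N

print(f"placebo arm mean change: {trial_arm(False):+.1f}")
print(f"drug arm mean change:    {trial_arm(True):+.1f}")
# In expectation, the drug-vs-placebo difference is only 0.20 * 15 = 3
# points, even though one patient in five improved substantially; a trial
# judged on the group average can easily miss that subgroup.
```

None of this proves that risperidone helps a hidden subgroup of PTSD patients, of course; it only shows why a negative average result doesn’t settle the question.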

[By the way, what about the other study, which reported that Abilify was helpful?  Well, this study was a retrospective review of patients who were prescribed Abilify, not a randomized, placebo-controlled trial.  And it did not use the CAPS, but the PCL-M, a shorter survey of PTSD symptoms.  Moreover, it only included 27 of the 123 veterans who agreed to take Abilify, and I cannot, for the life of me, figure out why the other 96 were excluded from their analysis.]

Anyway, the bottom line is this:  PTSD is a complicated, multifaceted disorder—probably a combination of disorders, similar to much of what we see in psychiatry.  To say that one medication “works” or another “doesn’t work” oversimplifies the condition almost to the point of absurdity.  And for the New York Times to publicize such a finding only gives more credence to the misconception that a prescription medication is (or has the potential to be) the treatment of choice for all patients with a given diagnosis.

What we need is not another drug trial for PTSD, but rather a better understanding of the psychological and neurobiological underpinnings of the disease; a comprehensive analysis of which symptoms respond to which drugs and which aspects of the disorder are not amenable to medication management; and an appreciation of how individuals differ in their experience of the disorder and in the tools (pharmacological and otherwise) they can use to overcome their despair.  Anything else is a failure to recognize the human aspects of the disease, and an issuance of false hope to those who suffer.


How To Get Rich In Psychiatry

August 17, 2011

Doctors choose to be doctors for many reasons.  Sure, they “want to help people,” they “enjoy the science of medicine,” and they give several other predictable (and sometimes honest) explanations in their med school interviews.  But let’s be honest.  Historically, becoming a doctor has been a surefire way to ensure prestige, respect, and a very comfortable income.

Nowadays, in the era of shrinking insurance reimbursements and increasing overhead costs, this is no longer the case.  If personal riches are the goal, doctors must graze other pastures.  Fortunately, in psychiatry, several such options exist.  Let’s consider a few.

One way to make a lot of money is simply by seeing more patients.  If you earn a set amount per patient—and you’re not interested in the quality of your work—this might be for you.  Consider the following, recently posted by a community psychiatrist to an online mental health discussion group:

Our county mental health department pays my clinic $170 for an initial evaluation and $80 for a follow-up.  Of that, the doctor is paid $70 or $35, respectively, for each visit.  There is a wide range of patients/hour since different doctors have different financial requirements and philosophies of care.  The range is 3 patients/hour to 6 patients/hour.

This payment schedule incentivizes output.  A doctor who sees three patients an hour makes $105/hr and spends 20 minutes with each patient.  A doctor who sees 6 patients an hour spends 10 minutes with each patient and makes $210.  One “outlier” doctor in our clinic saw, on average, 7 patients an hour, spending roughly 8 minutes with each patient and earning $270/hr.  His clinical notes reflected his rapid pace…. [but] Despite his shoddy care of patients, he was tolerated at the clinic because he earned a lot of money for the organization.
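The arithmetic is worth making explicit.  Here’s a quick back-of-the-envelope sketch in Python, using the $35 follow-up share quoted above (the quoted $270/hr at the seven-patient pace presumably folds in a few $70 initial evaluations):

```python
# Back-of-the-envelope: hourly earnings vs. time per patient, assuming
# follow-up visits only at the doctor's $35 share (figures taken from the
# discussion-group post quoted above).
DOCTOR_SHARE_FOLLOWUP = 35  # dollars per follow-up visit

for patients_per_hour in (3, 6, 7):
    minutes_per_patient = 60 / patients_per_hour
    hourly_earnings = patients_per_hour * DOCTOR_SHARE_FOLLOWUP
    print(f"{patients_per_hour} patients/hr: "
          f"{minutes_per_patient:.1f} min each, ${hourly_earnings}/hr")
```

Halving the length of each visit doubles the hourly take; nothing in the fee schedule prices in quality.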

If this isn’t quite your cup of tea, you can always consider working in a more “legit” capacity, like the Department of Corrections.  You may recall the Bloomberg report last month about the prison psychiatrist who raked in over $800,000 in one year—making him the highest-paid state employee in California.  As it turns out, that was a “data entry error.”  (Bloomberg issued a correction.)  Nevertheless, the cat was out of the bag: prison psychiatrists make big bucks (largely for prescribing Seroquel and benzos).  With seniority and “merit-based increases,” one prison shrink in California was able to earn over $600,000—and that’s for a shrink who was found to be “incompetent.”  Maybe they pay the competent ones even more?

Another option is to be a paid drug speaker.  I’m not referring to the small-time local doc who gives bland PowerPoint lectures to his colleagues over a catered lunch of even blander ham-and-cheese sandwiches.  No sir.  I’m talking about the psychiatrists hired to fly all around the country to give talks at the nicest five-star restaurants in the nation’s biggest drug markets, er, cities.  The advantage here is that you don’t even have to be a great doc.  You just have to own a suit, follow a script, speak well, and enjoy good food and wine.

As most readers of this blog know, ProPublica recently published a list of the sums paid by pharmaceutical companies to doctors for these “educational programs.”  Some docs walked away with checks worth tens—or hundreds—of thousands of dollars.  And, not surprisingly, psychiatrists were the biggest offenders, er, earners.  I guess there is gold in explaining the dopamine hypothesis or the mechanism of neurotransmitter reuptake inhibition to yet another doctor.

Which brings me to perhaps the most tried-and-true way to convert one’s medical education into cash:  become an entrepreneur.  Discovering a new drug or unraveling a new disease process might revolutionize medical care and improve the lives of millions.  And throughout the history of medicine, numerous physician-researchers have converted their groundbreaking discoveries (or luck) into handsome profits.

Unfortunately, in psychiatry, paradigm shifts of the same magnitude have been few and far between.  Instead, the road to riches has been paved by the following formula: (1) “Buy in” to the prevailing disease model (regardless of its biological validity); (2) Develop a drug that “fits” into the model; (3) Find some way to get the FDA to approve it; (4) Promote it ruthlessly; (5) Profit.

In my residency program, for example, several faculty members founded a biotech company whose sole product was a glucocorticoid receptor antagonist which, they believed, might treat psychotic depression (the rationale being that stress hormones are elevated in depression).  The drug didn’t work (rendering their stock options worth only millions instead of tens of millions).  But that didn’t stop them.  They simply searched for other ways to make their compound relevant.  As I write, they’re looking at it as a treatment for Cushing’s syndrome (a more logical—if far less profitable—indication).

The psychiatry blogger 1boringoldman has written a great deal about the legions of esteemed academic psychiatrists who have gotten caught up in the same sort of rush (no pun intended) to bring new drugs to market.  His posts are definitely worth a read.  Frankly, I see no problem with psychiatrists lending their expertise to a commercial enterprise in the hopes of capturing some of the windfall from a new blockbuster drug.  Everyone else in medicine does it, why not us?

The problem, as mentioned above, is that most of our recent psychiatric meds are not blockbusters.  Or, to be more accurate, they don’t represent major improvements in how we treat (or even understand) mental illness.  They’re largely copycat solutions to puzzles that may have very little to do with the actual pathology—not to mention psychology—of the conditions we treat.

To make matters worse, when huge investments in new drugs don’t pay off, investigators (including the psychiatrists expecting huge dividends) look for back-door ways to capture market share, rather than going back to the drawing board to refine their initial hypotheses.  Take, for instance, RCT Logic, a company whose board includes the ubiquitous Stephen Stahl and Maurizio Fava, two psychiatrists with extensive experience in clinical drug trials.  But the stated purpose of this company is not to develop novel treatments for mental illness; they have no labs, no clinics, no scanners, and no patients.  Instead, their mission is to develop clinical trial designs that “reduce the detrimental impact of the placebo response.”

Yes, that’s right: the new way to make money in psychiatry is not to find better ways to treat people, but to find ways to make relatively useless interventions look good.

It’s almost embarrassing that we’ve come to this point.  Nevertheless, as someone who has decidedly not profited (far from it!) from what I consider to be a dedicated, intelligent, and compassionate approach to my patients, I’m not surprised that docs who are “in it for the money” have exploited these alternate paths.  I just hope that patients and third-party payers wake up to the shenanigans played by my colleagues who are just looking for the easiest payoff.

But I’m not holding my breath.

Footnote: For even more ways to get rich in psychiatry, see this post by The Last Psychiatrist.


Critical Thinking and Drug Advertising

August 14, 2011

One of the advantages of teaching medical students is that I can keep abreast of changes in medical education.  It’s far too easy for a doctor (even just a few years out of training) to become complacent and oblivious to changes in the modern medical curriculum.  So I was pleasantly surprised earlier this week when a fourth-year medical student told me that his recent licensing examination included a vignette which tested his ability to interpret data from a pharmaceutical company advertisement.  Given that most patients (and, indeed, most doctors) now get their information from such sources, it was nice to see that this is now part of a medical student’s education.

For those of you unfamiliar with the process, the US Medical Licensing Examination (USMLE) is a three-step examination that all medical students must take in order to obtain a medical license in the United States.  Most students take steps 1 and 2 during medical school, while step 3 is taken during residency.

Effective this month, the drug-ad questions will appear in the Step 2 examination.  Obviously, I don’t have access to the particular ad that my med student saw, but the USMLE website offers a sample item built around a mock drug advertisement.


It’s attractive and seems concise.  It’s certainly easier to read—some might even say more “fun”—than a dry, boring journal article or data table.  But is it informative?  What would a doctor need to know to confidently prescribe this new drug?  That’s the emphasis of this new type of test question.  Specifically, the two questions pertaining to this item ask the student (1) to identify the statement most strongly supported by information in the ad, and (2) to determine which type of research design would provide the best data in support of using the drug.

It’s good to know that students are being encouraged to ask such questions of themselves (and, more importantly, one would hope, of the people presenting them with such information).  For comparison, consider two “real-world” examples of promotional advertising I have received for two recently launched psychiatric drugs, Latuda and Oleptro.


Again, nice to look at, but essentially devoid of information.  Okay, maybe that’s unfair:  Latuda was found to be effective in “two studies for each dose,” and the Oleptro ad claims that “an eight-week study showed that depression symptoms improved for many people taking Oleptro.”  But what does “effective” mean?  What does “improved” mean?  Where’s the data?  How do these drugs compare to medications we’ve been using for years?  Those are the questions we need to ask, not only to save costs (new drugs are expensive) but also to avoid exposing our patients to adverse effects that emerge only after a patient has been on a drug for some time.

(To be fair, it is quite easy to obtain this information on the drug companies’ web sites, or by asking the respective drug reps.  But first impressions count for a lot, and how many providers actually ask for the info?  Or can understand it once they do get it??)

The issue of drug advertising and its influence on doctors has received a good degree of attention lately.  An article in PLoS Medicine last year found that exposure to pharmaceutical company information was frequently (although not always) associated with more prescriptions, higher health care costs, or lower prescribing quality.  Similarly, a report last May in the Archives of Otolaryngology evaluated 50 drug ads in otolaryngology (ENT) journals and found that only 14 (28%) made claims based on “strong evidence.”  And the journal Emergency Medicine Australasia went one step further last February and banned all drug company advertising, claiming that “marketing of drugs by the pharmaceutical industry, whose prime aim is to bias readers towards prescribing a particular product, is fundamentally at odds with the mission of medical journals.”

The authors of the PLoS article even wrote the editors of the Lancet, one of the world’s top medical journals, to ask if they’d be willing to ban drug ads, too.  Unfortunately, banning drug advertising may not solve the problem either.  As discussed in an excellent article by Harriet Washington in this summer’s American Scholar, drug companies have great influence over the research that gets funded, carried out, and published, regardless of advertising.  Washington writes: “there exist many ways to subvert the clinical-trial process for marketing purposes, and the pharmaceutical industry seems to have found them all.”

As I’ve written before, I have no philosophical—or practical—opposition to pharmaceutical companies, commercial R&D, or drug advertising.  But I am opposed to the blind acceptance of messages that are the direct product of corporate marketing departments, Madison Avenue hucksters, and drug-company shills.  It’s nice to know that the doctors of tomorrow are being taught to ask the right questions, to become aware of bias, and to develop stronger critical thinking skills.  Hopefully this will help them to make better decisions for their patients, rather than serve as unwitting conduits for big pharma’s more wasteful wares.


Antidepressants: The New Candy?

August 9, 2011

It should come as no surprise to anyone paying attention to health care (not to mention modern American society) that antidepressants are very heavily prescribed.  They are, in fact, the second most widely prescribed class of medication in America, with 253 million prescriptions written in 2010 alone.  Whether this means we are suffering from an epidemic of depression is another matter entirely.  In fact, a recent article questions whether we’re suffering from much of anything at all.

In the August issue of Health Affairs, Ramin Mojtabai and Mark Olfson present evidence that doctors are prescribing antidepressants at ever-higher rates.  Over a ten-year period (1996-2007), the percentage of all office visits to non-psychiatrists that included an antidepressant prescription rose from 4.1% to 8.8%.  The rates were even higher for primary care providers: from 6.2% to 11.5%.

But there’s more.  The investigators also found that in the majority of cases, antidepressants were given even in the absence of a psychiatric diagnosis.  In 1996, 59.5% of the antidepressant recipients lacked a psychiatric diagnosis.  In 2007, this number had increased to 72.7%.
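For a quick sense of the magnitudes, here are the reported figures expressed as relative growth (a trivial Python sketch; the percentages are the ones from the Health Affairs paper, and the labels are my paraphrases):

```python
# Relative growth in the Mojtabai & Olfson figures, 1996 -> 2007
figures = {
    "non-psychiatrist visits with an antidepressant Rx": (4.1, 8.8),
    "primary care visits with an antidepressant Rx": (6.2, 11.5),
    "antidepressant Rx without a psychiatric diagnosis": (59.5, 72.7),
}

for label, (pct_1996, pct_2007) in figures.items():
    print(f"{label}: {pct_1996}% -> {pct_2007}% "
          f"({pct_2007 / pct_1996:.1f}x)")
```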

In other words, nearly 3 out of 4 patients who visited a nonpsychiatrist and received a prescription for an antidepressant were not given a psychiatric diagnosis by that doctor.  Why might this be the case?  Well, as the authors point out, antidepressants are used off-label for a variety of conditions—fatigue, pain, headaches, PMS, irritability.  None of which have any good data supporting their use, mind you.

Nonpsychiatrists might add an antidepressant to a patient’s medication regimen because the patient “seems” depressed or anxious.  It is also true that primary care providers do manage mental illness sometimes, particularly in areas where psychiatrists are in short supply.  But remember, in the majority of cases the doctors did not even give a psychiatric diagnosis, which suggests that any “psychiatric evaluation” they performed was likely quick and haphazard.

And then, of course, there were probably some cases in which the primary care docs just continued medications that were originally prescribed by a psychiatrist—in which case perhaps they simply didn’t report a diagnosis.

But is any of this okay?  Some, like a psychiatrist quoted in a Wall Street Journal article on this report, argue that antidepressants are safe.  They’re unlikely to be abused, often effective (if only as a placebo), and dirt cheap (well, at least the generic SSRIs and TCAs are).  But some patients have had very real problems discontinuing them, or have suffered particularly troublesome side effects.

The increasingly indiscriminate use of antidepressants might also open the door to the (ab)use of other, more costly drugs with potentially more devastating side effects.  I continue to be amazed, for example, by the number of primary care docs who prescribe Seroquel (an antipsychotic) for insomnia, ignoring multiple other pharmacologic and nonpharmacologic options.  In my experience, in the vast majority of these cases, the (well-known) risks of increased appetite and elevated blood sugar were never discussed with the patient.  And then there are other antipsychotics like Abilify and Seroquel XR, which are increasingly being used in primary care to “augment” antidepressants and will probably be prescribed as freely as the antidepressants themselves.  (Case in point: a senior medical student was shocked when I told her a few days ago that Abilify is an antipsychotic.  “I always thought it was an antidepressant,” she remarked, “after seeing all those TV commercials.”)

For better or for worse, the increased use of antidepressants in primary care may prove to be yet another blow to the foundation of biological psychiatry.  Doctors prescribe—and continue to prescribe—these drugs because they “work.”  It’s probably more accurate, however, to say that doctors and patients think they work.  And this may have nothing to do with biology.  As the saying goes, it’s the thought that counts.

Anyway, if this is true—and if you consider the fact that these drugs are prescribed on the basis of a rudimentary workup (remember, no diagnosis was given 72.7% of the time)—then the use of an antidepressant probably has no more justification than the addition of a multivitamin, the admonition to eat less red meat, or the suggestion to “get more fresh air.”

The bottom line: If we’re going to give out antidepressants like candy, then let’s treat them as such.  Too much candy can be a bad thing—something that primary care doctors can certainly understand.  So if our patients ask for candy, then we need to find a substitute—something equally soothing and comforting—or provide them instead with a healthy diet of interventions to address the real issues, rather than masking those problems with a treat to satisfy their sweet tooth and bring them back for more.


Google Is My New Hippocampus

August 6, 2011

A few days ago, upon awakening but before my brain was fully alert, I was reviewing the events of the previous few days in preparation for the new one.  At one point I tried to remember a conversation I had had with a colleague about three days prior, but I could not quite remember the specifics of our discussion.  “No big deal,” I thought to myself, “I’ll just Google it.”

Almost immediately, I recognized the folly of this thought.  Obviously, there is no way to “Google” the events of our personal lives.  But while impractical, the solution was a logical one.  If I want to know any fact or piece of information, I Google it online.  If I want to find a file on my computer, I use Google Desktop.  All of my email conversations for the last five years are archived in my Google Mail account, so I can quickly find correspondence (and people, and account numbers, and emailed passwords, etc) at the click of the “Search” button.  No wonder I immediately thought of Googling my own memories.

A recent article in Science claims that the permeation of Google and other search engines into our lives—and now onto our smartphones and other portable gadgets—has not only made it easier for us to retrieve information, but it has also changed the way we remember.  In their experiments, three cognitive psychologists from Columbia, Harvard, and UW-Madison demonstrated that we are more likely to forget information if we know that we can access it (e.g., by a search engine) in the future.  Moreover, even for simple data, we’re more likely to remember where we store pieces of information than the subject matter itself.

The implication here is that the process of memory storage & retrieval is rapidly changing in the Online Age.  Humans no longer need to memorize facts (who was the 18th president?  What’s the capital of Australia?  When was the Six-Day War?); we just need to know how to find them.

Is this simply a variation of the old saying that “intelligence is not necessarily knowing everything, but knowing where to find it”?  Perhaps.  An optimist might look at this evolution in human memory as an opportunity to use more brain power for processing complex pieces of information that can’t be readily stored.  In my work, for instance, I’m glad I don’t need to recall precise drug mechanisms, drug-drug interactions, or specific diagnostic criteria (I can look them up quite easily), but can instead pay closer attention to the process of listening to my patients and attending to more subtle concerns.  (Which often does more good in the long run anyway.)

The difference, however, is that I was trained in an era in which I did have to memorize all of this information without the advantage of an external online memory bank.  Along the way, I was able to make my own connections among sets of seemingly unrelated facts.  I was able to weed out those that were irrelevant, and retain those that truly made a difference in my daily work.  This resulted, in my opinion, in a much richer understanding of my field.

While I’ve seen no studies of this issue, I wonder whether students in medicine (or, for that matter, other fields requiring mastery of a large body of information) are developing different sets of skills in the Google Era.  Knowing that one can always “look something up” might make a student more careless or lazy.  On the other hand, it might help one to develop a whole new set of clinical skills that previous generations simply didn’t have time for.

Unfortunately, those skills are not the things that are rewarded in our day-to-day work.  We value information and facts, rather than substance and process.  In general, patients want to know drug doses, mechanisms, and side effects, rather than developing a “therapeutic relationship” with their doctor.  Third-party payers don’t care about the insights or breakthroughs that might happen during therapy, but instead that the proper diagnoses and billing codes are given, and that patients improve on some objective measurement.  And when my charts are reviewed by an auditor (or a lawyer), what matters is not the quality of the doctor-patient interaction, but instead the documentation, the informed consent, the checklists, the precise drug dosing, details in the treatment plan, and so on.

I think immediate access to information is a wonderful thing.  Perhaps I rely on it too much.  (My fiancé has already reprimanded me for looking up actors or plot twists on IMDB while we’re watching movies.)  But now that we know it’s changing the way we store information and—I don’t think this is too much of a stretch—the way we think, we should look for ways to use information more efficiently, creatively, and productively.  The human brain has immense potential; now that our collective memories are external (and our likelihood of forgetting is essentially nil), let’s tap that potential to do some special and unique things that computers can’t do.  Yet.

