Disruptive Technology Vs. The Disruptive Physician

February 26, 2012

The technological advances of just the last decade—mobile computing, social networking, blogging, tablet computers—were never thought to be “essential” when first introduced.  But while they started as novelties, their advantages became apparent, and today these are all part of our daily lives.  These are commonly referred to as “disruptive technologies”:  upstart developments that originally found their place in niche markets outside of the mainstream, but gradually “disrupted” the conventional landscape (and conventional wisdom) to become the established ways of doing things.

In our capitalist economy, disruptive technology is considered a very good thing.  It has made our lives easier, more enjoyable, and more productive.  It has created no small number of multimillionaires.  Entrepreneurs worldwide are constantly looking for the next established technologies to disrupt, usurp, and overturn, in hopes of a very handsome payoff.

In medicine, when we talk about “disruption,” the implication is not quite as positive.  In fact, the term “disruptive physician” is an insult, a black mark on one’s record that can be very hard to overcome.  It refers to someone who doesn’t cooperate, doesn’t follow established protocols, yells at people, discriminates against others, abuses drugs or alcohol, or is generally incompetent.  These are not good things.

Really?  Now, no one would argue that substance abuse, profanity, spreading rumors, degrading one’s peers, or incompetence are good.  But what about the physician who “expresses political views that are disagreeable to the hospital administration”?  How about the physician who speaks out about deficiencies in patient care or patient safety, or who (legitimately) points out the incompetence of others?  How about the physician who prioritizes his own financial and/or business objectives over those of the hospital (when in fact it may be the only way to protect his ability to practice)?  All of these have been considered “disruptive” behaviors and could be used by highly conservative medical staffs to discipline physicians and preserve the status quo.

Is this fair?  In modern psychiatry, with its shrinking appointment lengths, overreliance on the highly deficient DSM, excessive emphasis on pharmacological solutions, and an increasing ignorance of developmental models and psychosocial interventions among practitioners, maybe someone should stand up and express opinions that the “powers that be” might consider unacceptable.  Someone should speak out on behalf of patient safety.  Someone should point out extravagant examples of waste, incompetence, or abuse of privilege.  Plenty of psych bloggers and a few renegade psychiatrists do express these opinions, but they (we?) are a minority.  I don’t know of any department chairmen or APA officers who are willing to be so “disruptive.”  As a result, we’re stuck with what we’ve got.

That’s not to say there aren’t any disruptive technologies in psychiatry.  What are they?  Well, medications, for instance.  Drug treatment “disrupted” psychoanalysis and psychotherapy, and it represents the foundation of most psychiatric treatment today.  Over the last 30 years, pharmaceutical companies (and prescribers) have earned millions of dollars from SSRIs, SNRIs, second-generation antipsychotics, psychostimulants, and many others.  But are people less mentally ill now than they were in the early 1980s?  Today—just in time for patent expirations!—we’re already seeing the next disruptive medication technologies, like those based on glutamate signaling.  According to Stephen Stahl at the most recent NEI Global Congress, “we’ve beaten the monoamine horse sixteen ways to Sunday” (translation: we’ve milked everything we can out of the serotonin and dopamine stories) and glutamate is the next blockbuster drug target to disrupt the marketplace.

Another disruptive technology is the DSM.  I don’t have much to add to what’s already been written about the DSM-5 controversy except to point out what should be obvious:  We don’t need another DSM right now.  Practically speaking, a new DSM is absolutely unnecessary.  It will NOT help me treat patients any better.  But it’s coming, like it or not.  It will disrupt the way we have conducted our practices for the last 10 years (guided by the equally imperfect DSM-IV-TR), and it will put millions more dollars in the coffers of the APA.

And then, of course, there is the electronic medical record (EMR).  As with the DSM-5, I don’t need an EMR to practice psychiatry.  But some politicians in Washington, DC, decided that, as a component of the HITECH Act (and in preparation for truly nationalized health care), we should all use EMRs.  They even offered financial incentives to doctors to adopt them (and are levying penalties on those who don’t).  And despite some isolated benefits (which are more theoretical than practical, frankly), EMRs are disruptive.  Just not in the right way.  They disrupt work flow, the doctor-patient relationship, and, sometimes, common sense.  But they’re here to stay.

Advances in records & database management, in general, are the new disruptive technologies in medicine.  Practice Fusion, a popular (and ad-supported) EMR, has raised tens of millions of dollars in venture capital funding and employs over 150 people.  And what does it do with the data from the 28 million patients it serves?  It sells it to others, of course.  (And it can tell you fun things like which cities are most “lovesick.”  How’s that for ROI?)

There are many other examples of companies competing for your health-care dollar, whose products are often only peripherally related to patient care but which represent that holy grail of the “disruptive technology.”  There are online appointment scheduling services, telepsychiatry services, educational sites heavily sponsored by drug companies, doctor-only message boards (which sell doctors’ opinions to corporations), drug databases (again, sponsored by drug companies), and others.

In the interest of full disclosure, I use some of the above services, and some are quite useful.  I believe telemedicine, in particular, has great potential.  But at the end of the day, these market-driven novelties ignore some of the bigger, more entrenched problems in medicine, which only practicing docs see.  In my opinion, the factors that would truly help psychiatrists take better care of patients are of a different nature entirely:  improving psychiatric training (of MDs and non-MD prescribers); emphasizing recovery and patient autonomy in our billing and reimbursement policies; eliminating heavily biased pharmaceutical advertising (both to patients and to providers); revealing the extensive and unstated conflicts of interest among our field’s “key opinion leaders”; reforming the “disability” system and disconnecting it from Medicaid, particularly among indigent patients; and reallocating health-care resources more equitably.  But, as a physician, if I were to go to my superiors with ideas to reform any of the above in my day-to-day work, I would run the risk of being labeled “disruptive,” when in fact that would be my exact intent:  to disrupt some of the damaging, wasteful practices that occur in our field almost every day.

I agree that disruption in medicine can be a good thing, and can advance the quality and cost-effectiveness of care.  But when most of the “disruptions” come from individuals who are not actively in the trenches, and who don’t know where needs are the greatest, we may be doing absolutely nothing to improve care.  Even worse, when we fail to embrace the novel ideas of physicians—but instead discipline those physicians for being “disruptive”—we risk punishing creativity, destroying morale, and fostering a sense of helplessness that, in the end, serves no one.


Where Doctors Get Their Information

January 24, 2012

Doctors spend four years in medical school, still more years in residency, and some devote even more years to fellowship training.   All of this work is done under direct supervision, and throughout the process, trainees learn from their teachers, mentors, and supervisors.  But medicine changes very rapidly.  After all of this training—i.e., once the doctor is “out in the real world”—how does he or she keep up with the latest developments?

Medical journals are the most obvious place to start.  Many doctors subscribe to popular journals like the New England Journal of Medicine or JAMA, or they get journals as a perk of membership in their professional society (for example, the American Journal of Psychiatry for members of the APA).  But the price of journals—and professional society memberships—can accumulate quickly, as can the stacks of unread issues on doctors’ desks.

A second source is continuing medical education credit.  “CMEs” are educational units that doctors are required to obtain in order to keep their medical license.  Some CME sources are excellent, although most CMEs are absurdly easy to obtain (e.g., you watch an online video; answer a few multiple-choice questions about a brief article; or show up for the morning session of a day-long conference, sign your name, then head out the door for a round of golf), making their educational value questionable.  Also, lots of CMEs are funded by pharmaceutical or medical device manufacturers (see here), where bias can creep in.

Direct communication with drug companies—e.g., drug sales reps—can also be a source of information.  Some universities and health-care organizations have “cracked down” on this interaction, citing inappropriate sales techniques and undue influence on doctors.  While docs can still contact the medical departments (or “medical science liaisons”) of big drug companies, this source of info appears to be running dry.

So what’s left?  Medical textbooks?  They’re usually several years out of date, even at the time of publication.  Medical libraries?  Unless you’re affiliated with a teaching hospital, those libraries are off-limits.  “Throwaway” journals?  Every specialty has them—they arrive in the mail, usually unrequested, and contain several topical articles and lots of advertising; but these articles generally aren’t peer-reviewed, and the heavy advertising tends to bias their content.  Medical websites?  Same thing.  (WebMD, for instance, is heavily funded by industry—a point that has not escaped the attention of watchdog senator Charles Grassley.)

Thus, the doctor in the community (think of the psychiatrist in a small group practice in your hometown) is essentially left alone, in the cold, without any unbiased access to the latest research.  This dilemma has become starkly apparent to me in the last several months.  Since last summer, I have worked primarily in a community hospital.  Because it is not an academic institution, it does not provide its employees or trainees access to the primary literature (and yes, that includes psychiatry residents).  I, on the other hand, have been fortunate enough to have had a university affiliation for most of my years of practice, so I can access the literature.  If I need to look up the details of a recent study, or learn about new diagnostic procedures for a given disorder, or prepare for an upcoming talk, I can find just about anything I need.  But this is not the case for my colleagues.  Instead, they rely on textbooks, throwaway journals, or even Wikipedia.  (BTW, Wikipedia isn’t so bad, according to a recent study out of Australia.  But I digress…)

Obviously, if one uses “free” resources to obtain medical information, that info is likely to be as unbiased as the last “free” Cymbalta dinner he or she attended.  Many doctors don’t recognize this.

When it comes to journals, it gets potentially more interesting.  All of the top medical journals are available online.  And, as with many online newspapers and magazines, articles are available for a fee.  But the fees are astronomical—typically $30 or $35 per article—which essentially prohibits any doc from buying more than one or two, let alone doing exhaustive research on a given subject.

Interestingly, some articles are freely available (“open access” is the industry term).  You can try this yourself:  go to pubmed.gov and search for a topic like “bipolar disorder” or “schizophrenia.”  You’ll get thousands of results.  Some results are accompanied by the “Free Article” tag.  You can guess which articles most docs will choose to read.
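(Incidentally, you can script that same search against NCBI’s public E-utilities interface.  Here is a minimal sketch in Python, standard library only, that compares a topic’s total results to its “free full text”[sb] subset, which is the documented PubMed filter behind that “Free Article” tag.  NCBI’s usage limits apply, and the counts change daily.)

```python
import json
import urllib.parse
import urllib.request

# NCBI E-utilities search endpoint (public; no API key needed for light use)
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(term: str) -> int:
    """Return the number of PubMed results for a search term."""
    params = urllib.parse.urlencode({"db": "pubmed", "term": term, "retmode": "json"})
    with urllib.request.urlopen(f"{ESEARCH}?{params}") as resp:
        return int(json.load(resp)["esearchresult"]["count"])

topic = "bipolar disorder"
total = pubmed_count(topic)
free = pubmed_count(f'{topic} AND "free full text"[sb]')  # open-access subset
print(f'"{topic}": {total} results total, {free} freely available')
```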

Why are some articles free while others aren’t?  What’s the catch?  Well, sometimes there is no catch.  For one, the National Institutes of Health (NIH) requires any research done with its funding to be freely available within twelve months of a paper’s publication.  This makes sense: NIH funds are our tax dollars, so it’s only fair that we get to see the data.  (But even this is coming under attack, since the publishers want to protect their content—and revenue stream.)

Interestingly, though, some journals also have a “pay-for-open-access” policy, in which an author can pay a higher publication fee to make his/her article freely available.  In other words, if I publish a (non-NIH-funded) study but want it to reach a wider audience than simply those ivory-tower types with access to fully-stocked libraries, I can just pay extra.  That’s right, some publishers give me the option to pay to attract readers like community docs, the lay public, journalists, and others (not to mention potential investors in a company with which I’m affiliated).  The policy for Elsevier, one of the world’s largest academic publishers, on such “sponsored articles” can be found here.

You can see where this might lead.  Call me cynical, but paying for more eyeballs sounds a lot like advertising.  Of course, these are peer-reviewed articles, so they do meet some standards of scientific integrity.  (Or do they?  A recent article suggests that “narrative reviews” often misrepresent or overstate claims of medication efficacy.  See also this summary of the article by Neuroskeptic.)

Anyway, the take-home message is, unfortunately, one that we’ve heard all too often.  Science is supposed to be pristine, objective, and unbiased, but it’s clearly not.  Even when you take out the obvious advertising, the drug-rep showmanship, and the pharma-funded CME, there are still ways for a product-specific message to make its way to a doctor’s eyes and ears.  And if our medical journals supposedly represent the last bastion of scientific integrity—the sacred repository of truth in a world of direct-to-consumer advertising, biased KOLs, and Big Pharma largesse—we should be particularly cautious when they fail to serve that purpose.


Latuda-Palooza: Marketing or Education?

October 2, 2011

In my last blog post, I wrote about an invitation I received to a symposium on Sunovion Pharmaceuticals’ new antipsychotic Latuda.  I was concerned that my attendance might be reported as a “payment” from Sunovion under the requirements of the Physician Payments Sunshine Act.  I found it a bit unfair that I might be seen as a recipient of “drug money” (and all the assumptions that go along with that) when, in fact, all I wanted to do was learn about a new pharmaceutical agent.

As it turns out, Sunovion confirmed that my participation would NOT be reported (they start reporting to the feds on 1/1/12), so I was free to experience a five-hour Latuda extravaganza yesterday in San Francisco.  I was prepared for a marketing bonanza of epic proportion—a la the Viagra launch scene in “Love And Other Drugs.”  And in some ways, I got what I expected:  two outstanding and engaging speakers (Dr Stephen Stahl of NEI and Dr Jonathan Meyer of UCSD); a charismatic “emcee” (Richard Davis of Arbor Scientia); an interactive “clicker” system which allowed participants to answer questions throughout the session and check our responses in real time; full lunch & breakfast, coffee and snacks; all in a posh downtown hotel.  (No pens or mugs, though.)

The educational program consisted of a plenary lecture by Dr Stahl, followed by workshops in which we broke up into “teams” and participated in three separate activities:  first, a set of computer games (modeled after “Pyramid” and “Wheel Of Fortune”) in which we competed to answer questions about Latuda and earn points for our team; second, a “scavenger hunt” in which we had 5 minutes to find answers from posters describing Latuda’s clinical trials (sample question: “In Study 4 (229), what proportion of subjects withdrew from the Latuda 40 mg/d treatment arm due to lack of efficacy?”); and finally, a series of case studies presented by Dr Meyer which used the interactive clicker system to assess our comfort level in prescribing Latuda for a series of sample patients.  My team came in second place.

I must admit, the format was an incredibly effective way for Sunovion to teach doctors about its newest drug.  It reinforced my existing knowledge—and introduced me to a few new facts—while it was also equally accessible to physicians who had never even heard about Latuda.

Moreover, the information was presented in an unbiased fashion.  Unbiased? you may ask.  But wasn’t the entire presentation sponsored by Sunovion?  Yes, it was, but in my opinion the symposium achieved its stated goals:  it summarized the existing data on Latuda (although see here for some valid criticism of that data); presented it in a straightforward, effective (and, at times, fun) way; and allowed us doctors to make our own decisions.  (Stahl did hint that the 20-mg dose is being studied for bipolar depression, not an FDA-approved indication, but that’s also publicly available on the clinicaltrials.gov website.)  No one told us to prescribe Latuda; no one said it was better than any other existing antipsychotic; no one taught us how to get insurance companies to cover it; and—in case any “pharmascold” is still wondering—no one promised us any kickbacks for writing prescriptions.

(Note:  I did speak with Dr Stahl personally after his lecture.  I asked him about efforts to identify patient-specific factors that might predict a more favorable response to Latuda than to other antipsychotics.  He spoke about current research in genetic testing, biomarkers, and fMRI to identify responders, but he also admitted that it’s all guesswork at this point.  “I might be entirely wrong,” he admitted, about drug mechanisms and how they correlate to clinical response, and he even remarked “I don’t believe most of what’s in my book.”  A refreshing—and surprising—revelation.)

In all honesty, I’m no more likely to prescribe Latuda today than I was last week.  But I do feel more confident in my knowledge about it.  It is as if I had spent five hours yesterday studying the Latuda clinical trials and the published Prescribing Information, except that I did it in a far more engaging forum.  As I mentioned to a few people (including Mr Davis), if all drug companies were to hold events like this when they launch new agents, rather than letting doctors decipher glossy drug ads in journals or from their drug reps, doctors would be far better educated than they are now when new drugs hit the market.

But this is a very slippery slope.  In fact, I can’t help but wonder if we may be too far down that slope already.  For better or for worse, Steve Stahl’s books have become de facto “standard” psychiatry texts, replacing classics like Kaplan & Sadock, the Oxford Textbook, and the American Psychiatric Press books.  Stahl’s concepts are easy to grasp and provide the paradigm under which most psychiatry is practiced today (despite his own misgivings—see above).  However, his industry ties are vast, and his “education” company, Neuroscience Education Institute (NEI), has close connections with medical communications companies who are basically paid mouthpieces for the pharmaceutical industry.  Case in point: Arbor Scientia, which was hired by Sunovion to organize yesterday’s symposium—and similar ones in other cities—shares its headquarters with NEI in Carlsbad, CA, and Mr Davis sits on NEI’s Board.

We may have already reached a point in psychiatry where the majority of what we consider “education” might better be described as marketing.  But where do we draw the line between the two?  And even after we answer that question, we must ask, (when) is this a bad thing?  Yesterday’s Sunovion symposium may have been an infomercial, but I still felt there was a much greater emphasis on the “info-” part than the “-mercial.”  (And it’s unfortunate that I’d be reported as a recipient of pharmaceutical money if I had attended the conference after January 1, 2012, but that’s for another blog post.)  The question is, who’s out there to make sure it stays that way?

I’ve written before that I don’t know whom to trust anymore in this field.  Seemingly “objective” sources—like lectures from my teachers in med school and residency—can be heavily biased, while “advertising” (like yesterday’s symposium) can, at times, be fair and informative.  The end result is a very awkward situation in modern psychiatry that is easy to overlook, difficult to resolve, and, unfortunately, still ripe for abuse.


How To Get Rich In Psychiatry

August 17, 2011

Doctors choose to be doctors for many reasons.  Sure, they “want to help people,” they “enjoy the science of medicine,” and they give several other predictable (and sometimes honest) explanations in their med school interviews.  But let’s be honest.  Historically, becoming a doctor has been a surefire way to ensure prestige, respect, and a very comfortable income.

Nowadays, in the era of shrinking insurance reimbursements and increasing overhead costs, this is no longer the case.  If personal riches are the goal, doctors must graze in other pastures.  Fortunately, in psychiatry, several such options exist.  Let’s consider a few.

One way to make a lot of money is simply by seeing more patients.  If you earn a set amount per patient—and you’re not interested in the quality of your work—this might be for you.  Consider the following, recently posted by a community psychiatrist to an online mental health discussion group:

Our county mental health department pays my clinic $170 for an initial evaluation and $80 for a follow-up.  Of that, the doctor is paid $70 or $35, respectively, for each visit.  There is a wide range of patients/hour since different doctors have different financial requirements and philosophies of care.  The range is 3 patients/hour to 6 patients/hour.

This payment schedule incentivizes output.  A doctor who sees three patients an hour makes $105/hr and spends 20 minutes with each patient.  A doctor who sees 6 patients an hour spends 10 minutes with each patient and makes $210.  One “outlier” doctor in our clinic saw, on average, 7 patients an hour, spending roughly 8 minutes with each patient and earning $270/hr.  His clinical notes reflected his rapid pace…. [but] Despite his shoddy care of patients, he was tolerated at the clinic because he earned a lot of money for the organization.
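(To put the incentive in black and white, here is the quote’s arithmetic as a quick Python sketch.  Note that seven follow-ups alone would pay $245/hr, so the outlier’s quoted $270/hr presumably included an initial evaluation or two at the higher $70 rate.)

```python
FOLLOWUP_FEE = 35  # dollars paid to the doctor per follow-up visit

for patients_per_hour in (3, 6, 7):
    minutes_per_patient = 60 / patients_per_hour
    hourly_earnings = patients_per_hour * FOLLOWUP_FEE
    print(f"{patients_per_hour} patients/hr: "
          f"{minutes_per_patient:.1f} min each, ${hourly_earnings}/hr")
```

The faster the doctor churns, the more everyone gets paid, and nothing in the fee schedule rewards spending a minute longer with a patient.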

If this isn’t quite your cup of tea, you can always consider working in a more “legit” capacity, like the Department of Corrections.  You may recall the Bloomberg report last month about the prison psychiatrist who raked in over $800,000 in one year—making him the highest-paid state employee in California.  As it turns out, that was a “data entry error.”  (Bloomberg issued a correction.)  Nevertheless, the cat was out of the bag: prison psychiatrists make big bucks (largely for prescribing Seroquel and benzos).  With seniority and “merit-based increases,” one prison shrink in California was able to earn over $600,000—and that’s for a shrink who was found to be “incompetent.”  Maybe they pay the competent ones even more?

Another option is to be a paid drug speaker.  I’m not referring to the small-time local doc who gives bland PowerPoint lectures to his colleagues over a catered lunch of even blander ham-and-cheese sandwiches.  No sir.  I’m talking about the psychiatrists hired to fly all around the country to give talks at the nicest five-star restaurants in the nation’s biggest drug markets, er, cities.  The advantage here is that you don’t even have to be a great doc.  You just have to own a suit, follow a script, speak well, and enjoy good food and wine.

As most readers of this blog know, ProPublica recently published a list of the sums paid by pharmaceutical companies to doctors for these “educational programs.”  Some docs walked away with checks worth tens—or hundreds—of thousands of dollars.  And, not surprisingly, psychiatrists were the biggest offenders, er, earners.  I guess there is gold in explaining the dopamine hypothesis or the mechanism of neurotransmitter reuptake inhibition to yet another doctor.

Which brings me to perhaps the most tried-and-true way to convert one’s medical education into cash:  become an entrepreneur.  Discovering a new drug or unraveling a new disease process might revolutionize medical care and improve the lives of millions.  And throughout the history of medicine, numerous physician-researchers have converted their groundbreaking discoveries (or luck) into handsome profits.

Unfortunately, in psychiatry, paradigm shifts of the same magnitude have been few and far between.  Instead, the road to riches has been paved by the following formula: (1) “Buy in” to the prevailing disease model (regardless of its biological validity); (2) Develop a drug that “fits” into the model; (3) Find some way to get the FDA to approve it; (4) Promote it ruthlessly; (5) Profit.

In my residency program, for example, several faculty members founded a biotech company whose sole product was a glucocorticoid receptor antagonist which, they believed, might treat psychotic depression (you know, with high stress hormones in depression, etc).  The drug didn’t work (rendering their stock options worth only millions instead of tens of millions).  But that didn’t stop them.  They simply searched for other ways to make their compound relevant.  As I write, they’re looking at it as a treatment for Cushing’s syndrome (a more logical—if far less profitable—indication).

The psychiatry blogger 1boringoldman has written a great deal about the legions of esteemed academic psychiatrists who have gotten caught up in the same sort of rush (no pun intended) to bring new drugs to market.  His posts are definitely worth a read.  Frankly, I see no problem with psychiatrists lending their expertise to a commercial enterprise in the hopes of capturing some of the windfall from a new blockbuster drug.  Everyone else in medicine does it, so why not us?

The problem, as mentioned above, is that most of our recent psychiatric meds are not blockbusters.  Or, to be more accurate, they don’t represent major improvements in how we treat (or even understand) mental illness.  They’re largely copycat solutions to puzzles that may have very little to do with the actual pathology—not to mention psychology—of the conditions we treat.

To make matters worse, when huge investments in new drugs don’t pay off, investigators (including the psychiatrists expecting huge dividends) look for back-door ways to capture market share, rather than going back to the drawing board to refine their initial hypotheses.  Take, for instance, RCT Logic, a company whose board includes the ubiquitous Stephen Stahl and Maurizio Fava, two psychiatrists with extensive experience in clinical drug trials.  But the stated purpose of this company is not to develop novel treatments for mental illness; it has no labs, no clinics, no scanners, and no patients.  Instead, its mission is to develop clinical trial designs that “reduce the detrimental impact of the placebo response.”

Yes, that’s right: the new way to make money in psychiatry is not to find better ways to treat people, but to find ways to make relatively useless interventions look good.

It’s almost embarrassing that we’ve come to this point.  Nevertheless, as someone who has decidedly not profited (far from it!) from what I consider to be a dedicated, intelligent, and compassionate approach to my patients, I’m not surprised that docs who are “in it for the money” have exploited these alternate paths.  I just hope that patients and third-party payers wake up to the shenanigans played by my colleagues who are just looking for the easiest payoff.

But I’m not holding my breath.

Footnote:  For even more ways to get rich in psychiatry, see this post by The Last Psychiatrist.


Critical Thinking and Drug Advertising

August 14, 2011

One of the advantages of teaching medical students is that I can keep abreast of changes in medical education.  It’s far too easy for a doctor (even just a few years out of training) to become complacent and oblivious to changes in the modern medical curriculum.  So I was pleasantly surprised earlier this week when a fourth-year medical student told me that his recent licensing examination included a vignette which tested his ability to interpret data from a pharmaceutical company advertisement.  Given that most patients (and, indeed, most doctors) now get their information from such sources, it was nice to see that this is now part of a medical student’s education.

For those of you unfamiliar with the process, the US Medical Licensing Examination (USMLE) is a three-step examination that all medical students must take in order to obtain a medical license in the United States.  Most students take steps 1 and 2 during medical school, while step 3 is taken during residency.

Effective this month, the drug-ad questions will appear in the Step 2 examination.  Obviously, I don’t have access to the particular ad that my med student saw, but here’s a sample item taken from the USMLE website:


It’s attractive and seems concise.  It’s certainly easier to read—some might even say more “fun”—than a dry, boring journal article or data table.  But is it informative?  What would a doctor need to know to confidently prescribe this new drug?  That’s the emphasis of this new type of test question.  Specifically, the two questions pertaining to this item ask the student (1) to identify which statement is most strongly supported by information in the ad, and (2) which type of research design would give the best data in support of using this drug.

It’s good to know that students are being encouraged to ask such questions of themselves (and, more importantly, one would hope, of the people presenting them with such information).  For comparison, here are two “real-world” examples of promotional advertising I have received for two recently launched psychiatric drugs:


Again, nice to look at.  But essentially devoid of information.  Okay, maybe that’s unfair:  Latuda was found to be effective in “two studies for each dose,” and the Oleptro ad claims that “an eight-week study showed that depression symptoms improved for many people taking Oleptro.”  But what does “effective” mean?  What does “improved” mean?  Where’s the data?  How do these drugs compare to medications we’ve been using for years?  Those are the questions that we need to ask, not only to save costs (new drugs are expensive) but also to prevent exposing our patients to adverse effects that only emerge after a period of time on a drug.

(To be fair, it is quite easy to obtain this information on the drug companies’ web sites, or by asking the respective drug reps.  But first impressions count for a lot, and how many providers actually ask for the info?  Or can understand it once they do get it?)

The issue of drug advertising and its influence on doctors has received a good degree of attention lately.  An article in PLoS Medicine last year found that exposure to pharmaceutical company information was frequently (although not always) associated with more prescriptions, higher health care costs, or lower prescribing quality.  Similarly, a report last May in the Archives of Otolaryngology evaluated 50 drug ads in otolaryngology (ENT) journals and found that only 14 (28%) were based on “strong evidence.”  And the journal Emergency Medicine Australasia went one step further last February and banned all drug company advertising, claiming that “marketing of drugs by the pharmaceutical industry, whose prime aim is to bias readers towards prescribing a particular product, is fundamentally at odds with the mission of medical journals.”

The authors of the PLoS article even wrote the editors of the Lancet, one of the world’s top medical journals, to ask if they’d be willing to ban drug ads, too.  Unfortunately, banning drug advertising may not solve the problem either.  As discussed in an excellent article by Harriet Washington in this summer’s American Scholar, drug companies have great influence over the research that gets funded, carried out, and published, regardless of advertising.  Washington writes: “there exist many ways to subvert the clinical-trial process for marketing purposes, and the pharmaceutical industry seems to have found them all.”

As I’ve written before, I have no philosophical—or practical—opposition to pharmaceutical companies, commercial R&D, or drug advertising.  But I am opposed to the blind acceptance of messages that are the direct product of corporate marketing departments, Madison Avenue hucksters, and drug-company shills.  It’s nice to know that the doctors of tomorrow are being taught to ask the right questions, to become aware of bias, and to develop stronger critical thinking skills.  Hopefully this will help them to make better decisions for their patients, rather than serve as unwitting conduits for big pharma’s more wasteful wares.


The Virtual Clinic Is Open And Ready For Business

July 9, 2011

Being an expert clinician requires mastery of an immense body of knowledge, aptitude in physical examination and differential diagnosis, and an ability to assimilate all information about a patient in order to institute the most appropriate and effective treatment.

Unfortunately, in many practice settings these days, such expertise is not highly valued.  In fact, these age-old skills are being shoved to the side in favor of more expedient, “checklist”-type medicine, often done by non-skilled providers or in a hurried fashion.  If the “ideal” doctor’s visit is a four-course meal at a highly rated restaurant, today’s medical appointments are more like dining at the Olive Garden, if not McDonald’s or Burger King.

At the rate we’re going, it’s only a matter of time before medical care becomes available for take-out or delivery.  Instead of a comprehensive evaluation, your visit may be an online questionnaire followed by the shipment of your medications directly to your door.

Well, that time is now.  Enter “Virtuwell.”

The Virtuwell web site describes itself as “the simplest and most convenient way to solve the most common medical conditions that can get in the way of your busy life.”  It is, quite simply, an online site where (for the low cost of $40) you can answer a few questions about your symptoms and get a “customized Treatment Plan” reviewed and written by a nurse practitioner.  If necessary, you’ll also get a prescription written to your pharmacy.  No appointments, no waiting, no insurance hassles.  And no embarrassing hospital gowns.

As you might expect, some doctors are upset at what they perceive as a travesty of our profession.  (For example, some comments posted on an online discussion group for MDs: “the public will have to learn the hard way that you get what you pay for”; “they have no idea what they don’t know—order a bunch of tests and antibiotics and call it ‘treated’”; and “I think this is horrible and totally undermines our profession.”)  But then again, isn’t this what we have been doing for quite a while already?  Isn’t this what a lot of medicine has become, with retail clinics, “doc-in-a-box” offices in major shopping centers, urgent-care walk-in sites, 15-minute office visits, and managed care?

When I worked in community mental health, some of my fellow MDs saw 30-40 patients per day, and their interviews might just as well have been done over the telephone or online.  It wasn’t ideal, but most patients did just fine, and few complained about it.  (Well, if they did, their complaints carried very little weight, sadly.)  Maybe it’s true that much of what we do does not require 8+ years of specialty education and the immense knowledge that most physicians possess, and many conditions are fairly easy to treat.  Virtuwell is simply capitalizing on that reality.

With the advent of social media, the internet, and services like Virtuwell, the role of the doctor will further be called into question, and new ways of delivering medical care will develop.  For example, this week also saw the introduction of the “Skin Scan,” an iPhone app which allows you to follow the growth of your moles and uses a “proprietary algorithm” to determine whether they’re malignant.  Good idea?  If it saves you from a diagnosis of melanoma, I think the answer is yes.

In psychiatry—a specialty in which treatment decisions are largely based on what the patient says, rather than a physical exam finding—the implications of web-based “office visits” are particularly significant.  It’s not too much of a stretch to envision an HMO providing online evaluations for patients with straightforward complaints of depression or anxiety or ADHD-like symptoms, or even a pharmaceutical company selling its drugs directly to patients based on an online “mood questionnaire.”  Sure, there might be some issues with state Medical Boards or the DEA, but nothing that a little political pressure couldn’t fix.  Would this represent a decline in patient care, or would it simply be business as usual?  Perhaps it would backfire, and prove that a face-to-face visit with a psychiatrist is a vital ingredient in the mental well-being of our patients.  Or it might demonstrate that we simply get in the way.

These are questions we must consider for the future of this field, as in all of medicine.  One might argue that psychiatry is particularly well positioned to adapt to these changes in health care delivery systems, since so many of the conditions we treat are influenced and defined (for better or for worse) by the very cultural and societal trends that lead our patients to seek help in these new ways.

The bottom line is, we can’t just stubbornly stand by outdated notions of psychiatric care (or, for that matter, by our notions of “disease” and “treatment”), because cultural influences are already changing what it means to be healthy or sick, and the ways in which our patients get better.  To stay relevant, we need to embrace sites like Virtuwell, and use these new technologies when we can.  When we cannot, we must demonstrate why, and prove how we can do better.

[Credit goes to Neuroskeptic for the computer-screen psychiatrist.  Classic!]


I Just Don’t Know What (Or Whom) To Believe Anymore

July 2, 2011

de-lu-sion [dih-loo-zhuhn] Noun.  1. An idiosyncratic belief or impression that is firmly maintained despite being contradicted by what is generally accepted as reality, typically a symptom of mental disorder.

The announcement this week of disciplinary action against three Harvard Medical School psychiatrists (which you can read about here and here and here and here) for violating that institution’s conflict-of-interest policy comes at a pivotal time for psychiatry.  Or at least for my own perceptions of it.

As readers of this blog know, I can be cynical, critical, and skeptical about the medicine I practice on a daily basis.  This arises from two biases that have defined my approach to medicine from Day One:  (1) a respect for the patient’s point of view (which, in many ways, arose out of my own personal experiences), and (2) a need to see and understand the evidence (probably a consequence of my years of graduate work in basic molecular neuroscience before becoming a psychiatrist).

Surprisingly, I have found these attributes to be in short supply among many psychiatrists—even among the people we consider to be our leaders in the field.  And Harvard’s action against Biederman, Spencer, and Wilens might unfortunately just be the tip of the iceberg.

I entered medical school in the late 1990s.  I recall one of my preclinical lectures at Cornell, in which the chairman of our psychiatry department, Jack Barchas, spoke with breathless enthusiasm about the future of psychiatry.  He expounded passionately about how the coming era would bring deeper knowledge of the biological mechanisms of mental illness and new, safer, more effective medications that would vastly improve our patients’ lives.

My other teachers and mentors were just as optimistic.  The literature at the time was filled with studies of new pharmaceuticals (the atypical antipsychotics, primarily), molecular and neuroimaging discoveries, and novel research into genetic markers of illness.  As a student, it was hard not to be caught up in the excitement of the coming revolution in biological psychiatry.

But I now wonder whether we may have been deluding ourselves.  I have no reason to think that Dr Barchas was lying to us in that lecture at Cornell, but those who did the research about which he pontificated may not have been giving us the whole story.  In fact, we’re now learning that those “revolutionary” new drugs were not quite as revolutionary as they appeared.  Drug companies routinely hid negative results and designed their studies to make the new drugs appear more effective.  They glossed over data about side effects, and frequently ghostwrote books and articles that appeared to come from their (supposedly unbiased) academic colleagues.

This went on for a long time.  And for all those years, these same academics taught the current generation of psychiatrists like me, and lectured widely (for pay, of course) to psychiatrists in the community.

In my residency years in the mid-2000s, for instance, each one of my faculty members (with only one exception that I’m aware of) spoke for drug companies or was being paid to do research on drugs that we were actively prescribing in the clinic and on the wards.  (I didn’t know this at the time, of course; I learned this afterward.)  And this was undoubtedly the case in other top-tier academic centers throughout the country, having a trickle-down effect on the practice of psychiatry worldwide.

Now, there’s nothing wrong with academics doing research or being paid to do it.  For me, the problem is that those two “pillars” by which I practice medicine (i.e., respect for the patient’s well-being, and a desire for hard evidence) were not the priorities of much of this clinical research.  Patients weren’t always getting better with these new drugs (certainly not in the long run), and the data were finessed and refined in ways that embellished the main message.  This was, of course, exacerbated by the big paychecks many of my academic mentors received.  Money has a remarkable way of influencing what people say and how (and how often) they say it.

But how is a student—or a practicing doc in the community who is several decades out of medical school—supposed to know this?  In my opinion, those who teach medical students and psychiatry residents probably should not be on a pharma payroll or give promotional talks for drugs.  These “academic leaders” are supposed to be fair, neutral, thoughtful authorities who make recommendations based on patient outcomes data and nothing else.  Isn’t that why we have academic medical centers in the first place?   (Hey, at least we know that drug reps are paid handsome salaries & bonuses by drug companies… But don’t we expect university professors to be different?)

Just as a series of little white lies can snowball into an enormous unintended deception, I’m afraid that the last 10-20 years of cumulative tainted messages (sometimes deliberate, sometimes not) about the “promises” of psychiatry have created a widespread shared delusion about what we can offer our patients.  And if that’s too much of an exaggeration, then we might at least agree that our field now suffers a crisis of confidence in our leaders.  As Daniel Carlat commented in a story about the Harvard action: “When I get on the phone now and talk to a colleague about a study… [I ask] ‘was this industry funded, and can we trust the study?'”

It may be too late to avoid irreparable damage to this field or our confidence in it.  But at least some of this is coming to light.  If nothing else, we’re taking a cue from our area of clinical expertise, and challenging the delusional thought processes that have driven our actions for many, many years.


Big Brother in Your Medicine Cabinet

June 29, 2011

If there’s one thing I’ve learned from working as a doctor, it is that “what the doctor ordered” is not always what the patient gets.  Sure, I’ve encountered the usual obstacles—like pharmacy “benefit” (ha!) managers whose restrictive formularies don’t cover the medications ordered by their physicians—but I’ve also been amazed by the number of patients who don’t take medications as prescribed.  In psychiatry, the reasons are numerous:  patients may take their SSRI “only when I feel depressed,” they double their dose of a benzodiazepine “because I like the way it makes me feel,” they stop taking two or three of their six medications out of sheer confusion, or they take a medication for entirely different purposes than those for which it was originally prescribed.  (If I had a nickel for every patient who takes Seroquel “to help me sleep,” I’d be a very rich man.)

In the interest of full disclosure, this is not limited to my patients.  Even in my own life, I found it hard to take my antidepressant daily (it really wasn’t doing anything for me, and I was involved in other forms of treatment and lifestyle change that made a much bigger difference).  And after a tooth infection last summer, it was a real challenge to take my penicillin three times a day.  I should know better.  Didn’t I learn about this in med school??

This phenomenon used to be called “noncompliance,” a term which has been replaced by the more agreeable term, “nonadherence.”  It’s rampant.  It is estimated to cost the US health care system hundreds of billions of dollars annually.  But how serious is it to human health?  The medical community—with the full support of Big Pharma, mind you—wants you to believe that it is very serious indeed.  In fact, as the New York Times reported last week, we now have a way to calculate a “risk score” for patients who are likely to skip their medications.  Developed by the FICO company, the “Medication Adherence Score” can predict “which patients are at highest risk for skipping or incorrectly using” their medications.

FICO?  Where have you heard of them before?  Yes, that’s right, they’re the company who developed the credit score:  that three-digit number which determines whether you’re worthy of getting a credit card, a car loan, or a home mortgage.  And now they’re using their clout and influence, er, actuarial skills to tell whether you’re likely to take your meds correctly.
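(FICO hasn’t disclosed what goes into the score, and I have no inside knowledge, but scores like this are typically ordinary supervised classifiers trained on claims and demographic data.  Here is a toy sketch of the general idea in Python; every feature, number, and scale below is hypothetical, not FICO’s actual model.)

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy illustration only: FICO's real model and inputs are proprietary.
# Rows = patients; columns = hypothetical features: age, number of meds,
# monthly copay in dollars, prior refill gaps. Label: 1 = refilled on time.
X = np.array([
    [34, 1, 10, 0],
    [67, 5, 40, 3],
    [45, 2, 15, 1],
    [71, 6, 60, 4],
    [29, 1,  5, 0],
    [58, 4, 35, 2],
])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X, y)

# An "adherence score": predicted probability of on-time refills,
# scaled 0-100 here purely for illustration.
new_patient = np.array([[62, 3, 30, 2]])
score = int(model.predict_proba(new_patient)[0, 1] * 100)
print(f"Adherence score: {score}")
```

The mechanics are mundane; the controversy is entirely in how such a number gets used.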

To be sure, some medications are important to take regularly, such as antiretrovirals for HIV, anticoagulants, antiarrhythmics, etc, because of the risk of severe consequences after missed doses.  As a doctor, I entered this profession to improve lives—and oftentimes medications are the best way for my patients to thrive.  [Ugh, I just can’t use that word anymore… Kaiser Permanente has ruined it for me.]

But let’s consider psychiatry, shall we?  Is a patient going to suffer by skipping Prozac or Neurontin for a few days?  Or giving them up altogether to see an acupuncturist instead?  That’s debatable.

Anyway, FICO describes their score as a way to identify patients who would “benefit from follow-up phone calls, letters, and emails to encourage proper use of medication.”  But you can see where this is going, can’t you?  It’s not too much of a stretch to see the score being used to set insurance premiums and access (or lack thereof) to name-brand medications.  Hospitals and clinics might also use it to determine which patients to accept and which to avoid.

Independently (and coincidentally?), the National Consumers League inaugurated a program last month called “Script Your Future,” which asks patients to make “pledges” to do things in the future (like “walk my daughter down the aisle” or “always be there for my best friend”) that require—or so it is implied—adherence to their life-saving medications.  Not surprisingly, funds for the campaign come from a coalition including “health professional groups, chronic disease groups, health insurance plans, pharmaceutical companies, [and] business organizations.”  In other words: people who want you to take drugs.

The take-home message to consumers, er, patients, of course, is that your doctors, drug companies, and insurers care deeply about you and truly believe that adherence to your medication regimen is the key to experiencing the joy of seeing your children graduate from college or retiring to that villa in the Bahamas.  Smile, take our drugs, and be happy.  (And don’t ask questions!)

If a patient doesn’t want to take a drug, that’s the patient’s choice—which, ultimately, must always be respected (even if it ends up shortening that patient’s life).  At the same time, it’s the doctor’s responsibility to educate the patient, figure out the reasons for this “nonadherence,” identify the potential dangers, and help the patient find suitable alternatives.  Perhaps there’s a language barrier, a philosophical opposition to drugs, a lack of understanding of the risks and benefits, or an unspoken cultural resistance to Western allopathic medicine.  Each of these has its merits, and needs to be discussed with the patient.

Certainly, if there are no alternatives available, and a patient still insists on ignoring an appropriate and justifiable medical recommendation, we as a society have to address how to hold patients accountable, so as not to incur greater costs to society down the road (I’m reminded here of Anne Fadiman’s excellent book The Spirit Catches You And You Fall Down).  At the same time, though, we might compensate for those increased costs by not overprescribing, overtreating, overpathologizing, and then launching campaigns to make patients complicit in (and responsible for!) these decisions.

Giving patients a “score” to determine whether they’re going to take their meds is the antithesis of good medicine.  Good medicine requires discussion, interaction, understanding, and respect.  Penalizing patients for not following doctors’ orders creates an adversarial relationship that we can do without.


Psychopharm R&D Cutbacks: Crisis or Opportunity?

June 19, 2011

The scientific journal Nature ran an editorial this week with a rather ominous headline: “Psychopharmacology in Crisis.”  What exactly is this “crisis” they speak of?  Is it the fact that our current psychiatric drugs are only marginally effective for many patients?  Is it the fact that they can often cause side effects that some patients complain are worse than the original disease?  No, the “crisis” is that the future of psychopharmacology is in jeopardy, as pharmaceutical companies, university labs, and government funding agencies devote fewer resources to research and development in psychopharmacology.  Whether this represents a true crisis, however, is entirely in the eye of the beholder.

In 2010, the pharmaceutical powerhouses GlaxoSmithKline (GSK) and AstraZeneca closed down R&D units for a variety of CNS disorders, a story that received much attention.  They suspended their research programs because of the high cost of bringing psychiatric drugs to market, the potential for lawsuits related to adverse events, and the heavy regulatory burdens faced by drug companies in the US and Europe.  In response, organizations like the European College of Neuropsychopharmacology (ECNP) and the Institute of Medicine in the US have convened summits to determine how to address this problem.

The “problem,” of course, for pharmaceutical companies is the potential absence of a predictable revenue stream.  Over the last several years, big pharma has found it to be more profitable not to develop novel drugs, but new niches for existing agents—a decision driven by MBAs instead of MDs and PhDs.  As Steve Hyman, former NIMH director, told Science magazine last June, “It’s hardly a rich pipeline.  It suggests a sad dearth of ideas and … lots of attempts at patent extensions and new indications for old drugs.”

Indeed, when I look back at the drug approvals of the last three or four years, there really hasn’t been much to get excited about:  antidepressants (Lexapro, Pristiq, Cymbalta) that are similar in mechanism to other drugs we’ve been using for years; new antipsychotics (Saphris, Fanapt, Latuda) that are essentially me-too drugs which don’t dramatically improve upon older treatments; existing drugs (Abilify, Seroquel XR) that have received new indications for “add-on” treatment; existing drugs (Silenor, Nuedexta, Kapvay) that have been tweaked and reformulated for new indications; and new drugs (Invega, Oleptro, Invega Sustenna) whose major attraction is a fancy, novel delivery system.

Testing and approval of the above compounds undoubtedly cost billions of dollars (investments which, by the way, are being recovered in the form of higher health care costs to you and me), but the benefit to patients as a whole has been only marginal.

The true crisis, in my mind, is that with each new drug we psychiatrists are led to believe that we’re witnessing the birth of a blockbuster.  Not to mention the fact that patients expect the same, especially with the glut of persuasive direct-to-consumer advertising (“Ask your doctor if Pristiq is right for you!”).  Most third-party payers, too, are more willing to pay for medication treatment than anything else (although—thankfully—coverage of newer agents often requires the doctor to justify his or her decision), even though many of these drugs turn out to be duds.

In the meantime, this focus on drugs neglects the person behind the illness.  Not every person who walks into my office with a complaint of “depression” is a candidate for Viibryd or Seroquel XR.  Or even a candidate for antidepressants at all.  But the overwhelming bias is that another drug trial might work.  “Who knows—maybe the next drug is the ‘right’ one for this patient!”

Recently, I’ve joked with colleagues that I’d like to see a moratorium on psychiatric drug development.  Not because our current medications meet all of our needs, or that we can get by without any further research.  Not at all.  But if we had, say, five years with NO new drugs, we might be able to catch our collective breaths, figure out exactly what we’re treating after all (maybe even have a more fruitful and unbiased discussion about what to put in the new DSM-5), and, perhaps, devote resources to nonpharmacological treatments, without getting caught up in the ongoing psychopharmacology arms race that, for many patients, focuses our attention where it doesn’t belong.

So it looks like my wish might come true.  Maybe we can use the upcoming slowdown to determine where the real needs are in psychiatry.  Maybe if we devote resources to community mental health services and to drug and alcohol treatment, pay more attention to our patients’ personality traits, lifestyle issues, and co-occurring medical illnesses, and respond to their goals for treatment rather than AstraZeneca’s or Pfizer’s, we can improve the care we provide and figure out where new drugs might truly pay off.  Along the way, we can spend some time following the guidelines discussed in a recent report in the Archives of Internal Medicine, and practice “conservative prescribing”—i.e., making sensible decisions about what we prescribe and why.

Sometimes, it is true that less is more.  When Big Pharma backs out of drug development, it’s not necessarily a bad thing.  In fact, it may be precisely what the doctor ordered.


How Much Should Addiction Treatment Cost?

May 22, 2011

Drug and alcohol abuse are widespread social, behavioral, and—if we are to believe the National Institutes of Health and most addiction professionals—medical problems.  In fact, addiction medicine has evolved into its own specialty, and a large number of other allied health professionals have become engaged in the treatment of substance abuse and dependence.

If addiction is a disease, then we should be able to develop ways to treat addictions effectively, and the costs of accepted treatments can be used to determine how we provide (and reimburse for) these services.  Unfortunately, unlike virtually every other (non-psychiatric) disease process—and despite tremendous efforts—there are still no universally accepted approaches for the management of addictive disorders.  And the costs of treating an addict can range from zero to tens (or hundreds) of thousands of dollars.

I started thinking of this issue after reading a recent article on abcnews.com, in which addiction psychiatrist Stefan Kruszewski, MD, criticized addiction treatment programs for their tendency to take people off one addictive substance and replace it with another one (e.g., from heroin to Suboxone; or from alcohol to a combination of a benzodiazepine, an antidepressant, and an antipsychotic), often at a very high cost.  When seen through the eyes of a utilization reviewer, this seems unwise, expensive, and wasteful.

I agree with Dr Kruszewski, but for a slightly different reason.  To me, current treatment approaches falsely “medicalize” addiction and avoid the more significant psychological (or even spiritual) meaning of our patients’ addictive behaviors.  [See my posts “Misplaced Priorities in Addiction Treatment” and “When Does Treatment End.”]  They also cost a lot of money:  Suboxone induction, for instance, can cost hundreds of dollars, and the medication itself can cost several hundred more per month.  Likewise, the amounts being spent to develop new pharmacotherapies for cocaine and stimulant addiction are very high indeed.

Residential treatment programs—particularly the famous ones like Cirque Lodge, Sierra Tucson, and The Meadows—are also extremely expensive.  I, myself, worked for a time as a psychiatrist for a long-term residential drug and alcohol treatment program.  Even though we tried to err on the side of avoiding medications unless absolutely necessary (and virtually never discharged patients on long-term treatments like Suboxone or methadone), our services were quite costly:  upwards of $30,000 for a four-month stay, plus $5000/month for “aftercare” services.  (NB:  Since my departure, the center has closed, due in part to financial concerns.)

There are cheaper programs, like state- and county-sponsored detox centers for those with no ability to pay, as well as free or low-cost longer-term programs like the Salvation Army.  There are also programs like Phoenix House, a nonprofit network of addiction treatment programs with a variety of services—most of which are based on the “therapeutic community” approach—which are free to participants, paid for by public and private funding.

And then, of course, there are the addicts who quit “cold turkey”—sometimes with little or no support at all—and those who immerse themselves in a mutual support program like Alcoholics Anonymous (AA).  AA meetings can be found almost everywhere, and they’re free.  Even though the success rate of AA is probably quite low (likely less than 10%, although official numbers don’t exist), the fact of the matter is that some people do recover completely without paying a dime.

How to explain this discrepancy?  The treatment “industry,” when challenged on this point, will argue that the success rate of AA alone is abysmal, and without adequate long-term care (usually in a group setting), relapse is likely, if not guaranteed.  This may in fact be partially true; it has been shown, for instance, that the likelihood of long-term sobriety does correlate with duration of treatment.

But at what cost?  Why should anyone pay $20,000 to $50,000 for a month at a premiere treatment center like Cirque Lodge or Promises Malibu?  Lindsay Lohan and Britney Spears can afford it, but few else—and virtually no insurance plans—can.

And the services offered by these “premiere” treatment programs sound like a spa menu, rather than a treatment protocol:  acupuncture, biofeedback, equine therapy, massage, chiropractic, art therapy, nature hikes, helicopter rides, gourmet meals or private chef services, “light and sound neurotherapy,” EMDR, craniosacral therapy, reiki training, tai chi, and many others.

Unfortunately, the evidence that any one of these services improves a patient’s chance of long-term sobriety is essentially nil.  Moreover, if addiction is purely a medical illness, then learning how to ride a horse should do absolutely nothing to help someone kick a cocaine habit.  Furthermore, medical insurance should not pay for those services (or, for that matter, for group therapy or a therapeutic-community approach).

Nevertheless, some recovering addicts may genuinely claim that they owe their sobriety to some of these experiences:  trauma recovery treatment, experiential therapy, “male bonding” activities (hat tip to the Prescott House), and yes, even the helicopter rides.

The bottom line is, we still don’t know how to treat addiction, or even what it really is in the first place.  Experts have their own ideas, and those in recovery have their own explanations.  My opinion is that, in the end, treatment must be individualized.  For every alcoholic who gets sober by attending daily AA meetings, or through religious conversion, there’s another addict who has tried and failed AA numerous times, and who must enroll in multiple programs (costing tens of thousands of dollars) to achieve remission.

What are we as a society willing to pay for?  Or should we simply maintain the free-market status quo, in which some can pay big bucks to sober up with celebrities on the beaches of Malibu, while others must detox on the bathroom floor and stagger to the AA meetings down the street?  Until we determine how best to tailor treatment to the individual, there’s no shortage of people who are willing to try just about anything to get help—and a lot of money to be made (and spent) along the way.

