A few days ago, upon awakening but before my brain was fully alert, I was reviewing the events of the previous few days in preparation for the new one. At one point I tried to remember a conversation I had had with a colleague about three days prior, but I could not quite remember the specifics of our discussion. “No big deal,” I thought to myself, “I’ll just Google it.”
Almost immediately, I recognized the folly of this thought. Obviously, there is no way to “Google” the events of our personal lives. But while impractical, the solution was a logical one. If I want to know any fact or piece of information, I Google it online. If I want to find a file on my computer, I use Google Desktop. All of my email conversations for the last five years are archived in my Google Mail account, so I can quickly find correspondence (and people, and account numbers, and emailed passwords, etc.) at the click of the “Search” button. No wonder I immediately thought of Googling myself.
A recent article in Science claims that the permeation of Google and other search engines into our lives—and now onto our smartphones and other portable gadgets—has not only made it easier for us to retrieve information, but it has also changed the way we remember. In their experiments, three cognitive psychologists from Columbia, Harvard, and UW-Madison demonstrated that we are more likely to forget information if we know that we can access it (e.g., by a search engine) in the future. Moreover, even for simple data, we’re more likely to remember where we store pieces of information than the subject matter itself.
The implication here is that the process of memory storage and retrieval is rapidly changing in the Online Age. Humans no longer need to memorize facts (who was the 18th president? What’s the capital of Australia? When was the Six-Day War?), but instead just need to know how to access them.
Is this simply a variation of the old statement that “intelligence is not necessarily knowing everything but instead where to find it”? Perhaps. An optimist might look at this evolution in human memory as presenting an opportunity to use more brain power for processing complex pieces of information that can’t be readily stored. In my work, for instance, I’m glad I don’t need to recall precise drug mechanisms, drug-drug interactions, or specific diagnostic criteria (I can look them up quite easily), but can instead pay closer attention to the process of listening to my patients and attending to more subtle concerns. (Which often does more good in the long run anyway.)
The difference, however, is that I was trained in an era in which I did have to memorize all of this information without the advantage of an external online memory bank. Along the way, I was able to make my own connections among sets of seemingly unrelated facts. I was able to weed out those that were irrelevant, and retain those that truly made a difference in my daily work. This resulted, in my opinion, in a much richer understanding of my field.
While I’ve seen no studies of this issue, I wonder whether students in medicine (or, for that matter, other fields requiring mastery of a large body of information) are developing different sets of skills in the Google Era. Knowing that one can always “look something up” might make a student more careless or lazy. On the other hand, it might help one to develop a whole new set of clinical skills that previous generations simply didn’t have time for.
Unfortunately, those skills are not the things that are rewarded in our day-to-day work. We value information and facts, rather than substance and process. In general, patients want to know drug doses, mechanisms, and side effects, rather than developing a “therapeutic relationship” with their doctor. Third-party payers don’t care about the insights or breakthroughs that might happen during therapy, but instead that the proper diagnoses and billing codes are given, and that patients improve on some objective measurement. And when my charts are reviewed by an auditor (or a lawyer), what matters is not the quality of the doctor-patient interaction, but instead the documentation, the informed consent, the checklists, the precise drug dosing, details in the treatment plan, and so on.
I think immediate access to information is a wonderful thing. Perhaps I rely on it too much. (My fiancé has already reprimanded me for looking up actors or plot twists on IMDB while we’re watching movies.) But now that we know it’s changing the way we store information and—I don’t think this is too much of a stretch—the way we think, we should look for ways to use information more efficiently, creatively, and productively. The human brain has immense potential; now that our collective memories are external (and our likelihood of forgetting is essentially nil), let’s tap that potential to do some special and unique things that computers can’t do. Yet.