I don’t know if you caught it the other night when you were watching the news while skimming your email, checking your Twitter and RSS feeds, and updating your Facebook status, but there was an interesting story about multitasking. Silly me – who actually watches the news anymore? Anyway, much of the recent buzz on this endemic behavior (among the technologically savvy) is not good. Multitasking is a paradox of sorts, in that we tend to romanticize and overestimate our ability to split attention among multiple competing demands. The belief goes something like this: “I’ve got a lot to do, and if I work on all my tasks simultaneously I’ll get them done faster.” However, what most of us fail to realize is that when we split our attention, we are actually dividing an already limited and finite capacity in a way that hinders overall performance. And some research is showing that chronic multitasking may have deleterious effects on one’s ability to process information even when one is not multitasking (Nass, 2009).

 

Advances in computer technology seem to fuel this behavior. If you do a Google search on multitasking, you will get information on the technological wonders of machines that can multitask (a.k.a. computers) mixed with news regarding how bad media multitasking is for you.

 

Think about it. There has been increasing pressure on the workforce to be more productive, and gains in productivity have been made in lockstep with increases in personal computing power. Applications have been developed on the back of the rising tide of computer capacity, thus making human multitasking more possible. These advances include faster microprocessors, increased RAM, larger monitors, the internet itself, browsers that facilitate the use of multiple tabs, and relatively inexpensive computers with sufficient power to keep email, word processors, Facebook, Twitter, iTunes, and YouTube open all at once. Compound these tools with hardware that allows you to do these things on the go. No longer are you tethered to the desktop computer with an Ethernet cable. Wi-Fi and 3G connectivity allow all the above activities almost anywhere via a smartphone, laptop, iPad, or notebook computer. Also in the mix are Bluetooth headsets and other headphones that offer hands-free operation of telephones.

 

Currently, technology offers one the ability to divide one’s attention in ways inconceivable only a decade ago. The ease of doing so has resulted in the generalization of this behavior across settings and situations, including talking on cell phones while driving, texting while driving, texting while engaged in face-to-face personal interactions, and even cooking dinner while talking on the phone. Some of these behaviors are dangerous, some rude, and all likely lead to inferior outcomes.

 

Don’t believe it? If you don’t, you are likely among the least skilled of those who multitask. “Not me!” you may claim. Well, research has shown that those who routinely multitask are also the most confident in their ability to do so (Nass, 2009). But when you look at the products of these “confidently proficient” multitaskers, you find the poorest outcomes.

 

Multitasking involves shifting attention from one task to another, refocusing attention, sustaining attention, and exercising ongoing judgment about the pertinence and salience of various competing demands. Doing this successfully is exceptionally difficult and is likely well beyond the capacity of most typical human beings. Our brains can generally concentrate on only one task at a time, and as such, multitasking necessitates devoting shorter periods of time to dissimilar tasks. As a result, overall effectiveness on all tasks is reduced.

 

Researchers at the University of Michigan Brain, Cognition and Action Laboratory, including Professor David E. Meyer, point out that the act of switching focus itself has deleterious effects. When you switch from task A to task B you lose time in making the transition, and the time the transition takes increases with the complexity of the tasks involved. Depending on how often you transition between stimuli, you can waste as much as 40% of your productive time just in task switching (APA, 2006).
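
To get a feel for how quickly switch costs compound, here is a minimal back-of-the-envelope sketch in Python. The one-switch-every-five-minutes rate and the two-minute refocusing cost are illustrative assumptions of mine, not figures from the Meyer lab or the APA piece:

```python
def productive_fraction(switches_per_hour: int, switch_cost_minutes: float) -> float:
    """Fraction of an hour spent on-task after paying a refocusing cost per switch."""
    overhead = switches_per_hour * switch_cost_minutes
    return max(0.0, 60.0 - overhead) / 60.0

# Twelve switches an hour (one every five minutes) at an assumed two-minute
# refocusing cost leaves only 60% of the hour for actual work -- roughly
# the "as much as 40% wasted" ballpark described above.
print(productive_fraction(12, 2.0))  # -> 0.6
```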

 

Shorter periods of focus reduce overall time on task, and each transition reduces this time further. In 2005, Dr. Glenn Wilson at the Institute of Psychiatry, University of London, found that his subjects experienced a 10-point fall in IQ when distracted by incoming email and phone calls. This effect size was “more than twice that found in studies of the impact of smoking marijuana” and was similar to the effect of losing a night’s sleep (BBC, 2005).

 

As for the negative long-term effects of multitasking, Dr. Nass noted that:

 

“We studied people who were chronic multitaskers, and even when we did not ask them to do anything close to the level of multitasking they were doing, their cognitive processes were impaired. So basically, they are worse at most of the kinds of thinking not only required for multitasking but what we generally think of as involving deep thought.”

 

Nass (2009) has found that these habitual multitaskers have chronic filtering difficulties, impaired capacity to manage working memory, and slower task-switching abilities. One must be careful to avoid the Illusion of Cause in this situation. Correlation is not causation, and we must avoid inferring that multitasking causes these cognitive declines. The reverse may be true, or other undetected variables may cause both.

 

Much of the research in this area is in its infancy, and thus limited in scope and depth, so it is prudent to be a bit skeptical about whether multitasking is bad for you. But with regard to the efficacy of multitasking – when you look at the issue from an anecdotal perspective, apply the tangentially related evidence logically, and then consider the data, you have to conclude that multitasking on important jobs is not a good idea. If you have important tasks to accomplish, it is best to focus your attention on one task at a time and to minimize distractions. To do so, avoid the temptation to text, tweet, watch TV, check your email, talk on the phone, instant message, chat on Facebook, Skype, or otherwise divide your attention. If you believe employing these other distractions helps you do better, you are deluding yourself and falling victim to the reinforcement systems that make multitasking enjoyable. Socializing, virtually or otherwise, is more pleasurable than the arduous processes involved in truly working or studying.

 

You can likely apply the same principles to plumbing, cooking, housework, woodworking, etc. The key to success, it seems, is to FOCUS on one task at a time, FINISH the job, and then move on. You’ll save time, be more efficient, and do a better job! Remember – FOCUS & FINISH!

 

References

 

American Psychological Association. (March 20, 2006). Multitasking: Switching Costs. http://www.apa.org/research/action/multitask.aspx

 

BBC News. (2005). ‘Infomania’ worse than marijuana. http://news.bbc.co.uk/2/hi/uk_news/4471607.stm

 

Keim, B. (2009). Multitasking Muddles Brains, Even When the Computer Is Off. Wired Science. http://www.wired.com/wiredscience/2009/08/multitasking/

 

Ophir, E., Nass, C., & Wagner, A. D. (2009). Cognitive Control in Media Multitaskers. Proceedings of the National Academy of Sciences, 106(37). http://www.pnas.org/content/106/37/15583

 

Nass, C. (August 28, 2009). Multitasking May Not Mean Higher Productivity. Talk of the Nation, National Public Radio. http://www.npr.org/templates/story/story.php?storyId=112334449

 

Seldon, B. (2009). Multitasking, marijuana, managing? http://www.management-issues.com/2009/9/21/opinion/multitasking–marijuana–managing.asp


The Implicit Association Test (IAT) is a very popular method for measuring implicit (implied though not plainly expressed) biases. Greenwald, one of the primary test developers, notes that “It has been self-administered online by millions, many of whom have been surprised—sometimes unpleasantly—by evidence of their own unconscious attitudes and stereotypes regarding race, age, gender, ethnicity, religion, or sexual orientation” (2010). It purports to tap into our unconscious or intuitive attitudes at a deeper level than those we are able to rationally express. The best way to get an idea of just what the IAT is, is to take it. If you haven’t done so already, go to the Implicit Association Test website and participate in a demonstration of the Race Test. It takes about ten minutes.

 

I tend to have a skeptical inclination. This stems in part from the training I received in acquiring my PhD in psychology, but it is also just part of who I am. Psychology is, in itself, a rather soft science – full of constructs and variables that are inherently difficult to measure with any degree of certainty. I learned early in my training that there are dangers associated with inference and with measuring intangibles. In fact, my training in personality and projective measures essentially focused on why not to use them – especially when tasked with helping to make important life decisions. Why is this? All psychological measures contain small and predictable amounts of unavoidable error, but those based on constructs and inference are particularly untenable.

 

This is relevant because as we look at thinking processes, we are dealing with intangibles. This is especially true when we are talking about implicit measures. Any discussion of implicit thought necessitates indirect or inferential measures and the application of theoretical constructs. So, with regard to the Implicit Association Test (IAT), one needs to be careful.

 

Increasing evidence suggests that our intuition has a powerful influence over our behavior and moment-to-moment decision making. Books like Blink by Malcolm Gladwell and How We Decide by Jonah Lehrer point out the power of intuition and emotion in this regard. Chabris and Simons, in their book The Invisible Gorilla, make a strong argument that intuition itself sets us up for errors. Gladwell perhaps glorifies intuition, but the reality is that intuition is a powerful force. Gladwell uses the story of the IAT as evidence of such power. Essentially, if the IAT is a valid and reliable measure, it provides strong evidence of the problems of intuition.

 

I am motivated to shed some light on the IAT – not because of my personal IAT results, which were disappointing, but because the IAT risks gaining widespread application without sufficient technical adequacy. Just think of the ubiquitous Myers-Briggs Type Indicator and the breadth and depth of popular use and appeal it has garnered (without a shred of legitimate science to back it up). Real decisions are made based on the results of this instrument, and frankly, that is dangerous. The Myers-Briggs is based on unsubstantiated and long out-of-date Jungian constructs and was built by individuals with little to no training in psychology or psychometrics. This is certainly not the case for the IAT, but the risks of broad and perhaps erroneous application are similar.

 

The authors of the IAT have worked diligently over the years to publish studies and facilitate others’ research in order to establish the technical adequacy of the measure. This is a tough task because the IAT is not one test; rather, it is a method that can be applied to measure any number of implicit attitudes. At the very foundation of this approach there is a construct, or belief, that necessitates a leap of faith.

 

So what is the IAT? Gladwell (2005) summarizes it in the following way:

The Implicit Association Test (IAT)…. measures a person’s attitude on an unconscious level, or the immediate and automatic associations that occur even before a person has time to think. According to the test results, unconscious attitudes may be totally different or incompatible with conscious values. This means that attitudes towards things like race or gender operate on two levels:
1. Conscious level- attitudes which are our stated values and which are used to direct behavior deliberately.
2. Unconscious level- the immediate, automatic associations that tumble out before you have time to think.
Clearly, this shows that aside from being a measurement of attitudes, the IAT can be a powerful predictor of how one [may] act in certain kinds of spontaneous situations.

So here is one of the difficulties I have with the measure. Take this statement: “The IAT measures a person’s attitude on an unconscious level, or the immediate and automatic associations that occur even before a person has time to think.” Tell me how one would directly and reliably measure an “unconscious attitude” without using inference or indirect measures that are completely dependent on constructs. I am not alone in this concern. In fact, Texas A&M University psychologist Hart Blanton, PhD, worries that the IAT has been used prematurely in research without sufficient technical adequacy. Blanton has published several articles (Blanton et al., 2007; Blanton et al., 2009) detailing the IAT’s multiple psychometric failings. He suggests that perhaps the greatest problem with this measure concerns the way the test is scored.

 

First you have to understand how it all works. The IAT purports to measure the fluency of people’s associations between concepts. On the Race IAT, a comparison is made between how fluently the respondent pairs pictures of European-Americans with words carrying a connotation of “good” and pictures of African-Americans with words connoting “bad.” The task measures the latency of such pairings and compares it to the fluency of responding when the associations are reversed (e.g., how quickly the respondent pairs European-Americans with words connoting “bad” and African-Americans with words connoting “good”). If one is quicker at pairing European-Americans with “good” and African-Americans with “bad,” then it is inferred that the respondent has a European-American preference. The degree of preference is determined by the measured fluency and dysfluency in making those pairings: bigger differences in pairing times result in stronger ratings of one’s bias. Blanton questions where the cutoffs for mild, moderate, and strong preferences are set, given that there is no research showing where they should be. The bottom line, Blanton argues, is that the cutoffs are arbitrary. This is a common problem in social psychology.
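
To make the scoring logic concrete, here is a deliberately simplified Python sketch of a latency-difference score. This is not the IAT’s actual scoring algorithm (the real test uses Greenwald and colleagues’ more involved D-score procedure), and the response times and cutoff labels below are illustrative assumptions – though arbitrary cutoffs of this general flavor are exactly what Blanton objects to:

```python
import statistics

def simplified_iat_score(congruent_ms, incongruent_ms):
    """Unit-free latency-difference score: positive values mean the
    'congruent' block (e.g., European-American + good) was faster."""
    diff = statistics.mean(incongruent_ms) - statistics.mean(congruent_ms)
    pooled_sd = statistics.stdev(congruent_ms + incongruent_ms)
    return diff / pooled_sd

def label(score):
    """Arbitrary cutoff labels of the kind Blanton criticizes (illustrative only)."""
    if score < 0.15:
        return "little or no preference"
    if score < 0.35:
        return "slight preference"
    if score < 0.65:
        return "moderate preference"
    return "strong preference"

block_a = [650, 700, 690, 720]  # hypothetical reaction times (ms), one pairing
block_b = [820, 790, 860, 800]  # hypothetical reaction times (ms), reversed pairing
score = simplified_iat_score(block_a, block_b)
print(round(score, 2), "->", label(score))
```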

 

Another issue of concern is the stability of the construct being measured. One has to question whether one’s bias, or racial preference, is a trait (a stable attribute over time) or a state (a temporary attitude based on acute environmental influences). The test-retest reliability of the IAT is itself relatively unstable. Regardless, according to Greenwald: “The IAT has also shown reasonably good reliability over multiple assessments of the task. …. in 20 studies that have included more than one administration of the IAT, test–retest reliability ranged from .25 to .69, with mean and median test–retest reliability of .50.” Satisfactory test-retest reliability values are in the .70 to .80 range. To me, that leaves a fair amount of variance unaccounted for, across a wide range of values (suggesting weak consistency). My IATs have bounced all over the map. And boy did I feel bad when my score suggested a level of preference that diverged significantly from my deeply held values. Thank goodness I have some understanding of the limitations of such metrics. Not everyone has that luxury.

 

As I noted previously, the IAT authors have worked diligently to establish the technical adequacy of this measure, and they report statistics attesting to its internal consistency, test-retest reliability, predictive validity, convergent validity, and discriminant validity, almost always suggesting that results are robust (Greenwald, 2009, 2010; Greenwald et al., 2009; Lane et al., 2007). There are other studies, including those carried out by Blanton and colleagues, that suggest otherwise. To me, these analyses are important and worthwhile; however, at the foundation, there is the inescapable problem of measuring unconscious thought.

 

Another core problem is that the validity analyses employ other, equally problematic measures of intangibles in order to establish credibility. I can’t be explicit enough: when one enters the realm of the implicit, one enters a realm of intangibles, and like it or not, until minds can be read explicitly, the implicit is essentially immeasurable with any degree of certainty. The IAT may indeed measure what it purports to measure, but the data on this are unconvincing. Substantial questions of reliability and validity persist. I would suggest that you not take your IAT scores to heart.

 

References

 

Azar, B. (2008). IAT: Fad or fabulous? Monitor on Psychology, 39(7), 44.

 

Blanton, H., Jaccard, J., Christie, C., & Gonzales, P. M. (2007). Plausible assumptions, questionable assumptions and post hoc rationalizations: Will the real IAT, please stand up? Journal of Experimental Social Psychology, 43(3), 399-409.

 

Blanton, H., Klick, J., Mitchell, G., Jaccard, J., Mellers, B., & Tetlock, P. E. (2009). Strong Claims and Weak Evidence: Reassessing the Predictive Validity of the IAT. Journal of Applied Psychology, 94(3), 567-582.

 

Chabris, C. F., & Simons, D. J. (2010). The Invisible Gorilla. Random House: New York.

 

Gladwell, M. (2005). Blink: The Power of Thinking Without Thinking. Little, Brown and Company: New York.

 

Greenwald, A. G. (2010). I Love Him, I Love Him Not: Researchers adapt a test for unconscious bias to tap secrets of the heart. ScientificAmerican.com: Mind Matters. http://www.scientificamerican.com/article.cfm?id=i-love-him-i-love-him-not

 

Greenwald, A. G. (2009). Implicit Association Test: Validity Debates. http://faculty.washington.edu/agg/iat_validity.htm

 

Greenwald, A. G., Poehlman, T. A., Uhlmann, E., & Banaji, M. R. (2009). Understanding and using the Implicit Association Test: III. Meta-analysis of predictive validity. Journal of Personality and Social Psychology, 97, 17-41.

 

Lane, K. A., Banaji, M. R., Nosek, B. A., & Greenwald, A. G. (2007). Understanding and using the Implicit Association Test: IV. What we know (so far). In B. Wittenbrink & N. S. Schwarz (Eds.), Implicit measures of attitudes: Procedures and controversies (pp. 59-102). New York: Guilford Press.

 

Lehrer, J. (2009). How We Decide. Houghton Mifflin Harcourt: New York.


My wife and I recently spent some time in New York City, and one of our traditions is to take in a Broadway show. This time we stepped a bit off-Broadway to see the bawdy but Tony Award-winning Avenue Q. On the surface, this show seems silly, but it actually addresses some important issues. Essentially it is about the “coming of age” of young adults stepping out into the real world. The way the show is played out is interesting in that it employs a mixture of human actors, human puppets, and monster puppets, with all puppeteers fully visible on stage. As is often the case in theater, it necessitated suspension of disbelief and letting go of conventional thinking.

 

The play itself satirizes the longstanding PBS children’s show Sesame Street in both format and message. Make no mistake, however: this is not a show for children, or even for folks put off by lewd language or sexual situations. Regardless, it delves headlong into issues that challenge the teachings of Sesame Street, laying bare the notion that everyone is “special.”

 

I couldn’t help but hearken back to a post I wrote entitled Self Esteem on a Silver Platter, which highlights the cost of telling children they are smart. I wonder if there are similar costs to telling children they are inherently special? Obviously, the writers of Ave. Q had the same question in mind.

 

As Princeton, the play’s protagonist, struggled with the reality of entering the world of work and his internalized notion of his own specialness, I thought about my college-age children and my own experience when I left a small town to attend college. I have to believe that my experience was not unlike Princeton’s and, I’m guessing, is very similar to my children’s experiences as they make the transition from big fish in a small pond to small fish in a big pond. It’s a humbling transition.

 

Some of the other issues confronted by the cast and characters include racism and homophobia. Each of these prejudices is an attitude played out in large part by our intuitive brains. That is not to say that we are powerless over them; we can change these deep-seated attributes through concerted effort and appropriate exposure. But it raises the question: “Where do these prejudices come from?” I believe the consensus is clear: prejudices are learned from, and taught by, those important people around us who model and mold us throughout childhood. It is also important to understand that there seems to be a natural inclination within us to be suspicious of those who are different from us. This tribal tendency to classify outsiders as threats may stem from our ancestral roots, when outsiders were indeed threats to our very survival, and this propensity has carried on due to natural selection. It seems that there is a human inclination to be prejudiced. Compound that inclination with other human brain failings (e.g., confirmation bias), minimal exposure to diversity, and influential bigots, and you have a near-certain prejudicial clone. To make matters worse, all you have to do is turn on the TV and watch the news to feed those prejudices. Racism in our culture is not very subtle. But I digress.

 

The point I am trying to make is that we all have biases, and that they are intuitive to a degree. Next week I am going to explore the Implicit Association Test and its implications, which support the notion that stereotypes and prejudices are indeed deeply rooted in our intuition. If you have not taken the Implicit Association Test, do so, particularly the Race Test. You may be surprised by the results. I know I was. This is, in fact, one of the subplots of Ave. Q – we are all a bit racist, and perhaps a bit homophobic too; although I will argue to my grave that I do not value people differently based on their race, gender, or sexual orientation.

 

Ave. Q also deals with schadenfreude, the pleasure we gain from others’ pain or struggles. This is a curious proclivity, and one I hope to gain a better understanding of. As I think back to childhood, I can recall experiencing a strong compulsion to laugh when a friend was injured during our mutual play. I remember knowing that this was somehow wrong and inappropriate; regardless, there was this deep urge to chuckle. Looking back, I know that it was not a rational response – it was intuitive. The reality is that most of us are at least relieved by the misery of others, and we often gain some appreciation that our lives are not so bad after all. The play’s treatment of this very issue normalizes the experience and perhaps explains our societal infatuation with gossip. In my profession I see real agony in the lives of the families I work with on a daily basis, and thus find gossip repulsive.

 

One of the major goals of art is to provoke thought, and Ave. Q effectively pulled this off. I’d like to say that I have no prejudices, but Ave. Q and the results of my IAT suggest that this may not be absolutely true. In reference to the work of Christopher Chabris and Daniel Simons in their book The Invisible Gorilla, I wonder if perhaps there is an Illusion of an Open Mind. I shall not rest comfortably with this illusion, and I am fully committed to overcoming the failings of my naturally selected and intuitive tendencies. The first step is accepting this reality.


Over the last two weeks I’ve dealt with the issue of vaccines as they pertain to Autism. I first dealt with the back story and then addressed why such an illusion of cause has persisted despite the efforts of the scientific and medical communities. Although I have made reference to some of the data, I thought it would be prudent to put forward some particularly relevant facts and statistics.

 

First, I would like to note the progress mankind has made with regard to average life span and give credit where credit is due. Carl Sagan, in his excellent book The Demon-Haunted World, addressed this very issue, indicating that in pre-agricultural times, 10,000 years ago, human life expectancy was about 20-30 years. That expectancy persisted throughout the rise and fall of the Greek and Roman empires, right through medieval times. Not until the late 19th century did it rise to 40 years. In 1915 it was estimated to be 50, and then as high as 60 by 1930. It rose to 70 in about 1955 and is currently around 80 for individuals living in developed countries.

 

So what can we attribute this growth in life expectancy to? The answer is clear. Along with advancements in public sanitation (clean water, flush toilets) and vast improvements in nutrition, science has contributed the germ theory of disease and huge advancements in medical care and medical technology. Of particular importance has been our increased capacity to understand and prevent infectious diseases. Understanding how diseases spread has been important in minimizing the transmission of illnesses like TB, and it continues to be important with regard to HIV; however, another huge variable has been the introduction of immunizations.

 

Not all that long ago, infectious diseases were among the top causes of death for humans in developed nations, and this is still the case in many low-income countries. According to World Health Organization statistics, six of the top ten causes of death in low-income nations are infectious diseases (respiratory infections, 11.2%; diarrheal diseases, 6.9%; HIV/AIDS, 5.7%; TB, 3.5%; neonatal infections, 3.4%; and malaria, 3.3%). In high-income countries, by contrast, heart disease, cerebrovascular disease, and cancer reign supreme; the only infectious disease to make the top ten is lower respiratory infections (3.8%). Although heart disease, strokes, and cancer afflict low-income countries too, the proportion of deaths there attributable to infectious diseases dominates. This discrepancy is essentially due to publicly managed vaccine and infection-control programs affordable only to relatively wealthy industrialized nations.

 

If you look back in time at US morbidity and mortality statistics pre- and post-mandated vaccines (Roush, Murphy, & the Vaccine-Preventable Disease Table Working Group, 2007), the numbers are staggering. Peak annual deaths and peak annual cases, with the year of each peak, tell the story:

  • Diphtheria: 3,065 deaths (1936); 30,508 cases (1938)
  • Measles: 552 deaths (1958); 763,094 cases (1958)
  • Mumps: 50 deaths (1964); 212,932 cases (1964)
  • Rubella: 24 deaths (1968); 488,796 cases (1968)
  • Pertussis: 7,518 deaths (1934); 265,209 cases (1934)
  • Polio (paralytic): 3,145 deaths (1952); 21,269 cases (1952)
  • Smallpox: 2,510 deaths (1902); 2,510 cases (1902)

In 2004 (the post-mandated-vaccine era) there were no (zero) deaths in the US attributable to diphtheria, measles, mumps, paralytic polio, rubella, or smallpox. Pertussis persists, having killed 27 people in 2004 and afflicted over 15,000 in 2006. Regardless, in the US, our vaccine schedules have essentially eradicated infectious diseases that previously took thousands of children’s lives every year. There has been more than a 92% decline in morbidity and a 99% or greater reduction in deaths attributable to the preventable infectious diseases targeted since 1980 by the current vaccine schedule. Endemic transmission of measles, rubella, and poliovirus has also been eliminated, and smallpox has been eradicated worldwide. This is no small accomplishment. One must keep in mind that one who fails to learn from history is doomed to repeat it (Crislip, paraphrasing Santayana).

 

The objections to vaccines put forth by the anti-vaccine folks have morphed over time. The initial notions included the presence of mercury (thimerosal) in the vaccines and the vilification of the MMR vaccine itself. Both of these notions have been debunked. The new themes include “too many, too soon” and the presence of other toxins in the vaccines.

 

In my previous post, The Illusion of Cause – Vaccines and Autism, I addressed the innate human propensity to draw causal relationships between vaccines and Autism. I noted that despite the removal of thimerosal from routine childhood vaccines, the incidence of Autism continues to rise. And I discussed the fact that thimerosal contains ethyl-mercury, which poses far less risk than the more dangerous, fat-soluble methyl-mercury. Eating a six-ounce chunk of tuna exposes one to 8959 micrograms of methyl-mercury, while the maximum cumulative exposure to mercury through the first six months of life (before the removal of thimerosal) was around 187.5 micrograms of ethyl-mercury (Crislip, 2010). The research has been clear: there is no plausible association between mercury toxicity, or even other heavy metal exposure, and Autism (Science in Autism Treatment, 2009). In particular, a study published in 2007 in Research in Autism Spectrum Disorders by Williams, Hersh, Allard, and Sears found no significant difference in the levels of mercury detected in hair samples of children diagnosed with Autism versus their unafflicted siblings. Regardless, thimerosal has been removed from routine childhood vaccines (except some influenza and some tetanus multi-dose vials) not due to safety concerns but to reduce the non-compliance associated with unwarranted fear. Thimerosal is a non-issue.

 

With regard to the MMR vaccine – I previously discussed how Andrew Wakefield misrepresented his personal conflicts of interest and intentionally manipulated the data to support his contention that MMR causes Autism. Study after study, many of them large-scale epidemiological studies, failed to replicate Wakefield’s findings. What is even more interesting is that some studies suggest the MMR vaccine is actually associated with decreased incidences of Autism in recipients versus non-recipients (Mrozek-Budzyn, Kieltyka, & Majewska, 2010). This is likely background noise and may not pan out in other studies, but… In Jackson County, Oregon, 15% of the children have not been vaccinated. Within Jackson County, in the city of Ashland, 25% of the children are not vaccinated. The rate of educational diagnoses of Autism in Ashland is 1.1% – the highest rate in the county and above the state average (Crislip, 2010). So the population with the lowest rate of vaccination also has the highest rate of Autism diagnoses. One has to be careful not to fall victim to the illusion of cause with these data.

 

“Too many, too soon” is the new mantra railed by the anti-vaccine set, but this argument is easily assuaged by gaining a better understanding of the microbiome. Mark Crislip, MD, an infectious disease specialist, effectively puts this issue into perspective in his podcast The Vaccine Pseudo Controversy. Crislip notes that for every human cell in the human body there are 10 bacterial cells along for the ride. We are essentially a host organism for some 100 trillion bacteria representing several thousand species. Although a human baby is born free of such organisms, by the end of the first year of life a typical baby has been exposed to perhaps billions of them. Many of these bacteria are essential for our survival, but many are in fact pathogens kept at bay by the immune system. Extremely conservative estimates suggest that, on average, a child is exposed to at least one pathogen each day just as a function of living. That being said, the vaccine schedule represents 0.694% of the antigen exposure of a six-year-old. As Dr. Crislip is fond of saying, vaccines constitute a mere drop in the bucket relative to the total number of pathogens endured just as a function of living day to day. Seriously, have you ever been around a baby? They crawl around on the ground and mouth everything they can get their hands on. A drop in the bucket indeed. Dr. Crislip notes that “the only thing a delay in vaccination does is increase the time the child is vulnerable to infections” and, I would add, weaken herd immunity. As for evidence, consider a recent study published in Pediatrics by Michael J. Smith, MD, and Charles R. Woods, MD, entitled On-Time Vaccine Receipt in the First Year Does Not Adversely Affect Neuropsychological Outcomes. An excerpt of the abstract reads as follows:

 

OBJECTIVES: To determine whether children who received recommended vaccines on time during the first year of life had different neuropsychological outcomes at 7 to 10 years of age as compared with children with delayed receipt or nonreceipt of these vaccines.
METHODS: Publicly available data, including age at vaccination, from a previous Vaccine Safety Datalink study of thimerosal exposure and 42 neuropsychological outcomes were analyzed. Secondary analyses were performed on a subset of children with the highest and lowest vaccine exposures during the first 7 months of life.
RESULTS: Timely vaccination was associated with better performance on 12 outcomes in univariate testing and remained associated with better performance for 2 outcomes in multivariable analyses. No statistically significant differences favored delayed receipt. In secondary analyses, children with the greatest vaccine exposure during the first 7 months of life performed better than children with the least vaccine exposure on 15 outcomes in univariate testing; these differences did not persist in multivariable analyses. No statistically significant differences favored the less vaccinated children.
CONCLUSIONS: Timely vaccination during infancy has no adverse effect on neuropsychological outcomes 7 to 10 years later. These data may reassure parents who are concerned that children receive too many vaccines too soon. Pediatrics 2010;125:1134–1141

 

And then there is the contention that there are toxins in the vaccines. Well, this is undeniably true. The Centers for Disease Control and Prevention (CDC) makes known the additives for each vaccine. The list may initially seem foreboding, but the CDC and Dr. Crislip, as well as others consulted who possess far more expertise than I, attempt to assure us that these additives perform important functions and pose no notable risk. The CDC notes: “Chemicals commonly used in the production of vaccines include a suspending fluid (sterile water, saline, or fluids containing protein); preservatives and stabilizers (for example, albumin, phenols, and glycine); and adjuvants or enhancers that help improve the vaccine’s effectiveness. Vaccines also may contain very small amounts of the culture material used to grow the virus or bacteria used in the vaccine, such as chicken egg protein.”

 

The CDC notes that common substances found in vaccines include:

  • Aluminum gels or salts of aluminum, which are added as adjuvants to help the vaccine stimulate a better response. Adjuvants help promote an earlier, more potent, and more persistent immune response to the vaccine.
  • Formaldehyde, which is used to inactivate bacterial products for toxoid vaccines (vaccines that use an inactivated bacterial toxin to produce immunity). It is also used to kill unwanted viruses and bacteria that might contaminate the vaccine during production.
  • Monosodium glutamate (MSG) and 2-phenoxyethanol, which are used as stabilizers in a few vaccines to help the vaccine remain unchanged when exposed to heat, light, acidity, or humidity.
  • Thimerosal, a mercury-containing preservative that is added to vials of vaccine that contain more than one dose to prevent contamination and growth of potentially harmful bacteria.

 

A little more knowledge is helpful. Did you know, for example, that “the average person produces about 1.5 ounces of formaldehyde each day as a part of normal metabolic processes[?]” (Crislip, 2010). It’s true. As a result, there is a low steady state of formaldehyde in human blood, at a concentration of 1 to 2 parts per million. The concentration of this additive in vaccines is actually lower than the level naturally occurring in your blood. Dr. Crislip notes that by far the deadliest additive in vaccines is dihydrogen monoxide, which is responsible for nine deaths a day in the US. Otherwise, if you accept the dose-response effect of chemicals and consider the microscopic doses of the additives in vaccines, you can rest assured that vaccines are safe and serve a very important life-saving role in our civilization. The bottom line comes down to belief systems. If you believe something so fully that you are unwilling to cast a skeptical eye on it, and to reject it if the evidence does not support it, then you are rejecting reality in favor of unsubstantiated ideology. Always be wary of unsubstantiated ideology! Oh, and the dihydrogen monoxide – that’s water (H2O).

 

References

 

Association for Science in Autism Treatment. (2009). Autism & Vaccines: The Evidence to Date. Vol. 6, No. 1. http://www.asatonline.org/pdf/summer2009.pdf

 

Centers for Disease Control and Prevention. Basics and Common Questions: Ingredients of Vaccines – Fact Sheet. http://www.cdc.gov/vaccines/vac-gen/additives.htm

 

Crislip, M. (2010). The Vaccine Pseudo Controversy. Quackcast # 45. http://www.pusware.com/quackcast/quackcast45.mp3

 

Mrozek-Budzyn, D., Kieltyka, A., & Majewska, R. (2010). Lack of Association Between Measles-Mumps-Rubella Vaccination and Autism in Children: A Case-Control Study. Pediatric Infectious Disease Journal, 29(5), 397-400.

 

Roush, S. W., Murphy, T. V., & the Vaccine-Preventable Disease Table Working Group. (2007). Historical Comparisons of Morbidity and Mortality for Vaccine-Preventable Diseases in the United States. JAMA, 298(18), 2155-2163. doi:10.1001/jama.298.18.2155 http://jama.ama-assn.org/cgi/content/full/298/18/2155

 

Sagan, C. (1996). The Demon-Haunted World. The Random House Publishing Group: New York.

 

Smith, M. J., & Woods, C. R. (2010). On-time Vaccine Receipt in the First Year Does Not Adversely Affect Neuropsychological Outcomes. Pediatrics, published online May 24, 2010. doi:10.1542/peds.2009-2489 http://pediatrics.aappublications.org/cgi/content/abstract/peds.2009-2489v1

 

Williams, P. G., Hersh, J. H., Allard, A., & Sears, L. L. (2007). A controlled study of mercury levels in hair samples of children with autism as compared to their typically developing siblings. Research in Autism Spectrum Disorders, 2(1), 170-175.

 

World Health Organization. (2004). The 10 leading causes of death by broad income group. Fact Sheet No. 310. http://www.who.int/mediacentre/factsheets/fs310/en/index.html


There are many well-intentioned folks out there who believe that childhood vaccinations cause Autism. Last week I covered the origins of this belief system, as well as its subsequent debunking, in Vaccines and Autism. Despite the conclusive data that clearly establishes no causal link between vaccines and Autism, the belief lives on. Why is this? Why do smart people fall prey to such illusions? Chabris and Simons contend in their book, The Invisible Gorilla, that we fall prey to such myths because of the Illusion of Cause. Michael Shermer (2000), in his book How We Believe, eloquently describes our brains as a Belief Engine. Underlying this apt metaphor is the notion that “Humans evolved to be skilled pattern seeking creatures. Those who were best at finding patterns (standing upwind of game animals is bad for the hunt, cow manure is good for the crops) left behind the most offspring. We are their descendants.” (Shermer, p. 38). Chabris and Simons note that this refined ability “serves us well, enabling us to draw conclusions in seconds (or milliseconds) that would take minutes or hours if we had to rely on laborious logical calculations.” (p. 154). However, it is important to understand that we are all prone to drawing erroneous connections between stimuli in the environment and notable outcomes. Shermer further contends that “The problem in seeking and finding patterns is knowing which ones are meaningful and which ones are not.”

 

From an evolutionary perspective, we have thrived, in part, as a result of our tendency to infer cause or agency regardless of the reality of the threat. For example, those who assumed that rustling in the bushes was a tiger (when it was just wind) were more likely to take precautions, and thus less likely, in general, to succumb to predation. Those inclined to ignore such stimuli were more likely to get eaten when the rustling was in fact a hungry predator. Clearly, from a survival perspective, it is best to infer agency and run away rather than become lunch meat. The problem Shermer refers to is that this system subsequently inclines us toward mystical and superstitious beliefs: giving agency to unworthy stimuli or drawing causal connections that do not exist. Dr. Steven Novella, a neurologist, notes in his blog post entitled Hyperactive Agency Detection that humans vary in the degree to which they assign agency. Some of us have Hyperactive Agency Detection Devices (HADD) and, as such, are more prone to superstitious, conspiratorial, and mystical thinking. It is important to understand, as Shermer (2000) makes clear:

 

“The Belief Engine is real. It is normal. It is in all of us. Stuart Vyse [a research psychologist] shows for example, that superstition is not a form of psychopathology or abnormal behavior; it is not limited to traditional cultures; it is not restricted to race, religion, or nationality; nor is it only a product of people of low intelligence or lacking education. …all humans possess it because it is part of our nature, built into our neuronal mainframe.” (p. 47).

 

We are all inclined to detect patterns where there are none. Shermer refers to this tendency as patternicity. It is also called pareidolia. I’ve previously discussed this innate tendency, noting that “Our brains do not tolerate vague or obscure stimuli very well. We have an innate tendency to perceive clear and distinct images within such extemporaneous stimuli.” It is precisely what leads us to see familiar and improbable shapes in puffy cumulus clouds or the Virgin Mary in a toasted cheese sandwich. Although this tendency can be fun, it can also lead to faulty and sometimes dangerous conclusions. And what is even worse is that when we hold a belief, we are even more prone to perceive patterns that are consistent with or confirm that belief. We are all prone to Confirmation Bias – an inclination to take in, and accept as true, information that supports our belief systems, and to miss, ignore, or discount information that runs contrary to our beliefs.

 

Patternicity and confirmation bias are not the only factors that contribute to the illusion of cause. There are at least two other equally salient intuitive inclinations that lead us astray. First, we tend to infer causation based on correlation. Second, the appeal of chronology, or the coincidence of timing, also leads us toward drawing such causal connections (Chabris & Simons, 2010).

 

A fundamental rule in science and statistics is that correlation does not imply causation. Just because two events occur in close temporal proximity does not mean that one leads to the other. Chabris and Simons note that this rule is in place because our brains automatically – intuitively – draw causal associations without any rational thought. We know that causation leads to correlation, but it is erroneous to assume that the opposite is true. Just because A and B occur together does not mean A causes B or vice versa. There may be a third factor, C, that is responsible for both A and B. Chabris and Simons use ice cream consumption and drownings as an example. There is a sizable positive correlation between these two variables (as ice cream consumption goes up, so do incidences of drowning), but it would be silly to assume that ice cream consumption causes drowning, or that increases in the number of drownings cause increases in ice cream consumption. Obviously a third factor, summer heat, leads to both more ice cream consumption and more swimming. With more swimming there are more incidents of drowning.
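
For readers who like to see this mechanism rather than just read about it, here is a small Python simulation of a hidden common cause. All of the numbers (temperatures, sales, drowning rates) are invented for illustration; the point is only that two variables driven by a third will correlate strongly with no causal link between them:

```python
import random

random.seed(1)

# Simulate a hidden common cause: daily summer heat drives both
# ice cream sales and swimming (and hence drownings). Neither
# variable causes the other, yet they correlate strongly.
heat = [random.uniform(0, 35) for _ in range(365)]            # daily temperature (C)
ice_cream = [10 + 3 * t + random.gauss(0, 10) for t in heat]  # daily sales
drownings = [0.05 * t + random.gauss(0, 0.5) for t in heat]   # daily incidents

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(pearson(ice_cream, drownings))  # strongly positive despite no causal link
```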

 

Likewise, with vaccines and Autism: although there may be a correlation between the two (increases in the number of children vaccinated and increases in the number of Autism diagnoses), it is incidental, a simply coincidental relationship. But given our proclivity to draw inferences based on correlation, it is easy to see why people would be misled by this relationship.

 

Add to this the chronology of the provision of the MMR vaccine (recommended between 12 and 18 months) and the typical time at which the most prevalent symptoms of Autism become evident (18-24 months), and people are bound to infer causation. Given that millions of children are vaccinated each year, there are bound to be examples of tight chronology.
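
A quick back-of-the-envelope calculation, sketched in Python below, shows why such coincidences are inevitable. Every number here is an assumption chosen only for illustration (a rough US birth cohort, an assumed 1% prevalence, and an arbitrary coincidence window), not data from any of the cited studies:

```python
# How many chance "vaccine, then regression" stories should we expect
# even if there is zero causal link? All inputs are illustrative assumptions.
births_per_year = 4_000_000   # approximate size of a US birth cohort
autism_prevalence = 0.01      # assumed 1% prevalence, for illustration
window_weeks = 4              # regression noticed within 4 weeks of a shot
onset_range_weeks = 26        # onset spread over roughly six months

cases = births_per_year * autism_prevalence
coincidences = cases * (window_weeks / onset_range_weeks)
print(round(coincidences))  # thousands of compelling anecdotes by chance alone
```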

 

So at work here are hyperactive agency detection (or overzealous patternicity), an inherent disposition to infer causality from correlation, and a propensity to “interpret events that happened earlier as the causes of events that happened or appeared to happen later” (Chabris & Simons, 2010, p. 184). Additionally, you have a doctor like Andrew Wakefield misrepresenting data in such a way as to solidify plausibility, and celebrities like Jenny McCarthy using powerful anecdotes to convince others of the perceived link. And anecdotes are powerful indeed. “[W]e naturally generalize from one example to the population as a whole, and our memories for such inferences are inherently sticky. Individual examples lodge in our minds, but statistics and averages do not. And it makes sense that anecdotes are compelling to us. Our brains evolved under conditions in which the only evidence available to us was what we experienced ourselves and what we heard from trusted others. Our ancestors lacked access to huge data sets, statistics, and experimental methods. By necessity, we learned from specific examples…” (Chabris & Simons, 2010, pp. 177-178). When an emotional mother (Jenny McCarthy) is given a very popular stage (The Oprah Winfrey Show) and tells a compelling story, people buy it – intuitively – regardless of the veracity of the story. And when we empathize with others, particularly those in pain, we tend to become even less critical of the message conveyed (Chabris & Simons, 2010). These authors add that “Even in the face of overwhelming scientific evidence and statistics culled from studies of hundreds of thousands of people, that one personalized case carries undue influence” (p. 178).

 

Although the efficacy of science in answering questions like the vaccine-Autism question is unquestionable, it appears that many people are incapable of accepting the results of scientific inquiry (Chabris & Simons, 2010). Acceptance necessitates the arduous application of reason and the rejection of the influences rendered by the intuitive portion of our brain. This is harder than one might think. Again, it comes down to evolution. Although the ability to infer cause is a relatively recent development, we hominids are actually pretty good at it. And perhaps, in cases such as this one, we are too proficient for our own good (Chabris & Simons, 2010).

 

References

 

Centers for Disease Control and Prevention. (2009). Recommended Immunization Schedule for Persons Aged 0 Through 6 Years. http://www.cdc.gov/vaccines/recs/schedules/downloads/child/2009/09_0-6yrs_schedule_pr.pdf

 

Chabris, C. F., & Simons, D. J. (2010). The Invisible Gorilla. Random House: New York.

 

Novella, S. (2010). Hyperactive Agency Detection. NeuroLogica Blog. http://www.theness.com/neurologicablog/?p=1762

 

Shermer, M. (2000). How We Believe. W.H. Freeman / Henry Holt and Company: New York.


Vaccines and Autism

13 August 2010

It is hard to imagine anything more precious than one’s newborn child. Part of the joy of raising a child is the corresponding hope one has for the future. Don’t we all wish for our children a life less fraught with the angst and struggles we ourselves endured? One of the less pleasant aspects of my job has the effect, at least temporarily, of robbing parents of that hope. This erosion occurs in a parent’s mind and heart as a consequence of a diagnosis I often have to provide. I am a psychologist employed, in part, to provide diagnostic evaluations of preschool-age children suspected of having Autism. My intention is never to crush hope; instead, it is to get the child on the right therapeutic path as early as possible in order to sustain as much hope as possible. However, uttering the word AUTISM in reference to one’s child constitutes a serious and devastating emotional blow.

 

Many parents come to my office very aware of their child’s challenges and the subsequent implications. They love their child, accept him as he is, and just want to do whatever they can to make his life better. Others come still steeped in hope that their child’s challenges are just a phase, or believing that she is just fine. Regardless, most of them report that they suspected difficulties very early in the child’s development. For example, many note a lack of smiles, chronic agitation, and difficulty soothing their child. Some children are not calmed by being held and may even resist it. Other children I see develop quite typically. They smile, giggle, rejoice at being held, coo and babble, and ultimately start to use a few words with communicative intent. The parents of this latter and rather rare subset then watch in dismay as their child withdraws, often losing both functional communication and interest in other children.

 

The timing of this developmental backslide most often occurs at around 18 months of age. This regression happens to coincide with the recommended timing of the provision of the Measles-Mumps-Rubella (MMR) vaccine. This temporal chronology is important, as it has led, in part, to a belief that the vaccine itself is responsible for the development of Autism. What these parents must experience at this time, I can only imagine, is a horrible combination of confusion and grief. They have had their hopes encouraged and reinforced, only to have them dashed. And it is human nature, under such circumstances, to look for a direct cause. It makes perfect sense that parents would, given the chronology of events in some cases, suspect the MMR vaccine as the cause of their child’s regression.

 

During my occasional community talks on Autism, I am often asked about the alleged connection between vaccines and Autism. The coincidental temporal relationship between the provision of the MMR vaccine and this developmental decay leads to what Chabris and Simons, in The Invisible Gorilla, refer to as the Illusion of Cause. Chabris and Simons discuss how “chronologies or mere sequences of happenings” lead to the inference “that earlier events must have caused the later ones” (2010, p. 165). By default, as a result of evolution, our brains automatically infer causal explanations based on temporal associations (Chabris & Simons, 2010).

 

At nearly every talk I give, there is someone in the audience who is convinced that their child (or a relative) is a victim of the MMR vaccine. Their compelling anecdotes are very difficult to refute or discuss. I find that the application of reason, or data, or both, misses the mark and comes off as cold and insensitive.

 

For such causal beliefs to endure and spread, they often need confirmation of the effect by an “expert.” This is where the story of Dr. Andrew Wakefield comes into play. Wakefield, a GI surgeon from the UK, published a paper in the prestigious UK medical journal The Lancet alleging a relationship between the MMR vaccine and the development of Autism. His “expert” opinion offered legitimacy to already-brewing suspicions backed by the perceived correlates of increases in both vaccination and Autism rates, as well as the apparent chronology between the timing of the vaccines and the onset of Autism. Wakefield provided credibility and sufficient plausibility, and as a result, the news of the alleged relationship gained traction.

 

But hold on! There were major flaws in Wakefield’s study that were not initially detected by The Lancet’s peer review panel. First of all, Wakefield was hired and funded by a personal injury attorney who commissioned him to prove that the MMR vaccine had harmed his clients (caused Autism). His study was not designed to test a hypothesis; it was carried out with the specific objective of positively establishing a link between Autism and provision of the MMR vaccine. From the outset the study was a ruse, disguised as science.

 

Just this year (2010), 12 years after its initial publication, The Lancet retracted Wakefield’s infamous study, and Dr. Wakefield has been stripped of his privilege to practice medicine in the UK. Problems, however, surfaced years earlier: as early as 2004, 10 of 13 co-authors retracted their support of a causal link. In 2005 it was alleged that Wakefield had fabricated data – in fact, some of the afflicted children used to establish the causal link had never actually received the MMR vaccine!

 

Since the initial publication of this study, hundreds of millions of dollars have been spent investigating the purported relationship between vaccines and Autism. Despite extensive large-scale epidemiological studies, there have been no replications of Wakefield’s findings. Children who had not been vaccinated developed Autism at the same rate as those who had received the MMR. There is no relationship between the MMR vaccine and the development of Autism. As a result of Wakefield’s greed, those hundreds of millions of dollars were wasted – dollars that could have been devoted to more legitimate pursuits. And that is not the worst of it; I will get to the real costs in a bit.

 

Another aspect of the history of this controversy is associated with the use of thimerosal as a preservative in vaccines. This notion, which has also been debunked, gained plausibility because thimerosal contains mercury, a known neurotoxin. You may ask: “Why on earth would a neurotoxin be used in vaccines?” Researchers have clearly established that thimerosal poses no credible threat to humans at the dosage levels used in vaccines. However, given the perceived threat, thimerosal is no longer used as a preservative in routine childhood vaccinations. In fact, the last doses using this preservative were produced in 1999 and expired in 2001. Regardless, the prevalence of Autism continues to rise.

 

It is important to understand that mercury can and does adversely affect neurological development and functioning. However, long-term exposure at substantially higher doses than those present in thimerosal is necessary for such impact. The mercury in thimerosal is ethyl-mercury, which is not fat-soluble. Unlike fat-soluble methyl-mercury (industrial mercury), ethyl-mercury is flushed from the body very quickly. Methyl-mercury can be readily absorbed into fatty brain tissue and render its damage through protracted contact. Methyl-mercury works its way into the food chain and poses a hazard to us if we eat too much fish (particularly fish at the high end of the food chain). In reality, one is at more risk from eating too much seafood (shark and tuna) than from getting an injection of a vaccine preserved with thimerosal. Yet there does not seem to be a movement to implicate seafood as the cause of Autism.

 

Even though the relationship between vaccines and Autism has been thoroughly debunked, there is a movement afoot, steeped in conspiratorial thinking, that alleges that “Big Pharma” and the “Government” are colluding to deceive the people and that elaborately fabricated data are used to cover up a relationship. This belief lives on. How can this be so? Even intelligent and well-educated people I know are avoiding important childhood immunizations based on the fear and misinformation spread by these well-intentioned people.

 

In 2003, in the UK, the MMR vaccination rate had fallen below 79%, whereas a 95% rate is necessary to maintain herd immunity. Currently, vaccination rates are dropping in the US due to the efforts of celebrities like Jenny McCarthy, who purports that her son’s Autism was caused by vaccines. McCarthy campaigns fiercely against childhood immunizations, spurred on by the likes of Oprah Winfrey. Even folks like John McCain, Joe Lieberman, and Robert F. Kennedy, Jr. have spread such misinformation. Continuing to contend that the MMR vaccine is the culprit, Wakefield has moved to the US and has risen to martyr status among the anti-vaccine folk. You need to know that just months before he published his infamous paper, Wakefield received a patent on a measles vaccine that, he alleges, “cures” Autism. He has much to gain financially from scaring people away from the current safe and effective MMR vaccine.
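
 

(A quick aside on where that 95% figure comes from. This is my own back-of-the-envelope sketch using the standard epidemiological threshold formula and a commonly cited contagiousness estimate for measles, not a figure taken from the sources below. The herd immunity threshold is p = 1 − 1/R0, where R0 is the average number of people a single infected person would go on to infect in a fully susceptible population. Measles is extraordinarily contagious, with an R0 commonly estimated at 12 to 18. Plugging in the high end gives p = 1 − 1/18 ≈ 0.94 – that is, roughly 94–95% of the population must be immune before each new case infects, on average, fewer than one other person and outbreaks fizzle out on their own.)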

 

It amazes me that people do not automatically dismiss this alleged vaccine-Autism link. Wakefield’s conflict of interest and discredited research practices alone call into question anything he has to say. The mountains of epidemiological evidence also favor rejecting a causal relationship between the MMR vaccine and Autism. However, the power of anecdotes and misguided beliefs places millions of children in harm’s way.

 

Imagine yourself as the parent of a child who cannot get the MMR vaccine because of a serious medical condition (e.g., cancer). Such vulnerable children, of whom there are millions worldwide, depend on herd immunity for their very survival. Now imagine that your child is inadvertently exposed to measles by coming into contact with a child who wasn’t vaccinated (because of misguided parental fear). Because of your child’s compromised immunity, she contracts measles and gets seriously ill or dies. Such a scenario, although improbable, is not impossible. It is more likely today largely due to the diminished herd immunity caused by misinformation. Whooping Cough (Pertussis) is likewise causing serious outbreaks (and one documented death) in unvaccinated clusters thanks to the anti-vaccine folk. This myth persists, in part, because of the Illusion of Cause, and the consequences have become deadly. Next week I will delve into the illusion that sustains this erroneous and dangerous belief system.

 

References:

 

Association for Science in Autism Treatment. (2009). Autism & Vaccines: The Evidence to Date, 6(1). http://www.asatonline.org/pdf/summer2009.pdf

 

Centers for Disease Control and Prevention. Autism Spectrum Disorders: Data & Statistics. http://www.cdc.gov/ncbddd/autism/data.html

 

Chabris, C. F., & Simons, D. J. (2010). The Invisible Gorilla. New York: Random House.

 

Plait, P. (2009). The Australian antivax movement takes its toll. Bad Astronomy Blog. http://blogs.discovermagazine.com/badastronomy/2009/04/26/the-australian-antivax-movement-takes-its-toll/


I find myself in an untenable situation. I have plenty to write about but I am finding that the choices I am making right now, in the splendor of summer, give me limited time and energy to write. I’ve decided to take a short hiatus.

 

Over the last seven months my writing has been spurred on by relentless curiosity about belief systems that are held despite mountains of overwhelming evidence to the contrary. This cognitive conservatism absolutely befuddles me. And I am further driven to understand why ideology carries such overwhelming power over people, and how it drives them to attack evidence, or science in general. In a similar vein, I struggle with politics. The efforts made by the United States on the world’s stage seem to me a desperate attempt to slay the Hydra by means of decapitation. People close to me, whom I love and deeply respect, look at this war, and even the environment, in vastly different ways than I do.

 

Looking back, I have learned a great deal about the thinking processes that drive these different world views. Essentially we have what Michael Shermer calls a Belief Engine for a brain. We are hard-wired to believe, and we make copious errors that incline us to believe – even silly things – regardless of evidence. We evolved successfully over hundreds of thousands of years in a world devoid of statistics and analysis, thriving all the while on snap judgments. Evolution itself, as a process, has inhibited our ability to accept its veracity. Stepping away from the belief engine demands a level of analysis that is foreign and often unpalatable. It is hard to be a skeptic, yet oh so easy to go with our hard-wired intuitive thinking. If you are new to my blog, look back at entries that explore erroneous thinking, rational thought, the adaptive unconscious, memory, morality, and even religion.

 

Looking forward, I plan on delving further into our enigmatic Belief Engine. I want to further explore the errors of intuition, specifically the illusion of cause and implicit associations, as well as Jonathan Haidt’s work on political affiliation. Later I hope to switch gears and delve into the unique attributes of our planet that make it hospitable for complex life.


In psychology there are some pretty famous studies that have penetrated popular culture. Many folks are at least familiar with Skinner’s rat box, Pavlov’s salivating dogs, Milgram’s obedience studies, Bandura’s Bobo dolls, and Harlow’s rhesus monkeys reared by wire-frame and terry-cloth surrogate mothers. In recent history, perhaps the best-known study pertains to inattentional blindness. If you have never heard of or seen a video of six college students, three in black shirts and three in white shirts, bouncing a couple of basketballs back and forth, see the following video before you proceed.

 

[Video: the “Invisible Gorilla” selective attention test]

So, of course, I am referring to Daniel Simons’ Invisible Gorilla study. Just about everyone I know has seen this video, and I don’t recall any of them telling me that they saw the gorilla. I didn’t, and I was absolutely flabbergasted – because I tend to be a pretty vigilant guy. This video is a graphic illustration of what Chabris and Simons (2010) refer to as the Illusion of Attention: about 50% of those who watch the video while counting passes among the white-shirted players miss the gorilla.

 

This particular illusion concerns me because I spend a fair amount of time riding a bicycle on the roads of Western New York. So why should I – or anyone who rides a bicycle or motorcycle, or anyone who drives while texting or talking on a cell phone – be concerned?

 

The cold hard truth is that we may completely miss events or stimuli that we do not expect to see. If you don’t expect to see, and therefore fail to look for, bicycles and motorcycles, you may look right at them but fail to see them. LOOKING IS NOT SEEING, just as hearing is not listening. The hearing/listening analogy is dead on. How often have you been caught hearing someone but not listening to what was actually being said? In their book The Invisible Gorilla, Chabris and Simons discuss a study conducted by Daniel Memmert of Heidelberg University, which demonstrated (using an eye-tracker) that virtually everyone who missed the gorilla looked directly at it at some point in the video (often for a full second). Bikers are the invisible gorillas of the roadways.

 

And as for drivers, if you are distracted by a cell phone conversation or by texting, you are less likely to see unexpected events (e.g., bicycles, motorcycles, pedestrians, wildlife).

 

Most drivers who text and talk on cell phones do not have problems. In fact, most driving is uneventful – as a result, most people get away with these behaviors. It is when something unexpected happens that phone-using drivers struggle to see and respond fluently. You are under the same illusion as everybody else who has not been in an accident. Everyone believes, until they hit or kill somebody, that they are proficient drivers even while texting or talking on the phone. And by the way, hands-free headsets make no difference. Driving while talking on a cell phone impairs you as much as alcohol does.

 

Think about driving down a road, not seeing, and subsequently hitting a young child on a bike. Think about having to live with killing a middle-aged couple, with three kids in college, who were lawfully riding down the road on a tandem bicycle. You hit the invisible gorilla. Live with that!

 

Daniel Simons, in a recently published study, also suggests that even if you are expecting an unexpected event, it is likely that you will miss other unanticipated events. Check out The Monkey Business Illusion video even if you have seen the invisible gorilla video. Test yourself.

 

[Video: The Monkey Business Illusion]

I have long known that I am at risk while riding my bike on the road, and I have recently taken to wearing bright hi-vis attire when I ride. Doing so is completely inconsistent with my style, but I have done so in an effort to be safer. I was surprised to learn that research shows this will increase your visibility to those who are looking for you – but that it will likely make no difference at all to inattentionally blind drivers. For drivers who do not expect to see cyclists, hi-vis clothing will not likely increase the likelihood that you will be seen. Head and tail lights work on a similar level: they do increase visibility, but only for those looking for such strange sights. The best way to increase one’s safety while riding is to look like a car.

 

It is also important to note that riding in areas where there are more bikers helps too. Chabris and Simons (2010) noted a report by Peter Jacobson, a public health consultant in California, who analyzed data on accidents involving automobiles striking pedestrians or cyclists. He found that in cities with more walkers and cyclists, any given walker or cyclist was actually less likely to be struck. More folks walking or riding bikes seems to raise drivers’ expectation of seeing such individuals – thus making one less at risk of being victimized by inattentional blindness. It was further noted that drivers who also ride bikes may actually be more aware – if only more people would get out of their cars and get back on bicycles.

 

The bottom line is that our intuition about our attention is problematic. Intuitively we believe that we attend to, and see, what is right before us. Research and real-world data show us that this is not the case. At the very least, when driving, we need to be aware of this erroneous assumption and work diligently to avoid distractions like talking on the phone or texting. As cyclists (motor-powered or not), we must anticipate that we won’t be seen and behave accordingly. Although hi-vis clothing and lights may not make you visible to some drivers, they will to those who are looking out for you.

 

Chabris and Simons contend that this illusion is a byproduct of modernity and the fast-paced, highly distracting world we now live in. We evolved for millions of years, by process of natural selection, in a middle-sized, slow-paced world. Traveling faster than a few miles an hour is a relatively new development for our species. Today we travel in motor vehicles at breakneck speeds. On top of that, we distract ourselves with cell phones, Blackberries, iPhones, iPods, and GPS units. Although the consequences of these factors can be grave, in most cases we squeak by – which is a double-edged sword because it essentially reinforces both the illusion and the behavior.

 

References:

 

Chabris, C. F., & Simons, D. J. (2010). The Invisible Gorilla. New York: Random House.

 

Simons, D. J. (2010). Monkeying around with the gorillas in our midst: Familiarity with an inattentional-blindness task does not improve the detection of unexpected events. i-Perception, 1(1), 3–6.


Imagine yourself walking down a familiar street, approaching a stranger who is obviously lost, staring hopelessly at a map. As you saunter by you make eye contact and offer a look of willingness to help. He asks you for directions. As you begin to offer your advice, you are interrupted by a construction crew carrying a large door. They walk right between you and the stranger. Now imagine that, while the construction crew visually separated you from the stranger, a new and different person covertly took on the same lost role. This new stranger is wearing different clothes, is three inches taller, and has a different build and different vocal qualities. Do you think you would notice?

 

Chabris and Simons (2010), in The Invisible Gorilla, share the results of a study carried out by Dan Simons and a colleague in which they tested whether people would notice such changes in a scenario very much like the one I just described. When the scenario was described to undergraduates, 95% believed that they would certainly notice such a change (as is likely the case for you as well). Yet when the experiment was carried out in the real world, nearly 50% of the participants did not notice the switch!

 

This startling result is indicative of change blindness, defined by Chabris and Simons (2010) as the failure to notice changes between what was in view moments before and what is in view currently. Essentially, we tend not to compare the current scene with the one just before it, and so we are “blind” in many cases to pretty obvious changes. What is equally salient is that we are unaware of this blindness. If you are like most people, you said “No way I’d miss that!” Yet it is likely that about half of you would miss such changes.

 

Unconvinced? So were a group of Harvard undergraduates who had just attended a lecture covering the above “door study” and change blindness. After the lecture, students were recruited to participate in further research. Interested students were directed to a different floor, where they were greeted by an experimenter behind a counter. As the recruits proceeded to review and complete the necessary paperwork, the experimenter who had greeted and instructed them ducked down behind the counter, presumably to file some papers, only to depart as a new and different experimenter took over the role. Even after being primed with the knowledge of change blindness, not one of the students noticed the swap! This was true even for some of the students who, just moments before, had boldly stated that they would notice such a change. We are in fact largely blind to our change blindness, no matter how confident we are in our own vigilance.

 

These results, contend Chabris and Simons, constitute conclusive evidence for the illusion of memory: the disconnect between how our memory works and how we think it works.

 

Most of us are all too aware of the failings of our short-term memory. We often forget where we put the car keys, cell phone, or sunglasses. These authors note that we are generally pretty accurate when it comes to knowing the limits of this type of memory. License plates and phone numbers have only seven digits because most of us can hold only about that much data in short-term memory. However, when it comes to understanding the limits of our long-term memory, we hold entirely unrealistic and illusory expectations.

In a national survey of fifteen hundred people [Chabris and Simons] commissioned in 2009, we included several questions designed to probe how people think memory works. Nearly half (47%) of the respondents believed that ‘once you have experienced an event and formed a memory of it, that memory doesn’t change.’ An even greater percentage (63%) believed that ‘human memory works like a video camera, accurately recording the events we see and hear so that we can review and inspect them later.’ (Chabris & Simons, 2010, pp. 45–46).

They added:

People who agreed with both statements apparently think that memories of all our experiences are stored permanently in our brains in an immutable form, even if we can’t access them. It is impossible to disprove this belief… but most experts on human memory find it implausible that the brain would devote energy and space to storing every detail of our lives… (p. 46).

So, as it turns out, our memories of even significant life events are quite fallible. Although we perceive such memories as vivid and clear, they are individual constructions based on what we already know, our previous experiences, and other cognitive and emotional associations that we ultimately pair with the event. “These associations help us discern what is important and to recall details about what we’ve seen. They provide ‘retrieval cues’ that make our memories more fluent. In most cases, such cues are helpful. But these associations can also lead us astray, precisely because they lead to an inflated sense of precision of memory.” (Chabris & Simons, 2010, p. 48). In other words, our memories are not exact recordings; they are modified and codified personal replicas that are anything but permanent.

 

I cannot do justice to the impressive and exhaustive detailing that Chabris and Simons provide in The Invisible Gorilla regarding the illusion of memory. Suffice it to say that we give far too much credit to the accuracy of our own long-term memories and hold unrealistic expectations regarding others’ recall. People recall what they expect to remember, and memories are modified over time by malleable belief systems. Memories fade and morph depending on the “motives and goals of the rememberer” (Chabris & Simons, 2010, p. 51).

“Although we believe that our memories contain precise accounts of what we see and hear, in reality these records can be remarkably scanty. What we retrieve often is filled in based on gist, inference, and other influences; it is more like an improvised riff on a familiar melody than a digital recording of an original performance. We mistakenly believe that our memories are accurate and precise, and we cannot readily separate those aspects of our memory that accurately reflect what happened from those that were introduced later.” (Chabris & Simons, 2010, pp. 62–63).

They detail, with riveting stories, continuity errors in movies, source memory errors (is it your memory or mine?), flashbulb memories, and false memories, in a way that really drives home the point that our memories are not to be trusted as faithful depictions of historical fact. This raises the question: Can you trust your memory?

 

The answer: partially, but you must be aware that your memory is not immutable. It is erroneous to assume that your memories are factual, and it is equally fallacious to presume that others’ memories are infallible. Two people witnessing the same event from the same perspective are likely to recall it differently because of their unique personal histories, capabilities, and internal cognitive associations, which shape how they store the bits and pieces of the event into memory.

 

Isn’t it amazing and scary that we give so much credit and power to eyewitness testimony in courts of law? Such power is conferred based on the pervasive and deeply held belief in the accuracy of memory – which you must know by now is an illusion. This is just another example of the illusion of justice in this country.

 

On a more personal level, the next time you and your significant other get into a debate about how some past event went down, you have to know that you are both probably wrong (and right) to some degree. There is your truth, their truth, and the real truth. These can be illustrated as a Venn diagram with three circles that have varying degrees of mutual overlap. We must admit that over time the real truth is likely to become a smaller piece of the story. This necessitates that we get comfortable with the reality that we don’t possess a DVR in our brains, and that we part ways with yet another illusion – that of the importance and power of our uniquely human intuition.

 

Reference:

 

Chabris, C. F., & Simons, D. J. (2010). The Invisible Gorilla. New York: Random House.


Last week, in The Illusion of Punditry, I discussed Philip Tetlock’s work revealing the utter meaninglessness of punditry. It is important to note that although professional pundits, on average, were less accurate than random chance, a few outliers actually performed well above average. Tetlock closely examined the variables associated with the distribution of accuracy scores and discovered that experts were often blinded by their preconceptions, essentially led astray by how they think. To elucidate his point, Tetlock employed Isaiah Berlin’s famous metaphor, The Hedgehog and the Fox. Berlin, a historian, drew inspiration for the title of that essay from the classical Greek poet Archilochus, who wrote: “The fox knows many things, but the hedgehog knows one big thing.”

 

Berlin contended that there are two types of thinkers: hedgehogs and foxes. To make sense of this metaphor, one has to understand a bit about these creatures. A hedgehog is a small spiny mammal that, when attacked, rolls into a ball with its spines protruding outward. This response is its sole defensive maneuver, its “one big thing,” employed at any indication of threat. By extension, Berlin suggested that hedgehog thinkers “… relate everything to a single central vision, one system less or more coherent or articulate, in terms of which they understand, think and feel—a single, universal, organizing principle in terms of which alone all that they are and say has significance…” The cunning fox, by contrast, survives by adapting from moment to moment, staying flexible and employing strategies that suit the situation at hand. Foxes “pursue many ends, often unrelated and even contradictory, … their thought is scattered or diffused, moving on many levels, seizing upon the essence of a vast variety of experiences and objects.”

 

John W. Dean, former presidential counsel to Richard Nixon, used Berlin’s metaphor to classify a number of US presidents as hedgehogs or foxes. In his column he wrote:

“With no fear of contradiction, Barack Obama can be described as a fox and George W. Bush as clearly a hedgehog. It is more difficult than I thought to describe all modern American presidents as either foxes or hedgehogs, but labeling FDR, JFK, and Clinton as foxes and LBJ and Reagan as hedgehogs is not likely to be contested. Less clear is how to categorize Truman, Nixon, Carter and Bush I. But Obama and Bush II are prototypical of these labels.”

 

Tetlock, referring to pundit accuracy scores, wrote:

“Low scorers look like hedgehogs: thinkers who “know one big thing,” aggressively extend the explanatory reach of that one big thing into new domains, display bristly impatience with those who “do not get it,” and express considerable confidence that they are already pretty proficient forecasters, at least in the long term. High scorers look like foxes: thinkers who know many small things (tricks of their trade), are skeptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible “ad hocery” that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess.”

 

Tetlock was careful to point out that there was no correlation between political affiliation and either the hedgehog or the fox classification. What he did note was that the most accurate pundits were foxes, and that the key variable associated with their success was introspection. Those who studied their own decision-making processes, who were open to dealing with dissonance, and who were not blinded by their preconceptions were far more capable of making accurate predictions. Successful pundits were also cautious about their predictions and inclined to take information from a wide variety of sources.

 

Hedgehogs, on the other hand, were prone to certainty and grand “irrefutable” ideas. They tend to boil problems down to simple grand theories or conflicts (e.g., good versus evil, socialism versus capitalism, free markets versus government regulation) and to view these big issues as the driving forces of history. They are prone to oversimplify situations and miss the many and diverse factors that ultimately shape history. They are instead more likely to attribute historical change to single great men with simple great ideas (e.g., Ronald Reagan was responsible for the fall of the USSR, and without his leadership the Cold War might still be raging).

 

So what are you – a hedgehog or a fox? Both thinking approaches have strengths and weaknesses, and more and less appropriate applications. What were Copernicus, da Vinci, Galileo, Newton, Einstein, and Darwin? When do you suppose it is good to be a hedgehog, and when a fox? I suppose it comes down to the task at hand: big unifying issues such as gravity, relativity, evolution, and quantum mechanics may indeed necessitate hedgehog thinking. Here, such single-minded determination is likely essential to persevere. Although, having read Darwin’s On the Origin of Species, I am inclined to think that Darwin was a fox. Da Vinci, too, was likely a fox, considering the vastness of his contributions. And Galileo was similarly a broad thinker. Knowing little of Newton and Einstein, I care not to speculate. It seems to me that, with the specialization of science these days, one must be a hedgehog. Early science history is replete with foxes. I don’t know about you, but I have a romantic notion about the lifestyles of men like Galileo and Darwin, following their curiosities, dabbling hither and yon.

 

References:

Berlin, I. (1953). The Hedgehog and the Fox. The Isaiah Berlin Virtual Library. http://berlin.wolf.ox.ac.uk/published_works/rt/HF.pdf

Chabris, C. F., & Simons, D. J. (2010). The Invisible Gorilla. New York: Random House.

Dean, J. (2009). Barack Obama Is a “Fox,” Not a “Hedgehog,” and Thus More Likely To Get It Right. http://writ.news.findlaw.com/dean/20090724.html

Lehrer, J. (2009). How We Decide. New York: Houghton Mifflin Harcourt.

Menand, L. (2005). Everybody’s an Expert. The New Yorker. http://www.newyorker.com/archive/2005/12/05/051205crbo_books1?printable=true

Tetlock, P.E. (2005). Expert political judgment: How good is it? How can we know? Princeton: Princeton University Press.
