My wife and I recently spent some time in New York City, and one of our traditions is to take in a Broadway show. This time we stepped a bit off-Broadway to see the bawdy but Tony Award-winning Avenue Q. On the surface, this show seems silly, but it actually addresses some important issues. Essentially, it is about the “coming of age” of young adults stepping out into the real world. The staging is interesting in that it employs a mixture of human actors, human puppets, and monster puppets – with all puppeteers fully visible on stage. As is often the case in theater, it required a suspension of disbelief and a letting go of conventional thinking.

 

The play itself satirizes the longstanding PBS children’s show Sesame Street in both format and message. Make no mistake, however: this is not a show for children, or even for folks put off by lewd language or sexual situations. It delves headlong into issues that challenge the teachings of Sesame Street, laying bare the notion that everyone is “special.”

 

I couldn’t help but hearken back to a post I wrote entitled Self Esteem on a Silver Platter, which highlights the cost of telling children they are smart. I wonder if there are similar costs to telling children they are inherently special? Obviously, the writers of Ave. Q had the same question in mind.

 

As Princeton, the play’s protagonist, struggled with the reality of entering the world of work and his internalized notion of his own specialness, I thought about my college-age children and my own experience when I left a small town to attend college. I have to believe that my experience was not unlike Princeton’s and, I’m guessing, is very similar to my children’s as they make the transition from big fish in a small pond to small fish in a big pond. It’s a humbling transition.

 

Some of the other issues confronted by the cast and characters include racism and homophobia. Each of these prejudices is an attitude played out in large part by our intuitive brains. That is not to say that we are powerless over them – we can change these deep-seated attributes through concerted effort and appropriate exposure. But this raises the question: “Where do these prejudices come from?” I believe the consensus is clear: prejudices are learned from, and taught by, the important people around us who model and mold us throughout childhood. It is also important to understand that there seems to be a natural inclination within us to be suspicious of those who are different from us. This tribal tendency to classify outsiders as threats may stem from our ancestral roots, when outsiders were indeed threats to our very survival, and this once-successful propensity has carried on through natural selection. It seems that there is a human inclination to be prejudiced. Compound that inclination with other failings of the human brain (e.g., confirmation bias), minimal exposure to diversity, and influential bigots, and you have a near-certain prejudicial clone. To make matters worse, all you have to do is turn on the TV and watch the news to feed those prejudices. Racism in our culture is not very subtle. But I digress.

 

The point that I am trying to make is that we all have biases, and that they are intuitive to a degree. Next week I am going to explore the Implicit Association Test and its implications, which support the notion that stereotypes and prejudices are indeed deeply rooted in our intuition. If you have not taken the Implicit Association Test, do so, particularly the Race Test. You may be surprised by the results. I know I was. This is, in fact, one of the subplots of Ave. Q – we are all a bit racist, and perhaps a bit homophobic too; although I will argue to my grave that I do not value people differently based on their race, gender, or sexual orientation.

 

Ave. Q also deals with schadenfreude, the pleasure we gain from others’ pain or struggles. This is a curious proclivity, one I hope to gain a better understanding of. As I think back to childhood, I can recall experiencing a strong compulsion to laugh when a friend was injured during our mutual play. I remember knowing that this was somehow wrong and inappropriate; regardless, there was this deep urge to chuckle. Looking back, I know that it was not a rational response – it was intuitive. The reality is that most of us are at least relieved by the misery of others, and we often gain some appreciation that our lives are not so bad after all. The play’s treatment of this very issue normalizes the experience and perhaps explains our societal infatuation with gossip. In my profession, I see real agony in the lives of the families I work with on a daily basis, and thus I find gossip repulsive.

 

One of the major goals of art is to incite thought, and Ave. Q effectively pulled this off. I’d like to say that I have no prejudices, but Ave. Q and the results of my IAT suggest that this may not be absolutely true. In reference to the work of Christopher Chabris and Daniel Simons in their book entitled The Invisible Gorilla, I wonder if perhaps there is an Illusion of an Open Mind? I shall not rest comfortably with this illusion, and I am fully committed to overcoming the failings of my naturally selected and intuitive tendencies. The first step is accepting this reality.


There are many well-intentioned folks out there who believe that childhood vaccinations cause Autism. Last week I covered the origins of this belief system, as well as its subsequent debunking, in Vaccines and Autism. Despite the conclusive data that clearly establishes no causal link between vaccines and Autism, the belief lives on. Why is this? Why do smart people fall prey to such illusions? Chabris and Simons contend in their book, The Invisible Gorilla, that we fall prey to such myths because of the Illusion of Cause. Michael Shermer (2000), in his book How We Believe, eloquently describes our brains as a Belief Engine. Underlying this apt metaphor is the notion that “Humans evolved to be skilled pattern seeking creatures. Those who were best at finding patterns (standing upwind of game animals is bad for the hunt, cow manure is good for the crops) left behind the most offspring. We are their descendants.” (Shermer, p. 38). Chabris and Simons note that this refined ability “serves us well, enabling us to draw conclusions in seconds (or milliseconds) that would take minutes or hours if we had to rely on laborious logical calculations.” (p. 154). However, it is important to understand that we are all prone to drawing erroneous connections between stimuli in the environment and notable outcomes. Shermer further contends that “The problem in seeking and finding patterns is knowing which ones are meaningful and which ones are not.”

 

From an evolutionary perspective, we have thrived in part as a result of our tendency to infer cause or agency regardless of the reality of threat. For example, those who assumed that rustling in the bushes was a tiger (when it was just wind) were more likely to take precautions, and thus less likely, in general, to succumb to predation. Those who were inclined to ignore such stimuli were more likely to get eaten when the rustling was in fact a hungry predator. Clearly, from a survival perspective, it is best to infer agency and run away rather than become lunch meat. The problem Shermer refers to regarding this system is that we are subsequently inclined toward mystical and superstitious beliefs: giving agency to unworthy stimuli or drawing causal connections that do not exist. Dr. Steven Novella, a neurologist, in his blog post entitled Hyperactive Agency Detection, notes that humans vary in the degree to which they assign agency. Some of us have a Hyperactive Agency Detection Device (HADD) and, as such, are more prone to superstitious, conspiratorial, and mystical thinking. It is important to understand, as Shermer (2000) makes clear:

 

“The Belief Engine is real. It is normal. It is in all of us. Stuart Vyse [a research psychologist] shows for example, that superstition is not a form of psychopathology or abnormal behavior; it is not limited to traditional cultures; it is not restricted to race, religion, or nationality; nor is it only a product of people of low intelligence or lacking education. …all humans possess it because it is part of our nature, built into our neuronal mainframe.” (p. 47).

 

We are all inclined to detect patterns where there are none. Shermer refers to this tendency as patternicity; a closely related phenomenon is pareidolia. I’ve previously discussed this innate tendency, noting that “Our brains do not tolerate vague or obscure stimuli very well. We have an innate tendency to perceive clear and distinct images within such extemporaneous stimuli.” It is precisely what leads us to see familiar and improbable shapes in puffy cumulus clouds, or the Virgin Mary in a toasted cheese sandwich. Although this tendency can be fun, it can also lead to faulty and sometimes dangerous conclusions. Worse still, when we hold a belief, we are even more prone to perceive patterns that are consistent with or confirm that belief. We are all prone to Confirmation Bias – an inclination to take in, and accept as true, information that supports our belief systems, and to miss, ignore, or discount information that runs contrary to our beliefs.

 

Patternicity and confirmation bias are not the only factors that contribute to the illusion of cause. There are at least two other equally salient intuitive inclinations that lead us astray. First, we tend to infer causation from correlation. Second, the appeal of chronology – the coincidence of timing – also leads us toward drawing such causal connections (Chabris & Simons, 2010).

 

A fundamental rule in science and statistics is that correlation does not imply causation. Just because two events occur in close temporal proximity does not mean that one leads to the other. Chabris and Simons note that this rule is in place because our brains automatically – intuitively – draw causal associations, without any rational thought. We know that causation leads to correlation, but it is erroneous to assume that the opposite is true. Just because A and B occur together does not mean A causes B, or vice versa. There may be a third factor, C, that is responsible for both A and B. Chabris and Simons use ice cream consumption and drownings as an example. There is a sizable positive correlation between these two variables (as ice cream consumption goes up, so do the incidences of drowning), but it would be silly to assume that ice cream consumption causes drowning, or that increases in the number of drownings cause increases in ice cream consumption. Obviously, a third factor, summer heat, leads to both more ice cream consumption and more swimming. With more swimming behavior there are more incidents of drowning.
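A quick simulation makes the confound concrete. This is an illustrative sketch, not anything from the book: the variable names and numbers are invented, but it shows how a lurking third factor (heat) can produce a strong correlation between two variables that never influence each other at all.

```python
import random

# Hypothetical example: daily summer heat drives BOTH ice cream sales
# and swimming (and thus drownings). Neither outcome causes the other.
random.seed(42)

heat = [random.uniform(0, 35) for _ in range(365)]          # daily temperature (C)
ice_cream = [h * 10 + random.gauss(0, 20) for h in heat]    # sales rise with heat
drownings = [h * 0.2 + random.gauss(0, 1) for h in heat]    # swimming rises with heat

def correlation(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Strongly positive, even though there is no causal arrow between them:
print(correlation(ice_cream, drownings))
```

Removing the confound (holding heat constant) would collapse the correlation, which is exactly what a controlled study does and what our intuitive causal inference skips.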

 

Likewise, with vaccines and Autism, although there may be a correlation between the two (increases in the number of children vaccinated and increases in the number of Autism diagnoses), it is incidental – simply a coincidental relationship. But given our proclivity to draw inferences from correlation, it is easy to see why people would be misled by this relationship.

 

Add to this the chronology of the provision of the MMR vaccine (recommended between 12 and 18 months) and the typical age at which the most prevalent symptoms of Autism become evident (18–24 months), and people are bound to infer causation. Given that millions of children are vaccinated each year, there are bound to be examples of tight chronology.

 

So what is at work here are hyperactive agency detection (or overzealous patternicity), an inherent disposition to infer causality from correlation, and a propensity to “interpret events that happened earlier as the causes of events that happened or appeared to happen later” (Chabris & Simons, 2010, p. 184). Additionally, you have a doctor like Andrew Wakefield misrepresenting data in such a way as to solidify plausibility, and celebrities like Jenny McCarthy using powerful anecdotes to convince others of the perceived link. And anecdotes are powerful indeed. “…[W]e naturally generalize from one example to the population as a whole, and our memories for such inferences are inherently sticky. Individual examples lodge in our minds, but statistics and averages do not. And it makes sense that anecdotes are compelling to us. Our brains evolved under conditions in which the only evidence available to us was what we experienced ourselves and what we heard from trusted others. Our ancestors lacked access to huge data sets, statistics, and experimental methods. By necessity, we learned from specific examples…” (Chabris & Simons, 2010, pp. 177-178). When an emotional mother (Jenny McCarthy) is given a very popular stage (The Oprah Winfrey Show) and tells a compelling story, people buy it – intuitively – regardless of the veracity of the story. And when we empathize with others, particularly those in pain, we tend to become even less critical of the message conveyed (Chabris & Simons, 2010). These authors add that “Even in the face of overwhelming scientific evidence and statistics culled from studies of hundreds of thousands of people, that one personalized case carries undue influence” (p. 178).

 

Although the efficacy of science in answering questions like the veracity of the vaccine–Autism relationship is unquestionable, it appears that many people are incapable of accepting the results of scientific inquiry (Chabris & Simons, 2010). Acceptance necessitates the arduous application of reason and the rejection of the influences rendered by the intuitive portion of our brain. This is harder than one might think. Again, it comes down to evolution. Although the ability to infer cause is a relatively recent development, we hominids are actually pretty good at it. And perhaps, in cases such as this one, we are too proficient for our own good (Chabris & Simons, 2010).

 

References

 

Centers for Disease Control and Prevention. (2009). Recommended Immunization Schedule for Persons Aged 0 Through 6 Years. http://www.cdc.gov/vaccines/recs/schedules/downloads/child/2009/09_0-6yrs_schedule_pr.pdf

 

Chabris, C. F., & Simons, D. J. (2010). The Invisible Gorilla. New York: Random House.

 

Novella, S. (2010). Hyperactive Agency Detection. NeuroLogica Blog. http://www.theness.com/neurologicablog/?p=1762

 

Shermer, M. (2000). How We Believe. New York: W. H. Freeman/Henry Holt.


Vaccines and Autism

13 August 2010

It is hard to imagine anything more precious than one’s newborn child. Part of the joy of raising a child is the corresponding hope one has for the future. Don’t we all wish for our children a life less fraught with the angst and struggles we ourselves endured? One of the less pleasant aspects of my job has the effect, at least temporarily, of robbing parents of that hope. This erosion occurs in the parent’s mind and heart as a consequence of a diagnosis I often have to provide. I am a psychologist employed, in part, to provide diagnostic evaluations of preschool-age children suspected of having Autism. My intention is never to crush hope; instead, it is to get the child on the right therapeutic path as early as possible, in order to sustain as much hope as possible. However, uttering the word AUTISM in reference to one’s child constitutes a serious and devastating emotional blow.

 

Many parents come to my office very aware of their child’s challenges and the subsequent implications. They love their child, accept him as he is, and just want to do whatever they can to make his life better. Others come still steeped in hope that their child’s challenges are just a phase, or believing that she is just fine. Regardless, most of them report that they suspected difficulties very early in the child’s development. For example, many note a lack of smiles, chronic agitation, and difficulty soothing their child. Some children were not calmed by being held and may even have resisted it. Other children I see develop quite typically. They smile, giggle, rejoice at being held, coo and babble, and ultimately start to use a few words with communicative intent. The parents of this latter, rather rare subset then watch in dismay as their child withdraws, often losing both functional communication and interest in other children.

 

The timing of this developmental back-slide most often occurs at around 18 months of age. This regression happens to coincide with the recommended timing of the provision of the Measles-Mumps-Rubella (MMR) vaccine. This temporal chronology is important, as it has led, in part, to a belief that the vaccine itself is responsible for the development of Autism. What these parents must experience at this time, I can only imagine, is a horrible combination of confusion and grief. They have had their hopes encouraged and reinforced, only to have them dashed. And it is human nature, under such circumstances, to look for a direct cause. It makes perfect sense that parents would, given the chronology of events in some cases, suspect the MMR vaccine as the cause of their child’s regression.

 

During my occasional community talks on Autism, I often am asked about the alleged connection between vaccines and Autism. The coincidental temporal relationship between the provision of the MMR vaccine and this developmental decay leads to what Chabris and Simons in The Invisible Gorilla refer to as the Illusion of Cause. Chabris and Simons discuss how “chronologies or mere sequences of happenings” lead to the inference “that earlier events must have caused the later ones.” (2010, p. 165). By default, as a result of evolution, our brains automatically infer causal explanations based on temporal associations (Chabris & Simons, 2010).

 

At nearly every talk I give, there is someone in the audience who is convinced that their child (or a relative) is a victim of the MMR vaccine. Their compelling anecdotes are very difficult to refute or discuss. I find that the application of reason, or data, or both, misses the mark and comes off as cold and insensitive.

 

For such causal relationships to endure and spread, they often need some confirmation of the effect by an “expert.” This is where the story of Dr. Andrew Wakefield comes into play. Wakefield, a GI surgeon from the UK, published a paper in the prestigious UK medical journal The Lancet alleging a relationship between the MMR vaccine and the development of Autism. His “expert” opinion offered legitimacy to already brewing suspicions, backed by the perceived correlation between increases in vaccination and Autism rates, as well as the apparent chronology between the timing of the vaccines and the onset of Autism. Wakefield provided credibility and sufficient plausibility, and as a result the news of the alleged relationship gained traction.

 

But hold on! There were major flaws with Wakefield’s study that were not initially detected by The Lancet’s peer review panel. First of all, Wakefield was hired and funded by a personal injury attorney who commissioned him to prove that the MMR vaccine had harmed his clients (caused Autism). His study was not designed to test a hypothesis; it was carried out with the specific objective of positively establishing a link between Autism and provision of the MMR vaccine. From the outset, the study was a ruse disguised as science.

 

Just this year (2010), 12 years after its initial publication, The Lancet retracted Wakefield’s infamous study, and Dr. Wakefield has been stripped of his privilege to practice medicine in the UK. Problems, however, surfaced years earlier: as early as 2004, 10 of 13 co-authors retracted their support of a causal link. In 2005 it was alleged that Wakefield had fabricated data – in fact, some of the afflicted children used to establish the causal link had never actually received the MMR vaccine!

 

Since the initial publication of this study, hundreds of millions of dollars have been spent investigating the purported relationship between vaccines and Autism. Despite extensive large scale epidemiological studies, there have been no replications of Wakefield’s findings. Children who had not been vaccinated developed Autism at the same rate as those who had received the MMR. There is no relationship between the MMR vaccine and the development of Autism. As a result of Wakefield’s greed, hundreds of millions of dollars have been wasted. Those dollars could have been devoted to more legitimate pursuits, and that is not the worst of it. I will get to the real costs in a bit.

 

Another aspect of the history of this controversy is associated with the use of thimerosal as a preservative in vaccines. This notion, which has also been debunked, gained plausibility because thimerosal contains mercury, a known neurotoxin. You may ask: “Why on earth would a neurotoxin be used in vaccines?” Researchers have clearly established that thimerosal poses no credible threat to humans at the dosage levels used in vaccines. However, given the perceived threat, thimerosal is no longer used as a preservative in routine childhood vaccinations. In fact, the last doses using this preservative were produced in 1999 and expired in 2001. Regardless, the prevalence of Autism seems to be rising.

 

It is important to understand that mercury can and does adversely affect neurological development and functioning. However, long-term exposure at substantially higher doses than those present in thimerosal is necessary for such impact. The mercury in thimerosal is ethyl-mercury, which is not fat-soluble. Unlike the fat-soluble form, methyl-mercury (industrial mercury), ethyl-mercury is flushed from the body very quickly. Methyl-mercury can be readily absorbed into fatty brain tissue and render its damage through protracted contact. Methyl-mercury works its way into the food chain and poses a hazard to us if we eat too much fish (particularly those at the high end of the food chain). In reality, one is at more risk from eating too much seafood (shark and tuna) than from getting an injection of a vaccine preserved with thimerosal. Yet there does not seem to be a movement to implicate seafood as the cause of Autism.

 

Even though the relationship between vaccines and Autism has been thoroughly debunked, there is a movement afoot, steeped in conspiratorial thinking, that alleges that “Big Pharma” and the “Government” are colluding to deceive the people, and that elaborately fabricated data is used to cover up a relationship. This belief lives on. How can this be so? Even intelligent and well-educated people I know are avoiding important childhood immunizations based on the fear and misinformation spread by these well-intentioned people.

 

By 2003, the MMR vaccination rate in the UK had fallen below 79%, whereas a 95% rate is necessary to maintain herd immunity. Currently, vaccination rates are dropping in the US due to the efforts of celebrities like Jenny McCarthy, who purports that her son’s Autism was caused by vaccines. McCarthy campaigns fiercely against childhood immunizations, spurred on by the likes of Oprah Winfrey. Even folks like John McCain, Joe Lieberman, and Robert F. Kennedy, Jr. have spread such misinformation. Continuing to contend that the MMR vaccine is the culprit, Wakefield has moved to the US and has risen to martyr status among the anti-vaccine folk. You need to know that just months before he published his seminal paper, Wakefield received a patent on a measles vaccine that, he alleges, “cures” Autism. He has much to gain financially from his attempt to scare people away from the current safe and effective MMR vaccine.
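The 95% figure tracks the classic epidemiological rule of thumb: the herd-immunity threshold is 1 − 1/R0, where R0 is the basic reproduction number (how many people one infected person infects in a fully susceptible population). As a rough illustration (the R0 values below are commonly cited textbook estimates for measles, not figures from this post):

```python
# Illustrative sketch of the herd-immunity threshold formula, 1 - 1/R0.
# Measles R0 is often estimated at 12-18; that range yields the roughly
# 92-95% coverage figure public health officials cite.
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune to halt spread."""
    return 1.0 - 1.0 / r0

for r0 in (12, 18):
    print(f"R0 = {r0}: about {herd_immunity_threshold(r0):.1%} must be immune")
```

Because measles is so contagious, even a modest dip in coverage (like the UK’s fall below 79%) opens a large gap below the threshold and lets outbreaks take hold.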

 

It amazes me that people do not automatically dismiss this alleged vaccine-Autism link. Wakefield’s conflict of interest and discredited research practices alone draw into question anything he has to say. The mountains of epidemiological evidence also favor rejection of a causal relationship between the MMR vaccine and Autism. However, the power of anecdotes and misguided beliefs places millions of children in harm’s way.

 

Imagine yourself as a parent of a child who cannot get the MMR vaccine because of a serious medical condition (e.g., cancer). Such vulnerable children, of which there are millions worldwide, depend on herd immunity for their very survival. Now imagine that your child is inadvertently exposed to measles by coming into contact with a child who wasn’t vaccinated (because of misguided parental fear). Because of your child’s compromised immunity, she develops measles and gets seriously ill or dies. Such a scenario, although improbable, is not impossible. It is more likely today largely due to the diminished herd immunity caused by misinformation. Whooping cough (pertussis) is likewise posing serious concerns (and one documented death) in unvaccinated clusters because of the anti-vaccine folk. This myth persists, in part, because of the Illusion of Cause, and the consequences have become deadly. Next week I will delve into the Illusion that sustains this erroneous and dangerous belief system.

 

References:

 

Association for Science in Autism Treatment. (2009). Autism & Vaccines: The Evidence to Date. Vol. 6, No. 1. http://www.asatonline.org/pdf/summer2009.pdf

 

Centers for Disease Control and Prevention. Autism Spectrum Disorders: Data & Statistics. http://www.cdc.gov/ncbddd/autism/data.html

 

Chabris, C. F., & Simons, D. J. (2010). The Invisible Gorilla. New York: Random House.

 

Plait, P. (2009). The Australian antivax movement takes its toll. Bad Astronomy Blog. http://blogs.discovermagazine.com/badastronomy/2009/04/26/the-australian-antivax-movement-takes-its-toll/


In psychology there are some pretty famous studies that have penetrated popular culture. Many folks are at least familiar with Skinner’s rat box, Pavlov’s salivating dogs, Milgram’s obedience studies, Bandura’s Bobo dolls, and Harlow’s rhesus monkeys reared by wire-frame and terry-cloth surrogate mothers. In recent history, perhaps the best-known study pertains to inattentional blindness. If you have never heard of, or seen a video of, six college students – three in black shirts and three in white shirts – bouncing a couple of basketballs back and forth, see the following video before you proceed.

 

 

So, of course, I am referring to Daniel Simons’ Invisible Gorilla study. Just about everyone I know has seen this video, and I don’t recall any of them telling me that they did see the gorilla. I didn’t, and I was absolutely flabbergasted – because I tend to be a pretty vigilant guy. This video is a graphic illustration of what Chabris and Simons (2010) refer to as the Illusion of Attention: about 50% of those who watch the video while counting passes among the white-shirted players miss the gorilla.

 

This particular illusion concerns me because I spend a fair amount of time riding a bicycle on the roads of Western New York. So why should I, or anyone who rides a bicycle or motorcycle, or anyone who drives while texting or talking on a cell phone, be concerned?

 

The cold hard truth is that we may completely miss events or stimuli that we do not expect to see. If you don’t expect to see, and therefore fail to look for, bicycles and motorcycles, you may look right at them but fail to see them. LOOKING IS NOT SEEING, just as hearing is not listening. This hearing/listening analogy is dead on. How often have you been caught hearing someone but not listening to what was actually being said? Chabris and Simons discuss in their book, The Invisible Gorilla, a study conducted by Daniel Memmert of Heidelberg University that demonstrated (using an eye-tracker) that virtually everyone who missed the gorilla looked directly at it at some point in the video (often for a full second). Bikers are the invisible gorillas of the roadways.

 

And as for drivers, if you are distracted by a cell phone conversation or by texting, you are less likely to see unexpected events (e.g., bicycles, motorcycles, pedestrians, wildlife).

 

Most drivers who text and talk on cell phones do not have problems. In fact, most driving is uneventful – as a result, most people get away with these behaviors. However, it is when an unexpected event occurs that mobile phone users struggle to see it and respond fluently. You are under the same illusion as everybody else who has not been in an accident. Everyone believes, until they hit or kill somebody, that they are a proficient driver even while texting or talking on the phone. And by the way, hands-free headsets make no difference. Driving while talking on a cell phone impairs you as much as alcohol does.

 

Think about driving down a road not seeing and subsequently hitting a young child on a bike. Think about having to live with killing a middle aged couple with three kids in college who were lawfully riding down the road on a tandem bicycle.  You hit the invisible gorilla.  Live with that!

 

Daniel Simons, in a recently published study, also suggests that even if you are expecting an unexpected event, it is likely that you will miss other unanticipated events. Check out The Monkey Business Illusion video, even if you have seen the invisible gorilla video. Test yourself.

 

 

I have long known that I am at risk while riding my bike on the road. I have recently taken to wearing bright hi-vis attire as I ride. Doing so is completely inconsistent with my style, but I have done so in an effort to be safer. I was surprised to learn that research shows that it will increase your visibility to those who are looking for you – but that it will likely make no difference at all to inattentionally blind drivers. For drivers who do not expect to see cyclists, hi-vis clothing will not likely increase the likelihood that you will be seen. Head and tail lights work on a similar level: they do increase visibility, but only for those looking for such sights. The best way to increase one’s safety while riding is to look like a car.

 

It is also important to note that riding in areas where there are more bikers helps too. Chabris and Simons (2010) noted a report by Peter Jacobson, a public health consultant in California who analyzed data on accidents involving automobiles striking pedestrians or cyclists. He found that in cities where there were more walkers and cyclists, there were actually fewer accidents. More folks walking or riding bikes seems to increase the level of driver expectation for seeing such individuals – thus making one less at risk of being victimized by inattentional blindness. It was further noted that drivers who also ride bikes may actually be more aware – if only more people would get out of their cars and get back on bicycles.

 

The bottom line is that our intuition about our attention is problematic. Intuitively, we believe that we attend to, and see, what is right before us. Research and real-world data show us that this is not the case. At the very least, when driving, we need to be aware of this erroneous assumption and work diligently to avoid distractions like talking on the phone or texting. As for cyclists (motor-powered or not), we must anticipate that we won’t be seen and behave accordingly. Although hi-vis clothing and lights may not make you visible to some drivers, they will to those who are looking out for you.

 

Chabris and Simons contend that this illusion is a by-product of modernity and the fast-paced, highly distracting world we live in. We evolved over millions of years, by process of natural selection, in a middle-sized, slow-paced world. Traveling faster than a few miles an hour is a relatively new development for our species. Today we travel in motor vehicles at breakneck speeds. On top of that, we distract ourselves with cell phones, Blackberries, iPhones, iPods, and GPS units. Although the consequences of these factors can be grave, in most cases we squeak by – which is a double-edged sword, because it essentially reinforces both the illusion and the behavior.

 

References:

 

Chabris, C. F., & Simons, D. J. (2010). The Invisible Gorilla. New York: Random House.

 

Simons, D. J. (2010). Monkeying around with the gorillas in our midst: Familiarity with an inattentional-blindness task does not improve the detection of unexpected events. i-Perception, 1(1), 3–6.


Imagine yourself walking down a familiar street, approaching a stranger who is obviously lost, staring hopelessly at a map. As you saunter by, you make eye contact and offer a look of willingness to help. He asks you for directions. As you begin to offer your advice, you are interrupted by a construction crew carrying a large door. They walk right between you and the stranger. Now imagine that as the construction crew visually parted you from the stranger, a new and different person covertly took on the same lost role. This new stranger is wearing different clothes, is taller by three inches, and has a different build and different vocal qualities. Do you think you would notice?

 

Chabris and Simons (2010), in The Invisible Gorilla, share the results of a study carried out by Dan Simons and a colleague in which they tested whether people would notice such changes in a scenario very much like the one I just described. When the scenario was described to undergraduates, 95% believed that they would certainly notice such a change (as is likely the case for you as well). Yet when the experiment was carried out in the real world, nearly 50% of the participants did not notice the switch!

 

These particularly startling data are indicative of change blindness, defined by Chabris and Simons (2010) as the failure to notice changes between what was in view moments before and what is in view currently. Essentially, we tend not to compare stimuli from moment to moment, and thus fail to notice the changes between them. As a result, we are in many cases “blind” to pretty obvious changes. What is equally salient is that we are unaware of this blindness. If you are like most people, you said “No way I’d miss that!” Yet it is likely that about half of you would miss such changes.

 

Unconvinced? So were a group of Harvard undergraduates who had just attended a lecture that covered the above “door study” and change blindness. After the lecture, students were recruited to participate in further research. Interested students were directed to a different floor where they were greeted by an experimenter behind a counter. As the recruits proceeded to review and complete the necessary paperwork, the experimenter who greeted and instructed them regarding the paperwork ducked down behind the counter, presumably to file some papers, only to depart as a new and different experimenter took over the role. Even after being primed with the knowledge of change blindness, not one of the students noticed the swap! This was true even for some of the students who had just moments before boldly stated that they would notice such a change. We are in fact largely blind to our change blindness regardless of our confidence regarding our vigilance.

 

These results, contend Chabris and Simons, comprise conclusive evidence for the illusion of memory – the disconnect between how our memory actually works and how we think it works.

 

Most of us are all too aware of the failings of our short-term memory. We often forget where we put the car keys, cell phone, or sunglasses. These authors note that we are generally pretty accurate when it comes to knowing the limits of this type of memory. License plates and phone numbers have only seven digits because most of us can only hold that much data in short-term memory. However, when it comes to understanding the limits of our long-term memory we tend to hold entirely unrealistic, fallacious, and illusory expectations.

In a national survey of fifteen hundred people [Chabris and Simons] commissioned in 2009, we included several questions designed to probe how people think memory works. Nearly half (47%) of the respondents believed that ‘once you have experienced an event and formed a memory of it, that memory doesn’t change.’ An even greater percentage (63%) believed that ‘human memory works like a video camera, accurately recording the events we see and hear so that we can review and inspect them later.’ (Chabris & Simons, 2010, pp. 45-46).

They added:

People who agreed with both statements apparently think that memories of all our experiences are stored permanently in our brains in an immutable form, even if we can’t access them. It is impossible to disprove this belief… but most experts on human memory find it implausible that the brain would devote energy and space to storing every detail of our lives… (p. 46).

So, as it turns out, our memories of even significant life events are quite fallible. Although we perceive such memories as vivid and clear, they are individual constructions based on what we already know, our previous experiences, and other cognitive and emotional associations that we ultimately pair with the event. “These associations help us discern what is important and to recall details about what we’ve seen. They provide ‘retrieval cues’ that make our memories more fluent. In most cases, such cues are helpful. But these associations can also lead us astray, precisely because they lead to an inflated sense of precision of memory.” (Chabris & Simons, 2010, p. 48). In other words, our memories are not exact recordings; they are instead modified and codified personal replicas that are anything but permanent.

 

I cannot do justice to the impressive and exhaustive detailing that Chabris and Simons provide in The Invisible Gorilla regarding the illusion of memory. Suffice it to say that we give far too much credit to the accuracy of our own long-term memories and hold unrealistic expectations regarding others’ recall. People recall what they expect to remember, and memories are modified over time based on malleable belief systems. Memories fade and morph over time depending on the “motives and goals of the rememberer” (Chabris & Simons, 2010, p. 51).

“Although we believe that our memories contain precise accounts of what we see and hear, in reality these records can be remarkably scanty. What we retrieve often is filled in based on gist, inference, and other influences; it is more like an improvised riff on a familiar melody than a digital recording of an original performance. We mistakenly believe that our memories are accurate and precise, and we cannot readily separate those aspects of our memory that accurately reflect what happened from those that were introduced later.” (Chabris & Simons, 2010, pp 62-63).

With riveting stories, they detail continuity errors in movies, source memory errors (is it your memory or mine?), flashbulb memories, and false memories in a way that really drives home the point that our memories are not to be trusted as factual depictions of history. They raise the question: Can you trust your memory?

 

The answer: partially, but you must be aware that your memory is not immutable. It is erroneous to assume that your memories are factual, and it is equally fallacious to presume that others’ memories are infallible. Two people witnessing the same event from the same perspective are likely to recall it differently because of their unique personal histories, capabilities, and internal cognitive associations, as they store into memory the bits and pieces of the event.

 

Isn’t it amazing and scary that we give so much credit and power to eyewitness testimony in a court of law? Such power is conferred based on the pervasive and deeply held belief in the accuracy of memory – which, you must know by now, is an illusion. This is just another example of the illusion of justice in this country.

 

On a more personal level, next time you and your significant other get into a debate about how some past event went down, you have to know that you both are probably wrong (and right) to some degree. There is your truth, their truth, and the real truth. These can be illustrated as a Venn diagram with three circles that have varying degrees of mutual overlap. We must admit that over time the real truth is likely to become a smaller piece of the story. This necessitates that we get comfortable with the reality that we don’t possess a DVR in our brains, and that we part ways with yet another illusion about the importance and power of our uniquely human intuition.

 

Reference:

 

Chabris, C. F., & Simons, D. J. (2010). The Invisible Gorilla. New York: Random House.


Have you ever wondered what makes a pundit a pundit? I mean really! Is there a pundit school or a degree in punditry? Given what I hear, I can only imagine that what would be conferred upon graduation is a B.S. of a different, more effluent sort. I mean REALLY!

 

I am certain that many of you have heard the rhetoric spewed by the talking heads on television and talk radio. This is true regardless of their alleged political ideology. Even more alarming, it seems to me, is that the more bombastic they are, the more popular they become. A pundit is supposed to be an expert – one with greater knowledge and insight than the general population – and should subsequently possess the capacity to analyze current scenarios and draw better conclusions about the future than typical folk.

 

However, what we typically hear is two or more supremely confident, competing versions of reality. Name the issue – anthropogenic global warming, health care reform, the value of free-market systems – and virtually no two pundits can agree, unless of course they are political brethren.

 

Have you ever wondered whether anyone has put the predictive reliability of these so-called experts to a test? Well, Philip Tetlock, a psychology professor at UC Berkeley, has done just that. In 1984 Tetlock undertook such an analysis, and his initial data were so alarming (everybody had called the future wrong with regard to the Cold War and the demise of the USSR) that he embarked on what was to become a two-decade-long quantitative analysis of, and report card on, the true predictive capabilities of professional pundits.

 

In 2005 Tetlock published his findings in his book, Expert political judgment: How good is it? How can we know? The results were again surprising. He analyzed the predictions made by over 280 professional experts. He gave each a series of professionally relevant real-life situations and asked them to assign probabilities to three possible outcomes (often of the form: things will stay the same, get better, or get worse). Further, Tetlock interviewed each expert to evaluate the thought processes used to draw their conclusions.

 

In the end, after nearly twenty years of predictions and real life playing itself out, Tetlock was able to analyze the accuracy of over 82,000 predictions. The results were conclusive – the pundits performed worse than random chance in predicting outcomes within their supposed areas of expertise. These experts accurately predicted the future less than 33% of the time, and non-specialists did equally well. To make matters worse, the most famous pundits were the least accurate. A clear pattern emerged – confidence in one’s predictions was highly correlated with error. Those who were most confident about their predictions were most often the least accurate. Yet the most confident, despite their inaccuracy, were the most popular! Tetlock noted that they were essentially blinded by their certainty.
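The “less than 33%” figure is damning because, with three possible outcomes, a forecaster throwing darts hits roughly one time in three. A minimal simulation of that chance baseline (illustrative only – these are random guesses, not Tetlock’s actual data):

```python
import random

random.seed(42)  # reproducible illustration

def random_guess_accuracy(n_predictions=82_000, n_outcomes=3):
    """Accuracy of a forecaster who guesses uniformly at random among
    three outcomes (stay the same / get better / get worse)."""
    hits = sum(
        random.randrange(n_outcomes) == random.randrange(n_outcomes)
        for _ in range(n_predictions)
    )
    return hits / n_predictions

baseline = random_guess_accuracy()
print(f"Chance baseline over 82,000 guesses: {baseline:.1%}")
```

Over 82,000 predictions the chance baseline lands very close to 33.3% – so experts scoring below that are, quite literally, being outperformed by dart-throwing.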

 

Jonah Lehrer, in How We Decide, wrote of Tetlock’s study: “When pundits were convinced that they were right, they ignored any brain areas that implied that they might be wrong. This suggests that one of the best ways to distinguish genuine from phony expertise is to look at how a person responds to dissonant data. Does he or she reject the data out of hand? Perform elaborate mental gymnastics to avoid admitting error?” He also suggested that people should “ignore those commentators that seem too confident or self-assured. The people on television who are most certain are almost certainly going to be wrong.”

 

You might be surprised that the vast majority of the pundits actually believed that they were engaging in objective and rational analysis when drawing their conclusions.

 

So, experts, rationally analyzing data, drawing conclusions with less than random chance accuracy? One has to question either their actual level of expertise or the objectivity of their analysis. Tetlock suggests that they are “prisoners of their preconceptions.”

 

This raises the question: Is this an error of reason or an error of intuition? Jonah Lehrer suggests that this error plays out as one cherry-picks which feelings to acknowledge and which to ignore. Lehrer noted: “Instead of trusting their gut feelings, they found ways to disregard the insights that contradicted their ideologies… Instead of encouraging the arguments inside their heads, these pundits settled on answers and then came up with reasons to justify those answers.”

 

Chabris and Simons, in The Invisible Gorilla, discuss why we are taken in by the pundits despite their measurable incompetence, and why they likely make the errors they do. The bottom line is that such ubiquitous errors (made by novices and experts alike) are in fact illusions of knowledge perpetrated by intuition – and, further, that we are suckers for confidence.

 

First of all, our intuitive inclination is to overgeneralize and assume that one’s confidence is a measure of one’s competence. Such an assumption is reasonable in situations where one personally knows the limits of an individual’s capabilities. When it comes to pundits, few people know the supposed expert well enough to accurately assess whether he or she is worthy of that confidence. Regardless, people prefer and are drawn toward confidence. Our intuitive attraction to, and trust in, confidence sets us up for error. It is the illusion of confidence.

 

Chabris and Simons then review numerous stories and studies that “show that even scientific experts can dramatically overestimate what they know.” They demonstrate how we confuse familiarity with knowledge – and that when our knowledge is put to the test “…our depth of understanding is sufficiently shallow that we may exhaust our knowledge after just the first question. We know that there is an answer, and we feel that we know it, but until asked to produce it we seem blissfully unaware of the shortcomings in our own knowledge.” They add:

And even when we do check our knowledge, we often mislead ourselves. We focus on those snippets of information that we do possess, or can easily obtain, but ignore all of the elements that are missing, leaving us with the impression that we understand everything we need to.

 

So what can we safely conclude?

 

For certain, we should be aware of the limits of our own knowledge and remain ever skeptical of what experts espouse (particularly if they come off as very confident). Tetlock suggests that responsible pundits should state their predictions in measurable terms – so that they are subject to analysis, both for error correction/learning and for accountability. Further, he discusses the importance of attaching error bars to predictions, denoting the probability of accuracy. Chabris and Simons contend that only through rational, analytic thought can we overcome the illusion of knowledge. We have to stave off our intuitive inclination to trust bold, black-and-white predictions; we have to accept that complicated issues demand complicated solutions, and that predicting the future is very difficult. As such, we need to get more comfortable with probabilities and more skeptical of certainties. As for the pundits – they are not worth listening to – they are almost always wrong – and all they really do is polarize the process and the nation. We need to inform one another of this and ultimately make an active, rational choice to stop victimizing ourselves.
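Tetlock’s prescription – measurable, probabilistic predictions – has a standard scoring tool. One common choice (my illustration; Tetlock’s own scoring rules are more elaborate) is the Brier score: the mean squared difference between the forecast probabilities and what actually happened. Lower is better, and a supremely confident forecast that turns out wrong is punished far more heavily than an honestly hedged one:

```python
def brier_score(forecast, outcome):
    """Mean squared error between forecast probabilities and the actual
    outcome encoded one-hot. 0 = a perfect, fully confident, correct call."""
    return sum((p - o) ** 2 for p, o in zip(forecast, outcome)) / len(forecast)

# Outcome categories: [stay the same, get better, get worse].
# Suppose "get worse" is what actually happened.
outcome = [0, 0, 1]

hedged     = [1/3, 1/3, 1/3]    # maximally uncertain
cautious   = [0.2, 0.2, 0.6]    # leans the right way, with humility
bold_wrong = [0.9, 0.05, 0.05]  # supremely confident, and wrong

print(brier_score(hedged, outcome))      # ≈ 0.222
print(brier_score(cautious, outcome))    # = 0.080
print(brier_score(bold_wrong, outcome))  # ≈ 0.572
```

Held to a score like this over years, the pundit blinded by certainty would be exposed immediately – which is precisely why predictions stated in unmeasurable terms are so convenient for the predictor.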

 

References:

Chabris, C. F., & Simons, D. J. (2010). The Invisible Gorilla. New York: Random House.

Lehrer, J. (2009). How We Decide. New York: Houghton Mifflin Harcourt.

Menand, L. (2005). Everybody’s an Expert. The New Yorker. http://www.newyorker.com/archive/2005/12/05/051205crbo_books1?printable=true

Tetlock, P.E. (2005). Expert political judgment: How good is it? How can we know? Princeton: Princeton University Press.


Over the last couple of months I have submitted posts proclaiming the potency of intuition. One of my major resources has been Malcolm Gladwell’s Blink: The Power of Thinking Without Thinking. Among Gladwell’s tenets, the most prominent was the power of intuition and its relative supremacy, in certain situations, over rational thought. I have also heavily referenced Jonah Lehrer’s How We Decide. Lehrer argues that there is not, in fact, a Platonic dichotomy that establishes rationality in a supreme and distinct role over intuition. Instead, he suggests that emotion plays a key role in decision making – much more so than has historically been acknowledged. Lehrer, however, applies more scientific scrutiny and relies more heavily on research than does Gladwell.

 

Currently I am reading The Invisible Gorilla by Christopher F. Chabris and Daniel J. Simons. These cognitive psychologists are best known for their invisible gorilla study illustrating selective attention. The authors appear to be on a mission to resurrect rational thought by highlighting the inherent weaknesses of intuition. Gladwell in particular comes under scrutiny for his alleged glorification of rapid cognition.

 

Not only have Gladwell’s hypotheses come under attack, so too has his journalistic approach. Simons and Chabris efficiently deconstruct a couple of Gladwell’s anecdotes as examples of illusions manifested by intuition. Contrary to the message of Blink, they contend that intuition is inherently problematic, and they detail the automatic illusions that spring forth from the adaptive unconscious.

 

Anecdotal evidence is inherently flawed yet amazingly compelling. Gladwell, they acknowledge, is a master storyteller, and he uses this talent to effectively support his contentions. They argue, however, that he falls prey to the very illusions of intuition that he celebrates.

 

Jonah Lehrer seems to escape Simons’ and Chabris’ scrutiny – though this may simply be an artifact of release dates: How We Decide was released in 2009, while Gladwell’s Blink was released in 2005. Whereas Blink appears on the surface to be a celebration of intuition, Lehrer instead puts a microscope on the brain and the interplay of reason and emotion. He identifies the brain regions thought to be involved in these functions and highlights the research that systematically debunks the notion of reason and emotion as distinct epic foes battling for supremacy. Lehrer does not celebrate the relative power of intuition over reason; instead he makes it clear that emotion, acting as a messenger of intuition, plays a crucial role in reason itself.

 

Rarely are the parts of complex systems clearly distinct. Dividing brain function into dichotomous terms like reason and intuition is just another example of a flawed human inclination to pigeonhole nature and make issues black and white. Although Gladwell puts a more positive spin on intuition than has historically been the case, he also makes an effort to identify at least some of its shortcomings. Lehrer brings into focus the complexity and interconnectedness of the system and dispels the traditional dichotomy. Simons and Chabris scientifically scrutinize the Gladwellian notion of the supremacy of intuition. Their skeptical message lacks the sex appeal of thinking without thinking, but it is very important just the same. I look forward to detailing parts of The Invisible Gorilla in the weeks to come.

 

References:

 

Chabris, C. F., & Simons, D. J. (2010). The Invisible Gorilla. New York: Random House.

 

Gladwell, M. (2005). Blink: The Power of Thinking Without Thinking. New York: Little, Brown and Company.

 

Lehrer, J. (2009). How We Decide. New York: Houghton Mifflin Harcourt.


Believe it or not, free will is, to a large extent, an illusion. For the most part, what you do as you go through your day is based on decisions made outside of your conscious awareness. Many of these decisions involve a complicated and largely unconscious interplay among various brain regions, each struggling for control of your behavior.

 

One has to be careful to avoid anthropomorphic tendencies when trying to understand this epic struggle. It is not as though there are specific Freudian forces (id, ego, superego) at play, each with a specific and unique mission. In reality it is more like chemical warfare in your brain, where neurotransmitters are released by the relevant brain centers based on current environmental circumstances (what your senses perceive in the world), your previous experiences in similar circumstances, and your treasure trove of knowledge. The emotions triggered by those neurotransmitters are then weighed in the orbitofrontal cortex (OFC), in what is essentially a tug of war involving varying measures of reinforcement and punishment.

 

Most of us are unaware of this neurological process and live under the illusion that we go through life making rational, reason-based decisions. Although we may live within this illusion, the people who lay out supercenter floor plans or produce advertisements know the truth. This discrepancy in knowledge makes you vulnerable. They use their knowledge of how the brain works in a concerted effort to help you part ways with your hard-earned money. It is not really a conspiracy; it is just an effort to gain a competitive advantage. It’s business.

 

What follows is an abbreviated explanation of the brain systems in play, and then an exposé of how marketers use our brains against us. This information is drawn from Jonah Lehrer’s excellent book How We Decide.

 

First there is the dopamine reward pathway. Dopamine is a neurotransmitter that serves a number of important functions in the brain. One of its most cogent roles is played out as a result of activation of the nucleus accumbens (NAcc). When the NAcc is activated it floods the brain with dopamine and we as a result experience pleasure. Desire for an item activates the NAcc. Being in the presence of the desired item activates it further. The greater the arousal of the NAcc the more pleasure we experience. It is your NAcc that is responsible for the happiness you feel when you eat a piece of chocolate cake, or listen to your favorite song, or watch your sports team win an exciting game (Lehrer, 2009).

 

Then there is the insula – a brain region that produces, among other sensations, aversive feelings. In a New York Times article on the insula, Sandra Blakeslee (2007) noted that this center “lights up” in brain scans when people feel pain, anticipate pain, empathize with others, see disgust on someone’s face, are shunned in social settings, or decide not to buy an item. In many cases we avoid exciting the insula, as it is the system that produces the unpleasantness of caffeine or nicotine withdrawal and the negative feelings associated with spending money.

 

Superstores are designed to excite your NAcc and quiet your insula. You can’t help but notice when you walk into a Target, Walmart, Lowes, or even Pier 1 Imports just how much stuff is there – most of which you do not possess. Just by entering the store you have aroused your NAcc and the associated cravings. Lehrer (2009) notes:

“Just look at the interior of a Costco warehouse. It’s no accident that the most coveted items are put in the most prominent places. A row of high-definition televisions lines the entrance. The fancy jewelry, Rolex watches, iPods, and other luxury items are conspicuously placed along the corridors with the heaviest foot traffic. And then there are the free samples of food, liberally distributed throughout the store. The goal of a Costco is to constantly prime the pleasure centers of the brain, to keep us lusting after things we don’t need. Even though you probably won’t buy the Rolex, just looking at the fancy watch makes you more likely to buy something else, since the desired item activates the NAcc. You have been conditioned to crave a reward.”

He further noted:

“But exciting the NAcc is not enough; retailers must also inhibit the insula. This brain area is responsible for making sure you don’t get ripped off, and when it’s repeatedly assured by retail stores that low prices are “guaranteed,” or that a certain item is on sale, or that it’s getting the “wholesale price,” the insula stops worrying so much about the price tag.  In fact, researchers have found that when a store puts a promotional sticker next to a price tag – something like “Bargain Buy!” or “Hot Deal!” – but doesn’t actually reduce the price, sales of that item still dramatically increase.  The retail tactics lull the brain into buying more things, since the insula is pacified.  We go broke convinced that we are saving money.”

I hypothesize that the frequently redundant catalogs that routinely fill our mailboxes from retailers like LLBean and Lands End work on our brains much like supercenters do. They excite the NAcc with idealized images modeled by perfect, pretty people. They pacify the insula by noting improved features, sales, and deep discounts on closeouts. The necessary use of credit cards, Lehrer (2009) notes, has an additional inhibitory effect on the insula. When the insula is calm and you are primed with dopamine, the pleasure center has a disproportionate amount of control. You may think you have complete rational control over this – but it all takes place outside of your direct awareness and plays out as feelings that guide your behavior. I further hypothesize that online retail stores work in a similar way (although for some shoppers the insula may be aroused by security concerns about using a credit card online). Regardless, substantial marketing attempts by companies like EMS, REI, Victoria’s Secret, LLBean, and Bath & Body Works fill my inbox, always hoping to draw in my NAcc, pacify my insula, and subsequently open my wallet. You have to guess that the money devoted to catalogs and internet marketing pays off for these companies, or they wouldn’t do it.

 

Being aware of one’s neurology and of how we are manipulated may help us mediate these unconscious forces and thus make better decisions. I myself try to avoid malls and stores like Target because of the feelings they create in me. And for this very reason, I’ve stopped routinely looking at catalogs. I try to shop based only on need – not want. I’m making progress – but it is hard – these patterns have been in place and reinforced for a long time.

 

References

 

Blakeslee, S. (2007). Small Part of the Brain, and Its Profound Effects. The New York Times. http://www.nytimes.com/2007/02/06/health/psychology/06brain.html?emc=eta1&pagewanted=all

 

Lehrer, J. (2009). How We Decide. New York: Houghton Mifflin Harcourt.


For nearly as long as humans have been thinking about thinking, one of the most intriguing issues has been the interplay of reason and emotion. For the greatest thinkers throughout recorded history, reason has reigned supreme. The traditional paradigm has been one of a dichotomy in which refined and uniquely human REASON pitches an ongoing battle for control over animalistic and lustful EMOTIONS. It has been argued by the likes of Plato, Descartes, Kant, and even Thomas Jefferson that reason is the means to enlightenment and that emotion is the sure road to human suffering (Lehrer, 2009).

 

This Platonic dichotomy remains a pillar of Western thought (Lehrer, 2009). Suppressing your urges is held to be a matter of will – recall the mantras “Just say no!” and “Just do it!” My guess is that most people today continue to think of the brain in these terms. Until recently, even the cognitive sciences reinforced this notion. Only through very recent advances in the tools used to study the brain (e.g., fMRI) and other ingenious studies (e.g., Damasio’s Iowa Gambling Task) has any evidence been generated to place this traditional paradigm in doubt. As it turns out, emotion plays a crucial role in decision making. Without it, our ability to reason effectively is seriously compromised. I long believed that feelings and emotions should be under the control of our evolutionary gift – the frontal cortex. Reason, after all, is what sets us apart from the other animals. We have since learned, however, that these forces are NOT foes but essentially collaborative and completely interdependent.

 

The implications of this recent knowledge certainly do not suggest that it is fruitless to employ our reason and critical thinking capabilities as we venture through life. Reason is crucial, and it does set us apart from other life forms that lack fully developed frontal cortices. That part of the outdated concept is correct. However, we are wrong to suppose that emotion lacks value in decision making, or that it is a villainous force.

 

Jonah Lehrer, in his book, How We Decide discusses this very issue and notes that: “The crucial importance of our emotions – the fact that we can’t make decisions without them – contradicts the conventional view of human nature, with its ancient philosophical roots.” He further notes:

 

“The expansion of the frontal cortex during human evolution did not turn us into purely rational creatures, able to ignore our impulses. In fact, neuroscience now knows that the opposite is true: a significant part of our frontal cortex is involved with emotion. David Hume, the eighteenth-century Scottish philosopher who delighted in heretical ideas, was right when he declared that reason was “the slave of the passions.”

 

So how does this work? How do emotion and critical thinking join forces? Neuroscientists now know that the orbitofrontal cortex (OFC) is the brain center where this interplay takes place. Located in the lower frontal cortex (the area just above and behind your eyes), your OFC integrates a multitude of information from various brain regions along with visceral emotions in an attempt to facilitate adaptive decision making. Current neuroimaging evidence suggests that the OFC is involved in monitoring, learning, as well as the memorization of the potency of both reinforcers and punishers. It operates within your adaptive unconscious – analyzing the available options, and communicating its decisions by creating emotions that are supposed to help you make decisions.

 

Next time you are faced with a decision, and you experience an associated emotion – it is the result of your OFC’s attempt to tell you what to do. Such feelings actually guide most of our decisions.

 

Most animals lack an OFC, and in our primate cousins this cortical area is much smaller. As a result, these other organisms lack the capacity to use emotions to guide their decisions. Lehrer notes: “From the perspective of the human brain, Homo sapiens is the most emotional animal of all.”

 

I am struck by the reality that natural selection has hit upon this opaque approach to guide behavior. This just reinforces the notion that evolution is not goal directed. Had evolution been goal directed or had we been intelligently designed don’t you suppose a more direct or more obviously rational process would have been devised? The reality of the OFC even draws into question the notion of free will – which is a topic all its own.

 

This largely adaptive brain system of course has drawbacks and limitations – many of which I have previously discussed (e.g., implicit associations, cognitive conservatism, attribution error, cognitive biases, essentialism, pareidolia). This is true, in part, because these newer and “higher” brain functions are relatively recent evolutionary developments and the kinks have yet to be worked out (Lehrer, 2009). I also believe that perhaps the complexities and diversions of modernity exceed our neural specifications. Perhaps in time natural selection will take us in a different direction, but none of us will ever see this. Regardless, by learning about how our brains work, we certainly can take an active role in shaping how we think. How do you think?

 

References:

 

Gladwell, M. (2005). Blink: The Power of Thinking Without Thinking. Little, Brown and Company: New York.

 

Lehrer, J. (2009). How We Decide. Houghton Mifflin Harcourt: New York.


Recently, Fox News aired a story posing the question of whether Fred Rogers was evil.  Why, you may ask, would anyone use the word evil in reference to such a gentle man?  They were suggesting that his “you’re special” message fostered unearned self esteem and in effect ruined an entire generation of children.  This accusation inspired a fair amount of discourse that in some cases boiled down to the question of why children today have become such hollow, needy shells.  An example of the discourse on this topic can be seen at Bruce Hood’s blog in an article entitled Mr. Rogers is Evil According to Fox News.

 

The consensus among skeptics was that Mr. Rogers was not, in fact, evil and that he is not responsible for the current juvenile generation’s need for copious praise and attention for relatively meaningless contributions. There was almost universal acknowledgment of the problem itself, however, and discussion led to troubling issues such as grade inflation at schools and universities and poor performance in the workplace. An intriguing article by Carol Mithers in the Ladies Home Journal, entitled Workplace Wars, addresses the workplace implications of this phenomenon. Mithers notes:

“… the Millennials — at a whopping 83 million, the biggest generation of all — are technokids, glued to their cell phones, laptops, and iPods. They’ve grown up in a world with few boundaries and think nothing of forming virtual friendships through the Internet or disclosing intimate details about themselves on social networking sites. And, many critics charge, they’ve been so coddled and overpraised by hovering parents that they enter the job market convinced of their own importance. Crane calls them the T-ball Generation for the childhood sport where “no one fails, everyone on the team’s assured a hit, and every kid gets a trophy, just for showing up.””

 

Workers of this generation are known for their optimism and energy — but also their demands: “They want feedback, flexibility, fun, the chance to do meaningful work right away and a ‘customized’ career that allows them to slow down or speed up to match the different phases of life,” says Ron Alsop, author of The Trophy Kids Grow Up: How the Millennial Generation Is Shaking Up the Workplace.

I find it ironic that the very people today who struggle with the behavior of the Millennials are the ones who shaped the behaviors of concern. I personally have struggled with the rampant misapplication of praise, attention, and the provision of reinforcement for meaningless achievements. I have seen this everywhere – in homes, schools, youth athletic clubs, you name it. It has been the most recent parenting zeitgeist. But where did this philosophy come from?

 

Throughout my doctoral training in psychology (late 80’s and early 90’s) I learned that reinforcement is a powerful tool, but it was clear to me that it has to be applied following behaviors you WANT to increase. Nowhere in my studies did I read of the importance of raising children through the application of copious amounts of reinforcement just to bolster their self esteem. I am aware of no evidence-based teachings that suggest this approach. However, given the near universal application of these practices, it must have come from somewhere. This very question, I’m sure, led to the placement of responsibility squarely on the shoulders of poor Mr. Rogers.

 

Although the source of this approach remains a mystery to me, Dr. Carol Dweck’s work clarifies the process behind the outcome. In an interview in Highlights, Dr. Dweck discusses Developing a Growth Mindset.  Dr. Dweck has identified two basic mindsets that profoundly shape the thinking and behavior that we as adults both exhibit and foster in our children.  She refers to these as the Fixed Mindset and the Growth Mindset. People with a Fixed Mindset, Dr. Dweck notes in the Highlights article, “believe that their achievements are based on innate abilities. As a result, they are reluctant to take on challenges.” Dweck further notes that “People with Growth Mindsets believe that they can learn, change, and develop needed skills.  They are better equipped to handle inevitable setbacks, and know that hard work can help them accomplish their goals.” In the same article, she suggests that we should think twice about praising kids for being “smart” or “talented,” since this may foster a Fixed Mindset. Instead, if we encourage our kids’ efforts, acknowledging their persistence and hard work, we will support their development of a Growth Mindset – better equipping them to learn, persist, and pick themselves up when things don’t go their way.

 

Dweck’s conclusions are based on extensive research that clearly supports this notion. Jonah Lehrer, in his powerful book How We Decide, discussed the relevance of Dweck’s most famous study. This work involved more than 400 fifth-grade students in New York City, who were individually given a set of relatively simple non-verbal puzzles. Upon completing the puzzles, the students were provided with one of two one-sentence praise statements. Half of the participants were praised for their innate intelligence (e.g., “You must be smart at this.”).  The other half were praised for their effort (e.g., “You must have worked really hard.”).

 

All participants were then given a choice between two subsequent tasks: a more challenging set of puzzles (paired with the assurance that they would learn a lot from attempting them) or a set of easier puzzles like the ones they had just completed.  In summarizing Dweck’s results, Lehrer noted, “Of the group of kids that had been praised for their efforts, 90 percent chose the harder set of puzzles. However, of the kids that were praised for their intelligence, most went for the easier test.”  Dweck concludes that praise statements focused on intelligence encourage risk avoidance. The “smart” children do not want to risk having their innate intelligence come under suspicion.  It is better to take the safe route and maintain the perception and feeling of being smart.

 

Dweck went on to demonstrate how this fear of failure can inhibit learning.  The same participants were then given a third set of puzzles that were intentionally very difficult, in order to see how the children would respond to the challenge.   Those who had been praised for their effort on the initial puzzles worked diligently on the very difficult puzzles, and many of them remarked about how much they enjoyed the challenge. The children who had been praised for their intelligence were easily discouraged and quickly gave up.  Their innate intelligence had been challenged – perhaps they were not so smart after all.  All subjects then completed a final round of testing, with a set of puzzles comparable in difficulty to the first, relatively simple set. Those participants praised for their effort showed marked improvement in their performance: on average, their scores rose by 30 percentage points.   Those praised for their intelligence, the very children who had just had their confidence shaken by the very difficult puzzles, on average scored 20 percentage points lower than they had on the first set.  Lehrer noted, in reference to the participants praised for their effort, that “Because these kids were willing to challenge themselves, even if it meant failing at first, they ended up performing at a much higher level.” With regard to the participants praised for intelligence, Lehrer writes, “The experience of failure had been so discouraging for the ‘smart’ kids that they actually regressed.”

 

In the Highlights interview Dweck suggests:

“It’s a mistake to think that when children are not challenged they feel unconditionally loved. When you give children easy tasks and praise them to the skies for their success, they come to think that your love and respect depend on their doing things quickly and easily. They become afraid to do hard things and make mistakes, lest they lose your love and respect. When children know you value challenges, effort, mistakes, and learning, they won’t worry about disappointing you if they don’t do something well right away.”

She further notes:

“The biggest surprise has been learning the extent of the problem—how fragile and frightened children and young adults are today (while often acting knowing and entitled). I watched as so many of our Winter Olympics athletes folded after a setback. Coaches have complained to me that many of their athletes can’t take constructive feedback without experiencing it as a blow to their self-esteem. I have read in the news, story after story, how young workers can hardly get through the day without constant praise and perhaps an award. I see in my own students the fear of participating in class and making a mistake or looking foolish. Parents and educators tried to give these kids self-esteem on a silver platter, but instead seem to have created a generation of very vulnerable people.”

So, we have an improved understanding of what has happened – but not necessarily of how the thinking that drives such parenting behavior came to be. Regardless, it is what it is, and all we can do is change our future behavior. Here are some cogent words of advice from Dr. Dweck (again from the Highlights article):

  1. “Parents can also show children that they value learning and improvement, not just quick, perfect performance. When children do something quickly and perfectly or get an easy A in school, parents should not tell the children how great they are. Otherwise, the children will equate being smart with quick and easy success, and they will become afraid of challenges. Parents should, whenever possible, show pleasure over their children’s learning and improvement.”
  2. “Parents should not shield their children from challenges, mistakes, and struggles. Instead, parents should teach children to love challenges. They can say things like ‘This is hard. What fun!’ or ‘This is too easy. It’s no fun.’ They should teach their children to embrace mistakes, ‘Oooh, here’s an interesting mistake. What should we do next?’ And they should teach them to love effort: ‘That was a fantastic struggle. You really stuck to it and made great progress’ or ‘This will take a lot of effort—boy, will it be fun.’”
  3. “Finally, parents must stop praising their children’s intelligence. My research has shown that, far from boosting children’s self-esteem, it makes them more fragile and can undermine their motivation and learning. Praising children’s intelligence puts them in a fixed mindset, makes them afraid of making mistakes, and makes them lose their confidence when something is hard for them. Instead, parents should praise the process—their children’s effort, strategy, perseverance, or improvement. Then the children will be willing to take on challenges and will know how to stick with things—even the hard ones.”

 

References

 

Dweck, C. Developing a Growth Mindset. Highlights Parents.com interview. http://www.highlightsparents.com/parenting_perspectives/interview_with_dr_carol_dweckdeveloping_a_growth_mindset.html

 

Hood, B. Mr. Rogers is Evil According to Fox News. http://brucemhood.wordpress.com/2010/05/03/mr-rogers-is-evil-according-to-fox-news/

 

Lehrer, J. (2009). How We Decide. Houghton Mifflin Harcourt: New York.

 

Mithers, C. Workplace Wars. Ladies Home Journal. http://www.lhj.com/relationships/work/worklife-balance/generation-gaps-at-work/
