There are many well-intentioned folks out there who believe that childhood vaccinations cause Autism. Last week I covered the origins of this belief system, as well as its subsequent debunking, in Vaccines and Autism. Despite the conclusive data clearly establishing no causal link between vaccines and Autism, the belief lives on. Why is this? Why do smart people fall prey to such illusions? Chabris and Simons contend in their book, The Invisible Gorilla, that we fall prey to such myths because of the Illusion of Cause. Michael Shermer (2000), in his book How We Believe, eloquently describes our brains as a Belief Engine. Underlying this apt metaphor is the notion that “Humans evolved to be skilled pattern seeking creatures. Those who were best at finding patterns (standing upwind of game animals is bad for the hunt, cow manure is good for the crops) left behind the most offspring. We are their descendants.” (Shermer, p. 38). Chabris and Simons note that this refined ability “serves us well, enabling us to draw conclusions in seconds (or milliseconds) that would take minutes or hours if we had to rely on laborious logical calculations.” (p. 154). However, it is important to understand that we are all prone to drawing erroneous connections between stimuli in the environment and notable outcomes. Shermer further contends that “The problem in seeking and finding patterns is knowing which ones are meaningful and which ones are not.”

 

From an evolutionary perspective, we have thrived, in part, as a result of our tendency to infer cause or agency regardless of the reality of the threat. For example, those who assumed that rustling in the bushes was a tiger (when it was just the wind) were more likely to take precautions and thus less likely, in general, to succumb to predation. Those who were inclined to ignore such stimuli were more likely to get eaten when the rustling was in fact a hungry predator. Clearly, from a survival perspective, it is better to infer agency and run away than to become lunch meat. The problem Shermer refers to is that this system subsequently inclines us toward mystical and superstitious beliefs: giving agency to unworthy stimuli or drawing causal connections that do not exist. Dr. Steven Novella, a neurologist, notes in his blog post entitled Hyperactive Agency Detection that humans vary in the degree to which they assign agency. Some of us have Hyperactive Agency Detection Devices (HADD) and, as such, are more prone to superstitious, conspiratorial, and mystical thinking. It is important to understand, as Shermer (2000) makes clear:

 

“The Belief Engine is real. It is normal. It is in all of us. Stuart Vyse [a research psychologist] shows for example, that superstition is not a form of psychopathology or abnormal behavior; it is not limited to traditional cultures; it is not restricted to race, religion, or nationality; nor is it only a product of people of low intelligence or lacking education. …all humans possess it because it is part of our nature, built into our neuronal mainframe.” (p. 47).

 

We are all inclined to detect patterns where there are none. Shermer refers to this tendency as patternicity; it is also called pareidolia. I’ve previously discussed this innate tendency, noting that “Our brains do not tolerate vague or obscure stimuli very well. We have an innate tendency to perceive clear and distinct images within such extemporaneous stimuli.” It is precisely what leads us to see familiar and improbable shapes in puffy cumulus clouds or the Virgin Mary in a toasted cheese sandwich. Although this tendency can be fun, it can also lead to faulty and sometimes dangerous conclusions. What is even worse, when we hold a belief we are even more prone to perceive patterns that are consistent with, or confirm, that belief. We are all prone to Confirmation Bias – an inclination to take in, and accept as true, information that supports our belief systems, and to miss, ignore, or discount information that runs contrary to our beliefs.

 

Patternicity and confirmation bias are not the only factors that contribute to the illusion of cause. There are at least two other equally salient intuitive inclinations that lead us astray. First, we tend to infer causation from correlation. Second, the appeal of chronology, or the coincidence of timing, also leads us toward drawing such causal connections (Chabris & Simons, 2010).

 

A fundamental rule in science and statistics is that correlation does not imply causation. Just because two events occur in close temporal proximity does not mean that one leads to the other. Chabris and Simons note that this rule exists because our brains automatically – intuitively – draw causal associations, without any rational thought. We know that causation leads to correlation, but it is erroneous to assume that the opposite is true. Just because A and B occur together does not mean A causes B or vice versa. There may be a third factor, C, that is responsible for both A and B. Chabris and Simons use ice cream consumption and drownings as an example. There is a sizable positive correlation between these two variables (as ice cream consumption goes up, so does the number of drownings), but it would be silly to assume that ice cream consumption causes drowning, or that increases in the number of drownings cause increases in ice cream consumption. Obviously, a third factor, summer heat, leads to both more ice cream consumption and more swimming. With more swimming there are more drownings.
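To make the confound concrete, here is a minimal simulation sketch in Python (every number is invented purely for illustration) in which a third variable, daily heat, drives both ice cream sales and swimming. The two downstream variables end up strongly correlated even though neither appears anywhere in the other's causal chain.

```python
# Illustrative sketch only: all numbers are made up. It shows how a
# confounding variable (heat) can create a correlation between ice cream
# sales and drownings even though neither causes the other.
import random

random.seed(1)

def simulate_day():
    heat = random.uniform(0, 35)                          # daily temperature (C)
    ice_cream = 50 + 10 * heat + random.gauss(0, 40)      # sales driven by heat
    swimmers = 20 + 5 * heat + random.gauss(0, 20)        # swimming driven by heat
    drownings = 0.001 * swimmers + random.gauss(0, 0.02)  # rare events among swimmers
    return ice_cream, drownings

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

ice, drown = zip(*(simulate_day() for _ in range(365)))
print(f"correlation(ice cream, drownings) = {pearson_r(ice, drown):.2f}")
# Prints a sizable positive correlation, yet ice cream never appears in the
# line that generates drownings. The common cause (heat) does all the work.
```

The correlation is perfectly real; it is the causal interpretation that fails, which is exactly the trap Chabris and Simons describe.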

 

Likewise with vaccines and Autism: although there may be a correlation between the two (increases in the number of children vaccinated and increases in the number of Autism diagnoses), it is incidental, simply a coincidental relationship. But given our proclivity to draw inferences from correlation, it is easy to see why people would be misled by this relationship.

 

Add to this the chronology: the MMR vaccine is recommended between 12 and 18 months of age, and the most prevalent symptoms of Autism typically become evident between 18 and 24 months. People are bound to infer causation. Given that millions of children are vaccinated each year, there are bound to be examples of tight chronology.

 

So what is at work here is hyperactive agency detection (or overzealous patternicity), an inherent disposition to infer causality from correlation, and a propensity to “interpret events that happened earlier as the causes of events that happened or appeared to happen later” (Chabris & Simons, 2010, p. 184). Additionally, you have a doctor like Andrew Wakefield misrepresenting data in a way that lends the claim plausibility, and celebrities like Jenny McCarthy using powerful anecdotes to convince others of the perceived link. And anecdotes are powerful indeed. “…[W]e naturally generalize from one example to the population as a whole, and our memories for such inferences are inherently sticky. Individual examples lodge in our minds, but statistics and averages do not. And it makes sense that anecdotes are compelling to us. Our brains evolved under conditions in which the only evidence available to us was what we experienced ourselves and what we heard from trusted others. Our ancestors lacked access to huge data sets, statistics, and experimental methods. By necessity, we learned from specific examples…” (Chabris & Simons, 2010, pp. 177-178). When an emotional mother (Jenny McCarthy) is given a very popular stage (The Oprah Winfrey Show) and tells a compelling story, people buy it – intuitively – regardless of the veracity of the story. And when we empathize with others, particularly those in pain, we tend to become even less critical of the message conveyed (Chabris & Simons, 2010). These authors add that “Even in the face of overwhelming scientific evidence and statistics culled from studies of hundreds of thousands of people, that one personalized case carries undue influence” (p. 178).

 

Although science is unquestionably effective at answering questions like whether there is a real relationship between vaccines and Autism, it appears that many people are incapable of accepting the results of scientific inquiry (Chabris & Simons, 2010). Acceptance necessitates the arduous application of reason and the rejection of the influences rendered by the intuitive portion of our brain. This is harder than one might think. Again, it comes down to evolution. Although the ability to infer cause is a relatively recent development, we hominids are actually pretty good at it. And perhaps, in cases such as this one, we are too proficient for our own good (Chabris & Simons, 2010).

 

References

 

Centers for Disease Control and Prevention. (2009). Recommended Immunization Schedule for Persons Aged 0 Through 6 Years. http://www.cdc.gov/vaccines/recs/schedules/downloads/child/2009/09_0-6yrs_schedule_pr.pdf

 

Chabris, C. F., & Simons, D. J. (2010). The Invisible Gorilla. New York: Random House.

 

Novella, S. (2010). Hyperactive Agency Detection. NeuroLogica Blog. http://www.theness.com/neurologicablog/?p=1762

 

Shermer, M. (2000). How We Believe. New York: W. H. Freeman / Henry Holt and Company.


In psychology there are some pretty famous studies that have penetrated popular culture. Many folks are at least familiar with Skinner’s rat box, Pavlov’s salivating dogs, Milgram’s obedience studies, Bandura’s Bobo dolls, and Harlow’s rhesus monkeys reared by wire-frame and terry-cloth surrogate mothers. In recent history, perhaps the best-known study pertains to inattentional blindness. If you have never heard of or seen the video of six college students, three in black shirts and three in white shirts, passing a couple of basketballs back and forth, watch the following video before you proceed.

 

 

So, of course, I am referring to Daniel Simons’ Invisible Gorilla study. Just about everyone I know has seen this video, and I don’t recall any of them telling me that they saw the gorilla. I didn’t, and I was absolutely flabbergasted, because I tend to be a pretty vigilant guy. This video is a graphic illustration of what Chabris and Simons (2010) refer to as the Illusion of Attention, and about 50% of those who watch the video while counting passes among the white-shirted players miss the gorilla.

 

This particular illusion concerns me because I spend a fair amount of time riding a bicycle on the roads of Western New York. So why should I, or anyone who rides a bicycle or motorcycle, or anyone who drives while texting or talking on a cell phone, be concerned?

 

The cold hard truth is that we may completely miss events or stimuli that we do not expect to see. If you don’t expect to see, and therefore fail to look for, bicycles and motorcycles, you may look right at them but fail to see them. LOOKING IS NOT SEEING, just as hearing is not listening. This hearing/listening analogy is dead-on. How often have you been caught hearing someone but not listening to what was actually being said? Chabris and Simons discuss in their book, The Invisible Gorilla, a study conducted by Daniel Memmert of Heidelberg University that demonstrated (using an eye tracker) that virtually everyone who missed the gorilla looked directly at it at some point in the video (often for a full second). Bikers are the invisible gorillas of the roadways.

 

And as for drivers, if you are distracted by a cell phone conversation or by texting, you are less likely to see unexpected events (e.g., bicycles, motorcycles, pedestrians, wildlife).

 

Most drivers who text and talk on cell phones do not have problems. In fact, most driving is uneventful, so most people get away with these behaviors. However, it is when an unexpected event occurs that mobile phone users struggle to see it and respond fluently. You are under the same illusion as everybody else who has not been in an accident. Everyone believes, until they hit or kill somebody, that they are proficient drivers even while texting or talking on the phone. And by the way, hands-free headsets make no difference. Driving while talking on a cell phone impairs you as much as alcohol does.

 

Think about driving down a road, not seeing, and subsequently hitting a young child on a bike. Think about having to live with killing a middle-aged couple with three kids in college who were lawfully riding down the road on a tandem bicycle. You hit the invisible gorilla. Live with that!

 

Daniel Simons, in a recently published study, also suggests that even if you are expecting an unexpected event, you are still likely to miss other unanticipated events. Check out The Monkey Business Illusion video even if you have seen the invisible gorilla video. Test yourself.

 

 

I have long known that I am at risk while riding my bike on the road. I have recently taken to wearing bright hi-vis attire when I ride. Doing so is completely inconsistent with my style, but I have done it in an effort to be safer. I was surprised to learn that research shows such clothing will increase your visibility to drivers who are looking for you, but that it will likely make no difference at all for inattentionally blind drivers. For those drivers who do not expect to see cyclists, hi-vis clothing is unlikely to increase the likelihood that you will be seen. Head and tail lights work on a similar level: they do increase visibility, but only for those looking for such sights. The best way to increase one’s safety while riding is to look like a car.

 

It is also important to note that riding in areas where there are more bikers helps too. Chabris and Simons (2010) noted a report by Peter Jacobson, a public health consultant in California, who analyzed data on accidents involving automobiles striking pedestrians or cyclists. He found that in cities with more walkers and cyclists there were actually fewer accidents. More folks walking or riding bikes seems to raise drivers’ expectation of seeing such individuals, making one less likely to be victimized by inattentional blindness. It was further noted that drivers who also ride bikes may be more aware – if only more people would get out of their cars and get back on bicycles.

 

The bottom line is that our intuition about our attention is problematic. Intuitively, we believe that we attend to, and see, what is right before us. Research and real-world data show that this is not the case. At the very least, when driving, we need to be aware of this erroneous assumption and work diligently to avoid distractions like talking on the phone or texting. As cyclists (motor-powered or not), we must anticipate that we won’t be seen and behave accordingly. Although hi-vis clothing and lights may not make you visible to some drivers, they will for those who are looking out for you.

 

Chabris and Simons contend that this illusion is a by-product of modernity and the fast-paced, highly distracting world we live in. We evolved for millions of years by the process of natural selection in a middle-sized, slow-paced world. Traveling faster than a few miles an hour is a relatively new development for our species. Today we travel in motor vehicles at breakneck speeds. On top of that, we distract ourselves with cell phones, Blackberries, iPhones, iPods, and GPS units. Although the consequences of these factors can be grave, in most cases we squeak by – which is a double-edged sword, because squeaking by essentially reinforces the illusion and the behavior.

 

References:

 

Chabris, C. F., & Simons, D. J. (2010). The Invisible Gorilla. New York: Random House.

 

Simons, D. J. (2010). Monkeying around with the gorillas in our midst: Familiarity with an inattentional-blindness task does not improve the detection of unexpected events. i-Perception, 1(1), 3–6.


Imagine yourself walking down a familiar street, approaching a stranger who is obviously lost, staring hopelessly at a map. As you saunter by, you make eye contact and look willing to help. He asks you for directions. As you begin to offer your advice, you are interrupted by a construction crew carrying a large door. They walk right between you and the stranger. Now imagine that as the construction crew visually separated you from the stranger, a new and different person covertly took over the lost-stranger role. This new stranger is wearing different clothes, is taller by three inches, and has a different build and different vocal qualities. Do you think you would notice?

 

Chabris and Simons (2010), in The Invisible Gorilla, share the results of a study carried out by Dan Simons and a colleague in which they tested whether people would notice such changes in a scenario very much like the one I just described. When the scenario was described to undergraduates, 95% believed that they would certainly notice such a change (as is likely the case for you as well). Yet when the experiment was carried out in the real world, nearly 50% of the participants did not notice the switch!

 

This particularly startling finding is indicative of change blindness, defined by Chabris and Simons (2010) as the failure to notice changes between what was in view moments before and what is in view currently. Essentially, we tend not to compare, and thus fail to notice, changes in stimuli from moment to moment. As a result, we tend to be “blind” in many cases to pretty obvious changes. And what is equally salient is that we are unaware of this blindness. If you are like most people, you said “No way I’d miss that!” Yet it is likely that about half of you would miss such changes.

 

Unconvinced? So were a group of Harvard undergraduates who had just attended a lecture that covered the above “door study” and change blindness. After the lecture, students were recruited to participate in further research. Interested students were directed to a different floor where they were greeted by an experimenter behind a counter. As the recruits proceeded to review and complete the necessary paperwork, the experimenter who greeted and instructed them regarding the paperwork ducked down behind the counter, presumably to file some papers, only to depart as a new and different experimenter took over the role. Even after being primed with the knowledge of change blindness, not one of the students noticed the swap! This was true even for some of the students who had just moments before boldly stated that they would notice such a change. We are in fact largely blind to our change blindness regardless of our confidence regarding our vigilance.

 

These results, contend Chabris and Simons, constitute conclusive evidence for the illusion of memory (the disconnect between how our memory works and how we think it works).

 

Most of us are all too aware of the failings of our short-term memory. We often forget where we put the car keys, cell phone, or sunglasses. These authors note that we are generally pretty accurate when it comes to knowing the limits of this type of memory. License plates and phone numbers have only seven digits because most of us can only hold that much data in short-term memory. However, when it comes to understanding the limits of our long-term memory we tend to hold entirely unrealistic, fallacious, and illusory expectations.

“In a national survey of fifteen hundred people [Chabris and Simons] commissioned in 2009, we included several questions designed to probe how people think memory works. Nearly half (47%) of the respondents believed that ‘once you have experienced an event and formed a memory of it, that memory doesn’t change.’ An even greater percentage (63%) believed that ‘human memory works like a video camera, accurately recording the events we see and hear so that we can review and inspect them later.’” (Chabris & Simons, 2010, pp. 45-46).

They added:

People who agreed with both statements apparently think that memories of all our experiences are stored permanently in our brains in an immutable form, even if we can’t access them. It is impossible to disprove this belief… but most experts on human memory find it implausible that the brain would devote energy and space to storing every detail of our lives…” (p. 46).

So, as it turns out, our memories of even significant life events are quite fallible. Although we perceive such memories as vivid and clear, they are individual constructions based on what we already know, our previous experiences, and the other cognitive and emotional associations that we ultimately pair with the event. “These associations help us discern what is important and to recall details about what we’ve seen. They provide ‘retrieval cues’ that make our memories more fluent. In most cases, such cues are helpful. But these associations can also lead us astray, precisely because they lead to an inflated sense of precision of memory.” (Chabris & Simons, 2010, p. 48). In other words, our memories are not exact recordings; they are instead modified and codified personal replicas that are anything but permanent.

 

I cannot do justice to the impressive and exhaustive detailing that Chabris and Simons provide in The Invisible Gorilla regarding the illusion of memory. Suffice it to say that we give way too much credit to the accuracy of our own long-term memories and hold unrealistic expectations regarding others’ recall. People recall what they expect to remember, and memories are modified over time based on malleable belief systems. Memories fade and morph over time depending on the “motives and goals of the rememberer” (Chabris & Simons, 2010, p. 51).

“Although we believe that our memories contain precise accounts of what we see and hear, in reality these records can be remarkably scanty. What we retrieve often is filled in based on gist, inference, and other influences; it is more like an improvised riff on a familiar melody than a digital recording of an original performance. We mistakenly believe that our memories are accurate and precise, and we cannot readily separate those aspects of our memory that accurately reflect what happened from those that were introduced later.” (Chabris & Simons, 2010, pp. 62-63).

They detail, with riveting stories, continuity errors in movies, source memory errors (is it your memory or mine?), flashbulb memories, and false memories in a way that really drives home the point that our memories are not to be trusted as accurate depictions of historical fact. This raises the question: Can you trust your memory?

 

The answer: partially, but you must be aware that your memory is not immutable. It is erroneous to assume that your memories are factual, and it is equally fallacious to presume that others’ memories are infallible. Two people witnessing the same event from the same perspective are likely to recall it differently because of their unique personal histories, capabilities, and internal cognitive and emotional associations as they store the bits and pieces of the event into memory.

 

Isn’t it amazing and scary that we give so much credit and power to eyewitness testimony in the court of law? Such power is conferred based on the pervasive and deeply held belief in the accuracy of memory – which you must know by now is an illusion. This is just another example pertaining to the illusion of justice in this country.

 

On a more personal level, the next time you and your significant other get into a debate about how some past event went down, you have to know that you both are probably wrong (and right) to some degree. There is your truth, their truth, and the real truth. These can be illustrated as a Venn diagram with three circles that have varying degrees of mutual overlap. We must admit that over time the real truth is likely to become a smaller piece of the story. This necessitates getting comfortable with the reality that we don’t possess a DVR in our brains, and parting ways with yet another illusion about the importance and power of our uniquely human intuition.

 

Reference:

 

Chabris, C. F., & Simons, D. J. (2010). The Invisible Gorilla. New York: Random House.


Have you ever wondered what makes a pundit a pundit? I mean really! Is there a pundit school or a degree in punditry? Given what I hear, I can only imagine that what would be conferred upon graduation is a B.S. of a different, more effluent sort. I mean REALLY!

 

I am certain that many of you have heard the rhetoric spewed by the talking heads on television and talk radio. This is true regardless of their alleged political ideology. And even more alarming, it seems to me, is that the more bombastic they are, the more popular they are. A pundit is supposed to be an expert – one with greater knowledge and insight than the general population – and consequently one who should possess the capacity to analyze current scenarios and draw better conclusions about the future than typical folk.

 

However, what we typically hear is two or more supremely confident versions of reality. You name the issue: anthropogenic global warming, health care reform, the value of free-market systems. Virtually no two pundits can agree, unless of course they are political brethren.

 

Have you ever wondered whether anyone has put the predictive reliability of these so-called experts to the test? Well, Philip Tetlock, a psychology professor at UC Berkeley, has done just that. In 1984 Tetlock undertook such an analysis, and his initial data were so alarming (everybody had called the future wrong with regard to the Cold War and the demise of the USSR) that he decided to embark on what was to become a two-decade-long quantitative analysis of, and report card on, the true predictive capabilities of professional pundits.

 

In 2005 Tetlock published his findings in his book, Expert political judgment: How good is it? How can we know? The results were again surprising. He analyzed the predictions made by over 280 professional experts. He gave each a series of professionally relevant, real-life situations and asked them to make probability predictions pertaining to three possible outcomes (often of the form: things will stay the same, get better, or get worse). Further, Tetlock interviewed each expert to evaluate the thought processes used to draw their conclusions.

 

In the end, after nearly twenty years of predictions and real life playing itself out, Tetlock was able to analyze the accuracy of over 82,000 predictions. The results were conclusive: the pundits performed worse than random chance in predicting outcomes within their supposed areas of expertise. These experts were able to accurately predict the future less than 33% of the time, and non-specialists did just as well. To make matters worse, the most famous pundits were the least accurate. A clear pattern emerged – confidence in one’s predictions was highly correlated with error. Those who were most confident about their predictions were most often the least accurate. And the most confident, despite their inaccuracy, were in fact the most popular! Tetlock noted that they were essentially blinded by their certainty.
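To see why “worse than random chance” is such a damning benchmark, here is a tiny back-of-the-envelope sketch in Python (hypothetical numbers, not Tetlock’s data). With three possible outcomes, a forecaster who guesses blindly is right about one time in three, so expert accuracy below roughly 33% means the expertise added nothing.

```python
# Hypothetical illustration, not Tetlock's data: a forecaster who picks one
# of three outcomes at random is right about one third of the time.
import random

random.seed(0)
OUTCOMES = ["get better", "stay the same", "get worse"]

def random_forecaster(n_predictions=82_000):
    hits = 0
    for _ in range(n_predictions):
        actual = random.choice(OUTCOMES)  # what actually happened (assumed equally likely)
        guess = random.choice(OUTCOMES)   # a blind guess
        hits += guess == actual
    return hits / n_predictions

print(f"random-guess accuracy: {random_forecaster():.1%}")  # roughly 33%
```

In reality the three outcomes are not equally likely; if one outcome (say, the status quo) is more common than the others, even the mindless strategy of always predicting it beats a blind guess, which makes a sub-33% expert record look even worse.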

 

Jonah Lehrer, in How We Decide, wrote of Tetlock’s study: “When pundits were convinced that they were right, they ignored any brain areas that implied that they might be wrong. This suggests that one of the best ways to distinguish genuine from phony expertise is to look at how a person responds to dissonant data. Does he or she reject the data out of hand? Perform elaborate mental gymnastics to avoid admitting error?” He also suggested that people should “ignore those commentators that seem too confident or self-assured. The people on television who are most certain are almost certainly going to be wrong.”

 

You might be surprised that the vast majority of the pundits actually believed that they were engaging in objective and rational analysis when drawing their conclusions.

 

So, experts, rationally analyzing data, drawing conclusions with less than random chance accuracy? One has to question either their actual level of expertise or the objectivity of their analysis. Tetlock suggests that they are “prisoners of their preconceptions.”

 

This raises the question: Is this an error of reason or an error of intuition? Jonah Lehrer suggests that this error plays out as one cherry-picks which feelings to acknowledge and which to ignore. Lehrer noted: “Instead of trusting their gut feelings, they found ways to disregard the insights that contradicted their ideologies… Instead of encouraging the arguments inside their heads, these pundits settled on answers and then came up with reasons to justify those answers.”

 

Chabris and Simons, in The Invisible Gorilla, discuss why we are taken in by the pundits despite their measurable incompetence, and why they likely make the errors that they do. The bottom line is that such ubiquitous errors (made by novices and experts alike) are in fact illusions of knowledge perpetrated by intuition, and that we are suckers for confidence.

 

First of all, our intuitive inclination is to overgeneralize and assume that one’s confidence is a measure of one’s competence. Such an assumption is appropriate in situations where we personally know the limits of an individual’s capabilities. When it comes to pundits, few people know the supposed expert well enough to accurately assess whether he or she is worthy of that confidence. Regardless, people prefer and are drawn toward confidence. Our intuitive attraction to, and trust in, confidence sets us up for error. It is the illusion of confidence.

 

Chabris and Simons then review numerous stories and studies that “show that even scientific experts can dramatically overestimate what they know.” They demonstrate how we confuse familiarity with knowledge – and that when our knowledge is put to the test “…our depth of understanding is sufficiently shallow that we may exhaust our knowledge after just the first question. We know that there is an answer, and we feel that we know it, but until asked to produce it we seem blissfully unaware of the shortcomings in our own knowledge.” They add:

And even when we do check our knowledge, we often mislead ourselves. We focus on those snippets of information that we do possess, or can easily obtain, but ignore all of the elements that are missing, leaving us with the impression that we understand everything we need to.

 

So what can we safely conclude?

 

For certain, we should be aware of the limits of our knowledge and remain ever vigilant and skeptical about what experts espouse (particularly if they come off as very confident). Tetlock suggests that responsible pundits should state their predictions in measurable terms, so that they are subject to analysis, both for error correction and learning and for accountability. Further, he discusses the importance of placing predictions within error bars denoting the probability of accuracy. Chabris and Simons contend that only through rational, analytic thought can we overcome the illusion of knowledge. We have to stave off our intuitive inclination to trust bold, black-and-white predictions; we have to accept that complicated issues demand complicated solutions and that predicting the future is very difficult. As such, we need to get more comfortable with probabilities and become more skeptical of certainties. As for the pundits, they are not worth listening to; they are almost always wrong, and all they really do is polarize the process and the nation. We need to inform one another of this and ultimately make an active, rational choice to stop victimizing ourselves.
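Tetlock’s call for predictions stated in measurable, probabilistic terms can be made concrete with a scoring rule. One standard choice (not named in this post; the pundits and probabilities below are invented purely for illustration) is the Brier score: the squared gap between the probabilities a forecaster assigned and what actually happened, where lower is better.

```python
# Minimal sketch of scoring probabilistic forecasts with the Brier score.
# The two pundits and their probabilities are hypothetical.
OUTCOMES = ("better", "same", "worse")

def brier_score(forecast, actual):
    """Sum of squared differences between the forecast probabilities and the
    observed outcome (1 for what happened, 0 for the rest).
    0.0 is a perfect forecast; 2.0 is the worst possible score."""
    return sum((forecast[o] - (1.0 if o == actual else 0.0)) ** 2 for o in OUTCOMES)

bold_pundit = {"better": 0.05, "same": 0.05, "worse": 0.90}      # confident and wrong
cautious_pundit = {"better": 0.30, "same": 0.45, "worse": 0.25}  # hedged and calibrated
what_happened = "same"

print(f"bold pundit:     {brier_score(bold_pundit, what_happened):.3f}")      # 1.715
print(f"cautious pundit: {brier_score(cautious_pundit, what_happened):.3f}")  # 0.455
```

Scored this way, the bombastic forecaster pays for misplaced certainty while the cautious one is rewarded for calibration, which is exactly the kind of accountability Tetlock asks for.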

 

References:

Chabris, C. F., & Simons, D. J. (2010). The Invisible Gorilla. New York: Random House.

Lehrer, J. (2009). How We Decide. New York: Houghton Mifflin Harcourt.

Menand, L. (2005). Everybody’s an Expert. The New Yorker. http://www.newyorker.com/archive/2005/12/05/051205crbo_books1?printable=true

Tetlock, P.E. (2005). Expert political judgment: How good is it? How can we know? Princeton: Princeton University Press.


Over the last couple of months I have submitted posts proclaiming the potency of intuition. One of my major resources has been Malcolm Gladwell’s Blink: The Power of Thinking Without Thinking. Among Gladwell’s tenets, the most prominent was the power of intuition and its relative supremacy, in certain situations, over rational thought. I have also heavily referenced Jonah Lehrer’s How We Decide. Lehrer argues that there is not, in fact, a Platonic dichotomy that establishes rationality in a supreme and distinct role over intuition. Instead, he suggests that emotion plays a key role in making decisions, much more so than has historically been acknowledged. Lehrer, however, applies more scientific scrutiny and relies more heavily on research than does Gladwell.

 

Currently I am reading The Invisible Gorilla by Daniel J. Simons and Christopher F. Chabris. These cognitive psychologists are best known for their Invisible Gorilla study illustrating selective attention. These authors appear to be on a mission to resurrect rational thought by highlighting the inherent weaknesses of intuition. Gladwell in particular comes under scrutiny by these authors for his alleged glorification of rapid cognition.

 

Not only have Gladwell’s hypotheses come under attack, so too has his journalistic approach. Simons and Chabris efficiently deconstruct a couple of Gladwell’s anecdotes as examples of illusions manifested by intuition. Contrary to the message of Blink, Simons and Chabris contend that intuition is inherently problematic, and they detail the automatic illusions that spring from the adaptive unconscious.

 

Anecdotal evidence is inherently flawed yet amazingly compelling. Gladwell, they acknowledge, is a master storyteller, and he uses this talent to effectively support his contentions. They argue, however, that he falls prey to the very illusions of intuition that he is ultimately celebrating.

 

Jonah Lehrer seems to escape Simons’ and Chabris’ scrutiny – yet this may simply be an artifact of release dates. How We Decide was released in 2009, while Gladwell’s Blink was released in 2005. Whereas Blink appears on the surface to be a celebration of intuition, Lehrer instead puts a microscope on the brain and the interplay of reason and emotion. He identifies the regions of the brain thought to be involved in these functions and highlights the research that systematically debunks the notion of reason and emotion as distinct epic foes battling it out for supremacy. Lehrer does not celebrate the relative power of intuition over reason; instead he makes it clear that emotion, acting as a messenger of intuition, actually plays a crucial role in reason itself.

 

Rarely are the parts of complex systems clearly distinct. Dividing brain function into dichotomous terms like reason and intuition is just another example of the flawed human inclination to pigeonhole nature and make issues black and white. Although Gladwell puts a more positive spin on intuition than has historically been the case, he also makes an effort to identify at least some of its shortcomings. Lehrer brings into focus the complexity and interconnectedness of the system and dispels the traditional dichotomy. Simons and Chabris scientifically scrutinize the Gladwellian notion of the supremacy of intuition. Their skeptical message lacks the sex appeal of thinking without thinking, but it is very important just the same. I look forward to detailing parts of The Invisible Gorilla in the weeks to come.

 

References:

 

Chabris, C. F., & Simons, D. J. (2010). The Invisible Gorilla. New York: Random House.

 

Gladwell, M. (2005). Blink: The Power of Thinking Without Thinking. New York: Little, Brown and Company.

 

Lehrer, J. (2009). How We Decide. New York: Houghton Mifflin Harcourt.


Believe it or not, free will, to a large extent, is an illusion. For the most part, what you do as you go through your day is based on decisions made outside of your conscious awareness. Many of these decisions involve a complicated and largely unconscious interplay among various brain regions, each struggling for control of your behavior.

 

One has to be careful to avoid anthropomorphic tendencies when trying to understand this struggle. It is not as though there are specific Freudian forces (Id, Ego, Superego) at play, each with a specific and unique mission. In reality it is more like chemical warfare in your brain, where neurotransmitters are released by the relevant brain centers based on current environmental circumstances (what your senses perceive in the world), your previous experiences in similar circumstances, and your treasure trove of knowledge. The emotions triggered by those neurotransmitters are then weighed in the orbitofrontal cortex (OFC) in what is essentially a tug of war involving varying measures of reinforcement and punishment.

 

Most of us are unaware of this neurological process and are under the illusion that we go through life making rational, reason-based decisions. Although we may live within this illusion, the people who lay out supercenter floor plans or produce advertisements know the truth. This discrepancy in knowledge makes you vulnerable. They use their knowledge of how the brain works in a manipulative and concerted effort to help you part ways with your hard-earned money. It is not really a conspiracy; it is just an effort to gain a competitive advantage. It’s business.

 

What follows is an abbreviated explanation of the brain systems in play and then an exposé of how marketers use our brains against us. This information is drawn from Jonah Lehrer’s excellent book, How We Decide.

 

First there is the dopamine reward pathway. Dopamine is a neurotransmitter that serves a number of important functions in the brain. One of its most salient roles plays out through activation of the nucleus accumbens (NAcc). When the NAcc is activated it floods the brain with dopamine, and we experience pleasure as a result. Desire for an item activates the NAcc. Being in the presence of the desired item activates it further. The greater the arousal of the NAcc, the more pleasure we experience. It is your NAcc that is responsible for the happiness you feel when you eat a piece of chocolate cake, listen to your favorite song, or watch your sports team win an exciting game (Lehrer, 2009).

 

Then there is the insula – a brain region that produces, among other sensations, aversive feelings. In a New York Times article on the insula, Sandra Blakeslee (2007) noted that this center “lights up” in brain scans when people feel pain, anticipate pain, empathize with others, see disgust on someone’s face, are shunned in social settings, or decide not to buy an item. In many cases we avoid exciting the insula, as it is the system that produces the unpleasantness of caffeine or nicotine withdrawal and the negative feelings associated with spending money.

 

Superstores are designed to excite your NAcc and quiet your insula. You can’t help but notice, when you walk into a Target, Walmart, Lowe’s, or even Pier 1 Imports, just how much stuff is there – most of which you do not possess. Just by entering the store you have aroused your NAcc and the associated cravings. Lehrer (2009) notes:

“Just look at the interior of a Costco warehouse. It’s no accident that the most coveted items are put in the most prominent places. A row of high-definition televisions lines the entrance. The fancy jewelry, Rolex watches, iPods, and other luxury items are conspicuously placed along the corridors with the heaviest foot traffic. And then there are the free samples of food, liberally distributed throughout the store. The goal of a Costco is to constantly prime the pleasure centers of the brain, to keep us lusting after things we don’t need. Even though you probably won’t buy the Rolex, just looking at the fancy watch makes you more likely to buy something else, since the desired item activates the NAcc. You have been conditioned to crave a reward.”

He further noted:

“But exciting the NAcc is not enough; retailers must also inhibit the insula. This brain area is responsible for making sure you don’t get ripped off, and when it’s repeatedly assured by retail stores that low prices are “guaranteed,” or that a certain item is on sale, or that it’s getting the “wholesale price,” the insula stops worrying so much about the price tag.  In fact, researchers have found that when a store puts a promotional sticker next to a price tag – something like “Bargain Buy!” or “Hot Deal!” – but doesn’t actually reduce the price, sales of that item still dramatically increase.  The retail tactics lull the brain into buying more things, since the insula is pacified.  We go broke convinced that we are saving money.”

I hypothesize that the frequently redundant catalogs that routinely fill our mailboxes from retailers like LLBean and Lands End work on our brains much like supercenters do. They excite the NAcc with idealized images modeled by perfect pretty people. They pacify the insula by noting improved features, sales, and deep discounts on closeouts. The necessary use of credit cards, Lehrer (2009) notes, has an additional inhibitory effect on the insula. When the insula is calm and you are primed with dopamine, the pleasure center has a disproportionate amount of control. You may think you have complete rational control over this, but all of it takes place outside of your direct awareness and plays out as feelings that guide your behavior. I further hypothesize that online retail stores work in a similar way (although for some people the insula may be aroused by security concerns about using a credit card online). Regardless, substantial marketing attempts by companies like EMS, REI, Victoria’s Secret, LLBean, and Bath & Body Works fill my inbox, always hoping to draw in my NAcc, pacify my insula, and subsequently open my wallet. You have to guess that the amount of money devoted to catalogs and internet marketing pays off for these companies, or they wouldn’t do it.

 

Being aware of one’s neurology and of how we are manipulated may help us mitigate these unconscious forces and thus make better decisions. I myself try to avoid malls and stores like Target because of the feelings they create in me. And for this very reason, I’ve stopped routinely looking at catalogs. I try to shop based only on need, not want. I’m making progress – but it is hard; these patterns have been in place and reinforced for a long time.

 

References

 

Blakeslee, S. (2007). Small Part of the Brain, and Its Profound Effects. New York Times. http://www.nytimes.com/2007/02/06/health/psychology/06brain.html?emc=eta1&pagewanted=all

 

Lehrer, J. (2009). How We Decide. New York: Houghton Mifflin Harcourt.


For nearly as long as humans have been thinking about thinking, one of the most intriguing issues has been the interplay of reason and emotion. For the greatest thinkers throughout recorded history, reason has reigned supreme. The traditional paradigm has been a dichotomy in which refined and uniquely human REASON wages an ongoing battle for control over animalistic and lustful EMOTIONS. It has been argued by the likes of Plato, Descartes, Kant, and even Thomas Jefferson that reason is the means to enlightenment and that emotion is the sure road to human suffering (Lehrer, 2009).

 

This Platonic dichotomy remains a pillar of Western thought (Lehrer, 2009). Suppressing your urges is held to be a matter of will – recall the mantras “Just say no!” and “Just do it!” My guess is that most people today continue to think of the brain in these terms. Until recently even the cognitive sciences reinforced this notion. Only through very recent advances in the tools used to study the brain (e.g., fMRI) and other ingenious studies (e.g., Damasio’s Iowa Gambling Task) has evidence been generated that places this traditional paradigm in doubt. As it turns out, emotion plays a crucial role in decision making. Without it, our ability to reason effectively is seriously compromised. I have long believed that feelings and emotions should be under the control of our evolutionary gift – the frontal cortex. Reason, after all, is what sets us apart from the other animals. Instead, it is important to understand that we have learned these forces are NOT foes but essentially collaborative and completely interdependent.

 

The implications of this recent knowledge certainly do not suggest that it is fruitless to employ our reason and critical thinking capabilities as we venture through life. Reason is crucial, and it does set us apart from other life forms that lack such fully developed frontal cortices. That part of the outdated concept is correct. However, we are wrong to suppose that emotion lacks value in decision making or that it is a villainous force.

 

Jonah Lehrer, in his book, How We Decide discusses this very issue and notes that: “The crucial importance of our emotions – the fact that we can’t make decisions without them – contradicts the conventional view of human nature, with its ancient philosophical roots.” He further notes:

 

“The expansion of the frontal cortex during human evolution did not turn us into purely rational creatures, able to ignore our impulses. In fact, neuroscience now knows that the opposite is true: a significant part of our frontal cortex is involved with emotion. David Hume, the eighteenth-century Scottish philosopher who delighted in heretical ideas, was right when he declared that reason was ‘the slave of the passions.’”

 

So how does this work? How do emotion and critical thinking join forces? Neuroscientists now know that the orbitofrontal cortex (OFC) is the brain center where this interplay takes place. Located in the lower frontal cortex (the area just above and behind your eyes), your OFC integrates a multitude of information from various brain regions along with visceral emotions in an attempt to facilitate adaptive decision making. Current neuroimaging evidence suggests that the OFC is involved in monitoring and learning, as well as in memorizing the potency of both reinforcers and punishers. It operates within your adaptive unconscious, analyzing the available options and communicating its conclusions by creating the emotions that are supposed to help you make decisions.

 

The next time you are faced with a decision and you experience an associated emotion, that feeling is the result of your OFC’s attempt to tell you what to do. Such feelings actually guide most of our decisions.

 

Most animals lack an OFC, and in our primate cousins this cortical area is much smaller. As a result, these other organisms lack the capacity to use emotions to guide their decisions. Lehrer notes: “From the perspective of the human brain, Homo sapiens is the most emotional animal of all.”

 

I am struck by the reality that natural selection has hit upon this opaque approach to guiding behavior. This just reinforces the notion that evolution is not goal directed. Had evolution been goal directed, or had we been intelligently designed, don’t you suppose a more direct or more obviously rational process would have been devised? The reality of the OFC even calls into question the notion of free will – which is a topic all its own.

 

This largely adaptive brain system of course has drawbacks and limitations – many of which I have previously discussed (e.g., implicit associations, cognitive conservatism, attribution error, cognitive biases, essentialism, pareidolia). This is true, in part, because these newer and “higher” brain functions are relatively recent evolutionary developments and the kinks have yet to be worked out (Lehrer, 2009). I also believe that the complexities and diversions of modernity may exceed our neural specifications. Perhaps in time natural selection will take us in a different direction, but none of us will ever see this. Regardless, by learning about how our brains work, we can certainly take an active role in shaping how we think. How do you think?

 

References:

 

Gladwell, M. (2005). Blink: The Power of Thinking Without Thinking. New York: Little, Brown and Company.

 

Lehrer, J. (2009). How We Decide. New York: Houghton Mifflin Harcourt.


Recently, Fox News aired a story posing the question of whether Fred Rogers was evil. Why, you may ask, would anyone use the word evil in reference to such a gentle man? They were suggesting that his “you’re special” message fostered unearned self-esteem and in effect ruined an entire generation of children. This accusation inspired a fair amount of discourse that in some cases boiled down to the question of why children today are such hollow, needy shells. An example of the discourse on this topic can be seen at Bruce Hood’s blog in an article entitled Mr. Rogers is Evil According to Fox News.

 

The consensus among skeptics was that Mr. Rogers was not, in fact, evil and that he is not responsible for the current juvenile generation’s need for copious praise and attention for relatively meaningless contributions. There was almost universal acknowledgment of the problem, however, and discussions led to troubling issues such as grade inflation at schools and universities and poor performance in the workplace. An intriguing article by Carol Mithers in the Ladies’ Home Journal entitled Workplace Wars addresses the workplace implications of this phenomenon. Mithers notes:

“…the Millennials — at a whopping 83 million, the biggest generation of all — are technokids, glued to their cell phones, laptops, and iPods. They’ve grown up in a world with few boundaries and think nothing of forming virtual friendships through the Internet or disclosing intimate details about themselves on social networking sites. And, many critics charge, they’ve been so coddled and overpraised by hovering parents that they enter the job market convinced of their own importance. Crane calls them the T-ball Generation for the childhood sport where ‘no one fails, everyone on the team’s assured a hit, and every kid gets a trophy, just for showing up.’”

 

Workers of this generation are known for their optimism and energy — but also their demands: “They want feedback, flexibility, fun, the chance to do meaningful work right away and a ‘customized’ career that allows them to slow down or speed up to match the different phases of life,” says Ron Alsop, author of The Trophy Kids Grow Up: How the Millennial Generation Is Shaking Up the Workplace.

I find it ironic that the very people today who struggle with the behavior of the Millennials are the ones who shaped the behaviors of concern. I personally have struggled with the rampant misapplication of praise, attention, and the provision of reinforcement for meaningless achievements. I have seen this everywhere – in homes, schools, youth athletic clubs, you name it. It has been the most recent parenting zeitgeist. But where did this philosophy come from?

 

Throughout my doctoral training in psychology (late ’80s and early ’90s) I learned that reinforcement is a powerful tool, but it was clear to me that it has to be applied following behaviors you WANT to increase. Nowhere in my studies did I read of the importance of raising children on copious amounts of reinforcement just to bolster their self-esteem. I am aware of no evidence-based teachings that suggest this approach. However, given the near universal application of these practices, it must have come from somewhere. This very question, I’m sure, led to the placement of responsibility squarely on the shoulders of poor Mr. Rogers.

 

Although the source of this approach remains a mystery to me, Dr. Carol Dweck’s work clarifies the process behind the outcome. In an interview in Highlights, Dr. Dweck discusses Developing a Growth Mindset. Dr. Dweck has identified two basic mindsets that profoundly shape the thinking and behavior that we as adults both exhibit and foster in our children. She refers to these as the Fixed Mindset and the Growth Mindset. People with a Fixed Mindset, Dr. Dweck notes in the Highlights article, “believe that their achievements are based on innate abilities. As a result, they are reluctant to take on challenges.” Dweck further notes that “People with Growth Mindsets believe that they can learn, change, and develop needed skills. They are better equipped to handle inevitable setbacks, and know that hard work can help them accomplish their goals.” The same article notes: “She suggests that we should think twice about praising kids for being ‘smart’ or ‘talented,’ since this may foster a Fixed Mindset. Instead, if we encourage our kids’ efforts, acknowledging their persistence and hard work, we will support their development of a Growth Mindset – better equipping them to learn, persist and pick themselves up when things don’t go their way.”

 

Dweck’s conclusions are based on extensive research that clearly supports this notion. Jonah Lehrer, in his powerful book How We Decide, discussed the relevance of Dweck’s most famous study. This work involved more than 400 fifth-grade students in New York City, who were individually given a set of relatively simple non-verbal puzzles. Upon completing the puzzles, the students were given one of two one-sentence praise statements. Half of the participants were praised for their innate intelligence (e.g., “You must be smart at this.”). The other half were praised for their effort (e.g., “You must have worked really hard.”).

 

All participants were then given a choice between two subsequent tasks: a more challenging set of puzzles (paired with the assurance that they would learn a lot from attempting it) or a set of easier puzzles like the ones they had just completed. In summarizing Dweck’s results, Lehrer noted, “Of the group of kids that had been praised for their efforts, 90 percent chose the harder set of puzzles. However, of the kids that were praised for their intelligence, most went for the easier test.” Dweck concludes that praise statements that focus on intelligence encourage risk avoidance. The “smart” children do not want to risk having their innate intelligence come under suspicion. It is better to take the safe route and maintain the perception and feeling of being smart.

 

Dweck went on to demonstrate how this fear of failure can inhibit learning. The same participants were then given a third set of puzzles that were intentionally very difficult, in order to see how the children would respond to the challenge. Those who had been praised for their effort on the initial puzzles worked diligently on the very difficult puzzles, and many of them remarked about how much they enjoyed the challenge. The children who had been praised for their intelligence were easily discouraged and quickly gave up. Their innate intelligence had been challenged – perhaps they were not so smart after all. Then all participants were given a final round of testing. This set of puzzles had a degree of difficulty comparable to the first, relatively simple set. The participants praised for their effort showed marked improvements in their performance; on average, their scores improved by 30 percentage points. Those who had been praised for their intelligence, the very children who had just had their confidence shaken by the very difficult puzzles, on average scored 20 percentage points lower than they had on the first set. Lehrer noted, in reference to the participants praised for their effort, that “Because these kids were willing to challenge themselves, even if it meant failing at first, they ended up performing at a much higher level.” With regard to the participants praised for intelligence, Lehrer writes, “The experience of failure had been so discouraging for the ‘smart’ kids that they actually regressed.”

 

In the Highlights interview Dweck suggests:

“It’s a mistake to think that when children are not challenged they feel unconditionally loved. When you give children easy tasks and praise them to the skies for their success, they come to think that your love and respect depend on their doing things quickly and easily. They become afraid to do hard things and make mistakes, lest they lose your love and respect. When children know you value challenges, effort, mistakes, and learning, they won’t worry about disappointing you if they don’t do something well right away.”

She further notes:

“The biggest surprise has been learning the extent of the problem—how fragile and frightened children and young adults are today (while often acting knowing and entitled). I watched as so many of our Winter Olympics athletes folded after a setback. Coaches have complained to me that many of their athletes can’t take constructive feedback without experiencing it as a blow to their self-esteem. I have read in the news, story after story, how young workers can hardly get through the day without constant praise and perhaps an award. I see in my own students the fear of participating in class and making a mistake or looking foolish. Parents and educators tried to give these kids self-esteem on a silver platter, but instead seem to have created a generation of very vulnerable people.”

So, we have an improved understanding of what has happened – but not necessarily of how the thinking that drives such parenting behavior came to be. Regardless, it is what it is, and all we can do is change our future behavior. Here are some cogent words of advice from Dr. Dweck (again from the Highlights article):

  1. “Parents can also show children that they value learning and improvement, not just quick, perfect performance. When children do something quickly and perfectly or get an easy A in school, parents should not tell the children how great they are. Otherwise, the children will equate being smart with quick and easy success, and they will become afraid of challenges. Parents should, whenever possible, show pleasure over their children’s learning and improvement.”
  2. “Parents should not shield their children from challenges, mistakes, and struggles. Instead, parents should teach children to love challenges. They can say things like ‘This is hard. What fun!’ or ‘This is too easy. It’s no fun.’ They should teach their children to embrace mistakes, ‘Oooh, here’s an interesting mistake. What should we do next?’ And they should teach them to love effort: ‘That was a fantastic struggle. You really stuck to it and made great progress’ or ‘This will take a lot of effort—boy, will it be fun.’”
  3. “Finally, parents must stop praising their children’s intelligence. My research has shown that, far from boosting children’s self-esteem, it makes them more fragile and can undermine their motivation and learning. Praising children’s intelligence puts them in a fixed mindset, makes them afraid of making mistakes, and makes them lose their confidence when something is hard for them. Instead, parents should praise the process—their children’s effort, strategy, perseverance, or improvement. Then the children will be willing to take on challenges and will know how to stick with things—even the hard ones.”

 

References

 

Dweck, C. Developing a Growth Mindset. Highlights Parents.com interview. http://www.highlightsparents.com/parenting_perspectives/interview_with_dr_carol_dweckdeveloping_a_growth_mindset.html

 

Hood, B. Mr. Rogers is Evil According to Fox News. http://brucemhood.wordpress.com/2010/05/03/mr-rogers-is-evil-according-to-fox-news/

 

Lehrer, J. 2009.  How We Decide. Houghton Mifflin Harcourt: New York.

 

Mithers, C. Workplace Wars. Ladies Home Journal. http://www.lhj.com/relationships/work/worklife-balance/generation-gaps-at-work/


The capabilities of our adaptive unconscious are really quite amazing. In an earlier post, entitled Intuitive Thought, I covered the relative strengths of this silent supercomputer running outside our awareness. It has long been believed that rational thought – the application of logic and reason rather than intuition – is the key to a successful life. Given the recent revelations about the importance of emotion and intuition, one wonders how reasoning would fare in a head-to-head (pun intended) competition with emotion.

 

Believe it or not, a research team from the University of Iowa devised a rather ingenious way of holding such a competition. In 1994, neuroscientists Antonio Damasio, Antoine Bechara, Daniel Tranel, and Steven Anderson developed the Iowa Gambling Task (IGT) to help identify decision-making errors in individuals with prefrontal cortex damage. Both Malcolm Gladwell (Blink) and Jonah Lehrer (How We Decide) highlight this work in their books on how we think. The IGT website describes the task as “a computerized experiment that is carried out in real time and resembles real-world contingencies. The task allows participants to select cards from four decks displayed on-screen. Participants are instructed that the selection of each card will result in winning or losing money. The objective is to attempt to win as much money as possible.” Sounds straightforward – but there is a catch. Participants are not told that the decks are rigged: two decks consistently offer modest cash advances ($50) and only rare penalties – the “good decks” – while the other two, the “bad decks,” provide bigger advances ($100) but also devastating penalties ($1250). Playing the good decks is a slow but sure road to substantial winnings; the bad decks lead to disaster.
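To make the arithmetic of the rigged decks concrete, here is a minimal Python sketch of a simplified version of that payoff structure. The reward and penalty sizes ($50, $100, $1250) come from the description above; the penalty frequency (roughly one draw in ten) and the $250 "modest" penalty are assumptions chosen for illustration, not the exact schedule used in the original task.

```python
import random

# Simplified deck structure (illustrative assumptions, not the exact IGT schedule):
#   "good" decks (C, D): $50 per draw, with an occasional modest $250 penalty
#   "bad" decks  (A, B): $100 per draw, with an occasional devastating $1250 penalty
def draw(deck):
    if deck in ("C", "D"):
        return 50 - (250 if random.random() < 0.1 else 0)
    return 100 - (1250 if random.random() < 0.1 else 0)

def simulate(deck, n_draws=100):
    """Cumulative winnings after n_draws from a single deck."""
    return sum(draw(deck) for _ in range(n_draws))

random.seed(1)
print("100 draws from a good deck:", simulate("C"))   # expected value ≈ +$25 per draw
print("100 draws from a bad deck: ", simulate("A"))   # expected value ≈ -$25 per draw
```

Run repeatedly, the good decks drift steadily upward while the bad decks drift steadily downward – which is exactly the contingency participants have to discover.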

 

As participants began selecting cards, they tended to draw from all four decks more or less at random. As selection proceeded and the consequences of their choices accumulated, it took the typical participant about 50 cards before drawing exclusively from the “good decks.” By that point, most participants had developed a hunch that there were deck-specific patterns of rewards and penalties and had begun responding to those patterns. But it took about 80 cards, on average, before the typical subject could explain why they favored the good decks – 80 draws before most people concluded, rationally and logically, that there were good and bad decks.

 

In their original study, Damasio and his colleagues were interested in participants’ emotional responses to the task. Participants were hooked up to a machine that monitored the stress response (nervousness and anxiety) associated with each card selection. What the researchers discovered was that subjects responded emotionally to the bad decks long before they changed their behavior or developed any rational understanding of the card distribution. On average, subjects exhibited a stress response to the bad decks after about ten draws – a full 40 draws before their behavior changed and 70 draws before they could articulate the reason for avoiding the bad decks. Lehrer noted that “Although the subject still had little inkling of which card piles were the most lucrative, his emotions had developed an accurate sense of fear. The emotions knew which decks were dangerous. The subject’s feelings figured out the game first.”

 

On the IGT, neurotypical individuals almost always came out well ahead financially. Ultimately, the emotions they experienced on draws from the various decks clued them in to the correct pattern of responding. However, individuals incapable of experiencing any emotional response – typically due to damaged orbitofrontal cortices – proved unable to identify the patterns and often went bankrupt. As it turns out, our emotional responses serve a crucial role in good decision making – much more so than reason and logic. Again from Lehrer: “When the mind is denied the emotional sting of losing, it never figures out how to win.” The adaptive unconscious and the brain’s underlying emotional capacity serve an essential role in the decision-making process. “Even when we think we know nothing, our brains know something. That’s what our feelings are trying to tell us.” (Lehrer, 2009).
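Lehrer’s line about the “emotional sting of losing” can be illustrated with a toy simulation – to be clear, this is a didactic sketch, not a model of the actual neuroscience or of Damasio’s patients. One simulated player updates its estimate of each deck using both wins and losses; a second, “loss-blind” player registers only the wins, loosely standing in for a decision maker deprived of that emotional sting. The deck payoffs reuse the illustrative assumptions from the earlier sketch.

```python
import random

def draw(deck):
    """Simplified payoffs, as in the earlier sketch (illustrative assumptions only)."""
    if deck in ("C", "D"):                                  # "good" decks
        return 50 - (250 if random.random() < 0.1 else 0)
    return 100 - (1250 if random.random() < 0.1 else 0)     # "bad" decks

def play(feels_losses, n_draws=200, epsilon=0.1, lr=0.1):
    """Simple epsilon-greedy learner; if feels_losses is False, penalties are ignored."""
    decks = ["A", "B", "C", "D"]
    value = {d: 0.0 for d in decks}          # running estimate of each deck's worth
    total = 0
    for _ in range(n_draws):
        if random.random() < epsilon:
            deck = random.choice(decks)                    # occasional exploration
        else:
            deck = max(decks, key=lambda d: value[d])      # otherwise pick the "best" deck
        outcome = draw(deck)
        total += outcome
        # The loss-blind player never registers the sting of a penalty.
        signal = outcome if feels_losses else max(outcome, 0)
        value[deck] += lr * (signal - value[deck])
    return total

random.seed(2)
print("Player who feels losses:  ", play(feels_losses=True))    # tends to settle on the good decks
print("Player who ignores losses:", play(feels_losses=False))   # keeps chasing the big $100 draws
```

Because the loss-blind player never feels the $1250 hits, the big $100 advances keep looking attractive, and it typically ends deep in the red – a crude echo of Lehrer’s point.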

 

It really is quite amazing that we strive for, and so greatly value, rational thought as a savior of sorts; yet it is our intuition and emotions that really serve as our most effective advisers. The acceptance of the inferiority of rationality is literally and figuratively counter-intuitive. Of course this does not mean we should devalue rationality and go with all our impulses. There are limits and dangers associated with such thinking, and our emotions are kept in balance by our reasoning capabilities. It is crucial that we understand the capacity and strengths of both reason and intuition, as well as their downfalls. I am devoted to this pursuit with growing passion and will continue to share my insights.

 

References:

 

Gladwell, M. 2005. Blink: The Power of Thinking Without Thinking. Little, Brown and Company: New York.

 

Lehrer, J. 2009.  How We Decide. Houghton Mifflin Harcourt: New York.


“I saw it with my own two eyes!” Does this argument suffice? As it turns out – “NO!” – that’s not quite good enough. Seeing should not necessarily lead to believing. Need proof? Play the video below.

 

 

As should be evident from this video, what we perceive can’t necessarily be fully trusted. Our brains complete patterns, fill in missing data, interpret, and make sense of chaos in ways that do not necessarily coincide with reality. Need more proof? Check these out.

 

Visual Illusion – A & B are the same shade of gray

Illusion – Notice the perceived motion around the green circles.

 

Convinced? The software in our brains is responsible for these phenomena, and that software was coded through progressive evolutionary steps that conferred survival benefits on those with such capabilities. Just as pareidolia confers a survival advantage on those who assign agency to things that go bump in the night, there are survival advantages for those who exhibit the adaptations responsible for these errors.

 

So really, you can’t trust what you see. Check out the following video for further implications.

 

 

Many of you are likely surprised by what you missed. We tend to see what we are looking for, and we may miss other important pieces of information. The implications of this video seriously challenge the value of eyewitness testimony.

 

To add insult to injury, even our memory is vulnerable. Memory is a reconstructive process, not a reproductive one [2]. During retrieval we piece together fragments of information; due to our own biases and expectations, errors creep in [2]. Most often these errors are minimal, so despite small deviations from reality, our memories are usually fairly reliable. Sometimes, however, too many errors are introduced and memory becomes unreliable [2]. In extreme cases, our memories can be completely false [2] (even though we are convinced of their accuracy). This confabulation, as it is called, is most often unintentional and can occur spontaneously as a result of the power of suggestion (e.g., leading questions or exposure to a manipulated photograph) [2]. Frontal lobe damage (due to a tumor or traumatic brain injury) is known to make one more vulnerable to such errors [2].

 

Even when our brains are functioning properly, we are susceptible to such departures from reality. We are more vulnerable to illusions and hallucinations, hypnagogic or otherwise, when we are compromised (e.g., running a high fever, sleep deprived, oxygen deprived, or experiencing neurotransmitter imbalances). All of us are likely to experience at least one, if not many, illusions or hallucinations over a lifetime. In most cases the occurrence is perfectly normal – simply an acute neurological misfiring. Regardless, many individuals experience religious conversions or become convinced of personal alien abductions as a result of these aberrant neurological phenomena.

 

We are most susceptible to these inaccuracies when we are ignorant of them. Better decisions are likely if we understand these mechanisms, as well as the limits of the brain’s capacity to process incoming sensory information. Bottom line – you can’t necessarily believe what you see. The same is true of your other senses, and these sensory experiences are tightly associated with and integrated into long-term memory. When you consider the vulnerabilities of our memory, it leaves one wondering to what degree we reside within reality.

 

For the most part, our perceptions of the world are real. If you think about it, were it otherwise we would be at a survival disadvantage. The errors in perception we experience are in part a result of the rapid cognitions we make in our adaptive unconscious (intuitive brain) so that we can quickly process and successfully react to our environment. For the most part it works very well. But sometimes we experience aberrations, and it is important that we understand the workings of these cognitive missteps. This awareness absolutely necessitates skepticism. Be careful what you believe!

 

References:

 

1.  169 Best Illusions–A Sampling, Scientific American: Mind & Brain. May 10, 2010
http://www.scientificamerican.com/slideshow.cfm?id=169-best-illusions&photo_id=82E73209-C951-CBB7-7CD7B53D7346132B

 

2.  Mo. Anatomy of a false memory. Neurophilosophy, June 13, 2008.
http://scienceblogs.com/neurophilosophy/2008/06/anatomy_of_a_false_memory.php

 

3.  Simons, Daniel J., 1999. Selective Attention Test. Visual Cognition Lab, University of Illinois. http://viscog.beckman.illinois.edu/flashmovie/15.php

 

4.  Sugihara, Koukichi 2010. Impossible motion: magnet-like slopes. Meiji Institute for Advanced Study of Mathematical Sciences, Japan. http://illusioncontest.neuralcorrelate.com/2010/impossible-motion-magnet-like-slopes/
