Last week, in The Illusion of Punditry, I discussed Philip Tetlock's work revealing the utter meaninglessness of punditry. It is important to note that although professional pundits, on average, were less accurate than random chance, a few outliers actually performed well above average. Tetlock closely examined the variables associated with the distribution of accuracy scores and discovered that experts were often blinded by their preconceptions, essentially led astray by how they think. To elucidate his point, Tetlock employed Isaiah Berlin's famous metaphor, The Hedgehog and the Fox. Berlin, a historian of ideas, drew the title of his essay from the classical Greek poet Archilochus, who wrote: "The fox knows many things, but the hedgehog knows one big thing."

 

Berlin contended that there are two types of thinkers: hedgehogs and foxes. To make sense of this metaphor, one has to understand a bit about these creatures. A hedgehog is a small spiny mammal that, when attacked, rolls into a ball with its spines protruding outward. This response is its sole defensive maneuver, its "one big thing," employed at any indication of threat. By extension, Berlin suggested that hedgehog thinkers "… relate everything to a single central vision, one system less or more coherent or articulate, in terms of which they understand, think and feel—a single, universal, organizing principle in terms of which alone all that they are and say has significance…" The cunning fox, by contrast, survives by adapting from moment to moment, staying flexible and employing whatever survival strategy suits the current situation. Fox thinkers likewise "pursue many ends, often unrelated and even contradictory, … their thought is scattered or diffused, moving on many levels, seizing upon the essence of a vast variety of experiences and objects."

 

John W. Dean, former White House counsel to Richard Nixon, used Berlin's metaphor to classify a number of US presidents as hedgehogs and foxes. In his column he wrote:

“With no fear of contradiction, Barack Obama can be described as a fox and George W. Bush as clearly a hedgehog. It is more difficult than I thought to describe all modern American presidents as either foxes or hedgehogs, but labeling FDR, JFK, and Clinton as foxes and LBJ and Reagan as hedgehogs is not likely to be contested. Less clear is how to categorize Truman, Nixon, Carter and Bush I. But Obama and Bush II are prototypical of these labels.”

 

Tetlock, referring to pundit accuracy scores, wrote:

“Low scorers look like hedgehogs: thinkers who “know one big thing,” aggressively extend the explanatory reach of that one big thing into new domains, display bristly impatience with those who “do not get it,” and express considerable confidence that they are already pretty proficient forecasters, at least in the long term. High scorers look like foxes: thinkers who know many small things (tricks of their trade), are skeptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible “ad hocery” that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess.”

 

Tetlock was careful to point out that there was no correlation between political affiliation and either the hedgehog or the fox classification. What he did note was that the most accurate pundits were foxes, and that the key variable associated with their success was introspection. Those who studied their own decision-making processes, were open to dealing with dissonance, and were not blinded by their preconceptions were far more capable of making accurate predictions. Successful pundits were also cautious about their predictions and inclined to take information from a wide variety of sources.
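Tetlock's "accuracy scores" come from probability scoring of the kind meteorologists use, which he decomposes into calibration and discrimination. As a minimal, hypothetical sketch (the forecasts and outcomes below are invented, not Tetlock's data), here is how a Brier score punishes confident misses and rewards cautious accuracy:

```python
# Brier score: mean squared gap between stated probabilities and what
# actually happened (1 = event occurred, 0 = it did not). Lower is better;
# an uninformative forecast of 0.5 on every event scores 0.25.
def brier_score(forecasts, outcomes):
    assert len(forecasts) == len(outcomes)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 1, 0, 0]                 # invented events
hedgehog = [1.0, 1.0, 1.0, 0.0, 1.0]       # always certain, twice dead wrong
fox      = [0.8, 0.3, 0.7, 0.2, 0.4]       # hedged, but well calibrated

print(brier_score(hedgehog, outcomes))     # 0.4   -- overconfidence is costly
print(brier_score(fox, outcomes))          # 0.084 -- caution pays
```

The fox never commits fully, yet scores far better: squared error penalizes the hedgehog's confident misses far more than it rewards his confident hits.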

 

Hedgehogs, on the other hand, were prone to certainty and grand "irrefutable" ideas. They tended to boil problems down to simple grand theories or conflicts (e.g., good versus evil, socialism versus capitalism, free markets versus government regulation, and so on) and to view these big issues as the driving force of history. They were prone to oversimplify situations and miss the many and diverse factors that ultimately shape history. They were instead more likely to attribute historical change to single great men with simple great ideas (e.g., Ronald Reagan was responsible for the fall of the USSR, and without his leadership the Cold War might still be raging).

 

So what are you: a hedgehog or a fox? Both thinking approaches have strengths and weaknesses, and more and less appropriate applications. What were Copernicus, da Vinci, Galileo, Newton, Einstein, and Darwin? When do you suppose it is good to be a hedgehog, and when a fox? I suppose it comes down to the task at hand: big unifying issues such as gravity, relativity, evolution, and quantum mechanics may indeed necessitate hedgehog thinking. There, such single-minded determination is likely essential to persevering. Although, having read Darwin's On the Origin of Species, I am inclined to think that Darwin was a fox. Da Vinci, too, was likely a fox, considering the vastness of his contributions. And Galileo was similarly a broad thinker. Knowing little of Newton and Einstein, I care not to speculate. It seems to me that with the specialization of science these days, one must be a hedgehog. Early science history is replete with foxes. I don't know about you, but I have a romantic notion about the lifestyles of men like Galileo and Darwin, following their curiosities, dabbling hither and yon.

 

References:

Berlin, I. (1953). The Hedgehog and the Fox. The Isaiah Berlin Virtual Library. http://berlin.wolf.ox.ac.uk/published_works/rt/HF.pdf

Chabris, C. F., & Simons, D. J. (2010). The Invisible Gorilla. New York: Random House.

Dean, J. (2009). Barack Obama Is a “Fox,” Not a “Hedgehog,” and Thus More Likely To Get It Right. http://writ.news.findlaw.com/dean/20090724.html

Lehrer, J. (2009). How We Decide. New York: Houghton Mifflin Harcourt.

Menand, L. (2005). Everybody’s an Expert. The New Yorker. http://www.newyorker.com/archive/2005/12/05/051205crbo_books1?printable=true

Tetlock, P.E. (2005). Expert political judgment: How good is it? How can we know? Princeton: Princeton University Press.


For nearly as long as humans have been thinking about thinking, one of the most intriguing issues has been the interplay of reason and emotion. For the greatest thinkers throughout recorded history, reason has reigned supreme. The traditional paradigm has been one of a dichotomy in which refined and uniquely human REASON wages an ongoing battle for control over animalistic and lustful EMOTIONS. It has been argued by the likes of Plato, Descartes, Kant, and even Thomas Jefferson that reason is the means to enlightenment and that emotion is the sure road to human suffering (Lehrer, 2009).

 

This Platonic dichotomy remains a pillar of Western thought (Lehrer, 2009). Suppressing your urges is a matter of will – recall the mantras "Just say no!" and "Just do it!" My guess is that most people today continue to think of the brain in these terms. Until recently, even the cognitive sciences reinforced this notion. Only through very recent advances in the tools used to study the brain (e.g., fMRI) and other ingenious studies (e.g., Damasio's Iowa Gambling Task, or IGT) has any evidence been generated to place this traditional paradigm in doubt. As it turns out, emotion plays a crucial role in decision making. Without it, our ability to reason effectively is seriously compromised. I had long believed that feelings and emotions should be under the control of our evolutionary gift – the frontal cortex. Reason, after all, is what sets us apart from the other animals. Instead, we have learned that these forces are NOT foes but essentially collaborative and completely interdependent.
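For readers unfamiliar with Damasio's IGT, a toy simulation conveys the setup. The payoff scheme below is a simplification of the commonly reported design (exact reward and penalty schedules vary across studies):

```python
# Toy sketch of the Iowa Gambling Task (Bechara, Damasio, and colleagues),
# with simplified payoffs: decks A and B pay big but lose money on average;
# decks C and D pay small and win on average.
import random

def draw(deck):
    if deck in "AB":                 # "bad" decks: +$100 per card, with an
        reward, penalty = 100, 1250  # occasional large loss
    else:                            # "good" decks: +$50 per card, with a
        reward, penalty = 50, 250    # smaller occasional loss
    loss = penalty if random.random() < 0.1 else 0
    return reward - loss             # expected value: about -$25 vs. +$25 per card

random.seed(1)
for deck in "ABCD":
    print(deck, sum(draw(deck) for _ in range(100)))  # A/B trend negative over time
```

Damasio's finding was that healthy players drift toward decks C and D, and show anticipatory skin-conductance responses to the bad decks, well before they can articulate why: a visceral signal is doing the deciding.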

 

The implications of this recent knowledge certainly do not suggest that it is fruitless to employ our reason and critical thinking capabilities as we venture through life. Reason is crucial, and it does set us apart from other life forms that lack such fully developed frontal cortices. This part of the old conception is correct. However, we are wrong to suppose that emotion lacks value in decision making, or that it is a villainous force.

 

Jonah Lehrer, in his book How We Decide, discusses this very issue and notes that: "The crucial importance of our emotions – the fact that we can't make decisions without them – contradicts the conventional view of human nature, with its ancient philosophical roots." He further notes:

 

"The expansion of the frontal cortex during human evolution did not turn us into purely rational creatures, able to ignore our impulses. In fact, neuroscience now knows that the opposite is true: a significant part of our frontal cortex is involved with emotion. David Hume, the eighteenth-century Scottish philosopher who delighted in heretical ideas, was right when he declared that reason was "the slave of the passions."

 

So how does this work? How do emotion and critical thinking join forces? Neuroscientists now know that the orbitofrontal cortex (OFC) is the brain center where this interplay takes place. Located in the lower frontal cortex (the area just above and behind your eyes), your OFC integrates a multitude of information from various brain regions, along with visceral emotions, in an attempt to facilitate adaptive decision making. Current neuroimaging evidence suggests that the OFC is involved in monitoring, learning, and memorizing the potency of both reinforcers and punishers. It operates within your adaptive unconscious – analyzing the available options and communicating its conclusions by generating the emotions that are meant to help you decide.

 

Next time you are faced with a decision and experience an associated emotion, know that it is the result of your OFC's attempt to tell you what to do. Such feelings actually guide most of our decisions.

 

Most animals lack an OFC, and in our primate cousins this cortical area is much smaller. As a result, these other organisms lack the capacity to use emotions to guide their decisions. Lehrer notes: "From the perspective of the human brain, Homo sapiens is the most emotional animal of all."

 

I am struck by the reality that natural selection has hit upon this opaque approach to guiding behavior. It reinforces the notion that evolution is not goal directed. Had evolution been goal directed, or had we been intelligently designed, don't you suppose a more direct or more obviously rational process would have been devised? The reality of the OFC even calls into question the notion of free will – which is a topic all its own.

 

This largely adaptive brain system of course has drawbacks and limitations – many of which I have previously discussed (e.g., implicit associations, cognitive conservatism, attribution error, cognitive biases, essentialism, pareidolia). This is true, in part, because these newer and "higher" brain functions are relatively recent evolutionary developments and the kinks have yet to be worked out (Lehrer, 2009). I also believe that the complexities and diversions of modernity may exceed our neural specifications. Perhaps in time natural selection will take us in a different direction, but none of us will ever see it. Regardless, by learning about how our brains work, we can take an active role in shaping how we think. How do you think?

 

References:

 

Gladwell, M. (2005). Blink: The Power of Thinking Without Thinking. New York: Little, Brown and Company.

 

Lehrer, J. (2009). How We Decide. New York: Houghton Mifflin Harcourt.


"I saw it with my own two eyes!" Does this argument suffice? As it turns out – "NO!" – that's not quite good enough. Seeing should not necessarily lead to believing. Need proof? Play the video below.

 

 

As should be evident from this video, what we perceive cannot necessarily be fully trusted. Our brains complete patterns, fill in missing data, interpret, and make sense of chaos in ways that do not necessarily coincide with reality. Need more proof? Check these out.

 

Visual Illusion – A & B are the same shade of gray

Illusion – Notice the perceived motion around the green circles.

 

Convinced? The software in our brains is responsible for these phenomena. And this software was coded through progressive evolutionary steps that conferred survival benefits on those with such capabilities. Just as pareidolia confers a survival advantage on those who assign agency to things that go bump in the night, there are survival advantages for those who possess the adaptations responsible for these errors.

 

So really, you can’t trust what you see. Check out the following video for further implications.

 

 

Many of you are likely surprised by what you missed. We tend to see what we are looking for, and we may miss other important pieces of information. The implications of this video seriously challenge the value of eyewitness testimony.

 

To add insult to injury, you have to know that even our memory is vulnerable. Memory is a reconstructive process, not a reproductive one [2]. During memory retrieval we piece together fragments of information; however, due to our own biases and expectations, errors creep in [2]. Most often these errors are minimal, so despite these small deviations from reality, our memories are usually pretty reliable. Sometimes, however, too many errors are inserted and our memory becomes unreliable [2]. In extreme cases, our memories can be completely false, even though we are convinced of their accuracy [2]. This confabulation, as it is called, is most often unintentional and can occur spontaneously as a result of the power of suggestion (e.g., leading questions or exposure to a manipulated photograph) [2]. Frontal lobe damage (due to a tumor or traumatic brain injury) is known to make one more vulnerable to such errors [2].

 

Even when our brain is functioning properly, we are susceptible to such departures from reality. We are more vulnerable to illusions and hallucinations, be they hypnagogic or otherwise, when we are ill (e.g., when we have a high fever, are sleep or oxygen deprived, or have neurotransmitter imbalances). All of us are likely to experience at least one, if not many, illusions or hallucinations over a lifetime. In most cases the occurrence is perfectly normal, simply an acute neurological misfiring. Regardless, many individuals experience religious conversions or become convinced of personal alien abductions as a result of these aberrant neurological phenomena.

 

We are most susceptible to these particular inaccuracies when we are ignorant of them. On the other hand, better decisions are likely if we understand these mechanisms, as well as the limitations of the brain's capacity to process incoming sensory information. Bottom line – you can't necessarily believe what you see. The same is true of your other senses, and these sensory experiences are tightly associated and integrated into long-term memory storage. When you consider the vulnerabilities of our memory, it leaves one wondering to what degree we reside within reality.

 

For the most part, our perceptions of the world are real. If you think about it, were it otherwise we would be at a survival disadvantage. The errors in perception we experience are in part a result of the rapid cognitions we make in our adaptive unconscious (intuitive brain) so that we can quickly process and successfully react to our environment. For the most part it works very well. But sometimes we experience aberrations, and it is important that we understand the workings of these cognitive missteps. This awareness absolutely necessitates skepticism. Be careful what you believe!

 

References:

 

1. 169 Best Illusions – A Sampling. (2010, May 10). Scientific American: Mind & Brain. http://www.scientificamerican.com/slideshow.cfm?id=169-best-illusions&photo_id=82E73209-C951-CBB7-7CD7B53D7346132B

 

2. Mo. (2008, June 13). Anatomy of a false memory. Neurophilosophy. http://scienceblogs.com/neurophilosophy/2008/06/anatomy_of_a_false_memory.php

 

3. Simons, D. J. (1999). Selective Attention Test. Visual Cognition Lab, University of Illinois. http://viscog.beckman.illinois.edu/flashmovie/15.php

 

4. Sugihara, K. (2010). Impossible motion: magnet-like slopes. Meiji Institute for Advanced Study of Mathematical Sciences, Japan. http://illusioncontest.neuralcorrelate.com/2010/impossible-motion-magnet-like-slopes/


There is a learning curve to the application of skepticism. Raw, unchecked challenges to others' beliefs, in a social context, are not well tolerated. People tend to find such challenges rather off-putting. In fact, as I have certainly encountered, they elicit defensiveness and sometimes hurt feelings. People often own their ideas and beliefs in a way that is essentially linked to their identity. As Carl Sagan wrote in The Demon-Haunted World: "All of us cherish our beliefs. They are, to a degree, self-defining. When someone comes along who challenges our belief system as insufficiently well-based — or who, like Socrates, merely asks embarrassing questions that we haven't thought of, or demonstrates that we've swept key underlying assumptions under the rug — it becomes much more than a search for knowledge. It feels like a personal assault."

 

These assaults repel people and in effect insulate them from the rational inquiry you may wish to pursue. People are inclined to respond to uninvited or poorly crafted skepticism much as they would respond to contemptuous arrogance.

 

Throughout most of human history, the social consequences of skeptical inquiry were likely quite costly. This was most certainly true in the pre-agrarian stages of our evolution. It is believed that throughout early human evolution, individual survival was linked to social cohesion. Although this is less true today, in prehistory skepticism likely hindered, rather than promoted, survival. With this in mind, it makes sense that we as a species are inclined toward unquestioning belief rather than skepticism. This inclination also makes us vulnerable to mysticism and superstition. Natural selection, it seems, has selected for gullibility.

 

Sensitive, judicious, and sparing use of skepticism, in social contexts, is prudent. This is true unless you just don't care how others feel about you, how they feel about interacting with you, or even how they feel about themselves. There is a time and place for everything. Choosing those times carefully, and selecting one's words even more cautiously, will more likely get better results.

 

I admire great thinkers like Bruno, Copernicus, and Galileo, who faced more than mere social consequences for putting forward their theories. Bruno, in fact, paid with his life. Darwin too faced significant costs. However, their rejection of accepted explanations (stemming from skeptical inquiry) moved us forward. We owe much to these men for their courage and steadfast dedication to the truth. We move forward when we step away from blind acceptance; but let's not turn a blind eye toward the social consequences of our own personal skepticism.


Intuitive Thought

2 April 2010

What is Intuitive Thought?

 

I have devoted numerous posts to a general category of cognitive errors and biases that are broadly lumped together as errors of the intuitive mind. Lay notions of intuition often refer to gut instincts, and these are generally considered emotional and irrational responses. It is in this context that intuition is vilified. Such impulsive reactions are countered with teachings typified by adages such as "Look before you leap," "Don't judge a book by its cover," "Haste makes waste," and "The hurrier you go, the behinder you get." Although this narrow understanding of intuition is in part correct, it largely misses the mark regarding a very complicated and sophisticated neural system. Intuition is widely misunderstood, and frankly it has never been well understood to begin with. Herein I hope to offer a cursory explanation of intuition and broadly differentiate it from rational thought. The vast majority of the following content is drawn from Malcolm Gladwell's intriguing 2005 book, Blink: The Power of Thinking Without Thinking. Gladwell draws together a vast array of research from cognitive and social psychology and a number of other sciences in an attempt to elucidate this ambiguous concept.

 

Rational thought serves as a good starting place because it offers a point of comparison that brings intuition into slightly better focus. Reason is the hallmark of rational thought. It involves an active application of the cerebral cortex, whereby personal history, knowledge, and active cognitions are employed in a conscious manner to solve problems. The key words here are active and conscious. When we engage in reasoning, we are generally aware of the cognitive effort directed toward this process. Another relevant aspect of this process is the passage of time. Reason-based thought is not generally instantaneous. Although solutions may seem to pop into awareness out of the blue, generally some measure of time passes as we strive for enlightenment. Think of an occasion when you had word-finding difficulties. You probably actively thought about the word, the context of the word, and so on. If you failed to recall the word, you may have cognitively moved on to something else, only to have the word come to you later. The former was rational thought; the latter, the result of intuitive thought.

 

Intuition is different from rational thought with regard to those key variables. First, this instantaneous process is seemingly unconscious.  Second, it is automatic (or at least seemingly so) consuming no apparent effort or time.  The popular and scientific literature is replete with descriptive names for this seemingly mystical capacity.  Gladwell uses a full complement of these terms and he sprinkles them throughout his text.  Terms that emanate from the sciences include the adaptive unconscious, unconscious reasoning, rapid cognition, and thin slicing. Other descriptive terms include snap judgments, fast and frugal thinking, and eloquently the “mind behind the locked door.” Regardless of what we call it, intuition is constantly at work, drawing instantaneous conclusions outside of our awareness.

 

Because of the nature of this process, Gladwell notes that people are often ignorant of the secret decisions that affect their behavior, yet they do not feel ignorant. We often behave in ways driven by the adaptive unconscious and later invoke the rational brain to justify those behaviors. This fact is what calls into question the reality of free will. Intriguing, isn't it? It is as though a covert, super-powerful, super-fast computer runs in tandem with our overt reasoning computer: outside our awareness, this covert computer remains ever vigilant, soaking in the world through our senses and actively directing our behavior.

 

Although the adaptive unconscious lies outside our direct control, life experiences, practice, and our intellectual pursuits contribute to the data set that is used when snap judgments are made. The more informed, erudite, and experienced one is, the more accurate one's rapid cognitions become. Just think about driving. When learning to drive, there are an overwhelming number of things to think about – so many, in fact, that mistakes are likely due to "analysis paralysis." Too much to compute! Through practice and repetition, all those things we previously had to actively think about become more automatic. We don't think about the countless micro-adjustments we make to the steering wheel as we drive down the highway. Novice drivers must think about these adjustments, along with attending to their speed (generally with gross applications of the accelerator and brakes) and myriad other factors that seasoned drivers do not overtly contemplate. The novice's driving is chunky; experienced drivers, with the benefit of many miles in the driver's seat, are generally smoother and more refined in their driving.

 

Experts in their given fields become more intuitive or automatic with regard to their area of expertise over time as a result of exposure, learning, and practice.   Their thoughts become seemingly automatic, their judgments and reactions more spontaneous – all of this in many situations without the expert even having to actively think.  In these cases (where there is sufficient expertise) snap judgments can be even more accurate than the arduous process of working through problems rationally.   On the other hand, this intuitive process can lead to problems because it is remarkably susceptible to prejudices and errors.  This is particularly true, as you might surmise, in areas where the individual lacks experience or knowledge.

 

Under certain circumstances the adaptive unconscious serves our purposes very well. In addition to those situations where one's expertise applies, we tend to use snap judgments effectively in social situations, in complicated situations, and in life-or-death situations that necessitate quick decisions. This is where evolution has played a role in shaping this capacity, contributing to the survival of our species. Those who could make effective snap judgments in life-or-death situations were more likely to pass on this very capacity, and tens of thousands of years of such natural selection have refined it.

 

The catch is that there are erroneous thought processes that are artifacts, residuals, or direct consequences of the adaptive unconscious. Issues such as essentialism, pareidolia, and superstition fall into this category, as they have been ushered along with the survival advantage that the adaptive unconscious has conferred. Cognitive errors and biases hamper the effectiveness of the adaptive unconscious because of its inclination toward implicit associations and other accidental, error-imposing tendencies. Implicit associations are automatic and non-deliberate pairings we make between concepts, people, things, etc. (e.g., African Americans are athletic, blonds are scatterbrained, gay men are effeminate) as they are folded into memory. This is an intriguing concept, one deserving its own post, but you have to take the Implicit Association Test, particularly the race test, to get a true sense of this powerful bias. Confirmation bias, self-serving bias, and the numerous other cognitive biases are likewise linked to this influential super-computer. However, just because we cannot directly and purposefully access this incredible system does not mean we have to bow entirely to its influence. In fact, we can proactively prime this system through active learning. We can be aware of this powerful system and the advantages and disadvantages it confers. We can learn of the errors it inclines us toward and monitor ourselves when it comes to our biases and prejudices. We can impose certain rules of thought when it comes to important issues. I believe we all should take these very important steps, both to make our intuitive brain more accurate and to buffer its influence in those situations where it is likely to lead us astray.
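As an aside on how the Implicit Association Test turns these pairings into a number: the published scoring (Greenwald, Nosek, and Banaji's D measure) is, at its core, a latency contrast, since people sort faster when the paired categories match their implicit associations. Here is a stripped-down sketch, with invented reaction times and none of the real algorithm's error penalties or trimming:

```python
# Simplified IAT-style scoring: the gap in mean response latency between
# "compatible" and "incompatible" sorting blocks, scaled by the pooled
# standard deviation of all latencies. Reaction times (ms) are invented.
from statistics import mean, stdev

compatible   = [650, 700, 620, 680, 710, 640]   # pairing fits the association
incompatible = [820, 900, 780, 860, 840, 810]   # pairing conflicts with it

d_score = (mean(incompatible) - mean(compatible)) / stdev(compatible + incompatible)
print(round(d_score, 2))  # large positive value = strong implicit association
```

The point of the test is that this gap shows up even in people who sincerely disavow the association, which is exactly what makes the bias "implicit."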

 

References:

 

Gladwell, M. (2005). ‘Blink: The Power of Thinking Without Thinking.’ New York: Little, Brown and Company.


Historically, morality has not been considered a topic of discussion within the domain of science. Instead, the issue has been almost exclusively within the purview of religion. Increasingly, however, concepts such as moral instinct have gained legitimacy, as discussed by scientists such as Steven Pinker and Jonathan Haidt, who argue that there are neurological factors associated with morality and that natural selection has played a fundamental role in shaping universal, instinctual moral truths. The evidence for this position is compelling. The question remains: "Can science offer moral guidance?" In other words, should science play a role in helping us discern what is right or wrong? Or must science relinquish issues of morality to other social systems based solely on historical precedent?

 

First, a definition of morality has to be accepted. Dictionary.com defines morality as "conformity to the rules of right conduct; moral or virtuous conduct." The Stanford Encyclopedia of Philosophy notes that the term is used "descriptively to refer to a code of conduct put forward by a society or some other group, such as a religion, or accepted by an individual for her own behavior; or normatively to refer to a code of conduct that, given specified conditions, would be put forward by all rational persons." These definitions are devoid of the de facto notion that this concept is values based. Sam Harris argues, and I believe most people would agree, that human values pertain to circumstances that have the positive effect of enhancing the well-being of conscious beings. As such, it does not seem like a reach to suggest that science can play a role in setting the parameters of morality.

 

Quite simply, there are certain conditions under which humans are more likely to prosper and other conditions under which they are more likely to falter. For instance, it is known that children raised in a loving environment where life's basic needs are provided for are more likely to grow into happy and productive adults than those raised in hostile and deprived environments. We may intuitively know this, but it is science that provides the evidence for such claims. The profession of psychology devotes considerable resources to this pursuit. As a psychologist myself, I employ evidence-based practices as I endeavor to facilitate the betterment of my clients' lives. Why is it, then, that we dismiss the influence of science when we discuss morals? At a recent TED Conference, Sam Harris posed this very question.

 

I suggest, as did Harris, that science is very capable of pointing us, as a society, in the right direction when it comes to morals and values. Russell Blackford wrote in his post on Harris' speech that "…science can give us information about what individual conduct, moral systems, laws, and so on are likely to lead to such plausible goals for ….. individual and collective human flourishing, social survival, and reduction of suffering. Any source of information about what will lead to goals such as these has some moral authority."

 

Harris argues that it boils down to understanding the conditions that lead to human flourishing – and accepting that these conditions are fundamental facts that should serve as the basis of universal morals. He further contends that there are distinctly problematic values within current human systems that run counter to human flourishing. For example, he discusses the costs of the extremist cultural expectation that women of Islam wear burkas (and the brutal costs of non-compliance). He contrasts this with the unrealistically perfect portrayal of the female body in modern Western cultures. Neither of these circumstances promotes healthy thriving for young women.

 

He also argues that religions should not be given a pass on the values they promote simply because of their religious status. The natural deference given to religion in our "pluralistic" society in fact promotes many clearly harmful practices (including the prohibition of birth control, the denial of civil liberties to homosexual couples, the sanctioned murder of rape victims to preserve family honor, male circumcision and, in some cultures, clitoral circumcision, and the application of prayer in lieu of modern medical services, particularly for ill children). Values rendered in distant Bronze Age cultures and sustained on ideology are far from being in touch with those values likely to promote healthy human development today.

 

Individuals suffer, and indeed society as a whole suffers, when these or similar prohibitions and expectations thrive. Science, it seems to me, is far more capable of really looking at the human and societal costs of such "values." Harris suggests that "Morality is certainly a domain where knowledge and expertise applies," and that we need to "bring into our dialogue the issues of these truths of right and wrong." If we accept that values are drawn from quality-of-life issues pertaining to the greater good of all, and that there are certain truths about life experiences that either enhance or impinge upon the well-being of conscious beings, then isn't it the domain of science to draw out these truths?

 

References:

 

Blackford, Russell. 2010. Sam Harris on Science and Morality. Metamagician and the Hellfire Club. http://metamagician3000.blogspot.com/2010/03/sam-harris-on-science-and-morality.html

 

Harris, Sam. 2010. Science can answer moral questions. TED Conference. http://www.ted.com/talks/sam_harris_science_can_show_what_s_right.html


Nature is harsh. This reality is evident, with potential discomfort, to anyone who cares to open their eyes to what goes on around us. Most living creatures struggle to survive, facing limited resources or predation on a continual basis. In most developed nations many humans escape this reality, but not too long ago even we had to struggle to survive.

 

I remember the reality of this struggle burning into my memory cells as a child while watching nature shows like The Underwater Odyssey of Commander Cousteau and Wild Kingdom. I vividly recall the horror and intrigue I experienced watching cheetahs and lions chasing down and killing antelope or gazelles. To this day I experience a visceral response when I witness this predation carried to its conclusion, the blood-soaked carnivore licking its chops. Harsh indeed!

 

The moral implications of nature's harshness have stirred our intellect for quite some time. They certainly weighed heavily on Darwin as he developed his theory of evolution by means of natural selection. A pressing question in natural theology asked how a benevolent and loving God could create a system with such pervasive suffering. Stephen Jay Gould, in perhaps his most famous essay, titled "Nonmoral Nature," addressed this very issue.

 

Gould (1982) provides a historical review of this controversy, dating back to the mid-nineteenth century. One scholar of that era, William Buckland, took comfort in the notion that predation is moral because carnivores increase "the aggregate of animal enjoyment" and "diminish that of pain" because:

"Death, after all, is swift and relatively painless, victims are spared the ravages of decrepitude and senility, and populations do not outrun their food supply to the greater sorrow of all."

Buckland concluded that predation on a grand scale is moral. But to some, the real challenge to the morality of nature lies outside run-of-the-mill predation. The reproductive cycle of the ichneumon fly epitomizes this challenge.

 

The ichneumon fly is actually a wasp belonging to the superfamily Ichneumonoidea. This diverse group of insects lays its eggs on or in other insects, setting into motion a synchronized chain of events that defies any sense of morality. The endoparasitic ichneumon wasps insert their eggs into the body of their host (e.g., caterpillars, aphids, or spiders). Upon hatching, the larvae carefully ingest their host's internal organs – first devouring the non-essential tissues and saving the vital organs for last, so as to prolong the life of their meal. The ectoparasitic ichneumons sting and paralyze the host before laying eggs on the exterior of the host's body. The paralysis is permanent, but the host remains alive. Once the eggs hatch, the larvae penetrate the host's body and again selectively devour the incapacitated but fully alive host, little by little, sustaining the live, fresh meal as long as possible.

 

This process is, to say the least, horrifying to contemplate. We humans do not cope well with the notion of parasites on or in our body. Think of the circus that ensues when a child comes home from school with head lice. Think of the horror and shame associated with pubic lice. How about scabies, or tapeworms? People don't even like to hear that the bacterial cells living in and on our bodies – our essential microbial partners – are commonly said to outnumber our own cells. One does not have to use much imagination to shudder at the notion of being slowly devoured from within. 'Alien' – need I say more?

 

The ichneumon's reproductive contrivance became the supreme challenge to the morality of the designer. Gould wrote of the nineteenth-century theologians who attempted to resolve this dilemma by anthropomorphizing the mother's love for her progeny and by downplaying the implications of the host's plight. They also suggested that this arrangement may be adaptive for humans, as the predation has the effect of minimizing crop loss due to the ravenous appetites of living caterpillars. Finally, they argued that animals are not moral agents and thus must feel little, if any, pain. They suggested that lower life forms, and even "primitive" people, suffer less than advanced and cultured folk. It was also believed during this Victorian era that consciousness was only within the realm of man. Needless to say, these arguments fail to resolve the dilemma if one contends that there is a "lurking goodness behind everything." Darwin wrote in an 1856 note to Joseph Hooker:

"What a book a devil's chaplain might write on the clumsy, wasteful, blundering, low, and horribly cruel works of nature!"

Gould wrote that in the face of this conundrum intellectuals had two options:

  1. Retain the notion “that nature holds moral messages” and that morality involves knowing the ways of nature and doing the opposite. Be not a savage – be not an animal.
  2. Accept that nature is nonmoral, that it is what it is, that morality plays no role in the struggle for existence.

Darwin himself leaned toward the second option, although he struggled with letting go of the notion that the laws of nature might denote some higher purpose. In his essay, Gould (1982) suggested that:

"Since ichneumons are a detail, and since natural selection is a law regulating details, the answer to the ancient dilemma of why such cruelty (in our terms) exists in nature can only be that there isn't any answer – and that framing the question 'in our terms' is thoroughly inappropriate in a natural world neither made for us nor ruled by us. It just plain happens."

"It is a strategy that works for ichneumons and that natural selection has programmed into their behavioral repertoire. Caterpillars are not suffering to teach us something; they have simply been outmaneuvered, for now, in the evolutionary game."

 

I, too, am inclined toward the notion that nature, as it plays out evolution's dance, is entirely devoid of anything pertaining to morality or evil. We anthropomorphize when we apply these concepts; even to suggest that nature is cruel is anthropomorphizing. Any true and deep look at the struggle for life that constantly dances in our midst can scarcely lead to any other conclusion but that nature is brutal, harsh, and nonmoral. Should I be wrong about this, I would be reluctant to meet its designer.

 

Reference:

 

Gould, S. J. (1982). Nonmoral Nature. Natural History, 91, 19–26.
