Have you ever wondered what makes a pundit a pundit? I mean really! Is there a pundit school or a degree in punditry? Given what I hear, I can only imagine that what would be conferred upon graduation is a B.S. of a different, more effluent sort. I mean REALLY!


I am certain that many of you have heard the rhetoric spewed by the talking heads on television and talk radio. This is true regardless of their alleged political ideology. And even more alarming, it seems to me, is that the more bombastic they are, the more popular they are. A pundit is supposed to be an expert – one with greater knowledge and insight than the general population – and consequently one who should possess the capacity to analyze current scenarios and draw better conclusions about the future than typical folk.


However, what we typically hear is two or more supremely confident versions of reality. Name the issue – be it anthropogenic global warming, health care reform, or the value of free market systems – and virtually no two pundits can agree, unless of course they are political brethren.


Have you ever wondered if anyone has ever put the predictive reliability of these so-called experts to a test? Well, Philip Tetlock, a psychology professor at UC Berkeley, has done just that. In 1984 Tetlock undertook such an analysis, and his initial data was so alarming (everybody had called the future wrong with regard to the Cold War and the demise of the USSR) that he decided to embark on what was to eventually become a two-decade-long quantitative analysis of, and report card on, the true predictive capabilities of professional pundits.


In 2005 Tetlock published his findings in his book, Expert Political Judgment: How Good Is It? How Can We Know? The results were again surprising. He analyzed the predictions made by over 280 professional experts. He gave each a series of professionally relevant, real-life situations and asked them to make probability predictions pertaining to three possible outcomes (often in the form of: things will stay the same, get better, or get worse). Further, Tetlock interviewed each expert to evaluate the thought processes they used to draw their conclusions.


In the end, after nearly twenty years of predictions and real life playing itself out, Tetlock was able to analyze the accuracy of over 82,000 predictions. And the results were conclusive – the pundits performed worse than random chance in predicting outcomes within their supposed areas of expertise. These experts were able to accurately predict the future less than 33% of the time, and non-specialists did equally well. And to make matters worse, the most famous pundits were the least accurate. A clear pattern emerged – confidence in one’s predictions was highly correlated with error. Those who were most confident about their predictions were most often the least accurate. He noted that the most confident, despite their inaccuracy, were in fact the most popular! Tetlock noted that they were essentially blinded by their certainty.


Jonah Lehrer, in How We Decide, wrote of Tetlock’s study and stated: “When pundits were convinced that they were right, they ignored any brain areas that implied that they might be wrong. This suggests that one of the best ways to distinguish genuine from phony expertise is to look at how a person responds to dissonant data. Does he or she reject the data out of hand? Perform elaborate mental gymnastics to avoid admitting error?” He also suggested that people should “ignore those commentators that seem too confident or self assured. The people on television who are most certain are almost certainly going to be wrong.”


You might be surprised that the vast majority of the pundits actually believed that they were engaging in objective and rational analysis when drawing their conclusions.


So, experts, rationally analyzing data, drawing conclusions with less than random chance accuracy? One has to question either their actual level of expertise or the objectivity of their analysis. Tetlock suggests that they are “prisoners of their preconceptions.”


This raises the question: Is this an error of reason or an error of intuition? Jonah Lehrer suggests that this error actually plays out as one cherry-picks which feelings to acknowledge and which to ignore. Lehrer noted: “Instead of trusting their gut feelings, they found ways to disregard the insights that contradicted their ideologies… Instead of encouraging the arguments inside their heads, these pundits settled on answers and then came up with reasons to justify those answers.”


Chabris and Simons, in The Invisible Gorilla, discuss why we are taken in by the pundits despite their measurable incompetence, and why they likely make the errors that they do. The bottom line is that such ubiquitous errors (made by novices and experts alike) are in fact illusions of knowledge perpetrated by intuition, and further, that we are suckers for confidence.


First of all, our intuitive inclination is to overly generalize and assume that one’s confidence is a measure of one’s competence. Such an assumption is appropriate in situations where one personally knows the limits of the individual’s capabilities. When it comes to pundits, few people know the supposed expert well enough to accurately assess whether he or she is worthy of their confidence. Regardless, people prefer and are drawn toward confidence. Our intuitive attraction to, and trust in, confidence sets us up for error. It is the illusion of confidence.


Chabris and Simons then review numerous stories and studies that “show that even scientific experts can dramatically overestimate what they know.” They demonstrate how we confuse familiarity with knowledge – and that when our knowledge is put to the test “…our depth of understanding is sufficiently shallow that we may exhaust our knowledge after just the first question. We know that there is an answer, and we feel that we know it, but until asked to produce it we seem blissfully unaware of the shortcomings in our own knowledge.” They add:

And even when we do check our knowledge, we often mislead ourselves. We focus on those snippets of information that we do possess, or can easily obtain, but ignore all of the elements that are missing, leaving us with the impression that we understand everything we need to.


So what can we safely conclude?


For certain, we should be aware of the limits of our knowledge and remain ever vigilant and skeptical about what experts espouse (particularly if they come off as very confident). Tetlock suggests that responsible pundits should state their predictions in measurable terms – so that they are subject to analysis – both for error correction/learning and for accountability. Further, he discusses the importance of placing predictions within error bars denoting the probability of accuracy. Chabris and Simons contend that only through rational, analytic thought can we overcome the illusion of knowledge. We have to stave off our intuitive inclination to trust bold, black-and-white predictions; we have to accept that complicated issues demand complicated solutions and that predicting the future is very difficult. As such, we need to get more comfortable with probabilities and more skeptical of certainties. As for the pundits – they are not worth listening to – they are almost always wrong – and all they really do is polarize the process and the nation. We need to inform one another of this, and ultimately make an active, rational choice to stop victimizing ourselves.
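The idea of stating predictions in measurable terms has a standard tool behind it: forecasting researchers (including Tetlock in his later work) commonly score probabilistic predictions with the Brier score, the mean squared difference between the stated probability and what actually happened. As a minimal sketch – assuming simple binary outcomes, whereas Tetlock’s actual scoring handled three outcomes – it shows why a maximally confident pundit who is wrong fares worse than a forecaster who honestly hedges:

```python
def brier_score(forecasts, outcomes):
    """Score probabilistic predictions: lower is better.

    forecasts: probabilities (0.0-1.0) assigned to an event occurring
    outcomes:  1 if the event occurred, 0 if it did not
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A pundit who declares certainty (p = 1.0) and is wrong both times
# earns the worst possible score:
print(brier_score([1.0, 1.0], [0, 0]))   # 1.0

# A forecaster who hedges at 50/50 scores 0.25 no matter what happens:
print(brier_score([0.5, 0.5], [1, 0]))   # 0.25
```

The scoring rule makes Tetlock’s point concrete: confident certainty is a gamble that is punished heavily when it misses, while calibrated probabilities can only be judged – and improved – once predictions are stated in this measurable form.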



Chabris, C. F., & Simons, D. J. (2010). The Invisible Gorilla. New York: Random House.

Lehrer, J. (2009). How We Decide. New York: Houghton Mifflin Harcourt.

Menand, L. (2005). Everybody’s an Expert. The New Yorker. http://www.newyorker.com/archive/2005/12/05/051205crbo_books1?printable=true

Tetlock, P. E. (2005). Expert Political Judgment: How Good Is It? How Can We Know? Princeton: Princeton University Press.



  1. Thank you, Gerry!
    Clearly an important review of what is available to us laypeople, but demonstrating how blinded we are by how important the pundits sound.
    The available data on any subject is also a problem to the average person, as we are not necessarily given facts to absorb, but rather half truths, speculation, and mistakes. And our sources are limited by many things including our own interests, timing, and available media. When we try to analyze (almost any subject of import) we give up too often, thinking ourselves too dumb to work on it. Yet we still want facts and understanding to be able to learn.
    I am looking forward to reading more.
    Some of us older types remember being taught to think with: “Fish swim, Johnny swims, therefore, Johnny is a fish”. I have used and shared this expression for more than fifty-five years of thinking. And the louder it is said, the more often it is said, one can almost see fins on Fox News people… amongst others.
    Thank you. I will continue on your blog. You are appreciated. LYNN

  2. Pingback:Are You a Hedgehog or a Fox? « How Do You Think?

  3. Lynn Vestel wrote:

    “The available data on any subject is also a problem to the average person, as we are not necessarily given facts to absorb, but rather half truths, speculation, and mistakes. And our sources are limited by many things including our own interests, timing, and available media.”

    This is so true, profoundly true – knowing this and accepting the reality of it is half the battle. The other half – getting the facts – is a monumental challenge – as facts are not necessarily facts – but spin with specific objectives. The third half is knowing that we are primed to believe – and in particular believe what already fits into our particular belief paradigm. We are not dumb necessarily, just hard-wired to be gullible and limited in capacity to interpret, and at the same time overwhelmed by the plethora of data out there. This is what has led to my generalized skepticism and search for sources of a skeptical nature. They are out there – but not in the general media.

    Thank you Lynn – I love this!

  4. One of the best articles I’ve read in a while. This especially applies to the “dismal science” of Economics. Anyone claiming they “know the future” of the markets is probably much more likely to not know very much at all — or is privy to the ‘inside information’ of those who may be manipulating the (supposedly) “free market”.

  5. Well thank you Lloyd. It’s quite an illusion isn’t it! But people buy it no matter how bombastic or capricious. And WHAT?! People have access to inside information about the market? Sounds like a money-making op to me. Do you suppose they take advantage of it? 😉

  6. Pingback:2010 – A Year in Review: How Do You Think? « How Do You Think?

  7. Pingback:When Tribal Moral Communities Collide – How Do You Think?

  8. This is a great article. Thank you! Thought you might be interested in this published study influenced by Tetlock-

  9. @ R. Robb – Thanks for the read and kind comment! I’ll check out the recommended article post haste.

  10. http://www.hamilton.edu/news/polls/pundit – an interesting read. Thanks R. Robb.
