Thinking About Beliefs

“A vexing problem in the history of human thought is finding one’s position on the boundary between skepticism and gullibility, or how to believe and how to not believe….Clearly, you cannot doubt everything and function; you cannot believe everything and survive.”
Nassim Nicholas Taleb

This quote has been bouncing around in my head all morning. It relates to questions I’ve been pondering for a long time. Among them: how do we come to have the core beliefs around which we orient meaning in our lives? How do we determine what is believable and what isn’t? How do we respond to those whose nature and life experiences have led them to beliefs that conflict with our own?

René Descartes famously set out to discover truth entirely from reason. He imagined his mind as a clean slate. After first deducing his own existence from the fact of his self-consciousness (“I think, therefore I am”), he proceeded from there, ultimately constructing a set of truths and beliefs based on analytic reasoning, which were probably identical to those he held before he began to think about them.

According to David Hume, moral reasoning is a sort of mind-trick (my words, not his). Reasoning, he claimed, is merely post hoc justification for our emotions. “Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.” Some experiments in neuroscience and psychology seem to support Hume’s belief.

This has some fascinating implications.

If moral reasoning is generally a post-hoc construction intended to justify automatic moral intuitions, then our moral life is plagued by two illusions. The first illusion can be called the “wag-the-dog” illusion: we believe that our own moral judgment (the dog) is driven by our own moral reasoning (the tail). The second illusion can be called the “wag-the-other-dog’s-tail” illusion: in a moral argument, we expect the successful rebuttal of an opponent’s arguments to change the opponent’s mind. Such a belief is like thinking that forcing a dog’s tail to wag by moving it with your hand should make the dog happy.

(From HERE)

Take the question of the humane treatment of farm animals, for example. I consider myself an example of someone who was led to change his opinions and behavior on that subject as a result of thoughtful moral reasoning. After I spoke about food ethics at a church in North Carolina last year, a woman tearfully told me that she had given up eating factory-farmed meat after reading my book, and that she and her husband had sold their house and bought a small farm so they could raise more of their own food. I know several people who have become vegans or vegetarians out of a concern for animal welfare. Examples like those seem to indicate that we can change our moral beliefs based on reasoning. But of course it’s entirely possible that in each of those cases the person’s change of behavior was due not to moral reasoning per se, but rather to their innate moral intuitions, which did not arise from reasoning. Maybe they weren’t so much persuaded to change their beliefs, as they were to conform their practices to their pre-existing beliefs.

Descartes argued that animals are mere machines, incapable of feeling pain or emotion. We need not concern ourselves with animal suffering, he reasoned, because there is no such thing. Our only concern should be how to maximize the utility of non-human animals for our own benefit. This Cartesian view of animals became the dominant scientific and philosophical view and still determines how many people respond to animal welfare issues.

I find the Cartesian view of animal suffering immoral, ridiculous, and contrary to common sense. Are those who adhere to the Cartesian view simply using it to justify a pre-existing insensitivity to animal suffering? Are they just grabbing post hoc onto a rationale for their desire to go on exploiting animals? And what about me? Am I accepting the moral arguments for animal welfare not because of the reasoning behind them, but rather because of pre-existing moral intuitions toward compassion and sentimentality?

Probably even more relevant in our everyday lives is the question of what Lothar Lorraine (in the excellent post linked above) called the “wag-the-other-dog’s-tail” illusion. We regularly argue with those with whom we have differences of opinion on moral issues.

Both sides present what they take to be excellent arguments in support of their positions. Both sides expect the other side to be responsive to such reasons (the wag-the-other-dog’s-tail illusion). When the other side fails to be affected by such good reasons, each side concludes that the other side must be closed-minded or insincere….They are convinced that reason is on their side, that those disagreeing with them are either morons or profoundly wicked people, and that they deserve to be treated in the rudest manner.

Aside from the fact that such arguments usually bring out our worst manners, it is very rare that they have the effect of changing anyone’s mind. Rather, it has been shown that when confronted with facts that challenge or refute deeply held opinions, people tend not to change those opinions but to believe them even more strongly! This is what is known as the “Backfire Effect.” So when we get into heated debates with folks whose opinions differ from our own, we not only steer ourselves toward the belief that they are “wicked,” “closed-minded” “morons,” we also have the unintended effect of causing them to hold their contrary beliefs more strongly than ever. Our intent to change their minds has exactly the opposite effect. Our arguments backfire.

So maybe we ought to devote less energy to trying to convert the closed-minded morons to our way of thinking, and more energy to trying to understand their perspectives. I think I probably should.

I came across this recently, and it resonated with me.

How far is it from here to there?

How far from where I stand — the bit of earth, the people and places, my experiences and my feelings — to where others stand, what they experience, what they feel.

It seems to me that maybe we should be asking, what are the underlying emotional foundations upon which those with whom I disagree ground their reasoning? What can I learn from trying to understand that? How might making that effort affect public discourse? How can we disagree, without being arrogantly dismissive of contrary opinions? What can excuse remaining willfully ignorant of the emotional foundations of contrary opinions? I am reminded of something Richard Louv wrote: “there is no ignorance quite so unattractive as prideful ignorance.” And after all (to drop one more name), as John Stuart Mill noted, if you only know one side of the argument, then you don’t even know that.

As for me, I’m going to try to continue to form moral judgments, and to balance doubt and belief, according to what I perceive to be reasonable, whether or not that’s just an illusion, and whether or not what I perceive to be reasoning is actually the mere slave of my emotions.

And henceforth I’m going to try to be less judgmental of all those closed-minded morons who refuse to submit in the face of my superior intellect and morality.