Critical Thinking in China

020 – Critical Thinking in China


This past month I was fortunate to be a guest of Xidian University in China for two weeks. On this episode of the podcast I share stories and reflections from my adventures as a first-time visitor to China, and I give an overview of some of the public talks and lectures I gave.

The episode has four distinct parts. The first 20 minutes is stories from my trip and observations about Chinese culture. Then there are three discussions on philosophy, science and critical thinking topics:

(1) on circular reasoning in the appeal to science and nature to justify social and political views;
(2) on the elements of science literacy and why public science education doesn’t teach it; and
(3) on the history of critical thinking in the west, and the challenges of talking about the value of critical thinking to audiences in modern China.

You can find a photo essay with lots of pics below!


In This Episode:

  • (00 min – 20 min) stories from my trip and observations about Chinese culture
  • (20 min – 30 min) on circular reasoning in the appeal to science and nature to justify social and political views
  • (30 min – 40 min) on the elements of science literacy and why public science education doesn’t teach it
  • (40 min – 50 min) on the history of critical thinking in the west, and the challenges of talking about the value of critical thinking to audiences in modern China
  • (55 min – 60 min) whether I will accept Xidian University’s offer to hire me as a Visiting Professor

Quotes:

“And every morning, before work — between 6 AM and 8:30 AM, or so — hundreds of people, of all ages, would gather in and around this public space, and participate in some kind of physical activity. Many would jog around the track, but others would break off into groups to do tai chi. Older people would gather at a set of upright bars and do calisthenics or stretching in small groups. Off to the side, in these private little park spaces, other tai chi groups would set up. Some of them did tai chi with fans. I saw a small group with swords. Some do group dancing, with scarves and colorful costumes. I would go here in the morning to people-watch, mostly.”

“When you’re studying the natural world, you need to be on the lookout for biases that can creep in due to one’s philosophical or ideological worldview. To some extent this is unavoidable. But it’s particularly concerning if you’re appealing to science to tell you what’s “natural”, and then conclude that some particular social order that you prefer is natural and therefore justified.

People have found justifications for almost any social practice this way. Science tells us that slavery is natural, racial discrimination is natural, fixed gender roles are natural, colonialism is natural, hierarchy is natural, free market capitalism is natural, communism is natural.

This is a seductive path. You don’t want to find yourself projecting your ideology onto the science, and then using that same science to justify your ideology.”

“Critical thinking education, as represented by the curriculum in these commonly used textbooks, by and large ignores the psychological dimension, the actual mechanisms that determine how people form beliefs and make decisions. It ignores the connections between persuasive rhetoric, good argumentation, and psychology. It ignores the social dimensions of cognition, how we rely on groups and culture to carry the burden of much of our thinking. And it fails to recognize how hostile the current media environment is to critical thinking, how people are subject to persuasive messaging around the clock, that is engineered to exploit cognitive biases and bypass our conscious, deliberative reasoning processes.”




Stories and Pictures from China

From June 27 to July 8 I was a guest of Xidian University in Xi’an, China. This was my first trip to China and it was a great experience.

The circumstances of the invitation are worth mentioning, because it really was a surprise.

I was invited by Dr. Zhu Danqiong. She teaches in the Philosophy Department at Xidian, and she is Director of the Center for Cross-Cultural Studies.

 

Dr. Zhu has a research program on environmental philosophy, and this spring semester she was teaching undergraduate courses in philosophy of science and philosophy of ecology. As Director of the Center for Cross-Cultural Studies she had an opportunity to invite someone who could provide a cross-cultural experience for students, and she was familiar with some of my earlier academic work in the philosophy of science and ecology, so that’s how I came to her attention.

I was invited to do some public talks, some workshops with students, and some meetings with faculty.

Here was the itinerary for the two weeks of my visit:

(more…)


019 – Understanding Your Divided Mind: Kahneman, Haidt and Greene


Argument Ninjas need to acquire a basic understanding of the psychology of human reasoning. This is essential for improving the quality of our own reasoning, and for mastering skills in communication and persuasion.

On this episode I take you on a guided tour of our divided mind. I compare and contrast the dual-process theories of Daniel Kahneman (Thinking, Fast and Slow), Jonathan Haidt (The Righteous Mind) and Joshua Greene (Moral Tribes). The simple mental models these authors use should be part of every critical thinker’s toolbox.

My other goal with this episode is to help listeners think more critically about dual-process theories in cognitive science, to better understand the state of the science and the diversity of views that fall under this label.

In This Episode:

  • Why it’s important to cultivate multiple mental models (2:40)
  • Kahneman and Tversky: biases and heuristics (4:20)
  • Example: the availability heuristic (5:30)
  • Cognitive biases originating from mismatches between the problem a heuristic was designed to solve, and the problem actually faced (8:20)
  • Dual-process theories in psychology that pre-date System 1 and System 2 (9:35)
  • The System 1 – System 2 distinction (12:00)
  • Kahneman’s teaching model: System 1 and System 2 as personified agents (18:30)
  • Example: “Answering an Easier Question” (19:30)
  • How beliefs and judgments are formed: System 1 –> System 2 (22:20)
  • System 2 can override System 1 (23:35)
  • Assessing Kahneman’s model (25:40)
  • Introduction to Jonathan Haidt (28:40)
  • The Elephant and the Rider model (30:50)
  • Principles for changing human behavior, based on the Elephant and the Rider model (33:00)
  • Introduction to Haidt’s moral psychology (34:00)
  • Haidt’s dual-process view of moral judgment (34:30)
  • Moral reasoning as an adaptation for social influence (35:20)
  • Moral intuitions as evolutionary adaptations (36:30)
  • Introduction to the moral emotions (six core responses) (37:50)
  • Liberal versus conservative moral psychology (39:20)
  • The moral matrix: it “binds us and blinds us” (40:30)
  • What an enlightened moral stance would look like (41:55)
  • Assessing Haidt’s model (42:40)
  • Introduction to Joshua Greene (46:20)
  • Greene’s digital camera model: presets vs manual mode (47:20)
  • When preset mode (moral intuition) is unreliable (50:52)
  • When should we rely on System 2, “manual mode”? (52:40)
  • Greene’s consequentialist view of moral reasoning (53:10)
  • How Greene’s dual-process view of moral judgment differs from Haidt’s (53:30)
  • Summary: the value of multiple mental models for critical thinking (55:55)

Quotes:

“And as critical thinkers, we shouldn’t shy away from having multiple models that address the same, or similar, phenomena. On the contrary, we should try to accumulate them. Because each of these represents a different perspective, a different way of thinking, about a set of complex psychological phenomena that are important for us to understand. ”

“Kahneman is inviting us to think of System 1 and System 2 like characters, in something like the way that the movie Inside Out personified emotions like joy, sadness, anger and disgust. We identify with System 2, our conscious reasoning self that holds beliefs and makes decisions. But System 2 isn’t in the driver’s seat most of the time. Most of the time, the source of our judgments and decisions is System 1. System 2 more often plays the role of side-kick, but a side-kick who is under the delusion that he or she is the hero.”

“The rider can reason and argue and plan all it wants, but if you can’t motivate the elephant to go along with the plan, it’s not going to happen. So we need to pay attention to the factors that influence the elephant, that influence our automatic, intuitive, emotion-driven cognitive processes.”

“[According to Haidt] our moral psychology was designed by evolution to unite us into teams, divide us against other teams, and blind us to the truth. This picture goes a long way to explaining why our moral and political discourse is so divisive and so uncompromising. But what is the “truth” to which we are blind?”




Introduction

On this episode I want to introduce a very important topic. We like to talk about mental models on this show. Mental models that can help us think critically and be more effective communicators and persuaders.

These can come in all shapes and sizes, but some models are more important than others because they’re models of the process of reasoning itself. Models of how our minds function, how biases arise in our thinking, and why we behave the way we do.

These models come in families that have core features in common, but that differ in other respects.

The most influential among these families are what are known as “dual-process” models of cognition. Many of you are already familiar with the distinction between System 1 and System 2 thinking. That’s the family I’m talking about.

These aren’t the only kinds of models that are useful for critical thinking purposes, but they’re very important. So at some point in the education of an Argument Ninja, you need to be introduced to the basic idea of a dual-process view of the mind.

From a teaching perspective, that first model needs to be simple, intuitive, and useful as a tool for helping us become better critical thinkers.

Luckily, we’ve got a few to choose from. They’ve been provided for us by psychologists who work in this tradition and who write popular books for a general audience.

So that’s one of my goals with this episode. To introduce you to the ways that some prominent psychologists talk about dual process reasoning, and the simple conceptual models they’ve developed to help communicate these ideas.

Specifically, we’re going to look at dual process models in the work of Daniel Kahneman, Jonathan Haidt, and Joshua Greene. Kahneman you may know as the author of Thinking, Fast and Slow. Haidt is familiar to many of the listeners of this show; he’s the author of The Righteous Mind, and he introduced the well-known metaphor of the Elephant and the Rider. Joshua Greene isn’t quite as famous as Kahneman or Haidt, but his work in moral psychology overlaps in many ways with Haidt’s, and he introduces a very interesting mental model for dual process thinking in his book Moral Tribes.

I have another goal for this podcast. That goal is to help you, the listener, become a better critical thinker and consumer of information about these writers, these mental models, and dual-process theories of the mind in general.

Why is this necessary? Because it’s easy for people to develop misconceptions about these models and what they tell us about human reasoning.

Part of the problem is that most people are introduced to dual-process thinking through one of these popular psychology books, either through Kahneman or Haidt or some other author. And a lot of people don’t read beyond that one book, that one exposure. At least on the topic of dual-process thinking.

So it’s easy for a reader to come to think that one particular author’s version of dual-process thinking represents the final word on the subject.

When your only exposure is one or two popular science books, you don’t have a chance to see these models from the perspective of the author as a working scientist within a community of working scientists who are engaged in very specific scientific projects, trying to answer very specific questions, and who often disagree with one another.

The reality is that there isn’t just one dual-process theory of human cognition. There are many dual-process theories. And not all of them are compatible.

The territory is much larger and more complex than any given map we may have in our hands. That’s important to know.

And as critical thinkers, we shouldn’t shy away from having multiple models that address the same, or similar, phenomena. On the contrary, we should try to accumulate them. Because each of these represents a different perspective, a different way of thinking, about a set of complex psychological phenomena that are important for us to understand.

With these multiple models in our heads, we can then think more critically and creatively about our own reasoning and behavior, and the behavior of other people.

Daniel Kahneman and the Biases and Heuristics Research Program

Let’s start with Daniel Kahneman and the model he presents in his 2011 book Thinking, Fast and Slow.

The book is a combination of intellectual biography and an introduction to dual-process thinking in psychology for the layperson.

It became a best-seller partly due to Kahneman’s status as a Nobel Prize winner in Economics in 2002.

But the Nobel Prize was based on the work he did with Amos Tversky on cognitive biases and heuristics, work that led to a revolution in psychology and launched the field of behavioral economics.

In 1974 they published an article, titled “Judgment under Uncertainty: Heuristics and Biases”, that summarized a number of studies they conducted on how human beings reason about probabilities.

They showed that there’s a significant and predictable gap between how we ought to reason, based on the standard rules of statistics and probability, and how we in fact reason.

This gap between how we ought to reason and how we in fact reason is what they called a cognitive “bias”.

In order to explain these systematic errors in our judgment, they introduced the idea that our brains use shortcuts, or heuristics, to answer questions about chance and probability.

For example, if we’re asked to estimate the frequency of events of a certain kind, like being hit by lightning, or winning the lottery, or dying in a car accident, and we have to assign a probability to these events, how do we do this?

If you were forced to write down an answer right now, you would write down an answer. But how do you decide what to write down?

Well, Kahneman and Tversky suggested that what our brains do is swap out these hard questions for an easier question. The easier question is this: How easy is it for me to imagine examples of the events in question?

And then our brains follow a simple rule, a heuristic, for generating a judgment: The easier it is for me to imagine examples of the events in question, the higher I will judge the probability of events of this type. The harder it is for me to imagine examples, the lower I will judge the probability.

So if it’s easy for me to recall or imagine examples of people being killed by lightning, I’ll judge the probability of being killed by lightning to be higher than if I struggle to imagine such examples.

This particular shortcut they called the “availability heuristic”, because we base our judgments on how available these examples are to our memory or imagination.
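To make the shortcut concrete, here’s a minimal sketch of the availability heuristic in code. Everything in it is an assumption for illustration: the events, the recall-ease scores, and the idea that a recall score can stand in directly for a judged probability.

```python
# Toy sketch of the availability heuristic. The recall-ease scores are
# invented; in real minds they'd be inflated by vividness and news
# coverage, which is exactly where the bias comes from.

recall_ease = {  # 0.0 = can't imagine examples, 1.0 = examples come instantly
    "killed by lightning": 0.6,      # vivid and widely reported
    "dying in a car accident": 0.7,
    "dying of heart disease": 0.3,   # far more common, but rarely newsworthy
}

def judged_probability(event: str) -> float:
    """System 1's rule: the easier examples are to recall,
    the higher the judged probability."""
    return recall_ease[event]

# The heuristic ranks heart disease last even though it is vastly more
# likely than the others: a mismatch-driven bias.
for event in sorted(recall_ease, key=judged_probability, reverse=True):
    print(f"{event}: felt probability ~ {judged_probability(event)}")
```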

In the paper, Kahneman and Tversky introduced several other heuristics, including what they called the “representativeness” heuristic, and the “anchoring and adjustment” heuristic.

These heuristics are themselves hypotheses that can be tested, and this launched a whole research program devoted to testing such hypotheses and looking for new ones.

And over the past forty years, there’s been an explosion of research on cognitive biases. The Wikipedia page “List of cognitive biases” has over two hundred entries.

Now, at this early stage, in the 1970s, no one was using the language of System 1 and System 2. But the idea of our brains using two distinct methods of answering these questions was implicit in the experiments and the analysis of the results.

There are the fast, automatic shortcuts that our brains seem to default to, that generate our gut responses. And there’s the slower, more deliberate reasoning we do when, for example, we’re consciously trying to apply our knowledge of probability and statistics to work out a solution.

This becomes the template for the System 1, System 2 distinction that Kahneman features so prominently in Thinking, Fast and Slow.

It’s important to remember that our heuristic-based, System 1 reasoning isn’t always wrong. In fact, the view that most researchers hold is that heuristic reasoning is highly adaptive. It generates results that are good enough, most of the time, and it’s fast and automatic.

Many of these heuristics have evolutionary origins. We have them precisely because they were adaptive for survival in our ancestral past.

But heuristic reasoning works best when there’s a good match between the kind of problems that the heuristic was designed to solve efficiently, and the problem that an organism is actually facing. If there’s a mismatch, then we’re inclined to call the resulting judgment an error.

And one can argue that modern life poses more problems of this type, where there’s a mismatch and our initial judgments don’t give the best answers.

I’ll give a common example. In our ancestral environment it may have been adaptive to have a craving for sweets and salty foods, and to over-eat on these food sources when we came across them, because such food sources were few and far between.

But in our modern environment our craving for sweets and salty foods is no longer adaptive, because we’ve created an environment where we have easy access to them all the time, and over-eating results in obesity, diabetes and so on. Now that adaptive shortcut has become an unhealthy bias.

Now, over time, as new heuristics and different kinds of cognitive biases were discovered, it became natural to see all of this as pointing toward a more general dual-process view of human reasoning and behavior.

This is the picture that Kahneman lays out in Thinking, Fast and Slow.

Dual-Process Theories in Psychology

But we shouldn’t think that the biases and heuristics program was the only source for this view.

Dual-process views have a history that predates Kahneman’s use of this language, and that runs along independent paths.

Kahneman himself borrows the language of System 1 and System 2 from Keith Stanovich and Richard West, from a 2000 paper titled “Individual Differences in Reasoning: Implications for the Rationality Debate”.

Stanovich and West used the System 1/System 2 language back in 2000. But modern dual-process theories appeared in different areas of psychology much earlier.

Seymour Epstein, for example, introduced a dual-process view of personality and cognition back in 1973, in his work on what he called “cognitive-experiential self theory”.

Epstein argued that people operate using two separate systems for information processing: analytical-rational and intuitive-experiential. The analytical-rational system is deliberate, slow, logical and rule-driven. The intuitive-experiential system is fast, automatic, associative and emotionally driven. He treated these as independent systems that operate in parallel and interact to produce behavior and conscious thought.

Sound familiar?

Dual-process theories were also introduced back in the 1980s by social psychologists studying social cognition and persuasion.

Shelly Chaiken, for example, called her view the “heuristic-systematic” model of information processing. The model states that people process persuasive messages in one of two ways: heuristically or systematically.

This view is closely related to Petty and Cacioppo’s model of the same phenomena, which they called the “elaboration likelihood model”. They argued that persuasive messages get processed by what they called the peripheral route or the central route.

In both of these cases, these styles of information processing would line up today with the System 1, System 2 distinction.

There are lots of examples like this in the literature. So what you have is a variety of dual-process views that have a family resemblance to one another. The cognitive biases and heuristics tradition is just one member of this family.

But the similarities among these views suggest a convergence on a general dual-system view of the mind and behavior, and there was a temptation to lump all these distinctions together.

For example, it’s quite common in popular psychology books or online articles to see dual process views presented as a single generic theory, with System 1 and System 2 as the headers for a long list of attributes that are claimed to fall under each category.

So, you’ll hear people say that System 1 processing is unconscious while System 2 processing is conscious.

System 1 is automatic, System 2 is controlled.

System 1 is low effort, system 2 is high effort.

Fast vs. slow. Implicit vs explicit. Associative vs rule-based. Contextual vs abstract. Pragmatic vs logical. Parallel vs sequential.

Here are some associations related to evolutionary thinking.

System 1 is claimed to be evolutionarily old,  System 2 is evolutionarily recent.

System 1 expresses “evolutionary rationality,” in the sense that it’s adaptive for survival, while System 2 expresses individual or personal rationality.

System 1 processes are shared with animals, System 2 processes are uniquely human.

System 1 is nonverbal, System 2 is linked to language.

Another claim is that System 1 processing is independent of general intelligence and working memory, while System 2 processing is linked to general intelligence and limited by working memory.

And in some lists, emotion and feeling are linked directly to system 1 processes, while analytic reasoning is linked to system 2 processes.

Now, as tempting as it is to imagine that these lists are describing some general theory of the mind and behavior, that’s not the case.

There is no general dual-process theory that is worthy of being called a “theory”.

What there is is a collection of theories and explanatory models that have homes in different branches of psychology and cognitive science, which share a family resemblance.

It’s more helpful to divide them into sub-groups, so you can actually compare them.

So there are dual-process theories of judgment and decision-making. The biases and heuristics tradition that Kahneman pioneered is in this group.

There are dual-process theories of social cognition, which focus on conscious and unconscious processing of social information. The “elaboration likelihood” model of persuasion is in this group.

And there are dual-process theories of reasoning. And by reasoning I mean deductive reasoning, how people reason about the kinds of logical relationships you might study in a formal logic class. Why are some inferences easy for us to recognize as valid or invalid, and some much harder to recognize? Formal logic doesn’t ask this question, but psychologists have been studying this for decades.

So there’s more diversity in dual-process views than you would learn from reading popular psychology books.

There’s also a lot more disagreement among these authors than these books would suggest.

However, that doesn’t mean that there aren’t useful models that we can extract from this literature, that we can use to help us become better critical thinkers. There are.

This is actually one of Kahneman’s goals in Thinking, Fast and Slow. So let’s look at how Kahneman tries to do this.

Kahneman’s Teaching Model

One thing to remember is that when an academic is in “teaching mode” they can get away with making broad generalizations that they would never say in front of their academic peers.

When Kahneman introduces the System 1, System 2 distinction, he’s in teaching mode. He believes that if the reader can successfully internalize these concepts, they can help us make better judgments and decisions.

So he starts out with a standard description.

“System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control”.

Here are some examples of skills and behaviors that he attributes to System 1.

  • Detect that one object is more distant than another.
  • Orient to the source of a sudden sound.
  • Complete the phrase “bread and …”.
  • Make a “disgust face” when shown a horrible picture.
  • Detect hostility in a voice.
  • The answer to 2 + 2 is …?
  • Read words on large billboards.
  • Drive a car on an empty road.
  • Find a strong move in chess (if you’re a chess master).
  • Understand simple sentences.

On the other hand, “System 2 allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice and concentration”.

Here are some examples of System 2 activities.

  • Brace for the starter gun in a race.
  • Focus attention on the clowns in the circus.
  • Focus on the voice of a particular person in a crowded and noisy room.
  • Look for a woman with white hair.
  • Search memory to identify a surprising sound.
  • Maintain a faster walking speed than is natural for you.
  • Monitor the appropriateness of your behavior in a social situation.
  • Count the occurrences of the letter “a” on a page of text.
  • Tell someone your phone number.
  • Compare two washing machines for overall value.
  • Fill out a tax form.
  • Check the validity of a complex logical argument.

In all these situations you have to pay attention, and you’ll perform less well, or not at all, if you’re not ready or your attention is distracted.

Then Kahneman says, “the labels of System 1 and System 2 are widely used in psychology, but I go further than most in this book, which you can read as a psychodrama with two characters.”

“When we think of ourselves, we identify with System 2, the conscious, reasoning self that has beliefs, makes choices, and decides what to think about and what to do. Although System 2 believes itself to be where the action is, the automatic System 1 is the hero of the book. I describe System 1 as effortlessly originating impressions and feelings that are the main sources of the explicit beliefs and deliberate choices of System 2. The automatic operations of System 1 generate surprisingly complex patterns of ideas, but only the slower System 2 can construct thoughts in an orderly series of steps. I also describe circumstances in which System 2 takes over, overruling the freewheeling impulses and associations of System 1. You will be invited to think of the two systems as agents with their individual abilities, limitations, and functions.” (p. 21)

So, Kahneman is inviting us to think of System 1 and System 2 like characters, in something like the way that the movie Inside Out personified emotions like joy, sadness, anger and disgust.

We identify with System 2, our conscious reasoning self that holds beliefs and makes decisions. But System 2 isn’t in the driver’s seat most of the time. Most of the time, the source of our judgments and decisions is System 1. System 2 more often plays the role of side-kick, but a side-kick who is under the delusion that he or she is the hero.

Now, as Kahneman works through one chapter after another in the book, he introduces new psychological principles and then tries to reframe them as attributes of these two characters.

For example, there’s a chapter on how we substitute one question for another. I gave an example of that already.  This is chapter 9 in Thinking, Fast and Slow, called “Answering An Easier Question”.

You’re given a target question, and if it’s a difficult question to answer, System 1 will try to answer a different, easier question, which Kahneman calls the “heuristic question”.

If the target question is “How much would you contribute to save an endangered species?”, the heuristic question is “How much emotion do I feel when I think of dying dolphins?”.

If the target question is “How happy are you with your life these days?”, the heuristic question is “What is my mood right now?”

If the target question is “How popular will the president be six months from now?”, the heuristic question is “How popular is the president right now?”.

If the target question is “How should financial advisers who prey on the elderly be punished?”, the heuristic question is “How much anger do I feel when I think of financial predators?”

If the target question is “This woman is running for the primary. How far will she go in politics?”, the heuristic question is “Does this woman look like a political winner?”.

Now, there’s a step still missing. Because if I’m asked how much money I would contribute to a conservation campaign, and all I say is “I feel really bad for those animals”, that’s not answering the question. Somehow I have to convert the intensity of my feelings to a dollar value.

And that’s something that System 1 can do as well. Kahneman and Tversky called it “intensity matching”.

Feelings and dollar amounts can both be ordered on a scale. High intensity, low intensity. High dollar amount, low dollar amount.

System 1 is able to pick out a dollar amount that matches the intensity of my feelings. The answer I give, the dollar amount that actually pops into my head, is the amount that System 1 has determined is the appropriate match for my feelings. Similar intensity matchings are possible for all these questions.
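To see how substitution and intensity matching fit together, here’s a minimal sketch in code. The target and heuristic questions come from Kahneman’s chapter; the 0-to-1 intensity scale, the dollar range, and the function names are invented for illustration.

```python
# Toy pipeline: System 1 swaps the hard target question for an easier
# heuristic question, then matches the intensity of the feeling onto
# the answer scale the target question demands (here, dollars).

SUBSTITUTIONS = {
    "How much would you contribute to save an endangered species?":
        "How much emotion do I feel when I think of dying dolphins?",
    "How happy are you with your life these days?":
        "What is my mood right now?",
}

def intensity_match(intensity: float, dollars_max: float = 500.0) -> float:
    """Map a felt intensity (0.0..1.0) onto a dollar scale by matching
    positions on the two scales. The scale depends on the target question;
    for the donation question, the output scale is dollars."""
    return round(intensity * dollars_max, 2)

def system1_answer(target_question: str, felt_intensity: float) -> str:
    heuristic_question = SUBSTITUTIONS[target_question]
    dollars = intensity_match(felt_intensity)
    return (f"Substituted question: {heuristic_question!r}; "
            f"feeling maps to ${dollars}")

print(system1_answer(
    "How much would you contribute to save an endangered species?",
    felt_intensity=0.75,  # strong feeling -> relatively high dollar figure
))
```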

So, substitution and intensity matching are forms of System 1 heuristic processing that we use in a variety of situations.

Now, how would we describe the personality of System 1, if we were trying to imagine a character who behaves like this?

When I used to teach this material in a classroom I would sometimes try to act out this persona, someone who perceives reality through the filter of their emotions and immediate impulses.

But we have to remember that in Kahneman’s story, System 1 isn’t who we identify with. In most cases we’re not consciously aware of these processes. We identify with System 2. So how does System 2 relate to System 1?

Well, in class I would present System 2 as a lazy, half-awake student who is good at math and solving problems when she’s alert and paying attention, but she spends most of her time in that half-awake state.

A belief or a judgment starts in System 1. What System 1 outputs are impressions, intuitions, impulses, feelings. Not quite fully formed judgments.

These are then sent on to System 2, which converts these into consciously held beliefs and judgments and voluntary actions.

Now, when System 2 is in this low-effort, half awake mode, what it basically does is rubber-stamp the outputs delivered by System 1.

So what eventually comes out of your mouth, or pops into your head, is really just a version of what System 1 decided, based on your impressions, intuitions, feelings, and the heuristic processing that goes along with these.

Most of the time, this system works fine.

But when System 1 encounters a problem or a task that is surprising, or that it has a hard time handling, it can call on System 2 to provide more detailed and specific processing that might solve the problem at hand.

With our two characters, this involves System 1 telling System 2 to wake up because we need you to do some thinking that requires focus and attention.

Now, in this alert, active mode, System 2 has the capacity to override or modify the outputs of System 1. This is what we want it to do if these System 1 outputs are biased, if they’re prone to error.

System 2 can tell System 1, wait a minute … I might want the conclusion of this argument to be true, but that doesn’t make it a good argument. And look, I can see a problem with the logic right here …

Or System 2 can do things like imagine hypothetical scenarios, and run mental simulations to see how different actions might have different outcomes, and then pick the action that gives the best outcome in these simulations.  Like imagining what the outcomes will be if I go to this party tonight rather than study for my final exam, which is at 9 o’clock in the morning.

But System 2 has a limited capacity for this kind of work. System 2 is fundamentally lazy. Its default mode is to minimize cognitive effort, when it can get away with it.

This is an important element of this dual-process model. Kahneman calls it the “lazy controller”. Stanovich calls it the “cognitive miser”. The principle actually applies to both System 1 and 2, but it’s a particular issue for System 2.

So one source of error can come from System 1, when its outputs are biased.

Another source of error can come from System 2, when it fails to override the biased judgments that System 1 feeds it, or when it doesn’t know how to come up with a better answer.

System 2 can fail to do this for any number of reasons: because you’re tired or distracted, or because you don’t have the right background knowledge or training or cognitive resources to work out a better answer.

But this is the basic idea of how System 1 and System 2 interact, according to Kahneman. And this is how cognitive biases fit within this framework.
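Here’s a toy rendering of that interaction, treating the two characters as functions. This is a teaching caricature, not a claim about real cognitive architecture; the difficulty test, the effort threshold, and the function names are all invented.

```python
# Toy model of Kahneman's two characters: System 1 always produces an
# impression; the lazy System 2 rubber-stamps it unless it's both
# alerted and has effort to spare.

def system_1(question: str) -> dict:
    """Fast and automatic: returns an impression, plus a flag for
    whether the problem felt surprising or too hard to handle."""
    hard = "validity" in question  # toy stand-in for "surprising or hard"
    return {"impression": f"gut answer to {question!r}", "surprised": hard}

def system_2(s1_output: dict, effort_available: float) -> str:
    """The lazy controller: endorses System 1's output as-is unless
    it's been alerted AND has the effort available to engage."""
    if s1_output["surprised"] and effort_available > 0.5:
        return "deliberate answer (System 1's output checked, maybe overridden)"
    return s1_output["impression"]  # rubber-stamp the gut answer

# Tired and distracted: even a hard question gets rubber-stamped.
print(system_2(system_1("Check the validity of this argument"), 0.2))
# Alert and focused: System 2 engages and can override.
print(system_2(system_1("Check the validity of this argument"), 0.9))
```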

Overall, the division of labor between System 1 and System 2 is highly efficient. It minimizes effort and optimizes performance. It works well most of the time because System 1 is generally very good at what it does.

But the vulnerabilities are real, and what’s more distressing is that these vulnerabilities can be exploited. There’s a whole field of persuasion practice that is organized around exploiting these vulnerabilities.

Assessing Kahneman’s Model

So, what should we think of this model?

Well, my concern isn’t so much with whether it accurately reflects the consensus on dual-process theories in psychology. It’s Kahneman’s perspective on dual-process theories of judgment and decision-making, filtered through his own history as a central contributor to this field, and his aim to write a book that is accessible to the public.

It’s not hard to find respected people in the field who take issue with various elements of the story that Kahneman tells.

My interest is in how useful this model is for teaching critical thinking, as a tool for improving how people reason and make decisions.

From that standpoint, it’s got a lot of virtues.

I can draw a diagram on a chalkboard and describe the basic picture of how System 1 and System 2 interact, and how cognitive biases appear and fit within this picture. I can describe how these personified characters behave and interact with each other, which is extremely useful.

If I can get students to internalize this picture at some point, that’s a useful mental model to have in anyone’s critical thinking toolbox.

And this is exactly what Kahneman had in mind when he was writing the book. He’s very clear that System 1 and System 2 are fictions. Useful fictions, but still fictions.

Here’s a quote: “System 1 and System 2 are so central to the story I tell in this book that I must make it absolutely clear that they are fictitious characters.

Systems 1 and 2 are not systems in the standard sense of entities with interacting aspects or parts. And there is no one part of the brain that either of the systems would call home.

You may well ask: What is the point of introducing fictitious characters with ugly names into a serious book?

The answer is that the characters are useful because of some quirks of our minds, yours and mine.

A sentence is understood more easily if it describes what an agent does than if it describes what something is, what properties it has.

In other words, “System 2” is a better subject of a sentence than “mental arithmetic”.

The mind — especially System 1 — appears to have a special aptitude for the construction and interpretation of stories about active agents, who have personalities, habits, and abilities.

Why call them System 1 and System 2 rather than the more descriptive “automatic system” and “effortful system”?

The reason is simple: “Automatic system” takes longer to say than “System 1” and therefore takes more space in your working memory.

This matters, because anything that occupies your working memory reduces your ability to think.

You should treat “System 1” and “System 2” as nicknames, like Bob and Joe, identifying characters that we will get to know over the course of this book.

The fictitious systems make it easier for me to think about judgment and choice, and will make it easier for you to understand what I say”.

Unquote.

So, part of the explanation for the language that Kahneman uses is that he’s in teaching mode.

Jonathan Haidt: The Elephant and the Rider

Now, let’s talk about another popular model for dual-process thinking, Jonathan Haidt’s “elephant and the rider” model.

Haidt introduced this metaphor in his 2006 book The Happiness Hypothesis, which is subtitled Finding Modern Truth in Ancient Wisdom.

Haidt is a social psychologist who specializes in moral psychology and the moral emotions. And he very much views his work as a contribution to the broader field of positive psychology, which you can think of very roughly as the scientific study of the strengths that enable individuals and communities to thrive, and what sorts of interventions can help people live happier and more fulfilled lives.

Haidt has always been interested in how people from different cultures and historical periods pursue their collective goals and conceive of the good life. The thesis that he’s been pushing for most of his career is that the scientific study of human nature and human flourishing has been handicapped by an overly narrow focus on modern, western, industrialized cultures.

He thinks we should look at human cultures across space and time, and try to develop an account of human nature that explains both the common patterns and the differences that we see across cultures.

When we do this, we get a richer picture of human psychology and human values, and new insights into how we can live happier and more meaningful lives.

It’s in this context that Haidt introduces the metaphor of the elephant and the rider. It’s part of a discussion about the various ways that we experience the human mind as divided, as a source of internal conflict.

I want to lose weight and get in shape but I constantly fail to make lasting changes to my eating and exercise habits.

I want to be more patient with other people but I get triggered and can’t control my emotions.

I want to get started early on this writing project but I find myself procrastinating on YouTube and Facebook and I end up waiting until the last minute again.

I want to make positive changes in my life but my mind and my body seem to be conspiring against me.

This is where Haidt introduces the metaphor as a model for our divided mind, a mind with two distinct operating modes that sometimes come into conflict.

Imagine yourself as a rider sitting on top of an elephant. You’re holding the reins in your hands, and by pulling one way or the other you can tell the elephant to turn, to stop, or to go. You can direct things, but only when the elephant doesn’t have desires of its own. When the elephant really wants to do something, like stop and eat grass, or run away from something that scares it, there’s nothing the rider can do to stop it, because it’s too powerful. The elephant, ultimately, is the one in control, not the rider.

The rider represents our conscious will, the self that acts on the basis of reasons, that can plan and formulate goals. The elephant represents our gut feelings, our visceral reactions, emotions and intuitions that arise automatically, outside of conscious control.

If this sounds familiar, there’s a reason for that. Haidt is connecting this model to dual process theories of cognition. The rider is effectively System 2, the elephant is System 1.

The great virtue of this metaphor, from a teaching standpoint, is that it vividly depicts a key thesis about the relationship between these two systems. It’s not an equal partnership. The elephant, System 1, is the primary motivator and driver of our actions. The rider, System 2, has a role to play, but it’s not in charge; System 1 is in charge.

Kahneman says something similar, but when you just have the labels, System 1 and System 2, the asymmetry of the relationship isn’t apparent on the surface. But with the image of the rider atop a huge elephant, it’s implicit in the image itself.

The purpose of this model, for Haidt, is to help us understand failures of self-control, and how spontaneous thoughts and feelings can seem to come out of nowhere. And it can give us guidance in thinking about how to change our own behavior, and the behavior of others.

The principle is simple. The rider can reason and argue and plan all it wants, but if you can’t motivate the elephant to go along with the plan, it’s not going to happen. So we need to pay attention to the factors that influence the elephant, that influence our automatic, intuitive, emotion-driven cognitive processes. If we can motivate the elephant in the right way, then the rider can be effective in formulating goals and coming up with a plan to achieve those goals, because the core motivational structures of the elephant and the rider are aligned. But once that alignment breaks, and the elephant is following a different path, the rider is no longer effective.

From a teaching perspective, this is the value of the rider and the elephant model.

But it has its limits. It doesn’t say much about these System 1 processes other than that they’re hard to control. And it doesn’t give us much insight into the philosophical and psychological themes that Haidt is actually interested in, the themes that have to do with moral psychology and the moral emotions.

That’s the topic of his next book, The Righteous Mind, published in 2012 and subtitled “Why Good People Are Divided by Politics and Religion”.

Haidt’s Moral Psychology

In this book, Haidt argues that our moral judgments have their origins in the elephant, in System 1 automatic processes.

You can think of the book as an exercise in unpacking the various evolutionary and cultural sources of our moral intuitions, our moral “gut feelings”, and examining how this bears on our modern political and religious differences.

Now, here’s an important question: what’s the role of the rider in our moral psychology?

Haidt has a specific thesis about this.

Intuitively, it feels like we have a capacity to reason about moral issues and convince ourselves of a moral viewpoint on the basis of those reasons.

But Haidt thinks this is largely an illusion. This rarely happens.

For Haidt, the primary role of the rider, in our moral psychology, is to justify our moral judgments to other people, to convince other people that they should hold the same judgment as us.

So we come up with reasons and present them to others. But we didn’t arrive at our original moral judgment on the basis of these reasons. We arrived at that moral judgment based on intuitive, automatic processing in System 1 that goes on below the surface, largely outside of our conscious control.

The reasoning that we come up with to justify our judgment is a System 2 process. But the main function of this kind of reasoning is to rationalize, to others, a judgment that has been made on very different grounds.

In other words, for Haidt, the primary function of moral reasoning, the reason why we have the capacity at all, is social persuasion, to convince others. Not to convince ourselves, though sometimes we do that too. And certainly not to arrive at timeless truths about morality.

Now, he grants that it doesn’t feel this way, to us. It doesn’t feel like all we’re doing when we argue about ethical or political issues is rationalize a position that we’ve arrived at by other means.

It feels like we’re arguing about moral facts that can be true or false. It feels like we are reasoning our way to knowledge of an objective morality.

But for Haidt, all of this is an illusion. An illusion manufactured by our minds. There are no moral truths of this kind.

This is not to say that our moral intuitions are meaningless, that they have no cognitive content. They do. But it’s not the kind of content that most of us think it is.

Haidt would say that our moral intuitions are adaptations, they’re a product of our evolutionary history that is subsequently shaped by culture.

As adaptations, our evolved moral intuitions served the survival needs of our evolutionary ancestors by making us sensitive to features of our physical and social environment that can harm us or otherwise undermine our ability to survive.

So we have natural aversions to things that cause physical pain, disease, and so on. We’re wired to be attracted to things that promote our self-interest and to avoid things that undermine our self-interest.

But humans are also a social species. Our primate ancestors lived in groups and survived because of their ability to function within groups. Parent-child groups, kin groups, and non-kin groups.

The most distinctive feature of human cultures is highly coordinated social activity, even among genetically unrelated members of a group. We are an ultra-social species.

That means that at some point we had to learn how to cooperate within large groups to promote the goals of the group, not just individual self-interest.

Haidt’s view is that our moral psychology developed to solve the evolutionary problem of cooperation within groups.

Now, if you’re familiar with Haidt’s approach to the moral emotions you know that he thinks there are six distinct categories of moral value that are correlated with distinctive moral emotions.

There’s care, the desire to help those in need and avoid inflicting harm.

There’s liberty, the drive to seek liberation from constraints and to fight oppression.

There’s fairness, the impulse to impose rules that apply equally to all and avoid cheating.

There’s loyalty, the instinct to affirm the good of the group and punish those who betray it.

There’s authority, the urge to uphold hierarchical relationships and avoid subverting them.

And there’s sanctity, the admiration of purity and disgust at degradation.

Each of these values is correlated with a moral emotion or an intuitive moral response. For all of these, Haidt gives an evolutionary story for why these responses would be adaptive in promoting the survival of individuals or groups and for coordinating social behavior.

You might also be familiar with Haidt’s favorite visual metaphor for these instinctive moral responses. They’re like taste receptors on our tongue.

When we’re born we all have an innate capacity to respond to different tastes, like sweet, bitter, sour, salt, and so on. But children from different cultures are exposed to different foods that emphasize some flavors over others. So the palate of a person born and raised in India or China ends up being quite different from the palate of a person raised on a typical American diet.

Similarly, nature provides a first draft of our moral psychology that we all share. But then culture and experience revise this first draft, emphasizing certain values and deemphasizing others.

Now, Haidt’s book focuses on the differences in moral psychology between liberals and conservatives. He argues that modern, so-called liberal cultures tend to emphasize the moral significance of the values of care, liberty and fairness, and they tend to downplay the moral significance of the values of loyalty, authority and sanctity.

By contrast, conservative cultures, and traditional cultures more generally, uphold the moral importance of all six categories of value.

Conservative moral psychology treats values like loyalty, authority and sanctity as morally important, morally relevant, in a way that liberal moral psychology does not.

Haidt’s own view is that we need to allow space for both moralities. They complement one another. Society is better off with both in play.

This is a very quick introduction to Haidt’s work, and there’s a lot more to say about it, but my main interest here is how he thinks about moral intuition and moral reasoning, his dual-process, “elephant and rider” view of moral psychology.

And I’m interested in how Haidt thinks this model can help us think more critically about our own reasoning, and specifically about the way we approach ethical and political disagreements.

So let’s push on just a bit further. What, ultimately, does Haidt think our moral psychology was designed to do?

Here’s his answer, which has been much quoted and discussed.

Our moral psychology was designed by evolution to unite us into teams, divide us against other teams, and blind us to the truth.

This picture goes a long way to explaining why our moral and political discourse is so divisive and so uncompromising.

But what is the “truth” to which we are blind?

It’s this: that the moral world we inhabit, the “moral matrix” within which we live, is not the only one that can be rationally justified, even though it feels that way from the inside.

In other words, we think our righteousness is justified. “We’re right, they’re wrong”. But this conviction in our own rightness is itself a part of our moral psychology, part of our moral matrix, that has been selected for its capacity to unite us into teams and divide us against other teams. That feeling of righteousness that we experience is nothing more than an evolutionary survival tool.

Now, given this view, what would an enlightened moral stance look like?

For Haidt, an enlightened moral stance is one that allows us to occasionally slip out from under our own moral matrix and see the world as it truly is.

This is essential for cultivating what Haidt calls “moral humility”, for getting past our own sense of self-righteousness.

This is valuable because doing so will allow us to better see how other people view the world, and will contribute to greater sympathy and understanding between cultures.

And doing so will increase our capacity for constructive dialogue that has a real chance of changing people’s behavior.

That’s what Haidt believes.

Assessing Haidt’s Model

Let me summarize the parts of this that I like, from a critical thinking perspective.

I like the elephant and rider model, for the reasons I mentioned earlier. It’s a great way to introduce dual process thinking, and it captures some important features of the asymmetry between System 1 and System 2 that are harder to explain if you’re just working with these labels.

I think Haidt’s work on moral emotions and moral psychology is very important. It does paint a naturalistic, evolutionary picture of the nature of morality that will be hard to swallow for many people who aren’t already disposed to think this way. In fact, this is also his account of the naturalistic origins of religion. So it’s a direct challenge to devout religious belief, to religious views of morality, and even to many traditional secular views of morality. But I think the exercise of trying to see things from this perspective is a valuable one.

Also, Haidt’s empirical work on differences in moral psychology has some immediate applications to moral persuasion.

The basic rule is if you’re a conservative and you want to persuade a liberal, you should try to appeal to liberal moral values, like care, fairness and liberty, even if you yourself are motivated differently.

If you’re a liberal trying to convince a conservative, you can appeal to these values too, but you’ll do better if you can make a case that appeals to conservative values of loyalty, authority or sanctity.

Robb Willer, a sociologist at Stanford, has been studying the effectiveness of moral persuasion strategies that are inspired by Haidt’s framework. I’ll share some links in the show notes.

I also like Haidt’s views on moral humility, and I like this notion of cultivating an ability to step outside our own moral matrix, if only for a short time — to see our tribal differences from the outside, and how they operate to simultaneously bind us and blind us. That’s a skill that takes practice to develop, but from a persuasion standpoint I think it’s an essential, “argument ninja” skill.

Now let me offer a few words of caution.

I know there are some anti-PC, anti-SJW audiences who view Haidt as something of an intellectual hero and who seem eager to swallow just about everything he says, but just as with Kahneman, his popular work doesn’t necessarily reflect the internal debates within his discipline, or the degree of consensus there is within the field about the topics he writes about.

So, just so you know: there’s a wide range of opinion about Haidt’s work, both positive and negative, among psychologists, and outside his field as well, especially among evolutionary biologists and philosophers.

There’s disagreement about the moral categories he uses; there’s considerable disagreement about his thesis that moral reasoning is almost never effective at motivating moral action or revising moral beliefs; there’s a ton of debate over his use of group selectionist arguments in his evolutionary psychology; and among philosophers, there’s a large contingent that believes that Haidt simply begs the question on a number of important philosophical positions, that he draws much stronger conclusions about the nature of ethics than his descriptive psychology alone would justify.

Now, these debates are par for the course for any prominent academic, and they tend to stay within their academic silos. They don’t have much impact on Haidt’s reputation as a public intellectual.

But when I’m teaching this material, I have to remind people that there’s a difference between presenting a position that I happen to think has important insights, and uncritically endorsing whatever the author says on the subject.

The more models we have, the better. So in that spirit, I’d like to introduce a third dual-process model of reasoning. This one is by Joshua Greene, who also works on moral reasoning and moral psychology. But his take-away message is quite a bit different from Haidt’s.

Joshua Greene: The Digital Camera Model

Joshua Greene is an experimental psychologist and philosopher. He’s Professor of Psychology at Harvard, and he’s director of Harvard’s Moral Cognition Lab.

He published a book in 2013 called Moral Tribes: Emotion, Reason, and the Gap Between Us and Them. He covers a lot of the same ground as Haidt, in that he endorses a broadly dual-process view of cognition, and specifically a dual-process view of moral judgment.

Greene also agrees with Haidt that our moral psychology was designed to bind us into groups that we belong to, and pit us against groups that we don’t belong to.

However, Greene is much more optimistic about the role that moral reasoning can and ought to play in our moral psychology.

But let’s start with Greene’s preferred model for our divided mind, because I think it has a lot to recommend it. Kahneman has his System 1 and System 2 agents. Haidt has his Elephant and the Rider. Joshua Greene’s model is a digital camera with its different modes of operation.

Here’s how it goes.  A modern digital SLR camera has two modes of operation.

There’s the point-and-shoot mode that offers you different PRESETS for taking pictures under specific conditions. Landscape, daylight, sunny. Sunset, sunrise. Night portrait, night snapshot. Action or motion shot. And so on.

If you’re facing one of these conditions you just set the preset, point the camera and shoot, and all the work is done. You get a good quality picture.

But if you need more control of the camera settings, you can switch to manual mode.

There you can make individual adjustments to the aperture (the f-stop), the shutter speed, the focus, the white balance, the filters, the lens you’re using, and so forth.

Now, that’s a lot more work. It takes more knowledge and effort to operate the camera in manual mode, and actually take a good picture. But for those who can do it, it’s fantastic.

In general, it’s good to have both options, the preset mode and the manual mode.

The presets work well for the kind of standard photographic situations that the manufacturer of the camera anticipated.

The manual mode is necessary if your goal or situation is NOT the kind of thing the camera manufacturer could anticipate.

Both are good for different purposes. It's optimal to have both, because they allow us to navigate tradeoffs between efficiency and flexibility.

So, what’s the analogy here?

Preset mode is like our System 1, fast thinking. Heuristic shortcuts function like cognitive point-and-shoot presets.

Manual mode is like System 2, slow thinking. Conscious, deliberate calculation functions like manual mode.

System 1 presets, or heuristics, work well for the kind of standard cognitive tasks that they were designed for.

System 2 manual mode is necessary to solve problems that your automatic presets were NOT designed to solve.

As with the camera, both modes are good for different purposes, and we need both, for the same reason. They allow us to navigate the tradeoff between efficiency and flexibility.
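To make the mapping concrete, here's a toy sketch in code (my own illustration, not anything Greene offers) of a dispatcher that answers familiar tasks from fast presets and falls back to slow deliberation for everything else:

```python
# A toy dual-process dispatcher illustrating the camera analogy.
# The tasks, presets, and answers are invented purely for illustration.

PRESETS = {
    # "Point-and-shoot" responses for tasks the system was designed for
    "2 + 2": "4 (comes to mind automatically)",
    "angry face": "threat! (read instantly, no effort)",
}

def deliberate(task: str) -> str:
    """'Manual mode': slow, effortful, but flexible reasoning."""
    return f"work through '{task}' step by step, consciously"

def respond(task: str) -> str:
    # Fast path: use an automatic preset when one exists (efficiency)
    if task in PRESETS:
        return PRESETS[task]
    # No preset was designed for this task, so flexibility is required
    return deliberate(task)

print(respond("2 + 2"))      # System 1: preset mode
print(respond("17 x 24"))    # System 2: manual mode
```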

Now, Greene applies these distinctions directly to moral psychology.

System 1 generates our intuitive moral judgments. System 2 is responsible for deliberate moral reasoning.

Both modes are good for different purposes, and we need both.

Notice how different this is already from Haidt’s view. Joshua Greene isn’t minimizing the role of deliberate moral reasoning, and he’s not suggesting that moral reasoning does nothing more than rationalize moral intuitions.

Greene thinks that our moral intuitions, operating in automatic preset mode, give good results when the intuitive response is adaptive and appropriate.

Like, if you go out of your way to do something good for me, my natural intuitive response is to feel grateful and to feel like I now owe you something in return. So I’m more inclined to help you when you need it, or accept a request from you.

That’s a natural response of moral reciprocity, and the basic instinct is hard-wired into us. You scratch my back, I’ll scratch yours. Reciprocal altruism.

But when we’re dealing with problem situations that are fundamentally new, our automatic settings aren’t trained to solve these problems.

Here are some examples.

Consider our modern ability to kill at a distance.

Historically, killing was usually a face-to-face affair. Our emotional responses to personal contact are conditioned by this.

Our emotional responses were never conditioned for cases where we can kill at a distance, like bombing, or with drones.

So our moral emotions don’t respond as strongly to the prospect of lives lost when killing is conducted at a distance, compared to when it’s done face-to-face.

Similarly, our ability to save lives at a distance is a relatively new situation. If we see a child drowning in a pool across the street, we would judge someone who simply walked past as some kind of moral monster.

But if we’re given information about children dying in other parts of the world, and that only a few dollars from us could save a life, we don’t judge those who fail to donate those dollars as moral monsters.

Our natural moral sympathies diminish, they fall off, with distance.

Another example is almost anything to do with intercultural contact between groups. Our intuitive moral psychology is designed to facilitate cooperating within groups, not between groups. It’s much easier for us to harm, discredit and dehumanize people who we see as outsiders.

Another example is any situation that involves long time frames, uncertain outcomes, or distributed responsibility.

This is what we’re facing with the problem of global climate change. It’s the perfect example because it involves all three.

There is nothing in our evolutionary or cultural history that could train our moral emotions to respond appropriately to this problem.

So, for all these reasons, Greene argues that we should treat our automatic moral intuitions in these cases as unreliable.

When this is the case, what we should do is place greater emphasis on our deliberate moral reasoning, our System 2 reasoning.

What kind of reasoning is that? Well, that’s another part of the story that I don’t have time to get into, but Greene has an argument that System 2 moral reasoning basically involves figuring out the actions that will maximize good consequences and minimize bad consequences.

And he argues that this is what we ought to do. So Greene is defending a form of consequentialist moral reasoning in contexts where we have reason to believe that our intuitive moral judgments are unreliable.

So, to sum up, Greene and Haidt have very similar, dual-process, evolutionary views of moral psychology.

But they have very different views about the role of deliberate moral reasoning within this scheme. Haidt is skeptical, Greene is much more optimistic.

And notice that Greene’s digital camera model of dual process reasoning also includes a new element that we haven’t seen before. Haidt has the Elephant and the Rider. Greene has the automatic preset mode and the manual mode. But implicit in Greene’s model is a third element, the camera operator, the person who has to decide which mode to use in a given situation.

Greene chose this model because what's most important for Greene is this meta-cognitive skill, the skill of deciding when we can rely on our intuitive moral responses and when we shouldn't trust them and should switch over to a more deliberate form of moral reasoning. There's nothing like this in Haidt's model.

And one final important difference between Haidt and Greene is that they also have very different views about the moral values that should matter to us.

Haidt thinks that in general we shouldn't privilege liberal over conservative moral values, that society is better off if we allow for the full range of moral values to thrive.

But Greene's argument suggests that we should privilege one of these liberal moral values, specifically the value of care for human welfare.

The sorts of consequences that Greene is talking about involve the happiness and suffering of individuals. So our System 2 moral reasoning, according to Greene, should (a) have priority in these fundamentally new problem situations, and (b) be focused on determining those actions that promote happiness and minimize suffering of individuals.

That’s quite different from Jonathan Haidt’s position.

Now, for the sake of being fair, I should add that there are just as many philosophers who take issue with Greene as with Haidt, so neither has a special advantage in this regard. Most moral philosophers are very cautious about inferring normative ethical conclusions from these kinds of empirical arguments.

The Value of Multiple Models

What can we take away from this survey of dual process thinking in psychology? What’s the critical thinking upshot?

Well, remember I talked about the value of having multiple mental models. We’ve got three different authors, giving three different versions of a dual process view of the mind, with three different mental models to represent these processes.

Kahneman has his System 1 and 2 actors, Haidt has the Elephant and the Rider, and Greene has the digital SLR camera.

They’ve all got useful things to say, but the problems that motivate their work are different, and for that reason, the models they use are different. Our goal as critical thinkers should be to understand why the author chose the model they did, and why they thought it was important for their purposes.

And we can apply dual process thinking to a wider range of situations, because we understand better the problems that these authors were trying to solve when they introduced those models.

And it's important to remember that we don't have to choose between them. We want different points of view, we want different perspectives. There might be some tension between them and the overall picture may be messier, but reality is messier, and the multiplicity of models helps us to see that.

A Final Word

I want to end with another suggestion. I’m a big fan of dual process models. I think some version of these ideas is going to survive and will always be part of our understanding of human reasoning.

But saying this doesn’t commit you to any deeper view of how the mind and the brain work, or of the fundamental processes responsible for the patterns in the observable phenomena that we see.

So you should know that there’s a lot of work being done by different people trying to show that, for a given class of such phenomena, a single process theory, rather than a dual process theory, is able to explain the patterns.

But this is just science. You’ve got observable phenomena and generalizable patterns within those phenomena.  Then you’ve got hypotheses about different types of processes that might explain these patterns. You should expect to see multiple, competing hypotheses at this level. This fact doesn’t invalidate the observable patterns or the ways we can use those patterns to predict and intervene in human behavior.

And it’s worth remembering that Kahneman could have done all of his work without understanding what a neuron is, or anything to do with the physical underpinnings of information processing in the brain and the body.

We shouldn’t be surprised that scientists who work more closely with the brain tend to think about these issues very differently. Philosophers of mind and cognitive science also tend to think of these issues differently.

So, my suggestion is that, even if you’re a fan of dual process models, as I am, you should be open to the possibility that at a more fundamental level, the dual process distinction may not hold up, or how we think about the distinction will be radically different from how we might think of it now.

And this is okay. There are lots of areas in science like this.

Think about the Bohr model of the atom that you learn in high school science class. You've got electrons moving around the nucleus in different orbits, or shells, that occupy different energy levels. And you can explain how atoms absorb and emit photons of radiation by showing how electrons move from one orbit to another.

It’s a very useful model, you can use it to predict and explain all sorts of interesting phenomena.
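For instance, two standard textbook formulas capture most of what the model predicts for hydrogen (this is my gloss, not quoted from the episode): the energy of the nth orbit, and the frequency of the photon emitted when an electron drops from one orbit to another.

```latex
% Energy of the nth Bohr orbit of hydrogen (standard textbook result)
E_n = -\frac{13.6\ \text{eV}}{n^2}

% A drop from orbit n_i to orbit n_f emits a photon of frequency f, where
h f = E_{n_i} - E_{n_f}
```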

But the distance between that simple model, and our modern understanding of the fundamental nature of particles and forces, as represented in quantum field theory, say, is almost unimaginable.

At that level, the language of localized particles “orbiting” and “spinning” is just a figure of speech, a way of talking about a reality that is far removed from the ordinary way we use those concepts.

We shouldn’t be surprised if something similar happens here. Except in the case of the mind and the brain, we don’t have anything like a fundamental physical theory, so there’s even more room for possibilities that we haven’t even imagined yet.


Episode 018 Why We Need the Argument Ninja Academy

018 – Why We Need the Argument Ninja Academy: Interview for StoryHinge Podcast

A bit of a departure for episode 018. I hope you enjoy this interview I did with Jason Vidaurri over at the StoryHinge podcast. He was kind enough to let me repurpose the audio of our interview for the Argument Ninja podcast.

On this episode I answer questions about my story, my approach to philosophy and critical thinking, why critical thinking is valuable and important, how our media environment is making it increasingly difficult to think critically for ourselves, what I think is the most glaring omission in standard approaches to critical thinking education, and why the martial arts model of critical thinking that I’m developing at the Argument Ninja Academy is such a useful model.

In This Episode:

  • A quick overview of my background (5:35)
  • When I realized I wasn’t a career academic (7:10)
  • Why I wanted to do “philosophy journalism”  (8:20)
  • Philosophy and critical thinking have never been a part of the public school curriculum (in North America) (9:10)
  • Why is this so? (10:30)
  • Why the “integration” approach to critical thinking education is a “crazy choice” (11:20)
  • The real social function of public education (12:35)
  • A definition of “philosophy” (16:00)
  • The goals of critical thinking (17:20)
  • The consequences of not caring about critical thinking (19:30)
  • How do you know if you’ve “done enough” thinking on a topic? (20:11)
  • Why critical thinkers need to pay attention to the psychology of human reasoning (28:40)
  • Philosophy has an uncomfortable relationship with rhetoric and persuasion (29:00)
  • A new scientific understanding of human reasoning (30:20)
  • The rise of the “persuasion industry” (31:30)
  • Why we need a program that teaches principles of good reasoning and the psychology of belief and persuasion (32:45)
  • The Archery model  (33:20)
  • Confirmation bias (34:00)
  • Cognitive biases narrow the focus of our attention (35:00)
  • Availability bias, and an example (36:00)
  • How lottery advertising exploits availability bias to get people to buy lottery tickets (37:15)
  • How covert influence strategies undermine critical thinking (39:40)
  • How persuasion technologies hijack our attention (40:35)
  • Origins of the Critical Thinker Academy (43:00)
  • Critical thinking skill development: how the Argument Ninja Academy differs from the Critical Thinker Academy (44:20)
  • The martial arts model of critical thinking performance and instruction (45:40)
  • Ethical issues that arise when teaching persuasion, and how the martial arts model helps us to think about this (47:20)
  • Advice for people considering making big life changes (50:15)


017 White Belt Curriculum Part 2

017 – White Belt Curriculum (Part 2): The Tao of Socrates

In episode 017 I give an update on new content at the Argument Ninja website (http://argumentninja.com), and I finish reviewing the white belt curriculum for the Argument Ninja Academy program.

The third and fourth learning modules in the white belt curriculum are titled “Socratic Knowledge” and “Socratic Persuasion”.

In this episode I also have an extended case study of a challenging persuasion case over the following issue: Do Christians and Muslims worship the same God?

In This Episode:

  • Preparing for an upcoming talk on cognitive biases and causal reasoning (2:47)
  • The Feynman Technique (5:10)
  • Why the Argument Ninja podcast is like a novel, and the Argument Ninja Academy is like the movie based on the novel (7:00)
  • I wrote 14 new articles for the Argument Ninja website (10:10)
  • A working draft of the Argument Ninja Academy curriculum (10:45)
  • All my recurring supporters on the Wall of Thanks (11:29)
  • My steering committee (12:10)
  • Relationship of the Argument Ninja program to themes often discussed in other podcasts — martial arts for the mind (Joe Rogan, Tim Ferriss, Bryan Callen and Hunter Maats, Sam Harris, Jocko Willink) (12:57)
  • Mixed Mental Arts (15:30)
  • Socratic methods (18:30)
  • A typical Socratic dialogue (18:53)
  • Socrates as the first moral epistemologist (22:00)
  • Why Socratic knowledge is valuable (22:30)
  • Socratic knowledge as knowledge of argument structure; Socratic questioning as a tool for building this structure (27:00)
  • Socratic knowledge is compatible with saying “I don’t know” (29:00)
  • Socratic knowledge and training within the dojo (31:40)
  • Socratic methods as a tool of persuasion (33:20)
  • Simple mental models to help us think about persuasion strategy (35:10)
  • Hard styles versus soft styles in martial arts and persuasion (36:00)
  • Socratic questioning as a soft style (37:30)
  • The core belief network model (38:52)
  • Persuasion strategy based on the core belief network model (40:53)
  • Socratic questioning as a tool for mapping the core belief network (42:30)
  • The bank heist model (44:30)
  • The Indiana Jones swap model (45:35)
  • Case study: “Do Christians and Muslims worship the same God?” (47:00)
  • Why the best tool for guiding a Socratic conversation is Socratic knowledge — i.e. knowing what you’re talking about (50:33)
  • Arguments against the claim that Christians and Muslims worship the same God (51:25)
  • Arguments for the claim that Christians and Muslims worship the same God (52:50)
  • Thinking about the “emotional resonance” of these arguments (57:45)
  • An example of an “Indiana Jones swap” (59:27)
  • Why I initially titled this module “Street Epistemology”, and why I changed it (1:01:40)
  • Origins of the Street Epistemology movement (1:02:34)
  • Why the Argument Ninja Academy is non-partisan with respect to ethical, political and religious beliefs (1:05:37)
  • Thanks to new monthly supporters on Patreon! (1:07:32)

Quotes:

“The kind of understanding that Socrates is trying to acquire, and that he’s testing with his questions, isn’t just knowledge of the facts, even if they’re true facts. He wants some understanding of why they’re true, what grounds them, why we’re justified in believing they’re true.”

“The Socratic method of asking questions in an open, non-confrontational way is just one persuasion tool among many, but it’s a particularly useful tool for this kind of challenge, because it’s a soft technique. It’s designed to slip past the guards and avoid triggering defenses.”

“I’m committed to creating a learning environment that isn’t partisan in any obvious way. Just like in a martial arts class. You line up at the start of class in your uniforms, you start working on your exercises and techniques, and the focus is on the program, not what race or gender or nationality you are, or what political or religious group you may belong to. That’s the environment that I want to create.”

 




Introduction

This is the Argument Ninja podcast, episode 017.

Hello everyone. Welcome back to the show. I’m your host, Kevin deLaplante.

I got an email from a fan of the show earlier this week. His name is Darrell. He says “I can’t keep silent any longer. I must have another Argument Ninja fix. When’s the next episode?”.

And then I checked and saw that it's been over a month since the last episode. So Darrell, before I start in with the second part of my overview of the white belt curriculum, let me try to exonerate myself a bit, because I actually have been pretty busy. Along with a bunch of other things, this episode has taken more time than I had expected to come together. But I'm happy with the end result, I hope you will be too.

Let me tell you what’s on the agenda. The episode has three sections. The first section is dedicated to news and updates. The second and third sections cover the remaining two modules in the white belt curriculum.

In news and updates, first I'm going to talk about a speaking gig I have coming up in Toronto. Second, I'm going to talk about the connection that this show has to some other podcasts that you may be familiar with — Joe Rogan, Tim Ferriss, Bryan Callen and Hunter Maats, Sam Harris, Jocko Willink — and a recent Skype chat I had with Hunter Maats about the “mixed mental arts” movement that he's spearheading. And third, I'm going to share some major updates I've made to the Argument Ninja website, at argumentninja.com.

Then we move on to the white belt curriculum and third and fourth learning modules. The third module is on Socratic Knowledge, and the fourth is on Socratic Persuasion. I’m subtitling this episode “the tao of Socrates” because this is the first illustration of the yin-yang complementarity of argumentation and persuasion, which is a recurring theme of mine and it’s an important theme in the Argument Ninja program.

This episode will introduce a bunch of very simple mental models to help us start thinking about the challenge of persuading someone to reconsider or change a belief that may be closely connected to their identity.  And for the sake of an example we’re going to look at attitudes and arguments surrounding this question: Do Christians and Muslims worship the same God?

And then at the very end I’m going to talk about the relationship between what I’m calling Socratic Persuasion and the Street Epistemology movement, which is closely related.

Prepping for a Talk

Okay, let’s get into news and updates since last episode.

There are a few things that have occupied my time the past few weeks.

The first is that I had to develop a keynote talk for a conference coming up in early April, in Toronto. It’s an annual professional meeting for legislative and performance auditors in Canada. This is a group whose job involves investigating and assessing the performance of public bodies, like government departments and programs. I gave a talk to this same conference last year on the subject of cognitive biases and critical thinking, and that went over pretty well so they invited me back this year.

This year there's a panel on the subject of causal reasoning and identifying the root causes of a fault or a problem. There's a whole literature on what's known as Root Cause Analysis, which has developed in fields like quality control in manufacturing, accident analysis in occupational health and safety, failure analysis in engineering, risk management in business, and so on.

I was tasked with doing a presentation on cognitive biases and debiasing in Root Cause Analysis, and reasoning about cause and effect more generally. What sorts of biases can influence our judgment about when A is the cause of B, and so on? What can we do to avoid or reduce the errors that we’re prone to in this area?

So this month I had to research and prepare this presentation, which resulted in a slide deck with about 75 slides in it. And I had to submit these materials well in advance because this is a bilingual conference, so as I’m giving the talk in English there are two slide presentations running simultaneously, a version in English and a version where all the text is translated into French, for the benefit of the French speaking attendees. The organizers need enough lead time to translate the slides and make sure everything’s formatted properly.

This was a lot of work. But as always, I use teaching as a tool for learning.

I spent a number of intense days immersed in this literature, from a variety of different fields, including the philosophical literature on the logic of causal reasoning, and tried to synthesize a story to tell to a body of 250 people, most of whom would have no prior exposure to these ideas, and make the story engaging and relevant to their practical interests. And I had to tell this story without using any technical jargon.

I bring this up because it’s an excuse for me to mention an important technique for learning and testing your understanding of a concept. Some people call this the “Feynman technique”, named after the physicist Richard Feynman. It’s a thing, you can google it.

It basically involves taking a concept and writing down an explanation of the concept in plain English, without any jargon, as though you were teaching it to someone else, a student who is new to all of this.

If you get stuck doing this, that indicates that you don't understand the concept properly yet. When someone understands a concept deeply, that usually translates into an ability to explain it to someone else in simple language. If you find yourself resorting to technical terms or confusing language or leaps in reasoning, that's a sign that there's a problem with your understanding. You need to go back, review the source material, pinpoint your problem and work it out. And then come back and try to give that explanation again. Iterate this process until you've got it down, and test it on a real person who doesn't know the subject already.

That’s the Feynman method for learning, and you can apply it to just about anything.
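If it helps to see the shape of the loop, here's a half-serious sketch in code; the jargon list, the sample drafts, and the whole setup are invented stand-ins for steps a human actually performs.

```python
# A playful sketch of the Feynman technique as an iterate-until-plain loop.
# The jargon list and draft explanations are hypothetical, for illustration.

JARGON = {"confound", "covariate", "counterfactual"}

def has_gaps(explanation: str) -> bool:
    """Crude proxy for being 'stuck': technical terms flag the spots
    you can't yet put in plain English."""
    words = {w.strip(".,").lower() for w in explanation.split()}
    return bool(words & JARGON)

drafts = [
    "A is the root cause if the counterfactual covariate analysis says so.",
    "A is the root cause if fixing A would have prevented the problem.",
]

for draft in drafts:
    if has_gaps(draft):
        print("Stuck: review the sources, pinpoint the problem, try again.")
    else:
        print("Plain-language explanation achieved:", draft)
```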

What many students don’t realize is that this is how their teachers really developed their understanding of the subjects they teach. They learned through having to explain it to new batches of students every year.

This is one of the reasons why I love to give talks, and why I love producing videos, and why I love creating these podcasts. They’re a way for me to keep learning new things. It’s the Feynman technique repeated over and over.

Anyway, that’s the first thing that was occupying my time this past month.

Adding Content to the Argument Ninja Site

The second thing has to do with the Argument Ninja project itself, and advancing awareness of what we’re trying to do here.

I want to remind listeners that my goal with this project isn’t just to keep producing podcasts in the privacy of my home. It’s to actually make something. Something big, something that makes an impact. Something with the potential to transform people’s lives.

My vision for the Argument Ninja Academy is going to require a team of people to invest their time and talent, over an extended period of time, to make it a reality.

And to do this, it’s going to take more than just a few more monthly supporters on Patreon. Even if I had a huge audience and was making Sam Harris money on Patreon, that wouldn’t be enough. Because what I want to build requires expertise and resources that I don’t have.

In producing these podcasts, and writing articles for the Argument Ninja site, I sometimes think of myself as a writer, a novelist, who is writing a story that he hopes will one day get turned into a movie. I can write the novel all by myself, that's no problem. In this space, I'm the boss. I'm the creator of this world, I have the vision, I know what I'm doing.

But what I'm proposing for the Argument Ninja Academy is more like the movie adaptation based on this novel. It's that story, translated into a very different medium. And movies are fundamentally a collaborative medium. If it's a big project you need a producer, a director, a cinematographer, an art director, actors … a whole team.

And when you turn a book into a movie, the author of the book may not even be the best person to write the screenplay for the movie. The vocabulary of the medium puts constraints on how the story should be told. Screenplays need to be adapted by people who understand these constraints.

For the Argument Ninja Academy, I'm envisioning an online platform where people log in and are led through a series of learning experiences that, over the course of days and weeks and months, are designed to develop rather sophisticated skills in critical thinking, argumentation, communication, persuasion, and more, in an environment that is fun and engaging and demanding enough to keep people motivated to stay in the program and continue to learn and benefit from it.

The platform that I envision is going to have game-like elements, it’s going to have social and collaborative learning elements, and it’s going to reproduce, to the extent that this is possible, the look and feel of training in a martial art.

The team that is going to build this platform isn’t going to come from academic philosophy or psychology or education. It’s going to come from elearning professionals and instructional designers and gamification experts and web developers and graphic designers and web-based project managers.

So, part of my job at this stage is to try to generate interest in this kind of project, to eventually recruit people with the right expertise and resources to help make this happen.

Now, for a while the Argument Ninja website was only hosting these podcast episodes. There wasn’t really anything else there. As a web resource it wasn’t a great recruiting vehicle.

So what I’ve done over the past month is add content to the site that can better serve as an information resource for people who might be interested in this project.

First, I wrote a bunch of articles for the Argument Ninja site that are intended to get readers up to speed on my vision of what critical thinking is all about and what this project is all about.

In total I assembled 14 articles, and you can see them all right now over at argumentninja.com.

The first group is about the goals and benefits and importance of critical thinking.

There’s another pair of articles that talk about what I think is wrong with traditional approaches to critical thinking education, and that sets up the next set of articles, which are about what critical thinking education looks like when you think of it as a martial art. There’s an article on mixed martial arts and critical thinking, there’s an article on sparring and critical thinking, and so on.

Another thing I did since the last episode was work on a version of the curriculum for the Argument Ninja Academy, in terms of sequences of learning modules.

So I have a page on the website — it’s called “curriculum” on the main menu — that shows nine belt ranks, from white belt to black belt, with four learning modules associated with each belt rank.

This is just a working document and it’s very early in the process, so this is obviously going to evolve over time. But there is a logic to the progression, which I’ll be talking about on the podcast. Mostly I just want people to be able to see something that makes it easier to imagine what this program might look like.

I also added an updated “wall of thanks” in the sidebar on the Support page that lists all of my supporters who have committed some amount of dollars per month to help support this work. I’m happy to say that at the time of this recording there are over 360 names on that list — which is great in itself, thank you so much to everyone — but the list also serves as a signal to visitors, that all these people have judged that this is something worth supporting, and in that sense it’s also a marketing tool — it helps to legitimize this project in the eyes of people coming to it for the first time.

So, I’ve been trying to set things up so that my team and I have something to work with as we plan our next moves.

I know I’ve been making vague references to my “team” for a while now, which I’m now calling my “steering committee”, since that’s basically the task that’s been occupying us for a while — clarifying the audience for this project, clarifying the vision, building a plan for recruiting the right kind of talent, and so on.

I haven’t mentioned any names or been more specific because we still need to work out some things and make sure all our ducks are lined up in a row. But that’ll change in the near future, and I’ll let you know when that happens.

Mixed Mental Arts and Chatting With Hunter Maats

Okay, there’s one more thing I wanted to mention before we get on with our review of the white belt curriculum.

As many fans of this show and the whole Argument Ninja theme have noticed, there’s a small but growing movement afoot that is taking this metaphor of martial arts for the mind seriously, and a lot of this is driven by people who have grown an audience in the podcast world.

On this list I would include people like Joe Rogan, Tim Ferriss, Bryan Callen, Hunter Maats, Jocko Willink, Sam Harris, and others. I know I'm leaving people out. Politically and ideologically they cover a wide range of views, but all of these people have an interest in mental culture and its relationship to physical culture, and in analogies between training your mind and training your body.

Now, I had a Skype chat with Hunter Maats a couple weeks ago, and I want to talk about why this is significant, so let me back up a bit.

Many of you are familiar with Joe Rogan. He's a standup comic and an actor. Some of you may know him as host of the reality show Fear Factor. He's also a martial artist and an MMA commentator. And for quite a while he's been hosting a very popular podcast, the Joe Rogan Experience, on audio and on YouTube.

Joe’s podcast guests include entertainers and athletes but also authors and academics. He’s very intellectually curious, very culturally curious, and he uses the podcast to indulge his curiosity and to give a platform for views that he finds interesting and important.

Now, Joe’s show has attracted a lot of like-minded people, with his particular combination of interests — intellectual interests, cultural interests, but also an enthusiasm for athletics, physical training and combat sports.

Another public entertainer who is cut from the same mold is Bryan Callen. He and Joe are good friends. Bryan is a standup comic and an actor as well — he was one of the original cast members of Mad TV when it aired back in the mid 90s. And he’s a boxer, and he’s also been hosting his own podcast for a number of years, the Bryan Callen Show.

Bryan’s show has a smaller footprint than Joe’s, but it covers a lot of similar themes and they often share guests.

Now, for quite a while, Bryan has had a co-host on the show, Hunter Maats. Hunter and Bryan go back a long way. Their fathers were both in the international banking industry, and they both did a lot of overseas traveling when they were younger. Hunter did a degree in biochemistry at Harvard, and he worked as a tutor and co-wrote a book a few years ago on the science of learning and achievement, aimed at high school and college students, called The Straight-A Conspiracy.

So in recent years, Hunter has been pushing the Bryan Callen Show in a direction where it features more academics and authors talking about science and politics and education and critical thinking and cultural literacy.

And more recently, Hunter has helped to coin a term, “mixed mental arts”, that has become the new title of their podcast. There’s a new website, mixedmentalarts.co — that’s “c-o”, not “c-o-m” — where you can see podcast episodes and blog posts. And what’s really cool is that they’re making this very much a fan-based, user-driven platform.

The guiding idea behind mixed mental arts, as they define it, has a lot of overlap with the themes that I’ve been pushing about problems with our education system, the need to take seriously what psychology is telling us about how human beings actually think and form beliefs and make judgments, and the need to promote various kinds of literacy in the public — critical thinking literacy, cultural literacy, media literacy, and so on.

So, fans of their show noticed affinities with my show and what I’m trying to do, and invited Hunter and me to talk to each other, and that’s what we did. Shout-out to Nicole Lee for that.

We had a great chat, not surprising, and the upshot is that we’re looking at opportunities to support each other’s projects and collaborate on some new projects.

Mixed mental arts, as Hunter and Bryan envision it, is a very big tent that can include many different kinds of initiatives. The Argument Ninja Academy is just one such initiative.

I’m super-impressed with the enthusiasm of the mixed mental arts fans and supporters. I can certainly learn a lot from you about how to mobilize a fan-base.

Anyway, that’s my shout-out to Hunter and Bryan and the fans of their show, if they’re listening — you guys are awesome.

White Belt Curriculum (Part 2)

Okay, I’m going to transition now to the main topic for this episode, which as advertised is a continuation of our overview of the learning modules in the white belt curriculum that we started last episode. So if you need to pause and get yourself a snack to re-orient yourself, you’re welcome to do that.

Ready? Okay.

The white belt curriculum that I’ve outlined has four modules. There’s an introductory module called “What is an Argument Ninja?”, there’s a module that introduces the Argument Analysis sequence, and there are two modules that are organized around the concepts and skills associated with Socratic reasoning and Socratic questioning.

We talked about the first two modules in the last episode. So now I’m going to talk about these last two modules.

I’ve recently updated the names for these two modules. Same content, just different names. As of this podcast I’m calling the third module “Socratic Knowledge”, and the fourth module “Socratic Persuasion”.

Socratic Knowledge

So, let’s start with “Socratic knowledge”.

Most of us are familiar with the term “the Socratic method”. In education it’s associated very broadly with using questions and dialogue as a tool for teaching and learning.

But of course the term goes back to the Greek philosopher Socrates, who lived in the 5th century BC, and it's also associated more narrowly with his particular style of doing philosophy, which has had a big impact on how Western philosophy defines itself.

We don’t have any writings by Socrates himself. We get our information from secondary sources, the most important of which is Plato, who was a student of Socrates. Plato wrote a number of famous dialogues, and he cast Socrates as the central character in most of these dialogues.

So a typical encounter between Socrates and another character would go something like this. Socrates is looking to educate himself about a particular concept, like beauty, or courage, or justice. So he visits a person who claims to be an expert of some kind on these concepts — What is beauty? What is courage? What is justice? The person offers a first answer, and then Socrates raises a question about the answer — maybe he identifies an obvious counter-example, or an ambiguity — and in response the person modifies their answer, or offers a new one, and this process of questioning and revising answers continues.

And the usual result is that at some point the person that Socrates is talking to is unable to answer a question that seems essential to the concept, and they’re stuck. They reveal that they didn’t really know what they claimed to know.

For example, if the issue is “what is courage?”, a person might be able to identify a range of acts that we would recognize as courageous, but be unable to say what it is that all these acts have in common that makes them courageous. They can’t say what courage is in general, what the essence of courage is, because Socrates has shown that all of the proposed answers that the person has given lead to contradictions or are too broad or too narrow or are unsatisfactory in some other important way.

Now, this may not seem like a very positive result. It shows that people can be confident about their understanding of a topic, but under questioning they reveal that their understanding is flawed or incomplete, and that their confidence may be misplaced. And in a typical dialogue, Socrates doesn’t offer any positive answers of his own. So what have we gained at the end of the day?

Well, in the tradition of Western philosophy, where Socrates is regarded as kind of a hero, there are two features of this kind of exchange that illustrate something important and valuable.

The first is that it highlights a certain kind of understanding as a goal of knowledge.

The kind of understanding that Socrates is trying to acquire, and that he’s testing with his questions, isn’t just knowledge of the facts, even if they’re true facts. He wants some understanding of why they’re true, what grounds them, why we’re justified in believing they’re true.

This focus on grounding and justification is the core of the classical definition of knowledge as “justified true belief”. You can’t know that a certain claim is true if it’s not true. You can’t know it if you don’t actually believe it. And even if you believe it’s true, and it turns out to be true, that doesn’t mean that you know it’s true. To know it’s true is to have some story to tell about why you’re justified in believing it. Why you’re entitled to believe it.
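In the shorthand of epistemic logic (a standard textbook rendering, not the episode's own notation), the classical analysis says a subject S knows that p exactly when all three conditions hold:

```latex
K_S(p) \iff \underbrace{p}_{\text{truth}} \,\wedge\, \underbrace{B_S(p)}_{\text{belief}} \,\wedge\, \underbrace{J_S(p)}_{\text{justification}}
```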

This search for the ultimate ground of our knowledge is what defines the branch of philosophy called “epistemology”. Epistemology is the study of the nature and origins and justification of knowledge.

So one way of thinking about what Socrates is doing is that he's trying to figure out the ground, the justification, for our ethical and political beliefs. And for this reason, Socrates is regarded as the first important moral philosopher in the Western philosophical tradition. He's the first moral epistemologist.

Now, it may be tempting to think that the lesson to draw from these dialogues is that we should be skeptics about moral knowledge, since Socrates himself doesn’t claim to have the answers. But that’s the wrong lesson to draw.

First of all, if you have reason to believe that a certain philosophical claim is wrong, that IS a valuable form of knowledge. When an experiment falsifies a scientific theory, we've learned something important. It means we can move on, we can look for answers elsewhere.

Second, the fact that so many of these dialogues end in doubt and uncertainty is partly a function of the fact that these are deep philosophical questions that by their nature resist easy answers. What is justice? What is virtue? What is courage? These are hard questions! We shouldn’t be surprised to discover that our first stabs leave something to be desired.

Socrates is not a skeptic about knowledge. He’s just interested in a certain kind of knowledge, in a certain domain, that happens to be very difficult to acquire.

So let’s focus on the kind of knowledge that he’s interested in, and put aside for now the difficulties with the particular questions he’s asking.

Let’s grant that we can know things. But there are different ways of knowing, different ways our opinions can be grounded, and not all of them express a deep understanding. Socrates is after this deeper understanding.

And what we’re talking about doesn’t have to be esoteric or abstract. I’ll give an example.

Let’s say we’re on the outskirts of a city, and I ask you what’s the best way to get to City Hall, which is downtown. And you give me a good answer.

There are lots of different ways that could happen. You might look at the map application on your phone, and just show me the route directions. Or you might recall how you got to City Hall last year, which is the last time you made the trip, and you describe to me the route that you took.

Or, you could do both of these things, but also a third thing: you explain why certain routes that look good on the map are actually slower because of road construction projects that are going on right now.

All three of these ways of grounding your opinion might count as knowing how to get to City Hall. But the nature of the grounding, the justification, is different in each case.

In the first case, your knowledge is grounded in a reliable source, the map application on your phone. You trust the information it’s giving you. In the second case, your knowledge is grounded in your first-hand experience, the fact that you’ve been there yourself.

And in the third case, your knowledge may include the other two, but you also have a grasp of the bigger picture that makes your opinion even more valuable. You understand the broader context of what’s going on, you’re aware of relevant facts that aren’t obvious to everyone, and you’re bringing all of this to bear on the recommendation you give, which ends up being more useful and of greater value because of it.

It’s this third kind of knowledge that Socrates is going for when he’s pushing us to justify our opinions in deeper ways.

This is the kind of understanding that we expect of our most creative experts. It’s what you have when you’re not only responsive to the evidence, but you also have insight into how that evidence hangs together, into the explanation of the facts, not just the facts themselves.

In everyday life, all these different ways of knowing are important and valuable. I’m not going to give up using the map on my smartphone, it’s very convenient. I’m not going to give up relying on my personal experience. But for many important tasks, we really do need a deeper level of understanding.

If there’s a tricky operation that needs to be performed, and you’ve got a choice between the surgeon who’s done this procedure once and a surgeon who’s done it a thousand times, under a variety of conditions and different kinds of patients, who are you going to choose? The more experienced surgeon, of course, precisely because they have a better understanding of how to perform the procedure successfully under a wider range of conditions. This kind of understanding is a superior guide to action. This is the kind of understanding that grounds genuine expertise.

Now, I want to make a connection between this kind of understanding, and ideas that I’ve talked about elsewhere on the podcast.

Remember from episodes 9 and 10, we talked about “argument matrices”, and what it means to really know what you’re talking about. In those episodes I tried to show that the kind of knowledge that supports genuine critical thinking and genuine understanding is knowledge of argument structure, which I generalized with the term “argument matrix”.

Take a claim, and start asking for reasons why we should believe that the claim is true. Those reasons take the form of arguments — if such and such is true, this gives us reason to believe that this claim is also true. Socratic questioning pushes us to make explicit, or maybe even consider for the first time, the argument structure that supports our beliefs.

When Socrates pushes someone to justify the assumptions of their argument, that's increasing what I called “argumentative depth”. When he forces us to consider a completely different set of reasons to believe or not believe something, that's widening the scope of our understanding — I called that “argumentative breadth”. When you start thinking about how all of these arguments are related to one another, including both the supporting arguments and the objections and replies along different branches, that's what I called the “argument matrix” for that particular claim in question.

So we can think of Socratic questioning as a method for constructing the argument matrix associated with a particular claim, to the best of our ability. And as I said in those earlier podcasts, these are always going to be partial and incomplete.  My argument matrix will be different from your argument matrix. Mine might be deeper in some areas than yours, and yours may be broader in some areas than mine.

And they’ll be limited. They’ll have a finite number of branches, and every branch is going to terminate somewhere. Justification doesn’t go on forever. You’ll eventually hit premises that you’ll just take as given, you can’t see how you’d give a deeper justification for them. I may not be happy with where you stop, I might believe I can push it farther. Those kinds of disputes are normal.

But through the process of Socratic dialogue, where we push each other to broaden and deepen our understanding of a topic or an issue, we’re building out this argument matrix, like a crystal structure that grows and becomes more articulated over time.
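If you like to think in data structures, an argument matrix is naturally a tree. Here's a minimal sketch (the code and the example claim are my own illustration; the depth and breadth terminology is from those earlier episodes):

```python
# A minimal sketch of an argument matrix as a tree of claims. Adding
# sibling reasons widens the matrix (breadth); pushing justification
# further down a branch deepens it (depth). Example content is invented.

from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    supports: list["Claim"] = field(default_factory=list)    # reasons for
    objections: list["Claim"] = field(default_factory=list)  # challenges

def depth(node: Claim) -> int:
    """Length of the longest chain of justification below this claim."""
    children = node.supports + node.objections
    return 1 + max((depth(c) for c in children), default=0)

matrix = Claim(
    "Courage is acting rightly in the face of fear",
    supports=[
        Claim("Paradigm courageous acts involve felt fear",
              supports=[Claim("Soldiers and whistleblowers report fear")]),
    ],
    objections=[
        Claim("Some acts we call brave seem fearless"),
    ],
)

print(depth(matrix))  # argumentative depth of this very partial matrix: 3
```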

Now, it’s very important to realize that this concept of Socratic knowledge doesn’t have to result in a single, firm conviction about the issue. One could have such a conviction, and on matters where evidence is strong you might expect it. But this kind of knowledge is perfectly consistent with saying, at the end of the day, “I don’t know”. I don’t know if there’s a God or not. I don’t know what courage is, in general. I don’t know what form the ideal state should take. I don’t know whether artificial intelligence is going to be good or bad for humanity.

But when a person says “I don't know” after having built an argument matrix surrounding the issue that is both broad and deep, that's a fundamentally different thing than a person saying “I don't know” who hasn't given any serious thought to it at all. The former is an expression of a deep understanding of the issue, from a position of knowledge. The latter is an admission of ignorance.

One of the things you learn about Socrates when you study Greek philosophy is this famous story of Socrates visiting the Oracle at Delphi, and the Oracle tells him that he, Socrates, is the wisest man in Athens. Socrates is deeply puzzled by this because he says that he doesn’t know anything; he’s just a seeker of the truth, he doesn’t claim to have the truth.

But then after questioning all these so-called experts he realizes the Oracle might be right after all. Socrates is the wisest man in Athens because he alone is prepared to admit his own ignorance rather than pretend to know something he does not.

The lesson that we’re supposed to take from this is that admitting your ignorance, or more generally, being honest about the limits and fallibility of your own knowledge, is itself a form of wisdom that we should value and try to cultivate. And this is absolutely true.

But let’s not fool ourselves. Socrates really does have a great deal of knowledge, and not just knowledge — understanding. Through his conversations with these experts, he has nurtured the growth of the argument matrices that embody his understanding of these deep philosophical issues. When he says “I don’t know”, at the end of the day, after all this critical analysis, it’s not an expression of ignorance, it’s an expression of wisdom.

So, this is the picture of Socratic knowledge that I want you to think about. This is the conception of knowledge that I’m introducing to students in this learning module.

I’m going to say one more thing before we move to Socratic Persuasion. Let’s bring back the martial arts metaphor for a second.

This whole discussion we’ve had here, about the role of Socratic dialogue in constructing arguments and argument matrices, and deepening our understanding of an issue, and cultivating the right kind of epistemic humility that is borne of this understanding — this is all dojo stuff.

This is part of critical thinking education that is learned in the dojo. It’s like learning the five tenets of taekwondo, or the deeper goals of any traditional martial art. As you train in the martial art, you’re asked to learn these principles, and over time, your understanding of them, and your appreciation for them, will grow. It changes who you are, as a person. In martial arts, it’s part of the process that turns you into a martial artist. In critical thinking, it’s part of the process that turns you into a critical thinker.

But even though you’ve changed as a person, and you now carry these principles with you outside the dojo, it would be foolish to expect other people outside the dojo to follow these principles too, or respond to them in the same way that you do.

Plato’s dialogues are dramatic recreations that are designed to highlight the philosophical principles that he finds valuable and important. They’re not portraits of realistic exchanges that you would expect to encounter on the street. They’re meant as a resource for training in the dojo.

If you want to apply these principles outside the dojo, to real communication with real people, you need to reorient your mindset. You need to switch to “persuasion” mode.

And that leads us to our next topic, “Socratic Persuasion”.

Socratic Persuasion

This is the name of the fourth module in the white belt curriculum.

In both the third and fourth modules we're talking about the Socratic method of inquiry. But in the third module the focus is on using Socratic methods as a tool for acquiring a certain kind of knowledge that is essential to critical thinking. In the fourth module, the focus is on Socratic methods as a tool of persuasion, to influence what people think and believe.

This is a subject that you almost never see discussed by philosophers (with some notable exceptions, which I’ll get to). They don’t think of it as an issue for philosophy, and to be honest, it makes them feel a little impure. They tend to see this as an exercise in rhetoric, persuasion for its own sake, and as such they see it as a contaminating influence, a corrupting influence, on the pursuit of knowledge and wisdom.

But let me highlight that we’re not talking about persuasion for its own sake. I don’t even know what that would mean. We’re talking about persuasion in the service of whatever goals we may have. We can and should be critical of the goals, but it makes no sense to criticize persuasion as such. That would be like criticizing the use of physical force, period. What could that even mean? What matters is how physical force is used, and to what ends. If I use physical force to control and intimidate my partner, or rob people, or violate the rights of others, that’s bad. If I use physical force to lift a fallen tree branch off the middle of the road, or to protect myself or innocent people from harm, that’s good.

There are a lot of opportunities to talk about the ethics of persuasion in greater depth in the Argument Ninja curriculum. In this module, at this early stage, my goals are much more limited. I want to talk about Socratic methods as a tool of persuasion, but because this is the first module in the curriculum to actually talk about persuasion “outside the dojo”, I also want to use it to introduce this whole broader topic, of the psychological realities that we need to anticipate when we interact with people in the real world.

Mental Models for Thinking About Persuasion

To do this, I find it helpful to think in terms of simple mental models that capture an important concept in a way that’s easy to visualize.

We’ve seen a few of these already. The whole picture of critical thinking as a martial art is a useful mental model, because it forces us to think about critical thinking in terms of learned skill development and performance. The dojo is a mental model, because it makes us think about the ritualized spaces that we construct that support this kind of skill development. Argument matrices are a mental model. They help us understand the structure of knowledge that supports genuine understanding.

There are lots of mental models that can help us to think about the persuasion skills that we’re trying to develop.

For example, here’s another mental model imported from martial arts.

Hard versus Soft Techniques

In martial arts we often talk about hard styles versus soft styles, or hard techniques versus soft techniques. We can take this concept, this model, and apply it to persuasion techniques.

Hardness involves meeting force with force. So a kickboxing low kick aimed to break the attacker’s leg is a hard technique. A karate block aimed to break or halt the attacker’s arm is a hard technique.

Softness involves a minimal use of force to achieve a particular goal. So, redirecting an opponent’s momentum so that they lose balance and make themselves vulnerable is a soft technique. Joint locks that require relatively little force but that can immobilize an opponent can be thought of as soft techniques.

So, extending the analogy to persuasion, if someone is giving an argument and you raise a devastating counter-argument that is intended to stop it cold, that’s a hard technique.

Socratic persuasion methods tend to fall under the soft category, for an obvious reason. If I'm asking you for your opinion on a topic, and I'm responding to what you say with more questions, then I'm less likely to be perceived as trying to impose an opinion on you, because I haven't given you my opinion. The whole exchange is less likely to trigger a hard defensive reaction. But the hardness or softness of the exchange is really a function of a whole bunch of factors that include the context, the tone you set, the word choices you use, and so on.

In general, if you’re on the receiving end of a persuasive technique, and it feels like compulsion, like you’re being forced to say something or admit something that you don’t want to say or admit, that’s a hard technique. If it doesn’t feel like compulsion, if it feels like you’re saying something that you agree with, or doesn’t feel like it’s imposed on you, that’s a soft technique.

There are contexts where hard techniques make sense and do work. Interrogations and cross-examinations are hard. Honest peer review is often hard. But in general, Socratic methods lend themselves to a softer style of persuasion. And used in this way, they can be extraordinarily effective in exposing deeper layers of a person’s psychology. They are a true “ninja” technique.

But for Socratic methods to be effective the focus has to be on controlling the psychology of the encounter, without raising alarm bells or triggering a defensive posture.

The Core Belief Network Model

Now, this language I'm using, of raising alarm bells and triggering defenses, suggests another set of mental models. And you'll notice, as models of the psychology of belief, these are all cartoonishly simple and unrealistic. But that's exactly why they're valuable. They each capture an important idea that we can use to help think about persuasion strategy.

Here’s an example. I call it the “core belief network” model.

You can think of the structure of our beliefs as an interconnected network. But some beliefs in this network are more central to our identity than others. These are basic stances on who we are, what our goals are, what our place is in the grand scheme of things, how we should live, what grounds our self-worth, and so on. They're connected in such a way that if we're challenged on these beliefs, we tend to experience that as a threat to our identity, so we naturally want to resist such challenges.

So let’s imagine that these beliefs are literally at the center of this belief network, because they’re central to our identity.  And we can think of the core of this network as surrounded by defensive mechanisms that function to protect and preserve these beliefs.

Then as we move outward in our network, we encounter beliefs that are less and less central to our identity. We’re more open to revising these beliefs without feeling existentially threatened, but we still care about them.

For example, I believe that climate change is a serious problem, but it's not central to my identity. I'm happy to consider arguments that climate change is not a serious problem. But I'm going to hold those arguments to a pretty high standard, because I think there's a lot at stake if we're wrong. I'm not going to change my belief on a whim.

Now, as we move to the periphery of our network we find beliefs that we really couldn't care less about. I believe that the actress Anna Paquin was born in Canada; my wife says she was born in New Zealand. Do I care one way or the other? Not really. I can Google it and be happy with whatever Wikipedia says.

So the mental model is of a network of beliefs that is hierarchically organized so that beliefs that are more resistant to change are closer to the center and beliefs that are less resistant to change are closer to the periphery.
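
If it helps to make the model concrete, here's a toy sketch in Python. To be clear, this is just an illustration of the mental model, not a serious cognitive theory; the example beliefs, the numeric "centrality" scores, and the thresholds are all invented for the illustration.

```python
# A toy sketch of the core belief network model. The beliefs, the
# "centrality" scores, and the thresholds are invented for illustration.

from dataclasses import dataclass

@dataclass
class Belief:
    claim: str
    centrality: float  # 0.0 = far periphery, 1.0 = identity core

def resistance_to_change(belief: Belief) -> str:
    """More central beliefs are more resistant to revision."""
    if belief.centrality > 0.8:
        return "core: direct challenges tend to trigger defensive reactions"
    if belief.centrality > 0.4:
        return "mid-network: revisable, but arguments face a high standard"
    return "periphery: happily revised after a quick Google search"

beliefs = [
    Belief("Anna Paquin was born in Canada", centrality=0.05),
    Belief("Climate change is a serious problem", centrality=0.5),
    Belief("My religious tradition is who I am", centrality=0.95),
]

for b in beliefs:
    print(f"{b.claim}: {resistance_to_change(b)}")
```

The only thing the sketch is meant to capture is the ordering: the same challenge that barely registers at the periphery lands very differently near the core, and a persuasion strategy has to account for that.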

Right away, we have a simple framework that helps us think about persuasion strategy. If you want to change someone’s belief, the first question to ask is, where is it located in this network? If you’re targeting beliefs closer to the core, that’s going to require a different strategy than if you’re targeting beliefs farther from the core.

And how do you know where the belief is located in this hierarchy? It may be obvious for some beliefs, but it’s easy to misjudge these things.

I know an older gentleman who has been a practicing Catholic his whole life, and it's always been clear that being Catholic is important to his identity. But what is it about his Catholicism that really matters to him? It turns out that it's not Catholic doctrine per se, or even Christian doctrine. When you push him on this, he seems genuinely agnostic about most Catholic doctrines, including doctrines as basic to Christianity as the divinity of Jesus.

What actually matters to him, it turns out, is identification with the cultural tradition, the rituals of the church service, and the mere fact of being a member of a religious tribe that has a long history. The thought of belonging to no tribe is much more disturbing to him than the thought that his tribe may not have the correct answers to deep theological questions. He cares very little about theological questions.

Now that would likely come as a surprise to anyone who didn’t know him well. There’s a general lesson here. When thinking about a persuasion strategy, we inevitably make assumptions about what beliefs are more central than others, but it’s easy for these assumptions to be wrong. So it’s better to treat these as hypotheses that are open to testing and revision.

And getting back to Socratic methods, one of the very best ways of testing these hypotheses is through Socratic conversation that is open, respectful, curious, non-confrontational and non-judgmental. People are happy to disclose what they really care about when they feel it’s safe to do so. This information is extremely valuable, if your eventual goal is to try to influence what a person believes about a particular topic. You can think of these kinds of conversations as helping you to develop an internal model of a person’s belief network, based on real data, and not just guessing.

The Questions-as-Network-Mapping-Tools Model

Here's another mental model that I like that lets me vividly imagine this kind of information-gathering activity. If you've ever seen the movie Prometheus, the prequel to the Alien movies, there's a scene where the crew is exploring this alien structure, and they need to map the area. So they release a bunch of these floating robotic drones that travel through all the open spaces of the structure and use lasers to map their surroundings. And all this data is sent back to a central computer that uses it to build a map of the interior of the alien structure, which they eventually discover is a crashed spaceship.

Socratic conversation is a powerful tool for mapping out the structure of a person's belief network, and in particular, identifying how central or peripheral a given belief is to a person's identity. I like this Prometheus example because it's easy for me to imagine, but if you've ever had a medical procedure that uses radioactive tracers to study how your internal physiology is working, it's a similar idea.

As part of a persuasion strategy you can think of this as reconnaissance: the information-gathering stage of the mission.

The Bank Heist Model

Now, with this hierarchical belief network model in our head, we can imagine other analogies that are useful when thinking about persuasion strategy, especially when we’re targeting beliefs closer to the core.

One that I like is the model of a safecracker planning a bank heist. The goal is to break into a high-security bank vault and steal something, like a brick of gold. The vault is surrounded by layers of defenses that become increasingly tough to bypass the closer you get to the central vault. The closer you get, the more sensitive these defenses are, so that if you make a wrong move, alarms go off, lasers shoot at you, bombs explode, you name it.

So, we can think of the persuasion task, under this mental model, like an Ocean's 11 heist, or a Mission Impossible assignment. Can you get inside the bank vault and steal the brick without triggering any alarms? Or more aptly, can you get inside the bank vault, and alter, or swap out, one of the bricks, without triggering any alarms? We don't necessarily want to remove a belief; more often the goal is to alter it, change it, revise it.

The Indiana Jones Swap Model

I'm going to add one more detail to this bank heist model. If you've seen the movie Raiders of the Lost Ark, there's the opening scene, where Indiana Jones is trying to steal a golden idol that is sitting on an altar at the end of a room. It's protected by booby traps. Step on the wrong stone and an arrow shoots out of the wall at you.

He manages to get close to the idol, but he anticipates a final trap. If he lifts the idol off its base, it might be wired to detect the release of pressure and trigger a defense. So Indiana Jones pulls out a bag of sand that weighs roughly the same as the idol, and as he removes the idol, he immediately substitutes the bag of sand on the altar.

This was a good idea, but in the movie it turns out the weight wasn't exactly the right match, or the trap detected the change in some other way, because a volley of defenses is triggered and Indiana Jones has to run for his life to escape them.

What I like about this model is the idea that to make a change without triggering a defensive reaction, you may need to replace the thing you're changing with something that plays the same role, or a similar role. In my head I literally think of it as the Indiana Jones Swap Model, but I know this says more about my pop culture upbringing and fondness for genre movies than anything else.

When we’re talking about beliefs, the central idea of the core network model is that beliefs are connected to one another, so that changes in one belief can propagate through the network and impact other beliefs. That makes them hard to isolate, and it’s one of the challenges of belief revision. It’s very hard to change just one belief without disturbing other beliefs in the network. When you get to beliefs near the core, it gets even harder, because the beliefs connect to very deep attitudes.

Consider again my belief that Anna Paquin was born in Canada. Yes, that belief is connected to other beliefs, so if I found out that I was wrong, that would have some impact elsewhere. But not to anything that really matters to me.

But let’s imagine I’m an evangelical Christian and I’m asked to consider a belief that really does matter to me. I’m asked to consider whether Christians and Muslims worship the same God. Most evangelical Christians will say no, Christians and Muslims do not worship the same God. Muslims generally think the opposite; they believe that they worship the same God as Jews and Christians.

Now, in principle one could treat this as an academic question for theologians and try to look at the arguments in an even-handed way. But in practice it would be very hard for an evangelical Christian to consider these arguments without also considering the impact this issue would have on other beliefs that are central to their identity. Like the belief that the only path to salvation is faith in Jesus Christ and his atonement for our sins.

A belief like this is like the Golden Idol. Under normal conditions, you can’t change it without triggering a defensive reaction, because it’s so connected to other beliefs that really matter to a person.

However, the Indiana Jones swap does suggest a persuasion strategy. If you can find a way to swap out that belief with another one that plays roughly the same role within the ecosystem of beliefs that matter to one's personal identity, that's a strategy. That's a way of getting in, making a change, and getting out, without setting off the alarms.

I’ll come back to this example in a minute.

Socratic Knowledge and Socratic Persuasion

Okay, so we’ve looked at a bunch of mental models that can help us visualize and think about the strategic challenge of trying to change someone’s beliefs, especially beliefs that for various reasons are resistant to change.

The Socratic method of asking questions in an open, non-confrontational way is just one persuasion tool among many, but it’s a particularly useful tool for this kind of challenge, because it’s a soft technique. It’s designed to slip past the guards and avoid triggering defenses.

But how do you guide the conversation in the direction you want it to go, if all you're doing is asking more questions in response to the answers that someone is giving you? It seems like the conversation could end up anywhere.

Well in principle that’s right. It could end up anywhere. That’s actually one of the strengths of the technique, especially if you don’t know much about the person you’re talking to.

But in general you do want to guide the conversation in the direction of the target belief, the one you want to change. And then when you get close to the target, you want a strategy for making the other person think about the belief in a new way.

So let's imagine I'm talking to my evangelical Christian friend, and I want them to reconsider their belief that Christians and Muslims don't worship the same God.

You may have all sorts of persuasion tools in your toolkit, but I’ll tell you the tool that is far and away the most useful one here, when we’re trying to get someone to consider an issue discursively, using their reasoning faculties.

Are you ready? Here it is. It’s knowing what you’re talking about.

And I mean this in the sense of Socratic knowing: knowing the structure of the argument matrix that surrounds this issue. Knowing what the most common arguments are, and the most common objections to those arguments. Knowing where the strengths and weaknesses of these arguments lie.

Because if you’re familiar with the structure of argumentation and debate around an issue, it’s much easier to guide a conversation in the direction you want.

But that means you have to make an effort and do a little research.

Let’s consider this question of whether Christians and Muslims worship the same God. The standard arguments against this view come from conservative Christians, and are based on basic differences in the conception of God that they see as central to Christianity.

This is the belief that the Christian God is a triune God — three persons in one: God the Father, God the Son and God the Holy Spirit. Jesus is God the Son; he is the physical incarnation of God on earth, at the same time fully human and fully divine. And Jesus is central to the story of salvation in Christianity, because his death on the cross served as a sacrifice that atoned for the sins of all of humanity, granting us access to an eternal life in Heaven that we do not deserve and can never deserve, through our own efforts.

Muslims don’t hold this view of God or Jesus. Standard doctrine says that Jesus was a divinely inspired prophet who was born of a virgin and who revealed the will of God through his life and teachings. But he was not divine himself, he was not God incarnated in human form. And his death did not atone for the sins of humankind. Muslims believe that it still falls upon each of us to atone for our sins, if we are to be granted salvation. On this view, God is not a trinity, God is one, a unity.

So from here it seems to follow that Christians and Muslims do not worship the same God.

However, there is also a long tradition of scholarship that argues that this conclusion does not follow, even granted these differences in how God is conceived. The argument turns on a distinction: in some cases, two different descriptions refer to two different things; but in other cases, they can refer to the same thing.

Let's say you and I were both at a party, and the next day I tell you that I had this great conversation with a guy who I thought was the smartest guy at that party. And you tell me about this guy who you thought was the best looking guy at the party. We could be referring to two different people, but we could also be referring to the same person, under two different descriptions.

So for those who say that Christians and Muslims worship the same God, what they want to say is that Christians and Muslims may disagree on the attributes of this being they call God, but when Christians talk about worshipping the Lord God, and Muslims talk about worshipping Allah, those terms can still refer to the same being. In which case one can say that they do indeed worship the same God.

Now, there are a number of lines of reasoning that can support this conclusion, but I’m going to focus on one in particular, and I’ll explain why later.

The natural urge of conservative Christians is to contrast Christianity and Islam, to emphasize the differences between the faiths rather than the similarities. But these very same theological differences also characterize the relationship of Christianity to Judaism. Yet for Jews and Christians, even granting the history of Christian anti-Semitism, there is a much stronger willingness to emphasize the similarities and the continuity between the two faiths, rather than the differences, in spite of the fact that Judaism shares those very differences with Islam.

According to standard doctrine, Jews reject the divinity of Jesus as well. They don’t believe that Jesus was God incarnate or that his death atoned for our sins. Judaism, in this regard, has more in common with Islam than it does with Christianity.

Now, here’s a line of reasoning that you could use to get a conservative Christian to rethink how they view God in Christianity and Islam.

One can ask, when Christians talk about the God of the Jewish Old Testament, the God of Abraham and Isaac and Jacob, do they not think they’re talking about the same God that they themselves worship? Yes, Christians and Jews may not agree on all of the attributes of God, but they don’t think of themselves as talking about two different Gods. They think of themselves as talking about the same God.

And Jesus himself was a Jew. Is it not clear in the New Testament that Jesus thinks of God the Father, his father in Heaven, that he prays to, as the very God that his fellow Jews have historically worshipped?

When framed in this way, our intuitions say "yes", and these intuitions are pretty widespread. They're precisely why Christians are happy to use the inclusive label, "the Judeo-Christian tradition", to describe the shared theological space that these faiths occupy.

This line of reasoning is even more compelling when you focus on the person of Jesus himself. Jews and Muslims and Christians may disagree about the nature of Jesus. But no one feels compelled to say that Jews and Muslims are talking about two different people. No Jewish or Muslim or Christian scholar says that the Jesus referred to in the Koran is a different Jesus than the one referred to in the New Testament gospels. They’re not referring to two different people; they’re referring to the same person, who happens to be conceived differently in these different theological traditions.

Now, what I’ve done here is sketch out some of the branches of the argument matrix that surrounds this question of whether Christianity and Islam worship the same God.

And I think it’s clear that knowing this background puts one in a better position to have a productive conversation with a conservative Christian on this topic.

What you do not want to do is rush in all excited and throw all of this at them, and expect them to respond the way you want them to. That’s the rookie mistake. Remember, we’re operating very close to the core here. Doing that could very likely trigger alarm bells and a defensive posture.

That’s why the Socratic method, and maintaining an open, non-confrontational tone, and letting the other person lead the discussion, is such a valuable tool. If you’re disciplined about it, it will save you from 90% of the mistakes that most people make when they enter into conversations like this.

But the method is even more powerful when it’s informed by a good understanding of the psychology of belief in general, and the psychological significance of the issues in question. This is a topic we cover later in the program, but listeners to this show know that even a good argument is unlikely to be perceived as good if it doesn’t have the right emotional resonance for the audience.

In this case, the current cultural rhetoric around the relationship of Christianity to Islam is a rhetoric of conflict. The emotional resonance is negative. It emphasizes differences, it connects to fears of radical Islam, and it reinforces an us-versus-them mentality.

But the cultural rhetoric around Christianity and Judaism is largely a rhetoric of solidarity, at least among conservative Christians in North America. The emotional resonance is positive. Christians and Jews are viewed as their own cultural group with its own shared tribal loyalties, distinct from Islam.

This is why it can be a challenge to get a conservative Christian to accept that Christians and Muslims worship the same God. Because they are likely to see this as an appeal for solidarity with Islam, and that threatens to betray this shared Judeo-Christian identity.

But this is exactly why the argument strategy I outlined has a better than average chance of being considered. It takes this contentious proposition, which is initially viewed with suspicion, and associates it with something positive, namely, the good will and solidarity that Christians feel toward Jews. You’re showing them that the very same reasoning that underwrites this widely held view that Christians and Jews worship the same God, also applies to Christians and Muslims. And because that reasoning has a positive, non-threatening association in the former case, it’s more likely to carry this positive, non-threatening association over to the latter case.

For me, this is an example of an Indiana Jones Swap. We’re not swapping out beliefs per se; what we’re swapping out are negative emotional associations attached to a belief, with positive associations, so that the belief will be considered in a more positive light and won’t trigger a defensive reaction.

That’s the idea, at least. I’ve had this conversation with several conservative Christians myself, and raising the issue of the relationship of Christianity to Judaism always gives them pause, because they can see the implication of rejecting the reasoning. If you reject it in order to exclude Muslims, it seems to imply that you should exclude Jews as well, and conclude that Jews and Christians don’t worship the same God either. They can always bite that bullet, but it’s a conclusion that most would prefer to avoid if they could, and that’s exactly the emotional resonance that you’re leveraging with this argument.

Summing Up

So, I hope you’re getting a sense of the topics that I want to cover in this unit on Socratic persuasion. This is the first introduction to the psychology of persuasion in the curriculum. We haven’t done anything on cognitive biases or dual-process theories of the mind yet, but we can still get the ball rolling with a number of simple mental models. Soft versus hard persuasion techniques. The core belief network model. Using questions as tools for mapping the belief network. The bank heist model. The Indiana Jones swap.

Simple models like these allow us to start building a vocabulary for talking and thinking about persuasion strategies.

As we learn more psychology later in the program we can start using more sophisticated models that you actually see in the literature. Like Jonathan Haidt’s Rider and Elephant model, Daniel Kahneman’s fast versus slow thinking model, Dan Kahan’s cultural cognition model, and so on.

Now, on the topic of Socratic methods per se, I actually haven’t said very much about technique, because that’s a hard topic to cover in a short space. But there are some great resources available on this, which I’ll make available in this module. I want to mention one here specifically.

A Word About Street Epistemology

In my first draft of the Argument Ninja curriculum, I didn’t call this unit Socratic Persuasion. I called it Street Epistemology.  I used that term because it’s already associated with a movement to apply Socratic methods to critical thinking in real world contexts.

But I switched the name to Socratic Persuasion for two reasons. One, "street epistemology" contains a bit of philosophical jargon. My advisors flagged this, and we agreed that we didn't want to use technical terms like "epistemology" in our public-facing documents in a way that might confuse people.

And two, "street epistemology" is used to refer to the use of Socratic methods in a very specific context. My usage is broader and my goals are different.

The term “Street Epistemology” was coined by philosopher Peter Boghossian, and Peter’s agenda is clear. He doesn’t want people to believe anything on faith alone.  This is part of a larger goal of promoting atheism and skepticism about pseudoscience and the supernatural.

The book in which he coins this term is called A Manual for Creating Atheists, so the title gives you a good sense of where he's coming from. The idea is to train people in a method of Socratic conversation that can be used anywhere, but preferably face-to-face, and where the goal is to get people to rethink the epistemological foundations of their religious or supernatural beliefs. The target here isn't the beliefs themselves; it's not a manual for convincing people that there is no God. The target is the underlying view that such beliefs can be justified on faith. It's a manual for getting people to realize that faith is an unreliable method of forming true beliefs.

Peter’s book has inspired the Street Epistemology movement, which promotes these goals and the Socratic conversation techniques that are taught in the book. You can visit their home website at streetepistemology.com. The person who is most associated with the Street Epistemology movement today is Anthony Magnabosco. He does a lot of speaking, writes many of the blog posts on the website, maintains Facebook pages, a YouTube channel, and so on. But it is a community driven movement.

Peter's book, A Manual for Creating Atheists, is polarizing for sure. If you're not sympathetic to the mission, if you're a religious person yourself, you'll find the rhetoric hard to swallow. But the book has one great virtue that is important for our purposes. It has the very best discussion in the literature of Socratic conversational technique that is strategically designed to get you close to the center of a person's belief network without triggering alarms.

It’s called Street Epistemology because it’s intended to serve as a practical guide to having productive and persuasive conversations on sensitive topics, with people on the street, outside the dojo. It emphasizes skill development, and that makes it a valuable resource for the Argument Ninja program.

There's a great summary document on the Street Epistemology website that runs through the main principles and techniques, and I'll link to it in the show notes. They also have a number of YouTube videos that show actual conversations where the techniques are being applied, and those are valuable to watch as well.

They’ve also developed an app for mobile devices, called Atheos, that is basically the pocket version of the Street Epistemology guide. Development of the app was supported by the Richard Dawkins Foundation. I’ll link to it in the show notes.

Now, let me say a few words about how I would situate myself and the goals of the Argument Ninja program with respect to the goals of the Street Epistemology movement. Because there are some important differences.

The official position of the Argument Ninja program is that I don’t care what you believe when you join this program. My focus is on teaching people how to think, not telling you what to think.

However, learning how to think will inevitably have an impact on what you think. You can't develop a rich background in logic and argumentation and moral reasoning and scientific reasoning and the psychology and sociology of belief and persuasion and NOT be changed by that experience. I guarantee that it will change you.

But how any individual person will respond to this curriculum is unpredictable, and I don’t have an agenda about where it should lead. My goal is to help people become independent critical thinkers, that’s it.

So I’m committed to creating a learning environment that isn’t partisan in any obvious way. Just like in a martial arts class. You line up at the start of class in your uniforms, you start working on your exercises and techniques, and the focus is on the program, not what race or gender or nationality you are, or what political or religious group you may belong to. That’s the environment that I want to create.

But to implement that goal, I’ll end up using resources that are developed by people with more specific agendas, just because they’re really good resources. The Street Epistemology approach to Socratic conversation is an example.

Wrapping Up

Well, I think that about wraps it up for this episode. We covered a lot of ground, but the beauty of podcasts is that you can listen to them over again whenever you want. And I’ll remind you that there’s a full transcript of this podcast below the show notes over at argumentninja.com, so if you’re a reader that’s an option for you. I’ve got links to all the people and the sites I’ve mentioned.

I want to thank my new monthly supporters over at Patreon. I've added about 40 new Patrons over the past month, which I very much appreciate. I'm just under $1,000 a month on Patreon, and when I hit this goal I've promised to convert one of my paid courses at the Critical Thinker Academy into a free course, so please know that you're helping to make these resources available to people who might otherwise not be able to afford access.

As well, anyone who pledges at a level of $3 per month or higher gets complete access to all of the video courses at the Critical Thinker Academy, AND your pledge reserves you a spot in the Argument Ninja Academy when it is finally launched.

So, if you’re not already a Patron, I hope you’ll take advantage of this opportunity.

Thanks again for listening, I hope you have a great week, and I’ll talk to you again soon.

 

Read More

016 – White Belt Curriculum (Part 1)

The Argument Ninja training program that I'm developing is inspired by martial arts training principles. The curriculum is spread over nine belt ranks (white belt, yellow belt, orange belt, etc.).

In this episode I give an overview of the learning modules that make up the white belt curriculum, and dive deep into the second module, an introduction to Argument Analysis.

In This Episode:

  • Overview of the White Belt Modules (2:20)
  • Module 1: What is an Argument Ninja? (4:20)
  • The Goals of Critical Thinking (4:56)
  • We Have a Problem (5:41)
  • Solution: The Argument Ninja Academy (6:47)
  • Module 2: Argument Analysis (I) (8:35)
  • Worry: No One Talks Like This (9:00)
  • It’s About Learning the Principles (10:22)
  • Wax-on, Wax-off (11:38)
  • Definition of an Argument (14:35)
  • Demanding Clarity (20:30)
  • Vagueness and Ambiguity (22:00)
  • Example: Is Trump a Conservative? (23:55)
  • Argument Analysis Skills (26:30)
  • Comment: Argumentation vs Persuasion (28:00)
  • Example: “Make America Great Again” (29:13)
  • Wrapping Up (31:11)

Quotes:

“The Argument Ninja curriculum is unique in that it places equal emphasis on classical principles of logic and argumentation, and modern psychological understanding of how human beings actually reason and make decisions. We teach students how to reason well, but we also teach them the persuasion principles that are used in the influence industry, and how to use those principles.

Rational argumentation is fundamental to critical thinking. If you want to improve the quality of your thinking and learn to truly think for yourself, you have to learn this.

But if all you know is rational argumentation, and you think that equips you to engage with people effectively in the real world, you’re in for a rude awakening. The world will make no sense to you. It will chew you up and spit you out.

This is why at the Argument Ninja Academy we teach both skill sets, the Light Arts and the Dark Arts.”




Introduction

This is the Argument Ninja podcast, episode 016.

Hello everyone. Welcome back to the show. I’m your host, Kevin deLaplante.

On this podcast I’ve tried to argue that we desperately need a new approach to critical thinking education, one that combines classical principles of logic and good argumentation with a modern understanding of how human psychology actually works, what factors actually determine what we feel, what we believe and how we behave. When you bring these together you have a unique and powerful foundation for critical thinking which I call “rational persuasion”, the fusion of rational argumentation and the psychology of persuasion.

As regular listeners know, I view rational persuasion as a martial art. That can mean a lot of things, but first and foremost it means that I view rational persuasion not primarily as a body of knowledge, but rather as a skill set that requires training and practice to develop. Yes, there are concepts and theories to learn, but fundamentally, rational persuasion is something that you do, that you express through intelligent, skilled action.

The Argument Ninja training program that I’ve been talking about over the past few episodes is intended to teach the art and science of rational persuasion, with a focus on skill development rather than rote learning.

The goal that my team and I are pursuing is to implement this program in an online learning environment, a virtual Argument Ninja Academy, that is inspired by martial arts training principles.

The curriculum that I’ve laid out breaks the training down into a number of levels, or belt ranks, like you’d see in a traditional martial arts program. Moving forward for the next few episodes of the podcast, the plan is to systematically unpack and explore this curriculum, at least for the first couple of belt ranks.

Last episode I gave a conceptual overview of the white belt experience and how martial arts programs approach the teaching and learning of complex skills.

What I want to do now is get specific and talk about each of the modules and skill elements that we’ll be teaching students at the white belt level.

Overview of the Modules

With the current version of the curriculum, each belt rank has four learning modules. That may change in the future, but for now there are four.

(1) What is an Argument Ninja?

The first white belt module is called “What is an Argument Ninja?”. This is intended to orient new students to the program. It covers basic ground on what critical thinking is and why it’s important, the motivations for the Argument Ninja approach to critical thinking, and why the martial arts model is a useful one for what we’re trying to do.

(2) Argument Analysis (I)

The second module is called "Argument Analysis (I)". Argument literacy is central to the curriculum, and this is the first in a sequence of modules on principles of good argumentation that are distributed across the belt ranks. So at yellow belt there's "Argument Analysis (II)", at orange belt there's "Argument Analysis (III)", and so on.

(3) Socratic Questioning

The third module is called “Socratic Questioning”. This introduces students to the ideas and motivations behind the Socratic method of inquiry, which was made famous by Plato in his dialogues featuring the character of his teacher, Socrates. The method shows how to use questions to investigate beliefs and arguments and to engage a person’s higher-order thinking skills.

(4) Street Epistemology

The fourth module is called “Street Epistemology”, and it’s closely related to Socratic questioning. You can think of it as applied Socratic questioning in the service of persuasion. The term “street epistemology” was coined by philosopher Peter Boghossian, and it’s intended as a non-threatening, non-confrontational technique for engaging with other people and persuading them to critically reflect on their beliefs.

Okay, that's a quick overview of the four modules that make up the white belt curriculum. On this episode of the podcast I'm going to briefly expand on the first module, and then go deep on the second module, on Argument Analysis. That'll use up our time. On the next episode of the podcast I'll cover modules three and four, on Socratic Questioning and its application in "street epistemology".

Module One: What is an Argument Ninja?

Module one is where new students get an orientation to the Argument Ninja Academy, and get a sense of the bigger picture that motivates the Argument Ninja philosophy.

It does this by way of introducing key concepts in critical thinking, and showing why traditional approaches to critical thinking education fall short.

I’m not going to elaborate on this at length here because this has basically been the theme of this whole podcast, so if you’ve been following along you should be familiar with the story.

But for those who may be jumping in for the first time with this episode, here’s the short version.

The Goals of Critical Thinking

Critical thinking has two important goals that everyone values:

  1. To improve the quality of our reasoning and decision-making.
  2. To learn to think for ourselves.

If we lack in either of these, we suffer for it. Poor reasoning and bad decisions can lead to disaster, personally and professionally. And if we are unable or unwilling to think for ourselves, we become vulnerable to manipulation and exploitation.

Also, an informed populace capable of independent reasoning is essential to the health of democratic societies, where the public has to take responsibility for the actions of its elected leaders. So critical thinking is important for democratic citizenship.

We Have a Problem

However, with respect to critical thinking skills in the general population, and critical thinking education, we have a serious problem.

Decades of scientific studies on human rationality show that most of us are much poorer critical thinkers than we believe ourselves to be.

And the public education system has not been effective in teaching critical thinking skills. Most graduating high school seniors can’t pass a test of argument literacy. This isn’t surprising, given that even basic concepts of argument analysis aren’t taught anywhere in the public school curriculum.

In addition, the persuasive messaging of advertisers, marketers, politicians, activists, and the media has an enormous influence on what we believe, what we value and how we behave.  Yet for the most part we’re not consciously aware of the influence of this “persuasion matrix” on our thinking and behavior, or the harm that we suffer because of it.

In short, we as individuals, and collectively as a society, suffer in a myriad of ways from a lack of awareness and a lack of critical thinking skills.

Solution: The Argument Ninja Academy

The Argument Ninja Academy was designed as a solution to this problem, by providing an online platform where anyone with the interest and motivation can learn the thinking and persuasion skills that are necessary to survive and thrive in the 21st century.

The Argument Ninja curriculum is unique in that it places equal emphasis on classical principles of logic and argumentation, and modern psychological understanding of how human beings actually reason and make decisions. We teach students how to reason well, but we also teach them the persuasion principles that are used in the influence industry, and how to use those principles.

In other words, when it comes to argumentation and persuasion, we teach both the Light Arts and the Dark Arts. A graduate of this program, an Argument Ninja, is proficient in both.

This combination of skills is essential if our goal is to be able to recognize and resist the influence of the “persuasion matrix” on our thinking, and to think critically and independently for ourselves. At the same time, learning these skills means that we also know how to assert our will through the persuasion matrix, because we understand how it operates, in a way that few others do.

That’s the basic story. In this first module in the Argument Ninja curriculum, we elaborate on this story, give examples to illustrate the ideas, and so on.

In terms of skill elements for this module, we’ll use some basic quizzing tools to ensure that students are following along. You can’t get too detailed here because students won’t appreciate the details until they’re farther along in the program.

Module Two: Argument Analysis (I)

Okay, that gives you some idea of what’s in module 1. Let’s move on to module 2, which is the first in a series of modules on argument analysis.

Argument analysis is one of the pillars of the Argument Ninja program, but it's easy to misunderstand why it's so important. From our perspective, where we're focusing on practical skill development that actually makes a difference to how we reason and communicate, one might wonder why it matters at all. Argument analysis can seem quite formal and artificial, and one might worry that it's not all that useful in real-world communication, because no one actually talks the way that arguments are represented in symbolic logic or formal argumentation theory.

1. A Worry: No One Talks Like This

Let me give you an example. Here’s a simple three-line argument:

1. All whales are mammals.

2. All mammals breathe air.

Therefore, all whales breathe air.

I’ve presented this argument in what’s called “standard form”, meaning that it’s written in a standard way for purposes of argument analysis. You write each premise on a separate line, you might put a number in front of them so it’s easy to refer to them, and then you write the conclusion at the end and flag it with a word like “therefore”. So the argument is presented as a list of statements, with the conclusion at the end, and all the other statements function as the premises of the argument. Premise 1, “all whales are mammals”; premise 2, “all mammals breathe air”, therefore, conclusion, “all whales breathe air”.
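
If you like to see structure spelled out explicitly, here's a minimal sketch of this layout as a little data structure: a list of premises plus a conclusion, printed in standard form. This is purely illustrative; none of this code comes from the curriculum.

```python
# A minimal sketch of an argument in standard form: numbered premises,
# then a conclusion flagged with "Therefore". Purely illustrative.

from dataclasses import dataclass

@dataclass
class Argument:
    premises: list[str]
    conclusion: str

    def standard_form(self) -> str:
        lines = [f"{i}. {p}" for i, p in enumerate(self.premises, start=1)]
        lines.append(f"Therefore, {self.conclusion}")
        return "\n".join(lines)

whales = Argument(
    premises=["All whales are mammals.", "All mammals breathe air."],
    conclusion="all whales breathe air.",
)

print(whales.standard_form())
# 1. All whales are mammals.
# 2. All mammals breathe air.
# Therefore, all whales breathe air.
```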

These kinds of simple arguments show up all over the place in logic and critical thinking textbooks. And one of the first things you notice is how formalized all of this is, and how no one talks like this.

“Gee mom, do whales breathe air? Well son, consider the following. One, all whales are mammals. Two, all mammals breathe air. Therefore, it follows that all whales breathe air. Brilliant! Thanks mom!”

No one talks like this. So why spend time thinking about simple three-line arguments in this formalized way?

2. It’s About the Principles

Well, the answer is that what we’re trying to do when we study arguments like these is learn basic foundational principles of logic and argumentation, and the easiest way to illustrate these principles is to use simple arguments like these.

The principles themselves are not simple; they have subtleties, and they connect to philosophical issues that can be quite deep. So we use very simple arguments to make it as easy as possible to recognize and talk about the principles in action, so that we’re not distracted by the complexities of natural language.

Now, you can use formal methods to represent more complex arguments, and capture more of the way that human beings actually talk. But that’s not the reason why we study argumentation theory, from a critical thinking standpoint, and it’s not the skill set that we’re aiming for in these modules.

Our goal is to acquire a good understanding of the basic principles, understand why they are what they are, and then internalize these basic principles, to the point where we can recognize them, and apply them, to ordinary speech and to our own thinking.

That’s how this works. Through the study of arguments, and learning principles for distinguishing good and bad arguments, you’re learning a conceptual framework that develops and informs your ordinary thinking and communication skills.

If you've seen the original Karate Kid movie, it's like Daniel learning "wax on", "wax off". Daniel thinks he's just doing boring chores, waxing Mr. Miyagi's car. He doesn't see the connection to his karate training. When he gets frustrated, Miyagi reveals that Daniel has been learning defensive blocks, internalized into muscle memory, through the circular waxing motions.

When a beginning chess player studies simple chess tactics, like forks and pins and skewers, it’s the same thing. Through repetition, you internalize the patterns, so that you can anticipate and exploit these patterns in a real match.

So, even with our simple three-line argument there are lots of logical principles that one can highlight and talk about.

For example, this argument that we just gave has the property that, if all the premises are true, the conclusion has to be true; it's impossible for it to be false. If all whales are mammals, and if all mammals breathe air, then it follows as a matter of sheer logic that all whales must breathe air. If whales are a subset of mammals, and mammals are a subset of things that breathe air, then whales must be a subset of things that breathe air. You can draw the circles; there's no escaping it.

This kind of argument, where the conclusion follows with necessity from the premises, is called a “valid” argument. In logic, this term, “valid”, has a very precise meaning that picks out a very precise property of arguments. This is the standard terminology used in logic and argument analysis, and students in the Argument Ninja Academy are going to learn and use this terminology.

Now, we can also note that this argument is an instance of a general argument pattern that has the following form: All A are B, All B are C, therefore, All A are C.

So our argument has exactly the same logical structure as “All New Yorkers live in the United States”; “All people who live in the United States live in North America”, therefore, “all New Yorkers live in North America”.

This argument is also valid. And we can see, by thinking about examples like these, that what makes these arguments valid isn’t the specific content of the premises, but rather certain structural features of the argument, the form of the argument that is captured by that argument pattern: all A are B, all B are C, therefore all A are C. Any argument of that form will be valid, regardless of what you put in for the As, Bs and Cs, as long as the substitutions are consistent.
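
Here's one way to make that point vivid, if you're comfortable with a little code. Treat A, B and C as sets and search exhaustively, over a small universe, for a counterexample: a choice of A, B and C where both premises hold but the conclusion fails. This is a toy demonstration I'm adding for illustration, with the universe invented for the example; it's not a proof, but it shows the form doing all the work.

```python
# Validity is a property of form, not content. Model the categories
# A, B, C as sets and search a small universe for a counterexample to
# "All A are B; all B are C; therefore all A are C".

from itertools import combinations, product

universe = (1, 2, 3)
all_subsets = [set(c) for r in range(len(universe) + 1)
               for c in combinations(universe, r)]

def is_counterexample(A, B, C):
    premises_true = A <= B and B <= C   # All A are B, and all B are C
    conclusion_false = not (A <= C)     # ...and yet not all A are C
    return premises_true and conclusion_false

hits = [t for t in product(all_subsets, repeat=3) if is_counterexample(*t)]
print(len(hits))  # prints 0: no counterexample, whatever A, B, C contain
```

Nothing about whales or mammals is doing the work here; the subset structure is. That's exactly what it means for validity to be a matter of form.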

From reflecting on simple examples like this, we’re introduced to the distinction between valid and invalid arguments, which is an extremely important logical principle. It’s an important part of argument literacy. This is the jumping off point for the next important logical concept, which is the distinction between strong and weak arguments, which helps us talk about the kind of reasoning we do in the natural sciences, and so on.

So, to summarize: we use simple arguments so we can learn and talk about broader principles of logic and argumentation that aren’t so simple.

3. The Definition of an Argument

Now, another reason why it’s important to start with simple arguments is because the concept of an argument is actually fairly complex, and it does a lot of work for us.  This concept, expressed in the basic definition of an argument, assumes, or implies, a number of important critical thinking concepts. When you learn the definition of an argument, you’re also being introduced to these critical thinking concepts.

I’ll give you an example, but let’s start with the definition of an argument. What is it that makes a collection of statements an argument, rather than just a collection of statements?

The answer is that it’s a function of the interpretation we give it. An argument is a kind of “speech act”, something we do with language. The key thing is that we’re to imagine that some of the statements are being treated as premises, and these premises are being offered as reasons to accept another statement, the conclusion.

We can unpack this a bit more. In standard logic and argument analysis, the premises and the conclusion are assumed to be statements that can be either true or false, but not both. And we’re to imagine that in offering these premises, the reason why we should accept the conclusion is that the premises are true.

In other words, we’re saying that if the premises were true, they would give us good reason to believe the truth of the conclusion; and if in addition we believe that the premises are in fact true, then it follows that we have good reason to believe the conclusion is also true.

Actually, if we’re making all of our assumptions explicit, we also need to clarify who the “we” is. We need to assume that there’s an arguer and an arguee — someone is offering these premises as reasons to accept the conclusion, to some audience. The audience could be oneself — we can use arguments to convince ourselves to accept a conclusion. But more often an argument is directed toward an audience that is different from the arguer, and the intent is to persuade the audience to accept the conclusion of the argument, based on the reasons given.

So, right away, with this basic definition, we’ve made a lot of assumptions about the nature of this speech act. When I introduce this definition to students we always have a discussion about how it compares to our ordinary intuitive understanding of what an argument is.

It certainly doesn’t capture all of the associations we have with the term “argument”. In ordinary language we often use the term to imply that there’s some kind of emotional disagreement or confrontation, like when my daughter comes home and she’s upset and we ask why and she says she had an argument with her boyfriend. Our definition doesn’t carry any of these associations about conflict or confrontation. But it’s designed that way on purpose, to force us to focus on the quality of the reasoning rather than the emotional state of the parties involved.

The definition also includes elements that not everyone would think to include.

For example, there’s a long tradition in rhetoric of interpreting arguments as a form of persuasive speech where reasons are given, but the focus is on what makes this kind of persuasion effective, rather than whether the persuasion is justified, in the sense that the reasons given really are good reasons.

Argumentation in classical rhetoric isn’t primarily concerned with the question of justification, it’s concerned with the question of effectiveness. How can I increase the likelihood that my reasons will be interpreted as persuasive reasons, by my audience?

Our definition, on the other hand, forces us to elaborate on the concept of what it means to actually have good reasons to accept a conclusion, independent of whether the argument is successful at persuading its audience.

Again, this is by design. When we add this, we’re stipulating that this is what a theory of argumentation is about — it’s about the distinction between good and bad reasons for belief.

We here at the Argument Ninja Academy also care very much about what factors actually make an argument persuasive to an audience, and we teach these principles too. But that’s to study argumentation as a mode of persuasion. Don’t confuse that with argumentation as a theory of good reasoning. We need to remember that these are two different things.

In the Argument Analysis modules in our curriculum, we’re focusing on the distinction between good and bad reasons for belief, and teaching students how to identify bad arguments and how to construct good arguments of their own. We focus on the psychology of persuasion in other parts of the curriculum.

Now, another interesting question that often comes up when we talk about the definition of an argument is whether we really need to include the notion of “truth” in the definition.

Premises and conclusions are defined as statements that can be either true or false, but not both. Why do we need to say this? The truth of the premises is offered as a reason to accept the truth of the conclusion. Why do we need this? Why can't we just say that accepting or believing the premises is reason to accept or believe the conclusion? What do we add by saying "accept as true" or "believe as true"? And does this impose a restriction on what we can argue about? If I don't believe that moral beliefs can be true or false, for example, does that mean that it's impossible to argue about them?

These are all good questions.  A decent theory of argumentation, as it relates to critical thinking, needs to answer them. And as you can see, these questions can easily push us into philosophical territory. At the very least they force us to clarify our assumptions about what we’re doing.

The Argument Analysis modules in the Argument Ninja curriculum address these issues, but you can see how it would be easy to get lost in philosophical tangents. In the curriculum I tend to stay away from philosophical discussions that aren’t directly relevant to the goals of critical thinking, or that don’t make any difference to the actual reasoning and communication skills that we want our students to develop.

4. Demanding Clarity

Now, I said that the definition of an argument assumes, or expresses, some valuable critical thinking concepts. Here’s an example.

The requirement that premises and conclusions be expressed in the form of statements that can be true or false does impose a high standard on what we can and cannot argue about.

But this restrictiveness is actually a powerful critical thinking tool, and it can be a powerful persuasion tool, because it forces us to clarify what’s at issue in a debate and what all parties are actually saying about the issue. And it can reveal vagueness and ambiguity in our thinking, and in the thinking of other people.

Let’s back up and talk about this standard. We’re saying that premises and conclusions have to be statements that make an assertion of some kind, an assertion that can be either true or false.

What this means in practice is that for a sentence to function as a premise or a conclusion in an argument, both the person giving the argument and the intended audience of the argument, must have a shared understanding of the meaning of that sentence.

In this context, what it means to understand the meaning of a sentence is to understand what it would mean for the sentence to be true or false. That is, it involves being able to recognize and distinguish in your mind the state of affairs in which the assertion being made is true from the state of affairs in which it’s false.

Consequently, if the sentence is too vague or its meaning is ambiguous, then it can’t function as a premise or a conclusion in an argument, because in this case we don’t know what it means for it to be true or false. We literally don’t know what we’re talking about.

5. Vagueness and Ambiguity

Let me say a word about vagueness and ambiguity. There is a difference.  If I ask my daughter when she’ll be back from visiting friends and she says “Later”, that’s a VAGUE answer. It’s not specific enough to be useful.

On the other hand, if I ask her which friend she’ll be visiting, and she says “Hannah”, and she has three friends named Hannah, then that’s an AMBIGUOUS answer, since I don’t know which Hannah she’s talking about.  The problem isn’t one of specificity, it’s about identifying which of a set of well-specified meanings is the one that was intended.

Now, it’s important to realize that all natural language suffers from vagueness to some degree. If I say that the grass is looking very green after last week’s rain, one could always ask which shade of green I’m referring to. But it would be silly to think that you don’t understand what I’m saying just because I haven’t specified the shade of green. “Green” is vague, but in this context, it’s not a barrier to meaningful communication.

So, for purposes of determining whether a sentence can function as a premise in an argument, the question to ask isn’t “Is this sentence vague?”, but rather, “Is this sentence TOO vague, given the context?”.

If all I’m doing is trying to determine whether the grass needs watering or not, the specific shade of green probably doesn’t matter. But if I’ve been sent to the paint store to pick out a can of green paint that my wife wants to paint a room in our house, then specifying the shade of green really does matter.

Now, from an argumentation standpoint, what this does is force us to be clear about what’s at issue and what’s being said before we can even begin to assess arguments for and against a position. We need to make sure that there’s a shared agreement on what the position is actually asserting.

If we discover that there is no shared agreement then we shouldn’t be thinking about evaluating arguments yet.  The first thing we need to do is ask for clarification.

6. Example: Is Trump a Conservative?

Let me give you an example. Suppose the issue is whether Donald Trump is a conservative or not. Is Donald Trump a conservative?

Well, if you survey people you’ll find that the majority of people say yes, he’s a conservative. A significant minority say no, he’s not really a conservative. And another minority will say they’re not sure, if given the option.

So it looks like we have a difference of opinion about this claim, and it's tempting to go ahead and start looking at the arguments that people give to support their position.

But this would be a mistake.

If you were to ask a mixed group of people whether they think Donald Trump is a conservative, but first asked them to write down what they mean by the term "conservative", or what they think it means, what you'd discover, if you surveyed those definitions, is that there is no shared agreement among this group about what it means to be a conservative.

If you're not a political junkie, you might just associate conservatives with being Republican, and since Trump was the Republican nominee, he must be a conservative.

Or you might associate conservatism with checks and balances on the power of government, and reducing the size of the federal government.

Or you might associate conservatism with religious values, or with a certain kind of foreign policy, or a certain kind of economic policy, or a certain position on immigration.

Or, as you’ll discover if you survey people, you may not have a clear idea of what conservatism means at all.

The result is that when one person answers the question “yes”, and another answers “no”, they might be responding to two different questions. But you won’t know this unless you ask people to clarify what they mean. Without this step you’re setting up a situation where people are really arguing past each other, rather than against each other. They may actually agree with each other more than they realize, and not know it.

I’ve done this exercise in the classroom many times, not with Trump but with other political figures or government administrations. It’s very illuminating to read out to the rest of the class the various definitions of “conservative” that students are basing their judgments on. People are genuinely surprised to learn that other people are using a definition that wasn’t even on their radar.

And it’s also illuminating to see how many students are willing to answer “yes” or “no” to the question, and are quite confident in their choice, but when you push them to clarify the basis of their judgment, they admit that they don’t have a clear idea of what conservatism means. From a critical thinking standpoint, this isn’t ideal, of course. From a persuasion standpoint, it puts them in a very vulnerable position.

We elaborate on this issue of vulnerability to persuasion in the other white belt modules, the modules on Socratic questioning and so-called “street epistemology”. I’ll come back to this on the next podcast.

7. Argument Analysis Skills

Okay, let’s talk briefly about the skills that students will be expected to learn and demonstrate in this argument analysis module.

In terms of content, the focus is on the definition of an argument; the nature of statements, or propositions; the parts of an argument; the role that truth and falsity play in that definition; and the issues we talked about here concerning truth and meaning and the need for clarity.

We also talk about how arguments in natural language can differ from arguments presented in standard form, how to identify premises and conclusions in natural language arguments, and how to tell whether a sample of text actually contains an argument or not.
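
To make the idea of “standard form” concrete, here’s a minimal sketch in Python, purely my own illustration rather than anything from the curriculum, of an argument as a data structure: a list of premises offered in support of a single conclusion. The `Argument` class and its field names are hypothetical, chosen just for this example.

```python
from dataclasses import dataclass, field

@dataclass
class Argument:
    """An argument in standard form: premises offered
    in support of a single conclusion."""
    premises: list[str] = field(default_factory=list)
    conclusion: str = ""

    def standard_form(self) -> str:
        """Render numbered premises with the conclusion set off
        below a line, textbook style."""
        lines = [f"P{i}. {p}" for i, p in enumerate(self.premises, start=1)]
        lines.append("-" * 32)
        lines.append(f"C.  {self.conclusion}")
        return "\n".join(lines)

# A natural-language argument reconstructed in standard form:
arg = Argument(
    premises=["All humans are mortal.", "Socrates is a human."],
    conclusion="Socrates is mortal.",
)
print(arg.standard_form())
```

The point of the representation is just to make the parts of an argument explicit: everything above the line is a premise, everything below is the conclusion, and nothing else counts.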

We don’t get into criteria for evaluating arguments until the next module in the Argument Analysis sequence, which is at the yellow belt level. So we don’t talk about valid versus invalid, or strong versus weak arguments, until later on.

In terms of skills, students will be drilled on the key concepts using a lot of simple examples. Given an argument in natural language, can you identify the premises and the conclusion? Can you distinguish a piece of language that contains an argument from one that doesn’t? Given a sentence, can you tell if the meaning is clear enough to function as a premise or a conclusion in an argument? What’s the difference between a vague sentence and an ambiguous sentence? How can we use definitions, or stipulations, to clarify the meaning of sentences?

You can use simple text or audio or even video samples to drill these skills.
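
If you wanted to prototype drills like these in software, one classic textbook heuristic is to scan text for premise and conclusion indicator words (“because”, “since”, “therefore”, “thus”). Here’s a toy sketch in Python; the word lists and function name are my own, and the heuristic is only a first-pass filter, since real arguments can lack indicator words and indicator words can occur in non-arguments.

```python
import re

# Illustrative (not exhaustive) indicator lists. Short, ambiguous words
# like "so" and "for" are omitted to reduce false positives.
CONCLUSION_INDICATORS = ["therefore", "thus", "hence", "it follows that"]
PREMISE_INDICATORS = ["because", "since", "given that"]

def may_contain_argument(text: str) -> bool:
    """Flag text that *may* contain an argument by matching whole
    indicator words. Human judgment is still required: 'since Tuesday'
    is temporal, not inferential, and many arguments use no
    indicator words at all."""
    words = CONCLUSION_INDICATORS + PREMISE_INDICATORS
    pattern = r"\b(" + "|".join(re.escape(w) for w in words) + r")\b"
    return re.search(pattern, text, flags=re.IGNORECASE) is not None

samples = [
    "The streets are wet, and the sky is dark.",              # assertions only
    "The streets are wet; therefore, it rained last night.",  # an argument
]
for s in samples:
    print(may_contain_argument(s), "-", s)
```

A filter like this can surface candidate passages for practice, but deciding whether a passage really offers reasons for a conclusion is exactly the skill the drills are meant to build.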

One thing you learn when you become sensitive to the presence or absence of arguments is how common it is for even long pieces of published writing, or long sections of a speech, to contain plenty of assertions but no actual arguments: no reasons are offered to accept the assertions.

8. A Comment: Argumentation vs Persuasion

To wrap up, I want to come back to a point that I made earlier, about argumentation versus persuasion.

There’s a difference between learning how to reason well and learning how to be persuasive in the eyes of an audience. Skill in one doesn’t automatically translate into skill in the other.

In the Argument Ninja program, we’re going for both. But we have to treat these skills separately, at least at the beginning. The Argument Analysis modules are about good reasoning. It would confuse things terribly to get serious about persuasion when we’re still just trying to figure out the difference between a good reason and a bad reason.

And when I say confuse things terribly, I mean it.

When you switch your focus from good argumentation to effective persuasion, the world turns upside down. Up is down, black is white. What’s good is bad and what’s bad is good. It’s the Upside Down from Stranger Things.

Let me give you an example. When it comes to good argumentation, clarity is a virtue and vagueness is a vice. If the statements you’re making aren’t clear enough, you can’t argue with them or about them.

But when it comes to persuasion, it’s the opposite — vagueness is a virtue, and clarity is, or can be, a vice.

Take Trump’s campaign slogan: “Make America Great Again”.

From an argumentation standpoint, this slogan is too vague to have any precise meaning. When was America great? In virtue of what was America great? What would it mean for America not to be great? None of this is specified, so it’s impossible to argue about. Without further clarification, there’s no way to evaluate, for example, whether America is more or less great after one year of Trump’s presidency.

From a persuasion standpoint, however, the vagueness of this slogan is a virtue, not a vice. The vagueness invites people to project their own conception of greatness onto the slogan, to have it mean whatever they intuitively want it to mean.

The vagueness makes it possible to rally a diverse coalition of voters, with different backgrounds and different concerns, around the same slogan.

From this perspective, it’s a brilliant piece of persuasion.

You see many, many examples like this when you’re a student of both good argumentation and persuasion. Methods of reasoning that are treated as fallacies from an argumentation standpoint are often highly effective from a persuasion standpoint, and are widely used for that reason.

In political campaigning and in advertising, for example, constant repetition of simple, emotionally resonant but cognitively meaningless slogans is a very effective persuasion technique, and there are psychological and physiological reasons why it is.

9. Take-Away Message

So, here’s the take-away message. Rational argumentation is fundamental to critical thinking. If you want to improve the quality of your thinking and learn to truly think for yourself, you have to learn this.

But if all you know is rational argumentation, and you think that equips you to engage with people effectively in the real world, you’re in for a rude awakening. The world will make no sense to you. It will chew you up and spit you out.

This is why at the Argument Ninja Academy we teach both skill sets, the Light Arts and the Dark Arts.

Wrapping Up

Okay, that about covers what I wanted to talk about on this episode. We’ve looked at two of the four white belt modules in the Argument Ninja program. Next episode I’m going to talk about the remaining two modules, on Socratic Questioning and “street epistemology”. And I’ll say more about the principles I use for deciding what modules go together in a given belt rank. It’s not random; there are reasons why the white belt curriculum starts here rather than somewhere else.

You can find a complete transcript and show notes for this podcast over at argumentninja.com.

This podcast does need support. I’m the only one on my team who doesn’t have a separate full-time job. This is what I’m doing full-time, and until this program starts generating money, I’m relying on the support of listeners like you.

You can support the podcast, and the creation of the Argument Ninja Academy, and earn yourself a reserved seat in the program, by pledging a small monthly amount and becoming a patron. You can do this through Patreon, at patreon.com/kevindelaplante. Or you can visit the support page at argumentninja.com. Oh, and if you become a monthly supporter, you also get access to the entire video course library over at the Critical Thinker Academy, at criticalthinkeracademy.com. That’s a huge deal.

You can also help to spread the word by leaving a rating and a review on iTunes, and by sharing links to podcast episodes, or the Critical Thinker Academy website, on your social media channels. If you’re on Facebook you can follow discussions at facebook.com/criticalthinkeracademy. I share links to the podcast episodes there as well.

Thanks for listening, I hope your week goes well, and I’ll see you next time.
