004 – The Classroom is My Dojo


In This Episode

I explore reasons why standard critical thinking textbooks say almost nothing about the psychology of human reasoning and persuasion.

  • argumentation as rhetoric vs argumentation as tool for philosophical reasoning
  • why Plato was so hard on the Sophists
  • what it was like being socialized into philosophy as a student
  • the martial arts training hall as a ritualized space
  • why the philosophy classroom is like a dojo for training in the martial art of rational argumentation
  • understanding the rules inside the dojo vs the rules outside the dojo
  • critical thinking texts as martial arts training manuals
  • argumentation and the dream of universal reason
  • why critical thinking needs both approaches to argumentation

Quotes:

“I can’t expect a stranger to honor the rules of rational argumentation any more than I can expect a guy strangling me in a street fight to automatically release his grip if I tap out.”

“We can’t make all of society our dojo, but we can teach techniques that can make us better prepared for life on the street. It’s time that critical thinking education did the same.”




This is the Argument Ninja podcast, episode 4!

Hi everyone and welcome to the Argument Ninja podcast. I’m your host, Kevin deLaplante, and I’m a philosopher and critical thinking educator.

You can go to argumentninja.com to learn more about this podcast, show notes for each episode, my background, and my other online projects, including the Critical Thinker Academy, which is a site that hosts video tutorials on a wide range of topics related to logic, argumentation and critical thinking.

In the last episode we looked at a case study on the ethics of persuasion. When is it okay to intentionally use persuasion techniques that operate unconsciously, to achieve your goals?

I’ll have a lot more to say about this question, because it’s really fundamental to my project, but in this episode I want to return to an issue that I touched on in the first episode.

This is about the disconnect that I see between traditional ways of teaching logic and argumentation and critical thinking, and the psychological reality of how people actually form beliefs and what actually motivates people to change their mind.

At many universities you can take a full 40-hour course in symbolic logic, and a full 40-hour course on critical thinking, and get no exposure to basic concepts in classical rhetoric and persuasion, no exposure to the literature on cognitive biases and human reasoning, and no exposure to the social psychology literature on why seemingly irrational beliefs and behavior persist in different social groups.

I’m not kidding. No exposure, none.

Yes, there are exceptions, and in different textbooks you’ll see passing references here and there, a sprinkling of material on some of these topics … but as a generalization it’s still true.

I said it in the first episode and I’ll say it again: this is a disaster for critical thinking education.

In this episode I want to talk about how this situation came to be.

And I want to talk about this from my perspective as someone who taught logic and critical thinking in university philosophy departments for many years, in exactly this way, covering no material on rhetoric, no material on persuasion, no material on cognitive biases, no material from the social sciences.

I only started to wake up after teaching this way for seven or eight years. I slowly started adding extra material to my courses from these different sources. By the time I left academia, in 2015, after almost 20 years as an academic philosopher, my critical thinking courses were about 50% standard logic and argumentation, and 50% material from these other sources.

But I had to create my own reading packages to do this. The standard critical thinking textbooks weren’t any help, and for the most part they’re still not much help.

So, what’s going on here?

Well, the first thing to realize is that critical thinking education at the college and university level, where these courses are offered at all, has become the responsibility of philosophy departments, or in smaller colleges, humanities departments with a few philosophers on staff. There are exceptions, but this is generally the case.

And the second thing to realize is that historically, the dominant trend in Western philosophy has been to distinguish logic and argumentation from rhetoric or psychology.

Philosophers have tended to believe that philosophy, as a discipline, has a special claim on logic and argumentation. That in a certain way it “owns” these fields, because philosophy is uniquely concerned with the foundations of knowledge and standards of correct reasoning.

So, the separation that I’m pointing to, between the aims of logic and argumentation as philosophers have understood them, and these other branches of the humanities and social science, is actually a feature, it’s not a bug.

There’s a story to tell about why this is so, and I think this story needs to be understood and appreciated if we’re going to move past it and develop a more integrated, multi-disciplinary approach to argumentation and critical thinking.

And just a heads-up: I’m not going to crap all over philosophy and say that its approach to logic and argumentation is mistaken or misguided.

In fact, I want to defend it.

What needs to be crapped on is the idea that this approach, by itself, can serve as a foundation for effective argumentation in the social environments where most of us live.

But the aims, the goals, of the philosopher’s model of argumentation, are vitally important. They need to be part of the package of concepts that we teach when we teach critical thinking. They just can’t be the only concepts we teach.

Much more on this later.

Oh, and for those who enjoy the martial arts analogies, I promise there’ll be one here. I’m calling this episode “The Classroom is My Dojo”, and there’s a reason for that.

For now, let’s start at the beginning.

If you want a quick one-sentence definition of “rhetoric”, you can say that it’s the “art of persuasive speech”.

Rhetoric is about the various ways we can use language and other forms of symbolic communication, to persuade an audience.

The study of persuasive speech goes back thousands of years.

Argumentation — understood as a type of rhetoric, a type of persuasive speech — has also been studied for thousands of years.

When it’s studied like this, you have to treat argumentation as a deeply psychological and social practice.

Why? Because it’s about offering reasons for a particular audience to accept a particular conclusion, or agree to a particular course of action, in a particular social and historical context.

In the West, we see the first systematic teaching on argumentation and persuasion with the ancient Greeks.

This is partly because Greek democracy in the 5th century BC placed a premium on a man’s ability to deliver a persuasive speech.

Political governance and decision making involved someone getting up in front of an assembly and making an oral case for a particular point of view, and winning the support of the majority.

In Greece, around the second half of the 5th century, a whole new profession popped up that offered to teach the art of persuasive speech, sometimes for a fee.

These traveling instructors would show you how to argue persuasively on any subject — ethics, philosophy, science, art, whatever — not just political topics.

In Greek philosophy, these teachers of argumentation and rhetoric were called Sophists.

The term “Sophist” derives from the Greek words for “wisdom”, sophia, and “wise”, sophos.

The Sophists claimed to be wise, and to teach wisdom.

Now, there is no doubt that there were some really smart, educated guys among the Sophists. But they had a mixed reputation among the Athenians.

Their critics were bothered that the focus of their instruction seemed to be how to be persuasive in whatever field or topic you chose, on whatever side of an issue you chose.

Plato featured the Sophists in several of his dialogues, and his student Aristotle talked about them as well. Their historical reputation has certainly been colored by the way they’re presented in Plato and Aristotle, the two most influential philosophers of antiquity.

Plato, in particular, had a very negative view of the Sophists. He distinguished the use of argumentation in the service of persuasion, from the use of argumentation in the service of truth and wisdom and virtue, and he charged the Sophists with indulging in unscrupulous and fallacious reasoning, for persuasive effect.

This charge has stuck. Over time the term “sophistry” has come, by dictionary definition, to mean the deliberate use of fallacious reasoning for persuasive effect.

Now, as a matter of historical scholarship, this is almost certainly an overly reductive and unfair characterization of what the Sophists were doing.

But for whatever reasons, Plato’s judgment had a huge influence on how subsequent generations viewed the Sophists.

Now, whether this judgment was fair or not, it did help to create an identity for Western philosophy, as fundamentally about the search for true wisdom, not just the appearance of wisdom.

This distinction, between a good argument, and a persuasive argument, has become fundamental to philosophy.

The goal of argumentation, on this view, isn’t persuasion for its own sake — it’s persuasion for good reasons.

Consequently, philosophers have spent a lot of time thinking about what constitutes good reasons to believe something.

This approach to argumentation treats it as a fundamental tool of philosophical reasoning, and by that I mean a tool for exploring the logical implications of our beliefs, justifying our beliefs, and uncovering truth and falsehood.

This is what I meant when I said that philosophers feel that they have a special claim on logic and argumentation, that philosophy “owns” these fields in a way that no other discipline does.

Plato’s concern was that if philosophers focus too much on the rhetorical dimensions of argumentation, they risk losing sight of these larger philosophical goals.

Fast forward 2500 years, and the situation hasn’t changed much.

Philosophy has largely followed Plato’s lead in that for the most part philosophers don’t study rhetoric and don’t teach rhetoric, and generally don’t have a positive view of rhetoric, because of its perceived association with persuasion and manipulation at the expense of truth.

So, as a philosophy student I was required to study formal logic. And there we learned about different systems of formal reasoning and how to symbolize natural language sentences in these different systems, and how to evaluate the logical structure of arguments expressed in these logical systems.

And in the first philosophy class I ever took, we were assigned a textbook called Logical Self-Defense, written by philosophers, which was quite popular as a critical thinking text.

That book covered basic concepts in argument analysis, it had a big section on informal fallacies of reasoning, and to its credit, a big section on critical thinking about the media and advertising.

What a text like this does, basically, is show you how human beings routinely violate norms of good argumentation, in the hope that you, the reader, will be better equipped to detect these violations when they occur.

This is all great as far as it goes, but as I said earlier, texts like these say almost nothing about the psychology of human reasoning, about the cognitive processes that underlie human behavior, about the social conditions that influence human behavior and human judgment — in short, they say almost nothing about human nature that is relevant to understanding how argumentation actually operates in the real world.

I didn’t know this of course. I thought I was learning everything there was to learn about how to reason well.

And I was thrilled.

I was a keen student, and like many keen students who are exposed to a little logic, I started to notice fallacies everywhere — it’s like you’ve been given glasses that let you see things you’ve never seen before.

And I was thrilled with the kind of discussions we had in my philosophy classes, where the whole focus was on reading for the argument, reconstructing arguments, criticizing and revising arguments.

Any topic was fair game. We talked about arguments for and against abortion, pornography, infanticide, terrorism, war, belief in God, whether we have a soul, the morality of capitalism vs Marxism, you name it — with no worry about offending anyone’s sensibilities based on the subject matter alone.

And everyone understood the rules of the game. If an argument entailed a contradiction, or relied on an assumption that was false or dubious, everyone, students and teachers alike, understood that that was a problem that needed to be resolved, not dismissed or ignored.

As students, we learned to admire well-crafted arguments, and well-crafted counter-arguments that stayed on topic, that didn’t dodge the issue or change the subject.

We came to regard a clever, compelling counter-argument as a beautiful thing. It takes skill to come up with them. As philosophy students we learned to enjoy and value the dialectic of argument, objection, reply, rebuttal, and so on.

It was like studying chess and learning basic chess moves and strategy, and then studying classic chess matches and learning how brilliant people applied these strategies, and invented new strategies along the way.

And we learned not to mistake criticism of the argument for criticism of the person giving the argument.

We also learned that philosophical argumentation is intended to be a social thing, a public thing, that you conduct within a community.

You create an argument with the expectation that you’ll present it to an audience. And the responsibility of the audience is to interrogate the argument as forcefully as possible, to test for strengths and weaknesses, and to test one’s ability to defend the argument against criticism.

All academic fields are public and open to peer review, but philosophers rightly have a reputation for being especially forceful in their interrogation.

I remember a chemist friend of mine visiting me at a philosophy conference and sitting in on a session. The speaker had about 40 minutes to deliver his presentation, and then the audience had another full 40 minutes to ask questions.

My chemistry friend had never seen anything like it. First of all, he’d never heard of a speaker getting this much time for their presentation. He was used to 20 minutes max, and sometimes he’d only get 10 minutes at a conference to deliver his presentation, with 5 minutes of Q&A.

But this was 40 minutes of Q&A. 40 minutes of a room full of people taking turns criticizing one or another aspect of the argument, often engaging in lengthy exchanges with the presenter, following a chain of reasoning and allowing the other person to reply and ask follow-up questions.

If you’re an outsider, this experience can feel very confrontational, very stressful, like being in a boxing ring for 40 minutes with a bunch of fighters lined up to take turns on you.

My chemist friend was fascinated by the whole thing, but at one point he leaned over to me and asked “is it always like this?”, and I had to answer “yes”, most of the time — this is what peer review looks like in philosophy.

And I had to reassure him that most of the time, there’s no hard feelings. Of course people can be rude and unreasonable, and no one appreciates that, but no one trained in philosophy is bothered by the idea of having their arguments stress-tested in this way.

In fact, we appreciate the feedback enormously. We don’t want to defend bad arguments. We appreciate it when weaknesses are brought to light.

But more than that, most of us take great pleasure in the exercise itself. It can be exhilarating to be a part of, and exhilarating to watch, if you’re into the subject.

There’s definitely a performance element to it. You’re presenting your work to an audience, and people want to see how well you present it and how well you handle objections.

And there’s a game-like combat element to it.

I’m not the first one to point this out — it’s not unlike sparring in martial arts.

But the test is occurring on two levels, simultaneously.

On one level you’re testing an idea, an argument. You stress-test it to identify weaknesses and improve it.

On another level, you’re testing yourself, how well you perform “in the ring,” so to speak, in front of a real opponent, not just an imaginary opponent.

This is what it means to be socialized into academic philosophy, as a profession.

And let’s not forget, this is an academic profession.

To succeed as an academic philosopher, you need to do original research that is subject to peer review and that passes the test of peer review.

You measure success by your ability to create arguments that are judged to be worthy of publication.

That’s what the PhD degree is designed to do — to get students to a point where they can produce original scholarship that is recognized by peers as making a contribution to the field, and that can pass the test of peer review.

Now, let me back up and remind you why I’m talking about this.

I’m trying to shed some light on why critical thinking textbooks written by philosophers and taught by philosophers, focus almost exclusively on principles of logic and argumentation and say very little about psychology or how persuasion works in the real world.

In some sense, I think all philosophy students are aware of this disconnect.

You just have to go home after class and try to have a conversation with your parents or your friends about what you talked about in school, and see how quickly you can get people upset.

You know that principle we learn, about not mistaking criticism of the argument for criticism of the person?

Well, in the real world most people feel quite the opposite.

If you criticize someone’s ideas, most people will interpret that as an attack on them.

Their shields go up, and they’ll assume a defensive posture.

They’re not going to thank you for pointing out the weaknesses in their position.

There isn’t a philosophy student alive who hasn’t had this experience, of walking into a conversation feeling like you’re going to help people work through an argument, like you do in class, and you end up making people mad at you.

So, given this reality that everyone in philosophy (and many people outside of philosophy) can relate to, why isn’t this discussed in the critical thinking textbooks, or in logic classes?

Why isn’t the psychology of belief and persuasion part of the discussion of what it means to give a persuasive argument?

Well, I think there are two reasons for this.

One has to do with what I talked about earlier, about the historical legacy of the Sophists, the suspicion that philosophers have about rhetoric, and their commitment to argumentation as a tool of philosophical reasoning.

But the primary reason, I think, has more to do with the socialization of students within academic philosophy, and the socialization of professional philosophers that I just described.

Let’s ask ourselves — what is the environment that teacher and students find themselves in, when studying philosophy?

It’s the controlled environment of the philosophy classroom, with all the conventions and expectations that come with it, that students are socialized into, starting from their first day in class.

In the classroom, principles operate that don’t operate outside of it.

In that space, everyone agrees that the goal of reading a text is to extract the argument and subject it to critical analysis, in accordance with certain rules about how that analysis should go.

In that space, we try hard to distinguish criticism of an argument from criticism of a person.

In that space, a failed argument is just as instructive as a successful one.

In that space, there’s agreement that what we’re trying to do, as a group, is ultimately to gain some wisdom on a topic that matters to us. It’s not to win arguments.

Which means that in that space, in the classroom, you can get away with saying and doing things that you could not reasonably expect to get away with in the world outside the classroom.

In this sense, I submit that the classroom is very much like the training hall of a traditional martial art.

These training halls are ritualized spaces … some might describe them as sacred spaces … where respect for the principles and goals of the martial art are built into the rules that govern the space.

There are rules for how to enter and exit these halls, what you’re allowed to wear, what you’re allowed to say, how you address the other students and your instructors.

In taekwondo, for example, you bow when you enter and bow when you leave. When you step on the mat you address one another as “sir” and “ma’am”. You raise your hand to ask a question. If you’re late you need to be given permission to join the group by the instructor, you can’t just walk in.

After any pair practice between students, you bow in a particular way and shake each other’s hand, and say “thank you sir” or “thank you ma’am”.

There are rules for safe training that everyone learns and must abide by, or they’re forced to leave.

I could go on, but you get the idea.

And most importantly, no one expects the rules of the martial arts training hall — the dojo, the dojang, the kwoon, the akhara, whatever name you give it — to apply outside the hall.

There’s no confusion about that.

The academic classroom is a ritualized space, just as much as the martial arts training hall.

The philosophy classroom is a particular kind of ritualized space, where the teacher establishes and enforces rules that express and reinforce the principles of the discipline.

That’s the space that I experienced as a philosophy student.

And here’s my point. In the classroom, because it is a ritualized space, as I said, you can get away with saying and doing things that you could not reasonably expect to get away with in the outside world.

Now, what does this have to do with how critical thinking texts are written?

What I’m saying is that these texts are written much like the official training manuals for a particular martial art.

What they teach you is the principles and practices of the martial art, within the idealized environment of the training hall, not the noisy public world outside the hall.

Critical thinking texts teach the principles of logic and argumentation that are the backbone of the Western philosophical tradition, which emphasizes argumentation as a tool for philosophical thinking.

In short, they’re teaching students what philosophical reasoning looks like, and how to do it.

But this is crucial — they’re teaching students what philosophical reasoning looks like, and how to do it, in a space where these principles will be shared and honored.

And it works, to the extent that one can successfully create this ritualized space where everyone agrees to follow the rules.

When you play a sport, you have to find a way to ensure that everyone follows the rules, or at least incentivize the players to follow the rules. Otherwise you can’t play the sport.

In the world of academia, the rules are built into the social and professional structure of the academic discipline. I have to follow the rules if I want to keep my job, earn the respect of my peers and advance in my career.

In a classroom environment of a college or university philosophy program, the rules are established by the conventions of the discipline and by the leadership and example of the instructor.

With the right support in place, these norms are usually not hard to establish or maintain.

But it’s not guaranteed. I’ve seen them break down.

If a class is really badly managed, it can break down. If there are ideologically motivated students in the class who are committed to challenging the rules and disrupting the environment, it can break down.

There’s a lesson here: a culture that respects the rules of philosophical debate and argumentation doesn’t happen on its own. It takes work and effort and vigilance, by a community, to maintain.

But in the wild world outside the classroom? At home, on the playground, at your work place, in the media, on the internet, on the streets, in the halls of government?

You can’t expect these rules to apply.

I can’t expect a stranger to honor the rules of rational argumentation any more than I can expect a guy strangling me in a street fight to automatically release his grip if I tap out.

So, getting back to these critical thinking texts, we need to ask some questions.

If it’s as obvious as I say it is, that these texts are inadequate to prepare students for how to argue persuasively in the wild, why doesn’t the field recognize this?

Why don’t textbook authors acknowledge that what they’re really teaching is an idealized form of intellectual debate, that only works within social environments that support these intellectual values?

Well, there are probably a number of different factors at play, but let me describe how I felt about teaching this material for many years, because I think it reflects how most philosophy instructors think about it.

I think we’re all just a little bit dazzled by the universality of what we study, and by the dream of universality.

I know I was, and I still am.

When you’re first learning the elements of argument analysis, it really does seem like you’re learning something that has universal scope and significance.

We start off giving a few examples of arguments, and then we quickly move to the general question — what do all arguments have in common, that makes them arguments?

And we give an answer — an argument is a collection of statements or propositions, one of which is singled out and called the conclusion, while the others are called the premises, and the premises are offered as reasons to believe or accept the conclusion.

Then we dig deeper into the components of this definition. What is a proposition? How do propositions differ from other linguistic expressions? What does it mean to offer reasons? What does it mean to offer good reasons? And so on.

And when we start talking about fallacies of reasoning, these are framed as general patterns that show up everywhere that human beings communicate.

This is all so general, so abstract, that it’s easy to believe that it’s an ahistorical description of a universal feature of human reasoning, if not rationality itself.

In fact, it’s such a compelling notion that for long stretches of Western intellectual history, philosophers and theologians have assumed that the basic principles of logic are universal rules of rational thought, and that the universe itself, to the extent that it’s a rational, intelligible universe, should conform to these rules as well.

That’s what I would tell my students, trying to sell them on the philosophical significance of logic and argumentation, not just its practical usefulness.

And as a philosopher of science, I was also very aware that when I was teaching logic and argument analysis, I was setting up a conceptual framework that I would later use to talk about the logic of scientific reasoning, and how scientific inferences can be justified.

There’s a case to be made that modern science as we know it wouldn’t exist without this dream of universal reason animating it, and the logic of scientific reasoning driving the assessment of evidence and the acceptance and rejection of different scientific theories over time.

That was another way to sell this material to students — to show them how important these ideas were, what roles they played, in the intellectual history of the West.

So as a philosophy teacher, I was perfectly comfortable teaching this material in the standard ways it had been taught, because I believed all of this.

And with some qualifications and caveats, I still believe it.

The study of argumentation, as a tool for philosophical reasoning, does tell us something universal about the nature of logic and rationality.

It does describe ideas that have had a huge influence on the intellectual history of the West, and on the birth of modern science.

And it does pave the way for a deeper understanding of modern developments in lots of different fields outside of philosophy, like mathematics, linguistics, computer science, artificial intelligence, and so on.

So, there’s a strong case for the educational value of learning logic and argumentation and critical thinking in the ways that this is traditionally taught.

And this is the reason why philosophy instructors are generally happy to teach this material in the way it’s been taught for so many years.

They’re not completely blind to the reality of how argumentation and persuasion works in the wild. They’re aware that the material may not be all that relevant outside the classroom.

But that was never the primary aim of teaching this material.

As philosophy instructors, we emphasize the value of teaching and learning this material independent of its effectiveness or ineffectiveness as a tool for rational persuasion.

Now, I believed all this, and I still believe it now.

But having said all that … it doesn’t change the fact that when it comes to understanding human reasoning and human nature, and how real people make judgments and decisions and respond to arguments, and how to be more effective at persuading people on the basis of reasons … the standard material on logic and argumentation is completely inadequate to the task.

Well, I’ve gone a lot longer than I had intended to, so let me try to summarize the main take-away points from this episode.

The first is that historically there are two distinct approaches to the study of argumentation.

You can study it as a form of persuasive speech, as a form of rhetoric; and you can study it as a tool of philosophical reasoning. Both are about persuasion, but the philosophical approach places its focus on persuasion for good reasons, rather than persuasion for its own sake.

The second point is that the philosophical approach to argumentation is genuinely effective only within an idealized social context where the norms of rational argumentation are respected and valued.

I tried to show how the culture of the philosophy classroom and the profession itself helps to create and reinforce this social context, and why it’s foolish to assume that the world outside the classroom will respect these rules.

And I drew an analogy with the ritualized spaces of the martial arts training hall, and how the rules and principles of the martial art are embedded in these spaces, and enable students to practice the art in a supportive environment.

The philosophy classroom, in this sense, is like a dojo for training in the martial art of rational argumentation.

The third take-away point is that, even though the standard principles of rational argumentation aren’t really that helpful in understanding how argumentation and persuasion work in the wild, outside the classroom, they still have educational value and they’re still important to learn.

In particular, we need to appreciate the role they’ve played in the intellectual achievements of the West, and especially the critical spirit that animates scientific reasoning.

The main takeaway, for me, is that we need to appreciate just how much social support is needed to maintain and reinforce this critical spirit.

We reason best as members of a community that values and respects the principles of critical inquiry.

That community needs to be established and nurtured for individuals to thrive and learn and practice the principles that animate that community.

The classroom can function as such a community. Social institutions, like the institutional structure of science, or the judicial system, can help to create and maintain such communities, where certain rules of argumentation are taught and reinforced.

But argumentation in the public or private spheres outside of these intentional, ritualized communities, these sacred spaces, poses a real challenge.

You can’t make all of society your dojo.

That’s why when martial arts schools teach students practical self-defense, they approach it very differently from the way they normally teach the basic elements of the martial art.

In taekwondo you spend a lot of time learning how to kick to the head, but you would never teach that as a basic self-defense technique.

We can’t make all of society our dojo, but we can teach techniques that can make us better prepared for life on the street.

It’s time that critical thinking education did the same.

Thank you for listening, and I’ll see you next time on the Argument Ninja podcast.


003 – How to Make People Like You


In This Episode

  • Introduction to questions about the ethics of persuasion, and why this matters to my project.
  • South Park on the rules for getting bigger tips.
  • A case study: Derek prepares for a lunch date with Carla. Is it ever okay for Derek to use persuasion techniques to get Carla to like him?
  • Dale Carnegie’s six rules for getting someone to like you.
  • Robert Cialdini’s five principles for getting someone to like you.
  • Do normal social skills involve unconscious psychological persuasion?

Quote:

“The persuasion techniques that I’m talking about involve the intentional use of our knowledge of human nature for the purpose of manipulating the attitudes, beliefs and behaviors of people, at an unconscious level. If you prefer we can call it “intentional unconscious persuasion”. The shorthand that I sometimes use for this is “mind control”.”




This is the Argument Ninja podcast, episode 3!

Hi everyone and welcome to the Argument Ninja podcast. I’m your host, Kevin deLaplante, and I’m a philosopher and critical thinking educator.

You can go to argumentninja.com to learn more about this podcast, my background, and my other online projects, including the Critical Thinker Academy, which hosts over 20 hours of video tutorials on a wide range of topics related to logic, argumentation and critical thinking.

I’m using this podcast, whose full title is “Become an Argument Ninja”, as a platform for me to work out some new ideas on how to combine principles of good argumentation, with principles of effective persuasion, that are grounded in our best current understanding of how human beings actually form beliefs and make decisions.

The skill set that results from this integration, this merger, is what I’m calling “rational persuasion”.

Ultimately, what I’m trying to do with this show is develop the outlines for a program of instruction, in the art, science and ethics of rational persuasion.

What’s the connection between rational persuasion and the title of the podcast?

This expression, “argument ninja”, is a little tongue-in-cheek. Obviously I’m drawing on the recent trendy use of “ninja” to refer to any technique or strategy that is very effective, and that is usually only known to practitioners with lots of skill and experience.

If you google “ninja tips” or “ninja tricks”, you’ll get hits like “10 tips for the work-at-home ninja”, “How to Become a Photoshop Ninja”, “Ninja Tips for Healthy Living”, and so on.

So, I’m using “argument ninja” in a loose way to refer to effective argumentation and persuasion techniques that are only known to a select few.

But I’m not just using it in this loose, colloquial way.

As I’m articulating this conception of rational persuasion, I’ve found myself pushing a martial arts analogy.

And in the last episode, episode 002, I shared some reasons for taking seriously this idea that we should treat rational persuasion as a martial art, not just figuratively, but literally.

Now, in this episode, episode 003, I want to introduce another important element to our discussion of rational persuasion.

This is the ethics of persuasion, the morality of using persuasion techniques to achieve your goals.

We can ask this question independent of argumentation, and it’ll probably help to do that, at the start.

Is it ever morally okay to use your knowledge of the psychology of persuasion, to get another person to do what you want them to do, or to believe what you want them to believe?

That’s the general persuasion question.

When talking about persuasion and argumentation, the question is this:

If I’m giving you an argument that is intended to persuade you to accept a conclusion, is it ever morally acceptable for me to use my knowledge of the psychology of persuasion to influence how you respond to my argument, to make you more inclined to accept it, than you otherwise would be?

I’m going to need an answer to this, because what I’m trying to develop in this show is an approach to argumentation that integrates the standard logic-based principles of good argumentation, with psychological principles of effective persuasion.

It would be a bad start for me if it turned out that there is no morally acceptable way of doing this.

Now, to help understand why there’s any moral issue at all, let me clarify how the kind of persuasion that I’m talking about works.

The persuasion techniques that I’m talking about involve the intentional use of our knowledge of human nature, for the purpose of manipulating the attitudes, beliefs and behaviors of people, at an unconscious level.

If you prefer we can call it “intentional unconscious persuasion”.

The shorthand that I have sometimes used for this is “mind control”.

Now, that term, “mind control”, is a loaded term, in the sense that it’s almost always used to convey something negative or sinister, so I have to be careful how I use it.

But I don’t want to prejudge the issue, and I’m not defining this kind of persuasion as inherently good or bad.

These techniques aren’t exotic or mysterious. You don’t have to understand theoretical psychology or the cognitive mechanisms underlying the techniques, to learn how to use them effectively.

Many effective influence strategies are embedded in the tips and tricks that people pick up when they’re learning a job that involves persuasion, like learning to be a salesperson, or learning to be a courtroom lawyer.

Restaurant servers, for example, learn early on that a short, friendly conversation with a customer can increase the size of the tip they get.

Many restaurants leave a small gift with the bill — like a mint, or a chocolate — as a matter of policy, knowing that doing so is correlated with larger tips.

The TV show South Park did an episode that illustrates this kind of training.

The scene is set in a restaurant named Raisins, inspired by the Hooters restaurant chain. All the young girl servers are named after sports cars: Porsche, Mercedes, Ferrari, etc.

It’s Ferrari’s first day, and Mercedes goes over the basics with her.

Here’s the clip:

[Mercedes: First of all, there’s a five foot rule. If you come within five feet of a customer, you need to acknowledge them, even if they’re not at your table. “Hey, cutie.” (waves and winks) When you’re not serving food or talking with customers, you need to dance around and have fun. We use things like Hula Hoops, silly strings, and water guns to play with the other girls. Be sure to giggle a lot, and be sure to show off your raisins.

Now, when you take a customer’s order, you need to sit down at the table with them and make them think you’re interested. Write your name down for them and make them feel special. “Oh man, I am so bored. Thank God you guys came in.” If you want good tips, the most important thing is physical contact. Just a simple hold of the arm can mean the difference between five and twenty dollars. “I’ll be right back with your order, guys.” (holds Ferrari’s shoulder)]

In this episode, Butters falls in love with one of the servers because he thinks the attention he’s getting at the restaurant is sincere.

This kind of training goes on in any profession where a central aim is to get people to perform some kind of action — purchase a product, sign a donor card, vote for a political candidate, sign a contract, negotiate a treaty, and so on.

Principles of effective persuasion are learned through experience and passed down through training and mentorship, without having to be framed in the language of neuropsychology and cognitive science.

Indeed, in many contexts we don’t think of these practices as manipulative in any negative sense. We may think of an effective salesperson simply as “good with people”, not a mind controller.

Our first piece of critical thinking advice on this issue is to distinguish the descriptive issue of what does or does not constitute unconscious psychological persuasion, from the normative issue of whether any particular instance of such persuasion is good or bad, justified or unjustified.

To give another example of why we need the distinction, consider stage magic for entertainment purposes.

Persuasion techniques are used extensively in magic acts, they’re fundamental to the practice.

But in the context of stage magic the manipulations and deceptions are used to delight and entertain us, not to exploit or hurt us.

Intentions and goals matter to our assessment of these cases. We don’t condemn stage magic or mentalist acts simply because they use mind control techniques. But when the very same techniques are used to con or scam people out of money, we rightly condemn the practice as unethical.

So, let’s agree that we can imagine cases where what I’m calling mind control is unobjectionable, and we can imagine cases where it’s deeply objectionable.

Now, if you bring up the topic of mind control in the context of interpersonal relations, I guarantee you’re going to divide people.

But I think the questions that naturally arise in these situations are important ones, and they’ll push us to think harder about when using these techniques is acceptable and when it’s not.

I’m going to describe three scenarios, three different descriptions of an interaction between a man and a woman, Derek and Carla.

I want you to think about your own responses as I describe these scenarios.

Okay, here’s the first one.

Scenario 1:

Derek is a young man preparing to meet a young woman, Carla, for a lunch date. He was introduced to her briefly at a party the previous night. He likes her and wants to make a good impression.

Derek takes time to shower and shave, style his hair and pick a nice flattering shirt.

He greets Carla outside the restaurant, holds the door open for her and they go inside.

Derek compliments Carla on her shoes.

As they wait for a server he begins a conversation about the most recent episode of Game of Thrones, which he had overheard Carla discussing at the party.

They talk about their mutual appreciation for the show and various theories for how the story will unfold.

Derek asks Carla several leading questions about her background and interests, and shares a humorous story about a mutual friend.

Overall, the two have an enjoyable and engaging lunch date.

___

So, what do you think about Derek’s overall conduct in this scenario? Positive? Negative? Neutral?

Now let’s consider another version of this story.

Scenario 2:

Derek is a young man preparing to meet a young woman, Carla, for a lunch date. He was introduced to her briefly at a party the previous night. He likes her and wants to make a good impression.

Derek has recently finished reading Dale Carnegie’s classic book How to Win Friends and Influence People, first published in 1936.

He is also familiar with social psychologist Robert Cialdini’s seminal work on persuasion and influence, summarized in his 2001 book Influence: Science and Practice.

Dale Carnegie has a section in his book titled “Six Ways to Make People Like You”. Here are his six rules:

Rule 1: Become genuinely interested in other people.

Rule 2: Smile.

Rule 3: Remember that a person’s name is to him or her the sweetest and most important sound.

Rule 4: Be a good listener. Encourage others to talk about themselves.

Rule 5: Talk in terms of the other person’s interests.

Rule 6: Make the other person feel important— and do it sincerely.

Robert Cialdini’s book surveys six proven principles of persuasion.

One of these is the principle of “liking”: people are more easily persuaded by people who they like.

The chapter on “liking” discusses factors that can cause us to like someone.

Here is the list:

Physical Attractiveness – “Research has shown that we automatically assign to good-looking individuals such favorable traits as talent, kindness, honesty, and intelligence.”

Similarity – “We like people who are similar to us. This fact seems to hold true whether the similarity is in the area of opinions, personality traits, background, or life-style.”

Compliments – “…we tend, as a rule, to believe praise and to like those who provide it, oftentimes when it is clearly false.”

Contact and Cooperation – “…becoming familiar with something through repeated contact doesn’t necessarily cause greater liking. […we must be] working for the same goals…we must ‘pull together’ for mutual benefit.”

Conditioning and Association – “[Compliance professionals are] incessantly trying to connect themselves or their products with the things we like. Did you ever wonder what all those good-looking models are doing standing around in those automobile ads?”

Derek is familiar with all of these principles, both the Carnegie principles and the Cialdini principles.

He wants Carla to like him, and he sees that some of them can be applied to his upcoming date.

So, Derek does a quick Google search of Carla and finds her profile on Facebook.

He skims through the entries and notes some of the topics she mentions or likes.

He notes a link that she shared on Game of Thrones theories, and he reads the associated story.

Derek plans his date with Carla. He thinks through how he wants to present himself and how the conversation should go: clean up, remember to smile, compliment her shoes, share a funny story, bring up Game of Thrones, pay attention and listen, ….

Derek also knows from his reading that physiological arousal in human beings can be triggered by dilated pupils.

Pupils dilate during sexual arousal, and experiments show that seeing someone with dilated pupils can trigger a mirroring physiological response, which can make the person with the dilated pupils appear more attractive to you.

Under dimmer lighting, our pupils naturally widen. Derek knows this.

When he and Carla enter the restaurant he looks for a booth that is more dimly lit and leads her there, hoping that he will benefit from the pupil dilation response over the course of their lunch date.

And from here, the remainder of the lunch date unfolds as described in scenario 1.

Now, what do you think about Derek’s overall conduct in this scenario? Any different from the version described in scenario 1?

The common response when I present these scenarios is that almost no one has a problem with scenario 1. In fact, Derek’s conduct on this date seems admirable to many people. One friend of mine said “I only wish my son had social skills like that”.

By contrast, many people have a strong reaction to scenario 2. They believe there is something objectionable about Derek’s attitude and behavior as he prepares for his date with Carla.

Here are some written quotes from a survey I gave at a talk when I presented this case.

1. “Derek googling Carla to get information on her that he can use to his advantage … that’s just creepy.”

2. “In the second scenario, it doesn’t seem like Derek is treating Carla like a person. He’s treating her like an object that he can manipulate to get what he wants.”

3. “He’s being intentionally manipulative. He’s fooling her into thinking they’re having a spontaneous, genuine conversation, when it’s really not.”

4. “Derek is running a “pick-up artist” playbook, and I find it all offensive.”

However, not everyone is so judgmental:

5. “There’s nothing wrong with learning how human behavior works, and applying that knowledge. Why not use what you know to your advantage?”

6. “We plan conversations in our heads all the time, when we anticipate talking to someone and we’re a little bit anxious about it. I do that practically every time I walk into a faculty meeting.”

7. “If we can assume that Derek isn’t intentionally lying to her about anything just to get her to like him, and is genuinely interested in Carla as a person, I don’t see a problem with any of this.”
______

I find this range of responses fascinating. They point to something important about how we view ourselves, and how we want to view ourselves. What exactly this is, is something I’m going to return to in later episodes.

Now, just to add a new spin, consider this third scenario.

….

Scenario 3:

Derek has had difficulty all his life with social skills and “reading people”. He often failed to pick up on social cues, and that led to frustration and isolation.

Derek considers himself as operating at the high functional end of the autistic spectrum, closer to Asperger’s.

As a highly intelligent teenager, Derek decided to undertake a study of human social behavior, with the goal of “cracking the code” of normal human social interaction.

He read self-help and psychology books, kept notebooks in which he wrote down the patterns he found were associated with pro-social behaviors, wrote out checklists of what to do or consider in different scenarios, and worked hard at practicing these skills.

Over time Derek’s social skills improved dramatically, and he learned to function well with other people, acquire and maintain friends, and so on.

But because of the way Derek’s brain works, he still has to anticipate and plan social interactions in a more conscious and deliberate way than most people. He hasn’t internalized these principles in the intuitive, unconscious way that most people do.

At the party, Derek noticed Carla and was attracted to her. He wanted to ask her out, but was naturally anxious about this prospect.

To prepare, he consciously implemented some of the social strategies he had learned, based on his research and experience.

The rest of the date plays out as described in scenario 2.

Now, what do you think about Derek? Is he still a manipulative creep, or are you less judgmental of his behavior?

Almost everyone I ask, when given this third scenario, is more understanding and sympathetic to Derek.

Here are some quotes from that survey:

1. “Now I wouldn’t describe him as manipulative. Actually, there’s something charming about how hard he’s working to impress Carla.”

2. “The difference for me is now we understand his underlying motives better. He’s had to struggle to learn social skills that the rest of us take for granted, and this is just his way of compensating.”

3. “I still don’t like the pupil arousal thing, I still find that weirdly manipulative. But I don’t have as much of an issue with him planning out the date in the way that he did.”

So, what do you think of these cases?

What do they tell us about the factors that matter to us when it comes to judging what forms of “mind control” are acceptable and what forms are not?

I’m going to pick this up next episode, but I would love to hear your thoughts on this.

You can leave comments at argumentninja.com, episode 3.

And as an epilogue to this episode, I want to add that my description of Derek in scenario 3, where he struggles with learning social skills and has to teach himself to be more deliberate and self-conscious about his interactions with other people — that description is based on a real guy that I ran across, who wrote a real book on this very topic.

His name is Daniel Wendler, and you can watch him talk about his journey in a TEDx talk on YouTube. I’ll link to it in the show notes, but if you search “My life with Asperger’s”, “Daniel Wendler”, you’ll find it.

Here’s a short clip from that talk.

(Play clip)

I think there are lessons to be learned from this example, but I’ll save that for another episode.

Thanks for listening, and I’ll see you next time.


002 – Why Rational Persuasion is a Martial Art


In This Episode

  • The goals of critical thinking, and how critical thinking differs from rational argumentation.
  • Why the core of any critical thinking training program must include the study of rational argumentation and the study of cognitive biases and the psychology of persuasion.
  • The problem for which the traditional martial arts were a solution.
  • The “martial environment” of critical thinking today.
  • Why rational persuasion should be viewed as a martial art.
  • The distinction between rational argumentation and rational persuasion.
  • An example: contradiction and inconsistency, versus cognitive dissonance.

Quote:

“The warrior knows things that no one else knows, valuable things that can inform our understanding of what it means to live a good life, even for those of us lucky enough to never personally experience war or violence.”



This is the Argument Ninja podcast, episode 2!

Hi everyone and welcome to the Argument Ninja podcast. My name is Kevin deLaplante, and in this episode I talk about the relationship between critical thinking and rational persuasion, and why, even though I identify as a critical thinking educator, and I have a website called The Critical Thinker Academy, the focus of this podcast is rational persuasion, rather than critical thinking more broadly.

I also push the martial arts theme a little further, and give some reasons to think of rational persuasion as a martial art.

And I give an example that illustrates the distinction I want to draw between rational argumentation and rational persuasion.

So, just to recap, in the first episode I mentioned six different components, or skill sets, that are relevant to critical thinking, but I said that if I was forced to identify the two most important skill sets that should be included in any instructional program on critical thinking, the first would be skill in rational argumentation, and the second would be an understanding of cognitive biases and debiasing strategies.

We need skill in rational argumentation because we need to know what it means to have good reasons to believe anything, and what it would mean to have good reasons to believe any particular claim at issue. These are the questions that theories of rational argumentation are trying to answer — what it means to have a good argument for a particular conclusion.

We need to understand cognitive biases and debiasing because much of our thinking and behavior is the result of automatic, unconscious cognitive processing that makes us predictably irrational in various ways, and we need to understand how this works, and what we can do to avoid or minimize the negative effects of the cognitive biases that we’re prone to.

So, this gives us part of the story of how I distinguish critical thinking from rational argumentation.

Critical thinking is an umbrella for a collection of skills and dispositions that are relevant to achieving certain goals. Rational argumentation is just one of those skills that fits under the umbrella.

But there’s a more important difference that I want to emphasize.

If I say that I want to be a better critical thinker, or I want to improve my critical thinking skills, that tells you something about my goals, but by itself it doesn’t say much about the methods I’m using to achieve those goals.

The methods are like tools in a toolkit. Rational argumentation is a tool in the critical thinker’s toolkit. But how you use these tools, and for what ends, is determined by higher level goals.

So what are the primary goals, or aims, of critical thinking?

I think there are two distinct sets of goals.

The first has to do with the quality of our thinking. One of the aims of critical thinking is to improve the quality of our beliefs, judgments and decisions.

What does this mean? Well, it can mean different things, depending on which of these we’re talking about.

When we’re talking about beliefs, the most obvious measure of quality is how likely they are to be true. All other things being equal, I want my beliefs to be true, not false.

When we’re talking about judgments, in this context I’m using the term to refer to the process by which we arrive at a belief or a decision. I want my judgments to be reasonable, justifiable, reliable, and so on.

When we’re talking about decisions, or choices, that’s a different category again. Decisions are actions of some kind; they can’t be true or false. But they can be rational or irrational, justified or unjustified, effective or ineffective, and so on.

These are all different ways that the quality of our thinking can be improved, and this is one of the goals of critical thinking — to improve the quality of our thinking.

So, I said that critical thinking has two goals; what’s the second goal?

The second goal is the one captured by phrases like: learn to think for yourself; I want to teach my kid how to think for him or herself; I want to be an independent thinker.

What do these phrases mean?

First and foremost, they express values, things we care about.

They express the value of freedom of thought.

They express the value of autonomy, the ability to make decisions for ourselves and pursue our own goals.

They express the values of agency and responsibility, the notion that as individuals we want to claim authorship and ownership of our own beliefs and values. We don’t want to think of ourselves as mindlessly parroting what we’ve been told to believe by governments, corporations, the media, religion, our peers, and so on.

These values are often associated with the aims of critical thinking, and they should be.

I summarize them like this: another important aim of critical thinking is to learn to think for ourselves, to be able to claim ownership and responsibility for our beliefs, judgments and decisions.

So, to sum up, when we say that we want to be critical thinkers, we’re saying at least two things:

One, that we want to improve the quality of our beliefs, judgments and decisions.

And two, that we want to be able to think for ourselves, to be able to claim ownership and responsibility for these beliefs, judgments and decisions.

There are still lots of questions about these sets of goals and how they relate to each other, and what exactly the terms “ownership” and “responsibility” mean. But that’s okay. We can recognize the goals even if it’s a struggle to articulate them. And that’s all we really need to begin the process of becoming better critical thinkers.

I think the analogy with martial arts training is helpful here.

In any traditional school of martial arts, if you ask what the ultimate purpose of the training in that art is, you’ll most likely get an abstract, philosophical answer. No one will say that the ultimate purpose is to win fights.

The Japanese martial tradition of Budo, for example, was influenced by the three great philosophical traditions of Shinto, Confucianism and Zen Buddhism.

It might be hard to articulate or understand what the ultimate goals of the martial arts influenced by this tradition are, and different masters might emphasize different elements — self-mastery, enlightenment, non-dual awareness, inner peace, virtue.

But none of this prevents a student from starting to train: learning their stances, learning strikes and kicks, learning their first forms.

Similarly, it might be difficult to articulate or understand the ultimate goals of critical thinking, but that doesn’t prevent a student from beginning to train and cultivate the skills and attitudes that are essential to critical thinking.

Now, let’s get back to our original question, which was about the relationship between critical thinking and rational persuasion, and why the primary focus of this podcast is going to be rational persuasion.

Yes, there are interesting questions one can ask about the ultimate aims and goals of critical thinking.

But let me appeal to the martial arts analogy again.

For me, this is like asking questions about the ultimate purpose of studying a martial art. It’s important to have some idea of how to answer this, but you won’t spend most of your day at the dojo debating the meaning of non-dual awareness and self-mastery.

You’ll spend most of your day learning and practicing the various elements of the martial art. Conditioning, stretching, stances, forms, strikes, grappling, sparring, and so on.

These are the various skill components that constitute the martial art. It can take years to learn how to perform a movement correctly, with speed and precision and power.

It will take years to internalize the techniques so that your body responds automatically in a sparring or a combat situation.

It may take years to understand how learning these skills contributes to the higher goals and values of the martial art.

But there’s no shortcut if the path you’ve chosen to wisdom or virtue or enlightenment is a martial path.

“Martial” means “inclined or disposed to war”. “Of, suitable for, or associated with war or the armed forces”. “Characteristic of or befitting a warrior”.

The martial arts were born out of a practical reality, the reality that in this world there are people, or groups of people, who may threaten us, with coercion and violence; and to maintain our autonomy and our way of life, we may need to defend ourselves, and resist the aggressor, with our bodies.

In this world there are occasions where we need to be able to resist violence, and inflict violence, when necessary.

The question is, how can we do this effectively, without falling into mindlessness and brutality? How can we practice violence without abandoning our higher ideals and values?

That’s the problem that the martial arts were designed to solve.

And for the traditional martial arts, the solution went beyond mere self defense. The solution became a way of accessing a kind of insight and wisdom that can only be reached by those who have traveled the martial path.

The warrior knows things that no one else knows, valuable things that can inform our understanding of what it means to live a good life, even for those of us lucky enough to never personally experience war or violence.

So what does this have to do with critical thinking and rational persuasion?

The goals of critical thinking define the higher ideals and values that we want to realize.

These goals may be abstract — truth, rationality, understanding, self-awareness, ownership, responsibility, autonomy, agency — but they are the star that we aim for, that we strive to navigate towards.

But we live in a world that isn’t always supportive of these ideals. The world throws up obstacles that obscure that star, that distract us, that sometimes actively work against these ideals.

What sorts of obstacles? There are many. Our cognitive limitations can be an obstacle. Our social position, and the opportunities or lack of opportunities that come with it, can be an obstacle. People can be an obstacle, institutions can be an obstacle, governments and the media can be an obstacle. Anyone or anything that has an interest in influencing what we believe, what we value, and how we behave, can be an obstacle.

The reality of this world, the real world we live in, is that we are bombarded on a daily basis with competing messages from many different sources, seeking to influence how we think, what we believe and what we care about.

A few of these sources may have our best interests at heart — our parents’ influence, for example — but for the vast majority, we are nothing but nameless targets of influence campaigns hatched by companies or institutions, designed to serve the goals of those companies and institutions, not our goals.

Each of us lies at the intersection of these multiple, diverse lines of external influence, constantly pushing and pulling us in different directions.

It’s important to know that these persuasion campaigns are most successful when we’re unaware that we’re being persuaded, when their influence is hidden, or unconscious, and we’re convinced that we’re responsible for our own beliefs and decisions, even when we’re not.

And for most of us, most of the time, this is exactly how we experience things, this is how persuasion works. We don’t see the lines of force, we aren’t consciously aware of their influence.

This is the challenge of critical thinking in the real world.

I’m not exaggerating when I say that we face a situation not that different from the one faced by those who were inspired to develop the traditional martial arts.

The challenge we face is a martial challenge. Battles are being waged for our minds. We are vulnerable to real harm, sometimes even physical harm, when bad ideas and bad ideologies turn into motives for violence.

And so we need to ask these questions:

How do we train ourselves to see these influence campaigns for what they are, to make the lines of force visible?

How do we defend ourselves against these forces that threaten our autonomy, our agency, and obscure our pursuit of the truth?

How do we actively combat the forces that are the most pernicious, the most damaging?

Well, I think the whole array of tools in the critical thinker’s toolbox is important here.

But at the center of this effort must be a two-pronged program; one for developing our awareness of what rational belief and rational decision-making actually looks like, and one for developing our awareness of the psychological mechanisms that govern how people actually think and act.

When this awareness is present, the lines of force can be made visible. We can learn to see the manipulation and persuasion tactics for what they are.

And more importantly, we’ll have tools for reclaiming our autonomy and agency, as independent thinkers. We can ask ourselves, “do I really have good reasons to believe this?”, and we’ll know what it means to answer that question.

This skill set, this two-pronged program of training and instruction, is what I’m calling “rational persuasion”.

This is the martial art that lies at the center of our commitment to the ideals of critical thinking. And like a traditional martial art, it is also a means by which we can pursue and fulfill these ideals.

So, this is part of the story for why I want to pursue this conception of critical thinking and rational persuasion as a kind of martial art.

It’s not the whole story, there’s still lots more to come, but I hope you’re starting to see the outlines of what I’m getting at.

Now, before I wrap up this episode, I want to draw your attention to a distinction that you might have missed.

I’ve been talking about rational persuasion as a martial art, but earlier, when I was talking about a two-pronged approach to critical thinking, I used the term rational argumentation, and I distinguished that from the study of cognitive biases and the psychology of persuasion.

I want to clarify this distinction between rational argumentation and rational persuasion.

When I talk about rational argumentation, I’m referring to the study of good and bad arguments as it is normally presented in logic and critical thinking courses taught in philosophy departments.

So, in a class like this, you’ll talk about premises and conclusions, you’ll talk about the difference between the truth properties of individual claims and the logical properties of whole arguments, you’ll learn the distinctions between valid and invalid arguments, strong and weak arguments, deductive and inductive arguments, sound and cogent arguments, and so on. You’ll learn about formal and informal fallacies, and if there’s time you’ll start talking about the distinctive features of different types of arguments, like scientific arguments, or moral arguments, and then time is up.

There are dozens of textbooks that are routinely used to teach this material, in critical thinking classes all around the world.

Now, apart from the fact that a course like this really only gets you started on this topic — it’s like, just enough to get you your yellow belt in argumentation — the main problem with this approach to critical thinking is that too often it ignores the psychological realities of how real people use arguments in different social contexts for different purposes, and how real people respond to, and resist, efforts to persuade them.

So consequently, this approach doesn’t adequately prepare students for how to reason and argue in the real world, with real people, about real issues that matter to them.

What I’m calling “rational persuasion” is an approach to argumentation that takes this important logic-based material, and integrates it with our psychological and social knowledge of human behavior and human reasoning.

The goal is to bring both these bodies of knowledge to bear on a given situation, so that our argumentative moves are informed by, and are responsive to, a much wider range of factors.

Let me give you an example to illustrate this argumentation/persuasion distinction.

In a logic class you learn about consistency and contradiction. A consistent set of claims is one in which all the claims can be true at the same time, without generating any contradictions.

A contradiction is a special kind of claim. It’s any claim that can be reduced to a situation where you’re asserting one thing, P, and simultaneously asserting its contradictory, not-P. So a contradiction is a claim where you’re simultaneously saying that P is true and that P is false.

In classical logic, we treat all contradictions as false, because it’s assumed that they describe an impossible state of affairs.

So, we define an inconsistent set of claims as one that entails a logical contradiction of some kind. And this means, as a matter of sheer logic, that not all the claims in the set can be true – at least one of them must be false.
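To make this concrete, here is a minimal sketch in Python (my illustration, not something from the episode) of what a brute-force consistency check looks like: represent each claim as a function over a truth assignment, and search for an assignment that makes every claim true at once. The particular claims and sentence letters below are just made-up examples.

    from itertools import product

    # Illustrative only: each claim is a function from a truth assignment
    # (a dict mapping sentence letters to True/False) to True/False.

    def consistent(claims, atoms):
        """Return True if some assignment makes all the claims true at once."""
        for values in product([True, False], repeat=len(atoms)):
            assignment = dict(zip(atoms, values))
            if all(claim(assignment) for claim in claims):
                return True
        return False

    # The set {P, if P then Q, not Q} entails a contradiction, so it is inconsistent:
    claims = [
        lambda a: a["P"],                   # P
        lambda a: (not a["P"]) or a["Q"],   # if P then Q
        lambda a: not a["Q"],               # not Q
    ]
    print(consistent(claims, ["P", "Q"]))   # prints False: no assignment satisfies all three

If no assignment makes all the claims true together, the set is inconsistent, which is just the logical point above: at least one of the claims must be false.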

This is a powerful concept for argumentation, because one of the ways you can refute an argument is to show that the conclusion of the argument, or one of its premises, is inconsistent with other claims that our audience thinks are true.

For example, if we’re presented with an argument against gay marriage that is based on the assumption that, say, legal recognition of marriage should be based on the capacity for a couple to bear and raise biological children, then we can point out that this assumption would logically entail that we should also deny legal marriage to infertile couples, or senior couples who can no longer have children. But no one wants this conclusion, so as a matter of logic it forces you to go back and reconsider that assumption.

Now, this is a powerful logical tool, showing people how their beliefs in one area may be inconsistent with their beliefs in some other area. In a class I would drill students on different arguments and get them to come up with refutations that exploit this principle.

But in reality, the principle is only effective to the extent that people are willing and able to see the contradiction for what it is.

We’re moving now into the realm of psychology and persuasion.

What you actually find, when you go out into the real world and try to persuade people by bringing to light the inconsistencies in their beliefs, is that people have a very natural tendency to resist this move.

We resist it in a number of ways, but one way is by rationalizing away the inconsistency, so that in our minds we don’t experience it as a genuine inconsistency.

Now, a psychological mechanism that has been studied extensively, and that explains a lot of this behavior, is called “cognitive dissonance”.

The idea is that holding two contradictory beliefs in our heads at once, or endorsing two contradictory values, or behaving in a way that contradicts some expressed belief we have, puts us in a state that causes mental stress and discomfort. Our brains are naturally drawn to states that relieve this discomfort. So we automatically look for ways of restoring consistency between our thoughts and expectations and reality, without having to openly and consciously acknowledge the inconsistency.

The classic example of cognitive dissonance is how doomsday prophets and their followers respond when their predicted date for the end of the world comes and goes and everyone is still here. The dissonance in this case may be very acute; the idea that they could be fundamentally wrong about their religious worldview is very painful to contemplate. So, in the face of what would appear to be a blatant contradiction, most followers don’t accept it. They rationalize the events in a way that preserves consistency between the facts and their basic worldview. Maybe their faith is precisely what averted the disaster.

If you want a whole pile of examples of cognitive dissonance in action, I strongly recommend the book Mistakes Were Made (but not by me), by Carol Tavris and Elliot Aronson. It’s an eye-opening book, it should be on every critical thinker’s bookshelf.

But in all my years of studying logic and argumentation in a philosophy program, no one ever mentioned cognitive dissonance. That’s not logic, that’s psychology. They do that in a different building on the other side of campus.

Yet this psychological mechanism is clearly relevant to the success or failure of argumentation in the real world.

And this is my point. If we don’t learn how to integrate our knowledge of human behavior into our argumentation strategies, they’re not going to be as effective as they could be.

If I’m employing rational persuasion, then I’m not just thinking about the inconsistency; I’m also thinking about how the inconsistency will be received. I’m looking for ways of reducing the cognitive dissonance in my audience, so that when a contradiction is presented to them, they’re more likely to acknowledge it and respond in the way I want them to.

How I would do this is another question, but this is an example of what I call an “argument ninja move”.

Well that’s all I’ve got for you today. If you’re enjoying this show it would really help to visit iTunes to leave a rating and a review. Please visit argumentninja.com for show notes and to leave comments on this episode. And please visit criticalthinkeracademy.com and sign up for one of my free video courses there.

Thanks for listening, and I hope you’ll join me next episode.


001 – Welcome and Introduction to the Show


In This Episode

  • Welcome and introduction to the show.
  • What has drawn me out of podcast retirement to start a new show.
  • Scott Adams on Donald Trump, and the sorry state of political discourse today.
  • Why critical thinking education needs both a theory of how we ought to reason, and a theory of how we in fact reason.
  • Reasons to think of rational persuasion as a martial art.

Quote:

“In this podcast I’m going to teach you what I know about logic and argumentation and ideals of rationality, and I’m going to teach you what I know about the psychology of persuasion and influence.

And along the way I’m going to try to uncover some techniques and principles for integrating these two bodies of knowledge, to make something more powerful, more effective, and more worthy of our human capacities, than either is separately.”


References and Links


Subscribe to the Podcast


Play or download the mp3 file for this episode


Listen to this episode on YouTube

Hi everyone and welcome to the Argument Ninja podcast. My name is Kevin deLaplante, and this show is dedicated to the art and science and ethics of rational persuasion.

It’s going to take a couple of episodes to explain the significance of this title, the Argument Ninja, but I want you to know up front that I didn’t choose it just to be cool or trendy.

I chose it because I think we need a new model of what it means to be a critical thinker, and at the heart of this new model is a theory of rational persuasion that will draw on themes from the martial arts, and martial arts philosophy.

So, “argument ninja” isn’t just a buzzword, it’s going to mean something.

Now, I’m not a fan of long-winded biographical introductions, but because this is the first episode of this show, for many of you this is your first introduction to me, so I should start with a little about who I am, and talk a little more about what I want to accomplish with this podcast.

I was a professional academic philosopher for almost 20 years. I’m a philosopher of science by training, and I’ve taught a lot of different courses over the years, but most of my teaching was in logic and scientific reasoning, critical thinking, the history and philosophy of science, and ethics.

I quit my tenured academic job in 2015 to pursue a second career as an independent educator online. This podcast is part of this new online career.

My online business involves, for the most part, creating and selling video courses on topics related to critical thinking and writing.

I also do some speaking and consulting, but most of my time is spent creating videos and video courses, which you can find at my home site, the Critical Thinker Academy, which is at criticalthinkeracademy.com.

My goal with the Academy is to provide a resource for educating people about what scientists and philosophers have learned, over the past 2500 years, about human rationality and what it means to reason well.

I’ve been creating videos for several years, and at this point I have about 20 hours worth of video on that site, spread over about 15 courses.

There are courses there on basic principles of logic and argumentation, formal and informal fallacies, argumentative essay writing, probability theory and fallacies of probabilistic reasoning, and the psychology of cognitive biases and the errors that we’re prone to because of them.

The Critical Thinker Academy site runs on the Teachable platform, which is designed for hosting your own video courses, but it isn’t the best for running a blog or posting show notes to podcast episodes.

WordPress is clearly the best for that, so for this show I’ve got a dedicated URL, which is at argumentninja.com. So you can visit there for show notes and to offer comments and feedback on the shows.

Now, what do I hope to accomplish with this podcast? To start to answer this, I’m going to talk about two sources of motivation.

The first has to do with disturbing trends I’m seeing in the quality of our public discourse, and how people are talking about the role of reason and argumentation in our public conversations, and in human behavior generally.

And the second has to do with my approach to critical thinking education, and the skill set that I think is essential for truly effective critical thinking.

So, about these disturbing trends.

This first episode is launching in the summer of 2016, when Hillary Clinton and Donald Trump are still just the presumptive Democratic and Republican nominees for President.

Everywhere I’m reading stories about how reason and facts don’t seem to matter to voters, especially Trump voters.

This is obviously frustrating to those of us who are trying to educate people about the importance of forming beliefs and making decisions based on reason and evidence.

But the more disconcerting thing is that I’m reading stories from political commentators who see themselves as political realists, and who seem to be skeptical about reason and rationality itself.

And what I mean by this is that they believe that human behavior is driven almost entirely by intuitive emotional judgments, and that conscious, deliberative reasoning plays at best a rationalizing, after-the-fact role in explaining human behavior; it’s never the real reason why people act and behave the way they do.

And this view, that people act on the basis of emotional or intuitive judgements, rather than reasons, is supposed to help explain the otherwise surprising success and popularity of Trump’s campaign.

I’ve been following Scott Adams’s blog for the past year or so, and this is his spin on Trump. Scott Adams is better known as the cartoonist behind Dilbert, but he’s always been an interesting and idiosyncratic thinker, and though I rarely agree with him on political matters, I’m interested in his take on things, and over the past year or so he’s been writing about Donald Trump and the Trump campaign.

Adams likes to bring up his training as a hypnotist and his deep understanding of the psychology of persuasion, and from his standpoint, Donald Trump is an instinctive genius at the tools of persuasion, the best he’s ever seen. What people don’t appreciate about Trump is that he’s operating at such a high level that they don’t recognize the genius — we see bombast and childish insults and self-aggrandizement and dismiss him as crude and unsophisticated, but in reality he’s a master persuader — he’s playing three-dimensional chess while the rest of us are playing checkers.

Now, what’s most disconcerting to me is that I think Adams is on to something important, but there’s also something here that I’m very much bothered by.

I’ve spent a lot of time over the past few years immersed in the literature on cognitive biases and the psychology of persuasion and influence. I’ve taught courses on the topic, I give talks to businesses on this topic.

There is something paradigm-shifting about the implications of this research for our understanding of human nature, and the actual causes of our behavior.

And it does shed light on what’s going on in this election cycle. Maybe Trump’s success shouldn’t be so surprising, given what we now know about how human psychology actually operates.

But I believe there’s a danger in the pessimistic and frankly cynical conclusions that Scott Adams draws, and that some others have drawn, about the role of reasons and rationality in human life.

Yes, reason may not play the role we naively think it does in motivating and explaining human behavior. Maybe we’re collectively in the grip of an illusion, the illusion that our actions are determined by our consciously held beliefs and reasons.

But even if this is true in some sense, does it follow that it is naive, or inappropriate, to hold people to standards of good reasoning, and to challenge people when they fail to live up to those standards?

Does it follow that if someone contradicts themselves, or offers arguments that are logically weak on their face, or is happy to indulge in distortions and exaggerations and even outright lies, that we shouldn’t condemn these as failures of some kind?

If politicians behave this way, how should we respond?

I don’t know for sure what Scott Adams thinks about this, but my sense is that he thinks there’s just no point in condemning someone for these kinds of failures, because he believes that people are never really persuaded by logic and argumentation anyway; they’re persuaded by other, non-rational psychological factors.

I’m pretty sure this conclusion doesn’t follow, but this question points to issues that I think really need to be talked about more than they are.

Because this isn’t just about Trump and this election. This is about what we, as a society, as a culture, think about the value of reason and rationality, and the value of critical thinking in our personal lives and in public life.

And it pushes me, as a critical thinking advocate and educator, to think harder about these questions.

Is it hopelessly naive of me to think that we can improve the quality of our reasoning, individually and collectively as a society, by teaching critical thinking skills to more people, if as Scott Adams says, “human beings are mindless robots influenced by design, habit, emotion, food, and words”, who can be easily manipulated once you understand these basic psychological facts?

These are some of the questions that have pushed me to create this podcast. I want to work out a satisfactory answer for myself, by exploring these questions with you.

But just to anticipate what’s to come — I’m not a skeptic about reason and rationality, but I grant that anyone with training in the psychological literature on cognitive biases and persuasion will see this much truth in Adams’ description, that human behavior is strongly influenced by design, habit, emotion, food and words, and people are easily manipulated by those who understand how this works.

The question for me is, how do we integrate these facts about human psychology, which on the surface, seem to rob us of rational agency and autonomy, with our intuitive view of ourselves as rational agents who can form beliefs and make judgments on the basis of reasons and evidence?

If you’re a skeptic about reason, then you think they can’t be integrated; we just need to accept that this intuitive view of ourselves is false, an illusion of some kind. Like the way some people think of free will as an illusion.

Well, I’m not a skeptic about reason, so it follows that I should be able to offer some account of how these different aspects of human nature can be integrated.

That’s partly what I want to do with this podcast — to give myself a platform for working through these issues, teasing apart what’s defensible and what’s not, and over time, develop a conception of human rationality that does justice to what we’ve learned about the causal processes that actually generate human behavior, while in some sense vindicating our intuitive conception of ourselves as rational agents, capable of being moved by reasons and acting on the basis of reasons.

….

So, that’s one reason why I’m doing this podcast. It might seem a little abstract, a little heady, but you’ll have to trust me that these issues have real practical consequences for how we view ourselves, and how we conduct ourselves, individually and as a society.

Now, the second motivation for doing this podcast that I want to talk about today, has to do with the approach to critical thinking that I’ve been developing over the years working on video courses at the Critical Thinker Academy.

And it points to the need for an integrated theory of reason and persuasion that I just talked about — a theory of rational persuasion.

In my work I talk about six different pillars or components of critical thinking, which you can think of as skill sets, or areas of competence, that we may need to draw upon when we want to think critically about an issue or a situation.

These six components include logic, argumentation, rhetoric, background knowledge, character and creativity.

I’ll talk more about each of these in upcoming episodes, but the upshot is that when critical thinking is really effective, it’s because these different components are working together, mutually supporting each other, in the right way.

The topic of cognitive biases and the psychology of persuasion fits within the broad category of general background knowledge that is important for critical thinking.

Now, if I wanted to simplify this six-dimensional model of critical thinking even further, to get at the core of what I think is essential for critical thinking, I would emphasize two components of this model.

The first component is rational argumentation. This involves knowing what it means to have good reasons to believe or do something, and knowing how to recognize bad arguments and construct good arguments of your own. This is what most logic-based courses in critical thinking focus on, starting with simple two-line arguments and developing the concepts of valid and invalid arguments, strong and weak arguments, and so on.

The second component that I think is absolutely essential for critical thinking, is the one we’ve been talking about — understanding the unconscious cognitive biases that influence how we actually form beliefs and make decisions, and understanding debiasing, which is about the various strategies that we can adopt that are effective at minimizing or neutralizing the negative effects of these cognitive biases. I have a course at the Critical Thinker Academy, called “Upgrade Your Mindware”, which is all about these topics.

Critical thinking needs both of these components.

Logic and argumentation tells us how we ought to reason — it provides normative standards for defining what it means to reason well. It tells us what we should be aiming for, in terms of rational belief formation, and rational decision making.

But cognitive bias research gives us insight into how we in fact reason, and specifically how our reasoning can often deviate from norms of good reasoning. It can tell us how far off course we’re drifting, and it can give us some tools for minimizing common errors and bringing us back into closer alignment with these norms.

This is why at the Critical Thinker Academy I have courses on both of these topics.

….

Now, I believe that a program of critical thinking instruction that included both of these topics, on an equal footing, would be an enormous step up from what is currently passing for critical thinking instruction in most classrooms and in most textbooks dedicated to this subject.

Because for the most part, you don’t see both of these topics taught in the same textbooks, in the same classes.

Critical thinking textbooks lean heavily on the logic and argumentation part, but they rarely say anything substantive about the actual psychology of human reasoning.

To get that, you need to take a psychology class, or read a separate book on the subject, but in these sources you get almost no instruction in logic and argumentation.

I want you to see how much of a disaster this is for critical thinking education, if you don’t have both of these elements in play, in the same classroom, or the same textbook.

Think about what it means to teach any complex skill that requires time and practice to learn.

Think about swimming — how would you teach someone the back stroke if they’ve never done it before?

First, you’d show them what a good back stroke looks like, so students can see what they’re aiming for.

Then, when you start practicing, because you’ve got lots of experience with the kinds of mistakes that people tend to make when they start learning the stroke, you help them by anticipating problems and offering tips and strategies that will correct their form and help them move closer to that ideal stroke that you showed them.

Notice how this pulls from two different types of knowledge. You’re combining your knowledge of what a good backstroke looks like — your knowledge of what constitutes excellence in swimming — with your knowledge of how people’s bodies actually move in the water, and what mistakes they tend to make, or challenges they face, when trying to learn the stroke.

To put it another way, you’re integrating your prescriptive knowledge — your knowledge of how things OUGHT to be — with your descriptive knowledge — your knowledge of how things ARE, in reality — to create a program of instruction that can move someone from where they are, to where they ought to be.

So, I think it’s obvious that if you’re missing either of these components, you’re handicapped as a teacher.

And frankly, this is my view of what passes for critical thinking education 90% of the time. It’s fundamentally handicapped.

So, the model that I’ve been developing over the past several years, which puts the descriptive components of critical thinking on the same footing as the prescriptive components, I view as a major improvement, and I will defend it over any approach that focuses on only one of these components at the expense of the other.

….

However, this two-part, prescriptive – descriptive model still has problems. I’m going to talk about those problems more in the next episode, but for now, let’s just say that it still doesn’t provide a truly integrated theory of rational persuasion.

It’s like having a philosopher on your team, and a street fighter on your team. Handy to have both, because one can do things that the other can’t.

You have the thinker, who can tell you what you should believe and why, and you have the fighter, the persuader, who is willing to get their hands dirty, who is effective at getting other people to say and do what you want.

But you don’t have a single person who embodies both sets of skills in an integrated way.

The thinker who fights. The fighter who thinks.

A person whose powers of persuasion are guided by higher ideals and principles, of how we ought to think, how we ought to reason; and whose persuasion is powerful because they’re guided by those ideals.

You see where I’m going with this.

I think it’s worth asking ourselves, what would rational argumentation, rational persuasion, look like, if we thought of it as a martial art?

Yes, you can study martial arts just to win fights.

Yes, you can study persuasion techniques just to win arguments.

But all of the traditional martial arts embody a philosophy, a way of life, that resists this. They have an ethic that is driven by a commitment to values that transcend the goal of winning a fight or beating an opponent.

Someone trained in rational persuasion knows how to persuade — but persuasion isn’t their ultimate goal. Their goal is persuasion for the right reasons. That’s what makes it rational persuasion.

I’m interested in exploring this conception of rational persuasion as a martial art, and seeing where it takes us.

In this podcast I’m going to teach you what I know about logic and argumentation and ideals of rationality, and I’m going to teach you what I know about the psychology of persuasion and influence.

And along the way I’m going to try to uncover some techniques and principles for integrating these two bodies of knowledge, to make something more powerful, more effective, and more worthy of our human capacities, than either is separately.

What is an argument ninja? It’s a symbol, the embodiment of what I’m looking for as we explore this terrain.

Thanks for listening. I hope you’ll join me next episode.
