
 


“But what we require,” I said, “is that those who take office should not be lovers of rule. Otherwise there will be a contest with rival lovers.”

“Surely.”

“What others, then, will you compel to undertake the guardianship of the city than those who have most intelligence of the principles that are the means of good government and who possess distinctions of another kind and a life that is preferable to the political life?”

“No others,” he said.

~Plato, The Republic

 Unit I


The City of Words

 

 

Trust but control.

~Soviet Era Proverb1

Breaking News!

 

The City of Words

The primary task of this course is similar to the thought-experiment that Socrates and company explored in Plato’s Republic. In this classic, the characters attempt to understand the nature of justice by building a "city of words", an ideal city that functions optimally. The purpose of this exercise is to see when exactly justice enters the picture. Put differently, they discussed an imaginary city, engineered it so that it operated in an ideal way, and then tried to figure out where justice is in this perfect society. This is, in a roundabout way, sort of what we'll be doing. By enrolling in this course, you are now part of a council which will decide on the policies to be enacted in the newly autonomous region of Ninewells, a territory just north of the state of Montana. This region was erroneously awarded to British North America—now Canada—in the aftermath of the War of 1812 (due to poorly-drawn maps). The region has been in political limbo for some time, but now we've been given the opportunity to come up with a constitution for the territory. We will call our little group the Council of the 27. The Council of the 27 will decide on social issues such as:

  • what form of government should be utilized?
  • what economic policies should be implemented?
  • what should the educational system look like?
  • what regulations (if any) should be put on industry?
  • what technologies should be regulated and/or banned?
  • should being a parent require a license?

In order to perform your function well, members of the Council will have to become proficient in the analysis of arguments and the language used within them, the evaluation of evidence, and the capacity to discriminate between different epistemic sources, i.e., discern between good sources of information and bad ones. To this end, I introduce the following Important Concepts in order to begin the required training. In the rest of this section I will make some additional points about the important concepts, we'll look at some Food for Thought, we'll discuss a cognitive bias that might get in the way of good reasoning, and then we will begin to cover the arguments of Plato's Republic—the primary training resource for this course. We will read this work in its entirety, by the way. And so, without further ado, the training begins now.

Important Concepts

 

Some comments

Napoleon III, the last French monarch.

Logical consistency is a cornerstone of critical thinking. For a set of sentences to be logically consistent means only that it's possible for all the sentences in the set to be true at the same time. It's not that they actually are true, just that they can all be true at the same time. That's it. But even though logical consistency seems like a very low bar, it is absolutely essential to be logically consistent when thinking rigorously about the world. For example, if one is trying to explain some phenomenon (e.g., migration patterns), one should make sure that one part of one's explanation doesn't directly contradict another part of the explanation. This seems easy enough in uncomplicated theories. Unfortunately, though, most phenomena that need explaining can't be explained by uncomplicated theories. And sometimes it is hard to see how two parts of a theory are inconsistent with each other when the theory has lots of moving parts. To make matters worse, there is a tendency to equate consistency with truth; but they are not the same thing. This is because two statements can be consistent with each other despite both being false. For example, here are two sentences.

  • "The present King of France is bald."
  • "Baldness is inherited from your mother's side of the family."

"The present King of France is bald" is false. This is because there is no king of France. Nonetheless, these sentences are logically consistent. In other words, they can both be true at the same time; it just happens to be the case that they're not both true.2

Arguments will also be central to this class. Arguments in this class, however, will not be heated exchanges like the kind that couples have in the middle of an IKEA showroom. In this class, we will consider arguments to just be piles of sentences that are meant to support another sentence, i.e., the conclusion.

As you learned in the Important Concepts, there are two major divisions of logic: the formal and the informal branches. In this class, we will be focusing on informal logic. There are other classes that focus more on the formal aspect of argumentation, such as PHIL 106: Introduction to Symbolic Logic. Classes like this one instead focus on the analysis of language and the critical evaluation of arguments in everyday (natural) languages. One aspect of these classes that students tend to enjoy is the study of informal fallacies. Here's your first one:

 

 

Informal fallacies, like argumentum ad hominem, are surprisingly easy to fall prey to. If one isn't diligent about placing the argument into numbered form—sometimes called standard form—then one might not notice that the connection between premises and conclusion is insubstantial. In general, we will attempt to place all arguments into standard form. This way we can evaluate each individual premise for truth, i.e., check whether the evidence assumed is actually true, as well as assess the relationship between premises and conclusion. You'll get a better idea of why reliably using standard form is a good idea in the Food for Thought below. Then you'll learn about why it's easy to fall prey to informal fallacies in the Cognitive Bias of the Day.

Food for Thought...

 


 


Cognitive Bias of the Day

For evolutionary reasons, our cognition has built-in cognitive biases (see Mercier and Sperber 2017). These biases are wide-ranging and can affect our information processing in many ways. Most relevant to our task in this course is the confirmation bias—although we will definitely cover other biases. Confirmation bias is our tendency to seek, interpret, or selectively recall information in a way that confirms our pre-existing beliefs (see Nickerson 1998). Relatedly, the belief bias is the tendency to rate the strength of an argument on the basis of whether or not we agree with the conclusion. In other words, if you agree with the conclusion, you'll set low standards for the arguments that argue for that conclusion; if you disagree with a conclusion, you'll set a higher bar for arguments that argue for said conclusion. This is clearly a double standard, but one we don't tend to notice. We will see various cases of these in this course.

I'll give you two examples of how confirmation bias might arise. Although this happened long ago, it still stands out in my memory. One student felt strongly about a particular ethical theory I was covering in my PHIL 103 course: Ethics and Society. This person would get agitated when we would critique the view, and we couldn't have a reasonable class discussion about the topic while this person was in the room. I later found out that the theorist highlighted in that unit worked in the discipline of anthropology, the same major that the student in question had declared. But the fact that a theory's champion works in your field is not a good argument for the truth of the theory. In fact, I can cite various anthropologists who don't side with the theory in question. As a second example, take the countless debates that I've had with vegans about the argument from the Food for Thought section. There is an objection to that example every time I present it. Again, this is not to say that veganism is false or that animals don't have rights, or anything of the sort(!). There are good arguments for vegetarianism and veganism, as you can see if you take my aforementioned PHIL 103 course. But we have to be able to call bad arguments bad. And the argument presented in the Food for Thought is a bad argument.

Mercier and Sperber's Enigma of Reason

As an exercise, try to see why the following are instances of confirmation bias:

  • Volunteers given praise by a supervisor were more likely to read information praising the supervisor’s ability than information to the contrary (Holton & Pyszczynski 1989).
  • Kitchen appliances seem more valuable once you buy them (Brehm 1956).
  • Jobs seem more appealing once you’ve accepted the position (Lawler et al. 1975).
  • High school students rate colleges as more adequate once they’ve been accepted into them (Lyubomirsky and Ross 1999).

By way of closing this section, let me fill you in on the aspect of confirmation bias which makes it a truly worrisome phenomenon. What's particularly worrisome, at least to me, is that confirmation bias and high knowledge are intertwined—and not in the way you might think. In their 2006 study, Taber and Lodge gave participants a variety of arguments on controversial issues, such as gun control. They divided the participants into two groups: those with low and those with high knowledge of political issues. The low-knowledge group exhibited a solid confirmation bias: they listed twice as many thoughts supporting their side of the issue as thoughts going the other way. This might be expected. Here's the interesting (and worrisome) finding. How did the participants in the high-knowledge group do? They generated so many thoughts supporting their favorite position that they gave none going the other way. The conclusion is inescapable. Being more informed—i.e., being highly knowledgeable in a given domain—appears only to amplify our confirmation bias (Mercier and Sperber 2017: 214). Troublesome indeed.

Argument Extraction: Book I (327a-336b)

 

 

 

A First-Rate Madness

We need not limit ourselves to passively reading Plato's Republic. Instead, we should feel free to object to the text, interject with more recent findings, and entertain tangents. In other words, if Plato says something that has been shown to be false, we should make this explicit. We should also pepper in all the relevant findings that modern science has to offer. After all, we've learned a lot since Plato published Republic circa 375 BCE; it'd be a shame not to consider these more recent ideas and data. Lastly, even if it is not directly discussed by Plato, we should feel free to entertain nearby ideas and arguments. With that said, here are two ideas related to what Plato wrote about in the portion of Book I that we just covered:

  1. It is hard to define discrete categories of mental health and mental disorders.
  2. It is not entirely clear that certain non-normal psychologies, e.g., depression, are always a hindrance to good reasoning.

St. Antony of Egypt, known for fighting off his demons.

First off, when discussing problems with Cephalus' definition of justice, Socrates mentions how perhaps it is not the best idea to return, say, weapons to those who are mentally unstable. This point seems straightforward at first glance, but there is a conceptual problem lurking in the background. So let's begin with a discussion of how difficult it is to define the categories of mental health and mental disorder.

Here's some history as prologue. It is my understanding that conceptions of what "mental stability" is have varied widely throughout history—and this might be an understatement. For example, in the 3rd century CE, it was a common belief among Christians that demons were ever-present; they were always lurking right around the corner, ready to tempt you into sin. In fact, it seems like early Christians would blame almost any base desire on the whisperings of a demon, whether it be procrastination, absent-mindedness, overeating, or sexual urges (see Nixey 2018, chapter 1).

Today, however, I assume you might have some misgivings about someone who interprets their sexual desires or their tendency to procrastinate as being the result of the whisperings of a demon. To say the least, I certainly wouldn't want this person babysitting my children. But I might even venture to say that this person who thinks demons are whispering in his/her ear is mentally unstable. This is because we now understand that things like day-dreaming, procrastinating, and sexualized thoughts are not caused by demons but by regular psychological processes, e.g., lack of intrinsic motivation, lack of incentives, regular hormonal responses to potential mates, etc. What does this tell us? Well, it reminds us that what was once considered psychologically normal can come to be considered psychologically abnormal in time.

The reverse is also true: things that are considered psychologically abnormal can come to be accepted as perfectly fine. For example, the Diagnostic and Statistical Manual of Mental Disorders III (DSM-III)—the manual, published by the American Psychiatric Association (APA), that serves as a sort of Bible for the classification of mental disorders—actually still listed homosexuality as a mental disorder. This particular edition of the manual, by the way, was published in 1980. It was actually not until the DSM-IV, published in 1994, that the subject of homosexuality was finally dropped.

Clearly it's not the case that being homosexual made you mentally unstable prior to 1994 but not afterward. Instead, what has changed is our conception of what mental disorders are. It is mental health itself that is difficult to define. We can take this line of reasoning further. In Predisposed: Liberals, Conservatives, and the Biology of Political Differences, the authors discuss the arbitrariness of the DSM. To understand their argument, however, you must first learn how diagnosis under the DSM worked.

Until recently, using the DSM worked something like this. There was a list of symptoms associated with a given mental disorder. There was also a specified threshold for the number of symptoms that a patient needed to display in order to be diagnosed with the relevant mental disorder. A psychiatric professional would test for those symptoms, and if you had the requisite number of symptoms, then you had the mental disorder. It was an all-or-nothing type of thing.

Here's an example. Consider the DSM-IV's diagnostic criteria for autistic disorder. It listed 12 different symptoms, which were themselves split into three distinct categories: impairments in social interaction, impairments in communication, and repetitive and stereotyped patterns of behavior. A diagnosis of autism required a professional to identify at least 6 of the 12 symptoms, with at least two of them coming from the first category, one from each of the second and third categories, and all appearing before the age of 3. The authors take it from here:

“This all seems more than a bit arbitrary. How can we be sure that the proper cutoff point for Autistic Disorder is six symptoms rather than five? How do we know that the 'two from column A; one from columns B and C' approach is the appropriate one? Are there really 12 symptoms of autism? Maybe we missed one, or maybe two of those listed are really the same thing. Objectively establishing verifiable criteria to divide everyone into they-have-it-or-they-don't categories is so difficult as to be impossible” (Hibbing et al. 2013: 77)
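To make vivid just how mechanical this all-or-nothing procedure is, here is a minimal sketch—with made-up symptom labels, not the actual DSM-IV criteria—of the cutoff rule described above:

```python
# Toy version of a DSM-IV-style cutoff (hypothetical labels, not real criteria):
# diagnose iff >= 6 symptoms total, with >= 2 from category A ("social"),
# >= 1 from category B ("communication"), and >= 1 from category C ("repetitive").

def meets_cutoff(symptoms):
    a = sum(1 for s in symptoms if s.startswith("A"))
    b = sum(1 for s in symptoms if s.startswith("B"))
    c = sum(1 for s in symptoms if s.startswith("C"))
    return (a + b + c) >= 6 and a >= 2 and b >= 1 and c >= 1

print(meets_cutoff({"A1", "A2", "B1", "B2", "C1", "C2"}))  # True: diagnosed
print(meets_cutoff({"A1", "A2", "B1", "B2", "C1"}))        # False: one symptom short
```

One symptom either way flips the verdict entirely—which is precisely the arbitrariness the authors are worried about.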

The authors go on to argue that there is no such thing as neurotypical (a typical brain without a neurodevelopmental disorder), and so making hard demarcation points—as the DSM does—is untenable. There just is no such thing as an average non-autistic brain, qualitatively speaking. It should be noted, too, that pretending there is such a thing as a neurotypical brain can be harmful. This is especially the case once one considers the contexts in which the DSM is utilized. Institutions like insurance companies, schools, and social service organizations require a diagnosis before agreeing to, say, a reimbursement, allowing the use of certain medications, or providing special tutoring. In the legal domain, things are even more complicated. Any legal decision that requires neatly partitioned psychiatric categories is going to be controversial. Where’s the cutoff for, say, insanity? What’s so magical about age 18? What about people with brain tumors that cause them to break the law? The authors admit that there are no easy answers to these questions, but they make the point that pretending there are neatly organized mental disorder categories appears to be completely unfounded.3

Ghaemi's A First-Rate Madness

Last but not least, let me say something about the possibility that non-normal psychologies might actually be advantageous. One fascinating example is known as depressive realism. This is the hypothesis that depressed individuals may be better able to make certain judgments when compared to non-depressed individuals (Moore and Fresco 2012). The idea is as follows. Non-depressed individuals, due to their healthy mental disposition, have a tendency to view the world through optimistic lenses. This is perfectly adaptive in an evolutionary sense. It clearly is better for you to think that you are smarter and better looking than you really are if it helps you engage in social situations more effectively; being depressed, having low self-esteem, and lacking motivation are clearly hindrances in many social settings. However, depressed individuals, while they are not always effective in social settings, are at least free from the biases of non-depressed individuals, like the optimism bias just mentioned as well as the "illusion of control" (the tendency for people to overestimate their ability to control events). Being free from these biases, depressed people can actually perform better at certain tasks, such as self-assessing their performance in a difficult task without being given feedback (ibid.).

Some theorists have taken this idea further. Ghaemi (2012) argues that non-normal psychologies, such as depression and bipolar disorder, can actually improve one's capacity to deal with adverse events. He makes the case that many of history's most notable political figures—William Tecumseh Sherman, Abraham Lincoln, Winston Churchill, JFK, and Napoleon Bonaparte among them—had non-normal psychologies, learned how to deal with adversity because of that, and actually managed to be better leaders thanks to the skills they acquired from living with non-normal psychologies.

Another interesting theory comes from Dutton (2012). In The Wisdom of Psychopaths, Dutton advances his functional psychopath hypothesis, arguing that (in some contexts) psychopathic traits can be advantageous. Stay tuned.

 

 


 

Do Stuff

Welcome to the Do Stuff section. In this section you will find your homework assignments, as well as recommendations for how to prepare for the tests. For today:

  • Read from 327a-336b (p. 1-12) of Republic.
  • Be sure to review the argument form of modus tollens described in the Argument Extraction video. Also, attempt to write the argument found in 334b-334d in standard form, i.e., numbered form. It's about making mistakes about who your friends really are.

 


 

Executive Summary

  • Four important concepts will be recurring in this course: logical consistency, arguments, informal fallacies, and cognitive biases.

  • In the first half of Book I of Republic, Socrates easily dispenses with two faulty definitions of justice: that justice is speaking the truth and giving to each what is owed to them and that justice is benefiting your friends and harming your enemies.

  • There is a conceptual difficulty in attempting to define the notions of mental disorder and mental health.

  • It's possible that some non-normal psychological traits might actually be beneficial for reasoning and/or performance in given tasks, as is hypothesized in the theories of depressive realism and the functional psychopath hypothesis.

 


 

FYI

Supplementary Material—

Advanced Material—

  • Reading: Richard Kraut, Stanford Encyclopedia of Philosophy Entry on Plato

 

Footnotes

1. I was informed of this saying by multiple people during my visit to Estonia, a former Soviet satellite. The Russians, it appears, had to keep a heavy hand in Estonia to maintain it under Soviet control.

2. Contrary to popular belief, apparently baldness is not all your mother's (genes') fault. At the very least, smoking and drinking have an effect.

3. For a discussion of the relationship between compromised neural structures and moral responsibility (as well as legal culpability), see What Could've Been.

 

 

 

...for the Stronger

 

In the center you have Plato on the left and Aristotle on the right

 

Nosotros nos hemos educado bajo la influencia humillante de una filosofía ideada por nuestros enemigos, si se quiere de una manera sincera, pero con el propósito de exaltar sus propios fines y anular los nuestros.

[We have been educated under the humiliating influence of a doctrine designed by our enemies, perhaps in good faith, but with the aim of exalting their own goals and nullifying ours.]

~José Vasconcelos1

Important Concepts

 

Distinguishing Deduction and Induction

As you saw in the Important Concepts, I distinguish deduction and induction thus: deduction purports to establish the certainty of the conclusion while induction establishes only that the conclusion is probable.2 So basically, deduction gives you certainty, induction gives you probabilistic conclusions. If you perform an internet search, however, this is not always what you'll find. Some websites define deduction as going from general statements to particular ones, and induction is defined as going from particular statements to general ones. I understand this way of framing the two, but this distinction isn't foolproof. For example, you can write an inductive argument that goes from general principles to particular ones, like only deduction is supposed to do:

  1. Generally speaking, criminals return to the scene of the crime.
  2. Generally speaking, fingerprints have only one likely match.
  3. Thus, since Sam was seen at the scene of the crime and his prints matched, he is likely the culprit.

I know that I really emphasized the general aspect of the premises, and I also know that those statements are debatable. But what isn't debatable is that the conclusion is not certain. It only has a high degree of probability of being true. As such, using my distinction, it is an inductive argument. But clearly we arrived at this conclusion (a particular statement about one guy) from general statements (about the general tendencies of criminals and the general accuracy of fingerprint investigations). All this to say that for this course, we'll be exclusively using the distinction established in the Important Concepts: deduction gives you certainty, induction gives you probability.

In reality, this distinction between deduction and induction is fuzzier than you might think. In fact, recently (historically speaking), Axelrod (1997: 3-4) has argued that agent-based modeling, a newfangled computer-modeling approach to solving problems in the social and biological sciences, is a third form of reasoning, neither inductive nor deductive. As you can tell, this story gets complicated, but it's a discussion that belongs in a course on Argument Theory.

In this course we will focus only on deductive reasoning, primarily because of our need to study the relationship between premises and conclusion. Truth be told, inductive logic is a whole course unto itself. In fact, it's more like a whole set of courses. I might add that inductive reasoning might be important to learn if you are pursuing a career in computer science. This is because there is a clear analogy between statistics (a form of inductive reasoning) and machine learning (see Dangeti 2017). Nonetheless, this will be one of the few times we discuss induction. What will be important to know for our purposes, at least for now, is only the basic distinction between the two forms of reasoning—never mind that the distinction is fuzzy to start with.

Food for Thought...

 

Assessing Arguments

 

Some comments

Validity and soundness are the jargon of deduction. Induction has its own language of assessment, which we will not cover. These concepts will be with us through the end of the course, so let's make sure we understand them. When first learning the concepts of validity and soundness, students often fail to recognize that validity is a concept that is independent of truth. Validity merely means that if the premises are true, the conclusion must be true. Once you've decided that an argument is valid—a necessary first step in the assessment of arguments—you proceed to assess each individual premise for truth. If all the premises are true, then we can further brand the argument as sound.3 If an argument has achieved this status, then a rational person would accept the conclusion.

Let's take a look at some examples. Here's an argument:

  1. Every painting ever made is in The Library of Babel.
  2. “La Persistencia de la Memoria” is a painting by Salvador Dalí.
  3. Therefore, “La Persistencia de la Memoria” is in The Library of Babel.

 

La Persistencia de la Memoria, by Salvador Dalí

At first glance, some people immediately sense something wrong about this argument, but it is important to specify what is amiss. Let's first assess for validity. If the premises are true, does the conclusion have to be true? Think about it. The answer is yes. If every painting ever is in this library and "La Persistencia de la Memoria" is a painting, then this painting should be housed in this library. So the argument is valid.

But validity is cheap. Anyone who can arrange sentences in the right way can engineer a valid argument. Soundness is what counts. Now that we've assessed the argument as valid, let's assess it for soundness. Are the premises actually true? The answer is: no. The second premise is true (see image). However, there is no such thing as the Library of Babel; it is a fiction invented by a poet. So, the argument is not sound. You are not rationally required to believe it.
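Validity, understood this way, is mechanically checkable: an argument is valid just in case there is no possible "world" in which the premises are all true and the conclusion is false. Here's a minimal sketch—a toy encoding I'm adding for illustration, not anything from the text—that hunts for such a counterexample world for the Dalí argument:

```python
from itertools import chain, combinations

# Toy encoding (hypothetical): a "world" is just a possible content of the
# Library of Babel, drawn from a tiny universe of paintings.
paintings = {"La Persistencia de la Memoria", "Guernica", "The Night Watch"}

def possible_libraries(items):
    # every subset of the universe is a candidate content for the Library
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

counterexamples = [
    set(lib) for lib in possible_libraries(paintings)
    if paintings <= set(lib)                                # premise 1 true
    and "La Persistencia de la Memoria" in paintings        # premise 2 true
    and "La Persistencia de la Memoria" not in lib          # ...yet conclusion false
]
print(counterexamples)  # [] -- no counterexample world, so the argument is valid
```

The checker finds no world where the premises hold and the conclusion fails, so the argument is valid; what it cannot tell you is whether the premises are true in the actual world—that is the separate question of soundness.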

Here's one more:

  1. All lawyers are liars.
  2. Jim is a lawyer.
  3. Therefore, Jim is a liar.

You try it!4

Pattern Recognition

 

Sidebar

 

While pattern recognition is useful for assessing for validity, ultimately what we are really looking for is soundness. That means that we need a method for assessing each of the premises for truth. This raises the question: how do we know if the premises in an argument are true? Attempting to answer this question might drag us into the weeds very quickly. If we try to, say, first decide what truth even is, we would need to get into a complicated discussion in the field of epistemology. Epistemology is the branch of philosophy concerned with the nature and limits of knowledge; it takes up questions like: "What is truth?", "What is the difference between fact and opinion?", "What justifies our knowledge claims?", and "What are the limits of human knowledge?"

In this class, we will largely bypass the finer distinctions of epistemology—although the interested student can take my PHIL 101 course. Instead, we will be pragmatic and operate within a scientific worldview. In other words, we will take for granted that there are some tried and true methods for ascertaining which statements are true and which ones are not. Following the critical thinking textbook written by Jack Lyons and Barry Ward, we can say that “[t]hese seem to be the main ways we have of determining whether premises are true: perception, inference, and testimony” (Lyons and Ward 2018: 246).

Perception is probably the most straightforward way of assessing a claim for truth. You basically just check for yourself. In other words, you use direct sensory information to confirm or disconfirm some claim. Having said that, per Lyons and Ward, we actually tend not to rely on this method when assessing most claims for truth. This is because most of the claims we have to assess for truth aren't the kinds of claims that can be easily verified with just the senses. There are usually intermediate steps between hearing the claim and putting yourself in a position to directly assess the claim for truth with your senses. Moreover, these intermediate steps are usually too much of a hassle. For example, perhaps you've heard that water boils at a lower temperature in the mountains than at the beach. While you are perfectly capable of performing this experiment on your own, you're probably more likely to look it up on Google than actually get direct sensory information with regard to this claim. Similarly, you've heard of the state of Alabama, but I'd wager that most of you haven't visited it. Nonetheless, you still believe it exists—not through direct sensory experience but through some other method.

You can also, if you have all the relevant information, make logical inferences in order to decide whether a given claim is true or false. For example, here's a logic puzzle from an LSAT prep course:

A company employee generates a series of five-digit product codes in accordance with the following rules: The codes use the digits 0, 1, 2, 3, and 4, and no others. Each digit occurs exactly once in any code. The second digit has a value exactly twice that of the first digit. The value of the third digit is less than the value of the fifth digit. Question: If the last digit of an acceptable product code is 1, it must be true that the:
(A) first digit is 2
(B) second digit is 0
(C) third digit is 3
(D) fourth digit is 4
(E) fourth digit is 0.

In case you're dying to know, the answer is (A). However, most claims that you will assess for truth will not have carefully specified constraints on them like this logic puzzle. So, even if you actually wanted to do some logic puzzles, you would likely not have all the information you need to assess a claim for truth using this method.
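If you'd rather not grind through the deduction by hand, the constraints are so tightly specified that a brute-force check settles it. Here's a minimal sketch (my own illustration, not from the prep course):

```python
from itertools import permutations

# Enumerate all five-digit codes using 0-4 exactly once, then apply the rules.
codes = [
    c for c in permutations(range(5))
    if c[1] == 2 * c[0]    # the second digit is exactly twice the first
    and c[2] < c[4]        # the third digit is less than the fifth
    and c[4] == 1          # the question's added condition: last digit is 1
]
print(codes)  # [(2, 4, 0, 3, 1)] -- only one code survives: the first digit must be 2
```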

That leaves us only with testimony. In other words, most assessments for truth will come by way of deferring to the testimony of others who we have deemed to be competent in the topic we are dealing with. In other other words, we typically defer to experts in a given field when assessing the truth of a claim from said field. However, it's not so simple. Here's a head-scratcher: How can you spot the experts if you’re not an expert? This is, once again, very similar to a problem that Plato considered. In his dialogue called Meno, while the characters are debating the true nature of virtue, Socrates and Meno wonder if it will be possible to find what they’re looking for if they don’t really know what it is. With regards to the problem of identifying experts, we have to ask ourselves the following question: if I don’t know the answer to some query, how am I going to know whether some other person knows what he or she is talking about?

Furthermore, it may be the case that some individuals might not be recognized as experts but really do know their way around some particular domain. For example, it might be the case that some known terrorist, e.g., Ted Kaczynski, can actually answer some complicated mathematical questions, even though you wouldn't expect it. Still, even though some people who are widely thought to know actually don’t really know and some people who aren’t recognized as knowers really do know, it’s often best to go with recognized experts (Lyons and Ward 2018: 248). But watch out: you have to find experts in the relevant field. For example, an astrophysicist is an expert (in astrophysics), but she might not be ideally suited to answer questions about, for example, economics.

Last but not least, beware of the Dunning–Kruger effect. See the Cognitive Bias of the Day below.

 

 

 

 

Argument Extraction: Thrasymachus and Injustice

 

The sociological elements of science

 

Lyons and Ward

At this point we can push back against Socrates a bit. Recall that Socrates makes the point that injustice causes inner conflict in groups. He then further makes the case that inner conflict is not desirable. However, it does seem like some institutions do operate quite well with plenty of inner conflict. In fact, some institutions have mechanisms to deal with and can even thrive on conflict. The most obvious example I can think of is the institution of science. Science today is a complex institution, with various disciplines and subdisciplines, which focuses on testing and updating empirical hypotheses through data collection and experimentation. Importantly, this is all done within a somewhat adversarial (or competitive) social institution. Scientists compete for grant money, check each other's work through attempted replications, and even attempt to falsify each other's theories. Conflict appears to be built into the system. But it works! In fact, it appears that it is because of this adversarial nature that science tends to be “self-correcting.” That is to say that over time false hypotheses, even if they were at one time universally accepted, are eventually weeded out (see Lyons and Ward 2018: 274-75).5

Are we sure it's the adversarial element in science that makes it so successful? Intelligent people can disagree on this (see Firestein 2012 for another interesting view). However, we can at this point at least discard some philosophies of science that have fallen out of fashion. For example, some thought that science was about provability. Someone who believes in provability in science might argue that science works because successful scientific theories are ones that we can conclusively prove to be true. But...

“Consider a general claim like, ‘Negative electric charges always repel each other,’... However many test cases you observe that satisfy [this] generalization, there is an untold number that you have not observed” (Lyons and Ward 2018: 278).

In other words, it is impossible to prove with deductive certainty that, say, negative electric charges always repel each other, since this would require examining all instances of electric charges interacting with each other. Clearly, this is impossible. This is obviously not a knock on science; it's just that conclusive proof is not how science actually works.

 

Karl Popper (1902-1994).

Further, science contains many accepted theories that had (or still have, in some cases) unobserved postulates; in other words, accepted scientific theories contain claims that went unproved for a long time (e.g., genes, germs, electrons) and even claims that perhaps can never be proved (e.g., string theory).6

There's also the idea of falsifiability in science. It was Karl Popper (2002/1934) who argued that the difference between scientific claims and non-scientific claims is that science exhibits falsifiability, i.e., scientific claims could in principle be refuted via experiment, while non-scientific claims (like the claims of astrology) typically cannot be falsified (thus rendering astrology a pseudoscience). This sounds great initially. It does seem like that is what is distinctive about science. But is that really how it works? First off, in practice, scientists do not immediately throw out a theory if it conflicts with data—and that’s a good thing(!). For example, in the late 1700s Uranus was discovered and scientists used it as an opportunity to test Newton’s laws of gravitation. The test failed; Uranus’ motion deviated significantly from their predictions. While Popper’s theory suggests we abandon the theory of gravitation, scientists instead hypothesized (correctly) that there was another planet affecting Uranus’ orbit. They were right: it was Neptune. See? We can't immediately toss out a hypothesis that doesn't comport with the data, since it might be another part of the theory that is causing the error in prediction. Popper appears to be wrong.

“Popper’s picture of science is too simple… He makes it seem that when we test a hypothesis, we only use that one hypothesis to make the prediction. So, if the prediction is wrong, there is only one rational response: reject the hypothesis. But that’s clearly false of our example, and of scientific predictions in general. In almost every scientific prediction, there will be many premises required to deduce the prediction. So, when the prediction goes wrong, it is at least possible that the culprit is not the hypothesis we intend to test, but one of the other premises” (Lyons and Ward 2018: 280).

So these philosophies of science, although intuitive initially, appear to be incomplete. It does appear that the adversarial nature of science is a driving force behind its effectiveness. However, notice this is also a hypothesis—namely a hypothesis in the sociology of science. As such, it might get falsified eventually. But that's ok! For now at least, we can see the adversarial nature of science as a possible counterpoint to one of Socrates' assumptions.

 

 

 

 


 

Do Stuff

 


 

Executive Summary

  • The basic distinction, so far as this course goes, between deduction and induction is that deductive arguments attempt to provide full support for the conclusion, so that if the premises are true, then the conclusion must be true. Inductive arguments, on the other hand, provide probabilistic support for a conclusion; i.e., in inductive arguments, if the premises are true, then the conclusion is very likely to be true.

  • We will primarily be using deductive arguments in this course. As such, we must master the jargon of deduction. An argument is valid if the premises force the conclusion on you; i.e., if the premises are true, the conclusion must be true. An argument is sound if it a. is valid and b. has true premises.

  • Thus far we can assess for validity using either the imagination method or the pattern recognition method. We will be using testimony as our method of assessing premises for truth (to check for soundness). We will primarily heed the testimony of academics and scientists.

  • In Republic (336b-354c), Socrates engages with Thrasymachus, who attempts to show that injustice is more beneficial to the individual than being just. This will be a recurring theme in the remainder of the book.

 


 

FYI

Supplemental Material—

Advanced Material—

Related Material—

 

Footnotes

1. Translation by instructor, R.C.M. García.

2. By the way, I'm not alone in using this distinction. One of the main books I'm using in building this course is Herrick (2013) who shares my view on this distinction.

3. Another common mistake that students make is that they think arguments can only have two premises. That's usually just a simplification that we perform in introductory courses. Arguments can have as many premises as the arguer needs.

4. This argument is valid but not sound, since there are some lawyers who are non-liars, although not many.

5. A fascinating idea is the possibility that this might mean that many (all?) hypotheses which are considered fact today will very likely be found to be erroneous in the future (Laudan 1981). This is called pessimistic meta-induction. To make this idea more credible, consider the state of physics in the late 19th century. It appears that the famous physicist Max Planck was advised not to go into physics since it seemed like the field was pretty much done. “[W]hen the great German physicist Max Planck was beginning his studies in the 1870’s, he was advised by a physics professor to choose some other subject, because everything important had already been discovered—there were just a few holes to fill in” (Lyons and Ward 2018: 283). At the same time, we shouldn't take pessimistic meta-induction further than what it really claims. In other words, we shouldn't take it to say that science is futile. The scientific method is the most robust, successful method for acquiring empirical knowledge ever—although it should be clear that it can answer only empirical questions and we can never be certain that our current empirical knowledge won't eventually be overturned by future findings.

6. As it turns out, there's an active debate about string theory. Woit (2007) argues that string theory is not science (hence, he argues that it’s “not even wrong.”) At the same time, some physicists have high hopes for string theory (see Greene 2010). For a relatively manageable introduction into this debate see Hossenfelder (2020).

 

 

Three Red Flares

 

 

It is no idle question to wonder whether Plato, if he had stayed free of the Socratic spell, might not have found an even higher type of the philosophical man, now lost to us forever.

~Friedrich Nietzsche

Important Concepts

 

Argument Extraction: Glaucon's Challenge

 

In the Argument Extraction video, we saw a massive challenge to the notion that justice is something we should strive for. In other words, we heard a battery of arguments that suggested that, if possible, we should try to be perfectly unjust, creating only the illusion of being a good person—thereby reaping the benefits of having the reputation of being just. This powerful challenge compelled Socrates to examine the nature of justice as closely as possible. Using the analogy between individuals and society, Socrates and friends reasoned that if we can isolate where justice is within society at large, then we can better understand where justice is in an individual. By "magnifying" the problem, looking at society (which is very large) instead of the individual (which is comparatively small), they figured they could better understand justice and—they hoped—why they should strive for it.

 

Barley bread, a non-luxurious staple of the diet of the warrior-state of Sparta.

As they dived into creating the City of Words, things quickly escalated, and they realized that their perfect society, if it is to have the luxuries that most people would enjoy having, would require a large size, specialization at different tasks (like farming, weaving, etc.), and a standing army (to protect from threats, both internal and external). Below I've reproduced the portion of the text which makes the case that the army should be composed of professional soldiers, as opposed to citizen-soldiers (i.e., individuals who are trained for some particular profession, like leatherwork, and also for combat). The characters concluded that one can only truly excel at a craft by focusing one's entire efforts on just that craft, not diluting one's cognitive and physical powers across other skills. This makes intuitive sense, but we must analyze the argument more closely in order to know if it is sound. Let's first read the passage again.

SOCRATES: Well, now, we prevented a shoemaker from trying to be a farmer, weaver, or builder at the same time, instead of just a shoemaker, in order to ensure that the shoemaker’s job was done well. Similarly, we also assigned just the one job for which he had a natural aptitude to each of the other people, and said that he was to work at it his whole life, free from having to do any of the other jobs, so as not to miss the opportune moments for performing it well. But isn’t it of the greatest importance that warfare be carried out well? Or is fighting a war so easy that a farmer, a shoemaker, or any other artisan can be a soldier at the same time, even though no one can become so much as a good checkers player or dice player if he considers it only as a sideline and does not practice it from childhood? Can someone just pick up a shield, or any other weapon or instrument of war and immediately become a competent fighter in an infantry battle or whatever other sort of battle it may be, even though no other tool makes someone who picks it up a craftsman or an athlete, or is even of any service to him unless he has acquired knowledge of it and has had sufficient practice?
(374c-d)

Arguments expressed in conversation are hardly ever found already in standard form—unless you are unlucky enough to be having a conversation with someone trained in analytic philosophy. (Poor you!) Given that Republic is in dialogue form, we'll have to extract the argument from the text, force it into standard form, and then assess for validity and soundness. This passage is particularly rich. Notice that each sentence has a premise embedded in it, sometimes in an implied way. For example, take a look at the first sentence: "Well, now, we prevented a shoemaker from trying to be a farmer, weaver, or builder at the same time, instead of just a shoemaker, in order to ensure that the shoemaker’s job was done well." What is truly being said here is that true excellence in a given domain only comes when someone specializes in that domain, spending countless hours developing the appropriate skills. Let's call this premise 1:

1. True excellence in a given domain only comes when someone specializes in that domain.

 

The unfairness of the genetic lottery?

Take a look now at the second sentence: "Similarly, we also assigned just the one job for which he had a natural aptitude to each of the other people, and said that he was to work at it his whole life, free from having to do any of the other jobs, so as not to miss the opportune moments for performing it well." If you notice, there are two claims being made here—both of which are delightfully controversial(!). The first is that some people have a natural aptitude for certain tasks or roles. The second is that individuals should not be compelled to do anything other than the task for which they are most suited. Jointly, these can be interpreted as the controversial idea that genetic factors play a role in our aptitude for certain jobs, and that those who are best suited to certain tasks (like being CEO or playing basketball) should perform those tasks and should be the only ones to do so. (We'll investigate this idea further in a later lesson.) For now, let's recognize that these two claims are what is being said in the passage and let's call them premise 2 and premise 3:

2. Some individuals are predisposed to excel in certain tasks.
3. Individuals should not be compelled to do anything other than the task for which they are most suited.

After these two premises, Socrates launches a barrage of rhetorical questions. Can you figure out the main message here? It seems to be this: excellence in warfare requires a specific skill set that must be trained for—a skill set that some, but not others, are particularly predisposed for. That's premise 4.

4. Excellence in warfare requires a specific skill set that must be trained for, a skill set that some, but not others, are particularly disposed to.

Notice, however, that the conclusion is not explicitly stated in this passage. As it turns out, the conclusion is smeared across several lines of the dialogue. Convince yourself, after reading through the assigned reading (357a-377c), that the following argument captures what Socrates is getting at:

  1. True excellence in a given domain only comes when someone specializes in that domain.
  2. Some individuals are predisposed to excel in certain tasks.
  3. Individuals should not be compelled to do anything other than the task for which they are most suited.
  4. Excellence in warfare requires a specific skill set that must be trained for, a skill set that some, but not others, are particularly disposed to.
  5. Therefore, to produce a fighting force that achieves true excellence in warfare, Guardians should be selected for their natural aptitude for warfare, trained in the requisite skill set, and not allowed to perform any function other than waging war.

Truth be told, there are some other possible ways of extracting this argument. What I tried to do here is focus on the important passage at 374c-d. Having said that, once the argument is laid out in this way, it is much easier to see that the line of reasoning is arguably valid. Whether or not it is sound is another matter entirely. But that's enough for now.

 

Informal Fallacy of the Day

 

 

 

The Signaling Theory of Education

I've made plenty of people angry by talking about this view, but oh what the heck. Let's start with some Food for Thought...

 

 

The view

Caplan (2018) thinks he can explain this puzzle. In other words, he has a theory for why there is an economic advantage to those who go to college even though they typically don't gain much knowledge while they are there. His explanation requires that you grant the following assumptions:

  1. There are different types of people, e.g., some have higher and lower intelligence levels, some have capacities that others do not, etc.
  2. A person’s type (e.g., high- or low-intelligence) is non-obvious (and self-reporting doesn’t help).
  3. One type of people (e.g., high-intelligence), on average, performs differently than another type of people.

 

Caplan's The Case Against Education

Caplan puts a lot of weight on these assumptions, so we should explain what they mean. First off, claim 1 states that it is simply the case that some individuals have different skill-sets and capacities. Caplan doesn't dive into why we differ in our skill-sets and capacities, although as previously mentioned we will eventually dive into some theories as to why this might be the case. (Stay tuned.) Having said that, we've already seen that Socrates makes a very similar claim in premise 2 of the argument from the previous section. Claim 2 states what is probably obvious: you can't really just look at someone and know if they are hard-working, or intelligent, or rebellious, etc. (If you think you can do this in non-obvious situations, teach me your ways!) Claim 3 is, I think, equally obvious: people with certain skill-sets and capacities might perform better at a task than other people with different skill-sets and capacities. If there are no objections to these assumptions, let's move into examining the theory.

Caplan suggests that the education premium arises because employers can't just guess someone's type (and hence whether or not a potential employee can actually perform their job well). Perhaps it’s the case that employers are stumped with regard to one’s type through the interview process alone. Sure, they spent an hour with you, but they still don't know whether you'll really do the job well or not. What can they do? Well, it seems like they end up having to rely on signals. For example, perhaps crew cuts signal conformity, while mohawks signal rebelliousness. Rationally speaking, employers are better off hiring by haircut (the signal) than by coin toss (interview performance alone). In other words, signals come in handy.

 

Man with crew cut.

Of course, if you are open to looking for signals that a potential employee might do their job well, you shouldn't stop at haircuts. You should see if they have the intelligence to perform their job well, as well as the ability to stay on task and not shake things up too much. These traits—intelligence, conscientiousness, and conformity to mainstream social norms—are clearly the kinds of traits that you'd want in an employee if you were the owner of a business. And this, argues Caplan, is the function of education. If you have a college degree, it signals some minimum level of intelligence. First signal! It also shows that you are willing to sit through boring classes—since most students report being bored through most classes (Arum and Roksa 2011)—and still pass at the end of the semester. This is a good sign of conscientiousness (also known as grit or stick-to-it-iveness). The second signal! Last but not least, having a college degree signals that you display a certain level of conformity to mainstream social norms. Even though they might complain along the way, college graduates jumped through all the right hoops if and when they were told to jump. How else would you pass college classes taught by stuffy professors like me? In short, employers hire college graduates because they've sent the costly signal (a college degree) that they are a. somewhat clever, b. willing to sit through boring work, and c. willing to conform to the social norms of the institution they are a part of. They're more likely to get hired because of this, and, thus, they are (on average) more likely to have a higher wage (the education premium).1

Who cares?

One possible objection that Caplan entertains is the matter of apathy. Does it really matter that education is mostly signaling? Caplan argues that this is a big deal. If education is all or mostly signaling, society is subsidizing a credentialing system that’s fostering a credentialing arms race with no other measurable benefit. In other words, everyone is now forced to take on debt to get a degree just to be on the same level as everyone else, and there's no other discernible benefit. For example, Caplan reviews the evidence and it seems that education does not increase job satisfaction. It also does not increase overall happiness (after a certain financial threshold), nor does it make one healthier (or if it does, it does so only negligibly). School is also dreadfully boring. As previously stated, most students dislike both school and work, but they dislike work a little less. (You're probably bored to tears right now!) Worse yet, education is not always fruitful. 25% of high school students don’t finish in four years. 60% of full-time college students don’t finish in four years. Half of advanced degree students never finish. Debt for nothing.

What to do?

Here are some of Caplan’s suggestions:

  • Take the fat out of K-12 education, e.g., history, social studies, art, music, foreign language. Replace these with more time in the playground or more quiet time in the library. Alternatively, you can end school earlier in the day (once the students are old enough to not need babysitting).
  • Get rid of college majors that are made of fat; eliminate them completely from public institutions and, in private universities, ensure they receive no federal funding. (Bye bye philosophy?)
  • Raise standards for useless subjects to extremely high levels, thereby disincentivizing students from wanting to use those subjects as a status-enhancing activity. For example, have mandatory auditions for music classes. If someone isn't taking it seriously, don't let them join just to get a free period or to look cool.
  • Most controversially, Caplan argues that we should cut all subsidies to post-K-12 education; i.e., don't make college any cheaper. Only excellent students (either through family funding or scholarships) would go on to higher education, since any non-excellent student would voluntarily choose not to attend an institution that costs an arm and a leg and in which they have very little chance of being successful. The billions currently spent on education could be used for other things, such as funding cancer research, attempting to end food insecurity, and, most importantly, funding vocational programs (e.g., automotive repair, welding, machining, etc.).

 

Other objections

 

Roblox, one of the most Google'd words.

Caplan entertains some other objections. For example, one might object that education has the benefit of enriching the student with high culture and the love of learning, two highly valuable traits of a well-rounded person. Or else one can say that education is the greatest equalizing tool we have. So, if we care about social justice, we must promote the ends of education.

But Caplan dismisses both of these objections. First off, since the dawn of the internet, any enriching content one wants is at one’s fingertips. Those who truly love learning and want to explore poetry, opera, history, and ethnic studies can educate themselves for free. Having said that, searches for non-academic content far outnumber searches for content of an academic nature or relating to high culture. In other words, people care more about the Kardashians than the Battle of Karbala. Moreover, the social justice objection assumes that students retain what they’re being taught. But we’ve seen that’s mostly false. And even if you did remember, the most popular political theories taught in, for example, Political Science courses (e.g., Rawlsianism) do not address real-world problems, nor do they have much to say to non-white, non-male students (see Yaouzis 2019 and Mills 2017).

 

 

 


 

Do Stuff

  • Read from 357a-377c (p. 36-57) of Republic.

 


 

Executive Summary

  • In Republic 357a-377c, we come face to face with Glaucon's challenge to Socrates' view that justice is good both for its own sake and for what it brings; we also hear Adeimantus' addendum. They make the case that being unjust might be more beneficial than being just, and that we conform to justice only because we can't find a way to be unjust without repercussions.

  • Towards the end of the reading, Socrates and friends make the case that specialization will be required in their City of Words; in particular, they will need to find those most suitable to be Guardians of the state, as well as figure out how to train them.

  • Caplan (2018) gave us a signaling theory of education: higher education doesn't teach you much, but it does signal to employers that you have an adequate level of intelligence, that you are conscientious, and that you conform to mainstream social norms.

 


 

FYI

Suggested Reading: Bryan Caplan, What students know that experts don’t: School is all about signaling, not skill-building

TL;DR: Econ Duel, Is Education Signaling or Skill Building?

Supplemental Material—

Advanced Material—

Related Material—

 

Footnotes

1. Do you really need all three traits to signal that you'll be a good employee? Caplan argues that candidates who exhibit all three are optimal and that candidates who lack any one of them might be suboptimal. For example, say you are intelligent and full of grit but you don't conform to mainstream norms. This might actually make you a liability: if I were considering hiring you, I'd worry that you are a little rebellious and that you're smart enough and hard-working enough to get away with shenanigans that would ultimately harm my business. No thank you.

 

 

A Certain
Sort of Story

 

 

Where the voice of the people is heard, elite groups must insure their voice says the right things... The less the state is able to employ violence in the defense of the interest of the elite groups that effectively dominate it, the more it becomes necessary to devise techniques of ‘manufacture of consent’... Where obedience is guaranteed by violence, rulers may tend towards a ‘behaviourist’ conception; it is enough that people obey; what they think does not matter too much. Where the state lacks means of coercion, it is important to control what people think.

~Noam Chomsky

Important Concepts

 

Long Chains of Linear Reasoning

You are, I hope, familiar by this point with modus ponens and modus tollens. In case you need a refresher, however, I will reproduce them below in their most abstract form. First, let's cover some new terms. Recall that modus ponens has two premises. The first is a conditional statement, i.e., an if-then statement, which is itself composed of two more basic sentences: the part associated with the "if" is called the antecedent and the part associated with the "then" is the consequent. Here are some examples of conditionals:

  • "If Lisa is home, then Caroline is picking up the kids."
  • "If the store is open, then the lights will be on."
  • "If you drink coffee after 5pm, (then) you might have a hard time going to bed before 10pm."

Notice that the antecedent and the consequent are themselves stand-alone sentences. In the first sentence above, "Lisa is home" and "Caroline is picking up the kids" are by themselves perfectly good sentences. In other words, they are grammatically correct and they contain a subject and a predicate. They are being "put together", however, into a conditional. This means that the conditional is actually a compound sentence: a sentence that is itself composed of one or more sentences connected by one or more sentence connectives. The sentence connective in a conditional is the "if...then"; it's what connects the two more basic sentences. (These basic sentences are sometimes called simple sentences or atomic sentences.) Other sentence connectives that you might be familiar with are "and" and "or". For example:

  • "Either Roxana is at the office or she went to lunch with Bob."
  • "James lives in Los Angeles and Anthony works in San Diego."

Another connective that you are familiar with is "not". This connective gets added onto a simple sentence and changes its truth value. The result is technically called a negation. Although a negation doesn't look like "Either Roxana is at the office or she went to lunch with Bob", it's still a compound sentence. Take another look at the definition of compound sentence and convince yourself of this if you don't believe me. Here are some negations:

  • "It's not the case that Lola's sister is pregnant."
  • "It's false that Adam missed school today."

In both of these, the embedded simple sentence is perfectly fine as a stand-alone sentence: "Lola's sister is pregnant" and "Adam missed school today". They are turned into negations, effectively flipping their truth value, by the addition of phrases that deny the truth of the simple sentences: "it's not the case that" and "it's false that".

One more thing: these compound sentences don't have to be composed only of simple sentences. The antecedent of a conditional can itself be, say, a conjunction, i.e., a compound where the connective is an "and". For example, here's a conditional with a conjunction in the antecedent place and a disjunction, i.e., a compound where the connective is an "or", in the consequent place.

"If James lives in Los Angeles and Anthony works in San Diego, then either Roxana is at the office or she went to lunch with Bob."

Never mind that the sentence is probably not true. The important part to notice here is that the antecedent is a conjunction ("James lives in Los Angeles and Anthony works in San Diego") and the consequent is a disjunction ("Either Roxana is at the office or she went to lunch with Bob"). If that makes sense, then we're ready to move on (although the student interested in the logical analysis of language should refer to Bergmann, Moor, and Nelson 1990, especially chapter 7).
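
To make the structure of these compounds concrete, here is a minimal sketch in Python (my own illustration, not something from the reading): each simple sentence becomes a Boolean variable, and the connectives become Boolean operations. The truth values assigned below are stand-ins chosen purely for illustration.

```python
# Simple sentences as Boolean variables (the truth values are assumptions).
james_in_la = True          # "James lives in Los Angeles"
anthony_in_sd = False       # "Anthony works in San Diego"
roxana_at_office = True     # "Roxana is at the office"
lunch_with_bob = False      # "She went to lunch with Bob"

antecedent = james_in_la and anthony_in_sd       # a conjunction ("and")
consequent = roxana_at_office or lunch_with_bob  # a disjunction ("or")

# A conditional is false only when its antecedent is true
# and its consequent is false.
conditional = (not antecedent) or consequent

print(conditional)  # True, since the antecedent is false
```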

With all the setup out of the way, let's return to modus ponens and modus tollens. Modus ponens is composed of two premises: a conditional statement and the antecedent of that conditional. The conclusion of a modus ponens is the consequent of the conditional. Expressed in an abstract way, we can say that modus ponens is any argument that takes the following general form:

  1. If P, then Q.
  2. P is true.
  3. ∴ Q is true.

The bold "P" and "Q" is simply to remind you that it doesn't have to be a simple sentence in the antecedent or consequent place; it could be another compound. (These bold letters, by the way, are called metavariables.) The "∴" simply means "therefore". I will use the "∴" symbol to mean all possible conclusion indicator words, like "therefore", "consequently", "as a result", etc.

Modus tollens takes the following form:

  1. If P, then Q.
  2. It's not the case that Q is true.
  3. ∴ it's not the case that P is true.

Now that we know what a negation is, we can actually express the valid argument forms listed above in a maximally symbolic way. Although these may be cumbersome to learn at first, it will pay dividends to understand these. Once you learn these argument schemas, you'll have an easier time identifying these "in the wild". So, introducing the "~" to mean "not" (i.e., the connective for negations) and the "→" to mean "if...then" (i.e., the connective we'll be using for conditionals), here are modus ponens and modus tollens in symbols:

Modus ponens

  1. P → Q
  2. P
  3. ∴ Q

Modus tollens

  1. P → Q
  2. ~Q
  3. ∴ ~P
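
If you'd like to see for yourself why these two forms are valid, here is a minimal sketch in Python (my own illustration, not part of the course materials). It brute-forces the truth table: a form is valid just in case no assignment of truth values makes all the premises true while making the conclusion false.

```python
from itertools import product

def implies(p, q):
    # "P -> Q" is false only when P is true and Q is false.
    return (not p) or q

def is_valid(premises, conclusion):
    """Valid iff no row of the truth table makes every premise true
    while making the conclusion false."""
    return all(
        conclusion(p, q)
        for p, q in product([True, False], repeat=2)
        if all(prem(p, q) for prem in premises)
    )

# Modus ponens: P -> Q, P, therefore Q
print(is_valid([implies, lambda p, q: p], lambda p, q: q))          # True

# Modus tollens: P -> Q, ~Q, therefore ~P
print(is_valid([implies, lambda p, q: not q], lambda p, q: not p))  # True

# Affirming the consequent (P -> Q, Q, therefore P) is invalid:
print(is_valid([implies, lambda p, q: q], lambda p, q: p))          # False
```

The third check shows the method catching an invalid form (called affirming the consequent), where true premises can sit alongside a false conclusion.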

You will practice identifying these in the activity below. However, the arguments featured in Republic are not always simply a conditional followed by the antecedent or a negated consequent. Plato makes use of chains of arguments, also known as sorites (pronounced suh-RITE-eez). We will begin identifying these chains of linear reasoning in the Argument Extraction video for today.

 


 

Do Stuff

Read the following passages. First, place these arguments into standard form. Then identify which valid argument form is associated with each argument. Lastly, complete Quiz 1.4 to get your points for this assignment.

  1. If there is no reliable way to tell that you are dreaming, you can’t be sure you’re not dreaming right now. There is no reliable way to tell you’re dreaming. Therefore, you can’t be sure you’re not dreaming right now.
  2. If the gods exist, then there would be no unnecessary suffering. But it is not the case that there is no unnecessary suffering. Therefore, the gods must not exist.

 


 

Argument Extraction: The constitution of the gods

 

 

 

Manufacturing consent

 

Herman and Chomsky's Manufacturing Consent

The type of censorship that Socrates appears to be advocating for his well-functioning ideal city is very disturbing to some. It seems a little like brainwashing—and it gets worse. However, it's not entirely clear that we're doing much better two thousand years later. As you will recall from the Informal Fallacy of the Day from last lesson, the phrase legacy media applies to newspapers, magazines, and news programs (or channels) that predate the Internet, while the phrase new media refers to post-Internet media. Before the dawn of new media, almost everyone acquired their information from legacy media. Nowadays, it seems that most people acquire their news from new media, in particular from news sources with high visibility on social media networks. But does either legacy media or new media actually inform us?

Let's first discuss legacy media. (In)famously, in Manufacturing Consent, Herman and Chomsky (2002/1988) argue that the mass communications media of the United States serve primarily a propaganda function. Some clarifications are in order. First, note that this work was originally published in 1988, so what the authors are referring to is, of course, legacy media. What are they really saying? In short, they concluded that the media of their day didn't inform; it simply attempted to persuade the masses to agree with whatever was in the interest of the political and economic ruling classes. This symbiotic relationship between the state's political and economic ruling classes and the media industry, by the way, is sometimes referred to as the politico-media complex.

If this sounds suspiciously like a conspiracy theory, you're not alone in thinking so. Some academics have accused Chomsky in particular of conspiratorial thinking (Goertzel 2019). However, in the case of Manufacturing Consent, Chomsky and Herman displayed excellent scholarship. Their methodology was as follows. Herman and Chomsky had two competing models. The first we can call the communicative/informative model. According to this model, people who watch the news should end up better informed—obviously. The second model Herman and Chomsky called the propaganda model. On this model, the media inculcates individuals with the values and beliefs that promote the interests of the elite, i.e., the political and economic ruling classes. Next, they exhaustively reviewed the news coverage of major events from their recent past and assessed how informative the media actually was. In other words, they had the facts, they had what the media actually reported, and they compared the two. Stories reported with the aim to inform were catalogued under the communicative/informative model; stories that deviated from the truth and veered towards the interests of the elite were catalogued under the propaganda model. Guess which pile was bigger at the end of the analysis! (Hint: it was the propaganda model.) More importantly, according to Chomsky and Herman, where the media deviated from the facts, it was always, suspiciously, in a direction that promoted the viewpoints of the ruling classes.

 

Members of the Contras in Nicaragua
Members of "la Contra"
in Nicaragua.

One example of the propaganda function, according to Herman and Chomsky, was the unconditional support given to anti-communist regimes worldwide during the period they studied. This support was unashamedly paired with unremitting criticism of left-leaning governments (especially in Latin America). In other words, no matter what kind of atrocities anti-communist regimes committed, news coverage of them tended to be favorable, reliably sanitizing their actions. Left-leaning regimes, on the other hand, were portrayed as illegitimate, no matter how much more legitimate they were relative to the anti-communist regimes.

Consider the case of elections in Guatemala and El Salvador (where right-wing, anti-communist regimes were in power) versus elections in Nicaragua (where a communist regime was in place). Nicaragua was more stable at the time, and electoral conditions there were more favorable according to election watchdog agencies. Guatemala and El Salvador were in the middle of great civil conflict; each country's army was engaged in counterinsurgency and violent repression of the populace—conditions that are pretty clearly not conducive to free and fair elections. Nevertheless, since Guatemala and El Salvador were fighting leftist insurgencies, their elections were covered by the US mass media as legitimate. Nicaragua's elections, since the country elected a leftist (a Sandinista), were deemed illegitimate by legacy media.1

Don't get the wrong idea. By presenting this work by Herman and Chomsky, I'm not advocating any particular political philosophy. The merits and drawbacks of capitalist and communist systems are another conversation entirely—and we will have it eventually. What's important to note here is that neither type of political regime had an unblemished record. In other words, both communist and anti-communist governments in Latin America committed atrocities. However, the US media, per Herman and Chomsky, reliably played down the bad behavior of anti-communist regimes while emphasizing the bad behavior of communist regimes. This is a clear double standard: two groups engage in the same behavior, but it's ok when the group we're friendly with does it. At that point in history, of course, the US and the Soviet Union were engaged in a "cold" war, a conflict that was primarily ideological—capitalism versus communism—even though they did fight several proxy wars (i.e., wars where one side would fund and equip forces to fight the other side on its behalf). In any case, if the job of the media were to inform, then they would've informed. Instead, they promoted the interests of the (capitalist) elite.

 

 

 

Democratizing information. Yay?

 

Damasio's The Strange Order of Things

Is the current state of affairs much better than what Chomsky and Herman described in the latter half of the 20th century? I'd say no, and various scholars agree with me. For example, in chapter 12 of The Strange Order of Things, neuroscientist Antonio Damasio discusses his concerns about the 21st-century democratization of information. One potentially detrimental development, Damasio argues, is that citizens of industrialized societies have been given access to a tsunami of information but(!) not enough leisure time and wealth to reflect on it, so as to intelligently sift through and organize the discoveries of science. This is compounded by the proliferation of misinformation in the guise of information and by our natural tendency to resist changing our beliefs. In addition to all this(!), the software that runs on our electronic devices is designed to be maximally addictive. This increases the volume of information (and misinformation), further shrinking any hope of having enough time for reflection. Moreover(!), these same citizens are encouraged to maximize their autonomy: vote, express themselves on social media, protest, etc. So, you have more information, you have less time to reflect on it, and you are encouraged to regularly express your opinion on this information that you haven't processed carefully. It isn't difficult to imagine how this could be catastrophic.

Damasio reminds us that if we are faced with uncertainty, we turn to our in-group (i.e., the group that aligns with our social identity) for guidance on what to believe. But this is, in effect, a turn away from science; and this is true even if the in-group that one turns to does in fact endorse some scientific conclusions. This is because the true authority becomes the group and not the scientific process. So, we are perhaps tailspinning towards an increasingly non-scientifically-informed social life.

You might say, though, that social media is actually helping social movements. I'd beg to differ, and I'll give you an example. Social media is sometimes lauded as having enabled the so-called Arab Spring, as if there were no Arab pushes for democratization before 2010. But when protests erupted in Tunisia in December of 2010, Twitter didn't even offer its service in Arabic, and there were only around 200 active accounts in the country. It appears that simple text messaging was what enabled the protests—not social media. As Wired contributor Siva Vaidhyanathan reports:

“Overall, fewer than 20 percent of the country’s citizens used social media platforms of any kind. Almost all, however, used cell phones to send text messages. Unsurprisingly and unspectacularly, people used the communication tools that were available to them, just as protesters have always done. The same was true of Egypt. When in January 2011 angry people filled the streets of Cairo, Alexandria, and Port Said, many inaccurately assumed, once again, that Twitter was more than just a specialized tool of that country’s cosmopolitan, urban, educated elites. Egypt in 2011 had fewer than 130,000 Twitter users in all. Yet this movement too would be drafted into the rhetoric of Twitter Revolution. What Facebook, Twitter, and YouTube offered to urban, elite protesters was important, but not decisive, to the revolutions in Tunisia and Egypt. They mostly let the rest of the world know what was going on. In the meantime, the initial success of those revolutions (which would be quickly and brutally reversed in Egypt, and just barely sustained in Tunisia to this day) allowed techno-optimists to ignore all the other factors that played more decisive roles—chiefly decades of organization among activists preparing for such an opportunity, along with some particular economic and political mistakes that weakened the regimes” (Vaidhyanathan 2019).

 

Lanier 2018

It gets worse! Lanier (2018) provides ten arguments for why you should delete your social media accounts this very second. His ninth argument is that social media is making politics impossible. This is because our political dispositions are being studied and weaponized not so that we fight against each other—although that might be a by-product—but so that we spend as much time as possible glued to our devices, mindlessly scrolling through our social media accounts, entertaining ourselves to death. This is the same point that Damasio made: the algorithms that run social media platforms are designed to be maximally addictive. You have a supercomputer pointed at your face, figuring out how to get you to stare at it longer—for the profit of someone who is already a billionaire.

Lanier also describes, in his fourth argument, how social media is affecting truth. First, notice that the only currency in social media is attention. The way this currency is acquired makes one more performative, i.e., more likely to engage in actions with a specific audience in mind so as to elicit a response or garner a reaction. This process makes users lose authenticity, since they're performing the whole time (sometimes orchestrating massive group efforts so as to carefully choreograph a scene that looks like fun). In this environment, journalism becomes less truth-oriented as it becomes mere clickbait. Bot accounts flood platforms with fake people (so as to produce more currency, i.e., attention). Your interactions and content become heavily engineered, attention-seeking nonsense.

To make things even worse(!), social media giants like Facebook have no problem co-opting social justice movements (like Black Lives Matter) so as to increase their revenue stream even more. This is because it turns out that negative emotions are much more useful for promoting engagement with the platform than positive ones. And so, social media algorithms surface more content that gives rise to negative emotions, e.g., anger. You know this already: users who provoke other users with mean or nasty comments get the most attention. One sure-fire way to produce negative emotions is politically loaded content. And so, Lanier alleges that the very large-scale social movements we see in society today, both on the right and on the left, were actually in part generated by social media companies. Lanier explains:

“Black activists and sympathizers were carefully catalogued and studied. What wording got them excited? What annoyed them? What little things, stories, videos, anything, kept them glued to [their social media accounts]? What would snowflake-ify them enough to isolate them, bit by bit, from the rest of society? What made them shift to be more targetable by behavior modification messages over time? The purpose was not to repress the movement but to earn money. The process was automatic, routine, sterile, and ruthless. Meanwhile, automatically, black activism was tested for its ability to preoccupy, annoy, even transfix other populations, who themselves were then automatically catalogued, prodded, and studied. A slice of latent white supremacists and racists, who had previously not been well identified, connected, or empowered, was blindly, mechanically discovered and cultivated, initially only for automatic, unknowing commercial gain. But that would’ve been impossible without first cultivating a slice of [social-media-enabled] black activism, and algorithmically figuring out how to frame it as a provocation.” (Lanier, Argument 9; brackets are mine).

In short, perhaps it was the drive for profit that powered the process which categorized whole sections of the population that were hitherto unidentified. Put straightforwardly, social media companies created social movements as a by-product of their business model. More importantly, once these social movements were identified, they were connected and empowered—through the very same social media platforms that created them. Eventually, they spilled out of the platforms and into the real world. Since this happened on both sides of the political spectrum, both liberals and conservatives might have a problem with such a mindless process affecting our society the way it has. So I pose a question to you. Which is better: the censorship of Plato's City of Words or the information anarchy of the 21st century?

 

 


 

Do Stuff

  • Read from 377c-383c (p. 57-65) of Republic.

 


 

Executive Summary

  • In Republic 377c-383c, Socrates and friends begin to note that in order to create a Guardian class that is maximally friendly to citizens and hostile to aliens, these would-be Guardians must be trained from an early age. Importantly, the kinds of stories and poems that they hear must be heavily censored so as to promote only the civic virtues that Socrates and friends argue are needed in Guardians. In short, only a certain sort of story will be accepted: those that teach Guardians to be as god-fearing and as godlike as human beings can be. All other stories are banned.

  • Herman and Chomsky argued that legacy media served primarily a propaganda function, as opposed to the function of actually informing the citizenry.

  • New media has its own problems. Damasio argues that there is too much information to process intelligently, while Lanier argues that the very business model of social media platforms is bad for individuals (making us less happy) and bad for society (reducing the quality of journalism and making politics more divisive).

 

FYI

Suggested Reading: Edward Herman, The Propaganda Model: a retrospective

TL;DR: Noam Chomsky - The 5 Filters of the Mass Media Machine

  • Note: This video was produced by Al Jazeera English, which is funded in whole or in part by the Qatari government.

Supplemental Material—

Advanced Material—

Related Material—

 

Footnotes

1. There might be reasons other than those given by Chomsky and Herman for the propaganda function served by the mass media. Given the context of the Cold War, it may be that media outlets were unconsciously biased, or that they felt too much was at stake to be unbiased. This appears to have been true at least for some state officials (see Talbot 2015).

 

 

The One Great Thing (Pt. I)

 

 

Education is a weapon whose effects depend on who holds it in his hands and at whom it is aimed.

~Joseph Stalin

Molding young minds

In the reading for today, we glance inside the smoke-filled rooms of Plato's kallipolis where decisions about censorship are made. If you've not noticed by now, I'm making some very explicit connections between Plato's Republic and our 21st-century world. The reason for this is that when considering statecraft (the skillful management of the inner workings of a political entity), as we are doing for the Ninewells territory, it is helpful to see how other political entities, both real and imaginary, are made to function well. We will see what Plato has in store for us below, but first I want to take a look at some events from the 20th century that shed light on how a population is compelled to fall in line with the interests of the political elite. In particular, I'd like to take a look at the dawn of public relations.

 

Kinzer's Overthrow

Edward Bernays (1891-1995) claimed that his specialty was 'the conscious and intelligent manipulation of the organized habits and opinions of the masses' (Kinzer 2007: 134), and he is considered the father of public relations. Public relations is the practice of managing and disseminating information from an individual or an organization (such as a business or a government agency) to the public, such that the public comes to a particular conclusion about that individual or organization and their actions. In his 1928 book Propaganda, Bernays gave an overview of the basics of these public communication techniques. But, while PR is interesting (and controversial) in and of itself, I'd rather not give a detailed rundown of the actual practices of the field. Instead, I want to give you an example of how these practices have been used in political matters. For this we can look at the work of none other than Bernays himself. I've pulled all the following information from Tye (1998), who gives a history of Bernays and his work, and from Kinzer (2007).

Let's begin with some notable PR campaigns by Bernays. In the 1920s, Bernays was hired by the Beech-Nut Packing Company to increase consumer demand for pork. Bernays used the ideas of his uncle—Sigmund Freud—to promote the idea that bacon and eggs ought to be eaten at breakfast. Basically, he got a physician (who also worked for Beech-Nut) to claim that breakfasts should be heartier; this doctor then sent his "findings" to other doctors to see if they agreed, and many said they did. Bernays used this as part of his marketing campaign, to great success. In the late 1920s, the American Tobacco Company hired Bernays to promote its Lucky Strike cigarette brand to women, a demographic with which the company struggled. For this campaign, Bernays encouraged women to smoke instead of eat by promoting the ideal of thinness (Tye 1998: 23-26). Later, to promote women smoking in public, Bernays rebranded Lucky Strike cigarettes as "torches of freedom" by paying women to smoke them during the 1929 Easter Sunday Parade in New York. He also tried to make the brand's shade of green a more fashionable color. Interestingly, according to Tye (1998: 89), Bernays declined to work with the Nazi Party, the Spanish dictator Francisco Franco, and Richard Nixon, but did work for the NAACP and various other non-profit organizations. But Bernays was just getting started.

 

Bernays
Edward Bernays (1891-1995).

To understand Bernays's most infamous campaign, we first need a little context. You should know that Guatemala became democratic in 1944. When this happened, Sam Zemurray, the visionary 'Banana Man' head of the United Fruit Company (who had organized the overthrow of President Miguel Dávila of Honduras in 1911), sensed that Guatemala's reforming government would hurt his bottom line—that is, cut into his company's profits. And so, although Zemurray was reluctant to make United Fruit the first American corporation to wage a propaganda campaign in the United States against the democratically elected president of a foreign country, he eventually hired Bernays. Zemurray wanted Bernays to agitate against the Guatemalan president Jacobo Árbenz. Things weren't moving very fast, but "then, in the spring of 1951, Bernays sent him a message with alarming news. The reformist leader of faraway Iran, Mohammad Mossadegh, had just done the unthinkable by nationalizing the Anglo-Iranian Oil Company. 'Guatemala might follow suit,' Bernays wrote in his note" (Kinzer 2007: 134). In other words, at least one nationalist reformer was making waves around the world, and Zemurray was worried this might inspire Árbenz to do the same. Kinzer then writes "that was all Zemurray needed to hear. He authorized Bernays to launch his campaign," which included "glowing dispatches about United Fruit and terrifying ones about the emergence of Marxist dictatorship in Guatemala" (Kinzer 2007: 134).1

The rest of this story is complicated, but here are the highlights:

 

 

 

Sidebar

Students sometimes ask me, after a bit of outrage and disappointment, why Eisenhower approved Operation PBSUCCESS. The most convincing answer that I've read comes from Michael Grow (2008). In U.S. Presidents and Latin American Interventions, Grow explains the motivation felt by the several administrations that pursued regime change in Latin America during the Cold War. The underlying motivation behind the interventions, both successful and unsuccessful, in Guatemala, Cuba, British Guiana, the Dominican Republic, Chile, Nicaragua, Grenada, and Panama was to posture to the Soviet Union. In other words, American presidents felt they needed to look resolved to stop any encroachment whatsoever by communism in their hemisphere. Put differently, they felt they needed to look tough. Guatemala was an important first step. Grow explains:

“From Washington’s perspective, then, Guatemala represented nothing less than a ‘crucial test’ of superpower strength in the Cold War. Forces on both sides of the Iron Curtain were watching the confrontation closely, U.S. officials believed, and would draw important inferences about U.S. power from Eisenhower’s response to the situation... Guatemala offered an auspicious opportunity for a quick, morale-boosting tactical victory. The successful overthrow of Arbenz’ government would convey an image of U.S. strength and demonstrate to a watchful world that, under the Eisenhower administration, the United States could effectively stanch the tide of international communist expansionism” (Grow 2008: 20-21; emphasis added).

Interestingly, now that the Soviet Union has dissolved and some of its documents have been released, we know that the Soviets weren't terribly concerned about Latin America (Grow 2008; see also Brands 2012). These overthrows and interventions were due, we might be led to believe, primarily to U.S. officials' perceptions of the Soviet Union's goals—perceptions that appear to have been overblown.

What have we learned so far? Well, we might say that PR campaigns are extremely useful in persuading large segments of the populace that a given policy is good. The question I'd like to tackle here, though, is not really about U.S. presidents' and companies' uses of PR; we know that there have been some regrettable instances of this. The question I'd like to pose to you is this: as the (temporary) ruling council of Ninewells, would you be willing to use PR for benevolent reasons, to persuade the population that some unpopular policy is actually good for them if it's good for the functioning of the city as a whole? In other words, are you ok with persuading people to agree with things that are for the good of the polity? Here's some Food for Thought...

 

 

Before diving into Republic, let's take a look at the Cognitive Bias of the Day:

 

 

Argument Extraction

 

 

 

Teaching Patriotism

It's perfectly natural, I think, to feel a certain wariness as one reads the pages of Republic. To modern eyes, there is something sinister about how the kallipolis is turning out. What doesn't come naturally, however, is taking a closer look at our own society and questioning aspects of it that are similarly disconcerting—if one pays close attention. Are there aspects of American society that are the way they are because the population has been spoon-fed a carefully crafted misinformation campaign? If there are, are these aspects contributing to the overall well-functioning of society? If so, do we want to reproduce them in Ninewells? These are controversial questions. Some are offended when I merely ask them (and they get too mad to even hear my answer!). So I'll proceed more cautiously here. Let's start with this seemingly innocuous question: What is the function of primary school education?

 

Lies My Teacher Told Me

One potential answer comes from Loewen (2007/1995). Loewen begins his Lies My Teacher Told Me with an odd admission: college history professors—of whom he is one—routinely have to disabuse students of what they learned in their high school history education. In other words, every semester Loewen and other history professors have to correct the erroneous information imparted to their students while they were in high school. Why is this the case? After systematically and exhaustively reviewing the high school curriculum, Loewen could not help but come to the following conclusion: the main function of high school history classes is to teach students blind nationalism and patriotism, in addition to a mindless optimism.

I cannot possibly summarize all of Loewen's argument here (but please do see the interview in the FYI section and/or read his book). I will simply give you one example of what he found. Let's talk about heroification: the process by which history textbooks turn historical figures into flawless national heroes who exemplify national ideals (see Loewen 2007, chapter 1). For example, history textbooks heavily sanitize the first deaf-blind person to earn a Bachelor of Arts degree, Helen Keller. It is absolutely important to mention her plight so that we can empathize more easily with individuals with disabilities. But that cuts off Keller's story too soon. What did she do after graduation? What the history books leave out is that Keller was a radical socialist who joined the Industrial Workers of the World, the syndicalist union persecuted by then-President Woodrow Wilson. Why leave this part out? Because for much of the 20th century, being a socialist was thought of as less than American. It's only recently that the label "socialist" has been rehabilitated in at least some political circles. So, Keller's politics are not mentioned in high school, much less all the interesting conversations that would've accompanied that lesson. One such conversation: Keller realized that blindness disproportionately affected the poor; in other words, it was a class issue, since those with low socioeconomic status were more likely to become blind. Loewen explains:

“Keller’s commitment to socialism stemmed from her experience as a disabled person… Through research she learned that blindness was not distributed randomly throughout the population but was concentrated in the lower class. Men who were poor might be blinded in industrial accidents or by inadequate medical care; poor women who became prostitutes faced the additional danger of syphilitic blindness” (Loewen 2007: 14).

I can't help but include this example: it seems that Woodrow Wilson, for his part, was also sanitized. There is usually no mention of his fifteen(!) interventions in Latin America, the secret aid he sent to the anti-communist "Whites" in the Russian Civil War, or his racism. Wilson was, by the way, very racist.

 

 

How does heroification lead to blind nationalism and mindless optimism? We'll have to leave the optimism part for a later lesson, but here's what we can say about nationalism. If you regularly portray your own people as good and courageous and strong and committed to freedom, etc., then you are inculcating in young minds the idea that their side is always right; it's other peoples who make mistakes, commit atrocities, and in general do bad things. Your side is good; everyone else is, at the very least, not as good. This is a recipe for nationalism and patriotism—just as Plato prescribed in Republic.

How does this patriotism get taught? Textbook authors use interesting techniques when sanitizing historical figures with immoral or "un-American" positions (like socialism) in their track records. Omission (i.e., leaving parts out) is a commonly used technique. Smiley (2014) argues that the last year of the life of Martin Luther King, Jr. is often ignored, yet this was his most militant and radical period. Consider this excerpt from his Beyond Vietnam speech:

“As I have walked among the desperate, rejected, and angry young men, I have told them that Molotov cocktails and rifles would not solve their problems… But they asked, and rightly so, “What about Vietnam?” They asked if our own nation wasn’t using massive doses of violence to solve its problems, to bring about the changes it wanted... I knew that I could never again raise my voice against the violence of the oppressed in the ghettos without having first spoken clearly to the greatest purveyor of violence in the world today: my own government. For the sake of those boys, for the sake of this government, for the sake of the hundreds of thousands trembling under our violence, I cannot be silent.”

Other times, textbook authors tell only half of the story. For example, with regard to the Wilson administration's interventions in Mexico, textbook authors "identify Wilson as ordering our forces to withdraw, but nobody is specified as having ordered them in!" (Loewen 2007: 18). I found this pretty jaw-dropping. Wilson somehow gets credit (sorta) for ordering the troops out of Mexico even though he's the one who ordered them in in the first place. It boggles the mind.

 

 

Here are two more examples. Oversimplification is a strategy used to teach moral lessons. For example, Helen Keller, once she is sanitized, is a hero who exemplifies the virtues of self-help and hard work, thereby dispelling the notion that opportunity might be unequal in America (see Loewen 2007: 27). (Stay tuned.) As one last example, consider crafty wording. Loewen explains:

“Words are important… In 1823 Chief Justice John Marshall of the U.S. Supreme Court decreed that Cherokees had certain rights to their land in Georgia by dint of their ‘occupancy’ but that whites had superior rights owing to their ‘discovery.’ How American Indians managed to occupy Georgia without having previously discovered it Marshall neglected to explain” (Loewen 2007: 65).

Don't get me wrong. I'm not necessarily trying to say that teaching blind patriotism is a bad idea. I, as is my custom, am merely raising some questions. Perhaps all nations need national myths that foster a sense of cohesion and patriotism? Perhaps nations that don’t utilize their educational institutions to perform this function dissolve? Perhaps something like this has been occurring in the American system? Perhaps Plato's ideas weren't all that bad?

 

 


 

Do Stuff

  • Read from 386a-400c (p. 66-83) of Republic.

 

To be continued...

 

FYI

Suggested Reading: Edward Bernays, The Marketing of National Policies: A Study of War Propaganda

TL;DR: Eudaimonia, How to Control What People Do | Propaganda - EDWARD BERNAYS | Animated Book Summary

Supplemental Material—

Advanced Material—

  • Book: Edward Bernays, Propaganda

    • Note: Only the first six chapters are available on this link.

Related Material—

 

Footnotes

1. Mario Vargas Llosa concludes his 2019 Tiempos Recios with his thoughts on how counterproductive Washington's coup d'état against Árbenz was, along with the subsequent rebellions and assassinations. It led to skyrocketing anti-Americanism across Latin America. It impelled many in Cuba to turn radical, eventually leading to Fidel Castro's victory. It made it clear to revolutionaries that conquered armies had to be annihilated, as Che Guevara himself oversaw the executions within the defeated Cuban military. Most importantly, the overthrow is what made revolutionaries across the world feel compelled to ally explicitly with the Soviet Union—so that Washington would think twice about attempting to interfere in their affairs.

 

 

The One Great Thing (Pt. II)

 

 

Man is born egotistical, a result of the conditioning of nature. Nature fills us with instincts; it is education that fills us with virtues.

~Fidel Castro

Empirical and non-empirical

 

A young scientist in training

It's far too easy to get bogged down in endless debate if you miss the following important distinction: the distinction between empirical and non-empirical claims. So let's learn it today. Put simply, an empirical claim is one that makes a statement about the world; these kinds of claims are typically verified either through the senses (i.e., you just check for yourself) or through systematic observation/experimentation (i.e., through the process of science). Empirical claims usually take the form of a descriptive sentence, like "Misha is over six feet tall"—a statement that can be either true or false, depending on how tall Misha actually is. If you think about it, you'll realize that it's really easy to be wrong about a statement like "Misha is over six feet tall." Maybe you just didn't assess Misha's height very well when you met her. In this way, an empirical claim is unlike, say, a value judgment. If you were to say "I think it's good to be over six feet tall", then you are expressing a value claim—a claim about your preferences and values. If you think about it, it's really hard to be wrong about value judgments, at least at the time you express them (see my lesson titled The Trolley (Pt. II)). How can you be wrong about whether you think being tall is a good thing?

Sometimes, though, value claims are suspiciously close to empirical claims. For example, the sentence we just considered ("It is good to be over six feet tall") has a possible interpretation that reads something like "It is good to be over six feet tall (since you get more social benefits)". It may well be true that tall people get more social benefits, such as more attention, more respect, and even higher wages. Those, as I hope you can tell, are empirical matters. If we really are claiming that tall people get more social benefits, we'd have to check that claim by performing a systematic investigation into the lives of tall people. You can't (or shouldn't) just make empirical claims without doing due diligence and checking to see whether they're actually true.

So, value judgments about your own personal preferences are one type of non-empirical claim. There are plenty of others. For example, a claim about logical consistency is not commonly conceived of as an empirical matter. It calls instead for a conceptual investigation: an investigation into whether a particular concept applies to something. For example, one might wonder whether the concept of logical consistency applies to the following set containing two sentences:

  • If Aristo is home, then Blippo respects himself.
  • Either Aristo is not home or Blippo respects himself.

It's probably the case that no one will ever ask you whether these sentences are logically consistent (unless you have the misfortune of taking my logic class). However, if someone were to ask about the consistency of these sentences, you wouldn't have to check the world or read a science journal to find out whether the set is consistent. You simply have to consider whether it is possible for both sentences to be true at the same time. In other words, you have to check to see whether the concept of consistency applies to this set of sentences. It's a type of investigation for sure, but it's not of the empirical variety.1
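
In fact, this sort of conceptual check is so mechanical that a computer can run it. Here is a minimal sketch in Python (my own illustration): represent "Aristo is home" and "Blippo respects himself" as Boolean variables and test every combination of truth values.

```python
from itertools import product

def conditional(a, b):
    # "If Aristo is home, then Blippo respects himself" (A -> B)
    return (not a) or b

def disjunction(a, b):
    # "Either Aristo is not home or Blippo respects himself" (~A or B)
    return (not a) or b

rows = list(product([True, False], repeat=2))

# Consistent: some assignment makes both sentences true at once.
print(any(conditional(a, b) and disjunction(a, b) for a, b in rows))  # True

# Stronger still: the two sentences agree on every row (see footnote 1).
print(all(conditional(a, b) == disjunction(a, b) for a, b in rows))   # True
```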

So there are empirical issues, which can be checked with your five senses or through systematic observation, and non-empirical issues, such as value judgments ("Heavy metal is the best genre of music!") and conceptual investigations (like checking whether a set of sentences is consistent). Here's the important lesson: keep track of which is which! It's happened to me before that the person I was conversing with didn't seem to recognize that some issues can only be resolved through empirical approaches while others require something else—say, a conceptual investigation. Not keeping track of which is which leads to the hollow enterprise of attempting to settle an empirical claim through conceptual approaches. (Silly philosophers!) Of course, there are also some disagreements that have no resolution. If one person prefers jazz and another prefers heavy metal and they're both claiming that their preferred genre is "the best", then it's not clear how that could be resolved definitively.

 

Do Stuff

The following mini "debates" are from Lyons and Ward (2018: 346-47). I like these prompts because they show how convoluted disagreements can be. For example, consider this deliciously confused debate:

A: I think abortion ought to be illegal, at least late-term ones.

B: Why would you think that?

A: Because there's no significant difference between an 8-month fetus, for example, and a newborn baby, and it's clearly immoral to kill a newborn baby.

 

Newborn Homo sapiens

What issues are arising in this debate? Well, clearly the main point of the discussion is a conceptual investigation: the interlocutors (i.e., the speakers) are discussing whether the concept of moral wrongness applies to late-term abortions. I say that this is a conceptual investigation because, as far as I know, there's no way to track moral wrongness with your five senses or with an experiment. You can't see the property of moral wrongness itself; you only recognize that the concept of moral wrongness applies to some things, like murder. You never see the moral wrongness itself, since it is (it seems) not a physical thing that can be seen(!). However, within the debate there's a comparison between an 8-month fetus and a newborn baby. To be honest, I've no idea how similar an 8-month fetus is to a newborn baby; I'm no expert in gestation. However, I do know that I could find out how similar those two are by consulting a specialist in obstetrics and gynecology. These specialists have spent years learning the knowledge that medical practitioners have gathered about pregnancy and childbirth. In other words, these specialists have amassed empirical knowledge on pregnancy and childbirth. Since Speaker A is using this analogy to fuel his argument, it seems we must assess his evidence to see whether his argument is sound. In other words, he is using an empirical claim to motivate his conceptual conclusion. Moreover, if you really want to pick some nits, Speaker A is assuming that there is a clear-cut way of moving from empirical facts to conceptual conclusions. In other words, A is assuming that anyone who believes there's no difference between an 8-month fetus and a newborn baby would naturally also believe that late-term abortion is wrong. This is not as clear as A might think. So, in summary, this debate features both empirical and conceptual investigations.

So here's your assignment. Choose a mini "debate" from the list below. Then discuss what types of investigation are underway. Are they empirical? Are they conceptual? Anything else? Lastly, try to resolve these investigations. In other words, if the issue is empirical, look for the answer; if it's a conceptual investigation, try to figure out whether the concept is applicable in the given situation or not. If you can't resolve the issues, at least point out who or what might help. For example, in my discussion above, I didn't really come to a conclusion on how similar an 8-month fetus is to a newborn baby, but I did point out who might be able to help. Type up your answer (which should be between 250 and 500 words) and submit it in Quiz 1.7+.

Here are the prompts:

  • People keep asking me, "How do you know you'll make a great president?" And I tell 'em, "I will be a great president. It's just a fact."
  • A: You can't legally say, "It would be a better world if the president were dead." That's a threat, and it's illegal to threaten the president.
    B: That's not a threat; it's a statement of fact. What do you mean by "threat"?
  • A: There's excellent scientific evidence that there's no causal connection between vaccination and autism.
    B: You don't know. You're not a scientist.

Argument Extraction

 

 

 

Teaching Submission

Plato's curriculum for the Guardians, auxiliaries, and citizens—a curriculum that now includes the myth of the metals—appears to be designed to keep each element of the kallipolis firmly in its place. The Guardians do the ruling, the auxiliaries do the enforcing, and the citizens perform their individual roles and duties. Some students have expressed their horror at the very notion of "programming" preferences into young people so that they fall in line. However, as we saw in the last lesson, some thinkers (e.g., Loewen 2007) believe that the American high school history curriculum has been doing just that. In particular, we saw Loewen argue that history, as it is being taught, instills in students blind nationalism and mindless optimism. Here's another contentious point that he makes: the history curriculum reinforces social stratification, since it is composed primarily of anti-working-class, pro-boss perspectives.

 

Lies My Teacher Told Me

Ok, so what is Loewen really saying? Well, Loewen is arguing that the history curriculum teaches students to be ok with the massive wealth and income inequality seen in the United States—to not really question it. For example, Loewen (2007: 53) claims that the way the Columbus story is taught, in which Columbus suppresses his men's near-mutiny, reinforces the stereotype "that those who direct social enterprises are more intelligent than those nearer the bottom." Now, I didn't learn the Columbus story this way (since I went to primary school in a different country), but, per Loewen's review of the literature, young students are taught that Columbus' crew wanted to turn back and that it was only due to the courage and resoluteness of Columbus that they stayed on course. Of course, this was mighty fortunate for Columbus and his crew (although not for the Native Americans), since they were able to "discover" a new continent. The way this story is told, argues Loewen, stresses that the crew should stay in their place and just obey their commander.

Here's another example. As discussed previously, Loewen (2007, chapter 1) shows that the story of Helen Keller told in history textbooks usually has a glaring omission: her radical socialism. There is typically no mention whatsoever of the political activities that occupied Keller from young adulthood until late in life. (Keller died in her 80s, by the way.) Given that history textbooks ignore the majority of her life, the way her story is told reinforces the notion that the most important thing she ever did was overcome her disabilities, not protest against her government. Again, the (implied) message is: overcoming your disabilities is good, but agitating and protesting against your government is not.

Loewen’s most powerful example is probably the fact that textbooks omit or regularly downplay the role of labor movements. I can attest to this. Almost all that I know about the labor movement I've learned outside of the classroom (and I went to an excellent college preparatory school). Loewen summarizes:

“[T]he most recent [labor] event mentioned in most books is the Taft-Hartley Act of sixty years ago… With such omissions, textbook authors can construe labor history as something that happened long ago, like slavery, and that, like slavery, was corrected long ago” (Loewen 2007: 205).

Again, I'm not making the case that we should teach the history of labor so that we can start a socialist movement. I can honestly say that I've never voted for the socialist party of either country in which I'm a citizen: the Socialist Party (USA) and the Partido Mexicano Socialista (México). What I am saying is this: we can agree with Loewen that the history curriculum appears to have some gaps that conveniently endorse one viewpoint over another. Plato would be proud.

And if you're still not convinced, here's some Food for Thought that will outrage roughly half of you...

 

 

 

Sidebar

One related question that is instructive when discussing the empirical/non-empirical distinction is the following: Why is there such drastic income inequality? This question lends itself to a variety of potential answers. As you saw in the Food for Thought, allegiance to a political party corresponds with the kind of answer you are likely to give. Let's consider one of these potential answers: some jobs get better pay, and only some people are clever enough to go for those high-paying jobs. Notice that this potential answer makes two claims—both of them empirical. The first states that only certain job types are associated with wealth at the level of "the one percent". The second states that a certain cleverness is required both for recognizing these jobs and for actually entering into them.

So let's attempt to assess these empirical claims. First off, what are the highest-paid jobs? In other words, what are the jobs of the 1%? According to a 2012 study by Bakija, Cole, and Heim, the job types most represented in the one percent are managerial positions (primarily in investment companies), lawyers (primarily those working for Wall Street), and medical physicians (primarily those with private practices). Some notable mentions are CEOs (of investment companies), supervisors (of investment companies), financial specialists, and those in financial services sales.

The second part of the potential answer makes a claim about what the one percent have in common with each other (besides wealth), namely that they're clever. So as not to bias our inquiry, let's think about what the one percent have in common more generally. In particular, let's answer the following question: What do the one percent have in common with each other (besides wealth)? Well, a New York Times article (using census data) found...

“The 1 percent are family-oriented, nearly twice as likely to be married as everyone else. They have more children, but not more cars, than middle- and upper-middle-class families. For them, education is critical. A vast majority of 1 percenters graduated from college, and in a whopping 27 percent of couples, both partners have advanced degrees.”

 

Mishel's graph

So, it does look like the one percent go to college. This is all well and good. But as we saw in Three Red Flares, it's not clear that college actually imparts any lasting wisdom on graduates. In line with Caplan, consider that Lawrence Mishel, using data and methods from Piketty and Saez (2013), claims that the education premium cannot account for the wealth of the one percent. In particular, if you disaggregate the college wage premium and the income of the one percent and then plot them on a graph, you can see that they are not well correlated. Moreover, if you look at the list provided by Bakija, Cole, and Heim, it's pretty clear that the financial sector is well represented in the one percent—something the NYTimes article seems to overlook. (Manufacturing consent?) Clearly, being in finance has something to do with extreme wealth.

Since investment managers are the most likely to appear in the one percent, here's one more question to consider: Can we actually measure the effect of a “highly-skilled” investment manager? The short answer is no. The longer answer is this. There are no unambiguous factors (as opposed to sheer luck) that we can attribute to an investment manager and that have enough predictive power to tell us which managers will do well. For example, in one study, researchers attempted to measure the relationship between the success of a company and the quality of the relevant CEO. The result: a generous estimate of the correlation found was .3. A correlation of .3 is not very good. Nobel prize winning psychologist Daniel Kahneman explains:

“A correlation of .3 implies that you would find the stronger CEO leading the stronger firm in about 60% of the pairs—an improvement of a mere 10 percentage points over random guessing, hardly grist for the hero worship of CEO’s we so often witness” (Kahneman 2011: 205; see also Bertrand and Schoar 2003 and Bloom and Van Reenen 2007).

In other words, Kahneman is saying that if you take the best management theories and apply them to the real world, your predictions will be only a little better than chance, which is not very good at all. So why is there a kind of cult of CEO worship in the USA? Well, if you believe Chomsky and Herman, it has something to do with the media imbuing in you preferences that suit the interests of the elite. If you believe Loewen, your history curriculum taught you to favor people in positions of power and disfavor the working class. And if you ask me, well... I know nobody asked for my opinion, but I think it has something to do with the Cognitive Bias of the Day.
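(Before we get to that bias, a brief aside for the numerically inclined. You can check Kahneman's 60% figure yourself with a quick simulation. The sketch below is mine, not Kahneman's: I assume "CEO quality" and "firm strength" are drawn from a bivariate normal distribution with a correlation of .3, and then I count how often the firm with the stronger CEO is also the stronger firm.)

```python
# A quick check of Kahneman's claim (my own sketch, not from his book): with a
# correlation of .3, the stronger CEO leads the stronger firm in ~60% of pairs.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
cov = [[1.0, 0.3], [0.3, 1.0]]  # correlation of .3 between CEO quality and firm strength

firm_a = rng.multivariate_normal([0.0, 0.0], cov, size=n)  # columns: CEO quality, firm strength
firm_b = rng.multivariate_normal([0.0, 0.0], cov, size=n)

# In what fraction of A-vs-B pairs does the firm with the better CEO also perform better?
concordant = ((firm_a[:, 0] > firm_b[:, 0]) == (firm_a[:, 1] > firm_b[:, 1])).mean()
print(round(concordant, 3))  # ~0.597 on these assumptions
```

For the mathematically inclined, the exact answer under these assumptions is 1/2 + arcsin(.3)/π ≈ .597, so Kahneman's 60% checks out.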

 

 

 


 

Do Stuff

  • Read from 400c-417b (p. 83-102) of Republic.

 


 

Executive Summary

  • Public relations techniques have been used by both companies and governments to persuade lawmakers and citizens of the agreeableness of a particular policy.

  • Loewen (2007) argues that the history curriculum has been designed to teach students blind nationalism and mindless optimism.

  • Critical thinkers distinguish between empirical claims and non-empirical claims.

  • Loewen (2007) also argues that the history curriculum imparts an anti-working class ethos to students so that they end up with a preference for those in a position of power.

 


 

FYI

Suggested Reading: Catalin Partenie, Introduction to Plato’s Myths

  • Note: Most relevant are pages 6-10.

TL;DR: TED-Ed, Plato’s best (and worst) ideas

Supplemental Material—

Advanced Material—

Related Material—

 

Footnotes

1. In case you're dying to know, not only is this set of sentences consistent but the sentences are actually equivalent!

 

 

Stability

 

 

Whoever is careless with the truth in small matters cannot be trusted with important matters.

~Albert Einstein

Important Concepts

 

Knowledge and opinion

Last time, we considered the distinction between empirical claims and non-empirical claims. The main point was to make sure you keep track of what is an empirical claim and what is not, since empirical claims have their own ways of being assessed: either through direct sensory experience or through the process of science. Today we will consider a closely-related distinction: that between knowledge and opinion. I tend to save the finer distinctions of the study of knowledge (known formally as epistemology) for my PHIL 101 course. However, what we can say is this: it is generally accepted that you can only have knowledge about claims that are either true or false. Claims that are either true or false, by the way, have a special name; they're said to be truth-functional. Truth-functional sentences seem to be best expressed as declarative sentences, i.e., sentences that describe some state of affairs, like "Misha is over six feet tall" and "Bianca double-majored in psychology and biology". This is in contrast with sentences that take the form of questions or commands; questions and commands seem to not be truth-functional. In other words, it seems to be a misapplication of the concept of truth to label a question as true. For example, if I were to ask you, "Where is your homework?" and you were to respond with "The sentence you just said is true", then everyone listening would suspect that you don't know how to appropriately use the label of truth. It just seems like you don't use that label on questions. One other thing! It's easier to discuss these truth-functional declarative sentences if we give a name to the thought that is expressed by these sentences. Philosophers have done just that. The thought behind a declarative sentence is called a proposition. In short, it appears that you can only know propositions, which come in the form of declarative sentences (since only declarative sentences are truth-functional).

Avant-garde jazz trio The Bad Plus
Avant-garde jazz trio
The Bad Plus.

On the other hand, there are sentences that hold content which is not the kind of thing you can know, since the claim embedded in the content doesn't seem to be truth-functional. For example, if you were to say "Polyrhythmic jazz is the best kind of music ever", then it's pretty clear you're expressing an opinion. Opinions like these can sometimes be interpreted as propositions. In other words, one might interpret that sentence as saying that there's such a thing as the property of being the best kind of music ever, just like some objects have the property of being blue or the property of weighing more than five pounds, and that polyrhythmic jazz has this property. But of course, it seems like the property of being the best kind of music ever is completely made up. It certainly doesn't seem to exist in nature independent of the minds of humans; it is mind-dependent. So, even if someone says, "I know that polyrhythmic jazz is the best kind of music ever", we're not really taking them to mean that they have some sort of knowledge about what the best kind of music ever is. Instead, we'll interpret them as expressing that they feel really good about saying that polyrhythmic jazz is pretty awesome.1

So there are claims that are truth-functional, and these (if they're actually true) seem to have a truth-maker—they correspond to the world in the sense that there is something that exists in the world that makes the proposition true. You can actually know these. Other claims appear to not be truth-functional, but are instead expressions of emotion, or personal preferences, or of strongly-held convictions. This second group we can call opinions. Here's the tricky part, though. Where do we draw the line?

Firestone
Randy Firestone.

In his book Critical Thinking and Persuasive Argumentation, Randy Firestone argues that the only facts that you can know are those that are ultimately rooted in sensory evidence, everything from that which you checked with your own sensory organs to the products of the systematic observations of communities of scientists—and that ain't nothing. But he makes the case that claims in ethics and metaphysics are simply beyond what any rational person can call "facts". For example, how can one demonstrate that the sentence "Spanking your children is morally abhorrent" is a sentence that actually has a truth-maker somewhere in the world? This is not to say that spanking children is permissible, by the way. What Firestone is saying is that there's no sensory information that could confirm (or disconfirm) the sentence "Spanking your children is morally abhorrent". So, Firestone argues, most ethical claims are best construed as opinions, true only relative to the person saying them.2

On the other hand, in his lectures on critical thinking, Mark Balaguer appears to not be comfortable drawing the line between facts and opinion where Firestone drew it. Balaguer believes that the view that some ethical claims are best construed as opinions is "controversial" at best (see chapter 9 of Balaguer 2016). This is to say that it is not clear that it is true. Balaguer instead opts for not discussing ethical claims at all, leaving it up to the reader to decide for themselves whether moral judgments are best thought of as truth-functional claims or as expressions of emotion and subjective preference.3

So drawing the demarcation line between fact and opinion is hard. Nonetheless, critical thinkers should note that a. there are some claims that are clearly truth-functional, and b. there are some claims that are clearly opinions. In the middle you might run into some problems, of course. My advice, then, is to try to always steer your investigations towards what is clearly truth-functional. In other words, if you are getting bogged down in metaphysical and/or ethical debate, then just steer the conversation towards what can actually be verified through the senses. You might not end up with the same questions you started with, but the pivot away from claims that are not empirically-tractable will pay dividends.4

Argument Extraction

 

 

 

Pressure release

Occupy Protesters
Occupy protesters.

Should we take Plato's apparent advice seriously? Should we try to unify society into a coherent whole as much as possible? Should we emphasize our similarities rather than our differences? If we do so, will there be greater stability?

With regards to wealth, it does seem like wealth inequality is driving a wedge in American society. This is part of the reason why tens of thousands participated in the Occupy Wall Street protests which started in September of 2011. Their slogan "We are the 99%" was a rallying cry to all those who felt that income and wealth inequality had grown to be unsustainable and unacceptable. As it turns out, it was not only New Yorkers who felt this way. Occupy Wall Street grew into the Occupy Movement more generally and dozens of cities around the country broke out in protest. In Plato's words, the USA had become two cities: one composed of the one percent and the other made up of the rest. And it was clearly the case that tensions were high. It didn't matter that the sitting president was a Democrat. It didn't matter that there was no real plan being given by the movement for how to move forward. According to one survey, a sizable chunk (35%) of Americans supported the movement—property damage and all. A country divided indeed. Here's some Food for thought...

 

 

Perhaps there's something that can be done, however, as a sort of a "pressure release". Some political candidates have recently advocated for a universal basic income, where everyone gets enough to meet some basic needs, to address income inequality. That's one option. Presumably, having a guaranteed income will provide a safety net so that no one lives in abject poverty and feels the need to protest for months on end. Here's another idea. Kai-fu Lee (2018, chapter 9) lays out his vision of a social investment stipend. A universal basic income, Lee argues, will only handle bare minimum necessities but will do nothing to assuage the loss of meaning and social cohesion that will come from a jobless economy as more and more jobs get automated and/or otherwise robotized. (Stay tuned.) This is where Lee's social investment stipend comes in. These stipends can be awarded to compassionate healthcare workers, teachers, artists, students who record oral histories from the elderly, service-sector workers, botanists who explain indigenous flora and fauna to visitors, etc. By promoting and raising the social status of those who promote social cohesion and emphasize human empathy, Lee argues, we can build an empathy-based, post-capitalist economy. Plato might like that one. Or else... What's the alternative?

MLK Jr
Martin Luther King, Jr.

There's another dimension of American society that seems to be the source of tension, and just talking about this one can get me in trouble(!). Let's be honest. Racial relations have never been smooth in the USA. It's been less than rosy forever, basically. I hope we can agree on that. What many are not agreeing on now, however, is how to move forward. One civil rights leader from the middle of the 20th century made the case that who we are is fundamentally rooted in our character and in our actions. We are more than just our skin color—and perhaps we can add gender, sexual orientation, and whether or not we are persons with a disability. This civil rights leader is, of course, Martin Luther King Jr., and you probably remember this line from his most famous speech. "I have a dream that my four little children will one day live in a nation where they will not be judged by the color of their skin but by the content of their character." Let's call this the character view: who you are is a function of your character. However, there's also been a current of thought (also going back to the 20th century) that stresses that who we are is fundamentally linked to our race, gender, sexual orientation, etc. Some (e.g., Murray 2019) call this approach, along with the resulting political mobilization, identity politics.

If we are taking Plato's advice seriously, then identity politics definitely has to go. This is because identity politics, by its very nature, atomizes society into interest groups (e.g., the LGBTQ+ community, women, African Americans, etc.). This is clearly the opposite of "one city". You might say that these interest groups are marginalized and oppressed, and thus need some sort of cohesive movement to help them overcome their social chains. But some (e.g., Murray 2019, Mac Donald 2018) argue that identity politics is not only ineffective, but that it has been causing discord at our universities and lowering the quality of the content in our shared intellectual spaces. Note that beginning in 2014, there was an increase in student demands that controversial speakers be disinvited from speaking at their university (even though, as Murray reminds us, attendance was optional). There were also more instances of students shouting down speakers, and there were demands for "safe spaces", "trigger warnings", and the regulation of "violent speech" (Lukianoff and Haidt 2019). There were student takeovers of some colleges. Some teachers were fired while others felt compelled to resign (along with their spouses). It's even the case that some professors were injured.

Allison Stanger
Professor of Political
Science Allison Stanger,
who suffered a neck
injury during a protest.

What's going on here? To some, this is a healthy expression of our first amendment rights. However, thinkers like the ones mentioned in the previous paragraph are alarmed that the university is unable to perform its function, and they blame it on the growing role that identity politics plays in the minds of students. For example, British author Douglas Murray is concerned that the rise of identity politics has distorted not only education but even the media. In his The Madness of Crowds, Murray reminds us of what utilitarian philosopher and member of Parliament John Stuart Mill said in On Liberty: that we should listen to the views of others, even if we disagree, because they might be partially right (and we can thus learn from them). Obviously, though, shutting down speakers does not allow students to learn anything from speakers (who probably have at least some good points that we can all consider). Furthermore, Murray makes the case that identity politics is making our media environment (even) less informative. Murray argues that the media panders to those who advocate identity politics, making the news less informative as a whole. One domain that he focuses on is how even inconsequential news about the LGBTQ+ community makes headline news, bumping other news stories from the broadcast. One telling example is when a Japanese c-list celebrity came out as gay, and this news event overshadowed a true tragedy that was occurring at the same time: a natural disaster in Indonesia which killed thousands. It is important to note that Murray, who is himself gay, is not attempting to minimize the plight of the LGBTQ+ community; but he does want it to be put in perspective. Japan, he argues, is not a particularly anti-gay country; Japanese people appear to be generally apathetic about the whole issue. In other words, ideally, the news story about a gay actor (that most people don't know about) shouldn't bump the actual news of a natural disaster elsewhere in Asia. But it did. And it is this kind of thinking, Murray argues, that is keeping some universities and the media from functioning properly.

The Coddling of the American Mind

Psychologist Jonathan Haidt's explanation is a little more nuanced. In a recent interview, Haidt first noted that social media plays an important role in the psychological lives of young people. He points out how disanalogous social media is from normal social-communicative venues (i.e., regular communication). On social media, you are publishing for the benefit of a non-general audience (whose members are selected via algorithms for like-mindedness, are more likely to be the kind of people who are on social media long enough to see your post, and give immediate approval or disapproval). As such, your interactions are inauthentic and forced. You say things merely to get approval. This creates a spiral into normative poles. In other words, you are more likely to simply "perform" and say things exclusively for your base—the left, the right, whatever.

It gets more complicated, though. Recall the lesson titled A Certain Sort of Story. In it, we learned that the only currency in social media is attention. So, any subculture that arises out of social media is going to see attention as an intrinsic good, as opposed to, say, truth. This can be seen in call-out culture. A typical strategy of call-out culture is to hinge on one word (or phrase) and interpret it in the worst possible way, not taking intent into consideration at all. The person who calls out the “offender” gets the prestige for identifying a bigot or racist or whatever. Note that this doesn’t require an assessment or discussion of any actual offense; the person who calls out the offender gets credit for every call-out, regardless of whether or not they called out someone who is actually racist or bigoted or whatever. It's like getting paid for every bullet you shoot, not every target you hit. This is all, of course, usually accompanied by demands to have the offender fired from their post. Haidt argues that this practice, far from protecting disenfranchised groups, actually makes one more susceptible to feeling marginalized and targeted; it gives negative emotional power to words that would otherwise be benign. It makes you feel like the world is more hostile than it really is. This is part of the reason why Gen-Z has higher rates of depression (although paranoid parenting probably didn't help; see Levine and Levine 2016). See Lukianoff and Haidt's The Coddling of the American Mind for more.

Is it true that the kind of thinking that fuels identity politics (enabled by social media) is making our universities dysfunctional? Is the function of a university social justice or the pursuit of truth? Does identity politics actually move us closer to social justice or does it just divide us even more? What is to be done?

 

 


 

Do Stuff

  • Read from 419a-430c (p. 103-115) of Republic.

 


 

Executive Summary

  • Although the demarcation point is fuzzy, it's important to distinguish between facts (which are truth-functional) and opinions (which are expressions of emotion or subjective preference).

  • In today's reading from Republic, the characters note that cities that are not well-governed are fractious, their constituent parts warring with each other. They also note that in their perfect city, they have found the virtues of wisdom and courage—wisdom in the Guardians and courage in the auxiliaries.

  • The dialogue today prompted us to consider aspects of society that might lead to internal divisions, such as massive wealth/income inequality and divisive politics.

 

FYI

Suggested reading: Jonathan Haidt and Tobias Rose-Stockwell, The Dark Psychology of Social Networks

TL;DR: Jonathan Haidt, Lecture on The Coddling of the American Mind

 

Footnotes

1. Polyrhythmic jazz is pretty awesome, by the way. Check out As This Moment Slips Away by The Bad Plus.

2. Notice that I said "most" when describing Firestone's views on ethics. He actually has a mixed view in the sense that he believes that some moral judgments are objectively true, like "Murder is wrong", but others are only subjectively true, which is a form of relativism.

3. I side more so with Firestone than with Balaguer. Following the historical work of Alasdair MacIntyre (2003, 2013), I tend to agree that phrases like "morally right" and "morally wrong" don't seem to mean anything at all anymore. There used to be a fixed meaning, argues MacIntyre, but that was lost ages ago, and we now live in a kind of linguistic anarchy when it comes to moral terms. This is not to say that I'm a relativist, as in Firestone's mixed view (see Footnote 2). Instead, philosophers generally refer to my view as radical moral skepticism; the interested student should take my PHIL 103 course.

4. Some students ask what the difference is between the distinction between empirical and non-empirical and the distinction between facts and opinions. The long and short of it is that some facts are not empirical. This is because some sentences that are clearly truth-functional are true not by virtue of anything that the sentence corresponds to in the real world, but in virtue of the logical words inside the claim. In other words, there are some true sentences that are just logically true, like "Either I am a banana or I am not a banana". This sentence might not be terribly enlightening but, if you think about it, it's impossible for this sentence to be false. That, obviously, means it's true. More importantly for our purposes, however, it is true regardless of what the world is like. So, it is a true sentence whose truth does not hinge on any empirical observation. Thus, the distinctions between the empirical/non-empirical and facts/opinions are not one-to-one; they are two different distinctions.

 

 

Fragility

 

 

Don’t think outside the box.
Find the box.

~Andrew Hunt and David Thomas

Lazy thinking

The topic for discussion in this first half of the lesson is lazy thinking. This isn't exactly a technical term that you'll find in the literature from the mind sciences, but it's a helpful label that we will use in this course. Before explaining what it is, here's a little context. On occasion, we are faced with a problem and we want to find a solution. So we let the stuff between our ears get to work. Here's the problem, though: the cognitive capacities of the mind seem to sometimes be at odds with each other, as Plato notes in today's reading. There is one part of the mind that looks for quick and easy answers but isn't terribly concerned about the quality of the answers. There is another part of the mind that can do some serious thinking, though, and this part of the mind helps to keep you out of the traps that the first part of the mind easily falls into. However, this more rigorous part of the mind is unfortunately extremely lazy; it won't work unless you really force it to (see Kahneman 2011).1

Usually you won't notice when the different parts of your mind come to different conclusions, compete with each other, and ultimately resolve their dispute and engage in some particular course of action, since this is all happening under the hood and outside of your subjective experience (Nisbett and Wilson 1977). Typically, only the course of action that is decided on by your non-conscious mental processes is presented to consciousness (Wegner 2018). If you do feel anything at all, it'll be a feeling of certainty, a feeling that you know what to do now—as opposed to the feeling of not knowing what to do (Burton 2009). This is all very abstract so I'll give you an example from Nobel prize-winning psychologist Daniel Kahneman. As you read the example, try to actually find the answer—really, though.

 

A bat and ball together cost $1.10. The bat costs $1.00 more than the ball.

How much does the ball cost?

 

What did you guess? Did you say ten cents? Well, that can't be right! Because if the ball costs ten cents and the bat costs a dollar more than the ball, then the bat alone costs $1.10(!). So, together the bat and the ball would cost $1.20, which is not $1.10. Do the math this time. What's the real price of the ball? Convince yourself that the ball really costs five cents. If the bat costs a dollar more than the ball, then the bat costs $1.05. And, of course, $1.05 plus $0.05 is $1.10(!).
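By the way, if you'd rather let a machine do the algebra, here's a tiny sketch using Python's sympy library (the variable names are mine):

```python
# Solve the bat-and-ball problem as a pair of equations:
# bat + ball = 1.10 and bat = ball + 1.00.
from sympy import Eq, solve, symbols

bat, ball = symbols("bat ball")
print(solve([Eq(bat + ball, 1.10), Eq(bat, ball + 1.00)], [bat, ball]))
# -> bat = 1.05, ball = 0.05
```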

 

If you are like most people, not only did you get the question wrong at first but it took you a minute to convince yourself that the real price was five cents. This is because that one part of your mind, the one that looks for quick and easy answers, worked out the problem quickly (although incorrectly) and presented it to your consciousness—your subjective experience. Once it's there in your consciousness, it's hard to break free from believing that is the correct answer—confirmation bias. The more rigorous part of your mind eventually did get the right answer. But, because it's lazy, you really had to think about it before you could activate this part of the mind so that it could get to work on finding the right answer. As you can tell, lots of information-processing (including the kind that comes to inaccurate conclusions) happens outside of conscious experience and you're none the wiser (Nisbett and Wilson 1977). What came to you first wasn't any of the non-conscious mental processing but only the (erroneous) conclusion (Wegner 2018). Afterward, when you actually worked out the problem, you finally felt a real feeling of certainty (Burton 2009). The mind is a tricky thing.

And so, with all that setup out of the way, this is what lazy thinking is: when you are satisfied with the quick and easy solution without making sure you're actually engaging the more rigorous information-processing parts of your mind. Now you don't need me to tell you that lazy thinking is rampant. You know it! Although there's no way to actually check this, I'm ok with wagering that most people most of the time go with the easy and quick conclusions that came to their mind (as opposed to actually engaging in rigorous, time-consuming, cognitively-demanding information-processing). I'll go further. I feel that people go out of their way so that they can keep their quick and easy solution. They'll literally push away information that gets in the way of them being able to keep that precious first conclusion that came to mind. (God forbid they actually have to think!) I'll give you an example. It has to do with the Informal Fallacy of the Day:

 

 

So strawman arguments are not the way you want to win arguments, because you're not even addressing the actual line of reasoning of your opponent—only a distorted pseudo-argument. (This does you no favors, obviously.) However, in some settings, it is really hard to engage the more rigorous part of your mind. One particular such setting is in politically charged conversations, especially in the hyper-polarized American political environment (Mason 2018). Emotions run too high to be able to calmly wait while the rigorous (but lazy) part of your mind figures out what the real argument being given is. This is what I've noticed (in the past) when discussing the topics of the last lesson: extreme wealth/income inequality and identity politics. These topics were hand-picked (by yours truly) to go against the grain of deeply-held convictions in conservatives and liberals, respectively. As we learned in the Food for thought in The One Great Thing (Pt. II), 55% of Republicans blame the poor for their poverty (Loewen 2007: 209). Similarly, "identity politics" (as Murray 2019 conceives of it) is primarily on the liberal side of things. So, by discussing these two issues, I hope I was able to rev up your feelings of outrage and hence disallow you from thinking straight about them. I did this because there's an important lesson in falling for one of my traps.

On the conservative side of things, I've heard arguments against universal basic income and the social investment stipend which focus on how the giving away of money would be "undeserved". On the liberal side of things, I've heard arguments for the continued use of identity politics since only this approach to politics will make sure that society addresses past/current injustices that have been done (or are being done) to specific interest groups (e.g., African Americans, LGBTQ+ community, etc.). However(!), Plato's characters seem to be discussing what will lead to the greatest good for the city as a whole; they're neither discussing what's best for individuals or groups within the city nor the issue of whether stipends are "deserved" or not. In other words, Plato is thinking about what will lead to stability and, if possible, what would make society anti-fragile. Put differently, Plato appears to be thinking at the level of systems (Miller and Page 2009). He's thinking about how to make society adaptive and robust, not likely to fall apart. Thinking about the other issues is ok, since they certainly relate to how adaptive a society can be. But be careful, since some conclusions that come from lazy thinking lurk nearby.

 

Stability

 

I hope we agree that the topic here is how to make society cohesive. However, some lazy thinkers attempt to move the conversation into whatever pet topic they like to discuss—rather than engaging with the actual topic of discussion. Let me put it bluntly. Just saying "If Plato's view implies (insert pet topic) is false, then Plato must be wrong", i.e., the quick and easy solution, won't work here, because it's a form of lazy thinking. To actually respond to Plato's challenge, you'd need something like an explanation of and evidence for why your pet topic wouldn't interfere with the city's stability. Even better, you can make the case that your pet topic would improve the city's stability! But the point here is that the level of analysis must be at the same level that Plato is thinking at: the systems level. Remember that Plato doesn't know anything about your pet topic. Plato's dead. So, all we can do is see if there's any wisdom in what he wrote (and many people seem to think there is). So when engaging with his work, we actually have to engage with his work.

Lazy thinking, as I previously mentioned, is all over the place. Heck, I'm sure I engage in it all the time. That's why I'm continually updating my views and lessons to try to eradicate any remnants of lazy thinking. (I'm trying!) Other groups and career-types have been guilty of lazy thinking as well—not just you and me. For example, in a recent interview, psychologist Gordon Pennycook made the case that it isn’t ideology that drives conspiratorial thinking (i.e., believing strongly in conspiracy theories such that they inform the way you live your life), but rather it is a lack of cognitive reflection (i.e., lazy thinking). What's causing this lazy thinking? Pennycook argues that the media environment, which we've seen is less and less informative, has raised uncertainty to a degree where many feel (non-consciously) that it is ok to process information in a lazy way.

 

Mazzucato's The Entrepreneurial State

 

There may also be some lazy thinking in academia. In The Entrepreneurial State, Mariana Mazzucato argues against the view that the state should not interfere with market processes since it is incompetent and only messes things up—a view endorsed by many mainstream economists. She argues instead that the state has played the main role in the development of various technologies that define the modern era: the internet, touch-screen technology, and GPS. It has also granted loans to important companies such as Tesla and Intel. Moreover, the state takes on risks in domains that are wholly novel and in which private interests are not active, such as space exploration in the 1960s. It is a major player on both the demand and supply sides. And it also creates the conditions that allow the market to function, such as by building roads during the motor vehicle revolution. In short, the state is entrepreneurial and very good at it. Her explanation, by the way, also comes at the level of the system. As such, it is difficult to grasp at first. Relative to her view, the view that the state is incompetent is simplistic. Choosing the simplistic view over the complicated view may be yet another form of lazy thinking: willfully refusing to understand a newer, more complicated view so that you can keep your prior beliefs.

Lazy thinking takes place in history too. In The House of Wisdom (2011, chapter 15), Al-Khalili makes the case for a long decline in Arab science, as opposed to a sudden collapse. But in order to make his argument, he first dispels “lazy” explanations of this decline such as that it was caused by the Mongol conquest of Baghdad, which implies that Baghdad was the only intellectual hub in the region. Instead, he clarifies that intellectual work was being done well into the 14th century. For example:

  • Ibn al-Nafis (1213–1288) developed his theories on pulmonary transfer of blood, which were improvements on that of Galen.
  • Ibn Khaldun (1332–1406), whose ideas included:
    • the necessity and virtue of a division of labor (before Adam Smith),
    • the principle of labor value (before David Ricardo),
    • a theory of population (before Thomas Malthus),
    • the role of the state in the economy (before John Maynard Keynes), and
    • the first work of sociology.
  • Jamshid al-Kashi (1380–1429) was a great mathematician who advanced the use of decimal notation.

So historians had engaged in lazy thinking, telling a simplistic story about Arab decline and ignoring the important scientific achievements made by Arabs that conflicted with their simplistic narrative. What is the explanation given by the more rigorous part of the mind? Well, Al-Khalili notes that one important reason for the decline in science was the lack of enthusiasm for the printing press among Arab states. In short, Arabic script simply didn't lend itself to mechanization—something that is necessary for the operation of a printing press. But, of course, the printing press is essential for the vast and fast transmission of ideas. Any state without widespread use of the printing press is necessarily going to progress more slowly than one with widespread use of printing presses. (Another explanation at the level of systems!) As you can see, the better explanation is much harder to arrive at—certainly harder than "it's because they lost a war".2

As you can see, lazy thinking can come in many guises. Watch out. Always ask yourself the following questions when studying an argument: a. what does the author really mean by this? b. what evidence is being given? I think it'll help stave off lazy thinking. For some other tools for thinking, see Dennett (2014).

Argument Extraction

 

 

 

The value alignment problem

Anyone who's spent more than half an hour with me knows of my deep fascination with, and interest in, artificial intelligence (AI). In fact, in another class (PHIL 101), I give my views on some possible scenarios we might find ourselves in as AI becomes even more ubiquitous in society.3 In this portion of the lecture, however, I'd like to link once more with what we discussed in the last lesson. In particular, I'd like to discuss how, if we aren't careful, rolling out AI across more and more domains of society will increase various forms of inequality, in particular that between the white majority and historically disenfranchised groups.

 

Baptist's The Half Has Never Been Told

 

First off, we must acknowledge that there have been historic wrongs of epic proportions in American history. For example, in The Half Has Never Been Told, historian Edward E. Baptist gives the economic history of slavery and the form of capitalism that it gave rise to. Here are some highlights. In chapter 4, Baptist takes on various issues regarding plantation slave labor during the early 19th century. First off, slave labor was increasingly torturous during this period in the Deep South; handlers endeavored to devise new ways to extract more and more output from the slaves. They were successful. The push and quota system, where slaves were whipped if they didn’t reach their daily goal and the goal was progressively increased as time passed, was the "management development" of this time period—a system that is, in a less brutal and modified form, still used today.4 In fact, Baptist uses this increase in production through coerced labor as an empirical datum against the economists’ view that free and voluntary labor is more productive than coerced labor. In other words, the free-market fundamentalist's claim that a system of rational agents acting out of self-interest is the most efficient system is false, according to Baptist; you can be even more efficient by inflicting unimaginable brutality on workers. Moreover, since cotton was the world’s most coveted commodity in that century, slave labor made plantation owners lavishly rich and made the American empire possible (Beckert 2015, Beckert and Rockman 2016).5

All this to say, some minority groups (e.g., African Americans) have suffered more than their share of injustice. It seems sensible, then, to attempt to avoid any further harm done to these disenfranchised groups—one would think. However, the rolling out of AI might inflict yet another harm on disenfranchised groups. I'll explain.

 

Brian Christian's The Alignment Problem

 

Today, we are in the age of machine learning. Machine learning (ML) is an approach to artificial intelligence where the machine is allowed to develop its own algorithm through the use of training data, as opposed to being rigorously coded by a computer programmer. Per Sejnowski (2018), ML became the dominant paradigm in AI research in 2000. Since 2012, the dominant method in ML has been deep learning. The most distinctive feature of ML (and deep learning) is its use of artificial neural networks (see the FYI section for more info). Sejnowski claims that any breakthroughs that will happen in AI will happen as a result of research into deep learning.
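To make the contrast with hard-coded programs concrete, here's a minimal sketch (my illustration, using Python's scikit-learn, not anything from Sejnowski): nobody writes down the rule for XOR; a small neural network induces it from four labeled examples.

```python
# The ML paradigm in miniature: the XOR rule is learned from data, not hand-coded.
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]  # XOR: true exactly when the two inputs differ

net = MLPClassifier(hidden_layer_sizes=(8,), activation="tanh",
                    solver="lbfgs", max_iter=5000, random_state=0)
net.fit(X, y)
print(net.predict(X))  # ideally [0 1 1 0]; tiny nets are finicky, so another random_state may be needed
```

Note that the learned "rule" lives in the network's weight matrices, a point that foreshadows the opacity problem discussed below.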

That's all well and good. If you're like me, you're excited about new developments in technology. However, it's not all as awesome as it might appear to be at first. In the opening chapter of The Alignment Problem, AI researcher Brian Christian discusses just what this alignment problem is. In short, it is research into ensuring that AI systems are designed so that their goals and behaviors reliably align with human values. But, Christian points out, we are a long way off the mark at this point. For example, machine learning and deep learning algorithms, due to the biased data sets on which they are trained, have built-in biases that may negatively affect historically disenfranchised groups. Case in point: early ML algorithms did very poorly at recognizing the faces of minorities and women. In fact, Christian points out that the bias goes far back in history, all the way back to the mechanisms within cameras themselves, which were tuned on light-skinned models(!). (No wonder our data sets are biased!) It was apparently chocolate manufacturers(!) that demanded the improvement of camera tuning so as to show the details in their product. The lack of training sets containing black faces even led Google’s image classification algorithms to label people of African descent with non-human categories.

Machine learning algorithms also show bias when it comes to word association. For example, some algorithms were designed so as to be able to "compute" combinations of words. The results were pleasing at first, but then things got sour. One such case is when you input the following: “doctor” - “man” + “woman”. The output was “nurse”, as if women can't be doctors.
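If you'd like to reproduce this kind of word arithmetic, the gensim library makes it a one-liner. Here's a sketch (the file name below is a placeholder; any pretrained word2vec-format embeddings will do):

```python
# Word-vector arithmetic of the kind Christian describes: "doctor" - "man" + "woman".
from gensim.models import KeyedVectors

# Hypothetical path: substitute whatever pretrained word2vec-format vectors you have.
vectors = KeyedVectors.load_word2vec_format("pretrained_vectors.bin", binary=True)
print(vectors.most_similar(positive=["doctor", "woman"], negative=["man"], topn=3))
# On many classic embedding sets, "nurse" ranks at or near the top: the bias in question.
```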

There are more problems. There is an opacity to neural nets. What this means is that, since they learn at the sub-symbolic level, programmers do not actually know what the algorithm "learned" to do. Here's one example, as told by AI researcher Eliezer Yudkowsky:

“Once upon a time, the US Army wanted to use neural networks to automatically detect camouflaged enemy tanks. The researchers trained a neural net on 50 photos of camouflaged tanks in trees, and 50 photos of trees without tanks. Using standard techniques for supervised learning, the researchers trained the neural network. Wisely, the researchers had originally taken 200 photos, 100 photos of tanks and 100 photos of trees. They had used only 50 of each for the training set. The researchers ran the neural network on the remaining 100 photos, and without further training the neural network classified all remaining photos correctly. Success confirmed! The researchers handed the finished work to the Pentagon, which soon handed it back... It turned out that in the researchers’ dataset, photos of camouflaged tanks had been taken on cloudy days, while photos of plain forest had been taken on sunny days. The neural network had learned to distinguish cloudy days from sunny days [not tanks from trees]” (Yudkowsky 2008: 321).
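The tank story is easy to recreate in miniature. Here's a sketch with made-up synthetic data (everything below is my own illustration): the label is genuinely unrelated to the "image content", but a nuisance feature, think brightness of the sky, is almost perfectly correlated with the label in the training set. The classifier looks like a success on the data it was trained on and collapses to coin-flipping the moment that correlation disappears.

```python
# Recreating the "tank detector" failure: the model learns a spurious feature
# (sky brightness), not the thing we actually care about.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Training set: "tank present" is confounded with brightness (cloudy vs. sunny days).
y_train = rng.integers(0, 2, n)
brightness = y_train + 0.1 * rng.normal(size=n)  # tracks the label almost perfectly
content = rng.normal(size=(n, 5))                # the actual "image content" is pure noise
X_train = np.column_stack([brightness, content])

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Deployment: brightness no longer tracks the label.
y_test = rng.integers(0, 2, n)
X_test = np.column_stack([rng.normal(size=n), rng.normal(size=(n, 5))])

print(model.score(X_train, y_train))  # ~1.0: "success confirmed!"
print(model.score(X_test, y_test))    # ~0.5: chance; it never learned about tanks
```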

 

The sword of Damocles

This opacity of neural networks makes them dangerous in, say, medical settings, where they are increasingly being used. This is because if they learn an erroneous pattern, it will be difficult to discover this by looking at the model. Again, the learning happens at the sub-symbolic level, which means humans can't interpret it. It would be easier, obviously, to glean any mistakes from rule-based models, like the hard-coded programs that AI researchers used prior to the ML revolution. In fact, this has been tested. In one trial, both a rule-based and a neural net model learned a false rule: that asthmatics tend to survive pneumonia. (This, by the way, is definitely not true. You probably know this if you know anyone with asthma.) The models learned this because asthmatics, due to their underlying condition, typically get sent immediately to the ICU if they get pneumonia. This makes their death rate artificially low on account of the care they receive as a matter of course. In other words, asthmatics tend not to die of pneumonia because there is a streamlined process to give them the care they need, and without this treatment they would very likely die. Both the neural net and the rule-based model, though, suggested asthmatics with pneumonia be sent home—a result of erroneously "thinking" that asthmatics don't tend to die from pneumonia. This very bad advice was easily spotted in the rule-based model, since it’s written in (more or less) plain English—any programmer can read it. However, this would’ve been impossible to recognize at the sub-symbolic level of the neural net. If a hospital is using just neural nets to make diagnoses or treatment recommendations, it has no easy way of telling how poorly calibrated its neural network is.6
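Here's one more sketch, again with toy data of my own invention, showing the readability difference. A shallow decision tree trained on data containing the asthma artifact prints its learned rule in something close to plain English; a neural net trained on the same data offers you nothing but weight matrices.

```python
# Readable rules vs. opaque weights on toy data containing the asthma artifact:
# in the data, asthmatics rarely die (because they were rushed to the ICU).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
n = 2000
asthma = rng.integers(0, 2, n)
# Artifactually low mortality for asthmatics: 2% vs. 15% for everyone else.
died = (rng.random(n) < np.where(asthma == 1, 0.02, 0.15)).astype(int)
X = asthma.reshape(-1, 1)

tree = DecisionTreeRegressor(max_depth=1).fit(X, died)
print(export_text(tree, feature_names=["asthma"]))  # the suspect rule, visible at a glance

net = MLPClassifier(max_iter=2000, random_state=0).fit(X, died)
print(net.coefs_[0][0][:5])  # raw first-layer weights: nothing here flags the bad rule
```

The tree's printout says, in effect, "asthma means lower mortality", which a clinician can immediately flag as an artifact; the net's weights say nothing a human can audit.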

So our neural nets might end up being racially biased and sexist and we would be none the wiser. This is obviously not good. I've been harping on about the potential social disruptions that AI might cause for years, in particular if we roll out this technology without really understanding what it's doing. However, it seems that the American legislature is composed primarily of people with insufficient computer literacy to understand the looming threats. The way I see it, especially if you consider the other threats coming from AI (see The Chinese Room and Turing's Test), we are sitting around with the Sword of Damocles dangling over our heads. Perhaps we can do better in Ninewells?

 

 


 

Do Stuff

  • Read from 430d-437e (p. 116-125) of Republic.

 


 

Executive Summary

  • Lazy thinking occurs when you accept the intuitive answer that comes to mind without rigorously checking the line of reasoning and the quality of evidence that led to said conclusion.

  • In today's reading of Republic, the characters note that their kallipolis has temperance in the sense that there is unanimity about who should rule, i.e., those that are most skilled at ruling are the only ones who should rule. The characters also decide that justice is found in the city so long as everyone performs their assigned duty.

  • The alignment problem is an area of research in artificial intelligence that attempts to ensure that our AI systems will actually comport to human values and desires—something that is not currently happening.

FYI

Suggested Reading: Eliezer Yudkowsky, AI Alignment: Why It’s Hard, and Where to Start

TL;DR: UCL Centre for Artificial Intelligence, The Alignment Problem: Brian Christian

Supplementary Material—

Related Material—

 

Footnotes

1. The psychological model endorsed by Nobel laureate Daniel Kahneman, known as dual-process theory, gives a very helpful metaphor for understanding these different parts of the mind (which are sometimes at odds with each other in their conclusions). Although I cannot give a proper summary of his view here, the gist is this. We have two mental systems that operate in concert: a fast, automatic one (System 1) and a slow one that requires cognitive effort to use (System 2). Most of the time, System 1 is in control. You go about your day making rapid, automatic inferences about social behavior, small talk, and the like. System 2 operates in the domain of doubt and uncertainty. You'll know System 2 is activated when you are exerting cognitive effort. This is the type of deliberate reasoning that occurs when you are learning a new skill, doing a complicated math problem, making difficult life choices, etc. The interested student should consult Kahneman's 2011 Thinking, Fast and Slow. I might add that although I think the theory is very elegant and gives an extremely helpful metaphor for thinking of the mind, I do not think it is ultimately accurate. If you'd like to know my views on the mind, you'll have to buy me a cup of coffee and sit patiently as I explain the predictive coding hypothesis to you. As an alternative, see Clark (2015).

2. Obviously the Mongol conquest did play a major role in the decline of Arab civilization, but it doesn't tell the whole story—especially with regards to Arabic scientific achievements. The interested student should refer to Al-Khalili (2011) and Mackintosh-Smith (2019).

3. Here are four possible scenarios, from best to worst. Scenario 1: We achieve a technological breakthrough. We develop helpful superintelligent AI that solves all human organizational, societal, and production problems. However, we've lost our sense of identity and meaning and now feel obsolete (see Danaher 2019). Scenario 2: We avoid the full automation of Scenario 1, but we still have to deal with partial automation. In particular, job-types like management roles are automated such that you are working jobs where you are micro-managed by a supercomputer—a technologically updated version of the push and quota system (see Guendelsberger 2019). Scenario 3: Nanotechnology and machine learning converge, leading to a more war-prone world (see Phoenix and Treder 2012). Scenario 4: Total annihilation of the human race (see Bostrom 2017).

4. Being a picker at an Amazon warehouse is a modern-day job which uses the push and quota system; see Footnote 3.

5. I wish that was the end of it. But Baptist goes on to survey a catalogue of horrors and injustices. In chapter 5, he discusses how slave extemporaneous musicality developed, i.e., the capacity to come up with and improvise on a musical theme, and how this was usurped (i.e., taken) by the whites. In chapter 6, Baptist discusses how slave spirituality developed and is ultimately suppressed by whites, all in the land of religious freedom. The apparent justification was fears of slave insurrections which would lead to the death of whites (which did happen at least once). In chapter 7, Baptist discusses the sexualization of black women. In chapter 8, Baptist discusses how slavery influenced foreign policy. One example of this was what Ulysses S. Grant called the "most wicked" war in American history: the Mexican-American War. As Baptist tells it, slave owners and pro-slavery politicians sought to expand so as to increase the available area for the cotton industry (see also Greenberg 2012). Not to leave out the North, chapter 9 is about how integral slavery was to the rise of industry in the North. By the way, Baptist tells us that after US forces captured the capital of Mexico, Mexico City, Congress debated whether or not to annex the whole territory of Mexico (i.e., the entire country). Congress opted not to, however, because they saw themselves as a government of the White race. See Baptist 2016 for still more.

6. Neural networks are also used in predictive policing, and this also has some troubling biases built into it; see the section titled "Predict and Surveil" from my PHIL 103 lesson titled Seeing Justice Done.

 

 

The Rule of the Knowledgeable

 

 

It is useless to attempt to reason a man out of a thing he was never reasoned into.

~Jonathan Swift

Argument Extraction

 

Epistocracy

Some have expressed to me a sentiment that is very close to the following: I can believe whatever I want to believe. I politely disagree with them. I try to explain that their view is uncomfortably close to a discredited view in philosophy—a view called alethic relativism. Alethic relativism is the view that what is true (or false) for one individual or social group may not be true (or false) for another; moreover, according to this view, there is no principled way for privileging one group’s claim over the other (see Herrick 2015, chapter 5). In other words, it's possible that we each have our own truth. This, I hope you can see, is a woefully inadequate view. I mean... The view is ridiculous. Here's a Socratic-style argument against alethic relativism.

  1. If alethic relativism is true, then no one (or no culture) has ever been wrong in their belief claims.
  2. But obviously people (and cultures) have been wrong about their beliefs.
  3. Hence, alethic relativism must be false.

Ladder fails
People and their mistakes.

In general, I take what I think is the most defensible view: that you need to have good reasons for believing what you believe. On some days, like days when I'm crabby (which are increasing in number the older I get!), I go a few steps further. I might say that not only should you believe only in things that you have good reason for believing in, but you should do due diligence and actually explore your beliefs. If you find yourself lacking enough support for one of your beliefs, you don't deserve to hold on to it. Luckily I'm not always so crabby. I'll settle for this in this class: you can keep your beliefs if you can defend them.

In case you haven't noticed, so far in this course we've challenged quite a few beliefs. In City of Words, we wondered whether it makes sense to define discrete categories of mental health and mental illness; we also discussed the possibility that some non-normal psychological dispositions, like depression, might actually help you see reality more objectively—the so-called depressive realism hypothesis. In ...for the Stronger, we saw that conflict can be a vital part of certain institutions. In particular, we saw that scientific communities are groups of people with shared epistemic norms (i.e., norms about what more or less counts as an acceptable hypothesis, evidence for said hypothesis, evidence against said hypothesis, etc.) that nonetheless compete with each other for grant money, prestige, and faculty positions. Contrary to what Plato says, this is an instance of internal conflict giving rise to something positive: scientific progress. In Three Red Flares, we explored the view that going on to higher education is mostly just about signaling to employers that you are moderately intelligent and willing to conform to mainstream social norms and do boring work without giving up; i.e., college is not about what you learn but what you signal. In A Certain Sort of Story, we saw that, whether it be through legacy media or the new media, most Americans aren't being informed very well; it's either propaganda or click-bait (or both). In part 1 and part 2 of The One Great Thing, we were presented with a worrisome hypothesis: the American educational system, like that of other countries, is designed to instill in young minds blind nationalism, mindless optimism, and subservience to political/economic elites, among other things. In Stability, we considered whether or not certain societal conditions might make one's own community more or less stable. In particular, we saw the possibility that extreme wealth/income inequality might lead to destabilization, and we heard an argument about how identity politics is possibly making politics and getting an education more difficult. Lastly, in Fragility, we covered a pet topic of mine: the possibility that rolling out artificial intelligence in all domains of human life might prove to be extremely disruptive to society.

I have one more proposal for you in this unit. In the video below, I introduce an idea and I plant a seed. Depending on how you look at it, this idea will be either radically un-American or as American as apple pie.1 In short, in this video we will make our first argument against democracy. And we're just getting warmed up.

 

 

So what does Brennan want? He argues for an epistocracy, sort of. What is an epistocracy? It is a form of government very similar to the democratic republic that Americans are used to, but one that requires voters to show competence on political matters before voting. This—whether it be in the form of required civics training before elections for anyone who wants to vote, economics pop quizzes at the ballot box, etc.—might solve the problem that many modern democracies face: voter ignorance. Why does Brennan only "sort of" argue for epistocracy? He actually only argues that if an epistocracy can deliver more political goods than a democracy, then we should opt for an epistocracy, and obviously that's a big "if".

What do you think?

 

 


 

Do Stuff

  • Read from 437e-445e (p. 125-135) of Republic.

 


 

FYI

Suggested Reading: Bryan Caplan, The Myth of the Rational Voter, Chapters 1 and 2

TL;DR: BookTV, After Words: Hobbits, Vulcans, and the flaws of democracy

Supplemental Material—

Related Material—

Advanced Material—

 

Footnotes

1. See Klarman 2016 for an argument that the founding fathers deliberately sought to minimize the power of the people in political matters.

 

 

 Unit II


The Distance of the Planets

 

 

[E]quality is not the empirical claim that all groups of humans are interchangeable; it is the moral principle that individuals should not be judged or constrained by the average properties of their group. ...If we recognize this principle, no one has to spin myths about the indistinguishability of the sexes to justify equality.

~Steven Pinker

Is and Ought

Today we begin to explore an argument that will take several lessons to truly grasp. In fact, I will only hint at it at the very end of this lesson. To begin to work our way towards this conclusion, however, let's take a look at an issue that seems to be wholly unrelated: coming to moral conclusions from purely empirical premises. Don't worry. I'll explain what that means. This will be the order of events. First, we'll look at what this is/ought distinction is all about. Then we'll look at the reading for today, where the is/ought distinction is relevant but never explicitly mentioned. In the next section, we'll return to is/ought with a contemporary example. Lastly, we'll discuss why it might be a good idea to always take note when you are coming to a moral conclusion in one of your arguments, since you will be psychologically vulnerable and might produce an invalid argument.

The is/ought distinction is the thesis that one cannot come to a moral conclusion (i.e., a conclusion which states that you ought or ought not engage in some particular action or have some particular moral belief) from purely empirical premises. Put differently, facts alone cannot get you to a moral conclusion; you need a moral premise somewhere in your argument. Let me give you an example:

1. It is the case that Nicole borrowed $100 from Jac.
2. It is the case that Nicole promised to pay back Jac.
3. It is the case that Nicole has $100 to spare right now.
4. It is the case that Nicole will see Jac later today.
...
n. Therefore, Nicole should pay back Jac.

First off, if you look closely at the argument, the conclusion does not necessarily follow from the premises. Of course, this means this argument is not valid. It goes without saying that it's also not sound. In other words, even if you believed all the premises and they were all true, you wouldn't have to believe the conclusion—at least rationally speaking. Now, before you accuse me of being a psychopath, let me explain what's going on here. Let me first show you what the argument would look like if it were valid. That might help.

1. It is the case that Nicole borrowed $100 from Jac.
2. It is the case that Nicole promised to pay back Jac.
3. It is the case that Nicole has $100 to spare right now.
4. It is the case that Nicole will see Jac later today.
5. If you borrow money, you should pay it back.
6. Therefore, Nicole should pay back Jac.

 

An AND gate, a basic digital logic gate.

I hope that now you can see that the conclusion does(!) necessarily follow from the premises in this one. The way I sometimes like to put it is this. Validity has to do with the logical relationship between the premises and the conclusion. In particular, the path from the premises to the conclusion has to be foolproof; it has to be a watertight connection. If you know about computer programming, validity is a lot like writing a computer program. The code you type in is the premises, and the conclusion is the output of your program. As you know if you've ever tried to write computer code yourself, you have to get EVERY SINGLE LITTLE DETAIL RIGHT in order for the program to work, i.e., for the conclusion to follow. This is because, on one way of looking at them, computers are extremely literal-minded and, well, dumb. Their competence comes only through the accumulation of hundreds and thousands of easy tasks, like distinguishing between true and false conjunctions (i.e., and-statements), until they can finally perform very difficult tasks, like learning from data (see Dennett 2014: 109-150). So, in your role as a programmer, you have to make computer programs for these (sorta) dumb machines. That's what validity is like. You have to build your case so that there's zero room for doubt, and it's all or nothing.2
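
To make this concrete, here's a quick sketch in Python (my choice of language here; nothing in the argument hangs on it). It checks validity the literal-minded way a computer would: grind through every possible combination of true and false, and call an argument form valid only if there is no combination where all the premises are true and the conclusion is false.

    from itertools import product

    # An argument form is valid iff there is NO assignment of truth
    # values on which every premise is true and the conclusion is false.
    def is_valid(premises, conclusion, num_vars):
        for row in product([True, False], repeat=num_vars):
            if all(p(*row) for p in premises) and not conclusion(*row):
                return False  # found a counterexample row
        return True

    # Modus ponens ("If P then Q; P; therefore Q") checks out:
    print(is_valid(
        [lambda p, q: (not p) or q,   # If P then Q
         lambda p, q: p],             # P
        lambda p, q: q,               # therefore Q
        num_vars=2))                  # True

    # Affirming the consequent ("If P then Q; Q; therefore P") does not:
    print(is_valid(
        [lambda p, q: (not p) or q,   # If P then Q
         lambda p, q: q],             # Q
        lambda p, q: p,               # therefore P
        num_vars=2))                  # False

Notice there's no wiggle room here: a single counterexample row and the whole argument is rejected. That's the all-or-nothing character of validity.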

Let's go back now to the first argument. Let's be honest: something fishy is going on here. If you're anything like me, when you read the first argument, it does feel like the conclusion follows. In other words, you (like me) read the first argument and say, "Yeah, Nicole should pay back Jac." In fact, even after credible authorities (e.g., a college professor) pointed out to me that such arguments are incomplete, I still didn't believe them. Why is this the case? Well, that's the power of moral arguments. Moral arguments have a way of swaying us in an automatic, non-conscious way. Our brains naturally fill in the premises so that the argument is valid, especially because paying back our debts is something that (I hope) most of us already believe in. And we don't even notice it. Most people would never notice, to be honest. As far as we can tell, no one had even articulated the is/ought distinction before its discovery in the 18th century (see Footnote 1). Moreover, even once you learn about it, you have to really think about what validity means before you're able to convince yourself that the first argument isn't valid and that only the second one is. Try to convince yourself of that now.

Why do we "fill in" the missing premises for arguments whose moral conclusions we agree with? There are honestly too many theories to give an even-handed summary of them here. I will tell you about only two. The first is the pretty standard account in psychology today, since many people believe in the dual-process model described in the lesson called Fragility (but see Mercier 2020 for an interesting rebuttal). On this model, we have a confirmation bias, and we selectively seek and interpret information in ways that simply reinforce our pre-existing beliefs. Thus, once we see an argument with a moral conclusion we agree with, we naturally fill in the blanks and make the argument valid. It helps, I might add, following MacIntyre (2003, 2013), that the terms "right" and "wrong" essentially mean nothing—a sort of linguistic anarchy. As such, since you have some leeway with regards to the meaning of "right", the mind has an easy path towards imagining the argument is actually valid when it's not—confirmation bias at its finest.

 

Dutton's Black and White Thinking

The other theory I will mention goes something like this. Our brain is in the business of making predictions; that's what it evolved for (Clark 2015). These predictions will hopefully keep you alive long enough to find a mate, reproduce, and pass on your genes—good ol' fashioned evolutionary fitness. Of course, making predictions is hard work! There's a lot of information to process! So, your brain takes as many shortcuts as it possibly can; this makes evolutionary sense. Remember, through most of our evolutionary history, if you had sat there and thought out a problem in its entirety, you would've ended up as some predator's lunch. So shortcuts are a must. This, by the way, is referred to as a bounded rationality strategy: you don't process all the information available to you, just the info that tends, in general, to keep you out of some predator's digestive tract. What kind of shortcuts does the brain take? Well, according to Dutton (2020), we evolved three different filters: the fight versus flight filter (where you process just the info that helps you decide whether you should take off or take a stand), the us versus them filter (where you process just the info that helps you decide whether someone is one of your own "tribe" or not), and the right versus wrong filter (where you process the info that gives you cues as to whether some action will be socially acceptable or not). Under this view, then, you have something like a mental module that helps you categorize some actions as permissible and others as not permissible. Now here's where the analogy with a shortcut breaks down. When you take a shortcut to some destination, you're typically aware of what you're doing. These cognitive shortcuts, however, are automatic and under the radar. You won't know that you're not processing all the relevant information. You'll just feel yourself arrive at a conclusion—without noticing that you got there through a questionable process of mentally "filling in" missing facts/claims. Oh, Darwin!

If you're getting the feeling that I'm arguing you can't blindly follow your intuitions, then you're getting it. And if you haven't noticed, I'm throwing a ton of evidence your way (see my Works Cited page). The mind is a tricky thing. Watch yourself.

Argument Extraction

 

 

 

Demonic Males

As we learned in the previous section, there is a distinction that goes back to the 18th century between empirical claims and moral conclusions. In particular, the claim is that no matter how many empirical claims you pile up, you cannot build a bridge to a moral conclusion. Although this distinction might seem irrelevant to you, it has some happy consequences that might be of interest. Let me begin with a story.

 

E.O. Wilson.

In the summer of 1975, the distinguished Harvard entomologist Edward O. Wilson, considered the foremost authority on the study of ants, published Sociobiology: The New Synthesis. In it he made the radical(?) proposal that social behavior has a biological basis, and hence there can be a biological science of social behavior—the science he was attempting to usher in with his new book. In this book, however, he explicitly included Homo sapiens as one of the species that could be studied through sociobiology, suggesting that human sex role divisions, aggressiveness, religious beliefs, and much else ultimately all have a genetic basis, even if genes don't tell the whole story. And so, after the publication of this book, the mild-mannered Wilson, who preferred to spend his time studying ants, came under intense criticism. There were vitriolic articles written against him, and he was called a racist and a sexist—despite not making any explicitly racist or sexist claims. Then, some three years after he published his book, Wilson was about to speak at a symposium sponsored by the American Association for the Advancement of Science when he had a jug of water poured over his head by a group of hecklers, later discovered to be associated with the Marxist Progressive Labor Party. For the full story of the sociobiology controversy, see Segerstråle (2000).

Why would leftists pour a pitcher of water over a biologist's head at a (probably pretty boring) academic conference? Some thinkers (e.g., Pinker 2003, Segerstråle 2000) who have studied Wilson's sociobiology controversy and other similar incidents think they know what was going on in the minds of these leftists. The leftist idea, although expressed differently by different thinkers, seems to be as follows: if there are biological explanations for any kind of social inequality (e.g., gender inequality, racial inequality, etc.), then this legitimizes it. In other words, if one is ideologically committed to the view that the status quo is full of injustice, then any attempt to give an account of social inequality on a biological basis is an attempt to justify the status quo (see Pinker 2003, chapters 6 & 7). As it turns out, many participants in the sociobiology controversy were explicit Marxists whose criticisms of Wilson's work demonstrated that they had something like this in mind (Segerstråle 2000: 199-213).

 

Haier's The Neuroscience of Intelligence

Were these leftists right? With the benefit of hindsight, we can see that there are many aspects of our cognition and social behavior that are influenced by genes, suggesting that the leftists were, if not wrong, then at least wrongheaded in their criticism. For example, per Haier (2016, chapter 2), it appears that genes account for a major share, more than 50%, of the variance in intelligence. Moreover, the genes that play a role in intelligence are great in number, not an isolated few as some erroneously think. Importantly, environment does play a role in early cognitive development, but(!) its effect is almost negligible by the teenage years. So, for those who think the nature versus nurture debate is still raging, let's put it bluntly: it's both genes and environment that affect intelligence levels, although their effects fluctuate throughout the lifecycle.

How does this affect the social sphere? Well, there is a strong correlation between measures of intelligence and job performance (Schmidt & Hunter 2004). In other words, the higher you score on measures of intelligence, the more likely you are to perform well at work. You don't need to be an economist to see how this might lead to greater lifetime earnings for those with high intelligence. There's more. There are three longitudinal studies that establish a correlation between mental abilities and life success: Lewis Terman's project, Julian Stanley's Study of Mathematically Precocious Youth at Johns Hopkins, and the Scottish Mental Survey. These studies show a correlation between high mental abilities and physical development and emotional maturity—thankfully dispelling the stereotype of nerds as puny and socially maladjusted. Stanley's study in particular demonstrated the predictive power of a single(!) math-competency test taken at age 13 on lifelong earnings (see Haier 2016: 29-30). The Scottish Mental Survey even found a correlation between mental ability and longevity. So, genes definitely matter for lifetime economic success and even length of life!3

Here's another example of how genes affect our social lives. In Predisposed, the authors make the claim that our political perspectives are driven in large part by genetics. As the authors admit, we like to pretend that our political perspectives are rationally-derived. But by now you should be disabusing yourself of this delusion. Here are the authors:

“Many pretend that politics is a product of citizens taking their civic obligations seriously, sifting through political messages and information, and then carefully and deliberately considering the candidates and issue positions before making a consciously informed decision. Doubtful. In truth, people’s political judgments are affected by all kinds of factors they assume to be wholly irrelevant” (Hibbing, Smith & Alford 2014: 31).

The authors then move to survey all the ways in which genes affect our worldviews and politics. For example, there is evidence that conservatives feel a greater affective magnitude when evaluating positive and negative stimuli. In other words, for conservatives, the highs are higher and the lows are lower. Interestingly, this affects the learning process of liberals and conservatives. Liberals take more chances, even if they suffer occasional negative consequences. Conservatives are more cautious, even if it means limiting the amount of new information acquired. Nonetheless, both perform about the same in school. (Relax, conservatives. Settle down, liberals.) This is likely because liberal students take on more new information than they can actually process.

 

Predisposed

In all, conservatives are more likely to:

  • pay attention to stimuli that signal potential threats, e.g., angry faces;
  • follow instructions (unless the source is obviously bogus);
  • avoid unfamiliar objects and experiences;
  • and in general keep it simple, basic, clear, and decisive.

Liberals:

  • seek out new information (even if they might not like it);
  • follow instructions only when there is no other choice;
  • embrace complexity;
  • and engage in new experiences even if they entail some risk.4

As you can see, liberals and conservatives don't just differ in who they vote for. They differ, from cradle to grave, in their entire worldview and approach to life. This is because, the authors of Predisposed argue, there is individual variation in humans. We vary in how open we are to new information, how prone we are to focusing on the negative, and how much complexity we enjoy. Importantly, these are all influenced by genetic factors. Of course, the authors clarify that their findings are all probabilistic. They admit that there simply is no certainty in the social sciences, and so they trade in words like “determine” for “influences”, “affects”, etc. Nonetheless, the authors make a convincing case that you might've been predisposed to some of your political views before you were even born.

 

 

My favorite examples of how things that are out of our control influence behavior and society come from neuroscientist David Eagleman. In chapter 6 of Incognito, Eagleman discusses how changes to the brain produce changes in behavioral dispositions. For example, Charles Whitman (the infamous mass shooter known as the "Texas Tower Sniper") had a brain tumor putting pressure on his amygdala, a condition which might have made him more prone to violence. It's also the case that frontotemporal dementia patients are prone to a variety of socially unacceptable behaviors, like stealing and stripping in public. It's even the case that pedophilia can be brought on by a brain tumor and that Parkinson's patients can develop gambling addictions due to their dopamine medications.

 

Incognito

Most tellingly, Eagleman reminds us of a particular set of genes that really make a difference. If you are a carrier of this particular set of genes, then you are 882% more likely to commit violent crimes. In particular, you’re eight times more likely to commit aggravated assault, ten times more likely to commit murder, thirteen times more likely to commit armed robbery, and 44 times more likely to commit sexual assault. Concerningly, about one half of the human population carries these genes, to the detriment of the other half. As it turns out, the overwhelming majority of convicted prisoners carry these genes, as do 98.4% of those on death row. As Eagleman states, it seems pretty clear that the carriers of these genes are strongly predisposed towards deviance.
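
A quick note on reading these statistics: "882% more likely" and "almost ten times as likely" are the same claim in different clothes, since the percentage increase is just the risk ratio minus one. Here's the conversion in a few lines of Python (a sketch of the arithmetic only; the figures themselves are Eagleman's):

    # "X times as likely" and "% more likely" express the same risk ratio.
    def pct_more_likely(risk_ratio):
        return (risk_ratio - 1) * 100

    def times_as_likely(pct_more):
        return 1 + pct_more / 100

    print(round(pct_more_likely(9.82)))  # 882 -> "882% more likely"
    print(times_as_likely(882))          # 9.82 -> almost ten times as likely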

Who carries these genes? Have you guessed yet? It's men. Men are much more likely than women to commit violent crimes. It's not even a contest. Moreover, according to the authors of Demonic Males, the male tendency towards violence is not a product of the West, or of patriarchy (like some feminists contend), or settled life (as opposed to some mythical egalitarianism in hunter-gatherer societies); male violence is in our genes.5 In particular, the authors make the case that human males are predisposed to male coalitional violence, i.e., group violence. It is, they say, like a curse.

“Why demonic? In other words, why are human males given to vicious, lethal aggression? Thinking only of war, putting aside for the moment rape and battering and murder, the curse stems from our species’ own special party-gang traits: coalitionary bonds among males, male dominion over an expandable territory, and variable party size. The combination of these traits means that killing a neighboring male is usually worthwhile, and can often be done safely... Why males? Because males coalesce in parties to defend the territory. It might have been different... Hyenas show us that human male violence doesn’t stem merely from maleness [since in groups of hyenas it is the females that dominate the males]” (Wrangham and Peterson 1996: 167).

How does this all relate to the is/ought distinction? Well, it appears that the leftists we started this section with (and some modern ones too) are making an error in reasoning. They seem to believe the following argument is valid.

1. There are innate differences between genders, ethnic groups, those with different political affiliations, etc.
2. Social differences between these groups are naturally-occurring.
3. Therefore, we ought not try to alter this status quo.

Can you see the error? One cannot go from these two empirical premises to the moral conclusion. There's a missing premise. So, if we're taking the is/ought distinction seriously, the logical thing for the leftists to do would've been to deny that the argument is valid. Instead, they seem to have accepted the argument as valid, merely denying the truth of the premises (i.e., denying the soundness of the argument). However, as we can see with the benefit of hindsight, this amounts to science denialism—denying the truth of scientific evidence merely because it conflicts with one's political/religious worldview. (Don't get excited, conservatives. There's plenty of science denialism in your camp too; see Washburn and Skitka 2018.) It goes without saying that science denialism is never a sign of critical thinking.

So what's the point of this whole lesson? It's basically this: if we treat political ideology like a religion, then this obscures our capacity to think rationally, as Caplan argued. It even makes you unable to recognize that some arguments are invalid. What is this costing us? Are there ideas that certain leftists haven't even been able to listen to because their political dogmas prevented them from it? Pinker (2003) and others (e.g., Segerstråle 2000, McWhorter 2021) think so. Are you ready to hear some of these ideas?

Finally, what's the argument I'm working towards? I'll be arguing that political parties should be abolished. Stay tuned.

 

 


 

Do Stuff

  • Read from 449a-460a (p. 136-149) of Republic.

 


 

To be continued...

 

FYI

Suggested Reading: Bruce Goldman, Two minds: The cognitive differences between men and women

TL;DR: TEDTalks, The Differences Between Men and Women: Paul Zak

Supplemental Material—

Related Material—

Advanced Material—

 

Footnotes

1. To be sure, the is/ought distinction does not originate with me. Rather, it goes back to the 18th century, to the Scottish Enlightenment thinker David Hume. For more information about Hume (and his best friend Adam Smith), see Endless Night (Pt. I).

2. By the way, the connection between validity and computers is not incidental. Developments in logic directly led to the digital revolution (Shenefelt and White 2013). This is why computer science departments still require that their students take an introductory course in logic, and it's helpful if the instructor knows a thing or two about computer science. Might I recommend the PHIL 106 course taught by R.C.M. García?

3. Interestingly, an estimated 84%-95% of the variance in the mortality/IQ correlation may be due to genes (Arden et al., 2016), once again showing that genes play a massive role.

4. One of the most interesting theories in Predisposed concerns the evolutionary origins of conservatism and liberalism. The authors speculate that conservative tribalism, with a prominent negativity bias, a fear of the novel, and a penchant for tradition, was actively selected for early on in sapiens' history. As the species transitioned to sedentism (i.e., settled life), and after a long learning curve, violence eventually began to decrease. At that point, liberalism came into the picture. So, liberalism is an evolutionary luxury that came about when negative stimuli became less prevalent and less deadly. The authors also speculate as to whether diversity in political dispositions makes the meta-group (i.e., the group composed of both conservatives and liberals) stronger, but they make clear that the jury is still out.

5. In chapter 6 of Demonic Males, the authors respond to the thesis held by some feminists and cultural determinists: that patriarchy, sexual subjugation of women, and imperial tendencies are inventions of the West. The authors clarify that these things are found nearly universally (i.e., not just in the West), and then they proceed to give a catalogue of horrors perpetrated by groups from China, Japan, India, Polynesia, Mesoamerica, the Arab world, Africa, and various aboriginal tribes. In other words, males are violent across time and space. The authors also retell Friedrich Engels' theory that, prior to civilization, humans lived in communal bliss with equality of the sexes—a view Engels put forward in The Origin of the Family. In that work, Engels theorized that it was only after the invention of animal husbandry, and thus private property, that social relations became unequal; men began to control their wives. Once private property existed, in other words, men wanted to ensure their heirs were actually their offspring. To this the authors respond by reviewing tales of the subjugation of women in tribes that are allegedly egalitarian. The authors conclude that patriarchy is worldwide and history-wide, and it is enforced through male violence.

 

 

The Family

 

 

The history of science is the history of successive approximations.

~Robert Burton

A Catalogue of Horrors

One of my favorite fallacies to point out is the red herring. This is a fallacy in which an arguer lends support to his/her conclusion by providing an irrelevant (and diversionary) detail in order to distract people from the issue at hand. In other words, it's a form of throwing the argument "off track" by making a comment or point that is only weakly related to the main point—or maybe not related at all. It's derailing the argument.

A comic containing a red herring

I'm sure you've heard more than a few red herrings during the course of your studies. Given that a large number of my friends are teachers, I certainly have no shortage of stories where one student derails the conversation by bringing up something that is only loosely related to the topic at hand. (Maybe you're the one that has derailed the conversation!) For example, in one of my friend's classrooms, the topic for the lesson was variation in sexual practices in different cultures. One student, however, only wanted to talk about the tendency of one gender—guess which one!—to philander. Per my friend, almost nothing of substance was covered that day.

Examples of red herrings arise in various other domains, of course. In his book on critical thinking, Randy Firestone (2019) gives some examples from the political arena. For example, according to Firestone, a politician who argues that we should vote for him/her merely because they have the most experience might be committing a red herring. This is because experience alone is not what (should) count in politics. We obviously need their time in office to have been productive and honorable. In other words, all else being equal, if two politicians have been in office for 10 years but one has done basically nothing of worth and is associated with several scandals, we should prefer the non-scandalous politician. To steer the conversation towards a discussion of experience alone is to divert the conversation so that it lacks any substance.

Here's one more. I was once at a coffee shop engaging in one of my favorite pastimes, eavesdropping, when I caught wind of the following tragically comical conversation:

A: Raising the minimum wage will only make the cost of everything go up. Pretty soon a carton of milk will cost $20.
B: Wouldn't that only happen if lots of people spent their extra income primarily on milk? Like a supply-and-demand kind of thing?
A: Exactly.
B: But I just don't think that enough people will actually buy more milk than they need so as to drive the price up. Right?
A (realizing he has no idea what he's talking about): Well it's always the same thing: people want something for nothing.

In all honesty, neither one of the speakers had any idea what they were talking about. But at the very least B did raise a good point in his first question: it's not entirely clear that the price of milk in particular would skyrocket due to a rise in the minimum wage. And it's also not clear that a whole lot more milk would be purchased if people had more expendable income. What is clear is that A tried to change the subject at a certain point. He tried to transition to a discussion about how people don't want to work but still want money. That is not at all a response to the sensible economic question that B raised. That is merely derailing the conversation. That is, in other words, a red herring.

Begging the question is another one of my faves. This is a fallacy that occurs when an arguer presents an argument for a conclusion and one of the premises supporting the conclusion is the conclusion itself. In other words, this is the fallacy where the support for one's conclusion is just restating the conclusion. I've unfortunately seen this firsthand as well. While at a conference, I heard a talk where the speaker presented evidence from Mazzucato (2015) that argues that the US government has actually been pretty good at spending taxpayer money to develop new technologies that modernized our lives. To this, someone asserted that the government is always inept and always just wastes money. The speaker, looking confused, said something like, "I just gave you a bunch of evidence that what you said is not true. Why do you think the government is always inept?" And then, I kid you not, the person objecting said, "Because they are always inept." Do you see the fallacy? The conclusion is "The government is always inept at spending taxpayer money." When asked for evidence of this, all that was given was "The government is always inept at spending taxpayer money." This is as circular as it gets. Here's the argument in standard form:

1. The government is always inept at spending taxpayer money.
2. ∴ The government is always inept at spending taxpayer money.

 

A comic containing slippery slope

Another good one is the slippery slope fallacy. This is a fallacy in which an arguer claims that if one event (or action) is allowed to occur, then it will inevitably lead to a series of events (or actions) that are much more extreme and undesirable. My favorite example of this betrays my old age. Back in 2008, we had the debate over Proposition 8, a California ballot proposition and state constitutional amendment that was intended to ban same-sex marriage. Fifty-two percent of California voters ultimately said yes to Prop 8, thereby banning gay marriage in the state (until it was overturned in 2010). That's all ancient history. What I can't forget, though, is the arguments I heard against gay marriage. They were preposterous. Some argued—I'm not making this up—that if you allowed gay marriage, then the next logical step would be to allow marriage between a human and a non-human animal, like a goat. Some even said that the state would eventually have to allow marriage between a human and an inanimate object, like a pizza, if gay marriage was allowed. It was bananas. As I hope you can see, these are slippery slopes indeed. The idea that gay marriage would eventually lead to pizza marriage seems ludicrous now. But some genuinely argued this way.

As we move to close this section, let me remind you of the false dichotomy fallacy that we were introduced to all the way back in Three Red Flares. This is a fallacy in which an arguer portrays the issue as if there are only two tenable viewpoints when, in fact, there are others. In other words, when someone says, "It's either A or B" and there's clearly an option C, D, etc., then that's a false dichotomy. For example, I once heard (also while eavesdropping) someone say this: "Either I'm right and you're wrong, or you're right and I'm wrong." Having actually heard the conversation, I can tell you that it was option C: they were both wrong. Similarly, given that you might be thinking about how you're going to spend the rest of your life, someone might've recently told you that you can only either have a career you hate but that pays well or a career you love but that pays poorly. That's a false dichotomy. I've got news for you: you might get a job that you both hate and that pays you poorly (but hopefully not!).

It goes without saying that, as members of the Council of the 27, you must attempt to rise above fallacious reasoning. Here. We. Go.

Argument Extraction

 

 

 

Tailored Fit

Recall that in today's reading there was mention of eugenics, the study of how to arrange reproduction so as to increase the occurrence of heritable characteristics regarded as desirable. Eugenics is, for my money, not going to work quite like some of the early practitioners—who, in a weird, twisted way, did actually want to help people—thought it would. It's probably more likely to inspire another genocide, like it did during World War II. Thus, I won't cover it further in this course. However, in this section we will return to the topic of genes. In particular, we'll return to the scientists covered in the last lesson, The Distance of the Planets, and their radical(?) ideas.

Karl Marx (1818-1883).

Recall that there is a knee-jerk reaction among some leftists to the suggestion that there is individual variation in cognitive traits and in other traits that might lead to differential economic performance. That's all a fancy way of saying that some leftists don't like it when theorists provide evidence that some differences in intelligence, temperament, openness to new experiences, etc., are based on our biology rather than our upbringing. The concern for these leftists appears to be that if there is a biological explanation for the differences in the aforementioned traits, then that explanation might be used as an argument to justify the status quo, i.e., the social inequality of the modern age. However, in the last lesson we also covered the is/ought distinction, the view that no amount of empirical claims can ever justify a moral conclusion. So, the leftists' concern is probably unfounded. It appears that we can both accept and learn from the science of individual variation and reject the social inequality of the modern day. This, by the way, is quite a happy outcome: we can embrace the most up-to-date scientific findings and(!) also be concerned about social inequities (without being inconsistent!).

I'd like to further convince you of this today. If the concern was that the aforementioned biological findings might be used to justify the status quo, then these leftists were flat-out wrong. The very scientists whose work I reported on last time have their own ideas for how to make our institutions and society more rational. In other words, they are concerned about inequity, and they have ideas that they want to share—ideas that the leftists seem to have not even tried to listen to. We can begin with Richard Haier, whose work on the relationship between genes and intelligence I reported on. If a leftist had made the argument that Haier was trying to justify why some are poor and others are rich, i.e., because the former are less intelligent, then this leftist would be wrong. Haier is actually concerned about what he calls neuropoverty. In the following passage, he explains what it is and how it can perhaps be treated with further developments in neuroscience:

“…the normal distribution of IQ scores with a mean 100 and a standard deviation of 15 estimates that 16% of people will score below an IQ of 85 (the minimum for military service in the USA). In the USA, about 51 million people have IQs lower than 85 through no fault of their own. There are many useful, affirming jobs available for these individuals, usually at low wages, but generally they are not strong candidates for college or for technical training in many vocational areas. Sometimes they are referred to as a permanent underclass, although this term is hardly ever explicitly defined by low intelligence. Poverty and near-poverty for them is a condition that may have some roots in the neurobiology of intelligence beyond anyone’s control.

Here is the second most provocative sentence in this book: The uncomfortable concept of ‘treating’ neuro-poverty by enhancing intelligence based on neurobiology, in my view, affords an alternative, optimistic concept for positive change as neuroscience research advances. This is contrasted to the view that programs which target only social/cultural influences on intelligence can diminish cognitive gaps and overcome biological/genetic influences. The weight of the evidence suggests a neuroscience approach might be even more effective as we learn more about the roots of intelligence. I am not arguing that neurobiology alone is the only approach, but it should not be ignored in favor of SES-only approaches” (Haier 2019: 196-198; emphasis in original).

In other words, Haier is making the case that there are necessarily some who are going to score very low on intelligence tests. They are not strong candidates for well-paying jobs, and society (as it is currently organized) gives them no added safety net. Thus, as neuroscience advances, we should attempt to identify and "treat" these individuals, attempting to boost their intelligence in whatever way the latest technology allows. Put differently, it is cruel to leave those who are of below-average intelligence through no fault of their own to fend for themselves. We must find them and help them, using the science that our taxpayer dollars fund.
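
If you'd like to check Haier's arithmetic for yourself, Python's standard library will do it in a few lines. A quick sketch (the 320 million is just my round figure for the US population; it is not a number from Haier's text):

    from statistics import NormalDist

    iq = NormalDist(mu=100, sigma=15)  # the usual IQ scaling

    # Proportion scoring below 85, one standard deviation below the
    # mean: about 15.9%, i.e., Haier's "16% of people".
    below_85 = iq.cdf(85)
    print(f"{below_85:.1%}")  # 15.9%

    # Scaled to a US population of roughly 320 million (my round
    # number, not Haier's), that's his "about 51 million people".
    print(f"{below_85 * 320e6 / 1e6:.0f} million")  # 51 million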

But wait! There's more! Haier then discusses the work of Kathryn Asbury and Robert Plomin, both behavioral geneticists. They suggest tailoring the educational environment to help each student learn core material in a way that is likely best suited to that student’s genetic endowment. Basically, genetic research could accomplish the goal of individualized education: neurally-tailored educational programs for each and every student so that they actually learn(!). Haier quotes the authors:

“We aim to treat all children with equal respect and provide them with equal opportunities, but we do not believe that all our pupils are the same. Children come in all shapes and sizes, with all sorts of talents and personalities. It’s time to use the lessons of behavioral genetics to create a school system that celebrates and encourages this wonderful diversity” (Asbury and Plomin 2014, as quoted in Haier 2019: 198).

Haier isn't done yet. What about college? Here's Haier again:

“The idea that every high school student be held to a graduation standard of four-year college-readiness, irrespective of mental ability, is naïve and grossly unfair to those students for whom this expectation is unrealistic. Remember, statistically half of the high school student population has an IQ score of 100 or lower, making college work considerably difficult even in highly motivated individuals. It is similarly naïve and unfair to evaluate teachers by student test score changes when many tests are largely de facto measures of general intelligence rather than of the amount of course material learned over a short time period. Perhaps the greatest disservice to students will come from purposefully increasing the difficulty of evaluation tests by requiring more complex thinking to get the right answers... In principle, there is nothing wrong with evaluation testing or having high expectations and standards. These examples, however, illustrate the consequences of ignoring what we know about intelligence from empirical studies when crafting well-intentioned policies for education, especially those policies that assume thinking skills can be taught to the same degree to all students, or that buying iPads for everyone in the education system will increase school achievement” (Haier 2019: 199; emphasis in original).

 

Haier's The Neuroscience of Intelligence

Haier is saying, in short, that it is unfair and frankly mean-spirited to funnel everyone into college, since some are simply not prepared and probably cannot become sufficiently prepared to be successful. It's unfair to students. It's unfair to teachers. It's an unscientific and naive way to operate an educational system.

Now you can agree or disagree with Haier's ideas. (It's more fun if you do.) However(!), here's the main point of this section. Haier is not trying to justify the status quo. Rather, he is recommending that we radically alter it. So, if one wants to object to Haier's idea, one must object to the ideas that Haier actually espouses. In other words, one would have to respond to the ideas in the preceding paragraphs. To respond to something else is to fall right into an informal fallacy, whether it be a strawman argument, a red herring, a slippery slope, you name it.

Last time, we also saw some comments from neuroscientist David Eagleman. In fact, his comments on the differences between the genders gave the last lesson its name. Quite contrary to leftist concerns about justifying the status quo, Eagleman makes some radical recommendations for reforming the criminal justice system: neurally-tailored sentencing. Here's some Food for Thought...

 

 

So, there are some radical ideas for altering society given the latest neuroscience. Moreover, it looks like these topics themselves have been off-limits, shut down by good-hearted but misguided individuals. To be clear, it makes sense to be wary of some of these topics—to be a little guarded. But being so closed-minded as to not even consider some potential social benefits from, say, the neuroscience of intelligence is the epitome of not being a good critical thinker. And it looks like it's political ideology that got in the way of being able to accurately process information.

So much for the leftists. Time to go after the right-wing.

 

 


 

Do Stuff

  • Read from 460a-468e (p. 149-160) of Republic.

 


 

To be continued...

 

FYI

Suggested Reading: Wikipedia entry on the History of the Race and Intelligence Controversy

TL;DR: CrashCourse, Controversy of Intelligence

Supplemental Material—

Related Material—

Advanced Material—

For full lecture notes, suggested readings, and supplementary material, go to rcgphi.com.

 

 

 

Possibilia

 

 

It is not so much that we, using our brains, spin our yarns, as that our brains, using yarns, spin us.

~Daniel Dennett

Respond to the Argument

Over the next few lessons, we're going to look at methods for engaging in dynamic argument: back-and-forth argumentation in an effort to learn what the (likely) truth is about some subject. This process involves respecting your interlocutor (i.e., the person you're speaking with) in the hopes that you might collectively learn from each other. This is a process, by the way, that is sometimes called dialectic. At the same time, however, we will be looking at some ways not(!) to argue. The first half of each of the next few lessons, then, will cover methods for engaging in dialectic, while the second half will cover bad ways to reason—if you can even call it "reasoning."

 

Lyons and Ward

I'll be using Lyons and Ward's The New Critical Thinking for the next few lessons. Let me begin with two points that they make about dialectic (p. 327-328). First off, it looks like any kind of constructive argumentation is almost entirely absent from public discourse. As the authors say, things are looking "abysmal". As it turns out, there is evidence that liberals and conservatives will think they disagree with each other even when they don't really disagree—I'm not kidding (see Mason 2018). All this to say: even if you learn the methods that I'm going to outline here, I'm making no promises. It takes two to tango, and if the person you're having a conversation with doesn't play along and try to engage with your argument constructively, then nothing positive will arise from the exchange. Sorry.

Second, dialectic isn't always what is called for. Many of the exchanges we have with others on a daily basis are not really for the sake of jointly discovering the truth. It's just two people letting each other talk, basically. This is completely acceptable and serves as a good example of when dialectic is not what is called for. You can call this type of thing "polite conversation". Here's a little sample. Someone will mention something about, say, the governor. The person doing the talking expects the other person to say something like, "Yeah, totally" or "I know! Crazy, right?" Occasionally, the second person will even say something that they know about the governor. (Gasp!) And then they both say, "Cool", or high five, or whatever. Typical water-cooler conversation. That's completely okay. Not every conversation has to be deep. In fact, preferably, most conversations should not be deep. But(!) you should be prepared for when you can have some serious talk. And in that spirit, let's move on to how to engage in dynamic argumentation.

The First Golden Rule of Constructive Argumentation:
Respond to the Argument

 

This guy's not listening.

This is probably not news to you by this point: if you want to engage in productive dynamic argumentation, then you need to actually respond to what your interlocutor is saying. Even better, respond to their argument with objections tailored specifically to the points that they made. This is harder to do than you might imagine, for many reasons. First off, during dynamic argumentation, tempers may flare and you might get emotional. Moreover, you have higher standards for views that you don't agree with than for views you do agree with (Nisbett and Ross 1980). The points that your opponent makes might not even seem like good points to you, and so you'll dismiss them without really responding to them (or even processing them). So, slow down and really think about how to object. Lyons and Ward (p. 329-334) give some examples of how this might go. Consider this exchange:

Ted: The best way to improve the economy is to reduce taxes on the wealthy. The less they're taxed, the more money they will have to invest, and the investments of the rich are the primary drivers of the economy.
Caitlyn: No way should we reduce taxes on the wealthy. They're rich enough already.

In this exchange, we see Ted serving up an argument (albeit not in our preferred standard form). His conclusion is, "The best way to improve the economy is to reduce taxes on the wealthy." Caitlyn, however, seems to not have engaged directly with the argument. Maybe you agree with the general spirit of her position. But we have to be clear that she did not say anything that directly undermined Ted's argument. Maybe she was working towards a moral argument (about the rich having more than they need already). But this would be a case of not responding to the argument. Instead, it would be avoiding the argument and giving another argument in return.

Here's another way the conversation could've gone:

Ted: The best way to improve the economy is to reduce taxes on the wealthy. The less they're taxed, the more money they will have to invest, and the investments of the rich are the primary drivers of the economy.
Caitlyn: That won't help the economy. There have been untaxed rich people throughout most of human history and most people were dirt poor peasants scrabbling to make a living.

Caitlyn is doing better in this exchange. She is at the very least explicitly disagreeing with Ted's conclusion. This is a good start. But(!) she is not undermining any of the premises that led to that conclusion. She is once again making a wholly new argument—this time using historical data. This might be a good argument, just like the moral argument from earlier might've been a good one. But she's not first engaging with Ted's argument. Lyons and Ward stress that this is a very important point. You must first directly undermine your opponent's line of reasoning and then(!) propose an argument for your position.

Here's one more way the exchange could've gone:

Ted: The best way to improve the economy is to reduce taxes on the wealthy. The less they're taxed, the more money they will have to invest, and the investments of the rich are the primary drivers of the economy.
Caitlyn: That won't help the economy. You know, rich people invest their money where they can get the most profit, and that is not in the domestic economy. Rich investors do nothing for our economy.

Now we're talking! She directly challenged the support that Ted gave for his conclusion. Ted makes the bold claim that investments from the rich are the primary drivers of the economy—emphasis added. To this, Caitlyn responded that rich investors are actually rather unlikely to invest in the domestic economy, opting instead to park their profits in an offshore bank account, or invest in emerging economies, or build a factory in a country where they can get cheap labor, or whatever. Now, whether these claims are true or not is beside the point right now. Of course, they are important. But the main message here is about having constructive exchanges. And with regards to that, Caitlyn did great. The authors summarize the lesson.

“Why address each other's arguments? It's pretty straightforward when you think about it. The alternative is confusion: a back-and-forth between people who disagree with each other but who never critically discuss each other's reasons for their conflicting beliefs: each person is a moving target; never sticking to one line of reasoning long enough for anything to be clarified” (Lyons and Ward 2018: 332).

Well said, fellas. And speaking of moving targets...

 

 

Argument Extraction

 

 

 

On what is possible

Why conspiracies

There are various reasons why we should focus on conspiracy theories in a critical thinking course. First and foremost in my mind is the false notion that distrusting authorities as a matter of course is a form of critical thinking. (This idea was put forward to me on various occasions, including one very long plane ride circa 2017.) In any case, let me give you my views on this here and defend myself below. Put bluntly, blanket disbelief of any claims made by authorities is not critical thinking. Moreover, being a contrarian and just disbelieving everything you hear is also not critical thinking. Critical thinking is having a method for separating what's likely to be true from what's not likely to be true; it's paying attention to one's own thought process (something known as meta-cognition) and how it might affect the processing of information. It's also, I might add, keeping up-to-date with the latest science.1

Having said that, it's also not critical thinking to deny the truth of every conspiracy theory as a matter of course, since sometimes conspiracies are real. That is to say, sometimes individuals do secretly collude so as to bring about some outcome. (Stay tuned.) So, not only do we need a method for distinguishing between what's likely true and what's likely not true, but we also need to steer clear of what Juha Räikkä and Lee Basham (2019) call conspiracy theory phobia, the instinctive pathologizing of or hostility towards conspiracy theories. This attitude is just as epistemically unjustified as the instinct to readily accept any and all conspiracy theories. Put differently, neither accepting conspiracy theories blindly nor denying them blindly is critical thinking. As the authors summarize: “Both ends of the spectrum are irrational in the sense that they have a tendency to accept or reject conclusions based on predispositions rather than evidence” (p. 181; emphasis added). In effect, immediate dismissal of the views of a conspiracy theorist amounts to something like an ad hominem fallacy: an attack on the arguer rather than the argument.

 

The Capitol Hill Putsch, 6 January 2021.

There are other reasons, of course, why a critical thinker should pay attention to conspiracy theories (and those who believe them). In particular, we've seen some pretty dramatic effects of belief in conspiracy theories, such as what I call the Capitol Hill Putsch. As I see it, people who believed (without evidence, and with plenty of evidence to the contrary) in widespread voter fraud during the 2020 election decided that storming the Capitol was a good idea; some even carried zip-ties, presumably for taking hostages. I'm not sure what these individuals would've done had they actually taken a congressperson hostage, but I do know that, throughout the world and throughout history, conspiracies and conspiracy theories are ubiquitous. Uscinski, a political scientist and expert on conspiracy theories, explains the negative consequences of blindly believing conspiracy theories about, say, the safety of genetically modified foods (GMOs):

“In many parts of the world, conspiracy theories about genetically modified foods (GMOs) have driven detrimental policies. In Europe, conspiracy theories and financial interests have succeeded in convincing governments to enact anti-GMO importation policies. This has been a boon to local producers, but it has inhibited producers, particularly in Africa, who could increase crop yields significantly from the use of modified seeds. There is a cost to health and lives because of these policies. According to recent research, had Kenya adopted GM corn in 2006, between ’440 and 4,000 lives could theoretically have been saved. Similarly, Uganda had the possibility in 2007 to introduce the black Sigatoka-resistant banana, thereby potentially saving between 500 and 5,500 lives over the past decade.’ Prior to this, Africa put millions of lives at risk because it would not accept GMO crop donations from the United States, even though millions of people were facing extreme hunger” (Uscinski 2019: 11).

GMOs are, by the way, perfectly safe (Hollingworth et al. 2003). Despite their safety, however, many around the world—including some in the United States—distrust GMOs and believe in conspiracy theories about them. Conspiratorial thinking about other things is widespread as well. Medicine is a case in point. As many as 300,000 people have died in Africa due to conspiracy theories suggesting that medicines for preventing AIDS are part of a global conspiracy to reduce the continent's population (Uscinski 2019: 11). Here's an American example. Portland residents blocked a measure that would have added fluoride to their water supply, believing (falsely) that fluoride is a method of pacifying the population to make them more controllable (Uscinski 2019: 11-12). In fact, some federal-level policies that are still with us today have belief in conspiracy theories lurking in their origins.2

“President Richard Nixon began the government’s war against drug users, dealers, and smugglers, partially for political reasons and partially due to his conspiracy beliefs. Nixon believed that African Americans, Jews, college students, and antiwar protesters were conspiring against the country (and against him!). So, he decided to conspire against them by starting a bloody and decades-long drug war. The cost is ‘about $40 billion a year at home and abroad… [it] has imprisoned currently up to 400,000 people on drug-related charges—the vast majority of them nonviolent offenders” (Uscinski 2019: 13).

So, conspiratorial thinking is widespread. Even presidents engage in it. Most concerningly, those most likely to believe in conspiracy theories are also more likely to support violence against the government while simultaneously opposing gun control. In fact, conspiratorial thinkers even accept that it is justified to engage in conspiracies themselves to achieve their goals (Uscinski 2019: 13). In other words, they believe in conspiracies without evidence and at the same time feel that it's okay to engage in conspiracies themselves, if the cause is worth it. The Capitol Hill Putsch is a perfect example: this is a recipe for disaster.

Let's narrow the scope a bit and focus only on the United States. Kathryn Olmsted (2019) gives some highlights.

 

Uscinski (2019)

  1. Most Americans believe in conspiracies. 55% believe in at least one; 27% believe in two; 12% believe in three or more.
  2. Both sides of the political aisle believe in conspiracies. Partisans are typically more likely to believe the other side is conspiring, obviously. So, if you are a Republican, you tend to believe Democrats are conspiring and up to no good. You are also more likely to believe in conspiracies if your side just recently lost an election, as happened in 2020. (Note that Olmsted published her work before the 2020 election and thus before the Capitol Hill Putsch.)
  3. Wording matters. Respondents to surveys gauging conspiratorial thinking are more likely to say they believe in a conspiracy if the question includes partisan clues—surprise, surprise. In other words, if you blatantly give the subject political cues, as in explicitly mentioning the other party(!), then they are more likely to admit they believe in conspiracy theories. Respondents are also more likely to admit believing a conspiracy if they are given a scale of belief, rather than a “yes” or “no” option. So, if given an option to say they "partially agree", subjects who would've answered "no" in a yes-or-no format will in fact admit to believing in conspiracy theories.
  4. Conspiratorial thinking is very much like an ideology. This is to say that if you believe in one conspiracy, you are likely to believe in multiple conspiracies, even if they are inconsistent(!). For example, if you believe that Princess Diana's death was part of a murder plot (as opposed to a car accident), then you are also more likely to believe she is still alive(!). This shows that some have a clear preference for conspiratorial thinking.
  5. Conspiratorial thinking is politically disruptive. Partisans are less likely to accept the outcomes of an election. They are more likely to oppose policies put forward by "the other side", thinking them part of a plot. And they are more likely to engage in political violence.

Clearly, critical thinkers need to take conspiratorial thinking seriously.

Types of conspiracies

Not all conspiracies are the same. In Lecture 2 of his Conspiracies and Conspiracy Theories, science writer, historian of science, and founder of The Skeptics Society Michael Shermer discusses some classification schemes for conspiracy theories. For example, you might lump them by alleged perpetrator: conspiracies supposedly perpetrated by the government, non-whites, the Jews, etc. However, we can agree with Shermer that this classification scheme is not very helpful.

Per Shermer, a more helpful scheme is the one proposed by Jesse Walker in his 2013 The United States of Paranoia. Walker classifies conspiracy theories into five categories:

  1. Enemies without: These are foreign agents. Think Russia meddling in US elections, the USA plotting the overthrow of Jacobo Arbenz in Guatemala (see the lesson titled The One Great Thing (Pt. I)), etc.
  2. Enemies within: These are typically referred to as fifth columnists. Think US citizens trying to overthrow their own government.
  3. Enemy above: This is the elite plotting control of the general population. This is the type of conspiratorial thinking that Chomsky has been accused of (see the lesson titled A Certain Sort of Story).
  4. Enemy below: This kind of thinking occurs when the elite think the poor are plotting against them (see Footnote 5 in the lesson titled Fragility).
  5. Benevolent: This is admittedly a rare kind of conspiracy theory. This is when someone believes that there is someone conspiring for the greater good.

Shermer also mentions the classification scheme given in A Culture of Conspiracy by political scientist Michael Barkun (2003). Barkun classifies conspiracy theories into three categories:

  1. Events (e.g., the JFK assassination)
  2. Systemic (e.g., conspiracies involving social control, political power, and even world domination)
  3. Super-conspiracies (e.g., conspiracies involving a single individual or force that controls everything)

To these classification schemes, I'd like to add the concept of cocked up conspiracies. These are official state operations that are bungled so badly that they look like a secret plot. In other words, these are government policies that were executed so poorly that it truly looks like they had to have been planned to happen the way they did, i.e., like a conspiracy. I, unfortunately, cannot take credit for this wonderful concept. The credit goes to the journalist Ioan Grillo, who labels two operations during the War on Drugs as cocked up conspiracies. Here's the story.

 

Grillo (2021)

In chapter 8 of his 2021 Blood Gun Money, Grillo discusses Operation Wide Receiver and Operation Fast and Furious, two operations that were bungled so badly and led to so few arrests (while simultaneously giving quality weapons to Mexican cartels!) that some civilian assets actually speculated that it was all a secret plot to convince the American public that assault rifles should be banned. The general idea was this. You first "walk the guns", that is, sell the guns to lower-level members of the cartels, and then track them over time to try to get at cartel members higher up in the chain of command. However, to say that it didn't go as planned is an understatement. First and foremost, very few indictments came of the operations. Shockingly, most weapons sold to cartel members have never been recovered. Make sure you understand that. The US government sold high-grade weapons to drug cartels and then lost track of them. To further show how bungled the strategy was, consider this. The Fast and Furious operation in particular had an instance where a known cartel member bought 700 weapons(!) in Phoenix for half a million dollars(!) without being stopped when crossing the border(!). Who thought that was a good idea? All in all, Grillo comments that it’s hard to track what really happened in Fast and Furious because there are “overlapping fuck ups going in multiple directions.”

Here's another example of a cocked up conspiracy. In chapter 2 of Chasing the Scream, journalist Johann Hari discusses the story of Henry Smith Williams. Here's a little context. In the early 20th century, the Harrison Act was signed into law. This law banned the sale of opiates, but it had a loophole in it that allowed doctors to prescribe opiates to addicts as part of their recovery (so as to avoid something called dopesickness). However, Harry Anslinger, the first commissioner of the Federal Bureau of Narcotics, took it upon himself to wage a personal war on drugs, using the institution he had been put in charge of like a weapon. Among other things, Anslinger targeted doctors who were protected by the Harrison Act and arrested over 20,000 of them, some of whom faced jail time (even though they had broken no laws and had just tried to help their patients curb addiction). These arrests, by the way, included Henry Smith Williams’ doctor brother, Edward Williams. In any case, the dynamics of the drug criminalization imposed by Anslinger led to two crime waves. First, addicts sought their drugs through non-legal channels, thereby giving rise to a black market controlled by the mafia. Second, the mafia price gouged (i.e., dramatically raised the price), and so addicts had to resort to stealing in order to have enough money for their fix. This all seemed to Smith Williams to implicate Anslinger as a stooge of the mafia. In other words, Anslinger's war on drugs had such negative consequences, effectively empowering the mafia and causing a crime wave, that Williams thought it had to be the case that Anslinger actually worked for the mafia(!)—a top-notch conspiracy. Needless to say, Henry Smith Williams was ultimately wrong. Per Hari, if Anslinger had any links to the mafia, they surely would’ve been known by now—but none were ever found. Now that's a cocked up conspiracy!

So those are some reasons for taking conspiracy theories seriously and some ways of classifying conspiracies. But what gives rise to conspiratorial thinking and how do we prevent it? Stay tuned.

 

 


 

Do Stuff

  • Read from 468a-480a (p. 159-175) of Republic.

 


 

To be continued...

 

FYI

Suggested Reading: Internet Encyclopedia of Philosophy Entry on Conspiracy Theories

TL;DR: The School of Life, How to Resist Conspiracy Theories

Supplemental Material—

Related Material—

Advanced Material—

 

Footnotes

1. It was actually Karl Popper, whose theory of falsification we met in ...for the Stronger, who coined the term “conspiracy theory of society.” He did so in his Conjectures and Refutations. Popper believed conspiracy theories were inherently unscientific, since they generally couldn't be falsified, although modern researchers are challenging this claim, since some conspiracy theories have turned out to be true (see chapter 2 of Uscinski 2019). Some modern researchers recommend assessing each theory on a case-by-case basis (ibid., p. 39).

2. Here's a fascinating example of conspiratorial thinking having a massive impact on history. In chapter 4 of The Red Flag, Priestland gives an account of Stalinism (the name for the governing policies which were implemented in the Soviet Union from 1927 to 1953 by Joseph Stalin) and how it came about. In particular, Stalin returned Russia from Lenin’s more pragmatic type of Marxism to a revolutionary, military-style Marxism. Several factors led to the rise and stabilization of Stalinism, including (but not limited to): a Bolshevik party culture of conspiracy(!), the uncertainty created by civil war, disappointment with Lenin’s New Economic Policy, and the constant threat of foreign invasion. Even if all that conspiratorial thinking did was influence the type of government that the USA faced off against during the Cold War, we would have to study it (just to understand that massive conflict). Of course, conspiratorial thinking has done much more than that.

 

 

The Myth of the Magic Bullet

 

 

Be careful. People like to be told what they already know. Remember that. They get uncomfortable when you tell them new things. New things… well, new things aren’t what they expect. They like to know that, say, a dog will bite a man. That is what dogs do. They don’t want to know that a man bites a dog, because the world is not supposed to happen like that. In short, what people think they want is news, but what they really crave is olds… Not news but olds, telling people that what they think they already know is true.

~Terry Pratchett

Track the Burden of Proof

Last time we began to look at some ways for engaging in productive, back-and-forth argumentation with someone else so as to arrive at what's (hopefully) likely to be true. We're calling this dynamic argumentation (or dialectic). The first golden rule of dynamic argumentation was to always respond to the argument of the person you're speaking with. Don't just give an argument for your position. You must first respond directly to their point. Otherwise, nothing will come of the exchange but frustration and hostility.

The second golden rule is this: track the burden of proof. Tracking the burden of proof has to do with keeping track of who made which claims and making sure that they defend those claims. Before moving forward, let's take a look at an example to make this clearer.

Ted: Decisive people typically have the attributes to be a great president. Zephyr Teachout is very decisive. Therefore, she would make a great president.

 

Ted's back. In the argument above, Ted is the claimant, the person claiming to know something. In particular, he is claiming that decisive people typically have the attributes to be a great president, that Zephyr Teachout is very decisive, and that Zephyr Teachout would make a great president. You, then, are the respondent. Per Lyons and Ward (2018: 334-337), your role as a respondent is not to prove that decisive people typically don't have the attributes to be a great president, or that Zephyr Teachout isn't very decisive, or that Zephyr Teachout wouldn't make a great president. In the words of Lyons and Ward, "Ted's the one making the big claims; he's assumed the burden of proof" (335). Put simply, Ted offered the argument. It's up to him to give evidence for the claims (i.e., premises) on which his conclusion rests. That's what it means for Ted to have the burden of proof.

Noting that the burden of proof is on the claimant is not a minor point. In fact, if you unnecessarily assume the burden of proof, and thus have to prove the claimant's premises are false, then you make yourself vulnerable to losing the argument (ibid.).

Having said that, the point here is not to win arguments but to find the truth. Tracking the burden of proof is the best way to do that. Of course, if Ted doesn't have good reasons for believing in his premises, then his argument goes nowhere. If that's the case, you first make that clear and then proceed to give your own argument for your conclusion—hopefully with premises for which you have evidence. That's dialectic done right.
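If it helps to think of the rule as bookkeeping, here's a minimal sketch of a "burden ledger". To be clear, this is my own illustration (the class name and its fields are invented), not a method from Lyons and Ward: the point is just that claims go on the claimant's tab, and the respondent's job is to read the tab, not to pay it.

```python
# A toy ledger for "tracking the burden of proof". Whoever asserts a claim
# assumes the burden of defending it; the respondent just keeps the books.

class BurdenLedger:
    def __init__(self):
        self.claims = []  # each entry: {"claimant", "claim", "defended"}

    def assert_claim(self, claimant, claim):
        """Record a claim; the claimant now owes evidence for it."""
        self.claims.append({"claimant": claimant, "claim": claim, "defended": False})

    def defend(self, claim, evidence):
        """Mark a claim as defended if evidence was actually offered."""
        for record in self.claims:
            if record["claim"] == claim:
                record["defended"] = bool(evidence)

    def undefended(self):
        """Claims the claimant still owes us evidence for."""
        return [r for r in self.claims if not r["defended"]]

ledger = BurdenLedger()
ledger.assert_claim("Ted", "decisive people typically make great presidents")
ledger.assert_claim("Ted", "Zephyr Teachout is very decisive")
# As respondent, you don't add counter-claims of your own to prove; you read off:
for record in ledger.undefended():
    print(f'{record["claimant"]} still owes evidence for: {record["claim"]}')
```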

Let's take a look at an example from Lyons and Ward now:

Ted: Scientific creationism is entirely consistent with the data provided by the fossil record. So, it deserves to be taken seriously as a legitimate scientific theory.

Lyons and Ward

Lyons and Ward go over two possible responses: a bad one and a good one. The bad idea is to give an account of how science really works. For example, you might argue that real science has to do with only making falsifiable claims, a notion developed by Karl Popper that we learned about in ...for the Stronger. The good idea is to press the claimant on whether mere consistency with the data is enough for something to count as science. In other words, ask the claimant for evidence that his second premise is true. If you're anything like me, your intuition is to go with the bad one. (That's what I've actually done in real-life debates.) But let me show you why that's a bad idea.

Attempting to give your own account of how science works is a losing battle. Many have tried and have failed, e.g., Popper (see the lesson titled ...for the Stronger). Science is a really complex institution with lots of moving parts, several overlapping and conflicting methodologies, and tons of specialization. Characterizing how it works is a highly non-trivial task. Don't pretend to know how it really works—please. No one does—or at least, as of this writing, there is no generally accepted theory about how science works (but see Collins 2009 for an interesting theory in the sociology of science).

That leaves us with putting the burden of proof on the person who deserves it. If the claimant (Ted) wants to argue that mere consistency with the data is enough for something to count as science, then it's up to him to defend the claim. By the way, this is a heavy burden. This is a good time to remind you of what logical consistency means: two statements are logically consistent if it is possible for both to be true at the same time. Put bluntly, it just means the statements don't contradict each other (or imply a contradiction). So now, when you think about it, tons of things are consistent with scientific data but seem not to count as science. For example, I have a theory about how to assemble the best peanut butter and jelly sandwiches. So far as I know, if you were to write out all the steps to my delicious recipe, at no point would you contradict any of the fundamental laws of physics, any of the top theories in the mind sciences or in evolutionary theory, or any of the findings in biochemistry, linguistics, and complexity science. In other words, my recipe is consistent with the data. Doesn't that mean it counts as a science? Of course not. That's nonsense. The claimant appears to have made a false claim. But still, let them try to defend it. Odds are they'll fail.

“Really? Mere consistency with the data is enough for something to count as science? So, the hypothesis that the universe was created by the Flying Spaghetti Monster is a legitimate scientific theory, right up there with quantum mechanics and relativity theory? It is consistent with the data. What data do we have that entails that the universe was not created by such a thing? None. As we've seen, science is complex... [and] it is clear that merely telling tall stories that are compatible with the data doesn't cut it” (Lyons and Ward 2018: 336; emphasis in original).
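To see just how weak a constraint bare consistency is, here's a minimal sketch (my own illustration, not anything from Lyons and Ward) that checks a set of propositional sentences for consistency by brute-force search for a satisfying truth assignment. If even one assignment makes all the sentences true at once, the set is consistent, no matter how false the sentences actually are.

```python
from itertools import product

def consistent(sentences, variables):
    """Return True iff some truth assignment makes every sentence true.

    Each sentence is a function from an assignment (a dict mapping a
    variable name to True/False) to a bool. We brute-force all 2**n
    assignments, so this only scales to a handful of variables --
    plenty for illustration.
    """
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(sentence(assignment) for sentence in sentences):
            return True  # found a model: the set is consistent
    return False

# Two sentences can be consistent even though both are false in fact:
# p = "the Moon is made of cheese", q = "the Moon orbits Jupiter".
print(consistent([lambda a: a["p"], lambda a: a["q"]], ["p", "q"]))  # True

# But no assignment makes both p and not-p true:
print(consistent([lambda a: a["p"], lambda a: not a["p"]], ["p"]))   # False
```

Notice that consistency comes cheap: the checker happily blesses any set of sentences that merely avoids contradiction, which is exactly why "consistent with the data" can't be the bar for counting as science.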

 

 

 

On What is Possible, Continued

What gives rise to conspiratorial thinking

Uscinski (2019)

In their chapter of Conspiracy Theories and the People Who Believe Them, Butter and Knight (2019) give a history of the research into conspiracy theories. The authors argue against an early trend in conspiracy theory research—the tendency to see all those who believe in conspiracy theories as necessarily paranoid. This pathologizing approach, intuitive as it may be to some, was not very helpful according to the authors. This is because this attitude towards conspiratorial thinking marginalized conspiracies and conspiracy theorists. However, as the history of conspiracy theories shows, conspiratorial thinking is not a marginal phenomenon; rather, it is widespread (see the lesson titled Possibilia). More to the point, belief in conspiracies is not limited to those that can be easily characterized as paranoid. The authors then report that this pathologizing paradigm of research has (thankfully) been challenged since the 1990s, and so the authors move to review some more recent experimental studies that shed light on why someone might believe in a conspiracy theory. Here is what several of these studies have in common. It appears that people who have been experimentally induced into experiencing emotional uncertainty or a loss of control are more likely to draw on conspiratorial interpretations of events (see also Whitson et al. 2015). The authors conclude that this type of research, which is more sensitive to cultural context and is more methodologically sound in addition to being more reflective of who actually engages in conspiratorial thinking, shows great promise for conspiracy theory research.

So feelings of uncertainty and loss of control have something to do with it. Is that all? In Lecture 3 of Shermer (2019) we are given details about the experimental literature that Butter and Knight think shows so much promise. In general, Shermer makes the case that those who engage in conspiratorial thinking do so for various reasons; i.e., there is no magic bullet, no single cause for belief in conspiracy theories (and hence no single solution for ending rampant conspiratorial thinking). Here are some of the contributing factors.

 

Feeling lack of control makes us more conspiracy-minded

Person feeling lack of control

Citing Whitson and Galinsky's 2008 Lacking Control Increases Illusory Pattern Perception (in which people were made to feel a lack of control via a doctored task), Shermer first makes the case that those who have suffered a recent setback (financial, personal, or otherwise), along with those who are naturally disposed to be distrusting and/or paranoid, are prone to conspiratorial thinking.

An example will help. Perhaps you were asked to perform some task at work that you feel is above and beyond what other people in your position are asked to do. In essence, you feel like you are working more than the others for the same pay and with no real justification. This, of course, is a type of setback. It is financial and personal. You feel that you are working more for the same pay as everyone else, and you feel personally slighted—like they have something against you in particular. And so, it is under these conditions that you might start thinking in conspiratorial ways. Perhaps the boss has a crush on the other person in your position and doesn't want to make him (or her) do any extra work, the result being that you have to do the extra work. Or maybe they are trying to fire you, and they're looking for a reason to do so. Maybe if they overwork you, you'll snap and go off on someone. Then they can fire you. Or maybe... You get the picture. You can start dreaming up all kinds of scenarios. Of course, all else being equal, you typically will have no good evidence for believing any of these theories is actually true. But(!), given your emotional disposition, your feeling of loss of control, you are vulnerable to entertaining all kinds of crazy ideas.

 

The way our brains incorporate new beliefs

A web of belief
A web of belief.

Here's another factor that contributes to conspiratorial thinking: epistemic cognition. Epistemic cognition is a mental process that involves the assessment of what we know, how we know what we think we know, what we doubt and why, etc. There are various theories in this subfield of Cognitive Science, but Shermer cites one in particular. Shermer's preferred theory is called global coherence. It posits that our beliefs are like a web that coheres or fits together. This web of beliefs is held together by the beliefs we hold most firmly. All other beliefs are "attached" to these more strongly-held beliefs. Importantly, all incoming beliefs must comport with (or fit in with) the pre-existing beliefs. Moreover, if a belief is strong enough (or is somehow strengthened), all less firmly-held beliefs have to comport with that belief. So, with this context in place, Shermer explains how this relates to conspiracy theories. It might be the case that people already believe in nefarious actors. Perhaps they have this firmly-held belief in the existence of bad people because of personal experience (e.g., maybe they're the victim of a conspiracy) or because they're naturally disposed to being paranoid. Whatever the case may be, their less firmly-held beliefs are shifted/distorted to fit in with the strong beliefs. In other words, if belief in nefarious actors is strong enough, all incoming information will be forced into a web of beliefs where nefarious actors play a dominant role. It's not hard to see how this can lead to conspiracy theories. If you strongly believe that bad people exist and they're always plotting, then all incoming information will only reinforce that belief. Sooner or later, you'll "grow" a conspiracy theory web. In fact, it may even be the case that one might have inconsistent weakly-held beliefs, such as the belief that Princess Diana both was assassinated and faked her death. This is possible and consistent with global coherence theory as long as these inconsistent weakly-held beliefs comport with the strongly-held belief (that nefarious actors exist). A toy model of this updating dynamic is sketched below.
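To make the mechanics concrete, here is a toy sketch of that updating dynamic. To be clear: this is my own illustration, not Shermer's model or anything from the epistemic-cognition literature; the credence numbers and the update rule are invented for the example. The idea is simply that an incoming belief's credence gets dragged toward whatever the most firmly-held "anchor" belief requires.

```python
def incorporate(web, new_belief, raw_credence, fits_anchor):
    """Add a belief to the web, distorting its credence toward the anchor.

    web          -- dict mapping belief -> credence in [0, 1]
    new_belief   -- name of the incoming belief
    raw_credence -- credence the evidence alone would warrant
    fits_anchor  -- True if the belief comports with the strongest belief
    """
    anchor_strength = max(web.values())  # the most firmly-held belief dominates
    if fits_anchor:
        # pulled up toward 1, in proportion to the anchor's strength
        credence = raw_credence + anchor_strength * (1.0 - raw_credence)
    else:
        # pulled down toward 0
        credence = raw_credence * (1.0 - anchor_strength)
    web[new_belief] = credence
    return web

web = {"nefarious actors are always plotting": 0.95}
incorporate(web, "the official story of event X", raw_credence=0.7, fits_anchor=False)
incorporate(web, "event X was an inside job", raw_credence=0.2, fits_anchor=True)
print({belief: round(credence, 3) for belief, credence in web.items()})
# {'nefarious actors are always plotting': 0.95,
#  'the official story of event X': 0.035,
#  'event X was an inside job': 0.96}
```

Note that two mutually inconsistent weak beliefs ("Diana was assassinated", "Diana faked her death") would both be boosted here, so long as each fits the anchor, which is exactly the pattern described above.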

 

The alluring simplicity of conspiracy theories

Here's another factor that contributes to being susceptible to conspiratorial thinking: conspiracy theories might be believed partly because of their simplicity. The world, of course, is extremely complex, and we can only approximate an understanding of it with complex mathematical models. That's the business of the sciences. Conspiracy theories, on the other hand, are simpler and can simplify our cognitive efforts. As you recall from the lesson titled The Distance of the Planets, one prominent theory in the mind sciences is the view that our brain is in the business of making predictions; that's its evolutionary function (see Clark 2015). These predictions will hopefully keep you alive and mobile long enough to find a mate, reproduce, and pass on your genes. Of course, making predictions is tough, and whenever possible the brain will attempt to make its job easier by limiting its analysis to only the most important factors. Conspiracy theories, superficially at least, appear to do this. They explain basically everything in terms of a few nefarious actors. This simplicity, Shermer argues, is what makes them so easy to believe in.

 

The pressure of social identity

Shermer also reminds us, by the way, that some conspiracy theories are believed because they cohere with our social identity. For example, it's no secret that liberals are more likely to believe in conspiracies involving banks, while conservatives are more likely to believe in plots to enact gun control. Shermer adds that this might be the worst contributor to conspiracist tendencies since it usually implies the demonizing of “the other” (see also Douglas et al. 2017).

 

Innate paranoia

Although we earlier mentioned that the pathologizing paradigm of conspiracy theory research has fallen out of favor, this doesn't mean that paranoia isn't used in explaining some instances of belief in conspiracy theories. In Something’s Going on Here: Psychological Predictors of Belief in Conspiracy Theories, Joshua Hart and Molly Graether (2018) found that “conspiracy believers are relatively untrusting, ideologically eccentric, concerned about personal safety, and prone to perceiving agency in actions.” They were also more likely to be religious, female, and younger. They also scored higher in “dangerous world beliefs” and schizotypy.1

 

Shermer’s 10 components of belief in conspiracy theories

Overall, then, Shermer (Lecture 5) gives his list of what gives rise to belief in conspiracy theories. As you read this, note that the first half of the list concerns how beliefs are formed, while the second half concerns how believers maintain their beliefs.

Michael Shermer
Michael Shermer.
  1. The brain generates beliefs naturally, whereas skepticism is unnatural and even painful (see Harris et al. 2009). In general, there is no penalty for forming false beliefs. So, we tend to take the easy route and stick to our beliefs, rather than opting for the cognitively-demanding task of critical thinking.
  2. Authorities enable belief. There are numerous psychological studies, such as Milgram’s Obedience to Authority, that demonstrate that people are willing to take the word of authorities at face value. Even Plato knew this, since he believed that harmony in the ruling Guardian class would lead to harmony in the lower classes—the lower classes taking their lead from the elite. So, if we see an authority we trust spouting conspiracy theories, or perhaps just failing to denounce them, then we may end up believing them.
  3. Peers reinforce belief. Numerous social psychology experiments lend credence to this (e.g., Asch's conformity studies). In short, if others (in your in-group) say they believe in something, you're more likely to believe in it too.
  4. We’re more likely to believe people we like or respect. This is why conspiracy theorists play up similarities between the victims of the conspiracy (us) and the evildoers (the "other"). The conspiracy theorist, by putting you in the same camp with them (i.e., the victim camp), makes you feel closer to them. By default, this makes you more antagonistic to the "other", the ones perpetrating the conspiracy. Thus, you are more likely to believe in the conspiracy.2
  5. Beliefs are reinforced by payoffs, success, and happiness. This is why many conspiracy theories promise believers that they will be rewarded, either financially or spiritually. Alternatively, believers might feel empowered through secret knowledge. So, either they think they'll get some payoff, or they can feel superior to others by knowing something they don't know. Either way, belief in conspiracy is reinforced.
  6. Beliefs are reinforced by the confirmation bias. You know all about this by now. You selectively process information such that your pre-existing beliefs are reinforced. Everything you see is more evidence of the conspiracy, and your brain refuses to see the rest!
  7. Beliefs are reinforced by the optimism bias, our tendency to believe that we are less likely to experience a negative event than others, in particular in complex domains. So, just because we haven't directly suffered any negative outcomes from a conspiracy theory that we believe and supposedly affects a lot of people, that doesn't mean we have to stop believing in how widespread this conspiracy is. It's just that we're lucky! And, it's also the case that we're lucky enough to know "the truth", while others are being duped by the conspirators.
  8. Beliefs are reinforced by the self-justification bias, the tendency to rationalize our decisions post-hoc (i.e., after the fact) in order to justify the belief that what we did was in fact the ideal course of action. Once we commit to a belief, we tend to defend it by arguing that our belief has positive consequences for us. You might even say, "You can go on believing the official story like a sucker." But we feel that we're in the know. We know what's really going on.
  9. Beliefs are reinforced by the sunk cost fallacy, our tendency to stick with a plan of action (even though it might not lead to any benefit) just because we don't want to see our past investments (in the form of time and money) go to waste. Basically, it's this: if we’ve already invested in some particular theory, it’s hard to let it go.
  10. Beliefs are reinforced by the endowment effect, our tendency to overvalue something just because we own it (regardless of its objective market value). We value a conspiracy theory more than truth merely because it’s already our view.

 

Other factors

Although Shermer does not mention them, I'd like to add two more factors that contribute to one being vulnerable to conspiratorial thinking. In chapter 2 of Minds Make Societies, cognitive scientist Pascal Boyer considers how “junk information”, i.e., the low-quality information which is heavily trafficked in conspiracy-minded groups, cults, etc., gets passed around so readily. Part of it, Boyer argues, is that junk information is typically moralized or has a high-threat element attached to it. In other words, junk information is either given a moral dimension so that it seems more important, or it is made to seem like you're putting yourself at risk if you don't pay attention to it. Both of these features make it likely that junk information will bypass whatever skeptical vigilance we have and turn into a belief rattling around in our heads. Boyer explains:

“From this perspective, the moralization of other people’s behavior is an excellent instrument for social coordination, which is required for collective action. Roughly speaking, stating that someone’s behavior is morally repugnant creates consensus more easily than claiming that the behavior results from incompetence. The latter could invite discussions of evidence and performance, more likely to dilute consensus than to strengthen it. This would suggest that our commonsense story about moral panics may be misguided—or at least terribly incomplete. It is not, or not just, that people have beliefs about horrible misdeeds and deduce that they need to mobilize others to stop them. Another factor may be that many people intuitively—and, of course, unconsciously—select beliefs that will have that recruitment potential because of their moralizing content. So, the millennial cults with failed prophecies are only a limiting case of the more general phenomenon whereby the motivation to recruit is an important factor in people’s processing of their beliefs. That is to say, beliefs are pre-selected in an intuitive manner, and those that could not trigger recruitment are simply not considered intuitive and compelling” (Boyer 2018).

 

Boyer's Minds Make Societies

Put simply, we want others on our side. This will make it easier to coordinate collective action. To convince others to be on our side, you can either talk about the incompetence of your competitors or say they are moral monsters. Given the way our brains evolved, calling someone a moral monster is simply more persuasive. It bypasses any need for looking at evidence, and it moves right to the gut. It makes you feel like the person might be a threat. Thus, people side with you and not with the moral monster, whether or not they actually are a monster. Boyer goes further, though. He is saying that, unconsciously, you select the things that you are going to say such that they have as much oomph as possible. In other words, without knowing it, you generally say things with the most recruitment power; you say what's most likely to get others on your side. Moralized content and high-threat alerts tend to fit the bill pretty well. And this is why conspiracy theories are so alluring. They are highly-moralized, high-threat content. It may be junk information, but it is difficult to ignore junk information.

Last but not least... Interestingly, across the world, higher economic inequality, as measured by the Gini coefficient, is correlated with more belief in conspiracy theories. In other words, the more income and wealth inequality, the greater the belief in conspiracy theories (Drochon 2019).
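For reference, the Gini coefficient is just a summary statistic of dispersion: 0 means everyone has the same income, and values approaching 1 mean one person has nearly everything. Here's a minimal sketch of the standard mean-absolute-difference formula (my own illustration; Drochon's analysis, of course, uses published country-level figures rather than toy data like this).

```python
def gini(incomes):
    """Gini coefficient: the mean absolute difference between all pairs of
    incomes, normalized by twice the mean income. O(n^2), which is fine
    for illustration."""
    n = len(incomes)
    mean = sum(incomes) / n
    total_diff = sum(abs(x - y) for x in incomes for y in incomes)
    return total_diff / (2 * n * n * mean)

print(gini([10, 10, 10, 10]))  # 0.0  -- perfect equality
print(gini([1, 1, 1, 97]))     # 0.72 -- almost all income goes to one person
```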

What's to be done? Stay tuned.

 

 


 

Do Stuff

  • In a post of around 300 words, please address the following question: How serious a societal problem is belief in conspiracy theories: very serious, somewhat serious, or not serious at all? In your answer, be sure to give an argument for whatever your position is. For example, if you believe that conspiratorial thinking is a very serious problem, you probably believe a sizable percentage of the population is negatively affected by conspiratorial thinking. So, give an account of how conspiracy-mindedness is negatively affecting your society, as well as an estimate of how many people are being negatively affected. You may also detail a particular conspiracy theory that you believe is causing trouble. Alternatively, if you believe conspiratorial thinking is only a small problem, you might detail how only certain sectors of society are actually negatively affected by conspiracy theories, and give reasons for this position. And, of course, if you believe it's not a problem at all, you can argue that conspiratorial thinking is a harmless pastime that some people spend their time on—with very few people actually suffering any harm.
  • After you post your discussion response, be sure to provide a substantive, 100-word minimum reply to two of your classmates' posts. You may, for example, note similarities or differences in your viewpoints, or you may express disagreement with how your classmate estimated how many people are being negatively affected by conspiratorial thinking, or you may add details to a particular conspiracy theory that your classmate focused on, etc.

 


 

To be continued...

 

FYI

Suggested Reading: Internet Encyclopedia of Philosophy Entry on Conspiracy Theories (same as last time)

TL;DR: The School of Life, How to Resist Conspiracy Theories (same as last time)

Supplemental Material—

Related Material—

Advanced Material—

  • Reading: Stanford Encyclopedia of Philosophy Entry on Evidence

 

Footnotes

1. The link between paranoia and conspiracy-mindedness is very complicated. I'm taking the following discussion from Wood and Douglas (2019). Here's how I understand it. In general, as is also mentioned by Butter and Knight (2019), there was a general tendency in academic circles to pathologize conspiracy theorists—a tendency that is still around in laypeople today. In psychological jargon, they were guilty of the fundamental attribution error, essentially immediately attributing paranoia to conspiracy theorists, as opposed to considering the effect of their environment on them. Research in this field is now a little more nuanced, less pathologizing. Given this new and improved methodology, it does appear that there is a relationship between distrust and conspiracy-mindedness. However, Wood and Douglas argue that the evidence is merely correlational—and remember, correlation is not causation. Rather than framing conspiracy-mindedness research in terms of paranoia, the authors then make the case that conspiracy-mindedness should be understood as a form of disbelief, rather than as a belief. Consider, for example, that conspiracy theorists spend most of their time arguing for why the official story is implausible and comparatively little time explaining how the conspiracy came about. This is, by the way, sort of the opposite of die-hard conspiracy skeptics, those who reject all conspiracies outright. In fact, die-hard skeptics about conspiracy theories are actually less likely to know about or believe in actual conspiracies, like MK-ULTRA and instances of US interventionism, like the overthrows in Hawaii, Cuba, Puerto Rico, the Philippines, Nicaragua, Honduras, Iran, Guatemala, South Vietnam, Chile, Grenada, Panama, Afghanistan, and Iraq (see Kinzer 2007). (Either way, both those prone to conspiratorial thinking and die-hard conspiracy skeptics are failing to be critical thinkers—neither actually assesses the evidence.) In any case, on the issue of paranoia, the authors argue that conspiracy theories are not confined to the fringes of society; so, it's not just those who are paranoid. Moreover, it is possible to believe in just one conspiracy, complicating the narrative that belief in one conspiracy leads to belief in many conspiracies. For example, most Americans believe there was a conspiracy around the JFK assassination, but that doesn't necessarily lead to a belief that, say, 9/11 was an inside job. There are also partisan conspiracies, as in the case where Republicans are more likely to believe Obama was secretly born in Kenya and Democrats are more likely to believe the George W. Bush administration was involved with 9/11. So, obviously, you might believe just the conspiracy that has to do with the other party, and not necessarily start accepting all political conspiracy theories that you hear. (Coincidentally, conspiracies about vote-rigging are more likely to be believed if one's preferred party loses; see p. 249.) All in all, paranoia is a cause of conspiracy-mindedness for some people, but it's not the only cause. As we've seen, subjects who were made to feel a lack of control are more likely to believe in conspiracies, exhibit paranoid behaviors, and be mistrusting of others (p. 250n34). Moreover, people who feel powerless are more likely to believe in conspiracy theories, and believing in conspiracy theories is correlated with prejudice against high-power—but not low-power—groups (p. 252n41). In short, the story of the cause of conspiracy-mindedness is very complicated.

2. Shermer is definitely on to something here. It is a well-attested finding that we are more likely to be influenced by those we like than by those we dislike. In fact, this finding is so prevalent that Robert Cialdini places it as his third principle of influence in his best-selling book Influence: The Psychology of Persuasion. In a nutshell, according to Cialdini, you assent to those that you like. One study sheds light on this. First, for some context, Cialdini discusses how few Americans believe that evolution alone gave rise to humans in their present form—depressingly, only about a third. He then discusses how science communicators have clearly missed the lesson: you can’t just throw evidence at someone to update their beliefs if those beliefs were not arrived at through evidence in the first place. And so, Cialdini moves to a discussion of an experiment that was(!) successful in changing minds. In the experiment, researchers tested whether evolutionary theory would be more readily accepted if subjects believed that a well-liked person advocated the view. So, some subjects were made to believe that George Clooney (who else?!) was an advocate of evolutionary theory. And indeed they subsequently were more likely to accept evolutionary theory themselves—at least when contrasted with the control group (see chapter 3 of Cialdini 2021).

 

 

Apt Pupils

 

 

Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back.

~John Maynard Keynes

Argument Extraction

 


 

Perfecting Dialectic

The Third Golden Rule: Demand Overall Consistency

Suppose that you favor candidate A for president, while I favor candidate B. Presumably, we will both have a series of arguments for why our preferred candidate should win the presidency. But when many arguments (along with their premises) are introduced in favor of some candidate, a whole new problem arises: the question of consistency. In these cases, we have to make sure that the set of all of our premises is logically consistent. In other words, it can't be the case that I say one thing in my first argument for candidate B, and then the exact opposite in a second argument for candidate B. Moreover, it can't be that I see the fact that your candidate took campaign contributions from Wall Street as a sign that he is not qualified for the presidency, but ignore it if my candidate also took contributions from Wall Street. These are both forms of inconsistency, and they violate the third golden rule of dynamic argumentation: the demand for overall consistency (see Lyons and Ward 2018: 337-39).

An example will help clarify, but, as it turns out, we've actually already applied this principle. Recall this argument from last time:

Ted: Scientific creationism is entirely consistent with the data provided by the fossil record. So, it deserves to be taken seriously as a legitimate scientific theory.

 

Lyons and Ward

The response to this argument came by way of noting that the hypothesis that the universe was created by a Flying Spaghetti Monster is entirely consistent with the data we have. In other words, there is no evidence that it didn't happen in the way that Pastafarians say it did. If this is so, then Pastafarianism should also be taught in schools. Typically, someone who argues that scientific creationism should be taught in schools won't be too enthused about the prospect of Pastafarianism also being taught in schools. (That sounds like a guaranteed way of making children realize religious explanations are strange.) But(!), if the creationist is to be consistent, then he can't use consistency with scientific data to argue that scientific creationism should be taught in schools without also implying that Pastafarianism should be taught. That's the demand for overall consistency in action.

The Fourth Golden Rule: Be Charitable

Lastly, be charitable with your interlocutor. As you might've already realized, your arguments don't come out of your mind fully-formed. They take some time to structure and organize. When you are having a dialogue with someone and you are in the process of reconstructing their argument, don't assume that they're idiots, or have false information, or are trying to manipulate you, etc. Give them the benefit of the doubt. Try to reconstruct their argument in the strongest possible way. To do otherwise would be intellectually dishonest and would make you guilty of the strawman fallacy (Lyons and Ward 2018: 337-39).

 

 

 

On What is Possible, Continued

How to prevent conspiratorial thinking

One of the main reasons why conspiratorial thinking exists—and which we haven't discussed yet—is that conspiracies actually exist. Put bluntly, people think there are nefarious actors manipulating events in the world because there are some nefarious actors manipulating events in the world. Moreover, over time the evidence of these conspiracies is leaked or declassified, and the true nature of what happened comes to light. So, it is rational to believe in some conspiracies.

 

Mohammad Mosaddegh
Mohammad Mosaddegh,
1882-1967.

 

Those who are familiar with the history of the United States know that that country has had many covert operations that easily count as conspiracies. Shermer (2019, Lecture 10) discusses these real conspiracies:

  • the CIA-sponsored overthrow of Mohammad Mosaddegh, the democratically elected Prime Minister of Iran, with the aim of strengthening the monarchical rule of the Shah, 1953
  • the CIA-sponsored overthrow of Jacobo Árbenz, the democratically elected president of Guatemala, with the aim of installing the military dictatorship of Carlos Castillo Armas, the first in a series of U.S.-backed authoritarian rulers in Guatemala, 1954
  • CIA meddling with elections in Lebanon, the result being that fifty-three out of sixty-six parliamentarians supported the USA-backed and USA-friendly President Camille Chamoun, 1957
  • the CIA-enabled assassination of Patrice Lumumba, the pro-Soviet Congolese Prime Minister, 1961
  • US President Richard Nixon's destabilization campaign against Salvador Allende, the democratically elected President of Chile, which resulted in the 1973 military overthrow of Allende and which included CIA-sponsored economic disruptions such as:
    • the blocking of international loans,
    • the financing of opposition newspapers and labor unions that would work against Allende-friendly corporations,
    • support to an opposition political party,
    • the organizing and paying for a nationwide trucker strike, and
    • military training to the opposition
  • multiple assassination attempts on Fidel Castro as well as Operation Northwoods, a false-flag operation which called for the CIA (or some other U.S. government operatives) to both stage and actually commit acts of terrorism against American military and civilian targets to serve as pretext for the invasion of Cuba and the overthrow of Castro, 1962
    • Note: Operation Northwoods only came to light in a declassification in 1997.

 

Sidney Gottlieb
CIA chemist Sidney
Gottlieb, head of the
MK-ULTRA program.

 

In addition to giving us this laundry list of horror, Shermer also reminds us that the very nature of democratically-inclined nation-states makes it so that they take care of their own first. In other words, just like the Guardians will do anything to preserve their kallipolis, political elites in the USA (if we are giving them the benefit of the doubt) have done what they felt was needed to protect American interests—even if we see those attempts and interests as misguided and unjustified. I'm in no way attempting to justify the overthrows and anti-democratic actions of the USA. Instead, I'm remarking that these political elites have done what they felt they needed to, both for (what they thought was) the good of the country and to stay in power. This is a byproduct of democracies. In a democracy, if you want to stay in power (and presumably you think your being in power is what's best for the country), then you will do what you think needs to be done to be elected again, or to make your party seem the stronger one, etc. You either have to give the voters what they want or you have to at least make it seem like your party knows what it's doing.

Obviously, though, there's a problem here. If election pressures cause political elites to engage in conspiracies like the ones listed above, then sooner or later there will be what the CIA calls blowback, the unintended consequences and unwanted side-effects of covert operations. Terrorist attacks against Americans, such as 9/11, are an example of blowback. Moreover, it's not entirely clear that we want to live in a country, any country, that can perpetrate the acts detailed in the list above. As the Church Committee (formally known as the United States Senate Select Committee to Study Governmental Operations with Respect to Intelligence Activities) found in 1975, the capacities of the US government are very scary indeed. Among the revelations of the committee you can find Operation MK-ULTRA (involving the drugging and torture of unwitting US citizens as part of human experimentation on mind control), COINTELPRO (involving the surveillance and infiltration of American political and civil-rights organizations), the "Family Jewels" (internal CIA reports detailing illegal activities, including plots to assassinate foreign leaders), and Operation Mockingbird (a systematic propaganda campaign with domestic and foreign journalists operating as CIA assets and dozens of US news organizations providing cover for CIA activity). Here's a famous quote from Senator Frank Church himself:

“If this government ever becomes a tyranny, if a dictator ever took charge in this country, the technological capacity that the intelligence community has given the government would enable it to impose total tyranny, and there would be no way to fight back because the most careful effort to combine together in resistance to the government, no matter how privately it was done, is within the reach of the government to know.”

Of course, the technological capacities of the US government have not stopped developing since 1975. A case in point is the massive spying apparatus developed by the National Security Agency (NSA).

 

 

Unfortunately, there are plenty of other recent reasons for government mistrust. Shermer, for example, also discusses several recent revelations that were divulged on WikiLeaks.

 

Iran/Contra

 

I hate to go on but... There are also the Pentagon Papers (showing the existence of war-related lies and domestic spying, including on civil rights leaders), Watergate, the Iran-Contra scandal, and forced sterilization in the USA (of minorities and people with disabilities) during the first quarter of the 20th century. By the way, American corporations are also guilty of conspiring. Recall the long campaigns by Big Tobacco and Big Oil to sow doubt about the links between smoking and lung cancer and between greenhouse gas emissions and climate change, respectively.1

So, it appears that the existence of powerful entities that are capable of engaging in large-scale conspiracies (e.g., the USA, transnational corporations) is, at least in part, what enables conspiratorial thinking to take root in a population. After all, after looking at the lists above, it is irrational to believe that conspiracies never happen. The trick, however, is to discern the true conspiracies from the made-up junk. Well, here's a first suggestion as to what can be done: have transparency in government and business. This is where we can turn on Plato. Plato believed that the Guardians can lie for the benefit of the kallipolis. But it is clear that various political/economic elites in recent history have engaged in immoral and illegal secret activity, and it has not been for the benefit of the people. Thus, ensuring that the activities of the government and big corporations are more discoverable to anyone who cares to investigate them is one way to stop conspiratorial thinking from taking root.

 

Uscinki (2019)

 

I am not alone in making this suggestion. Bost (2019) reminds us that as the federal government grew during and after World War I, many conspiracy theorists began to see the American state as the biggest threat to their freedom. Bost adds that they were partially right. The growing executive branch did curtail Americans' civil liberties, as it did through the Espionage Act of 1917. Later on, the FBI spied on Americans, infiltrated dissident groups, and engaged in entrapment ploys. The CIA drugged American citizens (during MK-ULTRA), helped in the overthrow of democratically elected heads of state (e.g., in Guatemala and Chile), and attempted to assassinate Fidel Castro many times, as we've seen. The Watergate scandal and the subsequent investigation revealed to Americans both how Nixon had surveilled his political enemies, with funds raised through extortion and bribery, and how he felt that he had dirt on the CIA. In tapes recorded by Nixon and acquired by the FBI, Nixon—when discussing how to get the FBI to stop their investigation into him—states that he had been protecting the CIA from “one hell of a lot of things.” Although what Nixon covered for the CIA has not been made clear, the press did publicize other CIA abuses, such as those mentioned above. Later, the Church Committee revealed how the FBI had been spying on American citizens and had even tried to blackmail Martin Luther King, Jr. into committing suicide. Then came revelations of the Tuskegee experiments, where life-saving treatment was withheld from black citizens for decades, and Iran-Contra. Clearly, the power of the American state is influencing, at the very least, the kinds of conspiracy theories that are being circulated.

 

Procedural justice diagram

 

Of course, greater transparency can't be the only solution. An important shortcoming of this strategy is that there is necessarily a limit to what can be made transparent. Matters of national security and trade secrets obviously can't be made discoverable. Any political entity needs to keep certain matters secret, although perhaps there should be some oversight over what gets classified—once again going against Plato. In any case, here are two other suggestions.

Van Prooijen (2019) considers some ways to reduce conspiracy-mindedness. Reviewing studies that show that a sense of loss of control increases the likelihood that one will engage in conspiratorial thinking, Van Prooijen suggests empowering citizens in multiple ways. For example, since it appears that higher education is correlated with higher cognitive complexity, well-educated citizens ought to be less likely to believe in simple-minded conspiracy theories. Thus, he recommends that we encourage higher education and make it easily accessible. Van Prooijen ultimately gives his strongest support, however, to infusing civic actions with procedural justice (see the FYI section), always ensuring that the citizenry's voice is heard. This, he believes, is the best method for suppressing conspiracy-mindedness.2

Here's the second suggestion—short and to the point. Belief in conspiracy theories can be reduced via having subjects perform a task that increases analytic thinking (Swami et al. 2014). In other words, classes like this one might help;)

 

 


 

Do Stuff

  • Read from 484a-492a (p. 176-185) of Republic.

 


 

Executive Summary

  • The Golden Rules of Dynamic Argumentation are: Respond to the argument, Track the burden of proof, Demand overall consistency, and Be charitable.

  • Conspiratorial thinking has been constant throughout the history of the United States; it appears to impede rational information processing and it is politically disruptive.

  • There are various factors that might make one prone to conspiratorial thinking, such as feeling a lack of control, the alluring simplicity of conspiracy theories, the pressure of social identity, and authority figures who enable conspiratorial belief.

  • Some of the ways to minimize conspiratorial thinking might be to increase transparency in government, to ensure that there are ways for the political voice of the citizenry to be heard (such as through procedural justice), and to expand access to higher education and resources for analytic thinking.

 


 

FYI

Suggested Reading: Internet Encyclopedia of Philosophy Entry on Conspiracy Theories (same as last time)

TL;DR: The School of Life, How to Resist Conspiracy Theories (same as last time)

Supplemental Material—

Related Material—

Advanced Material—

 

Footnotes

1. On the issue of the links between smoking and lung cancer as well as greenhouse gas emissions and climate change, Shermer (2019, lecture 12) points out that sowing doubt is easy. This is because science, by its very nature, produces findings that are provisional—they are meant to be updated with more information. But this nuanced and sophisticated approach to knowledge usually goes over the heads of most non-scientists.

2. Keeley (2019) juxtaposes conspiratorial explanations with scientific and religious explanations. Observing that conspiratorial explanations often begin with an anomaly in the official story, Keeley notes that this is how various scientific revolutions originated (e.g., Newtonian mechanics, Semmelweis and germ theory, etc.). Another aspect of scientific explanations, and one which religious explanations tend to lack, is the assumption of naturalism. However, it is not clear that conspiratorial explanations violate the assumption of naturalism. Conspiracy theories that include aliens (or super-advanced technology, etc.) are technically non-naturalist (at least in practice), Keeley argues, since they cannot be studied using empirical methods. Keeley closes with a discussion of falsification. Falsification is complicated in conspiracies because real conspiracies do have intelligent agents actively trying to divert the inquiry (e.g., Nixon during Watergate, North during Iran-Contra). Nonetheless, evidence falsifying a theory should not also be taken as evidence proving the conspiracy, as might be argued by a conspiracy theorist.

 

 

In Silico

 

 

Descartes was mistaken. It’s not so much, “I think, therefore I am.”
More, “We are, therefore I think.”

~Kevin Dutton

Coming around the bend

In today's reading, Socrates and friends discuss various threats to the pursuit of truth. The one we will focus on in this lesson is the power of the crowd. In particular, we will look at the power of social identity and how it can determine your beliefs. Put differently, we're going to look at cases where social identity appears to determine one's beliefs (without them realizing it). Just like in The Family and The Distance of the Planets, our case study will involve a political affiliation. In those earlier lessons, the political affiliation was a far-left ideology. In the next two lessons, the example will come from the right wing of the USA: American conservatism. Let's get started.

 

Smith
Adam Smith (1723-1790).

Just like far-left ideologies render one incapable of even entertaining certain ideas—like genetic influence on, say, intelligence (and what can be done about it)—far-right ideologies do the same for other ideas. I will make my case backwards, so to speak. I will first give you the conclusion of my argument, i.e., what ideas far-right conservatives can't seem to begin to process, and only after stating my conclusion will I give my evidence.

So, here's the first thing that far-right ideologies might render one incapable of processing in an even-handed way: the potential of the government to play an important role in market systems. We'll refer to the denial of this potential as free-market fundamentalism. Just like Marxist tendencies are only exhibited by a small minority of the left wing in the USA, free-market fundamentalism is not representative of the entire right wing. In fact, for many conservatives, their allegiance to the right wing comes from their views on social issues, such as their opposition to abortion and their view that marriage is a stabilizing institution in society. Having said that, it is undoubtedly the case that some conservatives are free-market fundamentalists—and perhaps there are even more of these than there are Marxist leftists.

What beliefs does free-market fundamentalism entail? Typically, fundamentalists of this stripe believe (a) that unregulated capitalist policies can solve many if not all social and economic problems, and (b) that government intervention in social and economic problems is doomed to fail (either because government is not motivated by the profit-motive, or because it is inept, or because it is too slow and inefficient, etc.). These first two beliefs are generally accepted by most people who understand the label "free-market fundamentalism". I'd like to add one more belief. In my experience, free-market fundamentalists mythologize and/or distort the views of certain intellectual figures so as to make it seem like these thinkers endorse their views. For example, some free-market fundamentalists I've spoken with argue that the philosopher and economist Adam Smith agrees with their views. Let me say this unequivocally: Smith was not a free-market fundamentalist. I've actually read every book that Smith published. (It was easy. It's only two.) Although he does express wonder at how free-market systems operate, he in no way endorses having no regulations put on markets. Moreover, he has quite a few negative things to say about what he calls "commercial societies" (see Endless Night (Pt. I)). Another figure that these dogmatists sometimes lump into their camp is Friedrich Hayek. However, Wapshott (2012, chapter 14) reminds us that Hayek, although a formidable opponent of Keynesian economics and staunch defender of the free market, nonetheless advocated the following state-sponsored interventions: universal health care, unemployment insurance, and state provisions for basic housing.1

Is free-market fundamentalism an untenable view? Probably. First of all, consider that even if it turns out that all actually existing political arrangements are wasteful, inept, inefficient, or whatever, that doesn't mean that all possible political arrangements have to be. Moreover, there's evidence from the social sciences that at least one of their beliefs is false: belief (b) above. Recall from the lesson titled Fragility the work of Mariana Mazzucato. In The Entrepreneurial State, Mazzucato argues against the view that the state should not interfere with market processes. She argues instead that the state has played the main role in the development of various technologies that define the modern era: the internet, touch-screen technology, and GPS. It has also granted loans to important companies such as Tesla and Intel. Moreover, the state takes on risks in domains that are wholly novel and in which private interests are not active, such as space exploration in the 1960s. It is a major player on both the demand and supply sides, since it both makes many purchases from the private sector and supplies many goods and services. It also creates the conditions that allow the market to function, such as the building of roads during the motor vehicle revolution. In short, the state is entrepreneurial (and very good at it).

 

Mazzucato's The Entrepreneurial State

If you recall, I discussed Mazzucato's work while discussing lazy thinking. Lazy thinking occurs when you are satisfied with the quick and easy solution without making sure you're actually engaging the more rigorous information-processing parts of your mind. There are signs that free-market fundamentalists, although they are not necessarily lazy people, are engaging in lazy thinking. In my experience, when they respond to objections to their dogma, they'll give simple-minded explanations while avoiding the hard cognitive labor of asking themselves how it is that they truly arrived at their conclusion (a process called meta-cognition).

Here's an example. When facing challenges to free-market fundamentalism, the fundamentalist sometimes gives what I call the doom objection. This is the objection that in the past the government has always failed when attempting to solve societal problems, and, even when it hasn't failed, the private sector could've done it better, faster, and for less money. Ultimately, they conclude, government just gets in the way. Unfortunately for them, that's not true. Mazzucato (2015, chapter 3) provides evidence that the US government not only plays a vital role in the market (by being the biggest consumer of many products), but also has made new markets(!) for radical new technologies that private firms wouldn't and couldn't have developed on their own, including those of nuclear energy, computer science, biotech, nanotechnology, and more. I cannot overemphasize this. Let me just focus on computing—a field near and dear to my heart—for a second to really make this point. The computing revolution that we're currently enjoying was made possible by governments. During World War II, governments poured money into the research and development of computing machines. Then, during the Cold War, the US government poured money into furthering these computing devices, funding the computer science departments that cropped up all over the country. And even after the fall of the Soviet Union (and hence the end of the Cold War), the Defense Advanced Research Projects Agency (DARPA), a research and development agency run out of the United States Department of Defense, played a role in the development of weather satellites, GPS, drones, stealth technology, voice interfaces, the personal computer, and the internet. In short, not only is the state an effective player in the market, it has literally made new markets. Only an incurable case of confirmation bias could keep the free-market fundamentalist from realizing this.

 

Steve Jobs

Free-market fundamentalists, in my experience, also have an odd, cultish reverence for entrepreneurs—unless, of course, those entrepreneurs are involved with the government somehow. They argue that only completely de-regulated markets are fair. Taxes, minimum wages, and all other regulations are just a burden on the capitalist entrepreneur, who deserves all the credit since his/her idea got the business off the ground. As someone who one day hopes to own a business, I do admire entrepreneurs. However, the examples that free-market fundamentalists have given me during our conversations betray the fact that they have no idea what they're talking about. For example, I've often been told that Steve Jobs is the ultimate entrepreneur—that his ideas revolutionized the world and that's why he was so rich. Unfortunately, again, the story is a little more complicated. In chapter 4 of The Entrepreneurial State, Mazzucato argues that Steve Jobs' business savvy was impressive, but it's still a fact that the base technologies that catapulted his platforms (e.g., iPhones, iPads) to success were developed over decades by the state: touch-screen, GPS, and the internet. Apple (merely) integrated these technologies. First, the computing revolution itself was set off by state-funded computer science programs in universities and public-private partnerships. Second, Apple itself received state grants for emerging small businesses. Third, the enabling technologies of Apple products were all invented outside of Apple (by state-sponsored programs). Lastly, Apple even called upon the US government to help break down international trade barriers (with Japan) and to purchase its products for American public schools. So, even though Jobs was an impressive individual, his story does not support the idea that the free market should be completely unregulated and the state should play no role in it.

My friends, the world is hard. You can't operate under simplistic ideologies and expect to get at the truth. The truth is always more nuanced and complicated and frankly boring. Why do so many people go astray so easily? Why do these dogmas get propagated? Stay tuned.

 

 

 

 

In-group bias

In the last section, we took a look at a social identity (far-right conservatism) and then identified a dogma that it is particularly easy for this social identity to subscribe to (free-market fundamentalism). In this section, we will begin to make the case that it is the social identity that caused the belief in the dogma—not the other way around, as one might intuitively suspect. Ultimately, it appears that the capacity of crowds to short-circuit our information-processing systems comes down to the in-group bias. In-group bias is the tendency for people to give preferential treatment to others who belong to the same group that they do. Although the word "tribalism" is amenable to misinterpretation (and is not generally considered politically correct), it is a helpful label for the in-group bias. We generally prefer our "tribe", our people, over "them", whoever "they" may be.

 

A Sphex wasp
A Sphex wasp
(see Sidebar).

Let me first try to persuade you that this is truly important to you and to anyone you care about. I think the best way to do this is to inform you that advertisers, politicians, employers, and many others use in-group bias to manipulate you. In his best-selling Influence, Robert Cialdini reviews the psychology of persuasion, how companies and salespeople try to persuade you, and how to stop yourself from being manipulated. Cialdini begins his survey of the psychological literature by arguing that we have built-in cognitive tools that make us assent to a request, automatically and without thought, once they are engaged. In other words, we appear to be predisposed to assent (whether it be in the form of agreeing, cooperating, buying, or generally responding positively) under certain circumstances, and these responses happen without conscious thought. We are much like mother turkeys, who have their mothering behaviors activated by their chicks' cheep-cheep sounds (something which is referred to as a fixed action pattern by biologists). If someone presses the right buttons and we aren't paying attention, they can get us to agree with them reflexively. This is part of our cognitive setup—and advertisers and salespeople know this.

Here's an example. Cialdini cites one study where subjects let someone cut in line as long as the cutter gave them a reason when asking to cut, even if the reason was not really a reason at all. In other words, whether the cutter said, “May I please skip ahead because I am really in a rush” or “May I please skip ahead because I really need to make copies”(!), subjects were inclined to let her cut. Of course, anyone who is standing in line to make copies really needs to make copies. So the experimenter didn't give the subjects any new information when she said what she said in the second condition. Nonetheless, they let her jump ahead! That's because we are predisposed to assent when someone gives a reason for their request. (Try it.)

Of course, Cialdini's principles of persuasion and review of the psychological literature give much more detail than I can here. What is important to note is that built-in tendencies to assent are part of a system of heuristics (or mental shortcuts) that we seem to come pre-loaded with. As we've discussed many times before, processing information is metabolically costly and cognitively demanding. If your brain can, it will skip the processing and arrive at a conclusion from limited information (regardless of the quality of the information). More importantly for our purposes, one of the shortcuts the brain takes has to do with crowds. In fact, the social influence of others appears throughout his book (but see chapters 4 and 8 in particular). Paying attention yet?

 


 

 

Sidebar

The interested student can check out Daniel Dennett's Intuition Pumps (p. 397-98) for an example of the fixed action pattern of the Sphex wasp. Apparently, the Sphex wasp has to engage in its procedure for laying eggs exactly in the way its genes require it to. In particular, it paralyzes a cricket, drags it to its burrow, leaves it at the threshold while it checks that everything is set up inside, and only then drags the cricket in. When researchers would drag the cricket a few inches away while the wasp was inside checking that everything was in order, the wasp would drag the cricket back to the threshold and repeat the check-up in the burrow. In other words, the cricket had to be exactly at the threshold where the wasp left it in order for the next behavior to be engaged. Researchers once repeated this little trick on a wasp forty times, and the wasp never thought to just drag the cricket straight in. These are the kinds of built-in behavioral programs to which Cialdini compares our tendency to assent: they're automatic. At least in the case of the wasp, they cannot be overridden. It's just like a computer program. You click. It runs. However, things can be different in humans. The first step is learning about the psychological principles of persuasion.

 


 

 

Cialdini's Influence

In chapter 8 of Influence, Cialdini focuses on in-group bias. He reminds us that we all automatically and incessantly categorize those around us into those to whom the pronoun we applies and those to whom it doesn't. As we learn from Dutton (2020), this might be a method for processing incoming information more quickly and efficiently (see lesson titled The Distance of the Planets). In any case, those whom we consider part of our “tribe” get many non-conscious psychological benefits: we consider them more trustworthy, we are naturally more cooperative with them, and we even find them to be more moral and humane. This tendency runs deep. When subjects in an fMRI scanner are asked to imagine “the self”, the same neural circuits light up as when they are asked to imagine a close other, i.e., a member of their "tribe". Cialdini even reports that some researchers have gone as far as claiming that tribalism isn't merely a part of our nature—it is(!) human nature. This explains why we find depictions of group activities, like choreographed dances, as far back as prehistoric cave paintings.2

This in-group bias is made manifest in many ways. Importantly, it is typically non-conscious. This is why compliance professionals use it, and why Cialdini discusses it in his book. For example, in Ghana, where taxi fares are haggled over before the ride takes place, taxi drivers give better deals to those from their own political party. Referees in international soccer matches are more likely to make favorable calls for teams with a sizable proportion of members of their own ethnic group. It's even in the Bible! Recall that the Israelites were instructed to only enslave non-Israelites (Leviticus 25:39-46); a message echoed by Plato, who suggested that Greeks only enslave non-Greeks and not fellow Greeks. How can this be used to manipulate you? In general, if you can be made to believe that your in-group favors something, then you're very likely to favor that thing too, whether it be a political candidate, a product, a company, whatever.3

 

Mason's Uncivil Agreement

My favorite example of this comes from Lilliana Mason's Uncivil Agreement, where she makes the case that Americans are dangerously polarized and self-segregated despite the fact that their political positions aren't really as different as they perceive them to be (at least for the majority). To be clear, at the extremes, there's basically no hope for reconciliation. But(!) most people are not at the extreme political poles. Mason shows that most people, i.e., more than half, are more centrist and actually agree on a lot, whether they identify as Democrat or Republican. However, what's truly concerning is this: people aren't really thinking about their positions. Most of these centrists just side with whatever their party says without reflecting on it very much (i.e., lazy thinking). The results would be funny if they weren't so tragic. Let's consider an example.

In one study, Cohen (2003) found issue positions to be highly dependent on group and party cues. In one of his experiments, he was able to get liberals to support a harsh welfare program and conservatives to support a lavish welfare program simply by telling them their in-group party supported the policy. Did you catch that? Liberals agreed to a pretty conservative policy just because they were told the Democrats endorsed it. Ditto for conservatives. If they were told that Republicans endorsed a pretty blatantly liberal policy, they would endorse it too. This is clearly no good. It almost suggests that these subjects don't really know what the labels conservative and liberal mean. All they hear is "Us" and "Not Us". Lazy thinking. And it gets worse:

“Notably, these respondents did not believe that their position had been influenced by their party affiliation. They were capable of coming up with explanations for why they held these beliefs” (Mason 2018: 74; emphasis added).

In other words, they were completely oblivious about their lazy thinking! Mason continues with the sad reports:

“A Pew poll from June 2013 found that, under Republican president George W. Bush, 38 percent more Republicans than Democrats believed that NSA surveillance programs were acceptable, while under Democratic president Barack Obama, Republicans were 12 percent less supportive of NSA surveillance than Democrats” (Mason 2018: 74).

This is the same set of NSA surveillance programs. The primary difference is who's in charge. One could argue that a Democrat might be more comfortable with the NSA's capacities if they know a Democrat is the executive. However, given the social psychology we've been looking at, it's more tempting to say that they don't know anything about the NSA programs. They just know who's in office. And as such, they simply approve of the programs when their person is in office and disapprove when their person is not in office. Lazy thinking.

Mason gives one more example. It turns out that one is more likely to become politically active about some issue if there is a strong social movement behind it, regardless of how one ranks the issue on a political importance scale (Mason 2018: 121). Let me explain. People ranked issues according to how important they thought they were. Then they were given an opportunity to participate in some political event. Even if they had ranked the issue low in importance, they would agree to participate in the event if a sizable number of people from their in-group were participating. So, even after saying the issue didn't matter that much to them, their in-group bias kicked in and they just went with the crowd.

In conclusion, crowds are not good for your brain.

 

 


 

Do Stuff

  • Read from 491d-502d (p. 185-197) of Republic.

 


 

Executive Summary

  • Free-market fundamentalism is the view that a. unregulated capitalist policies can solve many if not all social and economic problems, and b. that government intervention in social and economic problems is doomed to fail.

  • Mariana Mazzucato argues against free-market fundamentalism by making the case that the state is actually a very effective entrepreneurial force.

  • In-group bias is the tendency for people to give preferential treatment to others who belong to the same group that they do.

  • There is evidence that when we are faced with uncertainty, we turn to our in-group for guidance on what to believe, without processing the information at all.

FYI

Suggested Reading: Mariana Mazzucato, The Entrepreneurial State

TL;DR: TEDTalks, Mariana Mazzucato - The Entrepreneurial State

Supplemental Material—

Related Material—

 

Footnotes

1. By the way, Hayek also promoted the free movement of labor across national borders, i.e., open borders—an idea that hardly resonates with most conservatives.

2. In chapter 9 of Demonic Males, Wrangham and Peterson begin agnostic about whether violence is genetically implanted in humans. They then discuss how, although it doesn't initially seem like it, the male anatomy is designed for fighting. Male arms and shoulders, just like in chimps, are more muscular than the female equivalent, and the shoulder joint is suited for punching. Moreover, males and females initially have similar upper bodies, but male muscle begins to set in at puberty, when the female reproductive capacity begins. Interestingly, humans also use their reasoning faculty for violent ends. Most relevant to us here, the authors then move on to discuss our in-group bias, and how it only makes sense in a species with an evolutionary history of group fighting. We are so groupish that humans even feel a state of ecstasy when they lose themselves in the collective "Us"—a phenomenon known as deindividuation.

3. Cialdini also discusses how the in-group bias can be used to benefit the species by fostering cross-group cohesion. Cialdini suggests we can begin with children. Members of other ethnic (or religious or political) groups can be invited over for dinners, extended stays, play dates, etc. Importantly, they are not to be treated like guests. They are to be treated like family, such that they are expected to help out, be a part of chores and games, etc. This will develop a feeling of unity with people that are outside of the child’s perceived in-group, thereby extending in the child's mind who counts as we.

 

 

Lines Divided

 

 

All things being equal, you root for your own sex, your own culture, your own locality… and what you want to love is that you are better than the other person.

~Isaac Asimov

Failing to see reality

In-group bias is real, and so is our categorization instinct, our tendency to group everything into categories. And so we have our warring camps, which render us unable to process reality in a more objective way. Since we are still focusing on the right wing, here are a few more examples of how partisan politics short-circuits critical thinking: partisans are unable to see the causes of things they object to. In particular, I will make the case that some conservatives are unable to see the full picture with regards to the causes of terrorist attacks on citizens of the USA, immigration to the USA, and racial unrest in the USA.

Terrorism

 

Map of the American Empire
Map of the American Empire
(from Immerwahr 2019).

Chalmers Johnson (2000) argues that terrorist attacks on U.S. citizens are primarily blowback from heavy-handed, and oftentimes covert, U.S. imperial foreign policy. Moreover, because various foreign policies are secret or are poorly publicized, American citizens often don’t understand why they are being attacked. But, Johnson argues, the impetus for many terrorists is to strike back at the American Empire. Here's some context.

Johnson reminds us that blowback is a term coined by the CIA, referring to attacks on US citizens inspired by US covert actions abroad, going back at least to the middle of the 20th century. However, as Johnson uses the term, blowback refers more broadly to attacks on American citizens in retaliation for the USA's imperial tendencies. These imperialist policies, whether they be overt land invasions or covert operations, create resentment towards the US and now fuel terrorist activities. Here are some examples of the USA's imperialist policies.

First there were old-fashioned land grabs. The late 19th and first half of the 20th century saw a rapid succession of conventional territorial acquisitions (including Hawaii, the Philippines, Puerto Rico, Guam, American Samoa, Guantanamo Bay, the Panama Canal Zone, and many islands) as well as the proliferation of military bases (see also Immerwahr 2019). Next came the Cold War. Although it was imperialistic business as usual—more territory, more bases—this conflict seems to have obscured the American imperial project, since the case could be made that the acquisitions and new bases were all part of a strategy of containment of the Soviets. Nonetheless, as stated, the territorial acquisition and base proliferation continued; in addition, whenever it was convenient, the US would install US-friendly governments in a region.

Other examples of imperialist policy that might cause blowback:

 

Johnson's Blowback

 

  • The training of rightist Nicaraguan militants during the 1980s (see Byrne 2014 or this lecture by Byrne)
  • Continued US presence in Saudi Arabia after the first Gulf War, which inspired Osama bin Laden's attacks (see Johnson 2000, chapter 1).
  • US-sponsored dictatorships in East Asia (at varying times):
    • Taiwan
    • Philippines
    • Vietnam
    • Cambodia
    • Thailand
    • Indonesia
  • Alliances with brutal authoritarian regimes such as that of:
    • King Khalid of Saudi Arabia
    • King Hassan of Morocco
    • the Persian Shah
    • several Greek colonel dictators
    • General Pinochet in Chile
    • General Franco in Spain
    • General Geisel in Brazil
  • Various overthrows in Latin America

Johnson notes that the American experience of blowback has an element of confusion: the actions taken by the managers of the American empire are usually done covertly, leaving its citizens at a loss as to why some foreign agents hate them. In other words, because the American electorate is ignorant on most of these historical matters, they don't realize why they are being attacked. This is why they resort to insipid comments, like "They hate us cuz they hate us", and asinine descriptions of US soldiers as "fighting for our freedom". The truth, as we can see, is much more complicated.

Immigration

Here's some Food for Thought...

 

 

The US southern border does not only see migrants from México attempting to cross; there are also many people fleeing violence in Central and South America. Part of the reason is that the US gun market is actually playing a vital role in destabilizing the regions from which these migrants come. In chapter 5 of his recent Blood Gun Money, Grillo begins by discussing how porous the border actually is, despite rhetoric of building walls and securing the homeland. Greater border security, Grillo argues, doesn’t make crossing the border impossible—just more expensive. And this is the case whether one is crossing the border with contraband (e.g., drugs) or taking part in illegal immigration. Why is it so expensive? It's because much illegal immigration is now orchestrated by the Mexican drug cartels. These cartels, which control smuggling at the border, simply expanded their trade to include smuggling humans—a task to which they easily adapted. This in turn gives them another form of revenue, which makes them more powerful, which makes them more feared, which makes Americans more alarmed, leading to more militarization of the border, which leads to more money for the cartels.

 

Grillo's Blood Gun Money

 

In addition to giving more power to Mexican drug cartels, militarizing the border puts all the focus on the flow of goods and people travelling north, neglecting the flow of goods and people travelling south. But the goods travelling south are a causal factor with regards to the people travelling north. How so? Grillo discusses how both Americans and Mexicans use the private-sale loophole in American gun laws, which allows private sellers to sell weapons without a background check, to buy weapons in the US and take them to México. The people who make these purchases are known as straw buyers: those who legally buy weapons and then sell them into the black market. These weapons, of course, end up in the hands of the now more powerful drug cartels. Why do so many guns flow south? Well, obviously, because there's a demand from drug cartels. But you still need someone to actually purchase the weapons. Why would anyone engage in straw buying? The reasons vary. Grillo reports that some undoubtedly like the excitement. Some like the culture. Most, including many veterans, the recently unemployed, those on disability, etc., just really need the money.

As it turns out, no one knows how many guns have been smuggled south across the US-México border. That's the nature of the black market. One estimate, however, from a study called The Way of the Gun, is that over 250,000 weapons per year were smuggled from the US to México during the 2010s, earning the gun industry more than $125 million per year. The US firearm black market, or as Grillo calls it, The Iron River, makes its way into more than 130 countries. It gets into the hands of gangs and guerrillas in Central and South America, destabilizing those regions, strengthening the gangs by equalizing the firepower of the police forces and the gangs, which in turn lets gangs commit more crimes with impunity. This violence is what drives people to leave their countries, become refugees, and ultimately make their way to the American border.

Racial Unrest

As a final example, recall from the lesson titled The One Great Thing (Pt. I) that Republicans are more likely to blame those in poverty for their socioeconomic status, as opposed to blaming external factors (i.e., "the system"). However, this view is untenable. There are some radically asymmetrical wealth dynamics in the United States, and they are in no small part due to the federal government. In other words, a certain group (Whites) has undoubtedly been privileged in recent history to the detriment of other groups (Blacks, Hispanics, Natives, etc.). It's essentially impossible to deny this. And importantly, these past injustices are still having repercussions today that affect the socioeconomic status of those disenfranchised groups. Put bluntly, some white people are still benefiting from the racial injustices of the past.

Leon Trotsky, Vladimir Lenin and Lev Kamenev, Moscow (1920)
Leon Trotsky, Vladimir Lenin &
Lev Kamenev, Moscow (1920).

Perhaps the most straightforward case of this in recent history is in the housing sector. In The Color of Law, Richard Rothstein reviews how government housing policies in the 20th century explicitly discriminated against African Americans to the benefit of whites. Let me begin first with some context.

After the Bolshevik Revolution, in which Lenin and his group of revolutionaries took power in Russia, the Wilson administration thought it could stave off communism at home by getting as many white Americans as possible to become homeowners—one of the many ways in which the fear of communism shaped the United States during the 20th century. The idea was that if you own property, you will be invested in the capitalist system. And so, the “Own Your Own Home” campaign began. The program, however, was largely ineffectual and the housing crisis only grew.

In the 1930s, then, as part of his New Deal, Franklin D. Roosevelt subsidized various aspects of homeownership, such as insuring mortgage loans and construction on new housing, through the Federal Housing Administration (FHA). But since segregation was still on the books, these programs necessarily had a racial bias. Due to unfounded worries about African American inability to pay loans and “community compatibility”, African Americans were denied housing in white areas and were denied loans, even when they met all the non-racial qualifications.

 

Rothstein's The Color of Law

 

This continued after the conclusion of World War II. Several subsidized housing projects with racial restrictions arose to house WWII veterans returning to the US. California natives will recognize some of these communities: Lakewood, CA; Westchester, CA; and Westwood, CA.

Anyone familiar with the housing market knows that these homes are now extremely lucrative, many of them costing more than a million dollars. So, the federal government subsidized extremely lucrative housing for one group (Whites) and deliberately barred other racial groups from benefiting from these programs. Moreover, these white homeowners were able to pass on their wealth to their children. As anyone who pays rent in California knows, it'd be really nice to live in a home you already own. So, generational wealth accumulates. Not only did WWII veterans and their cohort enjoy these social benefits, but so did their kids—and their kids' kids.

It doesn't end there. During these time periods, local law enforcement enabled or was at least complicit in the white terrorizing of black citizens, which took the form of harassment, protests, lynchings, arson, and assault. Just to see how egregious this negligence by law enforcement was, Rothstein reminds us that this was during a time period when federal and local agencies were expending considerable effort to surveil and arrest leftists and organized crime syndicates (see the lesson titled Apt Pupils). In other words, they certainly had the capabilities to investigate and infiltrate dissident groups. So, it is telling that law enforcement did not extend this effort to curtailing attacks against African Americans; they mostly just let them happen.

 

Oliver and Shapiro 2006

 

It's even the case that the federal government was guilty of suppressing wages for African American workers. When minimum wage requirements were rolled out, they were not uniform across all industries. Namely, no minimum wages were mandated in industries where African Americans predominated. It gets worse. During WWII, FDR mandated that American factories be converted into military factories, manufacturing war materiel. At first, these factories employed white men, due to segregation. After white male manpower had been exhausted, the factories began to let women into the workforce. It was only when white women were not numerous enough to fill the workforce that African American men were recruited. African American women were last. Until they were hired on at these factories, African Americans had to labor in low-wage industries—ones with no minimum wage. And so, in government-run factories with decent wages, there was a policy of delaying the employment of African Americans as long as possible.

This history puts racial inequality in perspective, but it is a history that some want to suppress. And if this history is suppressed, then we won't learn its economic lessons. In their research, Oliver and Shapiro found that “[white] family assets are more than mere money; they also provide a pathway for handing down racial legacies from generation to generation” (Oliver and Shapiro 2006: 26). Put bluntly, the wealth that you inherit (whether it be in the form of an inherited home, or stocks, or cash) clearly plays a determining role in your social status for life. And, importantly, not everyone gets to inherit. This is because not everyone's parents got a leg up from the government. Here are Oliver and Shapiro again:

“Whites in general, but well-off whites in particular, were able to amass assets and use their secure economic status to pass their wealth from generation to generation. What is often not acknowledged is that the accumulation of wealth for some whites is intimately tied to the poverty of wealth for most blacks. Just as blacks have had ‘cumulative disadvantages,’ whites had had ‘cumulative advantages.’ Practically, every circumstance of bias and discrimination against blacks has produced a circumstance and opportunity of positive gain for whites. When black workers were paid less than white workers, white workers gained a benefit; when black businesses were confined to the segregated black market, white businesses received the benefit of diminished competition; when FHA policies denied loans to blacks, whites were the beneficiaries of the spectacular growth of good housing and housing equity in the suburbs. The cumulative effect of such a process has been to sediment blacks at the bottom of the social hierarchy and to artificially raise the relative position of some whites in society” (Oliver and Shapiro 2006: 51).

Terrorism, immigration, racial inequality. The explanations above are well-sourced, social scientific explanations of these phenomena. But these are not the kinds of explanations that conservatives tend to give on these topics. I wonder if a truly entrenched conservative would even be able to process these theories. My gut says no. If you're at least thinking that I might be right on this, then I've accomplished what I set out to accomplish.

 

 

 

Lines Divided

As we saw last time, crowds are not good for your brain. If we are faced with uncertainty, we turn to our in-group for guidance on what to believe (Damasio 2019, chapter 12). Let's continue with this line of thinking, since it relates to today's reading. Dutton (2020) reminds us of two important things: it is inevitable that we draw lines between Us and Them (given how our brains evolved), and doing so is typically counter-productive. Let's begin with our innate need to categorize everything, including ourselves, even if there are no corresponding categories in the real world. Here's Dutton:

“Our brains come equipped with a formatting palette. We’re hardwired to draw lines by our rich evolutionary past. But how can we be sure that the lines we are drawing are accurate? And how do we know where to place them? The answer, quite simply, is: we can’t. We have no way of knowing, no means by which to be certain, that the lines we are drawing are true. And yet still we are compelled to draw them. Because the world is a complicated place and lines make stuff easy and doable. And ‘doable’ is something we crave” (Dutton 2020: 3).

So, it looks like we can't help it. But putting up walls between Us and Them only leads to more conflict and less concord. Studies show, as Dutton reminds us, that pointing out differences actually makes us less willing to work together—and much else. For example, in one study:


“[T]wo groups of participants comprising four members each (AAAA and BBBB) convened in separate rooms to discuss the solution to a problem (the so-called ‘Winter Survival Problem’: their plane had crash-landed in the woods in the middle of January and they had to rank order items salvaged from the plane—a gun, a newspaper and a tub of lard, for example—in terms of their importance for survival). Both groups then came together as one around a conveniently octagonal table to hammer out a joint proposal. But there was, as always, a catch. This time with the seating plan. In one variation of the study, the groups remained fully segregated (AAAABBBB). In another they were partially segregated (AABABBAB), while in a third they were fully integrated (ABABABAB). The effects were incredible. Not only did this rudimentary game of psychological musical chairs significantly reduce in-group bias among those who were fully integrated, it also increased cooperation levels, friendliness ratings and respective member confidence in the jointly proposed solution” (Dutton 2020: 298).

In short, minimizing group boundaries gets results: greater openness to hearing the ideas of out-group members, reduced animosity, greater perceived friendliness across the board, and even a better feeling about the plan of action that the group decided on. But again, we seem unable to keep ourselves from dividing up into warring camps, at least not politically. And so, we cannot reap the benefits of our collective wisdom. As we saw last time, partisans actually agree on a lot, but they don't seem to realize it, since they can't see past their team colors—which is why Mason's book is called Uncivil Agreement.

This, however, is no good. Remember that, according to Caplan (2008) and Brennan (2017), our actual living standards—the material and social wealth we get to enjoy—are being artificially suppressed by the bad policies of our government, and the bad policies of our government are a result of catering to irrational, ignorant voters. Even our media is reduced to mere "infotainment" in order to secure good ratings (since presumably real news coverage would bore people and they would watch something else). Given that most are irrational and ignorant of truth, Plato argues only those with true knowledge and special minds should be able to rule...

Argument Extraction

 

 


 

Do Stuff

  • Read from 502d-511e (p. 197-207) of Republic.

 


 

Executive Summary

  • There are well-sourced, social scientific explanations given for the phenomena of terrorism, immigration, and racial inequality.

  • The following question is raised: Would die-hard conservatives actually be able to process these theories accurately? Given the evidence from last lesson, I wager the answer is no.

  • Dutton gives us evidence that dividing ourselves into groups makes us less capable of learning from society's collective wisdom.

  • In the dialogue, Socrates and friends begin to discuss Plato's Theory of the Forms and Platonic Heaven.

 


 

FYI

Suggested Reading: Internet Encyclopedia of Philosophy, Entry on Plato (Section 6)

TL;DR: The School of Life, Plato's On: The Forms

Supplemental Material—

Related Material—

Advanced Material—

For full lecture notes, suggested readings, and supplementary material, go to rcgphi.com.

 

 

 

The Party Is Over

 

 

So if I could change one thing in Washington—this is totally radical—but I would maybe get rid of political parties altogether. I think our founders envisioned factions, not parties, and it’s the most destructive thing we have right now.

~Eric Garcetti

Breaking News!

 

Argument Extraction

 

The Party Is Over

In this lesson, we finally see the argument for the abolition of political parties. To remind you, the lessons in this unit all have a bearing on the conclusion we argue for today. For example, in this unit we looked at science denialism from both far-left thinkers (in that they have denied the validity of studies into individual genetic variation based on ideology alone) and far-right thinkers (in that they have denied the existence of what appear to be some non-wasteful, very entrepreneurial actions taken by the state). We also saw how partisan thinking (i.e., Democrat-versus-Republican-type thinking) is part of the fuel that feeds conspiratorial thinking in the USA. Lastly, we've also seen that partisan thinking seems to enable lazy thinking, such that partisans don't even really process what's being asked of them; they merely look for party cues on how to vote (and then invent a rationale for their action). This is all not good.

 

 

The Argument

  1. If the political party system of the USA enables non-rational and lazy thinking that puts the country at risk, then it should be abolished.

  2. The political party system does seem to enable non-rational and lazy thinking in that partisans are likely to deny scientific findings that don't cohere with their ideology.

  3. The political party system also enables non-rational and lazy thinking in that it fuels easy-to-digest conspiratorial thinking, such that perfectly fair elections seem fraudulent and the same government programs are either seen with suspicion or with praise depending on who's in power.

  4. The political party system also provides simplistic talking points that prevent more complex social scientific explanations from taking hold in the minds of partisans.

  5. So, given 2-4, the political party system does enable non-rational, lazy thinking.

  6. Moreover, this non-rational and lazy thinking has put the country at risk, e.g., the Capitol Hill Putsch, science denialism, political polarization, politicization of vaccines, ineffective legislative bodies, gerrymandered elections, etc.

  7. Therefore, the political party system should be abolished.

 

 


 

Do Stuff

  • Read from 514a-523c (p. 208-217) of Republic.

 


 

FYI

Suggested Reading: Eleanor Glor, Why Are American Politics Extreme and What Can Be Done About It?

TL;DR: Lilliana Mason: Social Polarization and the 2016 Elections

Supplemental Material—

Related Material—

Advanced Material—

For full lecture notes, suggested readings, and supplementary material, go to rcgphi.com.

 

 

 

 Unit III

Critical Thinking 3.0 Syllabus (T_R).jpg

The Third Realm

 

 

The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve.

~Eugene Wigner

Plato on Math

Although Plato stresses the importance of mathematical training throughout Republic, in Book VI he gives us a metaphor for understanding how mathematics is a gateway to a deeper understanding of reality: the divided line. Today's reading reminds us about just how important mathematics was to Plato. In fact, mathematician and historian of mathematics Morris Kline argues that mathematics is central to Plato's entire conception of knowledge and points out that Plato may even have been associated with the quasi-religious sect of mathematicians founded by Pythagoras (see Kline 1967: 62-63). Notice, for example, the cryptic language that Plato uses when discussing the subject matter of mathematicians:

"These very things that they [the mathematicians] model and draw, which also have their own shadows and images in water, they are now using as images in their turn, in an attempt to see those things themselves that one could not see in any other way than by the power of thinking"" (Republic, 510e-511a; emphasis added).

 

Plato's Divided Line

Recall that, as a metaphor for how he conceived of the world and the objects within it, Plato gave us The Divided Line. What Plato has laid out for us is his hierarchy for the reality of objects. In other words, Plato is organizing the types of objects that exist from less fundamental to more fundamental. At the bottom are mere copies of things: reflections in mirrors, paintings, and the like. These are only copies of the real thing; for example, your reflection is a mere copy of the real you. The next level up is the realm of physical objects. This is where you and I live, along with all the physical things that we interact with on a day-to-day basis. We might think that this is the ultimate level of reality, but Plato disagrees. Upon reflection, we might come to think that Plato has a point. After all, the reality that we see with our senses can't be the ultimate reality: physicists tell us about a world of atoms and smaller subatomic particles that we can't see with the naked eye.

 

Platonic Solids
Platonic solids, which
exist independent of humans.

 

As you can see in the diagram, the next level up consists of mathematical objects. This means that all mathematical objects (like numbers), as well as mathematical relations (like equality) and functions (like addition, subtraction, and all the more complicated functions) exist in this realm, independent of all human thought. In other words, numbers are real and they are more fundamental than the reality we inhabit. This shouldn't sound too strange if you believe that mathematics has the power to help us understand our world: mathematics has this power because it is upon mathematics that our world is ordered. This is why mathematics is always involved in the sciences and in any other enterprise that involves knowing our world in a deeper way.

Above mathematical objects are The Forms on which our reality is based. These are the actual properties of the universe upon which everything we see is based. To see this realm is to have a "god's-eye-view". In other words, it is to understand reality as it really is—something that might come in handy if you are a Guardian. Try to imagine combining all the knowledge of all the disciplines and then extending that knowledge until it is complete, where everything that can be known is known. That is what it is to know The Forms. And of course, The Forms are ordered too, just like everything below it. At the top of the hierarchy is The Good.

And so Plato believed that this realm of The Forms could be understood via the realm of mathematical objects. One way we could interpret this is that if we want to understand reality as it really is, we have to go through mathematics. All of our knowledge, then, will ultimately be based on mathematics; mathematics is the medium—or the gateway—through which we can understand the basis of all reality (The Forms). As such, understanding some basics about math, including some of the jargon, is extremely useful for understanding the world we live in.

Argument Extraction

 

 

 

Mathematical Thinking

 

Stats Basics

 

Scatter Plot

 

Understanding the basics of statistics is key for a critical thinker. Many arguments are couched in statistical language, and if you are not fully cognizant of what is being stated, then you might call an invalid argument valid. Some of the most basic jargon has to do with the distinction between associated values (also referred to as dependent values) and independent values. Associated values have some kind of connection. For example, perhaps they are both caused by some third factor, or perhaps they are just regularly correlated. Take a look at the scatter plot pictured. Clearly there is a relationship here. It appears that cities with a high percentage of homeownership are cities with a low percentage of multi-unit buildings. This makes intuitive sense. If a majority of people in a city own a home, then there won't be much demand for apartment buildings.
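
To make this concrete, here is a minimal sketch in Python of how such an association is usually quantified: Pearson's correlation coefficient, which runs from -1 (perfect negative association) to +1 (perfect positive association). The city numbers below are made up for illustration; they are not the data behind the pictured plot.

```python
# A minimal sketch (with made-up numbers) of quantifying an association.
import math

# Hypothetical data: percent homeownership vs. percent multi-unit housing
# for ten imaginary cities.
homeownership = [73, 68, 55, 60, 81, 48, 65, 70, 58, 77]
multi_unit    = [10, 15, 34, 28,  7, 41, 20, 13, 30,  9]

def pearson_r(xs, ys):
    """Pearson's correlation coefficient for two equal-length lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

print(round(pearson_r(homeownership, multi_unit), 2))
```

On these invented numbers the coefficient comes out strongly negative (close to -1), mirroring the downward-sloping pattern in the scatter plot: more homeownership, fewer multi-unit buildings.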

Whereas we've previously discussed so-called natural experiments (or observational studies), statisticians can also run controlled experiments. Researchers use these experiments to check if there is a causal connection between two variables, namely to see if an increase (or decrease) in one variable (the explanatory variable) causes an increase (or decrease) in the other variable (the response variable). Experimentation includes various forms of sampling. As long as the experiment utilizes random assignment, we are allowed to make causal conclusions; i.e., random assignment allows us to discover relationships between explanatory variables and response variables. For example, if a new treatment is being used to treat disease X at some hospital and the staff chooses which patients with disease X get the new treatment via a coin flip, then this would be random assignment (Diez et al. 2019: 32). However, if one wants to generalize the results of such a trial to the general public, then random assignment is not enough. This is because for generalizability we need random sampling; i.e., we need it to be the case that everyone in the population had an equal chance of being included in the study. This is not the case in this example, since the coin toss was only performed for those with disease X at that one hospital—clearly not random sampling.
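
Here's a toy sketch, in Python, of the coin-flip procedure just described. The patient IDs are hypothetical. Notice that the coin flip licenses causal talk about these patients, but since no one was randomly sampled from the wider population, it licenses nothing about the general public.

```python
# A toy sketch of random assignment via coin flip (hypothetical patients).
import random

patients = [f"patient_{i}" for i in range(1, 11)]  # hypothetical patient IDs

treatment, control = [], []
for p in patients:
    # The coin flip: each patient has a 50/50 chance of getting the treatment,
    # which (on average) balances all other variables across the two groups.
    (treatment if random.random() < 0.5 else control).append(p)

print("treatment group:", treatment)
print("control group:  ", control)
```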

Things to Watch Out For

 

GDP

 

Misusing Regression Analysis

If you're familiar with statistics or have taken a stats course, then you are probably familiar with regression analysis. Regression analysis is a set of statistical processes for estimating the relationships between a dependent variable and one or more independent variables (see Diez et al. 2019, chapter 8; or watch this helpful video). There is a potential problem with regression analysis: you can essentially have a computer run this analysis on data but draw the wrong inferences.

How can this happen? One scenario where this might occur is if you are using regression analysis on data sets that do not have a linear relationship. For example, you might decide to analyze the relationship between K-12 funding and gross domestic product (GDP). It might be the case that both (1) having a high GDP opens up funds for more education funding and (2) having a well-funded K-12 system tends to promote GDP growth. However, this relationship might be non-linear. In other words, the system of which both K-12 funding and GDP are a part might be a system in which the change in the output is not proportional to the change in the input of a given variable. Put bluntly, the system is too complex and regression analysis is not the right tool.1
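
Here is a minimal sketch of the problem in Python. The data are synthetic (the true relationship is y = x², standing in for any non-linear system, such as the hypothetical K-12 funding/GDP case), and fitting a straight line to them produces a badly misleading answer.

```python
# Fitting a straight line (simple linear regression) to non-linear data.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

xs = list(range(-5, 6))
ys = [x ** 2 for x in xs]  # a genuinely non-linear relationship

a, b = fit_line(xs, ys)
print(f"fitted line: y = {a:.1f} + {b:.1f}x")  # slope is 0: the line "sees" no relationship!

# The residuals are systematically curved (positive at the ends, negative in
# the middle) -- a tell-tale sign that linear regression was the wrong tool.
residuals = [round(y - (a + b * x), 1) for x, y in zip(xs, ys)]
print("residuals:", residuals)
```

The fitted slope is zero even though the two variables are perfectly related; the patterned residuals are the giveaway that a linear model was the wrong choice.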

 

Breaking the cardinal rule: correlation does not equal causation.

What would happen if you tried to find a correlation coefficient for the rise in autism in the US and the rise of the GDP in China? You’ll find one if you look for one, but you know this isn’t a theoretically sound relationship. Don't go thinking the rise in autism caused the rise in China's GDP!
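
In case you'd like to see for yourself how easy it is to "find" such a correlation, here is a small Python sketch. The two series below are fabricated and causally unrelated; they merely both trend upward over time, which is enough to produce a very large correlation coefficient.

```python
# Two fabricated, causally unrelated series that both rise over time.
import math

series_a = [10 + 2 * i for i in range(11)]   # steadily rising (made up)
series_b = [1.15 ** i for i in range(11)]    # also rising, unrelated (made up)

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / math.sqrt(sum((x - mx) ** 2 for x in xs)
                           * sum((y - my) ** 2 for y in ys))

# A high r, despite there being no causal link whatsoever.
print(round(pearson_r(series_a, series_b), 2))
```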

 

The Biased Sample Fallacy

 

Informal Fallacy of the Day

 

The biased sample fallacy occurs when an arguer draws a conclusion from an inadequate sample pool; i.e., it is a fallacy that occurs when the sample is too small or too unrepresentative of the population to satisfactorily support the conclusion.

One of my favorite examples of this is a conversation between two survivors of a shipwreck. One says to the other, "It looks like everyone that survived prayed to be rescued. So, that must mean that prayer saved them." The other person responds, "But what about all the people who prayed and died anyway?" This second individual is, of course, aware that they are looking at a biased sample. Anyone who prayed and still died isn't there to report that prayer didn't work!

Some common sampling biases in statistics:

  1. Convenience sampling: This occurs when researchers only gather data that is easy to gather but that isn't representative of the general population.
  2. Non-response: This occurs when researchers employ some survey where a large proportion of those surveyed did not or could not respond. This is what happened in the shipwreck example above.
  3. Voluntary response: This occurs when the method of survey allows for too much self-selection. For example, in a voluntary survey on capital punishment, perhaps only those with strong feelings (either for or against) will take the time to actually respond, thereby skewing the data.

Another interesting example of biased sampling is the poll conducted by The Literary Digest of its own readership (plus registered car owners) for the 1936 presidential election between FDR and Alf Landon. The poll suggested that Landon would win; however, FDR won by a landslide (60% to Landon's 36%). Of course, the sample for the poll was biased. Recall that this was during the Great Depression, so those who could afford magazine subscriptions and cars were wealthier than average. As such, they were less likely to feel the economic hardship of the times (and hence more likely to vote Republican).
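
Here is a toy simulation of that fiasco in Python. All the proportions are invented for illustration: a wealthier minority (the ones with magazine subscriptions and cars) is made more likely to vote for Landon, and polling only that subgroup badly misestimates the election.

```python
# A toy simulation of Literary Digest-style sampling bias (invented numbers).
import random

random.seed(1936)

population = []
for _ in range(100_000):
    wealthy = random.random() < 0.20              # 20% wealthy (made up)
    p_landon = 0.65 if wealthy else 0.30          # made-up vote propensities
    vote = "Landon" if random.random() < p_landon else "FDR"
    population.append((wealthy, vote))

def landon_share(voters):
    return sum(vote == "Landon" for _, vote in voters) / len(voters)

biased_sample = [p for p in population if p[0]]   # only the wealthy are polled
print(f"biased poll:      Landon at {landon_share(biased_sample):.0%}")  # ~65%
print(f"whole population: Landon at {landon_share(population):.0%}")     # ~37%
```

The biased poll predicts a comfortable Landon win while the full (simulated) population goes the other way, which is essentially what happened to The Literary Digest.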

 

 

Golf

 

Omitted Variable Bias

This happens when you don't note the presence of a very important variable in some statistical relationship. For example, consider a study that links playing golf to increased risk of heart attack. If the study didn’t carefully control for age (since older people tend to play more golf), then the study is methodologically shoddy!
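
To see the mechanics, here is a fabricated-data sketch in Python. In this made-up world, age drives both golfing and heart attacks, while golf itself does nothing; ignoring age makes golf look dangerous, and controlling for it makes the apparent "effect" shrink away.

```python
# Omitted variable bias: age confounds the golf/heart-attack relationship.
# All probabilities below are invented purely for illustration.
import random

random.seed(0)

people = []
for _ in range(50_000):
    age = random.randint(25, 85)
    plays_golf = random.random() < (age / 100)     # older people golf more (made up)
    heart_attack = random.random() < (age / 400)   # risk rises with age (made up)
    people.append((age, plays_golf, heart_attack))

def attack_rate(group):
    return sum(h for _, _, h in group) / len(group)

golfers = [p for p in people if p[1]]
non_golfers = [p for p in people if not p[1]]
print(f"golfers:     {attack_rate(golfers):.1%}")      # noticeably higher...
print(f"non-golfers: {attack_rate(non_golfers):.1%}")

# Controlling for the omitted variable (comparing within an age band)
# makes most of the apparent "golf effect" disappear.
old_golfers = [p for p in golfers if p[0] >= 65]
old_non_golfers = [p for p in non_golfers if p[0] >= 65]
print(f"65+, golfers:     {attack_rate(old_golfers):.1%}")
print(f"65+, non-golfers: {attack_rate(old_non_golfers):.1%}")
```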

 

Too many variables!

Your analysis should only include the variables that are necessary. For example, if you're trying to explain some rot in the plants in your garden, you really shouldn't include details about how much you spend on Starbucks every month. It's just not relevant. That might be a silly example, but the point stands: keep it simple. “Any regression analysis needs a theoretical underpinning. Why are the explanatory variables in the equation? What phenomena from other disciplines can explain the observed results?” (Wheelan 2013: 147).
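
A quick Python sketch shows why this matters: if you dredge through enough irrelevant candidate variables (here, pure random noise standing in for Starbucks spending and the like), some of them will "correlate" with your outcome by sheer chance. Everything below is random by construction.

```python
# With enough junk variables, something will correlate with your outcome by chance.
import math
import random

random.seed(42)

n_obs, n_junk_vars = 20, 200
outcome = [random.gauss(0, 1) for _ in range(n_obs)]   # e.g., "garden rot" (pure noise)

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / math.sqrt(sum((x - mx) ** 2 for x in xs)
                           * sum((y - my) ** 2 for y in ys))

best = max(abs(pearson_r([random.gauss(0, 1) for _ in range(n_obs)], outcome))
           for _ in range(n_junk_vars))
print(f"strongest 'relationship' among junk variables: r = {best:.2f}")
```

With 200 tries on only 20 observations, finding an |r| above 0.5 is quite likely, despite every variable being noise. Hence Wheelan's demand for a theoretical underpinning.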

 

 

Cognitive Bias of the Day

 

The Recall Bias

The recall bias occurs when participants do not remember previous events or experiences accurately or omit details. In other words, the accuracy and volume of memories may be influenced by subsequent events and experiences. Put bluntly, what you're experiencing in the moment affects how you remember the past, sometimes distorting memory completely. For example, Wheelan (2013, chapter 6) reminds us that getting diagnosed with breast cancer changes one's retrospective assessment of one's eating habits. In particular, it makes patients remember eating more high-fat foods when compared to those who were not diagnosed with breast cancer. In other words, getting diagnosed with a disease makes you more likely to remember your unhealthy habits!

 

 


 

Do Stuff

  • Read from 523d-532d (p. 218-227) of Republic.

 


 

Executive Summary

  • Math is central to Plato's philosophy. Not only does he consider it the most fundamental aspect of the training of the Guardians, but he considers it to be a gateway to the fundamental nature of reality.

  • Critical thinkers should be well-versed in the basics of statistics, since many arguments are couched in statistical language.

  • There are various common errors in statistical reasoning that critical thinkers should watch out for, e.g., biased samples, omitted variables, etc.

 


 

FYI

Suggested Reading: David Diez, Mine Cetinkaya-Rundel, and Christopher Barr, OpenIntro Statistics, Chapter 1

TL;DR: TED-ED, Why do people fear the wrong things? - Gerd Gigerenzer

Supplemental Material—

Related Material—

Advanced Material—

 

Footnotes

1. These kinds of systems are sometimes referred to as complex adaptive systems, and there's a whole field dedicated to their study.

 

 

No Gods...

 

 

Everything new is understood through the filter of the old.

~David Eagleman

Ockham's Razor

Dennett's Intuition Pumps and Other Tools for Thinking

In his Intuition Pumps and Other Tools for Thinking, Daniel Dennett invites the reader to learn and use the cognitive toolkit that he's developed over his long career as a philosopher, writer, and cognitive scientist. While there are many notable tools for thinking in the book, I've chosen to focus on only a few in this course. Nonetheless, the one we are covering today is an essential one for any critical thinker. Dennett, as a matter of fact, mentions it by page 38 of a nearly 500-page book. We are talking about Ockham's razor.

Ockham's razor (sometimes spelled "Occam's razor") is a methodological principle attributed to the 14th century logician William of Ockham, although it is probably much older (Dennett 2014: 38-39). This principle states that given competing theories/explanations, if there is equal explanatory power (i.e., if the theories explain the phenomenon in question equally well), one should select the one with the fewest assumptions. Framed slightly differently, Ockham's Razor is the view that, all things being equal, the simplest explanation (i.e., the explanation with the fewest assumptions) is probably the right one. Dan Dennett puts it this way:

“[D]on't concoct a complicated, extravagant theory if you've got a simpler one (containing fewer ingredients, fewer entities) that handles the phenomenon just as well. If exposure to extremely cold air can account for all the symptoms of frostbite, don't postulate unobservable 'snow germs' or 'arctic microbes.' Kepler's laws explain the orbits of the planets; we have no need to hypothesize pilots guiding the planets from control panels hidden under the surface” (Dennett 2014: 38).

 

Tomato plant root rot
Tomato plant root rot.

We've actually already seen this methodological principle put to use. Recall the lesson titled The Third Realm. In it, Charles Wheelan reminds us that "any regression analysis needs a theoretical underpinning. Why are the explanatory variables in the equation? What phenomena from other disciplines can explain the observed results?” This all relates to Ockham’s Razor. The general idea is that if you are attempting to find a relationship between two data sets, there should be some acceptable theory that links those two data sets. Moreover, this theory must abide by Ockham's razor. You shouldn't make extravagant connections. If you have to resort to extravagant connections, you probably have no business linking those two data sets. The example I gave in that lesson was the following: if you're trying to explain some rot in the plants in your garden, you really shouldn't include details about how much you spend on Starbucks every month (because it's just not relevant). If you were to try to link your Starbucks spending and the rot in your garden, then you'd probably end up adding more and more variables to make the connection—since there appears to be no obvious, straightforward connection. Ah, but now you're adding variables! Thus, you'd be violating Ockham's razor. This is what the razor does: it shaves off extra assumptions, extra connections, extra links, extra entities. You gotta keep it simple!

There is also evidence from the field of machine learning (ML) that Ockham’s razor is a formidable approach to learning from data. For those unfamiliar with ML, model selection is the process by which one decides what type of model to use on a particular set of data, a decision which includes choosing how many and what type of free parameters the model should feature. One would assume that very flexible models (with many free parameters) would be superior to simpler models, but these actually tend to overfit the data, subsequently doing poorly on data that the model has not seen. In other words, highly-flexible models simply don’t generalize well. As a result, best practices in ML make it so that model selection is largely the task of finding the simplest model that explains the data reasonably well (Deisenroth et al. 2020: 254-258). Put bluntly, machines learn from data better if you keep it simple, and, needless to say, better is good.
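To make the overfitting point concrete, here is a minimal sketch (my own, not from Deisenroth et al.; the data, polynomial degrees, and noise level are invented purely for illustration). It fits models of increasing flexibility to noisy samples of a simple signal and compares error on the training data with error on unseen data:

```python
# A toy overfitting demo: flexible models can ace the training data
# yet generalize poorly. All numbers here are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    x = rng.uniform(-1, 1, n)
    y = np.sin(3 * x) + rng.normal(0, 0.2, n)  # true signal plus noise
    return x, y

x_train, y_train = make_data(20)   # small training set
x_test, y_test = make_data(200)    # "unseen" data

for degree in (1, 3, 15):
    coeffs = np.polyfit(x_train, y_train, degree)  # fit polynomial model
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE = {train_mse:.3f}, test MSE = {test_mse:.3f}")
```

Typically the degree-15 model posts the lowest training error and the highest test error: it has memorized the noise. The simplest model that still captures the signal generalizes best, which is just Ockham's razor at work (and, as we'll see below, its anti-razor too: degree 1 is too simple to capture the signal at all).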

Sidebar

By the way, Dennett (2014: 40-41) also informs us of a new concept invented by molecular biologist Sydney Brenner: Ockham's broom. Ockham's broom is the process by which inconvenient facts are swept under the rug by the intellectually dishonest. So, whereas Ockham's razor is a theoretically sound tool, Ockham's broom is a weapon for the advocates of anti-thinking. The most malicious aspect of Ockham's broom is that it can only be noticed by experts. This is because it is primarily used during an explanation. An expert who is secretly a champion for some theory or other will present the issue to you such that facts that are detrimental to his/her view are conveniently left out. Thus, the issue is presented such that you are more likely to agree with him/her. Dennett is quick to add, however, that this is also a favorite tool of conspiracy theorists(!). How can you keep an eye out for something that is likely invisible to you? You have to keep up with the experts. In other words, you'll have to see what other experts say on the matter too. It's so much work to think critically!

So Ockham's razor is awesome. But we need to make some comments before you really understand it. It's not only a razor; it's also an anti-razor (Keele 2010). Look closely at the definitions and description of the razor given above. Notice phrases like "equal explanatory power" and "all things being equal". This means that there is a lower limit to how much we can simplify our theories. We shouldn't simplify them to the point where they no longer explain the phenomenon well. Here's an example that I use in my 101 course and which Dennett also discusses. Some try to say that the simplest explanation for the universe, intelligent life, etc., is God. However, this thesis violates the anti-razor clause in Ockham's razor. If we were to reduce our whole explanation to "God did it", then we would actually lower our explanatory power. How so? It's because God is a supernatural entity, and literally no one is capable of understanding supernatural entities. In fact, that's the basic definition of 'supernatural'. So, if one were to pose God as an explanation for some natural phenomenon (like the universe or intelligent life or whatever), then one would be positing something one doesn't understand as an explanation. Of course, that's the complete opposite of an explanation. Put bluntly, to attempt to explain something by saying it has a supernatural cause is to explain nothing. If anything, it is more an admission of ignorance than anything else.

So that's Ockham's razor. It has two parts: the razor and the anti-razor. We will use it again in this lesson.

Argument Extraction

 

 

 

The End of Faith

 

A child

 

In today's reading, Socrates and friends note that bringing about their beautiful city might only be possible if they hit the reset button, culturally speaking. What they propose is to banish from the city everyone older than 10 years old. It is in the young minds that remain that they can instill the beliefs necessary for the optimal functioning of their kallipolis. This, I'm sure, might spark all kinds of ideas in your mind—quite a few of them troublesome. But Socrates and friends might have a point. It may very well be the case that you only believe some things because you were taught them when you were young, and that you would never have come to believe those things had they not been inculcated in you from childhood. And so, inspired by Plato's Republic, I'd like to take the next four lessons to explore some beliefs that, once they've grown their roots inside you, are unlikely to be eradicated. I want to begin with religious belief, a type of belief that evolutionary biologist Richard Dawkins (2006) claims is akin to child abuse when forced on children.

Let me begin by stating what I won't do in this section. I will not survey various arguments for the Judeo-Christian God's existence and then respond to them, while simultaneously peppering in the various arguments I know of that are against God's existence; I've already done that in Aquinas and the Razor, the two-part lesson composed of The Problem of Evil (Pt. I) and The Problem of Evil (Pt. II), Pascal's Wager, Death in the Clouds (Pt. I), and Death in the Clouds (Pt. II). Rather, what I want to do here is give an entirely naturalistic account of how religion arose and then make an argument using Ockham's razor. In other words, I will give a purely non-supernatural account of how it is that belief in the Judeo-Christian God came to be (despite the Judeo-Christian God not actually existing) and then I'll propose a dilemma. What's more likely: that my naturalistic explanation is true or that the Judeo-Christian God actually exists?

Here's my naturalistic explanation. A version of the theory I'm about to give goes back to Charles Darwin, although I'll be using the updated version from Torrey (2019). There are two parts to this explanation. The first has to do with the cognitive mechanisms that hominids evolved due to evolutionary pressures. It is these cognitive capacities that allowed belief in deities to inhabit the minds of hominids like us. The second part has to do with cultural evolution. Let's begin with the evolution of the cognitive capacities that enable religious belief. Per Torrey, religious belief is a by-product of the evolution of the following cognitive abilities:

 

Torrey's Evolving Brains, Emerging Gods

 

  1. Beginning with Homo habilis, about 2 million years ago, hominids underwent an increase in brain size and general intelligence (chapter 1).

  2. As Homo erectus, about 1.8 million years ago, hominids developed self-awareness (chapter 2).

  3. As archaic Homo sapiens, beginning around 200,000 years ago, hominids developed an awareness of each other’s thoughts, albeit as a projection (or prediction) of what another is thinking; this ability is sometimes referred to as a Theory of Mind (chapter 3).

  4. Beginning around 100,000 years ago, Homo sapiens acquired a second-order Theory of Mind. They were able to project minds not only onto others (to help coordinate social interactions) but also onto themselves. They could, in a sense, objectify themselves. They could consider how others perceive them. For example, Annie could think about what Sally thinks about her. Moreover, Annie could think about what Sally thinks that Annie thinks about her, and so on and so forth (chapter 4).

  5. As modern Homo sapiens, beginning about 40,000 years ago, we developed an autobiographical memory, along with the capacity to project ourselves backward and forward in time. So, for the first time in human history, we could understand death. Also for the first time we could envision alternatives to death, such as notions of the afterlife (chapter 5).

These cognitive capacities either build on each other or are required for conceiving of religious beliefs. For example, the capacities listed in 2-5 wouldn't have been possible without an increase in brain size and general intelligence. That's an example of how some capacities built on others. Now consider the capacity for autobiographical memory: memory for one's personal history. This capacity, coupled with a Theory of Mind and a second-order Theory of Mind, allows one to project oneself into the past and the future. It is these cognitive capacities that enabled Homo sapiens to even be able to think about death (and eventually what happens after death).

How did autobiographical memory give rise to belief in deities? Although hominids had been dying for millions of years, only Sapiens could make the connection to their own life. Not only did their friends die, but they realized they would die too. Moreover, the decomposition of bodies (e.g., brains liquefying and oozing out of the ears and nose) must’ve been startling to Sapiens who could imagine the same thing eventually happening to them. This, by the way, stands in stark contrast with how chimps deal with death: they mostly just ignore dead bodies. So, Sapiens wondered about death and perhaps also how to avoid it. Inevitably, understanding death causes fear, and thinkers from Ancient Rome to Thomas Hobbes and beyond have found the genesis of religious belief in the fear of death. Tellingly, many versions of the afterlife are just extensions of life (with less bad stuff). For example, for Australian Aborigines, the afterlife is similar to the territory in which they lived their lives, except there’s more kangaroos and other game for hunting. Thus, one can see a natural progression from the acquisition of autobiographical memory to the beginnings of belief in an afterlife.1

Dia de los muertos pic
Día de los Muertos
paraphernalia,
modern-day
ancestor worship?

 

The second half of the story has to do with cultural evolution, although there's a bonus cognitive evolution involved. This process starts with agriculture. Although different groups began the transition at different times, agriculture and settled life began at around the same time, relatively speaking: ~14,000 years ago. The impetus, per Torrey, was the conjunction of a milder climate and the newly developed cognitive capacity for planning. In other words, it is only this new capacity to plan, coupled with a milder climate, that enabled the settling of Homo sapiens. Only the conjunction of the two explains settling, because there had been many milder climate cycles in the past but no settled societies. In any case, soon after this, Sapiens began ancestor worship. This is because Sapiens now stayed in the same region year-round and buried their dead relatives nearby. The constant presence of their relatives' graves gave rise to a tendency to ask dead relatives, who they felt were somehow still alive in a way, for favors and help on Earth. And so belief in supernatural spirits began. These ancestors eventually morphed into bigger gods, perhaps through embellishment or perhaps by chance. For example, perhaps one exceptional person dies and over time their story becomes more mythical. Or perhaps an exceptional person dies and is asked by his descendants for help, and serendipitously (i.e., by luck) they actually do receive good fortune: their crop yield is large, or they win a battle, or whatever. This ancestor, then, begins to accrue the characteristics of a god.2

Let's move forward a couple of thousand years. Coinciding with the rise of big states came the development of more powerful gods. These were wild and crazy times, religiously speaking. The gods, which were now statues, were still heavily anthropomorphic—a relic of their ancestor-worship genesis. They needed food two times a day, and they liked to have gifts given to them on their special holidays. The gods, which again were statues, were even thought to be related to each other and so were taken to different cities to visit with relatives. During this time, the gods were also involved in warfare. Although the cause of many conflicts is known to be purely secular, e.g., land disputes, resource disputes, etc., the conflict was made to seem like a conflict between deities—a sign that the god personified the city-state. Interestingly, at the same time that the gods were becoming more involved in secular matters (like war), the rulers (i.e., kings) were gaining divine attributes—a synthesis of church and state seen in Mesopotamia and Egypt. The fact that the major religions rose in conjunction with adoption by a major empire, e.g., Christianity and Constantine's Rome, Buddhism and Ashoka's Maurya Dynasty, Islam and the Arab conquest of the Levant, supports Torrey's view. In other words, it looks like all the major world religions were successful only because they were at one point adopted by some powerful empire—not because they're actually true.

Wright's The Evolution of God

And so one can begin to see how slowly but surely Sapiens acquired the capacities to hold religious beliefs and how ancestor worship began to morph into something that is recognizable as religion. And this brings us to modern times. If anyone cares to read the histories of various religions, one can see that the evolution of a religion's beliefs is pretty straightforward. For example, in The Evolution of God, Robert Wright (2010) pieces together the rise of Judaism, Christianity, and Islam. Time and time again, Wright shows that it is the facts on the ground, what was happening historically, that gave rise to certain beliefs and not others. In short, there's no need to believe in a god; the facts on the ground explain why the belief in said god arose (despite the god not existing).

Here's an example of this. In chapters 6 and 7, Wright gives a history of the transition from monolatry (the belief that there are many gods but only yours deserves worship) to monotheism (the belief in only one god). Early in its history, the religion of the Israelites (now known as Judaism) was monolatrous. Then, at a certain point in history, when the Assyrian Empire was in the ascendancy, Israel fell under economic hardship and was compelled to form alliances with other states—allies which imposed humiliating taxes on the Israelites. At this point, the Hebrew writings begin to turn on the aristocracy and the wealthy, as well as foreign powers, blaming the elite for their troubles. In other words, it looks like the rejection of foreign gods was inspired by a resentment of great powers (like Assyria) and opposition to forced/unequal alliances (and the elites that entered into them). Put bluntly, bad vibes against their oppressors and their not-so-good friends led to xenophobia and a rejection of others' cultures and beliefs.

 

Cyrus the Great
Cyrus the Great.

Then, in 587 BCE, Babylon conquers Jerusalem and burns down the temple. This is the catalyst for the transition into monotheism. The Judean theologians spend half a century under Babylonian rule, until Cyrus the Great of the Achaemenid Persian Empire liberates them—which is why he is the only non-Jew to be labeled “messiah” in Hebrew scripture (see Isaiah 45:1). During this time, the Israelite theologians had to rationalize how their great god could allow his people to be conquered in such a humiliating fashion. Out of their shame grew a justification: their subjugation was punishment for their own misdeeds, such as worshipping gods besides Yahweh. This, notes Wright, is a common theological “divine wrath” interpretation of geopolitical catastrophe for the time period and the region (e.g., the Moabites reasoned the same way when the Israelites subjugated them). But this was not enough to force the transition from monolatry to monotheism. What did this in the case of the Israelites was the magnitude of the subjugation, which included the destruction of the temple, forced migration, and servitude. At a time when national identity, religious identity, and ethnic identity were all intertwined, the only theologically coherent response to this degree of subordination was to argue that Yahweh was allowing it to happen as punishment. But this leads to the next logical conclusion: Yahweh must be using the Babylonian god Marduk as his puppet to mete out his punishments. Yahweh must be very great indeed. And so this line of reasoning progressed, and out of Israelite resentment grew theological revenge. It was more than Marduk being a puppet: only their god (Yahweh) really existed; all other gods were false. And that's how monotheism came about in Judaism.

So that's the story. Now, what's more likely: the naturalistic explanations given above or that the Judeo-Christian God actually exists? I think you know the answer. However, I know there are many impediments to accepting the naturalistic explanation. First, the story is complicated, and it spans several disciplines that most students aren't familiar with: evolutionary theory, paleoanthropology, deep history, etc. Second, it's difficult to accept that a series of small steps can eventually take one all the way to a faraway conclusion. This is similar to visualizing how evolution can start off with single-cell organisms and end up with complex animals like us. The key, of course, is gradualism: you gradually make changes to an organism for billions of years and you end up with something totally different. But it is difficult to truly grasp these super long perspectives. The third reason why it's hard to accept the naturalistic explanation is what got us started down this path. Many of us were taught our religion when we were children. Many of us will never let it go. Those of us who do let go invariably suffer through a moment of crisis. Beliefs that are deep-rooted are, as we can see, special. They tend to stick around, and if you try to remove them, there's a lot of pain involved.

Does it matter? Should you just leave these old ideologies intact in your brain? I don't think so. The neuroscientist David Eagleman (2020, chapter 10) reminds us that older neural networks (i.e., those that were wired earlier on in our lives) have an impact on how newer neural networks are connected. If critical thinking is important to you at all, you might have to dig out the roots of old, outdated beliefs.

And what if you are all-in with the naturalistic explanation? What if you really do think that there's no chance (or very little chance) that any of the world religions are actually true? What does that mean for Ninewells? Should we ban parents from teaching their children religion? Is it child abuse for parents to force their religion on their children?3

 

 


 

Do Stuff

  • Read from 532d-541b (p. 227-237) of Republic.

 


 

Executive Summary

  • Ockham's razor is a methodological principle which states that given competing theories/explanations, if there is equal explanatory power (i.e., if the theories explain the phenomenon in question equally well), one should select the one with the fewest assumptions.

  • According to some, the principle known as Ockham's razor also includes an anti-razor: one should aim for maximal explanatory power in one's theories and avoid simplifying them to the point where their explanatory power is undermined.

  • A fully naturalistic account of the emergence of religious beliefs, a project begun by Charles Darwin himself, is possible.

  • If one is assuming Ockham's razor, then one should select a naturalistic explanation of religion over belief that the Judeo-Christian God actually exists.

 


 

FYI

Suggested Reading: Internet Encyclopedia of Philosophy, Entry on William of Ockham, Section on The Razor

TL;DR: Occam's Answers, What is Occam's Razor?

Supplemental Material—

Related Material—

Advanced Material—

 

Footnotes

1. Even satirical religions note this aspect of conceptions of the afterlife: that it is typically just an extension of life on Earth. The Church of the Flying Spaghetti Monster claims that its "Heaven" is much like life on Earth except there’s a beer volcano and a stripper factory; its "Hell" is similar to its Heaven, but the beer is stale and the strippers have STIs.

2. There are many explanations of how ancestor worship began. I even came across one in the horror mini-series Midnight Mass. In it, one of the protagonists wagers that belief in deities came from thinking about, of all things, campfires. According to his view, when our ancestors were still hunter-gatherers, they would see campfires out in the distance and wonder what that tribe was like, whether they were trouble or potential allies, etc. The character says that we then looked to the sky and gazed at the stars. The stars are like campfires and our ancestors must've marveled at the kind of people that could live in the sky. As such, the notion of a supernatural being that lives in the sky began.

3. Of relevance here might be a discussion on what rights children have. I take up this topic in Towards Kallipolis.

 

 

...No Masters

 

 

Originally, I set out to understand why the state has always seemed to be the enemy of 'people who move around'... Efforts to permanently settle these mobile peoples (sedentarization) seemed to be a perennial state project...

The more I examined these efforts at sedentarization, the more I came to see them as a state's attempt to make a society legible, to arrange the population in ways that simplified the classic state functions of taxation, conscription, and prevention of rebellion...

I began to see legibility as a central problem in statecraft.

~James C Scott

Argument Extraction

 

On the categorization instinct

A fallacy(?) that I find to be particularly vexing is the so-called sorites argument, also known as the sorites paradox, the heap argument, or the ancestral argument. To be clear, I can't say with authority that this is a fallacy. Some philosophers actually bite the bullet and say the reasoning within it is valid (see Dennett 2014: 395-396). This is why I'm not calling this the Informal Fallacy of the Day. So, let me give you the context and you can decide for yourself.

A heap of tannin

Take a look at the heap of tannin pictured here. (Tannin, by the way, is used for a bunch of things including winemaking, making ink, and tanning leather.) Clearly, that's a heap (or pile or whatever). Now consider one or two grains of tannin powder. Would you call that a heap? I hope not. Two grains of tannin powder (or sand or wheat or whatever) are not a heap. Now add a grain to the two grains you have in your mind's eye. Is this a heap? Nope. It doesn't look like adding a single grain turns a non-heap into a heap. But(!), this seems to make the existence of heaps impossible. How does a non-heap turn into a heap? Here is the argument in (sorta) standard form:

  1. 1 grain of wheat does not make a heap.
  2. If 1 grain doesn’t make a heap, then 2 grains don’t.
  3. If 2 grains don’t make a heap, then 3 grains don’t.
    ...
  1,000,000. If 999,999 grains don’t make a heap, then 1 million grains don’t.

  Therefore, 1 million grains don’t make a heap.

You can see that an implied assumption here is that adding a single grain does not turn a non-heap into a heap. And it doesn't matter how high you go. We stopped at 1 million, but we clearly could've continued to 1 billion or more.
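If you'd like to see that structure laid bare, here is a minimal sketch (my own illustration, not from the readings) of the sorites chain in code. Granting the implied assumption means the verdict can never flip, no matter how many grains we add:

```python
# The sorites chain as a loop. Premise 1 sets the verdict; the implied
# assumption (one grain never makes the difference) means no iteration
# can ever change it.
def is_heap_by_sorites(grains):
    heap = False  # premise 1: one grain is not a heap
    for _ in range(grains - 1):
        heap = heap  # each conditional premise preserves "not a heap"
    return heap

print(is_heap_by_sorites(1_000_000))  # False: a million grains, still no heap
```

The loop body is deliberately vacuous; that is the paradox. To resist the conclusion, you must reject one of the conditional premises, i.e., draw a line somewhere.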

The argument appears to be valid. That is, if we grant the assumption stated in the previous paragraph, then the conclusion does logically follow. Moreover, this assumption does seem reasonable in the earlier premises: two grains, for example, are definitely not a heap. However, this doesn't mean that the argument is sound. Recall that soundness requires both validity and premises that are actually true. In other words, granting that this argument is valid, we have to argue that some premise is false if we want to fight off the conclusion. But which one? At what point does a non-heap become a heap? There appears to be no conceivable demarcation point that doesn't seem arbitrary. Maybe we can draw the line at 1,883 grains. But why? What's so special about 1,883? You can apply this paradox to any composite object (an object that is composed of smaller units). For example, at what point does a get-together turn into a party? At what point do you go from having some similarities with someone to having "a lot" in common? How many hairs does it take to make a beard?

Dutton's Black and White Thinking

You might think this is philosophical nonsense. Maybe it is. But it does reveal an important feature of our cognition. There appear to be just noticeable differences: a minimum degree by which a stimulus must change in order for a before-and-after difference to be detectable to the human mind (Dutton 2020: 44-45). Put differently, there can be objective changes to some stimulus—say, a tone, a shade of blue, a physical sensation—that the human mind won't register until they reach a certain magnitude. In other words, some sensory experience might change, but you won't notice until the change surpasses some minimum threshold set by your cognitive system. The psychologist Kevin Dutton puts it well. "Life proceeds in grains. But our attention is drawn only to heaps" (Dutton 2020: 43).1
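The standard formalization of this idea in psychophysics is Weber's law, which says that the just noticeable difference grows in proportion to the baseline stimulus. (This formalization is my addition; Dutton's discussion is informal, and the Weber fraction below is a made-up value.)

```python
# Weber's law sketch: the detectable change scales with the baseline.
def jnd(intensity, k=0.05):
    """Smallest change detectable at a given stimulus intensity
    (k is the Weber fraction, invented here for illustration)."""
    return k * intensity

for grams in (100, 1_000, 10_000):
    print(f"holding {grams} g, you need a change of ~{jnd(grams):.0f} g to notice")
```

Holding 100 grams, a 5-gram change registers; holding 10 kilograms, you would need around 500 grams. Life proceeds in grains, but attention tracks proportions.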

Of course, our cognition is built this way because, at least at some point, this was adaptive, i.e., evolutionarily advantageous. In fact, it looks like we need our cognition to be this way so that we can process the world efficiently and effectively. At the same time, however, our capacity to categorize might occasionally be overactive. We might project categories onto the world that aren't necessary and don't really make sense. For example, according to Dutton (43), Facebook (now Meta) has over 70 different gender options for users to choose from in its dropdown menu, including genderqueer, pangender, and two-spirit. It seems very unlikely that so many gender categories exist. For one, how would one learn all of them in order to select which one applies? Students complain when I give them 20 different words to learn for a test. They would be in an uproar if I gave them 70(!). Why is there this proliferation of categories? Dutton, like me, blames social media (263-267). It looks like social media algorithms circulate these new category concepts based on user demand rather than veracity or usefulness. Again, Dutton summarizes for us:

“Categorization [is] innate. That much seem[s] indisputable. And with good reason. If we lacked the ability to organize the world, to sort our experiences into cognitive clumps of shared semantic meaning, then everything around us would be chaos and nothing would be certain or predictable. We'd be forever caught up inside an eternal Sorites matrix. Then again... there's a cap on the usefulness of such neurocognitive housekeeping. If those existential piles are arranged too neatly, are tidied up too fastidiously, our capacity to generalize from one context to another will be limited” (Dutton 2020: 56).

The moral of the story is: you naturally, effortlessly, and automatically categorize the world around you; but that doesn't mean that the categories you dream up actually refer to anything in the real world. All critical thinkers should check their categories, making sure that they either actually correspond to the world or are at least useful.

So is the sorites argument valid? Do heaps exist or not? I'm not sure. I only know that I can't help but see heaps.

“[O]ur black-and-white brains might not be able to draw the line between a heap and a non-heap when we're adding one grain at a time. We might not be able to track and determine the granular, incremental, chimeric transition between them. But put a heap and a non-heap side by side in front of us and we instantly notice the difference. It's sufficiently black on the one hand. And sufficiently white on the other” (Dutton 2020: 53).

 

 

 

On police abolition

Question: If we are starting society anew, what role should we give to the police? Plato's thought that it might be easier to just start with children opens up a world of possibilities. Thinking about what sorts of policies Ninewells should have has already allowed us to rethink some practices that are flawed in our current society. But we really can go farther. Perhaps we can abolish entire institutions. We considered just that—moving towards abolishing religion—in the last lesson. Instead of outright banning religion, though, we considered simply banning the imparting of religion to children. I'd wager that this ban alone would drastically reduce the incidence of religious belief, paving the way for eventual extinction. Today we put another institution on the endangered species list—one that you've been raised to think is essential but that perhaps might not be so necessary after all.

Here's a little context for setup. Nation-states have usually begun through conquest, and so from the beginning there are various associated injustices: slaughter, enslavement, genocide, etc. To suppress rebellion and population leakage, nation-states have used all sorts of carrots and sticks; but in the beginning, it was mostly just sticks (Scott 2017). Ninewells, however, could truly be a city begun via contract. We can lay out the policies, and those who want to stay can stay. If there is too much population loss, we can open up the borders until we reach our carrying capacity. In fact, we might be the first city-state that's ever done this. Moreover, if everyone in Ninewells is voluntarily there (a true first for nation-states, I think), then the typical tools for coercion might not be necessary, as they are in typical nation-states.

Vitale's The End of Policing

Having said that, consider this. The police are, generally speaking, an apparatus of coercion: their function is to enforce laws (keyword: enforce). That is, they coerce the citizenry into obedience of the law, either by implicit threat or by arresting those who break the law. Above I posed a question: if we are starting society anew, what role should we give to the police? One controversial answer is none.

Why would we want to get rid of the police? Why not just reform? Well, Vitale (2017) argues that the basic nature of the law and the police is to be a tool for managing inequality and maintaining the status quo. In other words, the function of law enforcement is to maintain our current level of social inequality. Increasingly, this has been done in more militaristic ways (Balko 2013). And so, police reforms that don’t address these realities are doomed to reproduce and intensify them. So we can reform the police with this reality in mind. Or we can just get rid of the institution as a whole, since it may no longer be essential.

Here's a history lesson to make the point. Vitale reminds us that policing came about in an era of slavery (and hence slave revolts), colonialism (and hence anti-colonialist revolts, e.g., in Ireland), and industrialization (and hence labor activism and strikes, e.g., the Luddites and the British Jacobins). This legacy continued in the first professional police department, established by Robert Peel in the UK, which regularly broke up strikes. In fact, Peel himself made his name suppressing Irish revolts. Per Vitale, a culture of corruption and incompetence permeated most major departments. For example, departments were usually filled by political appointment, so officers would feel loyal to the person who appointed them—almost like a private army. In the American South, officers were even recruited from slave patrols—not exactly a pool of candidates ideal for the impartial administering of the law. It's also the case that gamblers and then bootleggers were an important source of revenue for these early departments, since police would regularly accept bribes from individuals trafficking contraband (like moonshine). The history of detectives, by the way, is not much better. Their primary role early on was spying on labor activists and serving as agents provocateurs who would incite violence to legitimize the violent suppression of activist movements. In other words, detectives would secretly infiltrate dissident groups and then incite violence so that their department could claim that a violent suppression of that group was legitimate, even though it was their own agents that had started the violence.

Shahawar Matin Siraj
Shahawar Matin Siraj.

In the 20th century, authorities spied on anarchists and communists, anti-war activists, civil rights leaders (such as Martin Luther King, Jr and Malcolm X), and, more recently, anti-police-violence groups, environmentalists, Muslims, Occupy Wall Street protesters, animal rights activists (who violate ag-gag rules), and even anti-death-penalty activists. Moreover, both federal agencies and local police departments engage in questionable entrapment programs, e.g., the case of Shahawar Matin Siraj. Siraj, who is likely a person with mental illness, was egged on by undercover law enforcement agents to conspire to plant bombs; reportedly, Siraj told the conspiring agent that he’d have to ask his mom for permission first. He was sentenced to 30 years in prison. A relevant question here might be: having read all this, do you feel any safer?

Most concerning perhaps is the conjunction of policing and spying that has given rise to “parallel construction” techniques of investigation. Essentially, parallel construction works like this. First, law enforcement agencies illegally spy on Americans who are committing crimes. Sure, these are (typically) actual criminals, but it is still warrantless—hence illegal—spying. Then, policing agencies construct an alternative, legal (but false) account of how they found the evidence against them. In other words, they invent a story about how they legally caught the culprit, even though it was actually done through warrantless spying. Finally, the false (but legal) version of how they caught the culprit is used in a court of law against the defendant. And this is all fair game under current law.

This is all, I think, very concerning. Police are supposed to protect us, and they do. But they are also engaging in highly questionable practices. And notice, by the way, that I haven't even brought up death-by-police and police overuse of force. Add those two to the list above, and we can begin to see that the function of the police perhaps isn't to serve and protect but to maintain the status quo of social inequality. Perhaps there's no fixing that institution. As Amna Akbar argued in a recent seminar: because racialized, gendered, and capitalist violence is fundamental to policing, the police cannot be fixed and so must be abolished.

But what would happen? Well, key to the argument I'm proposing is not a sudden extirpation of police, à la the defund-the-police movement (which apparently most Americans, including African Americans, don't agree with). The argument here is this. If it is possible to relieve law enforcement of their duties through technology and smart policies, then we should do so (for the safety of citizens, to save money, to end an institution with a less-than-rosy track record, etc.). Further, it is possible to relieve law enforcement of their duties through technology and smart policies. Therefore, we should relieve law enforcement of their duties. Thus, there will be no need for police. How can we relieve law enforcement of their duties? Here's some Food for Thought...

 

 

Objections to police abolition

 

 


 

Do Stuff

  • Read from 543a-559d (p. 238-256) of Republic.

 


 

Executive Summary

  • Humans have a categorization instinct, a tendency to create categories for similar objects or composite objects and project them onto the world.

  • Our projected categories are sometimes accurate and sometimes not. Critical thinkers should have a method for discerning when their categories actually correspond to the world itself.

  • Vitale (2017) argues that the basic nature of the law and the police is to be a tool for managing inequality and maintaining the status quo.

  • It appears to be possible to relieve law enforcement of their duties through technology and smart policies, thereby extinguishing any rationale for the existence of police departments.

 


 

FYI

Suggested Reading: Kelsey Griffin, Harvard Law School Holds Lecture on Police Abolition

TL;DR: Has policing in America gone too far?

Supplemental Material—

Related Material—

Advanced Material—

  • Reading: Dominic Hyde and Diana Raffman, Stanford Encyclopedia of Philosophy Entry on Sorites Paradox

 

Footnotes

1. If you've taken my logic course, then you know that Aristotle believed logic was fundamentally just reasoning about categories. Everything was either in some category or not. There was no in-between. Moreover, there was a set of necessary and sufficient conditions for inclusion in a category. Aristotle appears to have been wrong. There is room for fuzzy logic, a logic in which you can be more or less in a category (Dutton 2020: 51-54).
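Here is a minimal sketch (my own toy illustration; the bounds are arbitrary) of what fuzzy membership looks like: instead of a yes/no verdict, membership in the category "heap" comes in degrees between 0 and 1.

```python
# Fuzzy membership: no single grain flips the verdict; membership
# in "heap" rises smoothly instead. The bounds are made up.
def heap_membership(grains, lower=100, upper=10_000):
    if grains <= lower:
        return 0.0  # definitely not a heap
    if grains >= upper:
        return 1.0  # definitely a heap
    return (grains - lower) / (upper - lower)  # linear ramp in between

for n in (10, 1_000, 5_000, 20_000):
    print(f"{n} grains -> a heap to degree {heap_membership(n):.2f}")
```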

 

 

I Alone

 

There’s only one winner. Only one trophy. Only one way of looking at it. Don’t let them take it away from you.

~Lawrence Dallaglio

Deliberate Practice

Today's reading covers Plato's views on democracy and tyranny. Although Plato's views on democracy aren't very flattering and deserve to be pondered carefully, in this lesson I want to focus on what Plato thinks is the absolute worst form of government: tyranny. Once democracy has devolved into complete chaos, one man will take power and restore order. Of course, this job won't be pretty, and his policies will generate great resentment. By the time the people realize that the tyrant has amassed too much power, however, he will be impossible to remove from office.

How does the tyrant convince the citizenry to give him absolute power? It must be the case that civil society has truly degenerated to a level of unlivable disorder. Once the people can no longer stand the mayhem, it is at that point that the tyrant says, "I alone can fix this."

Never one to make normal connections, I find that this part of the dialogue makes me think about things that ought to be done alone as well as things that are best not done alone. Let's first think about things that really ought to be done alone. Let me start with some context.

Ericsson's Peak

Earlier in this course, I reported on the work of various scientists who are finding that there are genetic influences on many of our character traits and abilities, including political preferences, intelligence, and disposition towards violence. It should really be emphasized that this is in no way some sort of "determinism" where genes determine with absolute certainty the course of one's life. Rather, genes influence or predispose one towards certain behaviors and preferences. The reason for mentioning this here is twofold. First of all, many right-wing ideologues attempt to use these findings to justify social inequality. This is far from the only interpretation. One can just as easily use the same findings to justify left-leaning social interventions (e.g., Harden 2021).1 Second, we actually do know what the most important factor is if you want to achieve excellence in some particular field. And it's not genes—thankfully. It's deliberate practice.

What is deliberate practice? Homing in on what deliberate practice is was the life goal of Anders Ericsson (1947-2020), the expert on expertise. Let's say that you want to become an expert at something: you want to be a concert pianist, a world-famous surgeon, a chess grandmaster, an Olympic gold medalist, whatever. What's the most important ingredient? Although in some domains, like basketball, genes really do matter, in most domains the most important factor is actually the kind of practice that you put into mastering the relevant skill. In particular, one must engage in deliberate practice: a type of practice where one engages in highly-focused, well-regimented practice sessions such that quality feedback is produced and new practice activities are then developed to target the skills that are still lacking (based on the feedback from the previous practice sessions). Ericsson summarized his life's work in his 2017 book Peak. After studying experts in various fields for decades, he figured out what they all did in common: deliberate practice.

“Deliberate practice develops skills that other people have already figured out how to do and for which effective training techniques have been established. The practice regimen should be designed and overseen by a teacher or coach who is familiar with the abilities of expert performers and with how those abilities can best be developed. Deliberate practice takes place outside of one's comfort zone and requires a student to constantly try things that are just beyond his or her current abilities. Thus it demands near-maximal effort, which is generally not enjoyable. Deliberate practice involves well-defined, specific goals and often involves improving some aspect of the target performance; it is not aimed at some vague overall improvement... Deliberate practice is deliberate, that is, it requires a person's full attention and conscious actions... Deliberate practice involves feedback and modification of efforts in response to that feedback” (Ericsson and Pool 2017: 99; emphasis added).

World Athletics Championships 2007 in Osaka; Men's High Jump champion Donald Thomas
Donald Thomas.

To me at least, it is a relief to hear that one's own work ethic and the time that one puts into a skill are more important than genes to the overall outcome. Having said that, deliberate practice is extremely difficult. I emphasized a couple of aspects of deliberate practice in the quote above. Notice in particular that Ericsson admits that it is not enjoyable. Having worked as a jazz pianist for some time, I can tell you that working up to a level where you can play for money is extremely difficult and not at all fun. Rehearsing and mastering certain pieces is extremely exhausting, time-consuming, and frustrating. And before you say that this was just my personal experience, note that Ericsson confirms for us that 100% of the experts he studied arrived at their expertise through deliberate practice and that none of them reported enjoying practicing. Put differently, if you're enjoying your rehearsal, you're not engaging in deliberate practice. In fact, in chapter 8 Ericsson takes the time to demolish the notion of prodigies (e.g., Mozart, the high jumper Donald Thomas, etc.) and so-called “idiot savants”. It is deliberate practice in all cases that leads to excellence and elite-level performance. In other words, according to the expert on experts, exactly zero experts in a domain got there without the grueling process of deliberate practice—and Ericsson looked for an exception for 30 years!2

You might be thinking to yourself: Why is this being brought up here? It's for one very important reason: solitary practice is key to deliberate practice. As it turns out, deliberate practice is one of those things that requires individual study. In other words, you just have to seclude yourself, get rid of all distractions, and put in the work. Sure, have your group practice sessions and talk with a coach. But the actual practice itself is a very lonesome experience. Alone. That's how you master a skill.

“[A]t its core, deliberate practice is a lonely pursuit. While you may collect a group of like-minded individuals for support and encouragement, still much of your improvement will depend on practice you do on your own” (Ericsson and Pool 2017: 176-177).

Something to think about next time you're planning a group study session...

 

Argument Extraction

 

 

 

Democracy at Work

Tyranny sounds like a bad deal. I'd wager that most Americans don't want to submit to a tyrant—despite what some liberals say about Trump supporters (see Hibbing 2020). So here's my question: Why do people regularly submit to a tyrant at work? Maybe you like your job, and maybe you have a nice boss. But if you're like most employees in a capitalist enterprise, you have no ultimate say in any essential aspects of the business. You don't get to decide what to produce, at least not unilaterally. You don't get to decide where and how to produce it. And you certainly don't get to decide what to do with the profits. Worse yet, thanks to innovations in scientific management (like Taylorism) and efficiency algorithms which use computational power to micromanage your every move (like those used at Amazon), many jobs are extremely unfulfilling and set unreasonably high expectations for the human body and mind. Let's start with Taylorism. Here's a helpful video:

 

 

Let's talk briefly about efficiency algorithms. As many know, one of my primary philosophical interests has been artificial intelligence and its effects on society. I am not—I don't think—an alarmist, like some writers on the subject who think that AI will bring about the end of humans. However, I do think that there is a high likelihood that (a) automation will eliminate many jobs which won't be replaced with other jobs, and (b) those jobs that are left over will incorporate more and more efficiency algorithms that will make the work life of employees increasingly miserable (see the lesson titled The Chinese Room from my 101 course). In particular, I believe certain job-types (like management roles) will be more easily automated than others, the result being that there'll be more and more jobs where you are micro-managed by a supercomputer—a technologically updated version of the push and quota system used during slavery (where slaves were whipped if they didn't reach their daily goal and the goal was progressively increased as time passed). Put differently, many jobs are already difficult enough as it is with a human manager micromanaging your tasks all day. This would get much worse, both psychologically and physically, if your manager were an all-seeing, all-documenting AI. One person who's gone through this, Emily Guendelsberger, writes about it in her book On the Clock.

“I was hired for picking, which is generally regarded I think as the least desirable job at warehouses. We would get a cart and we’d have the scanner. There were about, I think it was four or five steps to going out to locate the coordinates that it gave you and find the actual, whatever the thing was. You would just walk around all day and do that. Every single step of this was accompanied by a little countdown. At the bottom of the screen, there is a blue bar. It says how many seconds you have left to do it, and then it would start ticking those seconds down. So it’s kind of constantly reminding you like, ‘Hey, move. Keep moving. Keep moving. You are not keeping up’” (Guendelsberger in an interview with The Intercept; see also Guendelsberger 2019).
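To see the logic of such a system, here is a toy model (entirely hypothetical; this is not Amazon's or anyone's actual software) of a quota that ratchets: every time the worker beats the clock, the time budget for the next task tightens, echoing the push system described above.

```python
# A hypothetical ratcheting quota: beat the timer and the timer shrinks.
def next_budget(budget, time_taken, squeeze=0.98):
    # Succeeding tightens the budget; failing would instead ding the
    # worker's performance score (not modeled here).
    return budget * squeeze if time_taken <= budget else budget

budget = 30.0  # seconds allowed per pick (made-up starting value)
for time_taken in (25, 28, 29, 31, 27):
    budget = next_budget(budget, time_taken)
    print(f"next pick must be completed in {budget:.1f} s")
```

Note the asymmetry: good performance is rewarded with a harder target, and the target never loosens.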

 

Graeber's Bullshit Jobs

By the way, there's also a class of jobs called bullshit jobs, which are not the same as shit jobs (Graeber 2019). According to Graeber, shit jobs are often blue-collar, low-wage, and socially looked down upon (at least by some). Bullshit jobs, on the other hand, are usually salaried white-collar jobs. A job is a bullshit job if the employees themselves consider it to be a bullshit job (even if they profess otherwise to their coworkers). In other words, bullshit jobs are those jobs where even the employees who perform the job can't justify the job to themselves. Bullshit jobs, by the way, are on the rise, according to Graeber. Why? In chapter 5 of Bullshit Jobs, Graeber gives his theory: the proliferation of bullshit jobs came about through a change of perspective on what's morally required of corporations. In the 1950s, 60s, and 70s, there was an implied Keynesian bargain, whereby increased profits due to increased productivity would be at least partially redistributed to the workers in the form of higher wages and better benefits. In the 1970s, however, the phenomenon of the great decoupling took place: productivity kept rising but wages stagnated. Where did the profits go? Much of it, obviously, lined the pockets of the owners and CEOs of firms. But a considerable amount went towards the hiring of middle management and their administrative staff, i.e., bullshit jobs.

Maybe you think that this is just the way work is or has to be. But some have actually dreamed up a different possibility. The trouble is that it has a worrisome name. It's called Marxism.

 

Important Concepts

 

The capitalist class-process


An admittedly oversimplified
account of Marx's
capitalist class-process.

It is far beyond the scope of this lesson (and class) to cover in detail the views of Karl Marx. (For a bit more detail than what is given here, the interested student can refer to the lesson titled The Game from my 103 course. For a lot more detail, take PHIL 117.) Here's what you need to know for our purposes in this course. For Marx, class doesn't necessarily have to do with how much money you earn; rather, your class has to do with your role in the production process. Those who work for private industry, like maybe you do, labor to produce profits for their company. This labor may take the form of manufacturing some product, of selling some product, or perhaps of transporting some product so that it can be sold. Whatever the case may be, we know that, for workers, the value that they produce with their labor is greater than what they get paid. In other words, if you work in the private sector, you only have a job because you make your boss more money than what he/she pays you. That's simply how it works. So, there's the value that you produced with your labor that you actually get paid for, and there's the value that you produced with your labor which your boss keeps. Marx has labels for these. According to Marx, necessary labor is performed during the portion of the day in which workers produce goods and services the value of which is equal to the wages they receive (i.e., the value that you produced with your labor that you actually get paid for). Then there's surplus labor: labor performed during the portion of the day in which workers continue to work over and beyond the paid portion of the day (i.e., the value that you produced with your labor which your boss keeps).
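A made-up numerical example may help here (the figures are mine, purely for illustration). Suppose a worker produces $400 of value over an 8-hour day and is paid $120:

```python
# Illustrating necessary vs. surplus labor with invented numbers.
value_per_day = 400.0  # dollars of value the worker's labor produces
daily_wage = 120.0     # what the worker is actually paid
hours = 8.0

necessary_hours = hours * (daily_wage / value_per_day)  # portion covering the wage
surplus_hours = hours - necessary_hours                 # portion the boss keeps
surplus_value = value_per_day - daily_wage

print(f"necessary labor: {necessary_hours:.1f} h")  # 2.4 h
print(f"surplus labor:   {surplus_hours:.1f} h")    # 5.6 h
print(f"surplus value:   ${surplus_value:.0f}")     # $280
```

On this accounting, the worker covers their own wage in the first 2.4 hours; the remaining 5.6 hours produce value that is appropriated by the employer.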

Of course, Marx argues against capitalism, which he defines as a social order where it is ok for the owners of businesses and firms to pay their employees less than what they produce. Put more formally, what Marx calls the capitalist class-process—that is to say the class-process that is prevalent in the United States—is the legalized yet ‘criminal’ activity in which the products of the laborers’ creative efforts are appropriated (i.e., taken) by those who have nothing to do with their production and who return only a portion of those fruits to the workers (wages), keeping the remainder (the surplus) for themselves. Put yet another way, the capitalist class-process is the state of affairs where the employers regularly and reliably exploit their employees by paying them less than the value they produce. Marx's contention is that this is exploitation. So, he concludes, the capitalist class-process must be abolished.

When Marx makes the case against capitalism, however, what he really means is the capitalist class-process—the exploitative relationship between employers and employees. That's at least the way the Amherst School of Marxism interprets Marx. This school, which pays special attention to volumes 2 and 3 of Marx’s Das Kapital, claims that the fundamental Marxist argument is that firms should be run by employee-owners (see Burczak, Garnett & McIntyre 2017). The way to do this is to convert society into one in which the workers are the owners of the enterprises in which they labor; in other words, build society so that the norm is worker-owned cooperatives.3


Amherst College,
home of the Amherst
School of Marxism.

Worker cooperatives are a type of firm where the workers play two roles: 1. their normal labor function; but also 2. an administrative function which allows them to vote on what the firm makes, where the firm makes it, and what the firm does with the profits. In a word, this is democracy at work. These, in case you didn't know, already exist. In these enterprises, surplus labor is still created, but it is appropriated by those who created it: the workers themselves. There could still be pay differentials, with some getting paid more than others, but the key difference is that the workers have a say in establishing the pay differentials—through their vote. The result would be that, instead of the tyranny that we submit to at our places of work today, we can instead achieve democracy at work—assuming you don't think democracy is non-optimal (like Plato does).4

 

Food for thought...

 

So?

What does this mean for Ninewells? There are both acknowledged positive consequences and an intuitive fairness to worker-owned firms (see footnotes 3 and 4). If this truly is a means of reducing exploitation in the workplace and improving the mental health and quality of life of workers, then this is perhaps an organizational arrangement that we can promote in our new city. However, I assume that at least some of you will be uncomfortable with the notion. Why? This might, after all, be one of those things that are best not done alone. It does seem like worker-owned cooperatives work at least as well as traditional firms (again, see footnote 3). So why do some have a knee-jerk reaction against this interpretation of Marxism? I think it might have to do with what we were taught as children. We've been taught, since childhood, that work is supposed to be a certain way—and it's definitely not the way Marx envisioned. So, just like the other topics we've covered recently (the ban on teaching religion to children and the abolition of police), the main reason why we are opposed to something like worker-owned cooperatives is that we have been conditioned to reject the idea. And that's not exactly a good reason.

In the lesson titled No Gods... I promised I'd touch on four topics related to beliefs that, once they take hold in childhood, are hard to let go of. We've now discussed three of those four beliefs: the belief that teaching their religion to children is a parent's right, the belief that police are necessary, and the belief that the workplace has to be as it is under capitalism. I hope these discussions have sparked some questions in your mind about the power of beliefs imparted to children. Perhaps they've also made you think about which of your own beliefs don't let you process information in an even-handed, unbiased way. One more belief to go...

 

 


 

Do Stuff

  • Read from 559d-569c (p. 256-269) of Republic.
  • Start preparing for Quiz 3.4+.

 


 

Executive Summary

  • Plato considers democracy to be the second-worst form of government. Because it values all desires equally, rather than ranking them hierarchically (as Plato does), it does not produce optimal results and eventually devolves into tyranny.

  • There are some things that are better done alone. For example, if you'd like to master some particular skill, you should engage in deliberate practice—a method that involves a lot of solitary practice.

  • There are also some things that are best done as a group. Utilizing Marxism of the kind advocated by the Amherst School, we saw an argument for conceiving of the capitalist class-process as a form of legalized exploitation. The solution advocated is to transition to a society of worker-owned enterprises, and there is some evidence that these function at least as well as traditional firms—or perhaps even better.

 


 

FYI

Suggested Viewing: Workplace Democracy, Workers' Self-Directed Enterprises (WSDE)—by Richard Wolff

Supplemental Material—

Related Material—

Advanced Material—

 

Footnotes

1. In her recent book The Genetic Lottery: Why DNA Matters for Social Equality, Kathryn Paige Harden aims to wrest the interpretation of genetic studies away from white nationalists and those attempting to justify the status quo. Instead, she makes the case for how to use genetics to make society more just. Her basic argument is this. You can either be a eugenicist (which is not an option for non-racists), attempt to be "gene blind" (where one pretends that genes don't matter for social outcomes and that everyone is born the same), or hold the anti-eugenics position (where one uses genetic markers to find individuals who need interventions so that they have more positive life outcomes). She argues for the anti-eugenics position. This, she argues, is better than the "gene blind" position, since approaches to equity that don't take into consideration genetic variants shown to give students an academic advantage are likely to be either failures or inconclusive. For example, Harden considers the initiative to ameliorate the word gap. In case you haven't heard of the word gap, researchers found that the average child in a high-income family hears 2,153 words per waking hour, the average child in a working-class family hears 1,251 words per hour, and the average child in a welfare family hears only 616 words per hour. The authors of this study and subsequent researchers have posited that the word gap—or certainly the differing rates of vocabulary acquisition—partially explains the achievement gap in the United States. But, Harden interjects, what if the achievement gap (as well as the highly communicative nature of parents/children) is actually best explained by genes? In other words, instead of blindly assuming that hearing more words leads to better educational outcomes, maybe both being more chatty and doing better in school are actually products of the same underlying cause: certain gene variants. On this point, Harden argues that unless a study is done controlling for gene variants, say, on high-income parents with adopted children, the word gap data is merely correlational. Basing a social intervention on this merely correlational foundation would be a waste of money and time.

2. What if you want to master a skill in which there isn't already a well-established training regimen? That's ok. Ericsson extrapolates from the notion of deliberate practice to give a more generic approach to mastering a skill. The key is to follow the three Fs: focus, feedback, fix. First, make sure you really focus on performing some task as well as you can possibly do it. Make sure to have some objective means by which to measure how well you performed the task. Then study your feedback. What did you do right and what did you do wrong? Then attempt to fix your shortcomings by coming up with exercises that target just those aspects in which you need improvement. Do that and then repeat the whole process again. And then again. And then again. You catch my drift. For more info, see chapters 5 and 6 of Peak.

3. In her chapter in González-Ricoy & Gosseries (2016), Virginie Pérotin makes the case that, contrary to popular opinion, worker cooperatives are larger than conventional businesses, are not less capital intensive, survive at least as long as other businesses, have more stable employment, are more productive than conventional businesses (with staff working “better and smarter” and production organised more efficiently), retain a larger share of their profits than other business models, and exhibit much narrower pay differentials between executives and non-executives.

4. By the way, for reasons that are completely beyond me, Ronald Reagan once advocated for worker-owned enterprises, going so far as to state that this is "the next logical step."

 

 

The Wisdom of Psychopaths

 

 

Nothing in the world is harder than speaking the truth and nothing easier than flattery. If there’s the hundredth part of a false note in speaking the truth, it leads to a discord, and that leads to trouble. But if all, to the last note, is false in flattery, it is just as agreeable, and is heard not without satisfaction. It may be a coarse satisfaction, but still a satisfaction. And however coarse the flattery, at least half will be sure to seem true. That’s so for all stages of development and classes of society.

~Fyodor Dostoyevsky

Argument Construction

Great Violence

By this point you've been exposed to and studied over a dozen arguments. It's worth taking time to reflect on the nature of philosophical argumentation. Recall that if an argument is valid, then the premises force the conclusion upon you. In other words, if you accept the premises as true and the conclusion does necessarily follow from these premises, then you must accept the conclusion. Notice that there is a violence to argumentation. Things are "forced" upon you and you "must" accept them. Things are no different when you are responding to, say, analogical arguments. If someone makes an argument to you by way of analogy, the way to undermine their position is by breaking the analogy. If you are successful, you have refuted their argument—the term refute coming from the Latin refutare, which means "to repel".

Forcing beliefs upon someone, breaking their analogies and repelling their conclusions—these are not friendly endeavors. But now that it's time to construct your own arguments, you must psychologically prepare yourself to impose your views on another, rational as they may be. Here are some simple guidelines:

A history of violence
An individual
with a history
of violence.

  • When you are constructing an argument, be sure to walk the reader through every step of your reasoning. This is harder than it seems. We make many assumptions that we are not consciously aware of when we arrive at our beliefs (if we do in fact arrive at our beliefs in some sort of rational way). Your task is to think about and discover all these small assumptions that you make. Make them explicit first. Then defend them. Make it so that you walk the reader—although perhaps "push" is a better word—down your train of thought, making them accept all your assumptions along the way, so that, when you arrive at your conclusion, they are forced to accept it. Give them no elbow room. Make them walk in a straight line, all the way to your view.

  • Use logical indicator words, like "if...then", "and", and "or". Using "if...then" chains is extremely helpful. Someone who believes A will be forced to believe D if you first show them that "If A then B", then "If B then C", and finally "If C then D" (see the sketch at the end of this list).

  • Be very specific in your claim. Disambiguate what you're saying as much as possible. Do not(!) leave room for misinterpretation. A master of dialectic argumentation, Socrates, showed that if you have any holes in your argument a good critical thinker will find them. Per Herrick (2015), Socrates' favorite questions were basically "What do you mean by that?" and "What's your evidence for that?" If you are unclear about what you mean, you leave yourself vulnerable to refutation.

  • Argue for every single part of your argument. Don't assume that some general claim that you make is widely accepted. Apparently, people fall into this trap all the time (see the Cognitive Bias of the Day). So watch out!

  • Lastly, use only the best evidence. If you draw your support from some news website that no one has ever heard of, then your premises will be weakly supported. Chances are that your opponents will have these weak premises in their crosshairs. Don't let your whole argument hinge on them. Get rid of them. Have only strong premises. Use peer-reviewed articles. Use books published by university presses. Whenever possible, find multiple corroborating sources that support your claim. (By the way, if you can't find good support for your view, ask yourself if you might be wrong, or at least wrong-headed, in your inquiry. Maybe your position is wrong? Maybe you're not being specific enough? Maybe there isn't good data on this issue because it is hard to study? Critical thinking isn't about defending your views no matter what; it's about having a method for distinguishing claims that are likely to be true from those that are likely to be false. This process of demarcating the true from the false applies to our own views as well.)
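
Because this course has no programming component, treat the following as strictly optional. It is a minimal Python sketch (my own illustration, not from any assigned reading) of the "if...then" chain from the second guideline above: it brute-forces every truth-value assignment and confirms that there is no way for the premises "If A then B", "If B then C", "If C then D", and "A" to all be true while the conclusion "D" is false.

```python
from itertools import product

def implies(p, q):
    # The material conditional: "if p then q" is false
    # only when p is true and q is false.
    return (not p) or q

# Check every possible truth-value assignment to A, B, C, and D.
counterexamples = []
for a, b, c, d in product([True, False], repeat=4):
    premises = implies(a, b) and implies(b, c) and implies(c, d) and a
    if premises and not d:
        counterexamples.append((a, b, c, d))

# An empty list means the argument form is valid: no assignment makes
# all the premises true while the conclusion is false.
print("Valid!" if not counterexamples else counterexamples)  # prints: Valid!
```

If you tamper with one of the links (say, replace implies(b, c) with implies(c, b)), counterexamples appear. That is exactly what it looks like when an opponent finds a hole in your chain.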

 

 

Case Study

One of my favorite examples of crafty argumentation comes from LaFollette (1980; see the FYI section). In this paper, LaFollette argues that parenting should require a license. How did he arrive at his conclusion? Here's his reasoning. His initial assumption is this: any activity that is potentially harmful to others and requires certain demonstrated competence for its safe performance ought to be regulated. It seems very difficult to disagree with this. For example, one way of looking at a car is as a two-ton weapon. Of course, we require that drivers pass a minimum competency test before they can legally get behind the wheel. Moreover, if someone is caught behind the wheel without a license, there are fines and potential jail time. The same goes for firearms. Clearly firearms can be dangerous both to the gun owner and those around him/her. Intuitively, LaFollette would argue, there should be some regulation about who can own firearms. By the way, it looks like most Americans agree that there should at least be universal background checks if one wants to own a firearm. LaFollette continues. If we also have a reliable procedure for determining whether someone has the requisite competence, then the action is not only subject to regulation but ought, all things considered, to be regulated. In other words, if we do have some kind of test or assessment that can distinguish between those who are, say, competent drivers or weapons-carriers and those who are not, then it seems optimal to use this test.

A family

With these two assumptions established, LaFollette makes the claim on which the whole argument rests: parenting can be harmful to children. Although this may be obvious, one should still defend this claim. Unfortunately, there is plenty of evidence that parents have harmed their children, even without knowing it. For example, Heimlich (2011) documents cases of religious child maltreatment, i.e., instances of psychological and physical harm inflicted on children by their parents for religious reasons. Heimlich mentions these examples of maltreatment, among others: withholding necessary medical treatment (which resulted in death), psychological abuse through fear-based parenting and religious practices, and physical abuse (resulting in death) due to the belief that a child is possessed by a demon. This is both depressing and excellent support for LaFollette's claim. Of course, it doesn't even have to be this extreme. There are many instances of parents harming their kids, whether through neglect, physical abuse, or psychological torment.

LaFollette then argues that a parent must be competent if he or she is to avoid harming his/her children. Moreover, even greater competence is required if he/she is to do the "job" well. But not everyone has this minimal competence (as was argued in the previous paragraph). Beyond child abuse, many people lack the knowledge needed to rear children adequately. Many others lack the requisite energy, temperament, or stability. In short, good parenting is hard.

Now here's the punchline. We actually do have a test for distinguishing between who is a minimally competent parent and who is not: the licensing process for adoption. Adopting a child requires some persistence. The process can take up to a year, per LaFollette, and it is somewhat invasive, since adoption agencies have to make sure your home is safe for a child. Of course, the irony is that "natural" parents don't have to go through any of this. LaFollette makes the claim, which by this point seems less bold, that "natural" parents should also be required to obtain a license—not just adoptive parents.

Here's his argument in standard form:

  1. Any activity that is potentially harmful to others and requires certain demonstrated competence for its safe performance ought to be regulated (e.g., driving a car).
  2. Parenting is potentially harmful to others, namely children.
  3. Moreover, parenting requires certain demonstrated competence for its safe performance—safe for the children that is.
  4. Therefore, parenting ought to be regulated.
  5. Further, if there is a reliable procedure for determining whether someone has the requisite competence to perform a regulated action, then, all things considered, the action ought to be regulated using that procedure (or something like it).
  6. There is a reliable procedure for determining whether someone has the requisite competence to be a parent, namely the adoptive parent licensing process.
  7. Therefore, biological parenting (along with adoptive parenting) should be regulated using this reliable procedure for determining whether someone has the requisite competence to rear a child; i.e., there should be parent licensing.
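
To separate the argument's skeleton from its content, here is a toy Python encoding of it (the predicates and the dictionary entries are my own stand-ins, not LaFollette's). The code only checks that conclusions 4 and 7 follow once the premises are granted; the real philosophical work is in deciding whether the premises are true.

```python
# Premise 1 as a general rule: an activity that is potentially harmful
# and requires demonstrated competence ought to be regulated.
def ought_to_be_regulated(activity):
    return activity["potentially_harmful"] and activity["requires_competence"]

# Premise 5 as a further rule: a regulated activity with a reliable
# competence test ought to be regulated via licensing.
def ought_to_be_licensed(activity):
    return ought_to_be_regulated(activity) and activity["reliable_test_exists"]

# Premises 2, 3, and 6 as facts about parenting (the adoption process
# supplying the reliable competence test).
parenting = {
    "potentially_harmful": True,   # premise 2
    "requires_competence": True,   # premise 3
    "reliable_test_exists": True,  # premise 6
}

print(ought_to_be_regulated(parenting))  # conclusion 4: True
print(ought_to_be_licensed(parenting))   # conclusion 7: True
```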

Activity for the reader: Analyze this argument!

 

Argument Extraction

 

 

 

Note: The following views are those expressed by Dutton (2012). He has one (very popular) way of characterizing psychopathy, but it is not the only one. Different, contradictory conceptions of psychopathy have been used throughout history. For more info, see Skeem et al. 2011.

The Wisdom of Psychopaths

Dutton's The Wisdom of Psychopaths

In The Wisdom of Psychopaths, research psychologist Kevin Dutton advances a controversial thesis: the functional psychopath hypothesis. His view is, in effect, that in some contexts psychopathic traits can be advantageous. In other words, there are situations and social roles in which psychopaths excel, to the benefit of those around them. This idea ruffles all kinds of feathers, since we've been taught to think psychopaths are the worst of the worst. So, to understand Dutton's hypothesis, let's begin with some context.

First off, Dutton tries to get at just what psychopathy is. He notes that there are some neural abnormalities in psychopaths. For example, normal brains (i.e., the brains of non-psychopaths) tend to react quickly to emotionally-laden words, like “cancer”, recognizing them faster than neutral words. Psychopaths' brains, however, don’t. Apparently, the emotional processing parts of the brains of psychopaths do not work as they do in non-psychopaths. In other words, psychopaths don't feel emotions as strongly as non-psychopaths—sometimes not at all. For this reason, psychopaths don't recognize emotionally-laden words any more quickly. They read them just as they would any other word, like "dog" or "bus". They remain cool and collected, even in emotionally-charged situations—a skill Dutton notes might be good for a surgeon.

Second, psychopaths appear to have special psychopath powers—and I'm only slightly kidding about this. For example, psychopaths seem to be better at recognizing vulnerability from a person's gait, i.e., their pattern of walking. Just by watching how someone walks, individuals who score high on a test for psychopathy can guess who's more emotionally fragile and vulnerable. Also, interestingly enough, psychopaths are better at faking emotions. They can also sense when someone is hiding something—a skill Dutton muses might be good for a customs agent.

 


 

Sidebar

By the way, just like psychopaths can pick out the weak among the general population, the general population also seems to have a psychopath radar. Those who have had confirmed contact with psychopaths, e.g., those who have had close encounters with serial killers, attest to a feeling of dread that comes over them. Why would we evolve this psychopath radar? Dutton reviews one theory from evolutionary game theory that might provide an answer. The theory hypothesizes that psychopathic traits might be adaptive for certain groups. In other words, having individuals with psychopathic traits in a group might be advantageous to the group as a whole, since they would excel in essential tribal activities (like war and hunting). (This theory, I might add, is advocated by none other than Robin Dunbar, the famous British anthropologist and evolutionary psychologist responsible for the idea behind Dunbar's number.) So, psychopaths might be useful to social groups. But, of course, these same psychopaths could also be a menace to society, since they are capable of killing without remorse. One example of this sort of individual that Dutton brings up is the berserker of Nordic cultures. These berserkers would apparently go into an uncontrollable rage during battle and kill everyone around them, both enemies and allies. In any case, although groups may benefit from having some members with psychopathic traits, the other members also need to know who those individuals are—to protect themselves. Hence, we developed a sense for detecting psychopaths—or so goes the theory.

 


 

So how is being a psychopath advantageous in today's society? Before fleshing out his idea, Dutton moves to disentangling antisocial personality disorder (ASPD) from psychopathy, two personality disorders that were considered synonymous at the time he was writing. He makes the case that a hallmark of ASPD is socially-deviant behavior, while the hallmark of psychopathy is affective impairment, i.e., non-normal emotional processing. So, in layman's terms, the defining characteristic of people with ASPD is that they violate social rules, while what distinguishes psychopaths is that they simply don't process emotions like the general population. In fact, not only do psychopaths not(!) show emotional arousal in very emotionally-charged situations, some subjects even show a decrease(!) in heart rate when engaging in risky or violent behavior. So, it is not necessarily risky or violent behavior that sets psychopaths apart from the rest; it is their lack of emotionality.

Neil Armstrong
Neil Armstrong,
total psychopath.

Quick aside: It might be the case that the psychopaths who are most famous also have ASPD. If, according to some estimates, psychopaths make up around 1% of the population, then it is only a small fraction of this 1% that you have to worry about: those who are both psychopathic and have ASPD. As an aside to this aside, it seems an even smaller fraction of psychopaths that we have to be wary of, since to be the kind of secret sadistic killer that we are afraid of falling prey to, these individuals have to have both psychopathic traits and ASPD, as well as high intelligence, imposing physical strength, and perhaps even charisma. Only with this constellation of traits can one truly be an enormous covert threat to society, like some famous serial killers.

So, here is the functional psychopath hypothesis. There are emotionally-charged social situations where being in control of your emotions, being able to remain calm and collected, is clearly advantageous. These roles include those of surgeons, police officers, customs agents, lawyers, and even athletes. Psychopaths have this ability to remain calm. Thus, they are disposed to perform these social functions well, since they can keep a clarity of mind during the task at hand.

So Dutton's hypothesis becomes clearer: being a psychopath isn’t in and of itself a leg up in society. It is only in certain social roles that being a psychopath might be advantageous. Dutton hastens to add, however, that psychopaths are well-represented in the class of CEOs. He reports on the work of Board and Fritzon, two researchers who found a higher proportion of psychopaths among the CEOs they surveyed than among the inmates they interviewed. In other words, according to these researchers, there's a higher percentage of individuals with psychopathic traits in board rooms than in prisons. Clearly being calm and collected pays dividends. This finding helps Dutton refine his hypothesis further. The type of psychopathy that is ideal is one in the moderate range: too psychopathic and you have no impulse control; too non-psychopathic and you don’t get the benefits, e.g., remaining cool-headed so that one can recognize and take advantage of high-payoff risks, and keeping calm during emotionally-charged situations.

In closing, there might actually be some famous hero psychopaths. For example, Dutton makes the case that perhaps even Neil Armstrong, the first man on the moon, was a psychopath. According to logs of the Apollo 11 mission, Armstrong and his team were very close to a violent, lonely space death multiple times when attempting to safely land on the moon. But through it all, Armstrong appeared to remain completely unmoved. This complete imperturbability is, of course, the classic psychopathic trait.

 

 


 

Do Stuff

  • Read from 571a-583a (p. 270-284) of Republic.
  • Complete Quiz 3.5+.

 


 

Executive Summary

  • There is a sort of violence to argumentation.

  • When you are constructing an argument, be sure to walk the reader through every step of your reasoning. Use logical indicator words and be very specific about your claims. Argue for every single part of your argument using only the best evidence.

  • Psychopathy, which is distinguished from antisocial personality disorder (ASPD), is associated with decreased emotional processing. Psychopaths are also better than the general population at recognizing vulnerability, faking emotions, and sensing when someone is hiding something.

  • Per Kevin Dutton, there are emotionally-charged social situations where being in control of your emotions, being able to remain calm and collected, is clearly advantageous. These roles include those of surgeons, police officers, customs agents, lawyers, and even athletes. Psychopaths have this ability to remain calm. Thus, psychopaths have a disposition which might allow them to perform these roles well.

 


 

FYI

Suggested Reading: BBC, How to build an argument

TL;DR: CrashCourse, How to argue

Supplemental Material—

Related Material—

Advanced Material—

For full lecture notes, suggested readings, and supplementary material, go to rcgphi.com.

 

 

 

Know Yourself

 

 

Knowing others is wisdom.
Knowing oneself is enlightenment.

~Laozi

Metacognition

Key to critical thinking is understanding your own thought-processes, something known as metacognition. Many of us are satisfied with accepting our desires and beliefs as they arise in consciousness. Many of my beliefs, like my belief that my home is still where it was when I left the house this morning, are held and/or formed outside of conscious awareness. Here's what I mean by that. I'm not sure that I believe that my home is still where I left it this morning. In other words, if you were to ask me "What do you believe?", I'm not sure that the sentence "I believe that my home is where I left it this morning" would be one of the first ten things to come out of my mouth. I'm not even sure it'd be one of the first 100 beliefs that I would list. (Heck, I'm not even sure I would take your question seriously!) If I'm being honest, if you were to ask me whether my home is still where it was this morning, I'm not sure that I actually "held" that belief (whatever that means) prior to being asked about it; it's perfectly possible that I didn't "form" the belief until right after you asked me. So maybe there are some beliefs that I "hold" at the present moment that I didn't really "form" until they were brought up in conversation.

If you followed what was going on in the paragraph above, then you can see what one type of metacognition is like: it is simply the tracking of beliefs and the monitoring of belief formation. It's actually kind of cool when you start trying it out. Unfortunately, metacognition is a whole class unto itself, most adequately taught not by yours truly but by a psychologist. Nonetheless, we can mention some recent findings from the mind sciences that relate both to metacognition and to good critical thinking. They have to do with the function of reason.

A chair

Take a look at the chair pictured here. It looks uncomfortable, doesn't it? First off, the seat is way too low. Imagine having to do a deep squat every time you want to sit down, not to mention getting up. Moreover, because the seat is too low, it would be impractical to use this chair at the dinner table. You probably wouldn't be able to reach your food. It also just looks weird. It looks like someone cut off a regular chair's legs. Overall then, it's not a good chair. That is to say, it does not satisfactorily perform the typical functions of a chair.

What if I told you it's not a chair? Let me tell you what it really is: it's a kneeler (also known as a prayer chair). You don't sit on it; you kneel on it. The cushion on the top of the chair is for resting your arms during, say, the praying of the rosary. When looked at in this way, clearly this prayer chair performs its function rather well. It actually looks like an excellent kneeler—albeit a little old.

Now that you see what this chair is for, you can see that it does its job very well. When you thought it was for sitting, you thought it didn't do its job very well. We can generalize from this. Only when you understand the intended function of an artifact can you assess whether or not the artifact is performing its function well.

This is at least what Mercier and Sperber (2017) argue in The Enigma of Reason. But they are not making arguments about kneelers and chairs. They are arguing about our capacity to reason. Why do we reason? Why did the ability to reason evolve? What is it for? Their answer: we evolved the capacity to reason (i.e., to come up with reasons for defending certain beliefs and conclusions) so that we can win arguments. Let me say that again, because it really is a radical departure from many people's intuitions. The evolutionary function of reason is to win arguments.

Why would this be evolutionarily adaptive? Think of it this way. Throughout most of human history (i.e., the history of Homo sapiens), we have been fighting against the elements, against predators, and against other hominids—not to mention the invisible enemy of infectious disease. As it turns out, per Olshansky and Ault (1987), it is only in the 1800s that we became much more likely to die of degenerative diseases, such as heart disease and cancer, than of infectious disease. And recovering from infectious disease, as you know by now, requires the care of others. And so, whether you are fighting bad weather, running from predators, battling against a Neanderthal, or recovering from a viral infection, you need other people. Having good standing in your social group, in other words, is absolutely imperative to your survival. Put as a counterfactual: throughout most of our evolutionary history, to be without your group basically guaranteed an early death. Why is winning arguments relevant to staying on good terms with your group? There are many reasons why a group might shun you. One of them might be your behavior. Just think of a falling out between friends because of something one of them did. What would be invaluable in a situation where you put your foot in your mouth or did something you shouldn't have is the capacity to explain yourself. That's where reason comes in. Individuals who were able to explain themselves, and thereby spared themselves banishment from the group, were more likely to survive, thus passing on their genes (along with their ability to win arguments).

Mercier and Sperber's The Enigma of Reason

Three things should be said here. First off, this is not the only socio-communicative theory of how reason evolved. Tomasello (2014) gives another theory (which is really cool). However, it looks like socio-communicative theories about the evolutionary function of reason, like those of Mercier and Sperber and Tomasello, are becoming dominant. This kind of view has grown so convincing that, at the end of his A Natural History of Human Thinking, Tomasello makes the case that some socio-communicative theory must be true.

Second, what's so striking about this view is that it is so contrary to our typical conception of reason. We all seem to intuitively think that reason is for forming better beliefs. Right? Don't you think that your capacity to think rationally is there so that you can form more accurate beliefs and make better decisions? Apparently this is completely off. We happen to use reason in this way, sometimes at least. But reason isn't really for this intellectual function.

This brings me to the third point. Confirmation bias has been a specter that has been haunting us this entire course. We've noted repeatedly that it is an impediment for processing information accurately, for forming accurate beliefs, and for making optimal decisions. But now we finally understand why we have confirmation bias as part of our cognition. If the capacity to reason really were for some intellectual function, then confirmation bias seems to be a bug in our programming. It seems to stop us from performing our intellectual duties. But the capacity to reason isn't for some intellectual function; it's for winning arguments. In this context, confirmation bias makes perfect sense. When you're arguing (and your life's at stake), you really don't want to be engaging in an even-handed assessment of the situation. You (desperately) want to win. You want to be right. You need to be right. And in this context, confirmation bias isn't a bug: it's a feature. We've been assuming this whole time that our capacity to reason is a chair, but it's actually a kneeler.1

Argument Extraction

 

 

 

Know Your Enemy

After having thought about metacognition for a bit, it is perhaps a good idea to revisit some arguments that we met early on in this course. In The Rule of the Knowledgeable we took a brief look at Caplan's (2008) book The Myth of the Rational Voter. The argument that Caplan makes isn’t about endorsing epistocracy, like that of Brennan (2017). Instead, he wants to dispel the myth that voters are rationally ignorant. The idea behind rational ignorance is that voters do not stay current on political knowledge because the costs of acquiring political knowledge outweigh the benefits that the knowledge provides. In other words, if you believe the rational ignorance hypothesis, you believe that a rational person would basically not bother keeping up with politics because there's really no benefit to it.

Caplan

Caplan disagrees with the rational ignorance hypothesis. He doesn't believe that voters are rationally ignorant. They're just plain ignorant. In other words, there is a difference between (a) not keeping up with politics because you did a cost-benefit analysis and realized there's just no point, and (b) not knowing anything about politics and good policy (while sometimes pretending that you do). Caplan thinks most of the electorate is guilty of the second. And he makes his case primarily by pointing out that there are large belief gaps between economics PhDs and the general public. Put differently, Caplan argues that the average voter would do miserably in an introductory economics course and that, even if they were to take a course on economics, a single course wouldn't be enough to undo the systematic biases through which he/she views the world.

Notice your own thoughts on this matter right now. Some of you feel outrage. Some of you are already in agreement, without actually having read his book or heard his argument. In fact, many of us instantly decide whether we agree with Caplan or not without really looking at the argument. I know people with PhDs who do this, by the way, so don't feel too bad. But at this point in the course, you should know that this is not the path to good critical thinking.

Is Caplan right? I'm not sure. You'll have to read his book and decide for yourself. What I wanted to do primarily is get you thinking about your own thinking, your own reaction to an argument. It is only once you witness and accept the thoughts and feelings that naturally arise when coming face to face with a controversial viewpoint that you can begin to apply the principles of critical thinking we've been learning. So now that you've thought about your natural reaction to his writings, let's begin assessing Caplan's view. This will be woefully incomplete, by the way. A full analysis of this view belongs in a political science course. I just want to guide you through the first few steps.

Let's begin with the assumptions that fuel the argument. Caplan makes various assumptions during this book, but I will point out three here:

  1. If there is disagreement between the public and economics PhDs, then it is the public that is most likely wrong.
  2. Economics is the most important field to be knowledgeable about when making political decisions.
  3. Economics is monolithic, i.e., there is widespread agreement among economists about the right way to do economics, as well as agreement on what are the best policies for the economy.

Only when the assumptions are laid out in this way can we assess them for truth. So without further ado, here are some complications for Caplan and his assumptions...

Questioning Caplan's Assumptions

Are economists' opinions more accurate than those of non-economists?

First off, it’s important to note that nearly all of Caplan's case against rational ignorance relies on the belief gap between economics PhDs and the general public. Moreover, he admits that information on this matter is challenging to acquire. Ideally, you'd want a huge segment of the population to take some comprehensive economics test, but no one is exactly volunteering for that. And so he essentially makes his case with data from only one study(!), albeit a large and well-crafted one (see Caplan 2008: 51-52). He also, I might add, claims that he’s following some assumptions made by the Nobel prize-winning psychologist Daniel Kahneman, namely an assumption about what counts as an error of judgment.

"The presence of an error of judgment is demonstrated by comparing people’s responses either with an established fact... or with an accepted rule of arithmetic, logic, or statistics” (Kahneman quoted in Caplan 2008: 52).

 

Mills's Black Rights/White Wrongs

There are several criticisms that we can make at this point. First, economics is not an accepted rule of arithmetic, logic, or statistics. It is a social science and, like all social sciences (and really all science), its methods are continuously being updated and improved upon. He cannot and should not assume that what mainstream economists believe is some sort of fundamental law. This would actually be thoroughly unscientific, as we saw in the lesson titled ...for the Stronger. So, Caplan pretends that he is using the definition of "error of judgment" from Kahneman. I understand why he wanted to pretend he was doing so. Who doesn't want to cite a Nobel prize winner to add a little prestige to their argument? But it appears unjustified to even mention Kahneman here, since I believe Kahneman wouldn't agree with much of what Caplan claims.

Second, it is possible that the discipline of economics, at least its neoclassical wing (stay tuned), is heavily biased. That is to say neoclassical economists are mostly white and mostly male (see this recent newsletter). Now you might think to yourself, "So what?" Well, this is a problem because it is possible that the dominant theories that are being taught in neoclassical economics aren't being passed down because they are accurate or have a lot of predictive power; rather, they are being passed down because they appeal to white men. In fact, certain disciplines, take Political Science and Philosophy as two more examples, systematically alienate non-white, non-male students, since their dominant theories are not relevant to persons of color and women (see Mills 2017).

"The central debates in the field [of Political Science] as presented—aristocracy versus democracy, absolutism versus libertarianism, contractarianism versus communitarianism—exclude any reference to the modern global history of racism versus anti-racism, of abolitionist, anti-imperialist, anti-colonialist, anti-Jim Crow, anti-apartheid struggles... [Moreover]... The political history of the West is sanitized, reconstructed as if white racial domination and the oppression of people of color had not been central to that history" (Mills 2017: 33; emphasis added; interpolations are mine).

Because Political Science largely neglects how the (supposedly) universalist theories of thinkers like John Rawls and Immanuel Kant omit the history of white supremacy and patriarchy, and because neoclassical economics neglects the role that slavery had in overall American wealth and economic supremacy (see Beckert 2015), non-whites and non-males tend to steer clear of these disciplines. It's not interesting to them because it neglects their own experience and it doesn't teach them any actionable knowledge to guide their future actions. This creates an echo chamber where white males get to propagate theories that support their overall perspective. This obviously has a negative effect on the field, since the field is no longer aspiring to track the truth; instead it just propagates baseless theories that appeal to white men.

Now you might be saying, "That's just an accusation" and accusations are not enough to discredit a social science. Touché. However, it's not just an accusation. This brings me to my third point. Third, and most important, even relative to other social sciences, economics has recently come under heavy and persistent fire for its theoretical and empirical failings. Please enjoy the Food for Thought (and Sidebar!) below:

 

 

Given all this, we might be led to think that it's actually a good thing that the general population doesn't reliably think like a neoclassical economist. Of course, there is much more to this discussion. But(!) if you think this is a discussion that we need to have, then I've made my point: it's not clear that economists' opinions are more accurate than those of non-economists.

Is economics really the most politically relevant field?

There are many fields that might be more relevant to good governance than economics, both in principle and in practice. Per my buddy Josh Casper, a majority of elected officials have a law degree (although it is unclear how many passed the bar). So, in practice, knowing the law is at least a good way of getting elected. One can also make the case that either International Affairs or Political Science is more relevant, at least with regard to theory. If one considers it important to know the effect of a policy on society's well-being and outlook, then Sociology and/or History might be good candidates. Oh, and by the way, since politicians spend most of their time fundraising, advertising or communications might be good disciplines for aspiring politicians—and I'm only saying this slightly tongue-in-cheek. Heck, even Philosophy might be a good idea; there's at least no sign that it leads to any more predictive failure than does neoclassical economics!

Is economics just neoclassical economics?

As we saw in the Food for thought, there are competing approaches in economics. In other words, neoclassical economics represents just one set of methods for the study of economics. Marxian economics and behavioral economics, for example, are two competing approaches, and they are steadily growing in their ranks. And there are other approaches still.2 Thus, Caplan can't pretend that there is unanimity among economists about what the best policies are. There's not even unanimity about the best way of doing economics!

 

 


 

Do Stuff

  • Read from 583b-592b (p. 284-296) of Republic.
  • Start preparing for Quiz 3.7+.

 


 

Executive Summary

  • Per linguist and developmental psychologist Michael Tomasello, the evolutionary origins of our capacity to reason are associated somehow with social communication. In other words, he makes the case that some socio-communicative theory of the evolutionary function of reason must be true.

  • One socio-communicative theory, by Mercier and Sperber, claims that we evolved the capacity to reason (i.e., to come up with reasons for defending certain beliefs and conclusions) so that we can win arguments and explain our behaviors to others.

  • Being aware of the likely evolutionary origins of reason helps one understand one's own thought processes (metacognition) and why we are so riddled with confirmation bias.

  • In The Myth of the Rational Voter, economist Bryan Caplan makes three key assumptions when arguing that voters are plain ignorant. All three of those assumptions can be questioned.

 


 

FYI

Suggested Reading: Tom Stafford, How to get People to Overcome their Biases

TL;DR: Animated Lessons, Confirmation Bias: How to avoid it and make better decisions

Supplemental Material—

Related Material—

Advanced Material—

 

Footnotes

1. Notice that if reason really is just for coming up with reasons that defend our behavior and beliefs, then this suggests that it is separate from the part of our brain that actually determines our behavior and beliefs. At least this is what the neuroscientist Michael Gazzaniga concluded. After studying split-brain patients his entire career, he arrived at the view that our actions are decided on by one part of the brain, while our justifications for our actions are invented by a different part. He calls the part of our brain that justifies our actions/beliefs (once they’ve already been decided on by another part of the brain) the interpreter module (see Gazzaniga 2012). By the way, in The Blank Slate, cognitive scientist Steven Pinker refers to the interpreter module by a less flattering name: the baloney generator.

2. Click here for an interview with Manfred Max-Neef about his “barefoot economics.”

 

Contra Metaphysica

 

 

Everything is vague to a degree you do not realize till you have tried to make it precise.

~Bertrand Russell

Argument Extraction

 

On nuance

In light of today's reading, it is important to own up to the imitative nature of my course thus far. As far as I can tell, I have not produced any original work throughout the past two dozen or so lessons. I have but duplicated the work of others. I have merely reported some findings, creating nothing of my own. I am an imitator—albeit an academically rigorous one—and Plato would've surely banished me from the kallipolis.

If my job thus far has been to parrot what other thinkers and scientists have said, I think it behooves me to include some criticisms of the findings we've covered so far. To respond to every single argument I have presented would take a whole other course—a Critical Thinking and Discourse (Pt. II). But that's not up to me. Instead, I'll share some information with you that seems to undermine many of the findings I've reported on in earlier lessons.

fMRI

Cobb's The Idea of the Brain

Matthew Cobb's recent The Idea of the Brain is an invaluable resource for anyone who is interested in the brain. It is not quite a history of neuroscience, although a history of neuroscience is included in the text. Rather, it is a history of the different conceptions of the brain as well as the different methods we've had for studying the brain. I really can't recommend this book enough if you're interested in the brain. Most relevant to us at this point of the course, however, is chapter 14 of Cobb (2020). In this chapter, Cobb reviews the history of fMRI studies as well as some critiques of the methodology underpinning fMRI studies. fMRI (which is short for functional magnetic resonance imaging) measures brain activity by detecting changes associated with blood flow. This is at least in principle possible because cerebral blood flow and neuronal activation are coupled; that is, where the blood flows there appears to be neuronal activation. Thus, you can use blood as a proxy for neuronal activation.

fMRI studies have been essential in helping discover more about localization in the brain—the notion that different areas of the brain control different aspects of behavior. Having said that, many neuroscientists (namely those who don't use fMRI studies) have pretty strong criticisms of the practice. To truly understand the criticisms, I'd have to explain fMRI in much more detail than we have space for. However, the criticisms of fMRI essentially all have this to say: the resolution of fMRI is simply too coarse. This is to say that a pixel in an fMRI image (or voxel, to use the technical jargon) is too much of an oversimplification. Cobb reports that behind every voxel there are at least 5.5 million neurons, up to 5.5 × 10^10 synapses, 22 km of dendrites, and 220 km of axons. To pretend that we have a clear reading on what's going on in the brain via fMRI images is too much of a stretch. The technique is far too imprecise to really shine a light on how the brain works. Here is Cobb on the subject:

“These critics [of fMRI studies] are unimpressed because they are used to exploring very precise effects in individual cells or those exerted by particular genes, whereas fMRI cannot measure what is truly important for the brain – action potentials, the actual signal in the neuron. The brain is so dense that in 2008 Nikos Logothetis estimated that in each pixel (‘voxel’ in fMRI jargon) of an image of the brain there are a staggering 5.5 million neurons, between 2.2 and 5.5 × 10^10 synapses, 22 km of dendrites and 220 km of axons. The scale at which the real action is taking place – in individual cells and synapses and in networks of cells – is hopelessly blurred out by the coarseness of fMRI. Furthermore, fMRI measures activity changes in seconds, whereas neurons send information in the millisecond range. Even more strikingly, fMRI is unable to reveal one of the key aspects of how the brain works – the difference between activation and inhibition. fMRI cannot tell us what single cells, or networks of cells, are up to. Even at the level of neural tracts, it cannot tell us meaningfully what is happening, merely where, at an extremely coarse level, something is happening relatively more or less than elsewhere” (Cobb 2020: 679-80).
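
To get a feel for the scale problem, here is a back-of-the-envelope Python calculation using only the figures Cobb reports. The one-second value for fMRI's temporal resolution is my rough stand-in for Cobb's "seconds"; everything else is arithmetic on his numbers.

```python
# Logothetis's per-voxel estimates, as reported by Cobb (2020).
neurons_per_voxel = 5.5e6
synapses_low, synapses_high = 2.2e10, 5.5e10
dendrites_km, axons_km = 22, 220

# fMRI tracks blood-flow changes over seconds; neurons fire in milliseconds.
fmri_resolution_s = 1.0        # rough stand-in for "seconds"
action_potential_s = 0.001     # roughly a millisecond

print(f"Neurons averaged into one voxel: {neurons_per_voxel:,.0f}")
print(f"Synapses per voxel: {synapses_low:.1e} to {synapses_high:.1e}")
print(f"Wiring per voxel: {dendrites_km} km of dendrites, {axons_km} km of axons")
print(f"Temporal mismatch: ~{fmri_resolution_s / action_potential_s:,.0f}x slower "
      "than the signals being tracked")
```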

So fMRI studies have some problems. Why is this relevant to us? In his history, Cobb also notes that many studies about, say, the differences between men's and women's brains are based on the coarse-grained findings of fMRI studies. And so, without getting into too much detail: the argument that I built in Unit II for the abolition of party politics is, at least in part, based on data that is coming under attack. (Recall that I began that unit by reporting on the work of scientists who argued that there are differences between men and women in the lesson titled The Distance of the Planets.)

Does this undermine the argument completely? Not quite. Remember that the point of the argument was to make the case that party politics make one vulnerable to lazy thinking—to accepting a conclusion for the easy reason (because your in-group accepts it) rather than for the good reason (because evidence supports it). The people who accept or reject the findings of science based on how they can spin the findings are not good critical thinkers. Lefties might reject some science simply because they think it undermines their views. Righties might accept some science simply because they think it supports their view. Both of these approaches to science are unscientific. Nonetheless, it would also be unscientific to accept a premise that isn't wholly supported by the evidence. It might still be the case that there are real differences between men and women at the level of the brain, by the way. But now it is up to you to keep up with the science, to see where this debate goes, and maybe even to be a producer of scientific results yourself—as opposed to merely being a consumer, like me.

Intelligence

In his magnum opus Peak, Anders Ericsson summarizes his life's work on the study of experts and expertise. Most relevant for us is what he has to say about intelligence. If we were to fall into the trap of believing that our genetically-determined intelligence levels determine our lives for us, then we would be short-changing ourselves dramatically. Sure, some people do better on intelligence tests, and there are correlations with life outcomes. But genetic studies like the ones I reported on in The Distance of the Planets and The Family tend to downplay the role of environment. This is, I might add, not necessarily the fault of the researchers: it's not easy to be an expert in both genetics and sociology(!). Ericsson, however, does remind us of the role of environment, support groups, and, of course, deliberate practice.

Ericsson and Pool's Peak

In chapter 8 of Peak, recall, Ericsson harshly critiques the notion of prodigies (e.g., Mozart, the high jumper Donald Thomas, etc.). Again, it is deliberate practice in all cases that leads to excellence and elite-level performance. Again, exactly zero experts in a domain got to where they are without deliberate practice. What about intelligence? Ericsson found that intelligence is correlated with higher performance only in the beginning. In fact, note all the things that Ericsson mentions that IQ isn’t correlated with. IQ isn’t correlated with excellence in chess (once players get to the elite level); it isn’t correlated with musical ability (once players get to the elite level); it isn’t correlated with excellence in performing oral surgery or in becoming a London taxi driver (where one has to acquire GPS-like knowledge of London); among scientists, it isn’t correlated with scientific productivity. High IQ might help one get through school, since IQ is consistently correlated with academic achievement, but it won’t give you an advantage at the elite level.

What does this mean for you? I'll try to summarize. IQ tests are probably measuring something, says Ericsson. And this something is a pretty good predictor of whether or not you will finish school and how well you will perform there. This something also looks like a good predictor of life outcomes, like lifelong earnings, longevity, etc. But(!) life outcomes might have more to do with environment than with genetics. Remember from the work of Bryan Caplan that going to college gives you an economic premium, even though you don't seem to learn much there, and this is obviously a puzzle. So, it could be the case that college graduates have better life outcomes because of the way society is organized rather than because they have a genetic advantage over everyone else. What's the evidence for this? Ericsson says it's in the brains of the experts! Once you get to the elite level, there are no real differences in intelligence. At that level, success has more to do with work ethic and the type of practice that you do to excel in your craft (i.e., deliberate practice). Put bluntly, IQ tests are probably a good predictor of academic success but cannot say anything definitive about lifelong success. This is because when you look at actually successful experts, they are not typically smarter than less successful individuals in their field. They just practice differently. This suggests, then, that the way society is organized plays an important role in life outcomes. Ericsson again:

“A number of researchers have suggested that there are, in general, minimum requirements for performing capably in various areas. For instance, it has been suggested that scientists in at least some fields need an IQ score of around 110 to 120 to be successful, but that a higher score doesn’t confer any additional benefit. However, it is not clear whether that IQ score of 110 is necessary to actually perform the duties of a scientist or simply to get to the point where you can be hired as a scientist. In many scientific fields you need to hold a Ph.D. to be able to get research grants and conduct research, and getting a Ph.D. requires four to six years of successful postgraduate academic performance with a high level of writing skills and a large vocabulary—which are essentially attributes measured by verbal intelligence tests. Furthermore, most science Ph.D. programs demand mathematical and logical thinking, which are measured by other components of intelligence tests. When college graduates apply to graduate school they have to take such tests as the Graduate Record Examination (GRE), which measures these abilities, and only the high-scoring students are accepted into science graduate programs. Thus, from this perspective, it is not surprising that scientists generally have IQ scores of 110 to 120 or above: without the ability to achieve such scores, it is unlikely they would have ever had the chance to become scientists in the first place” (Ericsson and Pool 2017: 235).

In short, even though it looks like you have to have an IQ of around 110 to be a successful scientist, it is just as likely that the educational system has been set up so that only people with an IQ of around 110 or above actually get hired as scientists. Is this way of organizing the institution of science the correct one? That's a whole other conversation. But(!) the take-home message here is that work ethic matters. Whether you are an A student or not, you can always improve. You can use deliberate practice to get better at basically any type of activity. It's just going to require considerable, often unpleasant, practice. But hey, at least the ball is in your court.
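
The selection effect Ericsson is pointing to has a standard statistical name: restriction of range. Here is a toy Python simulation (entirely my own illustration, with made-up numbers, not Ericsson's data) showing how a modest population-wide correlation between IQ and performance shrinks once you look only at the people who clear an institutional cutoff:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy model: performance depends partly on (standardized) IQ and mostly
# on other factors (practice, environment). The 0.4 weight is invented.
iq = rng.normal(100, 15, n)
performance = 0.4 * (iq - 100) / 15 + rng.normal(0, 1, n)

# Correlation in the whole population: roughly 0.37.
print(np.corrcoef(iq, performance)[0, 1])

# Correlation among only those who clear a hiring cutoff of IQ >= 110:
# noticeably smaller, because the cutoff throws away most of the variation.
selected = iq >= 110
print(np.corrcoef(iq[selected], performance[selected])[0, 1])
```

The moral of the toy model: even if IQ matters somewhat in the general population, you shouldn't expect it to predict much among people who have already been filtered by IQ-like gatekeeping.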

Conclusion

Does this mean the arguments that I've been developing are all for nothing? Of course not. There is just more nuance to these arguments than was originally let on. That's just how it goes. And if you thought the world was going to be easier to figure out than this, then let me disabuse you of that idea right now. The world is hard. You need sophisticated mathematical tools and scientific literacy to even attempt to understand it. And then you have to argue about which mathematical tools and scientific approaches are best for studying the world. And that's a whole new mess. In fact, I didn't even mention the replication crisis in many of the social sciences (but see the FYI section; see also Clayton 2021). Critical thinking is hard work. But, in my opinion, it's worth the effort.

 

 

 

Contra Metaphysica

In this section, I want to disagree vociferously with Plato. Sure, I am (mostly) an imitator. But imitation isn't all that bad. Poets, painters, and cover bands all have their place in a well-functioning society. I have my place in a well-functioning society. After all, it's not easy to find someone who can summarize information from many disparate fields like I can. Sure, I didn't produce any of it. But I've consumed a ton of it. And quantity has a quality all its own. So I disagree with Plato: imitation has its place. But there's a deeper issue that I have with Plato.

Plato

One way of understanding Plato, as we've seen, is as a teacher. Maybe he didn't mean anything he said (see, for example, Klosko 1986). Maybe he was just trying to get you to think. Another way of looking at Plato is as a counter-revolutionary (see Vernant 2006). Plato wasn't the first philosopher by a long shot. There was a long-established tradition by the time Plato showed up. And most of these thinkers were 'positivist' in their leanings, meaning that they were moving away from myth and towards ideas that were more debatable and even testable. So, instead of accepting dogma, these early philosophers would argue about what the truth might be. Plato, however, showed up and reintroduced myth back into philosophy. It's not only the myth of the metals and the myth which closes Book X. (Stay tuned.) Plato's theory of the Forms sounds completely mythical and supernatural. It would have us believe that numbers and The Good exist independently of humans, in their own special dimension. The positivist philosophers would've been horrified at this idea. If you reintroduce these sorts of supernatural beliefs back into the conversation, then there's no way to settle the issue. Supernatural, metaphysical beliefs are intellectual dead-ends.

Let me give you an example. Let's say that you and a friend are trying to figure out why your car won't start. You say, "It's probably the starter." The nice thing about this claim is that it is testable. You can, for example, replace the starter and see if that does the trick. If it was just the starter, then your car should start now. Maybe your buddy says it's the battery. This is equally helpful in that it is testable. Maybe it was both the starter and the battery. Now you're having a discussion about what could've caused that situation to arise. This is all productive, I hope you can see. But let's say that another buddy joins the conversation and says, "Yeah, there's ghosts in your engine." This is, I hope you can see, completely unhelpful. By introducing supernatural entities into the conversation, your friend has added nothing testable to the mix. Moreover, metaphysical beliefs (like belief in ghosts or the afterlife) are usually conjured up and believed on the basis of essentially nothing. For example, what makes one theory of the afterlife stronger than another? Nothing. They're both just stories, neither of which can be verified.

This is what Plato has done. At its very beginning, philosophy seems to have been moving away from myth, and Plato brought myth right back in. Plato is like your friend who thinks the engine is haunted. He made things worse. Worse still, Plato's writings survive precisely because of their metaphysical content. As it turns out, most of the literature of the classical world, including the writings of the early positivist philosophers, is lost. This process of losing our intellectual heritage was initiated when Christians took control of the Roman Empire in the 4th century CE (Nixey 2018). In fact, only about 10% of classical writings are still in existence. Why was Plato preserved? In short: because his work fit in with Christian doctrine. That's why, if you grew up in the Christian tradition, Plato's ideas aren't too far out for you. He argued for the existence of souls, for a different realm that is better than the one we currently live in, for an objective right and wrong, and for a minority who understand the Forms (priests) guiding the masses who don't. So, we don't have Plato's writings because he's the best. We have them because Christians strategically chose to preserve them, while letting the rest turn to dust. This is not to say that Plato's ideas are all trash. But we must realize that there was a great filtering process that led to us thinking of Plato as one of the greatest philosophers of the classical age.

Collins' Sociology of Philosophies

What now? I can only tell you my conclusions. First and foremost, base your beliefs on evidence. This is highly unnatural, since we always want to accept beliefs that cohere with what we already believe. Try to fight that urge, and always follow the evidence. Second, if you come to a contentious issue (i.e., a problem with multiple possible viewpoints and no obvious solution), always try to move the discussion towards what is testable and measurable. Don't bring up ghosts or supernatural entities, and definitely don't rely on them for your explanations or philosophy of life. Try to steer your intellectual pursuits so that they always make contact with reality as understood by the sciences. Remember the key questions in science: "What are we talking about?" and "How do we measure it?" If you're thinking in terms of these questions, you're doing well. Third, develop networks of other critical thinkers. Even if you follow steps 1 and 2, that doesn't guarantee your ideas will be right-headed. Be open to the intellectual challenge of defending your views. Fourth, keep up with the latest science; i.e., become a life-long learner.

On the third point, I have a bit more to share. Students ask me all the time what the moral of my PHIL 101 course is—a class with many twists and turns. I don't want to spoil the fun if you haven't taken it, but here's one of the main takeaways from the course: intellectual breakthroughs come about when certain sociological requirements are met. In other words, there are actual ingredients to intellectual breakthroughs. In that course, time and time again I present different problems and show how a solution to them was approximated when several people looked at the problems in different ways and argued about them. Here's a summary of the necessary ingredients for intellectual breakthroughs from sociologist Randall Collins (2009), who himself studied the sociology of science and philosophy. Intellectual progress requires the following (in no particular order):

  • Technologies by which we can share ideas and take the perspectives of others (e.g., novels, the internet, mobile phones, etc.)
  • Chains of personal contacts which foster intellectual creativity through the constant raising of objections
  • Intellectual rivalries
  • Emotional energy and ideas you are willing to fight for
  • Movement away from superstition and towards empirical (testable) hypotheses

In closing, if you like my classes, it might be because you like this approach to thinking. Every single one of my classes has this idea as an undercurrent. There are names for this kind of view, but I won't add any more jargon to this lesson. I playfully refer to it with some fake Latin: contra metaphysica. Even though it doesn't really mean anything in Latin, I like the way it sounds. It sounds like a commitment to a certain kind of critical thinking—a kind of critical thinking that works.

Do Stuff

  • Read from 595a-608b (p. 297-313) of Republic.
  • Complete Quiz 3.7+.

Executive Summary

  • The findings of science might have implications for society, but there are also valid criticisms of some methodologies in science. In other words, there's science denialism (which no critical thinker engages in), and then there are disputes about how to engage in the practice of science (which critical thinkers do engage in). The suggestion here is that the right way to critique a scientific finding is through better scientific processes.

  • The instructor makes some final recommendations for good critical thinking. First and foremost, base your beliefs on evidence. Second, if you come to a contentious issue (i.e., a problem with multiple possible viewpoints and no obvious solution), always try to move the discussion towards what is testable and measurable, and steer your intellectual pursuits so that they make contact with reality as understood by the sciences. Third, develop networks of other critical thinkers who will challenge your views, since it's good for you. Fourth, keep up with the sciences for the rest of your life.

FYI

Suggested Viewing: Stuart Firestein, The Pursuit of Ignorance

Supplemental Material—

Related Material—

Advanced Material—

For full lecture notes, suggested readings, and supplementary material, go to rcgphi.com.

Coup d'état

In our reasonings concerning matter of fact, there are all imaginable degrees of assurance, from the highest certainty to the lowest species of moral evidence. A wise man, therefore, proportions his belief to the evidence.

~David Hume

Our argument is not flatly circular, but something like it... For my part I do, qua lay physicist, believe in physical objects and not in Homer's gods; and I consider it a scientific error to believe otherwise. But in point of epistemological footing the physical objects and the gods differ only in degree and not in kind. Both sorts of entities enter our conception only as cultural posits. The myth of physical objects is epistemologically superior to most in that it has proved more efficacious than other myths as a device for working a manageable structure into the flux of experience.

~W.V.O. Quine

Breaking News!

Book X

Do Stuff

  • Read from 608c-621c (p. 313-326) of Republic.

FYI

Suggested Reading: PositivePsychology.com, 8 Ways To Create Flow According to Mihaly Csikszentmihalyi

TL;DR: Fight Mediocrity, FLOW BY MIHALY CSIKSZENTMIHALYI | ANIMATED BOOK SUMMARY

Supplemental Material—

Related Material—

For full lecture notes, suggested readings, and supplementary material, go to rcgphi.com.