
UNIT I


Problems/Solutions

 

The past is a foreign country;
they do things differently there.

~L.P. Hartley

 

Philosophy is dead

Teaching an introductory course in philosophy requires more assumptions than this instructor is comfortable with. First off, you have to (at least provisionally) demarcate, or set some boundaries, regarding what philosophy is and what philosophy isn't. Unfortunately for this instructor, however, philosophy has been different things at different times and different places.

For example, in classical Greece, it is difficult to distinguish philosophy from what today we would call the beginnings of natural science. Is philosophy the same as science, then? I think we can all agree that that's definitely not the case. But there is a bit of philosophy in science. Allow me to clarify.

Natural science has multiple, sometimes unexpected, origins, many of them going back to ancient Greece. For example, Freeman (2003: 7-25) discusses the competitive nature of Greek debate and the rejection of supernatural explanations by philosophers, beginning with Thales of Miletus. This competitiveness and rejection of non-natural causes are obviously an important part of what would eventually become science. But, and this is important, they're not the whole picture.

G.E.R. Lloyd's book

Here's another important ingredient. Chapter 4 of G.E.R. Lloyd's (1999) Magic, Reason, and Experience makes the case that practices in political debate made their way into the study of the natural world too. He shows how the Greek word for witness was the root of the word for evidence (specifically in scientific discourse). He also shows that the word for cross-examination of a witness is related to the language behind the testing of a hypothesis. So aspects of the legal tradition from Greece made their way into the study of nature.

We oftentimes want simple and intuitive explanations, but the complexity of the world does not allow for that. And so we should understand the origins of complex institutions to be multifactorial, and we should expect that the essential ingredients of complex institutions are sometimes spread thinly across time. The origins of natural science are, in fact, extremely variegated. Some elements of science came about in ancient Greece, yes. But some elements were not fully formed until the early modern era, from about 1500-1800 CE. Other elements had to wait longer still before they were fully codified; this is something we will see in this course.

And so, I would never for a second want to suggest that philosophy single-handedly gave birth to science; but I will say that it was there during its conception, it was present during science's long gestation period, and it played a non-negligible role in its birth. And yet, this doesn't get us any closer to defining philosophy, and anyone who is tasked with teaching it still has a problem on their hands.

Frankfurt's book

Here's another problem that arises if someone is trying to teach an introductory course in philosophy: many are predisposed to think of philosophy as bullshit. Bullshit, by the way, is actually a technical term in philosophy. I'm not kidding. Harry Frankfurt (2009; originally published in 2005) argues that bullshit (a comment made to persuade, regardless of the truth) has been on the rise for the last several decades. How do you know a bullshitter? “The liar cares about the truth and attempts to hide it; the bullshitter doesn't care if what they say is true or false, but rather only cares whether or not their listener is persuaded” (ibid., 61).

Students/colleagues who instinctively think little of philosophy don't usually come out and say the word "bullshit", but it lurks somewhere in their minds. To be honest, I don't completely disagree with them. Some philosophy is bullshit. In fact, in a follow-up to Frankfurt's 2005 monograph, Oxford philosopher G. A. Cohen charges that Frankfurt overlooked a whole category of bullshit: the kind that appears in academic works. Cohen argues that some thinkers write in ways that are not only unclear but also unclarifiable. In Cohen's second definition, which applies to the academy, bullshit is the obscure that cannot be rendered unobscure. And so, there is bullshit in philosophy, because there are (unfortunately) some philosophers who write/wrote in a completely unintelligible way. It sounds like word salad to yours truly. They were completely indifferent to having precision in meaning, and this is exceedingly regrettable for the discipline (see Sokal and Bricmont 1999). In fact, we're still trying to get past that caricature of what philosophy is.

But here's where the problem lies. It's not all bullshit. If you argue that much of philosophy is unnecessarily obscure and even pointless, consider me an ally. But if you think no philosophy is or has ever been worthwhile, then you have another thing coming.

This latter objection to philosophy, the notion that no philosophy is or has ever been worthwhile, is (I think) symptomatic of a profound anti-intellectualism that we find in our society. I'm sure you've met people who don't recognize when their beliefs are inconsistent (or self-contradictory). Perhaps you've met people who believe in completely incoherent conspiracy theories about new world orders, vaccines, and/or reptilian aliens. I've certainly met people who proudly claim that they read no books but the Bible, which (alarmingly) they believe to have been originally written in English (which is literally not possible). This, and I hope you share my sentiments, is not okay.

 

Decoding Philosophy

 

Quote by Isaac Asimov (taken from his column in Newsweek, 21 January 1980)

 

Sidebar

I actually have a theory as to why some people instinctively dislike philosophy. It has to do with something called the halo effect. First, take a look at the Cognitive Bias of the Day below.

Cognitive Bias of the Day

The halo effect, first posited by Thorndike (1920), is our tendency, once we’ve positively assessed one aspect of a person (or brand, company, product, etc.), to also positively assess other, unrelated aspects of that same entity (see also Nisbett and Wilson 1977 and Rosenzweig 2014).

Rosenzweig's analysis of this bias is particularly enlightening. He shows that this bias is rampant in the business world. Consider Company X. Let me first tell you that Company X has had record bonuses for upper management two years in a row. Now let me ask you a question: Do you think that management at Company X is exceptionally good? In order to make a truly educated guess, you'd have to look at more than the bonuses of the executives. You'd also have to look at retention rates, overall profits, how competitors are doing, etc. But the mind naturally wants to say, "Yes! There must be good management there, because why else would there be record bonuses for upper management!" This is the halo effect at work, and if you don't pay attention to how this bias influences your assessments of firms and market behavior, then you are going to make mistakes that will cost you money.

How does this explain why philosophy is looked down upon? My general sense is that, prior to a college introductory course, most students know very little or nothing about philosophy. The reasons for this are complicated, as stated above, but suffice it to say that people don't know how to feel about this discipline. Hence, the mind feels uneasiness about this subject. Uneasiness is not something that the human mind is built to endure. We naturally look for resolutions to any kind of cognitive dissonance (see Tavris and Aronson 2020). And so, when faced with a discipline we've never been exposed to before, we think to ourselves, "Had this subject been worth a damn, I would've already known about it." And so, this dissonance is resolved by deciding that the subject isn't really worthwhile.

It could also be the case that as students are choosing their majors, they lack deep and substantive reasons for choosing said majors. After all, they don't know much about the field yet. And so, some feel the need to denigrate other disciplines so as to ease cognitive dissonance. In other words, they feel a need to have good reasons for why they chose their discipline, they don't have them, and so they construct reasons why other disciplines "suck." The general idea is that the mind is doing something like this, "My discipline [which I don't know much about yet] is way better than these other disciplines since [insert whatever I can think of in the moment]."

That's at least part of my theory...

 


 

Solutions

None of the preceding, by the way, even remotely solves our demarcation problem, the setting of boundaries between what counts as philosophy and what doesn't. A standard "solution" to this problem is to use a question-based approach in introductory classes. In other words, the idea is to pose questions that are traditionally accepted as being philosophical, whatever that may mean, and look at the responses of professional philosophers. This is, more or less, the approach that I will take. However, I will not limit myself to philosophers. Being a voracious reader, I am able to add the input of thinkers from various disciplines to this conversation. As such, I will pose questions that are traditionally accepted as being philosophical and look at the responses of conventional philosophers (in a historical context) as well as the responses of psychologists, neuroscientists, mathematicians, and anyone else who might have something relevant to say.

Frankly, though, this instructor doesn't want to waste your time. To me, it wasn't enough to just cover traditional philosophical questions. So instead, I've decided to build a course that tells a story—actually, a history. I'm going to teach you a history that I think is worth knowing. It's not necessarily a history of philosophy. In a way, in fact, the philosophy is incidental. But don't worry. There'll be plenty of mind-bending philosophical ideas, and I'm bound to upset a few individuals (if I haven't already). Nonetheless, I think it's a story worth telling.

 

 

The ruined Roman city of Jerash, near present-day Amman (Jordan)

 

Collapse

The story that will be told in this course is, in a way, about collapse. From this negative perspective, we will be looking at how ideas can "collapse". But that phrasing is much too vague, especially after the tirade against some bullshit philosophers above. Let me begin with a more concrete example of collapse. Please enjoy your first Storytime! of this course:

It is difficult to imagine that our own civilization could collapse. It is even harder to imagine that our civilization could collapse and that humans centuries from now—if we still exist—would have no idea how our gadgets and technologies were made (although perhaps this is easier to imagine post-COVID). And yet, we know from the historical record that civilization has its highs and lows. At one point, Rome was the most splendid city in the world. Still it fell (and tragically so). In fact, historian Ian Morris called the fall of the Roman Empire the single greatest regression in human history.

We can speculate as to why it's difficult to think of our collapse in any realistic way. We do seem to have an end-of-history illusion. This is the bias that makes us believe that we are, for all intents and purposes, done "growing". We believe that our experiences have resulted in a set of personal tastes and preferences which will not substantially change in the future. We pretty much believe we're "done". This is, of course, not true at all, but other biases come in to block us from realizing this. The recall bias, for example, does not allow us to remember previous events or experiences accurately, especially if they conflict with input we are actively receiving. So, if your preferences did change, you are likely to think you've always held those preferences and not even notice the change.2

And so maybe these biases are active not only in our assessments about ourselves, but also in how we view history itself. Perhaps we have a predisposition to thinking that history is "done". We're "it". Civilizational complexity won't dip any lower than we are, and it won't rise much higher either.

But reason compels us to dive deeper. I don't have to remind you that civilizational collapse is a real threat. Global warming, nuclear weapons, superintelligent artificial intelligence, and other existential risks might change your life dramatically. Rest assured: The world will change; the only question is whether it will change through wisdom or through catastrophe.

This is the nature of collapse. It's the flipside of progress, it seems. And just like civilizations can rise and fall, so can ideas. In fact, ideas are sometimes a causal actor in the collapse of some civilizations (Freeman 2003). In this class, we'll look at one such collapse.

 

 

 

Important Concepts

I've taken the liberty of isolating most of the important concepts in each lesson and giving them their own section. Please take a moment to review the concepts below.

Some comments

Although in an introductory philosophy class you will not dive deep into the nature of logical consistency, since that is reserved for a course in introductory logic, I believe that you intuitively (and non-consciously) care about logical consistency. If you've ever seen a movie with a plot hole and it bothered you, then you care about logical consistency. It's just that simple. Consistency just means that the sentences describing the events in the movie can all be true at the same time. But in plot holes, this is not possible. Some parts of the movie contradict other parts. And so, the movie contains inconsistencies and we intuitively sense something is wrong.

Logical consistency will be a cornerstone of everything we will be doing. We will build up conceptual frameworks (i.e., philosophical theories) and we will be perpetually on the lookout to ensure that the ideas in question do not contradict each other. As such, it is important that you understand this concept well. Logical consistency only means that it's possible that all the sentences in a set are true at the same time. That's it. Don't equate consistency with truth. Two statements can be consistent while being false. For example, here are two sentences:

  • "The present King of France is bald."
  • "Baldness is inherited from your mother's side of the family."

"The present King of France is bald" is false. This is because there is no king of France. Nonetheless, these sentences are logically consistent. They could both be true at the same time; it just happens that they're not both true.3

Arguments will also be central to this class. Arguments in this class, however, will not be heated exchanges like the kind that couples have in the middle of an IKEA showroom. For philosophers, arguments are just piles of sentences that are meant to support another sentence, i.e., the conclusion. The language of arguments has been co-opted into other disciplines, such as computer science, but, although I'll cover this briefly, you'll have to learn about that mostly in a logic or computer science course.
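To give you a small taste of that computer-science connection, here is a minimal sketch in Python (my own illustration, not part of the official course material) that mechanizes the consistency check from above: treat each sentence as an atomic proposition and ask whether some assignment of truth values makes every sentence in the set true at once. Consistency asks only whether such an assignment exists; it says nothing about which sentences the actual world makes true.

    from itertools import product

    def consistent(formulas, atoms):
        """True if some assignment of truth values makes every formula true at once."""
        for values in product([True, False], repeat=len(atoms)):
            world = dict(zip(atoms, values))
            if all(f(world) for f in formulas):
                return True
        return False

    # K: "The present King of France is bald."
    # M: "Baldness is inherited from your mother's side of the family."
    atoms = ["K", "M"]

    # Consistent: some assignment makes both true (even though, in fact, K is false).
    print(consistent([lambda w: w["K"], lambda w: w["M"]], atoms))      # True

    # Inconsistent: no assignment makes a sentence and its own negation true together.
    print(consistent([lambda w: w["K"], lambda w: not w["K"]], atoms))  # False

Note that the first check succeeds even though, in the actual world, there is no King of France; that is exactly the sense in which consistency and truth come apart.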

Food for Thought...

Informal Fallacy of the Day

As you learned in the Important Concepts, there are two types of fallacies: formal and informal. A course in symbolic logic focuses on formal reasoning through the use of a specialized language. Formal fallacies are most clearly understood in this context. We won't be covering them here. There are also classes that focus more on the informal aspect of argumentation, such as PHIL 105: Critical Thinking and Discourse. Typically all the informal fallacies are covered in courses like these. In our case, we will only infrequently discuss some informal fallacies. Here's your first one!

 

 

 

Eurocentrism

On multiple occasions, well-meaning colleagues have questioned why I don't feature some non-Western philosophies in my introductory courses. Even some students have asked me about this. I'd like to now give you a summary of the reasons I have for focusing mostly on the Western Tradition in PHIL 101.

Three Reasons Why I Focus on the Western Tradition in PHIL 101

  1. Although my friends and colleagues are well-meaning, there is a hint of a potentially pernicious implication in their line of questioning. It almost seems like they are asking, "Why don't you focus on material that is more yours?" The implication is that neither I, nor my students of non-European descent, are really Western. I hate to be the bearer of bad news, but Europeans (and the descendants of Europeans) have dominated much of the world since the middle of the 20th century, whether it be culturally, economically, politically, or militarily (as in the numerous American occupations after World War II). It's also true that European standards (of reasoning, of measuring, of monetary value, etc.) have become predominant in the world as part of a general (if accidental) imperial project (see chapter 21 of Immerwahr's 2019 How to Hide an Empire). Thus, those of us who are Latinx, Native American, of African descent, etc., are immersed in Western culture, whether we like it or not. It is our right to learn about Western culture, because it has become our culture, even if it was through conquest (or worse). It is our duty as good thinkers, however, to also be critical of this tradition, which is how we will progress.

    Moreover, as I think you will come to see, whether you are of European descent or not, Western ideas really do permeate your mind. I believe this because students who should, according to my well-meaning colleagues, be more closely aligned with a non-Western ideology actually become very defensive when I question Western ideas. As we go through this class, you will notice that some of your most deeply held convictions are Western in origin. And it's going to bother you when I go after them. Fun times ahead.

  2. The Europeans who gave rise to the Western tradition had some really good ideas! I'm not going to not cover them just to assuage white guilt. These great ideas include, by the way, democratic governance, classical liberalism, humanism, and the modern scientific method. We must give credit where credit is due.

  3. The Europeans who gave rise to the Western tradition also had some really bad ideas, and I want to cover them. But it is unfair, I think, to look at only the bad. In order to assess a tradition, we must approach it from all directions. I will tell you about the good and the bad. And if there's one thing you should know about yours truly it's that I don't pull punches. This will be, without a doubt, a critical introduction.

I promise I'll try to give you an even-handed assessment of this tradition. Know that I'm doing it this way because I don't want us to fall into the traps laid for us by our biases, from the halo effect to the end-of-history illusion. We can't just say, "Philosophy bad" or "Yay non-Western philosophy". The world is nuanced. Ideas are nuanced. In the days to come, you'll have to fight your urge to either completely accept or completely reject an idea. It's time to be ok with cognitive discomfort.

One last thing...

I designed this course so that it all revolves ultimately around one simple question. It may seem unlikely, but every single topic we'll be covering will, in the final analysis, be connected to this question. As such, I call this the fundamental question of the course. Stay tuned.

 


 

Executive Summary

  • Anyone tasked with teaching introductory philosophy is faced with two problems: demarcating what philosophy actually is (and then teaching it), and overcoming the initial apprehension that some have about the discipline.

  • The main theme of the course is, depressingly(?), collapse. We will study the collapse of an idea.

  • The notion of logical consistency and the analysis of rational argumentation will play essential roles throughout the course.

  • The influence of the Western Tradition is pervasive. You will find that many ideas that are traditionally considered to be "Western" reside within your mind, whether you are of European descent or not.

 


 

Lifestyle Upgrade


As we learned in today's lesson, what philosophy is varies between time periods and even thinkers within certain time periods. In these sections, I'd like to focus on philosophy as a way of life, the so-called lifestyle philosophies of, for example, the Stoics. The Stoics, above all else, sought to live life in accordance with nature. For them, this meant living wisely and virtuously. This was done by using reason to help them achieve excellence of character: presence of mind, wisdom, understanding reality as it truly is, not letting emotions get the better of them, and the development of generally desirable character traits.

What can we do in this course that is Stoic-approved? Well, in your current role, your task is to learn this material. So, a Stoic would emphasize dedicating yourself to fulfilling your duties.

How does one learn material like this? As it turns out, most students are completely new to the discipline of philosophy, since it is not usually taught at the high school level. This means you're not always sure how to go about studying. So, in what follows, I hope to make some recommendations that you can implement during the rest of this course.

To learn philosophy, you need to engage in both declarative and procedural learning. You might not know what that means, and that's ok, so let me just tell you what to do. The first step is to learn the relevant concepts. Before really diving into a lesson, you should skim it and look for all the concepts that are underlined, bolded, or that otherwise seem important. (In the next lesson, they are as follows: deduction (deductive arguments), induction (inductive arguments), validity, soundness, imagination method, modus ponens, modus tollens, epistemology, metaphysics, JTB theory of knowledge, skepticism, the regress argument, and begging the question.) Write them down and define them using the content in the lesson. Practice these words for a bit. Read the name of the concept, cover up the definition, and try to recall what it is that you wrote. This is called retrieval practice. Once you do this a few times, take a break. After a five-minute break or so, you're ready to read the lesson.

As you're reading the lesson, take careful notes on the topic under discussion, the different positions being argued, and the arguments for and against those positions. This might take a while. I recommend you do this in 25-minute "bursts". Set a timer, put away all distractions (like phones), and start to work through the material. Once the timer goes off, take a break. Then repeat until you complete the lesson.

All these recommendations, by the way, come from the most up-to-date findings in educational neuroscience and should work for any class. You can check out A Mind for Numbers, How We Learn, and Uncommon Sense Teaching for more info on this. If you don't have time to read these at the moment, here's a TED Talk by the author of A Mind for Numbers and co-author of Uncommon Sense Teaching, Barbara Oakley:

 

 

I'll give you some more tips next time. Until then, remember that your mind is the most fundamental tool you have in life. All other tools are only utilized well once you have mastered your own mind. Training and taking control of your mind is, in a sense, literally everything.

 


 

FYI

Supplemental Material—

 

Footnotes

1. For an introduction to the history and philosophy of science, DeWitt (2018) is definitely a good start.

2. An interesting example of the recall bias can be found in breast cancer patients. Apparently, getting diagnosed with breast cancer changes a woman's retrospective assessment of her eating habits. In particular, it makes patients more likely to remember eating high-fat foods, as compared to women who were not diagnosed with breast cancer. This is the case even in longitudinal studies where records of the subjects' food diaries suggest no discernible difference in eating patterns between the two groups. It is simply the case that cancer patients believe they ate more high-fat foods because they are sick. It is the reality of being faced with a deadly cancer (an active input) that predisposes their minds towards remembering actions and events that might've led to this sickly state, whether they actually happened or not (see Wheelan 2013, chapter 6).

3. Contrary to popular belief, apparently baldness is not all your mother's fault. At the very least, smoking and drinking have an effect.

Agrippa's Trilemma

 

Not to know what happened before you were born is to remain forever a child.

~Cicero

 

On the possibility of the impossibility of learning from history

There are so many cognitive traps when one is studying history. As we mentioned last time, we have biases operating at an unconscious level that don't allow us to perform an even-handed assessment of persons, ideas, products, etc. These biases might also prevent us from properly assessing a culture from the past (or even a contemporary culture that is very different from our own). In addition to this, however, it appears that we sapiens are surprisingly malleable with regard to the tastes and preferences that culture can instill in us. Cultures, past and present, range widely on matters regarding humor, the family, art, when shame is appropriate, when anger is appropriate, alcohol, drugs, sex, rituals at time of death, and so much more.1

Recognizing our limitations, we have to always be cognizant of the boundaries of our intuition and of our prejudices. More than anything, we need to keep in check our unconscious desire to know how to feel about something right away. Whether it be some person, some practice, some event, or some idea, our mind does not like dealing in ambiguities; it wants to know how to feel right away.

Diagram of Systems 1 and 2

The psychological machinery that underlies all this is very interesting. The psychological model endorsed by Nobel laureate Daniel Kahneman, known as dual-process theory, is illuminating on this topic. Although I cannot give a proper summary of his view here, the gist is this. We have two mental systems that operate in concert: a fast, automatic one (System 1) and a slow one that requires cognitive effort to use (System 2). Most of the time, System 1 is in control. You go about your day making rapid, automatic inferences about social behavior, small talk, and the like. System 2 operates in the domain of doubt and uncertainty. You'll know System 2 is activated when you are exerting cognitive effort. This is the type of deliberate reasoning that occurs when you are learning a new skill, doing a complicated math problem, making difficult life choices, etc.2

How is this related to the study of history? Here is what I'm thinking. It appears that cognitive ease, a mental feeling that occurs when inferences are made fluently and without effort (i.e., when System 1 is in charge), makes you more likely to accept a premise (i.e., to be persuaded). In fact, it's been shown that using easy-to-understand words increases your capacity to persuade (Oppenheimer 2006). On the flip side, cognitive strain increases critical rigor. For example, in one study, researchers displayed word problems in a hard-to-read font, thereby causing cognitive strain in subjects (since one has to strain to read the problem). Fascinatingly, this actually improved their performance(!). This is due to the cessation of cognitive ease and the increase of cognitive strain, which kicked System 2 into gear (Alter et al. 2007).

It is even the case that cognitive ease is associated with good feelings. When researchers made images more easily recognizable to subjects (by displaying the outline of an object just before the object itself appeared), they were able to detect electrical impulses from the facial muscles that are used in smiling (Winkielman and Cacioppo 2001).3

The long and short of it is that if you are in a state of cognitive ease, you'll be less critical; if you are in a state of cognitive strain, you've activated System 2 and you are more likely to be critical.

Thus, when you hear or read about cultural practices that are very much like your own, you often unquestioningly accept them, since you are in a state of cognitive ease. However, when you read about cultural practices (or ideas or whatever) that are very much unlike your own, this increases cognitive strain. Because of this, we are more likely to be critical about such practices (or ideas, etc.). Of course, it is ok to be critical, but we are often overly critical, applying a strict standard that we don't apply to our own culture. Moreover, this is compounded by the halo effect. We've found one thing we don't like about said culture, and so we erroneously infer the whole culture is rotten.

I'm not, of course, saying that this effect manifests itself in everyone all the time, or even in some people all the time. But it is a possible cognitive roadblock that might arise when you are going through the intellectual history that we'll be going through.

 

 

High-water mark

We'll begin our story soon, but let me give you two bits of historical context. The first has to do with the recent memory of the characters in our story. Our story proper begins next time, in the year 1600, but to understand why the thinkers we are covering thought the way they did, you first have to know what they had seen, what their parents had seen, and what they had been raised to accept. The 16th and early 17th centuries were, in my assessment, the high-water mark of religiosity in Europe. This period was characterized by a religious conviction that is only rivaled, to my mind, by the brief period of Christian persecution at the hands of the Romans under Nero.4

I am not alone in believing that this period stands out. In their social history of logic, Shenefelt and White (2013: 125-130) discuss the religious fanaticism of the 16th and 17th centuries, which were stained with wars of religion. The wars of this time period included the German Peasant Rebellion in 1524 (shortly after Martin Luther posted his Ninety-five Theses in 1517), the Münster Rebellion of 1534-1535, the St. Bartholomew’s Day Massacre (1572), and the pervasive fighting between Catholics and Protestants that culminated in the Thirty Years' War (1618-1648). Shenefelt and White also discuss how these events inspired some thinkers to argue that beliefs needed to be supported by more than just faith and dogma. Thinkers became increasingly convinced that our beliefs require strong foundations. My guess is that if you had lived through those times, you would've likely been in shock too. You would've longed for a way to restore order.

And so this is why I call this the high-water mark of religiosity. But notice that embedded in this claim is a very improper implication. By saying that this was the highest point of religious fervor in the West, and by grouping (most of) you into the greater Western tradition, I'm implying that these 16th and 17th century believers were more religious than you are (if you are religious). How very devious of me! Notwithstanding the outrage that many have expressed when I say this, I stand by my claim. It really does seem to me that the religious commitment of 16th and 17th century Christians was stronger than that of Christians today.

At this point, we could get into a debate about what religious commitment really is; or how the institution that one is committed to might change over the centuries so that commitment looks different in different time periods. I'd love to have those conversations. My general argument would be that the standards of what it meant to be a believer back then were higher than the standards of today (where going to a service once a week suffices for many). But without even getting deep into that, let's just look at the behavior of believers 400 years ago. Maybe once you learn about some of their practices, you'll side with me.

When I first wrote this lesson, I knew I needed something shocking to show you just how different things were in the early modern period in Europe. And, to be honest, I didn't think for too long. I knew right away what would drop your jaw. I'm going to describe for you an execution, in blood-curdling detail—an execution steeped in religious symbolism, one in which the "audience" was sometimes jealous of the person being tortured and executed.

Stay tuned.

 

Breaking on the wheel

 

In the center you have Plato on the left and Aristotle on the right

 

Important Concepts

 

Distinguishing Deduction and Induction

As you saw in the Important Concepts, I distinguish deduction and induction thus: deduction purports to establish the certainty of the conclusion while induction establishes only that the conclusion is probable.5 So basically, deduction gives you certainty; induction gives you probabilistic conclusions. If you perform an internet search, however, this is not always what you'll find. Some websites define deduction as going from general statements to particular ones, and induction as going from particular statements to general ones. I understand this way of framing the two, but the distinction isn't foolproof. For example, you can write an inductive argument that goes from general principles to particular ones, like only deduction is supposed to do:

  1. Generally speaking, criminals return to the scene of the crime.
  2. Generally speaking, fingerprints have only one likely match.
  3. Thus, since Sam was seen at the scene of the crime and his prints matched, he is likely the culprit.

I know that I really emphasized the general aspect of the premises, and I also know that those statements are debatable. But what isn't debatable is that the conclusion is not certain. It only has a high degree of probability of being true. As such, using my distinction, it is an inductive argument. But clearly we arrived at this conclusion (a particular statement about one guy) from general statements (about the general tendencies of criminals and the general accuracy of fingerprint investigations). All this is to say that, for this course, we'll be exclusively using the distinction established in the Important Concepts: deduction gives you certainty, induction gives you probability.

In reality, this distinction between deduction and induction is fuzzier than you might think. In fact, recently (historically speaking), Axelrod (1997: 3-4) argues that agent-based modeling, a newfangled computational approach to solving problems in the social and biological sciences, is a third form of reasoning, neither inductive nor deductive. As you can tell, this story gets complicated, but it's a discussion that belongs in a course on Argument Theory.

 

Food for Thought...

 

Alas...

In this course we will focus only on deductive reasoning, due to the particular thinkers we are covering and their preference for deductive certainty. Inductive logic is a whole course unto itself. In fact, it's more like a whole set of courses. I might add that inductive reasoning is well worth learning if you are pursuing a career in computer science. This is because there is a clear analogy between statistics (a form of inductive reasoning) and machine learning (see Dangeti 2017). Nonetheless, this will be one of the few times we discuss induction. What will be important to know for our purposes, at least for now, is only the basic distinction between the two forms of reasoning.

 

Statue of Aristotle

 

Assessing Arguments

 

Some comments

Validity and soundness are the jargon of deduction. Induction has its own language of assessment, which we will not cover. These concepts will be with us through the end of the course, so let's make sure we understand them. When first learning the concepts of validity and soundness, students often fail to recognize that validity is a concept that is independent of truth. Validity merely means that if the premises are true, the conclusion must be true. So once you've decided that an argument is valid, a necessary first step in the assessment of arguments, you proceed to assess each individual premise for truth. If all the premises are true, then we can further brand the argument as sound.6 If an argument has achieved this status, then a rational person would accept the conclusion.

Let's take a look at some examples. Here's an argument:

  1. Every painting ever made is in The Library of Babel.
  2. “La Persistencia de la Memoria” is a painting by Salvador Dalí.
  3. Therefore, “La Persistencia de la Memoria” is in The Library of Babel.

At first glance, some people immediately sense something wrong about this argument, but it is important to specify what is amiss. Let's first assess for validity. If the premises are true, does the conclusion have to be true? Think about it. The answer is yes. If every painting ever made is in this library and "La Persistencia de la Memoria" is a painting, then this painting should be housed in this library. So the argument is valid.

But validity is cheap. Anyone who can arrange sentences in the right way can engineer a valid argument. Soundness is what counts. Now that we've assessed the argument as valid, let's assess it for soundness. Are the premises actually true? The answer is: no. The second premise is true (see the image below). However, there is no such thing as the Library of Babel; it is a fiction invented by a poet. So, the argument is not sound. You are not rationally required to believe it.
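If you like seeing this mechanically, here is a minimal sketch in Python (my own illustration, not part of the course material; the tiny "universe" of objects and the empty-library model of "there is no Library of Babel" are assumptions made purely for illustration). Validity is modeled as truth-preservation across every toy world we can build; soundness additionally asks whether the premises hold in the actual world.

    from itertools import combinations

    dali = "La Persistencia de la Memoria"
    universe = [dali, "Mona Lisa", "a rock"]  # a made-up universe of objects

    def subsets(xs):
        """All subsets of xs, so we can enumerate every toy 'world'."""
        return [set(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

    def premises(paintings, library):
        # Premise 1: every painting ever made is in The Library of Babel.
        # Premise 2: "La Persistencia de la Memoria" is a painting.
        return paintings <= library and dali in paintings

    def conclusion(paintings, library):
        # Conclusion: "La Persistencia de la Memoria" is in The Library of Babel.
        return dali in library

    # Validity: the conclusion is true in EVERY toy world in which the premises are true.
    valid = all(conclusion(p, lib)
                for p in subsets(universe)
                for lib in subsets(universe)
                if premises(p, lib))
    print(valid)  # True -> the argument's form guarantees the conclusion

    # Soundness: the premises must also hold in the actual world. There is no
    # Library of Babel, so (modeling its holdings as empty) premise 1 fails.
    print(premises({dali, "Mona Lisa"}, set()))  # False -> valid but not sound

The point of the last line is the same as above: the form of the argument is impeccable, but a false premise keeps it from being sound.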

Here's one more:

  1. All lawyers are liars.
  2. Jim is a lawyer.
  3. Therefore Jim is a liar.

You try it!7

 

Pattern Recognition

 

La Persistencia de la Memoria, by Salvador Dalí

 

The Parthenon in Athens, Greece

 

The second bit

I said we needed two bits of historical context before proceeding. I gave you one: the high-water mark of religiosity. Here's the other.

In 1600, the Aristotelian view of science still dominated. Intellectuals of the age saw themselves as connected to the ancient ideas of Greek and Roman philosophers. It's even the case that academic works were written in Latin. So, in order to understand their thoughts, you have to know a little bit about ancient philosophy. Enjoy the timeline below:

 

 

Storytime!

There is one ancient school of philosophy that I'd like to introduce at this point. I'm housing this section within a Storytime! because very little is known about the founder of this movement. For all I know, none of what I've written in the next paragraph is true. Here we go!

Pyrrho (born circa 360 BCE) is credited as the first Greek skeptic philosopher. It is reputed that he travelled with Alexander (so-called "the Great") on his campaigns to the East. It is there that he came to know of Eastern mysticism and mindfulness. And so he came back a changed man. He had control over his emotions and had an imperturbable tranquility about him.

Here's what we do know. He was a great influence on Arcesilaus (ca. 316-241 BCE), who eventually became a teacher in Plato's school, the Academy. Arcesilaus' teachings were informed by the thinking of Pyrrho, and this initiated the movement called Academic Skepticism, the second Hellenistic school of skeptical philosophy. This line of thinking continued at least into the third century of the common era.

Skepticism is a view about knowledge, namely that we cannot really know anything. The branch of philosophy that focuses on matters regarding knowledge is epistemology. You'll learn more about that below in Decoding Epistemology.

Decoding Epistemology

 

 

The Regress Argument

Although Pyrrhonism is interesting in its own right, we won't be able to go over its finer details here. In fact, we will only concern ourselves with one argument from this tradition. In effect, the last piece of the puzzle in our quest for context will be the regress argument, a skeptical argument whose conclusion states that knowledge is impossible. The regress argument, by the way, is also known as Agrippa’s Trilemma, named after Agrippa the Skeptic (a Pyrrhonian philosopher who lived from the late 1st century to the 2nd century CE).

To modern ears, the regress argument seems like a toy argument. It seems so far removed from our intellectual framework that it is easy to dismiss. But, again, that's easy for you to say. You are, after all, reading this on a computer. You are assured that the world's store of knowledge is safe. You didn't live through the Peloponnesian War, or the fall of the Roman Empire, or the Thirty Years' War. You are comfortable that science will progress, perhaps indefinitely. In other words, you don't really think that collapse is possible for your civilization. But thinkers of the past didn't have this luxury. They were concerned with basic distinctions like, for example, the distinction between knowledge and opinion.8

As such, try to be charitable when you read this argument. Today, epistemology, the branch of philosophy concerning knowledge, is more like a game that epistemologists play. But in the ancient world, when the notion of rational argumentation was still in its infancy, the possibility that perhaps we can never really know anything (i.e., skepticism) was a real threat.

The argument

  1. In order to be justified in believing something, you must have good reasons for believing it.
  2. Good reasons are themselves justified beliefs.
  3. So in order to justifiably believe something, you must believe it on the basis of an infinite number of good reasons.
  4. No human can have an infinite number of good reasons.
  5. Therefore, it is humanly impossible to have justified beliefs.
  6. But knowledge just is justified, true belief (the JTB theory of knowledge).
  7. Therefore, knowledge is impossible.

The general idea is quite simple. Consider a belief, say, "My dog is currently at home". How do you know that belief is true? You might say, "Well, she was home when I left the house, and, in the past, she's been home when I've gotten back to the house." A skeptic would probe further. "How do you know that today won't be the exception?" the skeptic might ask. "Perhaps today's the day she ran away, or the day someone broke in and stole her." You give further reasons for your beliefs. "Well, I live in a safe neighborhood, so it's unlikely that anyone broke in" and "She's a well-behaved dog so she wouldn't run away" are your next two answers. But the skeptic continues, "But even safe neighborhoods have some crime. How can you be sure that no crime has occurred?" Eventually, you'd get tired of providing support for your views. Even if you didn't, it's impossible for you to continue this process indefinitely (since you live only a finite amount of time). If knowledge really is justified, true belief, then you could never really justify your belief, because every justification needs a justification. I made a little slideshow for you of the "explosion of justifications" required, where B is the original belief and the R's are reasons (or justifications) for that belief. Enjoy:
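To put a rough number on that explosion, here is a minimal sketch in Python (my own illustration, not part of the course material; the assumption that each belief needs exactly two supporting reasons is just for illustration, matching the branching R's in the slideshow).

    def reasons_required(levels, per_belief=2):
        """Total reasons owed after pushing justification down `levels` steps,
        assuming each belief (and each reason) needs `per_belief` reasons."""
        return sum(per_belief ** k for k in range(1, levels + 1))

    for levels in [1, 2, 5, 10, 50]:
        print(levels, reasons_required(levels))
    # 1 2
    # 2 6
    # 5 62
    # 10 2046
    # 50 2251799813685246

No finite number of levels ever closes the gap; every new level simply adds more reasons that are themselves still unjustified, which is precisely the regress the skeptic is pointing to.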

According to Agrippa, you have three ways of responding to this argument (and none of them work):

  • You can start providing justifications, but you’ll never finish.
  • You could claim that some things don’t need further justification, but that would be a dogma (which is also unjustified).
  • You could try to assume what you are trying to prove, but that’s obviously circular.

The third possibility that Agrippa points out is definitely not going to work. In fact, that form of reasoning is considered an informal fallacy, which brings us to the...

Begging the Question

This is a fallacy that occurs when an arguer gives as a reason for his/her view the very point that is at issue.

Shenefelt and White (2013: 253) give various examples of how this fallacy appears "in the wild", but the main thread connecting them is the circular nature of the reasoning. For example, say someone believes that (A) God exists because (B) it says so in the Bible, a book which speaks only truth. They might also believe that (B) the Bible is true because (A) it comes from God (who definitely exists). Clearly, this person is reasoning in circles.

An even more obvious example is a conversation I overheard in a coffee shop once. One person said, "God exists. I know it, man." His friend responded, "But why do you believe that? How do you know that God exists?" The first person, without skipping a beat, said, "Because God exists, bro." Classic begging the question.

 

 

 

Two more things...

Now that you know about the high degree of religiosity and some tidbits about ancient philosophy, the setup is complete. We can begin to move towards 1600. Let me just close with two points. First, I know I left you hanging last time. Let me correct that now. The fundamental question of the course is: What is knowledge? I know it doesn't seem like it could take the whole term to answer this question, but you'd be surprised.

Second, that execution that I mentioned... Don't worry. It's coming.

 


 

Executive Summary

  • There are two bits of context that are important for understanding the history being told in this class:

    1. The first century of the early modern period in Europe (1500-1800 CE) was characterized by a high degree of religiosity;
    2. There was still an active engagement between the thinkers of this early modern period and the philosophies of ancient Greece.
  • The distinction between deduction (which purports to give certainty) and induction (which is probabilistic reasoning) is important to understand.

  • The jargon (i.e., technical language) for the assessment of arguments, namely the concepts of validity and soundness, is essential to know.

  • Epistemology is the branch of philosophy that concerns itself with questions relating to knowledge.

  • The ancient philosophical schools of skepticism posed challenges to the possibility of having knowledge, challenges that early modern thinkers were still working through.

  • The regress argument, one argument from the skeptic camp, questions whether we can ever truly justify our beliefs, thereby undermining the possibility of having knowledge (at least per the definition of knowledge assumed in the JTB theory of knowledge).

FYI

Suggested Reading: Harald Thorsrud, Ancient Greek Skepticism, Section 3

TL;DR: Jennifer Nagel, The Problem of Skepticism

Supplementary Material—

Related Material—

Advanced Material—

 

Footnotes

1. To add to this, there was a major philosophical debate in the 20th century over the possibility of translation (e.g., see Quine 2013/1960). Consider how, in order to translate the modes of thought and concepts of an alien culture, you need to first interpret them. But the very process of interpretation is susceptible to a misinterpretation—distortion due to unconscious biases. Perhaps the whole process of translation itself is doomed.

2. The interested student should consult Kahneman's 2011 Thinking, Fast and Slow or watch this helpful video.

3. Zajonc argues that this trait, to find the familiar favorable, is evolutionarily advantageous. It makes sense, he argues, that novel stimuli should be looked upon with suspicion, while familiar stimuli (which didn’t kill you in the past) can be looked on favorably (Zajonc 2001).

4. Emperor Nero took advantage of the Great Fire of 64 CE to build a great estate. Facing accusations that he deliberately caused the fire, he heaped the blame on the Christians, and a short campaign of persecution began. However, the Christians appeared to revel in the persecution. Martyrdom allowed many who were otherwise of lowly status or from a disenfranchised group (like women or slaves) to become instant celebrities and be guaranteed, they believed, a place in heaven. Martyrdom literature proliferated, and Christians actively sought out the most painful punishments (see chapter 4 of Catherine Nixey's The Darkening Age).

5. By the way, I'm not alone in using this distinction. One of the main books I'm using in building this course is Herrick (2013), who shares my view on this distinction.

6. Another common mistake that students make is that they think arguments can only have two premises. That's usually just a simplification that we perform in introductory courses. Arguments can have as many premises as the arguer needs.

7. This argument is valid but not sound, since there are some lawyers who are non-liars—although not many.

8. Interestingly, in an era of disinformation where there is non-ironic talk of "alternative facts" and "post-truth", the distinction between knowledge and opinion is once again an important philosophical distinction to make.

 

 

The Advancement of Learning

 

No fact is safe from the next generation of scientists with the next generation of tools.

~Stuart Firestein

 

Worldviews

The year 1600 marks a turning point, according to historian and philosopher of science Richard DeWitt. It was in this year that a worldview, and its set of accompanying beliefs and ideas, died. The Aristotelian worldview, which had dominated Western thought starting at about 300 BCE, finally imploded. In its wake, a deterministic and mechanical worldview came to permeate the minds of intellectuals, scientists, and philosophers.

Now, of course, the death of abstractions is always hard to pin down. The year 1600, more than anything, is perhaps best considered a convenient record of the hour of death. Even more elusive than determining the time of death of an abstraction may be coming to an agreement over its cause of death. Even though there is nothing analogous to, say, cardiac arrest for an abstraction, there is what we might call a natural death: some ideas simply run their course and are abandoned. This, we can say with considerable certainty, was not what happened to the Aristotelian worldview. Speaking crudely, this worldview lived past its prime and its usefulness, and so it was put down deliberately via the concerted effort of many of the most famous names in science and philosophy—Copernicus, Galileo, Descartes, Kepler, and Newton (to name a few). Its cause of death, in short, was violent.

To understand how a worldview dies, however, we should probably begin with some basics. What is a worldview, anyhow? And how do abstractions die? What was the Aristotelian worldview? What replaced it? We take these in turn.

 

Jigsaws

DeWitt begins his Worldviews by clarifying the notion of a worldview. He likens beliefs to a jigsaw puzzle. To have a worldview is to have a system of beliefs where the beliefs fit together in a coherent, rational way. In other words, the beliefs of our worldview should fit together like the pieces of a puzzle. We wouldn't, for example, both believe that the Earth is the center of the universe and that our solar system revolves around the center of the Milky Way galaxy. There, of course, can't be two centers. And so, at least when we are being our best rational selves, we develop consistent, rational jigsaw puzzles of beliefs.

Moreover, the central pieces tend to be more important. These are what allow the rest of our beliefs to fit in or cohere with each other. Without the central pieces, the outer pieces are just "floating", with no connection to the center or to each other. It may be that, as happens to be the case with me, when you are assembling a jigsaw, you sometimes change the location of the outer pieces. You thought a piece went in the lower-right quadrant, but it turns out it belongs in the upper left. Analogously, less central beliefs in our worldview might be updated or even discarded, but the central beliefs are integral to the system. The central beliefs give the system meaning and form.

 

The geocentric model

 

The Aristotelian Worldview

The Aristotelian Jigsaw: a jigsaw puzzle of Aristotle's beliefs forming his worldview (borrowed from DeWitt 2018: 10).

Before proceeding, it should be said that what is called the Aristotelian worldview is not exactly what Aristotle believed. Rather, it takes as a starting point several beliefs held and defended by Aristotle and grows from there. Having said that, let's take a look at some of these beliefs:

  1. The Earth is located at the center of the universe.
  2. The Earth is stationary.
  3. The moon, the planets, and the sun revolve around the Earth in roughly 24-hour cycles.
  4. The region below the moon, the sublunar region, contains the four basic elements: earth, water, air, fire.
  5. The region above the moon, the superlunar region, contains the fifth element: ether.
  6. Each of the elements has an essential nature which explains their behavior.
  7. The essential nature of the elements is reflected in the way the elements move.
  8. The element earth has a tendency to move towards the center of the universe. (This explains why rocks fall down, since the Earth is the center of the universe.)
  9. The element of water also has a tendency to move toward the center of the universe, but this tendency is not as strong as that of earth. (This is why when you mix dirt and water, the dirt eventually sinks.)
  10. The element air naturally moves away from the center of the universe. (That’s why when you blow air into water, the air bubbles up.)
  11. Fire also tends to naturally move away from the center of the universe. (That is why fire rises.)
  12. Ether tends to move in circles. (This is why the planets, which are composed of ether, tend to move in circular motions around the Earth.)

 


 

Slow down there, Turbo...


I've sometimes encountered people, including people who should know better, who naively believe that people who held the Aristotelian worldview were "simple-minded" or even "stupid". I assure you, they had the same (roughly speaking) mental machinery that you and I have. In the case of Aristotle, Ptolemy, and others, I'm actually willing to wager that they were much smarter than even the average college professor. Moreover, they had what were—at the time—very good rational and empirical arguments for their beliefs. I'm sure that if you had been around in those days, you would've bought into them too.

DeWitt (ibid., 81-91) summarizes some arguments for the Aristotelian viewpoint from Ptolemy’s Almagest (published ca. 150 CE). For example, it was known that the Earth was spherical since objective events, like eclipses, were recorded at different local times in different places, and the difference in time was proportional to the distance between locations. By studying the regularity with which the sun rises in the East prior to more westerly locations, the ancients reasoned that the curvature of the Earth is more or less uniform. Having established the east-west curvature of the Earth, which on its own is compatible with the Earth being a cylinder, Ptolemy then notes that some stars only become visible the further north one travels (and the same goes when traveling south). This suggests that the Earth is curved in the north-south direction too, thereby establishing that the Earth is spherical.

Even though you agree (I hope) that the Earth is spherical, could you have come up with those arguments? It's important to remember that our ancestors were not naive.

It's instructive also, I think, to look at beliefs you likely don't agree with. Ptolemy gives some common-sense arguments for geocentrism: the belief that the Earth is the center of the solar system and/or the universe. Here's one such argument. The ancients knew that the Earth’s circumference was about 25,000 miles. Assuming that the Earth rotates on an axis would lead to the conclusion that a full rotation takes 24 hours. This would mean that, if we were standing at the equator, we would be spinning at over 1,000 miles per hour (since 25,000 miles / 24 hours = 1,041.7 mph). This speed would obviously be compounded by an Earth that is also orbiting the sun. But it doesn’t even feel like we are moving at 1,000 mph, so there's no need to even factor in the additional orbital speed. Heliocentrism simply doesn’t match our everyday experience, since it doesn't at all feel like we are on a sphere travelling at well over a thousand miles per hour.

Here's another argument. Objects don’t typically move without some external force acting on them. Try it if you want to convince yourself. Earth, moreover, is a very large object. It stands to reason that the Earth would move only if some enormous force were moving it. But no such force is immediately evident. So it is reasonable to infer that the Earth is stationary.

The aforementioned arguments could perhaps be dubbed "common-sense" arguments. Ptolemy also gave more empirical arguments about the nature of falling objects and stellar parallax, but these are a little more technical than what is needed to prove the basic point I want to make: the ancients were not dumb. In fact, in section 7 of the preface to Almagest, Ptolemy even briefly considers the mechanics of heliocentrism(!). It is also noteworthy that it took the combined efforts of some of the biggest names in science, as we've already mentioned, to dethrone geocentrism. Relatedly, Ptolemy's model of the solar system was unrivaled in predictive power for 1400 years. Lastly, most educated people from around 400 BCE onward correctly believed, as we do, that the Earth is spherical. In sum, the ancients were no slouches.

 


 

How do worldviews die?

Recall the analogy between worldviews and jigsaw puzzles. It is the central pieces that hold most of the import. These central pieces connect what would otherwise be disunited and seemingly unrelated components. The center is, in short, the beating heart of the worldview. Worldviews die when their central tenets are no longer believed.1

 

 

London, 1600

 

The magicians

We find ourselves in London in the year 1600. We are in the tail end of the Tudor Period (1485-1603), which is generally considered to be a "golden age" in English history. Queen Elizabeth I is on the throne, as she has been since 1558, and what a reign it's been. Advancements in cartography and the study of magnetism have led to an increase in maritime trade. The British navy itself is doing very well: in the summer of 1588, the English defeated the Spanish Armada, which had previously been thought to be invincible. There have been no major famines or droughts, and literacy rates have risen. It's even the case that, towards the end of Elizabeth's reign, in the early 1590's, Shakespeare's plays began to hit the stage.

It was during this time period that some pivotal steps towards modern science were taken. A few decades earlier, in the middle of the 1500s, there had been remarkable innovations in the making and use of tools of observation, as well as in ways of conceptualizing and categorizing one’s findings. Tools of observation and the systematizing of belief (an inheritance from Aristotle) are, of course, essential to science. But by 1600, one idea more than any other was catching on in certain intellectual circles: the idea of the controlled experiment.

Prior to this time period, to "experiment" really just meant to try something. In fact, in some Spanish-speaking countries, the Spanish word for experiment (experimentar) is still used to mean something like "to see what something feels like." But the word experiment was shifting in meaning. It was, in some specialized circles, starting to mean an artificial manipulation of nature: a careful, controlled examination of nature. This would eventually lead to a complete paradigm shift. Lorraine Daston writes:

“Most important of these [innovations] was ‘experiment,’ whose meaning shifted from the broad and heterogeneous sense of experimentum as recipe, trial, or just common experience to a concertedly artificial manipulation, often using special instruments and designed to probe hidden causes” (Daston 2011: 82).

Who were these early experimentalists? Like all great movements, the experimentalist movement went through three stages: ridicule, discussion, adoption. Although this may pain those of us who are science enthusiasts, the early experimentalists were looked upon as... well... weird. They typically spent most of their free time engaging in experiments and would let other social and professional responsibilities lapse. They would spend an inordinate amount of their money on their examination of nature, and they would mostly socialize only with other experimentalists. Daston again:

“[O]bservation remained a way of life, not just a technique. Indeed, so demanding did this way of life become that it threatened to disrupt the observer’s other commitments to family, profession, or religion and to substitute epistolary contacts with other observers for local sociability with relatives and peers... French naturalist Louis Duhamel du Monceau depleted not only his own fortune but that of his nephews on scientific investigations. By the late seventeenth century, the dedicated scientific observer who lavished time and money on eccentric pursuits was a sufficiently distinctive persona in sophisticated cultural capitals like London or Paris to be ridiculed by satirists and lambasted by moralists” (Daston 2011: 82-3).

 

 

Stage 2: Discussion

Enter Francis Bacon (1561-1626). By 1600, Bacon had already served as a member of Parliament and had been elected a Reader, a lecturer on legal topics. But the aspect of Bacon's work that was truly novel was his approach to natural science. Bacon struggled against the traditional Aristotelian worldview as well as the mixture of natural science and the supernatural. He re-discovered the pre-Socratics and was fond of Democritus and his atomism: the view that the world is composed of indivisible fundamental components. He also had a clear preference for induction, contrary to many of his contemporaries.

In 1605, Bacon publishes The Advancement of Learning in which he rejects many Aristotelian ideas. In book II of Advancement, he argues that we must cleanse ourselves of our “idols” to engage in empirical inquiries well. These idols include intuitions rooted in our human nature (idols of the tribe), overly cherished beliefs of which we can’t be critical (idols of the cave), things we hear from others but never verify (idols of the market place), and the ideological inheritance of accepted philosophical systems (idols of the theater).

Instead, Bacon argues that the only way to know something is to be able to make it, to control it. In other words, making is knowing and knowing is making. The way this idea is most often encapsulated is in the following phrase: Knowledge is power. As it turns out, this prospect of controlling nature filled the early experimentalists with awe. Bacon even referred to this applied science as "magic".

 

Regress? What regress?

In the last lesson, we were introduced to an argument from the skeptic camp, an argument that concluded that knowledge, as conceived of by Plato, is impossible. I joked that this is more like a game to contemporary epistemic philosophers, but I wasn't completely kidding. At least some epistemic philosophers do see it that way (e.g., Williams 2004). If it is a game, then, we should be able to solve it. Bacon provides us with one possible approach: change the definition of knowledge.

Take a look once more at the regress argument:

  1. In order to be justified in believing something, you must have good reasons for believing it.
  2. Good reasons are themselves justified beliefs.
  3. So in order to justifiably believe something, you must believe it on the basis of an infinite number of good reasons.
  4. No human can have an infinite number of good reasons.
  5. Therefore, it is humanly impossible to have justified beliefs.
  6. But knowledge just is justified, true belief (the JTB theory of knowledge).
  7. Therefore, knowledge is impossible.
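For students who like to see the structure laid bare, the engine of the regress (premises 1 and 2) can be schematized as follows; the notation is mine, not the skeptic's:

```latex
% Premises 1 and 2: a belief b is justified only if it rests on some good
% reason r, and that reason must itself be a justified belief.
\[
  J(b) \;\Longrightarrow\; \exists r_1 \,\bigl[\, r_1 \text{ is a good reason for } b \;\wedge\; J(r_1) \,\bigr]
\]
% Applying the same requirement to r_1, then to r_2, and so on, generates an
% unending chain of required reasons (premise 3):
\[
  b \;\longleftarrow\; r_1 \;\longleftarrow\; r_2 \;\longleftarrow\; r_3 \;\longleftarrow\; \cdots
\]
% No human can supply infinitely many reasons (4), so no belief is justified (5);
% and if knowledge is justified, true belief (6), then knowledge is impossible (7).
```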

Notice that the word "justification" is featured prominently throughout the argument. This is important because the argument is assuming Plato's JTB theory: knowledge is justified, true belief. As such, one way to defuse this argument is to not assume Plato's JTB theory. This is precisely what Bacon is doing, and this is no accident.

Bacon rejects the Aristotelian tradition of what he calls the "anticipation of nature" (anticipatio naturae), and argues that we should instead interpret nature through the rigorous collection of facts derived from experimentation. Knowledge isn't "justified, true belief." Bacon couldn't care less whether or not his knowledge claims were justified in the eyes of Platonists. The only question for Bacon is, "Can you control nature?" If you can, that's enough to say that you know how nature works. This, for Bacon, should be the true goal of science: to interrogate and ultimately control our natural environments and to actively work towards bringing about a utopian transformation of society. Mathematician and historian of mathematics Morris Kline puts it this way:

“Bacon criticizes the Greeks. He says that the interrogation of nature should be pursued not to delight scholars but to serve man. It is to relieve suffering, to better the mode of life, and to increase happiness. Let us put nature to use... ‘The true and lawful goal of science is to endow human life with new powers and inventions’ ” (Kline 1967: 280).

Many students find Bacon's views intuitively appealing, and perhaps rightly so. We will put off critiquing those views until the next section. For now, let me stress how this relates to our overall project.

 

Dilemma #1: How do we solve the regress?

So far, we've been introduced to the branch of philosophy known as epistemology, which concerns itself with questions relating to knowledge. We've seen one possible definition of knowledge (the JTB theory) and one objection to that theory (given via the regress argument) from one camp of thinkers who believe that knowledge is impossible (the skeptics). Let's flag this spot in the conversation and call it Dilemma #1. The dilemma, in a nutshell, is this: how do we stop the skeptic's regress argument?

Bacon provides us with one solution: change the definition of knowledge. In the next section, we will introduce some problems with this solution. In the lessons to come, we will look at alternate solutions coming from different theorists. Only then will we be able to judge which is the best solution.

I might add that this will be the general trajectory of the course. There will be 10 dilemmas in all, and each will be associated with a number of different solutions. These dilemmas are all interrelated, as you'll see. They are also connected to fundamental ideas in Western thought. Fun times ahead.

 

Francis Bacon

 

Bacon's legacy

Bacon's public career ended in disgrace. He had accepted gifts from litigants whose cases he heard; although this was an accepted practice at the time, his political rivals used it to charge him with corruption. He was removed from public office, and he devoted the rest of his days to study and writing. He died in 1626.

His legacy lives on, however. Per Daston, “Baconians [those who subscribed to Bacon's ideas] played a key role in the rise of the terminology of observation and experiment in mid-seventeenth-century scientific circles” (Daston 2011: 83; interpolation is mine). Moreover, since the practice of science doesn’t come to fruition from a common methodology alone, Bacon also suggested a way of establishing common ground and sharing insights. In New Atlantis (published posthumously in 1626), an incomplete utopian novel, Bacon described the House of Salomon, a concept that influenced the formation of scientific societies, societies which still live on today. Daston again: “[T]he scientific societies of the late seventeenth and early eighteenth centuries shifted the emphasis from observation as individual self-improvement, a prominent theme in earlier humanist travel guides, to observation as a collective, coordinated effort in the service of public utility” (Daston 2011: 90).

 

 

 

Decoding Pragmatism

Some comments

At this point we've introduced a second contender into the fray: pragmatism. We've also complicated our epistemic picture a little bit, so I'd like to make sure everyone's on board with this:

  • We have two different definitions of what knowledge is: Plato's JTB theory and pragmatism.

  • We have two different theories for justifying knowledge claims: the correspondence theory of truth and the coherence theory of truth.

  • We have two different conceptions of the aims of science: realism and instrumentalism.

  • Lastly, these different views tend to cluster together: JTB fits in nicely with the correspondence theory of truth and realism, while pragmatism tends to fit in nicely with coherentism and instrumentalism.

 

Putting on our historical lenses

It may be the case that you've already picked which view you agree with. That's fine. But now I want you to think about which view would've seemed more sensible at the turn of the 17th century. If you were there at Bacon's funeral, would you have bet on his ideas catching on and taking off? Would you have wanted them to?

It's hard for us to put ourselves in the right frame of mind to understand and to feel what someone from the 1600's might've thought and felt. We are, in a very real way, jaded. But we have to realize this: the great success of the natural sciences, whose fruits we enjoy today, had not yet been empirically validated in the 17th century. In other words, if you were there, looking at these "experimentalists" and "magicians", you could reasonably have argued that society should not take the plunge. You would have found yourself at a crossroads. Would you choose the worldview that had dominated Western thought for over a thousand years (i.e., the devil you know), or would you take a chance on this new natural science stuff? I'm sure we'd like to think that we would've been early adopters of the latest ideas, but consider for a moment how uncertain this new worldview must've seemed.

 

All in pieces...

To be honest, to only focus on the nascent scientific method is to grossly underestimate the feeling of inconstancy that was gripping society in the 1600s. It was a time of intellectual and social upheaval, sometimes violent; and it had been for a few centuries.

My guess is that most of us would have felt the same feeling of precariousness. It all felt on the verge of collapse. I close with Morris Kline's words:

“It was to be expected that the insular world of medieval Europe accustomed for centuries to one rigid, dogmatic system of thought would be shocked and aroused by the series of events we have just described. The European world was in revolt. As John Donne put it, ‘All in pieces. All coherence gone’ ” (Kline 1967: 202).

 

Caspar David Friedrich's 'Monastery Ruins in the Snow'

 

Executive Summary

  • Around the year 1600, the Aristotelian worldview, which had dominated Western thought since roughly 300 BCE, finally gave way to a new, more mechanistic worldview.

  • During this time period, the concept of an experiment began to be developed, and experimentalists enthusiastically took to the analysis of nature.

  • The ideas of Francis Bacon were instrumental in standardizing and codifying the concepts of experiment, scientific societies, and eventually the scientific method itself.

  • Pragmatism is a competing conception of knowledge.

  • There are two constellations of views that fit well together:

    • JTB theory + the correspondence theory of truth + realism about science
    • pragmatism + coherence theory of truth + instrumentalism about science

FYI

Suggested Reading: Lorraine Daston, The Empire of Observation, 1600-1800

  • Note: The suggested reading is only the first 11 pages of the document. Here is a redacted copy of the reading.

TL;DR: 60Second Philosophy, Who is Francis Bacon?

Supplementary Material—

Related Material—

  • Video: Smarthistory, Friedrich, Abbey among Oak Trees
    • Note: This is an analysis of the work of Caspar David Friedrich, one of my favorite artists. I use his paintings throughout this course. One painting in particular, his Monastery Ruins in the Snow, is the image I use to represent the feeling of being trapped in the pit of skepticism.

Advanced Material—

  • Reading: Francis Bacon, Novum Organum

    • See in particular Book II.

  • Reading: Jürgen Klein, Stanford Encyclopedia of Philosophy Entry on Francis Bacon

 

Footnotes

1. DeWitt and I are both heavily influenced by the work of physicist/philosopher Thomas Kuhn (2012), as are many others. This is not to say that either DeWitt or I agree completely with Kuhn, though. Nonetheless, this whole way of speaking is reminiscent of Kuhn's The Structure of Scientific Revolutions, originally published in 1962. The interested student can find a copy of this work or watch this helpful video.

 

 

Cogito

 

If you would be a real seeker after truth, it is necessary that at least once in your life you doubt, as far as possible, all things.

~René Descartes

 

Aristotle's dictum

Up to this point, we've been using the analogy between worldviews and jigsaws or between worldviews and webs of beliefs (which was philosopher W.V.O. Quine's preferred metaphor). Analogies, unfortunately, have their limitations. As it turns out, there are aspects of the downfall of worldviews that are decidedly not like a jigsaw puzzle (or a spider's web, for that matter). For example, once a worldview is torn apart, it is not uncommon for thinkers to parse through the detritus, find a pearl, and incorporate it into their new worldview. This would be as if someone, after breaking apart the completed jigsaw puzzle that decorated their living room coffee table, were to then take a prized puzzle piece from that set and use it in a different jigsaw. Surely that doesn't make much sense. Nonetheless, this is how the dismantling of worldviews goes: not all beliefs are tossed. There are always some pearls of wisdom that can be updated, or otherwise adapted to the new normal.

First, some context. If I painted too rosy a picture of the end of the Tudor period in the last lesson, you will forgive me—I hope. My perception might have been altered by my knowledge of what happened next. As if we needed a reminder that, in the past, societal advancements were often accompanied by backtracking, both nature and established institutions pushed back against intellectual progress and stability—with a vengeance. In 1600, for example, philosopher and (ex-)Catholic priest Giordano Bruno was burned at the stake for his heretical beliefs in an infinite universe and the existence of other solar systems. Even more tragic, if only because of the sheer magnitude of the numbers involved, was the Thirty Years' War (1618-1648), which claimed the lives of 1 out of 5 members of the German population. This began as a war between Protestant and Catholic states. However, fueled by the entry of the great powers, the conflict wrought devastation to huge swathes of territory, caused the death of about 8 million people, and basically bankrupted most of the belligerent powers.

Yersinia pestis, the bacterium that causes plague, a disease which takes three forms: pneumonic, septicemic, and bubonic.

As if human-made suffering isn't unhappy enough, Mother Nature often only compounds the agony. In 1601, due to a volcanic winter caused by an eruption in Peru, Russia went through a famine that lasted two years and killed about one third of the population. There were also intermittent resurgences of bubonic plague in Europe, northern Africa, and Asia throughout the 17th century. I, of course, don't have to tell you that a tiny virus 125 nanometers in size can cause major cultural, political, and economic disruptions.

But thinkers of the time did not yet know about germ theory, let alone climate science. The former would have to wait until the work of Louis Pasteur in the 1860s. Intellectuals instead focused on a domain that they felt they might be able to affect: our religious convictions. Thinkers like René Descartes (the subject of this lesson, whose name is pronounced "day-cart") and John Locke (the subject of the next lesson) noticed that religious fanatics could reason just as well as logicians. The problem, they thought, was that the fanatics began with religious assumptions that they took to be universal truths. From these dubious starting points, the fanatics were able to justify the killing of heretics, the torture of witches, and their religious wars. In other words, these thinkers, like many others of their age, wanted to ensure that people reasoned well, that people would filter their beliefs and discard those that didn't stand up to scrutiny. A worthy goal, indeed.

How should we reason? Descartes believed that our premises must be more certain than our conclusions. In other words, we must move from propositions that we have more confidence in to those that we have less confidence in (at least initially, before argumentation). In fact, Aristotle had made this very point in his Posterior Analytics almost two thousand years before (I.2.72a30-34). As such, we will call this principle Aristotle's dictum. Remember: never throw out the baby with the bathwater.

 

Soldiers plundering a farm during the thirty years' war; painting by Sebastian Vrancx

 

Reasoning Cartesian style

Descartes gives the metaphor of a house. We must build the house on a strong foundation; otherwise, it collapses. Similarly, we must move from certain (or near-certain) premises on towards our conclusion. Aristotle, by the way, speaks in similar terms. He talks about how our premises are the “causes” of our conclusions, but he’s talking about sustaining causes, the supports that keep our conclusions from crashing down.

Cartesian coordinates
Cartesian coordinates.

Descartes uses this epistemic principle to argue against circular reasoning. For example, recall the example we gave for the informal fallacy of begging the question. Say someone believes that (A) God exists because (B) it says so in the Bible, a book which speaks only truth; and they also believe that (B) the Bible is true because (A) it comes from God (who exists). Clearly this person can’t hold each of A and B to be more convincing than the other. This is clearly circular reasoning (for a discussion, see Shenefelt and White 298: n11). The trouble with religious fanatics, Descartes reasoned, is that their premises are not known to begin with. And from these dubious starting points they build an even more dubious superstructure. Realizing this, Descartes decided to use the method of doubt (or hyperbolic doubt), disabusing himself of anything he didn’t know with absolute certainty. Stay tuned.

 


 


Newton claimed that he stood on the shoulders of giants, but most people don’t really know which giants he spoke of. I’ll go ahead and tell you, at least with regards to Newton’s first law of motion. Galileo (1564-1642) came close to getting it right, but it was Descartes (1596-1650) who was the first to clearly state the principle. Finally, Newton (1642-1727), borrowing heavily from Descartes’ formulation, incorporated it into his own greater mechanical view of the universe (see DeWitt 2018: 100).

This goes to show that Descartes was much more than a philosopher; he was also a gifted mathematician and scientist. One of his most famous insights was the bridging of algebra and geometry (as in Cartesian coordinates, pictured here). I might add, however, that Descartes doesn’t get sole credit for coordinate geometry. French lawyer and mathematician Pierre de Fermat arrived at the same broad ideas independently and concurrently.

 


 

 

1640

 

Important Concepts

 

Stragglers

As we saw in the last section, not all of the ideas from the Aristotelian worldview were discarded. For one, Aristotle's dictum was preserved and used by Descartes in an attempt to temper the religious fanaticism of the time. But there were other ideas that Descartes felt should be preserved. One of those was the hope for deductive certainty about the world.

Recall that by this point in history there was an alternative conception of knowledge: Bacon's pragmatist approach. Although we've already looked at problems with this view, I'd like to press on them a little further now. Pragmatism, as we've seen, melds well with a coherence theory of justification. But coherentism leaves us with many unanswered questions. For example, is a statement true because it coheres with an individual’s web of beliefs? (This, by the way, is sometimes referred to as alethic relativism.) It seems that most individuals aren't reliable enough to be counted on to have a well-supported, coherent set of beliefs. Should we instead say that a statement is true only if it coheres with a whole group’s web of beliefs? If that's the case, how do we decide who’s a member of the group? Do they have to be experts? How do we decide who’s an expert?

Perhaps the most worrisome implication of pragmatism is the possibility that everything we currently know to be true could be overturned by empirical developments. In other words, it could very well be the case, according to the logic of pragmatism, that the worldview that we hold in the 21st century could be completely overthrown in the decades and centuries to come. Moreover, that worldview might itself also eventually be overthrown. It is possible that everything we think we know is actually false. This is called pessimistic meta-induction, and it's very worrisome for some people.

Descartes, it seems, was definitely not comfortable with this notion. The alternative appeared to be JTB theory + the correspondence theory of truth + realism about science. Now, it should be said that Descartes didn't use any of these labels. Instead, we are labeling Descartes' views after the fact, but it's worth it because it will help us understand a very important point. Stay tuned.

This alternative package of views (JTB theory + the correspondence theory of truth + realism about science) isn't without its problems. Both the correspondence theory of truth and realism about science face objections of their own. But let's just focus on the JTB theory. After all, we've already looked at a problem for this view: the regress argument. Until we solve the regress, we really can't move forward with this constellation of views. We know this, and Descartes knew it as well.

And so Descartes sought to establish solid foundations for science and common sense. He thought to himself that there must be some things we know with absolute certainty, things that are self-evident, things that cannot be denied. Since these beliefs cannot be doubted or questioned, they are regress-stopping beliefs. In other words, they are foundational beliefs, and upon these foundational beliefs we can build the rest of our beliefs. That's how Descartes planned on stopping the regress: by starting with beliefs that literally could not be questioned.

How do you discover foundational beliefs? For this goal, Descartes employs his Method of Doubt, also referred to as hyperbolic doubt. In a nutshell, Descartes would discard any belief that he did not know with absolute certainty. He would take as long as he needed to because, in the end, whatever was left over would be known with 100% confidence. In other words, at the end of that process, he should have his foundational beliefs. Upon those beliefs, he would build his explanation of the rest of the world.

 

 

Descartes' goals

Descartes' story is complicated and interesting, and we cannot do justice to his life here. One tantalizing detail that just can't be left out, though, is that he served as a mercenary for the army of the Protestant Prince Maurice against the Catholic parts of the Netherlands during the early stages of the Thirty Years' War. It was there that he studied military engineering formally. Although we do not know his actual course of study, he very likely studied the trajectories of cannonballs and the use of strong fortifications to aid in defense and attack. Can you see any analogies to his philosophical project?

In any case, the interested student can consult a biography of Descartes for more details. Here, I'd like to focus instead on what was happening to Descartes' contemporaries and how that might've influenced Descartes' thinking. Galileo Galilei (1564-1642) was an advocate of the heliocentric model. He was also thirty-two years Descartes' senior and a very public figure. Even before his astronomical observations, Galileo had invented a new type of compass to be used in the military. This compass, developed in the mid- to late-1590s, helped gunners calculate the position of their cannon and the quantity of gunpowder to be used. When the telescope was invented in 1608, Galileo set to observing the skies and made many important observations that would contribute to the downfall of the Aristotelian worldview. To add to his star power, Galileo became the chief mathematician and philosopher for the Medicean court.1 The Medicis were apparently flattered that Galileo named a group of four "stars" (in fact, the moons of Jupiter) the Medicean stars. All this to say that if something happened to Galileo, intellectuals would notice.

Indeed, something did happen to Galileo, although he certainly played a hand in it. Galileo, a fierce advocate of heliocentrism, published his Dialogue Concerning the Two Chief World Systems in 1632. In it, he insulted the powers that be (the Roman Catholic Church) by putting the position of the Church (geocentrism) in the mouth of a character named Simplicio, "the simpleton". This, of course, did not sit well with the pope. It should be added that at this point the Church was reeling from its conflict with the Protestants and was therefore a little touchy about challenges to its authority. And so, in 1633, the Inquisition condemns Galileo to house arrest, although it could've been worse. Already in his 60s, Galileo died a few years later.2

Before leaving Galileo, it should be said that, although he was immensely important in the history of science for many reasons, one stands out. He was one of the first modern thinkers to clearly state that the laws of nature are mathematical. Morris Kline writes:

“Whereas the Aristotelians had talked in terms of qualities such as earthiness, fluidity, rigidity, essences, natural places, natural and violent motion, potentiality, actuality, and purpose, Galileo not only introduced an entirely new set of concepts but chose concepts which were measurable so that their measures could be related by formulas. Some of his concepts, such as distance, time, speed, acceleration, force, mass, and weight, are, of course, familiar to us and so the choice does not surprise us. But to Galileo’s contemporaries these choices and in particular their adoption as fundamental concepts, were startling” (Kline 1967: 289).

 

Galileo facing the Roman Inquisition, by Cristiano Banti

 

How is this related to Descartes? We must recall that in this time period religion gave meaning to every sphere of life. As it turns out, we don't have to travel far to see religious inspiration. The very same scientists who played a role in the downfall of Aristotelianism were motivated by religious fervor. Copernicus was motivated by neo-Platonism, a sort of Christian version of Plato's philosophy (DeWitt 2018: 121-123), and Kepler developed his system, which was actually right, through his attempts to "read the mind of God" (ibid., 131-136). Even Galileo was a devout Catholic; he just had a different interpretation of scripture than did Catholic authorities.3

Yuval Noah Harari reminds us of what this meant for knowledge before the dawn of humanism.

“In medieval Europe, the chief formula for knowledge was: knowledge = scriptures × logic. If we want to know the answer to some important question, we should read scriptures and use our logic to understand the exact meaning of the text... In practice, that meant that scholars sought knowledge by spending years in schools and libraries reading more and more texts and sharpening their logic so they could understand the texts correctly” (Harari 2017: 237-8).

Religion gave meaning to art, music, science, war, and, if you recall, even death. The following contains graphic descriptions of executions which were practiced in Descartes' native France. Those who are sensitive can and should skip this video.

And so Descartes had to walk a fine line. He obviously wanted to continue to engage in scientific research, but he didn't want to end up like Galileo. He wanted to find a way to reconcile science and the Church. He needed to establish a foundation for science without alienating the faithful. But how? After a long gestation period, Descartes finally publishes his Meditations in 1641.

“Because he had a critical mind and because he lived at a time when the world outlook which had dominated Europe for a thousand years was being vigorously challenged, Descartes could not be satisfied with the tenets so forcibly and dogmatically pronounced by his teachers and other leaders. He felt all the more justified in his doubts when he realized that he was in one of the most celebrated schools of Europe and that he was not an inferior student. At the end of his course of study he concluded that all his education had advanced him only to the point of discovering man’s ignorance” (Kline 1967: 251).

 

 

 

Decoding foundationalism

 

Into the fray...

At this point, we have two different conceptions of knowledge competing with each other. Moreover, each is a constellation of views. They're like little worldviews: not as all-encompassing as the Aristotelian worldview, but coherent systems of beliefs nonetheless. They are:

  • A Cartesian view: JTB theory + correspondence theory of truth + foundationalism + realism about science.
  • A Baconian view: pragmatism + coherence theory of truth + instrumentalism about science.

Notice that I didn't call them "Descartes' view" and "Bacon's view". Just as with the Aristotelian worldview, this isn't exactly what they believed. These labels are simply a means for us to talk about these concepts, and they may or may not have matched perfectly how Descartes and Bacon actually thought. These different approaches to knowledge are certainly, however, in the spirit of each of these thinkers.

For rhetorical elegance, I won't keep calling them the Cartesian view and the Baconian view. I'll use shorter labels. For the Cartesian view, I will use the label foundationalism. This is a little clumsy in that it is both the name of a theory about how to stop the regress and the name for the overall constellation of views. I think that's ok, though. The reason is that foundationalism (as a means to stop the regress) is what holds the constellation of views together. It's sort of the "band-aid" that fixes the regress problem and keeps the JTB theory, the correspondence theory of truth, and realism about science alive. For the Baconian view, things are a little simpler: it already has a label. We will refer to it as positivism.4

 

Dispute of Queen Cristina Vasa and Rene Descartes, by Nils Forsberg

 

 

Executive Summary

  • When a worldview collapses, it is the central beliefs that tend to die off. However, not all beliefs associated with the worldview die off. For example, Aristotle's dictum, the view that you should move from premises that you are more certain of to conclusions that you are less certain of (at least initially), was still advocated as the Aristotelian worldview was collapsing and is still considered a good maxim for reasoning even today.

    • Note: It might be added that a whole field that Aristotle started, the study of validity (a field known as logic), was also not discarded along with the Aristotelian view. This field made amazing leaps forward in the 19th century and is an integral part of computer architecture and computer science today. The interested student can take my symbolic logic course for more.
  • Descartes decided to attempt to rescue deductive certainty in science as the Aristotelian worldview was collapsing. In other words, he wanted to find a way to establish the foundations of science such that the truths of science are known with certainty.

  • Descartes was also looking for a way to reconcile science and faith, as well as a method for pacifying religious fanaticism.

  • Descartes' foundationalist project also had other motivations. Pragmatism carries with it the possibility that everything we think we know is actually false; this is called pessimistic meta-induction. Avoiding this would've been appealing to thinkers in the early modern period.

  • Using the method of doubt, Descartes arrived at his four foundational truths:

    1. He, at the moment he is thinking, must exist.
    2. Each phenomenon must have a cause.
    3. An effect cannot be greater than the cause.
    4. The mind has within it the ideas of perfection, space, time, and motion.
  • We now have two competing constellations of views: foundationalism and positivism.

FYI

Suggested Reading: René Descartes, Meditations on First Philosophy

  • Notes:

    • Read only Meditations I & II.

    • Here is an annotated reading of Descartes’ Meditations, courtesy of Dan Gaskill at California State University, Sacramento.

TL;DR: Crash Course, Cartesian Skepticism

Related Material—

  • Video: Nick Bostrom, The Simulation Argument

  • Netflix: The interested student should also watch "Hang the DJ" (Episode 4, Season 4) of the Netflix series Black Mirror.

Advanced Material—

 

Footnotes

1. At this point in time, the term philosopher meant what we would today call a scientist (see DeWitt 2018: 143). We talked about this in Lesson 1.1 Introduction to Course.

2. It is my understanding that Galileo got some revenge, sort of.

3. The interested student can read Galileo's Letter to Castelli, where he most clearly articulates his views on the matter.

4. I might add that positivism should be understood as an umbrella term, since there are various "flavors" of positivism.

 

 

Rationalism v. Empiricism

 

El original es infiel a la traducción.
(The original is unfaithful to the translation.)

~Jorge Luis Borges

 

Jigsaws, revisited

What do you think about when you hear that some physicists now believe that it's not only our universe that exists, but that there are innumerable—maybe infinitely many—universes in existence? It certainly is a strange idea, and I can imagine a variety of different responses. Some might be in awe. Others might be in disbelief. Some might think it is completely irrelevant to their life, especially if you are a member of a historically/currently disenfranchised group. I'm reminded of the Russian communist Vladimir Lenin's response to the possibility that there is a 4th dimension of space. Apparently Lenin shot back that the Tsar can only be overthrown in three dimensions, so the fourth doesn't matter.

My guess, however, is that these possibilities were not open to people of the 17th century. The Aristotelian worldview that had just been dethroned was a worldview that had purpose built into it. Let me explain. Recall that in the Aristotelian worldview, each element had an essential nature. Moreover, each element's behavior was tied to this essential nature. This is called teleology: the explanation of phenomena in terms of the purpose they serve. These explanations are, if you truly believe them, extremely satisfying. And so, there is something psychologically alluring about Aristotle's teleological science. Losing this purpose-oriented mode of explanation must've been accompanied by substantial distress, intellectual and otherwise.

The distress of uncertainty lasted almost a century. Thinkers were arguing about what would replace the Aristotelian vision. And then, in 1687, Isaac Newton publishes his Philosophiæ Naturalis Principia Mathematica, and a new worldview is established. Natural philosophy (i.e., what would eventually be called "science") went from teleological (goal-oriented or function-oriented) explanations to mechanistic explanations, which are goal-less, function-less, and purpose-less. This was, psychologically, probably even harder on the people of the time: not only did humans realize that they were not living at the center of the universe; the function-oriented type of thinking they had been using for a thousand years also had to be supplanted by cold, lifeless numbers.

And yet, as the saying goes, if you want to make an omelette, you gotta break some eggs. There were some unexpected benefits to the downfall of the Aristotelian worldview outside of the realm of science, philosophy, and mathematics. There were important social revolutions during this time period, and some thinkers link them to the end of teleological views:

“Likewise, the general conception of an individual’s role in society changed. The Aristotelian worldview included what might be considered a hierarchical outlook. That is, much as objects had natural places in the universe, so likewise people had natural places in the overall order of things. As an example, consider the divine right of kings. The idea was that the individual who was king was destined for this position—that was his proper place in the overall order of things. It is interesting to note that one of the last monarchs to maintain the doctrine of the divine right of kings was the English monarch Charles I. He argued for this doctrine—unconvincingly, it might be noted—right up to his overthrow, trial, and execution in the 1640’s. It is probably not a coincidence that the major recent political revolutions in the western world—the English revolution in the 1640’s, followed by the American and French revolutions—with their emphasis on individual rights, came only after the rejection of the Aristotelian worldview” (DeWitt 2018: 168).

 

 

Problems with Descartes' view

Let's return now to our story. Recall that Descartes' grand scheme was to discover foundational truths and build the rest of his belief structure upon them. However, one might lose confidence in the enterprise when one views, with justifiable dismay, the unimpressive nature of the foundational truths. Here they are reproduced:

  1. He, at the moment he is thinking, must exist.
  2. Each phenomenon must have a cause.
  3. An effect cannot be greater than the cause.
  4. The mind has within it the ideas of perfection, space, time, and motion.

These are the supposedly foundational beliefs. However, we can, it seems, question most if not all of these. Take, for example, the notion that an effect cannot be greater than its cause. Accepting that there is a wide range of ways in which we can define effect and cause, we can think of many "phenomena" that are decidedly larger than their "cause". Things that readily come to mind are:

  • the splitting of an atomic nucleus and the resulting chain reaction that takes place at the detonation of an atom bomb,
  • viruses, which are measured in nanometers, and the pandemics that they have caused both in antiquity and in the modern age, and
  • the potential of a lone gunman (Lee Harvey Oswald), who would otherwise be historically inconsequential, to bring about the assassination of a very historically important world leader (John F. Kennedy), with the ensuing political chaos and national mourning.

Reflecting on this last point, if this is a credible instance of an effect being greater than its cause, it almost seems as if Descartes was vulnerable to one of the cognitive biases that many conspiracy theorists prove themselves susceptible to: the representativeness heuristic. The representativeness heuristic is the tendency to judge the probability that an object or event A belongs to a class B by looking at the degree to which A resembles B. People find it improbable that (A) a person that they would've otherwise never heard about (Oswald) was able to assassinate (B) one of the most powerful people on the planet (JFK) precisely because Oswald and JFK are so dissimilar. Oswald is a historical nobody; JFK is a massive historical actor. And it is precisely this massive dissimilarity between Oswald and Kennedy that makes it so difficult for our minds to imagine their life trajectories ever crossing. And so you might judge the event to be improbable and, if you're prone to conspiratorial thinking, begin searching for alternative theories involving organized crime, the Russians, Fidel Castro, etc.1

 

Fidel Castro

 

If it walks like a duck...

One very good criticism that one can make of Descartes' foundationalist project is that, if it is supposed to be an attempt to escape skepticism, then it hasn't gotten us very far. If one considers just the four foundational beliefs, it seems every bit as bad as not knowing anything at all. It might even be worse, since it gives rise to fears hitherto unconsidered. How do we know, for example, that other people have minds? How do we know we're not the only person in existence? Could it be that everyone else is a figment of our imagination, or mindless robots pretending to be human? Perhaps most unsettling of all is the possibility that you are in an ancestor simulation where you are the only being that has been awarded consciousness; everyone else is a specter.2

It is precisely on this point, however, that Descartes picks up his project. As it turns out, Descartes argued that he could get out of skepticism, both of the Pyrrhonian variety as well as his self-imposed skepticism (also called methodological skepticism). In a nutshell, he argued that he could prove God exists and that such a perfect being wouldn’t let him be deceived about those things of which he had a “clear and distinct perception” (see Descartes' third meditation). His argument for God's existence, in an unacceptably small nutshell, was that he has an innate idea of a perfect God. An innate idea, by the way, is an idea that you are born with. Descartes, if you recall, identifies having the ideas of perfection, space, time, and motion as a foundational belief. He felt that he knew these with certainty, and that these ideas had been present in the mind since birth. Continuing with his argument, the idea of perfection, reasons Descartes, could only have been provided (or implanted?) by a perfect being, which is God. So, God exists!

There are two things that I should say here. First, I am temporarily holding off on a substantive analysis of arguments for and against God's existence; these will occur in the next portion of the course. Stay tuned.3

Second, Descartes is essentially claiming that God rescues him (and the rest of us) from skepticism. God wouldn't allow an evil demon to deceive us. God wouldn't allow you to think your perceptions of reality are hopelessly false. Think about this for a second. Descartes was able to link evil demons with skepticism and God with knowledge. This, then, accomplishes one of Descartes' goals: to reconcile science and faith.

Having said that, Descartes' argument for God's existence has been criticized since the early modern period. One interesting exchange on this topic occurred between Descartes and Elisabeth, princess of Bohemia. The two exchanged letters for years, and on numerous occasions Elisabeth questioned whether we really know that God is doing all the things Descartes said God was doing, such as helping us to establish unshakeable knowledge of the world. Although I've not read all the letters, it is my understanding that Descartes never argued for his claims to Princess Elisabeth's satisfaction.

Princess Elisabeth is a nice respite from this otherwise male-dominated world. Nonetheless, due to the "greatest hits" nature of introductory philosophy courses, our next stop is another white male. However, he is one that you may have heard of before: John Locke. We turn to his take on knowledge next.

 

Princess Elisabeth of Bohemia

 

 

Two camps

The rift between Locke and Descartes is best understood in terms of a helpful if controversial division of thinkers from the early modern era into two camps: rationalism and empiricism. Rationalism is the view that all claims to knowledge ultimately rely on the exercise of reason. Rationalism, moreover, purports to give an absolute description of the world, uncontaminated by the experience of any observer; it is an attempt to give a God’s-eye view of reality. In short, it is the attempt to reach deductive certainty with regards to our knowledge of the world.

The best analogy for understanding rationalism is deductive certainty in geometry. In his classic mathematics text, Elements of Geometry, the Greek mathematician Euclid began with ten self-evident claims and then rigorously proved 465 theorems. These theorems, moreover, were believed to be applicable to the world itself. For example, if one is trying to divide land among three brothers, one could use some geometry to make sure that everyone gets exactly their share (see the sketch after the quotation below). This general strategy of starting with things that you are certain of and building from there should sound familiar: it is what Descartes did. If you're guessing that Descartes is a rationalist, you are correct.

“In Euclidean geometry… the Greeks showed how reasoning which is based on just ten facts, the axioms, could produce thousands of new conclusions, mostly unforeseen, and each as indubitably true of the physical world as the original axioms. New, unquestionable, thoroughly reliable, and usable knowledge was obtained, knowledge which obviated the need for experience or which could not be obtained in any other way. The Greeks, therefore, demonstrated the power of a faculty which had not been put to use in other civilizations, much as if they had suddenly shown the world the existence of a sixth sense which no one had previously recognized. Clearly, then, the way to build sound systems of thought in any field was to start with truths, apply deductive reasoning carefully and exclusively to these basic truths, and thus obtain an unquestionable body of conclusions and new knowledge” (Kline 1967: 149).
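To make the land-division example above concrete, here is a minimal sketch (my own illustration, with made-up coordinates, not anything from Euclid or Kline). It leans on a genuinely Euclidean result (triangles with equal bases and the same height have equal areas) to cut a hypothetical triangular field into three equal shares:

```python
# A hypothetical triangular field is split into three equal shares by dividing
# one side into three equal segments and joining each segment to the opposite
# vertex (equal bases + same height => equal areas).

def shoelace_area(points):
    """Area of a polygon from its vertices, taken in order (shoelace formula)."""
    total = 0.0
    for i, (x1, y1) in enumerate(points):
        x2, y2 = points[(i + 1) % len(points)]
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

# Made-up corners of the field (units are arbitrary).
A, B, C = (0.0, 0.0), (9.0, 0.0), (2.0, 6.0)

# Points that cut side AB into three equal segments.
P1, P2 = (3.0, 0.0), (6.0, 0.0)

shares = [
    shoelace_area([A, P1, C]),
    shoelace_area([P1, P2, C]),
    shoelace_area([P2, B, C]),
]
print(shares)                    # [9.0, 9.0, 9.0]
print(shoelace_area([A, B, C]))  # 27.0 -- each brother gets exactly a third
```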

Empiricism, on the other hand, argues that knowledge comes through sensory experience alone. There is, therefore, no possibility of separating knowledge from the subjective condition of the knower. By default, empiricism is associated with induction, since empiricists reject the possibility of deductive certainty about the world. John Locke, the thinker featured in this lesson, was an empiricist.

Each of these camps embodies an approach to solving the issues in epistemology of that time period, from 1600 to about 1800. However, we should add two details to give a more accurate (if more complicated) view of these opposing camps. First, these thinkers never referred to themselves as rationalists or empiricists. These are labels placed on them after the fact by another philosopher, Immanuel Kant. Moreover, these labels have a different meaning in this context. Today, the word rationalism is typically applied to anyone who is committed to reason and evidence in their belief-forming practices. This is not how we are using the term in this class. Here, rationalism refers to the camp of thinkers who sought to establish deductive certainty about the world.

An excerpt of Leibniz's notebook
An excerpt of Leibniz's notebook.

Second, it might even be the case that these thinkers would be hostile to the labels we've given them. Rationalists, predictably, were committed to mathematics. Descartes, as we've discussed, is famous for some of his mathematical breakthroughs. Gottfried Wilhelm Leibniz, another rationalist, had insights that led him to develop differential and integral calculus independently of and concurrently with Isaac Newton. Nonetheless, any careful reader sees very different approaches in these two thinkers (Descartes and Leibniz), thinkers who are allegedly in the same camp. And while ascribing a joint methodology to the rationalists is suspect, it is even more contentious to put the “empiricists” into a single camp. This is because there are well-documented cases of “empiricists” denouncing empiricism (see in particular Lecture 2 of Van Fraassen 2008). In short, this is a useful distinction, but one should not fall into the trap of believing that the two sides are completely homogeneous.

“This convenient, though contentious, division of his predecessors into rationalists and empiricists is in fact due to Kant. Believing that both philosophies were wrong in their conclusions, he attempted to give an account of philosophical method that incorporated the truths, and avoided the errors, of both” (Scruton 2001: 21; emphasis added).

 

1689, Holland

 

The human

John Locke (1632-1704) was a physician and philosopher and is commonly referred to as the "Father of Liberalism". In 1667, Locke became the personal physician to Lord Ashley, one of the founders of the Whig movement (a political party that favored constitutional monarchy, which limits the power of the monarch through an established legal framework, over absolute monarchy). When Ashley became Lord Chancellor in 1672, Locke became involved in politics, writing his Two Treatises of Government during this time period. Two Treatises of Government is a sophisticated set of arguments against the absolute monarchism defended by Thomas Hobbes and others. In 1683, some anti-monarchists planned the assassination of the king; this is known as the Rye House Plot. The plot failed and, although there is no direct evidence that Locke was involved, he fled to Holland. There, he turned to writing once more. In 1689, he publishes his Essay Concerning Human Understanding, the main subject of this lesson.

 

Decoding Locke

 

 

Signpost

At this point, we have three competing ways of conceptualizing knowledge: positivism, Cartesian foundationalism, and Lockean indirect realism. It might be helpful at this point to provide an image to associate with each of these. Enjoy the slideshow below:

These are incompatible perspectives. Moreover, Descartes and Locke have an added layer of conflict. Descartes clearly assumes that the foundation of all knowledge is the foundational beliefs established through reason, not beliefs acquired through the senses. Locke states exactly the opposite view: the mind is empty of ideas without sensory experience. The ultimate foundation of knowledge is sensory experience. We will identify this debate about the ultimate foundation of knowledge as "Dilemma #2: Empiricism or Rationalism?".

 

Problems with Locke's view

Just as Descartes' view has its issues, Locke's view doesn't seem to really help us escape from skepticism either. First and foremost, Locke admits that there is no way we can ever check that our ideas of the world actually represent the world itself. We can be sure, Locke claims, that our simple ideas do represent the world, but an ardent skeptic would challenge even this. The only way you could know that your representations of the world correspond to the world is if you could compare them; but this is impossible on Locke's view. This is, in fact, the argument that a fellow empiricist, George Berkeley, leveled against Locke.

Speaking of empiricism, here's another challenge to Locke. The empiricist camp happens to feature one of the most prominent skeptics of all time: David Hume. During his lifetime, Hume's works were consistently denounced as exercises in skepticism and atheism. How did Hume arrive at his skeptical conclusions? Through what some would call his "radical empiricism". If the goal is to escape skepticism, do we really want to take the side of an empiricist, like Locke or Hume?

Those who are well-versed in contemporary psychology might notice that there are other problems with Locke's view that are empirical in nature (see Pinker 2002). However, given the time period we are covering, we should, for the moment, focus on the philosophical objection that this project apparently fails to help one escape from skepticism. The empirical objections will have to wait until the last lessons of the course. Stay tuned.

 

Moving forward

Now that you've been exposed to the three competing approaches to knowledge available in the early modern period, which do you think is best? Let me rephrase that question. Had you lived through what someone living in the mid-17th century had lived through, which approach to knowledge do you think you would've preferred? Remember that we are jaded. We have 20/20 hindsight. We know how things are going to pan out. But if you were living at the time, my guess is that you would've sided with Descartes. Here are some reasons:

  • Descartes' rationalist/foundationalist project purports to be able to get us certainty, and this is very appealing. We have to remember that people were moving from the Aristotelian worldview, where objects were purpose-oriented and the Earth was the center of the universe(!), to a more mechanistic, cold, purpose-less Newtonian worldview. Humanity's self-esteem had taken a severe blow, and salvaging deductive certainty about the world seemed to be at least a comforting consolation prize.
  • The Baconian empiricist movement was new and unproven. Sure there had been advances in astronomy, but these had the effect of putting science into a state of crisis more than anything. Thinkers, like Descartes, desperately sought to find foundations to replace the discarded old ones.
  • Moreover, empiricism was suspicious. Locke's incapacity to ensure his readers that we really could know what the external world is like was worrisome. Later on, of course, Hume would use empiricism to lead us straight into the pit of skepticism. To many, empiricism didn't seem very welcoming.

And so, moving forward, we will further explore Descartes' rationalist/foundationalist project. The mere possibility of being able to achieve deductive certainty is extremely alluring. There is just one problem: Descartes uses God's existence as a means to escape skepticism. The next rational question is this: Does God actually exist?

 

 

 

Executive Summary

  • Two approaches to scientific explanations were covered: teleological (goal-oriented or function-oriented) explanations and mechanistic explanations, which are goal-less, function-less, and purpose-less. The Aristotelian science that dominated Western thought for a thousand years was teleological. The universe was thought to be inherently teleological. The job of a natural scientist was to understand the teleological essences of categories of objects.

  • There were some unexpected social benefits associated with the downfall of the Aristotelian worldview.

  • There are various problems with Descartes' view, including concerns over his foundational truths not being very foundational and the perceived inability of his project to get us out of skepticism.

  • There is a division between two warring philosophical camps: rationalism and empiricism. The rationalists ground knowledge ultimately on reason, like Descartes. The empiricists, like Locke, use sensory experience as their starting point for acquiring knowledge.

  • Locke's view is distinguished from Descartes' in many ways, including Locke's reliance on the senses and his rejection of innate ideas.

  • Locke's view, however, also has its problems; most notably, it's hard to see how we can ever check that our representations of the world actually match the world itself (which seems just as bad as skepticism).

FYI

Suggested Reading: John Locke, An Essay Concerning Human Understanding

  • Note: Read only chapters 1 and 2.

TL;DR: Crash Course, Locke, Berkeley, and Empiricism

Supplementary Material—

Related Material—

Advanced Material—

 

Footnotes

1. Castro at least would've had a good reason for wanting Kennedy dead. In chapter 18 of Legacy of Ashes, Tim Weiner (2008) reports that Kennedy seemed to have implicitly consented to assassination attempts on Fidel Castro, in order to repair his image after the Bay of Pigs disaster. There were dozens, maybe hundreds of attempts on Castro's life.

2. This discussion reminds me of the Metallica song "One" that describes the plight of a patient with locked-in syndrome.

3. The interested student can also take Dr. Leon's philosophy of religion class.

 

 

Aquinas
and the Razor

 

My name is Legion: for we are many.

~Mark 5:9

 

Dilemma #3: Does God exist?

The uncomfortable question that titles this subsection leads to the even more uncomfortable analysis of the arguments for God's existence that have been given throughout history. Equally discomforting is what I'm about to say: there's about a thousand years—the first thousand years, roughly speaking—of Christian history that will offer very little of value. Of course, my guess is that you will not consider this blanket statement to be a fair one. And so, I'll happily explain my reasoning in this lesson and the next.

First, however, I must make some preliminary comments. Recall that we are endeavoring to answer this question because we are attempting to find support for the Cartesian rationalist/foundationalist project. Descartes, it is clear from his Meditations, considered himself a devout Catholic. Descartes, moreover, believed that God, through His goodness, could be the solution to the problem of skepticism. And so, to keep the narrative flow of the course, we will look exclusively for proof of God as envisioned by the Catholic tradition. I hope this omission does not offend any students taking this course who align with Protestantism, the Church of Latter-day Saints, Islam, Hinduism, Wiccanism, etc.

Second, the approach will be, by and large, historical. This is, to repeat myself, first and foremost an intellectual history. As such, I want us to focus on arguments and ideas that would've been available to people in the early modern period in Europe. We can, at times, look at other arguments for and against God's existence that are more contemporary. But in the final analysis, these will have to be bracketed and put aside when assessing Cartesian foundationalism (although we can certainly discuss the meaning of contemporary arguments for and against God's existence for our own lives).

Third, I will make my very best attempt to give both sides of this debate. There are four lessons on the philosophy of religion in total. The first and the third will advance arguments for God's existence (and critique them); the second and fourth lessons will be given from the perspective of an atheist (and discuss some limitations). This bipolar approach might be jarring at first, but I think it is a useful method nonetheless.

Fourth, given that we will be discussing Catholicism exclusively, it should be said at the outset that when I use terms like "believers", "the faithful", and other allusions to adherents of a particular religion, I mean adherents to Catholicism. Again, this is for the sake of the narrative and has nothing to do with excluding non-Catholics.

Lastly, these arguments are very much only the "greatest hits" of philosophy of religion, something that is inevitable in an introductory course. A thorough review of the arguments for and against God's existence takes much longer than 4 brief lessons. The interested student can enroll in a philosophy of religion course. The references given in these lessons might also make valuable reading material.

 

Ganesha, unfortunately not covered in this course

 

The Closing of the Western Mind

 

Bart Ehrman's Lost Christianities

I made a contentious claim in the last subsection: that about a thousand year span of Christian history made no worthwhile arguments for God's existence. I defend that claim now.

Although I cannot peer into the minds of people, my guess is that most believers think that Christian doctrine is and has always been clearly articulated, with no major dissension. Nothing could be further from the truth. The history of the early Church was littered with heated theological disputes, some of which became lethal. For example, in Lost Christianities, Bart Ehrman details the various competing interpretations of what Christianity meant. One group worthy of mention is the adherents of a view known as gnosticism. This Christian sect believed that spiritual knowledge trumped any authority the Church might have, which you can imagine did not sit well with early church leaders. There were also, according to this sect, two gods: a good one and an evil one.

Another dispute that shook the early Church was over what eventually would be called the Arian heresy: the view that Jesus, although divine, was created by God and was subordinate to God. What would eventually become the orthodox (or official) position was that Jesus, or God the Son, has always been co-equal with God the Father. There are, however, various Bible passages that support the Arian interpretation over the orthodox interpretation (e.g., Mark 10:18, Matthew 26:39, John 17:3, Proverbs 8:22; see Freeman 2007: 163-177 for a full discussion). The existence of these passages explains why it was so difficult to stamp out the Arian heresy, which persisted well into the 4th century.

There was even dissent between church leaders, such as the rift between St. Paul and St. Peter. Paul seems to have been difficult to work with and he distanced himself from those who had actually known Jesus (such as Peter). Paul's insecurity over his questionable authority caused him to stress the supernatural aspects of Jesus, such as his resurrection, as well as how he received a revelation, the implication being that knowing Jesus personally is not the only way to receive the good news (see Freeman 2007, chapter 9).

And as if all this wasn't bad enough, there was even discord in a single person's interpretation of events. To take the example of Paul again, it turns out Paul was inconsistent in his teachings. Initially, he said that faith alone will save you. But this was because he believed that the second coming was imminent, and there was no time to change one’s behavior. Only later, once the second coming failed to materialize, did he stress the need for charity (ibid.).

 


Constantine the Great.

Then came Emperor Constantine. Realizing that persecuting Christians wasn't working, he decided to show them tolerance in the Edict of Milan, co-adopted with the Eastern emperor Licinius. (Note that by this point, the Roman Empire had multiple emperors at once to help rule the vast empire.) As a way to gain the favor of Christian communities that had been persecuted, Constantine gave the clergy exemptions from taxation and from the heavy burdens of holding civic office. However, he was surprised by the number of communities that called themselves "Christians", as well as by their conflicts with each other. Given that he had lavished gifts on the Christian clergy, there was an urgency to clarify what "Christian" really meant. Constantine ended up allowing the more pragmatic communities of "Christians" to keep their civic gifts and withdrew his patronage from the others. In doing so, he began to shape what would become the Catholic Church.

However, this did not end the disputes, which seemed to somewhat irritate Constantine. He thus convened the Council of Nicaea in 325 CE to try to establish what orthodox Christianity really was. The factions, however, did not come to an agreement, and Constantine had to settle theological matters by imperial decree. In other words, the non-Christian Roman emperor simply stipulated what the right view was. This happened time and time again in the history of the Church, with different emperors settling different theological disputes by decree. Not only does this seem less-than-divinely inspired, but it also imbued at least the Western half of the empire with a persistent and pervasive authoritarianism. Authority seemed like the way to solve problems. Moreover, once Christians had the backing of the state, they turned to the persecution of pagans. This persecution was literal, and it included the destruction of pagan temples, harassment, and murder (see Nixey 2018).

“’Mystery, magic, and authority’ are particularly relevant words in attempting to define Christianity as it had developed by the end of the fourth century. The century had been characterized by destructive conflicts over doctrine in which personal animosities had often prevailed over reasoned debate. Within Christian tradition, of course, the debate has been seen in terms of a ‘truth’ (finally consolidated in the Nicene Creed in the version of 381) assailed by a host of heresies that had to be defeated. Epiphanius, the intensely orthodox bishop of Salamis in the late fourth century, was able to list no less than eighty heresies extending back over history (he was assured his total was correct when he discovered exactly the same number of concubines in the Song of Songs!), and Augustine in his old age came up with eighty-three. The heretics, said their opponents, were demons in disguise who ‘employed sophistry and insolence’… From a modern perspective, however, it would appear that the real problem was not that evil men or demons were trying to subvert doctrine but that the diversity of sources for Christian doctrine—the scriptures, 'tradition', the writings of the Church Fathers, the decrees of councils and synods—and the pervasive influence of Greek philosophy made any kind of coherent ‘truth’ difficult to sustain... Both church and state wanted secure definitions of orthodoxy, but there were no agreed axioms or first principles that could be used as foundations for the debate... The resulting tension explains why the emperors, concerned with maintaining good order in times of stress, would eventually be forced to intervene to declare one or other position in the debate ‘orthodox’ and its rivals ‘heretical’” (Freeman 2007: 308-309).

 

 

Rediscovery

The preceding section showed that the early Catholic Church, rather than using reasoned debate and rational argumentation, relied on authority and the power of the state to establish Christian doctrine. After the fall of the Western Roman Empire in 476 CE, the region fell into its so-called dark age.1 Even prior to this, however, the nomadic tribes of the northern Arabian peninsula were progressively being unified into a meta-ethnicity (a coherent group composed of several ethnicities), perhaps surprisingly, by a shared language (particularly its high form, which was used for poetry). And then, in 570 CE, a man named Muhammad was born. After receiving his revelations, the unification of the Arab people was expedited, this time by religion (see Mackintosh-Smith 2019).


Early Muslim conquests.

The early Muslim conquests, 622-750, were lightning fast. In a historical blink of an eye, there was a new regional power. To show just how formidable the Muslim armies were, consider the Sassanid Persian Empire. The Sassanids had been at war with the Eastern Roman Empire (also known as the Byzantine Empire) for decades without one side truly defeating the other. Approaching the middle of the 7th century, though, the Sassanids began their military encounters with the Arab armies. The Sassanids, used to battle with the Byzantines, had a powerful but slow heavy cavalry. The Arabs, on the other hand, were light and quick-moving, and, through the combined use of camels and horses, were able to overwhelm the Persians. Not even war elephants could stop the Arabs, since some Arabs served as mercenaries during the Byzantine-Persian conflicts and knew how to deal with them. Eventually, compounded by bouts of plague and internal problems, the Sassanid Empire finally fell in 651.

The new Muslim Caliphates quickly became hungry for the intellectual insights of their conquered peoples and beyond. Historian of mathematics Jeremy Gray explains:

"When the Islamic empire was created, it spread over so vast an area so quickly that there was an urgent need for administrators who could hold the new domains together. Within ten years of Muhammad's death in AD 632 the conquest of Iran was complete and Islam stood at the borders of India. Syria and Iraq to the North were already conquered, and to the West Islamic armies had crossed through Egypt to reach the whole of North Africa. Since the Islamic religion prescribed five prayers a day, at times defined astronomically, and moreover that one pray facing the direction of Mecca, further significant mathematical problems were posed for the faithful... So it is not surprising that Islamic rulers soon gave energetic support to the study of mathematics. Caliph al-Mamun, who reigned in Baghdad from AD 813 to AD 833, established the House of Wisdom, where ancient texts were collected and translated. Books were hunted far and wide, and gifted translators like Thabit ibn Qurra (836-901) were found and brought to work in Baghdad... So diligent were the Arabs in those enlightened times that we often know of Greek works only because an Arabic translation survives" (Gray 2003: 41).

 


A madrasa, or place of study.

Although the Muslim conquests are fascinating in their own right, they are relevant in our story because of what happens next. In 1095, Pope Urban II called for the First Crusade. He instigated this Crusade by exaggerating the threat of Turkish aggression at Constantinople, as well as the poor treatment of Christians by Muslims (e.g., during pilgrimages). Most importantly, he demonized Muslims. And so, the Crusades began with the goal of taking back the Holy Land (see Asbridge 2011). Depending on what you count as a "Crusade", the story ends at different times. For our purposes, the Crusades ended when Acre fell to the Mamluk Sultanate in 1291.

How is this relevant to our story? It was during these centuries of prolonged contact with Islamic peoples that Europeans began rediscovering and reacquiring many ancient texts in mathematics, philosophy, and more. We will see this rebirth take place as we study two arguments for God's existence. The first will be by St. Anselm, written before the Crusades; the second will be by St. Thomas Aquinas, written towards the end of the Crusades.

 

 

 

Important Concepts

 

Some comments on metaphysics

I had previously said that you can just think of metaphysical questions as being of the sort "What is ______?" The reason for this short-hand is that metaphysics, much like philosophy itself, has been different things at different times. In classical Greece, metaphysics meant delving into the ultimate nature of reality, and it included some theories about cosmogenesis. By the 20th century, cosmogenesis was the purview of physics, and some philosophers, notably the members of the Vienna Circle, attacked metaphysics and demanded that the branch of philosophy be shut down. Today, given the state of theoretical physics, some philosophers (who are also trained in physics and/or mathematics) dive into the metaphysics of reality once more (see Omnès 2002). In short, defining metaphysics is tough, since it somewhat depends on the historical context.

 

Decoding Anselm

 

Decoding Aquinas

 

 

 

Ockham

Telling the story of Thomas Aquinas and his Five Ways is not complete without telling the story of William of Ockham. Ockham, as I'll refer to him, was a Franciscan friar who was born around 1287 and died in 1347. He is best known for endorsing nominalism, a view we'll cover in Unit III. He is relevant here due to his use of a methodological principle that bears his name: Ockham's razor. Ockham's razor, a.k.a. the principle of parsimony, is the principle that states that, given competing theories/explanations with equal explanatory power (i.e., theories that explain the phenomenon in question equally well), one should select the one with the fewest assumptions. Expressed another way, it's the principle that, all else being equal, the simplest explanation is the most likely to be correct. Put a third way, don't assume things that add no explanatory power to your theory, i.e., things that have no value in helping you explain a phenomenon. In short, when explaining a phenomenon, don't assume more than you have to.

 


 

Ockham's razor is applicable in many different fields. In computer science, although one starts off just trying to get the code to do what it's supposed to do, eventually a good programming practice to develop is to make your code more elegant and simple so that it is easier for other developers to read (see Andy Hunt and Dave Thomas' The Pragmatic Programmer); a small sketch of this idea follows below. In the social sciences, it is a feature of good theories to not assume more than what is necessary. If you can explain, for example, the Fall of the Western Roman Empire through disease, naturally-occurring climate change, social unrest, and external threats, then it's no good to also add in some fifth factor that doesn't add any explanatory value.
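To make the programming point concrete, here is a minimal, hypothetical Python sketch (it is not from Hunt and Thomas' book, nor from any course materials): two functions that behave identically, where the razor favors the simpler one because the extra machinery adds no explanatory or functional power.

    # A hypothetical illustration of parsimony in code: both functions decide
    # whether an integer is even, and they agree on every input. The first one
    # adds assumptions (string conversion, a lookup table) that do no extra work.

    def is_even_convoluted(n: int) -> bool:
        """Decide evenness via the last decimal digit and a lookup table."""
        last_digit = str(abs(n))[-1]
        table = {"0": True, "1": False, "2": True, "3": False, "4": True,
                 "5": False, "6": True, "7": False, "8": True, "9": False}
        return table[last_digit]

    def is_even_simple(n: int) -> bool:
        """Same behavior, fewest assumptions: just the modulo operator."""
        return n % 2 == 0

    if __name__ == "__main__":
        # Equal "explanatory power": both give the same answer on every input tested.
        assert all(is_even_convoluted(k) == is_even_simple(k) for k in range(-100, 101))
        print(is_even_simple(42))  # True

The razor does not say the convoluted version is wrong; it says that, since both do the same job, the version with fewer moving parts is to be preferred.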

One domain where I wish people would use Ockham's razor is in the realm of conspiracy theories. Often times, when someone espouses a conspiracy theory, they must assume many more things than the non-conspiracy theorist, such as secret societies, space-alien races, hitherto unknown chemicals and weapons, etc. Moreover, these extra assumptions actually have zero explanatory power. If you are attempting to explain some event via, for example, a secret society, then you have effectively explained nothing because the society is SECRET. That means you don't know their dark plan, or their sinister methods, etc. It's the equivalent of attempting to explain something you don't understand by using some other thing you don't understand as the explanation. Utter nonsense.

 


 

The full story of Ockham can only be told in a class on the Middle Ages, but suffice it to say here that Ockham gave commentaries on theological matters much like those of Aquinas' natural theology. Mind you, Ockham was a believer, but he also didn't think the arguments of his day proved God's existence. Ockham was not alone. Duns Scotus, another Franciscan friar who was about two decades Ockham's senior, found some difficulties in reconciling God's freedom with any reasoned argumentation or proof of God's existence.

“Duns Scotus (d. 1308) gave open expression to the rejection of reason from questions of faith. God, he held, was so free and his ways so unknowable they could not be assessed by human means. Accordingly there could be no place for analogy or causality in discussing him; he was beyond all calculation. Duns, in the great emphasis he placed on God's freedom, put theology outside the reach of reason” (Leff 1956: 32).

What Scotus is saying is that God is so free and so powerful that he can change reason itself. The laws of logic can be bent by God if God so chose. So any attempted proof of God, through reasoned argumentation, would fail, since it would assume something that God can destroy at will: the laws of logic.

"For the skeptics [of scholasticism], God, by his absolute power, was so free that nothing was beyond the limits of possibility: he could make black white and true false, if he so chose: mercy, goodness, and justice could mean whatever he willed them to mean. Thus not only did God’s absolute power destroy all [objective] value and certainty in this world, but his own nature disintegrated [in terms of the human capacity for understanding God through rational reflection]; the traditional attributes of goodness, mercy and wisdom, all melted down before the blaze of his omnipotence. He became synonymous with uncertainty, no longer the measure of all things” (Leff 1956: 34; interpolations are mine).

 

 


William of Ockham.

In other words, for Ockham and others, since God is all-powerful, He could change anything at any time, including our methods of human reasoning and logic itself. Put another way, human reason and logic are insufficient to understand a being this powerful. Since all arguments are grounded in human reason and logic, all arguments are insufficient to establish the existence of God, let alone an understanding of Him.

Again, Ockham (and Scotus) were believers. Ockham's view on his faith is called fideism. Fideism is the view that belief in God is a matter of faith alone. Any attempt to prove God’s existence is futile.

On account of his teachings, Ockham and others were called to Avignon, France, in 1324, to respond to accusations of heresy. After some time in Avignon, during which a theological commission was assessing Ockham's commentaries on theological matters, Ockham and other leading Franciscans fled Avignon, fearing for their lives. He lived the rest of his life in exile.

Ockham, however, plays an important role in the intellectual history of the Middle Ages. Indeed, Ockham serves as a breakpoint. Up to that point, the goal of philosophy in the Middle Ages was to affirm and reinforce the predominant theological views of the time. In other words, philosophy was just for defending existing religious doctrine. But Ockham saw the roles that philosophy and theology played as being distinct. In particular, philosophy could not really function to defend belief in a supernatural deity; it had, instead, its own separate function of exploring and defending philosophical positions (on politics, metaphysics, art, etc.). Historian of science W. C. Dampier writes of the intellectual milestone that is the work of William of Ockham:

“[T]he work of Occam marked the end of the mediaeval dominance of Scholasticism. Thenceforward philosophy was more able to press home its enquiries free from the obligation to reach conclusions foreordained by theology... [T]he ground was prepared for the Renaissance, with humanism, art, practical discovery, and the beginnings of natural science, as its characteristic glories” (Dampier 1961: 94-5).

 

 

 

Executive Summary

  • On account of Descartes' needing God to rescue us from skepticism, we are seeking an answer to the question of God's existence. Our focus will be on Catholicism.

  • During the early part of the Middle Ages, reasoned argumentation was not at its best in the West. It was primarily through prolonged contact with Muslim caliphates that the West rediscovered the Greek-style of argumentation.

  • Thomas Aquinas' work marks the height of the attempted combination of reason and faith. He essentially argues for God's existence using Aristotelian reasoning, albeit a Christianized version.

  • Approaches to theology like that of Aquinas, however, had some critics. We noted the work of William of Ockham.

  • Ockham argued that human reason and logic are insufficient to understand God since God (through his power) can simply change the laws of logic.

  • Per historian of science W. C. Dampier, Ockham's work marks the end of Scholasticism's dominance in the West and paved the way for the Renaissance and more.

FYI

Suggested Reading: Gordon Leff, The Fourteenth Century and the Decline of Scholasticism

TL;DR: Sky Scholar, What is Occam's Razor? (Law of Parsimony!)

  • Note: The Sky Scholar (Dr. Robitaille) is very cheesy. I love it.

Supplementary Material—

Advanced Material—

 

Footnotes

1. Interpreting what "dark age" means requires some clarification. In chapter 6 of his Against the Grain, James C. Scott beckons us to reconsider the use of the phrase “dark age.” Dark for whom? Is it merely decentralization? Is it merely that we can’t study it as well? Is it a real drop in health and life expectancy for the people? In many cases, it might've been a simple reversion to a simpler, more local way of life, without an imperial ruler unifying disparate peoples. Nonetheless, I will use this phrase here for rhetorical elegance.

 

 

The Problem of Evil

 

Utter trash.

~The Greek intellectual Celsus,
evaluating the Old Testament

 

An ever-changing dynamic

The relationship between intellectuals and religious faith is a complicated one. Some intellectuals, as will be shown below, had nothing but animosity towards the Christian faith. Others have been tremendously empowered by it and have used it as their motivation for their intellectual breakthroughs. Taking the perspective of the atheist, then, requires some subtle maneuvering. Many atheists consider themselves to be on the side of reason—in an imagined competition between faith and reason. But this will be easier said than done.

Let's begin in the early days of the Christian faith. We'll start in the year 163 CE. The intellectual climate in Antiquity, the label given to the time period which comprises the era of Classical Greece and Rome, was impressive to say the least. The most famous physician of the time period, Galen (129-210 CE), was performing public vivisections of animals (for scientific purposes), which were well-attended and popular. And although the views held by Galen were not, strictly speaking, compatible with modern science—for example, he never used control groups—his approach was nonetheless more empirical than any previous approach to medicine. In short, it was progress.

It is also the case that the philosophy of atomism was in full swing. This rich tradition, which began in the 5th century BCE, held that the universe is composed of tiny, indivisible atoms, each having only a few intrinsic properties—like size and shape. Atomist philosophers were seen as competitors to the Aristotelians, since they built their own comprehensive view of the origins of the universe without teleological natural philosophy.

It was even the case that the emperor of the day was a philosopher. This was none other than Marcus Aurelius (121-180 CE), who was deeply committed to Stoicism. Although most Stoic writings are lost to history, from the few texts and commentaries that have survived, we can discern the general view of the Stoics. The basic goal of Stoicism is to live wisely and virtuously. The Greek word used to label this state is arete, a term that is actually best translated not as “virtue” (as it is commonly done) but as “excellence of character.”

Marcus Aurelius.

Something excels, according to this tradition, when it performs its function well. Humans excel when they think clearly and reason well about their lives. To achieve this end, the Stoic develops the cardinal virtues of wisdom, justice, courage, and moderation. The culmination of these virtues is arete, and this is the only intrinsic good, i.e., the only thing that is good for its own sake. External factors, like wealth and social status, are only advantages, not goods in themselves. They have no moral status. They, of course, can be used for bad aims, but the wise mind uses them to achieve arete. For a review of Marcus Aurelius' practice, see Robertson (2019).

It appears that early Christians rejected many of the intellectual movements of their time. For example, the atomists, whose belief system contradicted the notion of life after death, were targeted by early Christians. Apparently the atomist doctrine that human lives were just a “haphazard union of elements” conflicted starkly with the Christian notion of a soul (Nixey 2018: 38). Much later, Augustine was concerned that atomism weakened mankind’s terror of divine punishment and hell (ibid., 39). As such, texts that espoused atomist views, like those of Democritus, were actively denounced or neglected and were eventually lost to the West.

The Stoic movement was also negatively impacted by Christianity. As previously noted, Stoic writings, along with most of the literature of the classical world, are lost, and this process was initiated when Christians took control of the Roman empire in the 4th century CE (Nixey 2018). In fact, only about 10% of classical writings are still in existence. If one narrows the scope only to writings in Latin, it is only about 1% that remains (ibid., 176-177; see also Edward Gibbon's History Of The Decline And Fall Of The Roman Empire, chapters 15 and 16). And so, the Stoic logic that is studied today (in courses like PHIL 106) was actually reconstructed (or "rediscovered") in the 20th century, as modern logicians were formalizing their field (O'Toole and Jennings 2004).1

Celsus.

As the anti-intellectualism of Christianity was becoming evident, thinkers in Antiquity were becoming increasingly suspicious and critical. The Christians shirked military service and instead preached meekness (which must've sounded ludicrous to Roman ears). Later on, the public spending on monks and nuns was believed to have weakened the Roman Empire substantially. Some intellectuals felt the need to cry out.

Celsus was one of these thinkers. He openly mocked the virgin birth and the Christian creation myth. He wondered why Jesus’ teachings contradicted earlier Jewish teachings (Had God changed his mind?) and why God waited so long between creation and salvation (Did he not care prior to sending Jesus?). He also wondered why God sent Jesus to a “backwater” (i.e., Bethlehem) and why he needed to send Jesus at all (Nixey 2018: 36-7). Celsus was most concerned with how willful ignorance, of which he accused the Christians, makes one vulnerable to believing things that could easily be dismissed by a more well-rounded reader. He pointed out what many of us today know all too well: people who get their information from only one source are at risk of bias, disinformation, and believing in things that a little critical thinking could dispel.

“Lack of education, Celsus argued, made listeners vulnerable to dogma. If Christians had read a little more and believed a little less, they might be less likely to think themselves unique. The lightest knowledge of Latin literature, for example, would have brought the interested reader into contact with Ovid’s Metamorphoses. This epic but tongue-in-cheek poem opened with a version of the Creation myth that was so similar to the biblical one that it could hardly fail to make an interested reader question the supposed unique truth of Genesis" (Nixey 2018: 42).2

After Celsus, Porphyry waged an even more thorough attack on Christianity. His attack was so ferocious that his works were completely eradicated, a task begun by Constantine. Celsus’ works, for their part, would have been lost if it weren’t for the long passages that the Christian Origen quoted during his counterattack.

But then again, almost every major thinker from the early modern period that we've covered so far was a believer; Hume is the sole exception (so far). It's not the case that believing in Christianity necessarily negatively affects your information-processing capacities.

“The mathematicians and scientists of the Renaissance were brought up in a religious world which stressed the universe as the handiwork of God... Copernicus, Brahe, Kepler, Pascal, Galileo, Descartes, Newton, and Leibniz… were in fact orthodox Christians. Indeed the work of the sixteenth, seventeenth, and even some eighteenth-century mathematicians was a religious quest, motivated by religious beliefs, and justified in their minds because their work served this larger purpose. The search for the mathematical laws of nature was an act of devotion. It was the study of the ways and nature of God which would reveal the glory and grandeur of His handiwork” (Kline 1967: 206-7).

 

 

 

The atheist response

We are now going to take a look at the perspective of the atheist. You will soon discover that there are atheist responses to every kind of argument for God's existence. To understand this, however, we must first clarify what the different types of arguments are.3

Once again it is Immanuel Kant who categorizes for us. In the domain of religion, Kant argues that there are only three kinds of argument for God’s existence:

  • the ‘cosmological’ category, as in Aquinas’ Argument from Efficient Causes,
  • the ‘ontological’ category, as in Anselm’s and Descartes’ ontological arguments,
  • and the ‘physico-theological’, also known as arguments from design.

The third category is of interest to us. We've not yet covered an argument of this type. Moreover, Kant, himself a believer, makes the case that this is the type of argument that is best understood by the human intellect. We'll look at one next.

“Kant says of the argument from design [the ‘physico-theological’ type of argument] that it ‘always deserves to be mentioned with respect. It is the oldest, the clearest, and the most consonant with human reason. It enlivens the study of nature, just as it itself derives its existence and gains ever new strength from that source’” (Scruton 2001: 66-7; interpolation is mine).

 

Argument from
intelligent design

As we mentioned last time, Aquinas provided his Five Ways, five proofs of God's existence. The fifth of these five ways is an argument of the physico-theological type, an argument from intelligent design. Rather than using Aquinas' arguments again, though, I will use a more modern argument. We will look at William Paley's watch analogy.

In the early 19th century, there was a revival of natural theology. Natural theologians, recall, assumed God's existence and sought to discover the grandeur of God's handiwork by studying the natural world. This revival was primarily due to William Paley. Paley advocated natural theology as a method of discovering (and praising) God’s work. It was perceived as an act of devotion. In fact, this is why Charles Darwin’s father, realizing his son’s waning interest in medicine (his first career choice), recommended that Charles take up natural theology instead. (Ironic, isn't it?) See chapter 1 of Wright's (2010) The Moral Animal for a short biography of Darwin which includes this anecdote.

The argument

  1. The world displays order, function, and design.
  2. Other things (e.g., watches) display order, function, and design.
  3. Other things (e.g., watches) that display order, function, and design, do so because they were created by an intelligent designer.
  4. Therefore, the world displays order, function, and design because it was created by an intelligent creator (and this is God).

 

 

Objections

The Regress Problem

If complexity implies that there is a designer, then consider how complex God must be. It seems like God, since He is so complex, also had a designer. In fact, how do we know that God wasn't made by some even more powerful meta-god?

Compatibility with Polytheism

Even if this argument is sound, it does not necessarily prove the existence of a singular God. It’s possible that many gods collaborated to create the universe. In fact, this sort of makes more sense, since most complex enterprises are done by teams and not individuals.

Hume’s Objection

In his Dialogues Concerning Natural Religion, Scottish philosopher David Hume made a comment relevant to this argument (although he had died before the publication of Paley's work). Hume made the point that in order for an analogical argument to work, you have to know the two things you are comparing. That is to say, if you are comparing, let's just say, life to a box of chocolates, in order for the comparison to work, you'd have to know both things fairly well. We are, of course, alive. And most of us have had experience with those boxes of assorted chocolates, where some items are very tasty but some are filled with some kind of gross red goo. The box of chocolates takes you by surprise sometimes, just like life. The analogy works because you know both things.

So here's the problem that this poses for the teleological argument: maybe you've seen a watch get made, but you've never seen a universe get created. You're comparing a thing you know (a watch) to a thing you don't understand fully (the universe). So, Hume would say, the analogy doesn't work.

Argument from Ockham’s Razor (Atheist Edition)

The following is an argument made by George H. Smith in his book Atheism: The Case Against God. It makes use of Ockham's razor to disarm the argument from intelligent design. Smith argues that the only difference between the view of the natural theologian, who uses empirical observation to try to prove God's existence, and the atheistic natural philosopher, who uses empirical observation to learn about the world, is that the former (the natural theologian) has an extra belief in his/her worldview. The extra belief is, of course, belief in God. But belief in God offers no explanatory power. This is because positing the supernatural as an explanation for some natural phenomenon explains nothing. Supernatural things are, by definition, beyond natural explanations. Thus, the design argument has zero explanatory power. By Ockham’s razor, the belief in God is superfluous.

Denying premise #1

Another, more general strategy is to deny that premise #1 is true. If successful, this objection would undermine the soundness of the whole argument. The argument might go like this. First off, we can say that the universe does not display purpose. Even though there are some regularities in the universe (like stable galactic formations and solar systems), none of these have any obvious purpose. What is the purpose of the universe? What is it for? These appear to be questions without answers, at least not definitive ones.

Some atheists (e.g., Firestone 2020) go further and attempt to dispel any notion that the universe might be well-ordered in any way. Firestone argues that the so-called regularities we do observe in the universe only appear to be regularities from our perspective. For example, we know that the early universe, soon after the Big Bang, was very chaotic (Stenger 2008: 121). Further, some parts of the universe are still chaotic (there are galaxies that are crashing into each other, black holes swallowing entire solar systems, etc.). We couldn't see much of that during Paley's time, but to continue to argue that the universe is well-ordered and displays function seems to be anachronistic (or out of sync with the times).

Some theists might respond to the objections above by arguing that some of the universe does have a function. Perhaps the function of our part of the universe is to harbor human life. If this is the argument, then there is a glaring problem with it. We must remind ourselves that human life on this planet is only temporary. Life on this planet will become impossible somewhere between 0.9 and 1.5 billion years from now (see Bostrom and Cirkovic 2011: 34). The sun, growing steadily more luminous, will heat up our planet until complex life is impossible; much later, in its red giant phase, it may consume the planet altogether. In either case, harboring human life would not be one of the permanent functions of Earth.

Lastly, even if we agree that there is some kind of order to the universe, this is not the same kind of order that is seen in a watch. Rather, it is merely the sort of pattern you would find in any complex system. That is to say, any sufficiently complex system gives rise to perceived regularities. This is usually referred to as Ramseyian Order (see Graham & Spencer 1990). In other words, this means that Paley is guilty of an informal fallacy; he used the word "order" with two different meanings (see Firestone 2020).

 


 

Equivocation is a fallacy in which an arguer uses a word with a particular meaning in one premise, and then uses the same word with a different meaning in another premise. For example, in the argument below, the word "man" is used in two different senses. In premise 1, "man" is (in sexist language) being used to refer to the human species. In premise 2, "man" is being used to refer to the male gender. Not cool, fam. Not cool.

Example of the equivocation fallacy

 


 

 

 

Important Concepts

 

Atheists on the attack

The traditional approach that atheists take when critiquing belief in the Judeo-Christian God is to show how the very descriptions and traits ascribed to God render the whole notion of God incoherent. There are, of course, other ways to argue against God's existence, but, in this course, we'll focus on the method just described.4

The Problem of Divine Foreknowledge

The problem of divine foreknowledge is one such problem. It arises when one reflects on the apparent incompatibility between God's omniscience and human free will. If God knows everything that there is to be known, then God knows every single choice you will ever make. To God, your entire life is like a book; God already knows how everything is going to turn out. As such, your life seems set: it is already written and cannot change. And if your path is already determined, then you don't really have free will; your fate is sealed.

 


Meme from internet,
grammatical error
in original.

This problem gets even more troublesome when you realize that, in the Christian worldview, there is an existence after your earthly death—either eternal bliss in heaven or endless punishment in hell. But God already knows where you'll end up. So, it might be that some of us end up in hell, tortured for all eternity, without ever having exercised any free will to deserve being there. In other words, it seems that God punishes those who couldn't help but behave in the way that they did, and this seems really unfair.

The Omnipotence Paradox

The attribute of omnipotence is itself probably incoherent. For example, can God make a boulder so big that God can't move it afterwards? If God can't make it, then God is not all-powerful; if God can, then God is not all-powerful either, since God can't move it. Here's another one: can God make beings such that, afterward, God can't control them? There are numerous other examples. Two is enough.

The Question of Miracles

Miracles occur when God bends the laws of nature. But why does God need to bend the laws of nature? Didn't God set everything in motion? Why didn't God plan everything correctly from the beginning? Did God make a mistake? Did God have a change of heart? Why would miracles be necessary for a being that is omniscient?

 

 

The Problem of Evil

The preceding problems each focus on one of the attributes ascribed to God and argue that it conflicts with our commonsense intuitions about the world, about humans, or about the other attributes ascribed to God. The Problem of Evil (PoE), however, utilizes all of the attributes ascribed to God in order to mount a powerful argument against God's existence...

Context

To understand this argument, we need to first grant an assumption: there exists unnecessary suffering in the world. To me, there's hardly a better example of unnecessary suffering than the world wars that the Great Powers dragged the rest of the world into during the 20th century. These included the deaths of millions, starvation, exposure to the elements, grotesque injuries, genocide, etc. The horrors of World War I (WWI) will be relevant later in this course, so I will now give some details.

WWI was caused by a series of alliances and arms races among the European powers, as well as by a German nation that felt it could not exercise its imperial power, since the other imperialist powers had already colonized much of the world. Moreover, Germany felt increasingly surrounded by its rivals. And so, Germany felt that a war against France and Russia could bolster its status as a regional power and grant it some colonies. All powers that eventually became involved thought it'd be a short war—the generals believed the soldiers would be "home before the leaves fall." They were wrong.

Seven reasons why WWI was pointless and full of unnecessary suffering:5


A soldier with shell shock.
  1. Military uniforms were horribly outdated. For example, the French high command refused to trade in their traditional, easily-spotted red pants for newer, more camouflaged colors like field gray. One commander even said, “Le pantalon rouge c'est la France.” [The red pants are France.] These colored pants simply painted a target on French soldiers, a completely senseless policy.

  2. Soldiers had insufficiently protective head gear. The Germans, for example, introduced the steel helmet only in early 1916, after almost two years of fighting. Countless deaths could be attributed to this shortsighted policy alone.

  3. Even though some thinkers believed that the economic cost of war was so great that it would be irrational to engage in conflict, like Norman Angell did in his influential The Great Illusion (first published in 1909), the Great Powers nonetheless continued their arms race. This made war all but inevitable as well as irrational. In the end, nations were bankrupted, empires fell, and millions were dead.

  4. The French had known about the German plan of attack, the Schlieffen Plan, through their intelligence channels. But they simply didn’t believe it, and later thought it would work out in their favor. They were wrong. They could've stopped Germany earlier, but they didn't.

  5. Highly ranked officers in the Russian army, on whom the French and British relied to attack Germany from the East, willfully refused to learn modern war tactics and were proud of it. This led to the continued practice of sending front-line troops to charge machine guns. This might've worked with older weaponry, but not with the mechanized warfare of WWI. Soldiers were slaughtered as they charged machine guns. Completely inexcusable.

  6. Although WWI was referred to as the “war to end all wars”, it really was classic imperialism in disguise. The Allies, for example, were making private deals with countries that could help them win the war, such as Italy, while making public proclamations about how people have the right of self-determination. The most glaring example of this was the Sykes-Picot agreement. Per this agreement, after the Allies won, the Ottoman Empire was to be broken up and distributed among the allied victors. The same month that this was being negotiated, the British government made a public declaration guaranteeing a national home for Jewish people in Palestine, which was part of the Ottoman Empire. Britain primarily wanted this home for Jewish people to extend its sphere of influence. And none of this would’ve been known if it weren’t for the Bolsheviks. Weeks after taking power, the Bolsheviks divulged the details of the Sykes-Picot agreement, showing that the Great Powers were engaging in imperialism, business as usual.

  7. WWI was the final straw that broke the Romanov Dynasty in Russia. A few months after the collapse of this regime, the Bolshevik Communists took over, initiating one of the most awful political experiments in the history of civilization. Even if the only effect of WWI had been the Bolsheviks' rise and the creation of the Soviet Union, that alone would be damning: the excess mortality in the Soviet Union during Stalin's reign was about 20 million. In other words, Stalin caused as many deaths as WWI itself, and it was in large part WWI that allowed Stalin's party to take power.

 

The result:

About 20 million people died in WWI.

 


 

 


 

 

 

 

To be continued...

 

FYI

Suggested Reading: John Mackie, Evil and Omnipotence

TL;DR: Crash Course, The Problem of Evil

Supplementary Material—

Related Material—

Advanced Material—

 

Footnotes

1. Gibbon’s Decline and Fall of the Roman Empire was placed in the Vatican’s Index Librorum Prohibitorum, i.e., the "List of Prohibited Books".

2. Of course, an educated Christian would’ve also been able to tell that there were several other people preaching that they were the messiah, claiming divinity and living a life of renunciation. Several even claimed that they had to die for the sake of humanity and had followers who claimed they resurrected after their death. One such case was that of Peregrinus, whose biography mirrors that of Jesus of Nazareth to an alarming degree. In any case, educated Christians would not have known about Peregrinus for long. A book by Lucian describing the tale of Peregrinus was also banned by the Church. The Church dealt with pagan messiahs whose life was similar to that of Jesus so often that there's a term for it: diabolical mimicry.

3. A note on the label "atheist": It is not universally agreed upon what the label "atheist" actually means. Does it mean an active denial of the Judeo-Christian God's existence or is it merely non-belief? In this class, I will refer to the individual who actively denies the existence of some deity as an atheist and someone who just has no beliefs about deities as a non-theist; someone who believes in God is a theist. As for the agnostics, I won't define that category in a loose "spiritual" sense like some of my students have in the past. In this class, an agnostic is someone who believes knowledge of the supernatural is impossible; a gnostic is someone who believes knowledge of the supernatural is possible. According to this nomenclature, one can actually combine positions. One can be either a theist, atheist, or non-theist as well as either a gnostic or agnostic. For example, Ockham was a theist agnostic, who believed in God but thought that knowledge of God was impossible. Aquinas was a theist gnostic, who believed in God and thought he could have knowledge of God's existence. In this class, we will cover the position of the gnostic atheist: the person who actively denies the existence of the Judeo-Christian God and who believes they know that this supernatural being does not exist.

4. Another strategy used to argue against God's existence is sometimes referred to as "debunking". The general idea here is to tell the history of some religion, for example, Christianity, and then argue that there is nothing divine or supernatural about this belief system. So, there is the deductive approach which seeks to undermine the intelligibility of the notion of God and the historical inductive approach which undermines confidence in religious beliefs by explaining the phenomenon in purely natural terms. We will be using the first approach, i.e., the deductive approach, in this class.

5. For more on 1914, see Barbara Tuchman's The Guns of August. For a history of the imperial powers in the Arabian peninsula, see chapter 13 of Mackintosh-Smith (2019). Mackintosh-Smith has this to say about the Sykes-Picot agreement: “Some commentators have argued that their pact... ‘was a tool of unification, rather than the divisive instrument it is now commonly thought to have been.’ That is sophistry. The agreement did, in fact, accept the principle of eventual Arab independence, but on condition of the two powers having permanent influence. A prisoner is not free just because he is under house arrest instead of in jail” (Mackintosh-Smith 2019: 442-443).

 

 

Pascal's Wager

 

Essentially, all models are wrong, but some are useful.

~George E. P. Box

 

On the reluctance of using the Bible to defend one's belief in God's existence

When first encountering the philosophy of religion, many years ago now, I was surprised at how sophisticated theologians refrained from using the Bible in their argumentation. To my young mind, it was obvious that, to argue for God's existence, the Bible is your most valuable tool. And yet, exactly the opposite strategy was in play, and it took me many years to discover why. I hope to save you much of the trouble that I went through and summarize why recourse to the Bible is not a viable maneuver in defending belief in the existence of God.

The first reason for not utilizing the Bible to defend religious conviction has already been stated. Recall that Descartes and Locke noted that religious fanatics reasoned just fine, and were perfectly capable of coming to conclusions given some initial assumptions. The problem, they realized, was that religious fanatics held some radical religious assumptions—for example, that Protestants were heretics who were poisoning minds and leading souls straight to hell—without any proper justification for them (in Descartes' case) or with a confidence they were not entitled to (as Locke argued). With regard to using the Bible to defend belief in God's existence, the issue is that these beliefs are not independent of each other. Typically, one either believes both in God's existence and in the truth of the Bible or in neither. In other words, one belief supports the other and vice versa. This is not bad in and of itself. The problem arises, however, when one uses the first belief to support the second and then the second to support the first. This is circular reasoning, and it violates what we've been calling Aristotle's dictum.

 


 

The mental machinery behind this sort of circular reasoning is absolutely fascinating. In their recently revised Mistakes Were Made (But Not By Me), Carol Tavris and Elliot Aronson explore the phenomenon of self-justification, our tendency to justify those beliefs and decisions in which we are heavily invested and/or cannot change easily. In other words, if we are heavily invested in and cannot change an action (like having lied in an interview to get a job) or a belief (like the belief that our career choice was the right one), then we are likely to engage in self-deception and claim that the choice/belief was the right one all along and that any reasonable person would've done or believed the same. Moreover, we are likely to selectively recall information that confirms the rightness of our action/choice and ignore threatening evidence—classic confirmation bias.

After explaining their theory, the authors go through many examples of self-justification in different professions and domains. Chapter 4—my favorite—details the phenomenon of false memories. They set up the topic by discussing various episodes in which false memories were implanted by well-intentioned therapists, including 'recovered' false memories of sexual molestation (where there never really was any abuse). The authors then discuss the real victims: those who were wrongfully convicted of the "crimes" in the recovered false memories. Most relevant to us, the authors stress that the therapists do not change their minds and do not question their techniques and clinical practices.

The general problem here is that the therapists' system of beliefs is a closed loop. If the patient gets better or recovers a memory, then the psychotherapist's practice must've worked; if they didn't get better or recover a memory, the patient must be resistant to therapy. In other words, there is no way to disconfirm the therapists' system of beliefs. It's not only clinicians who are plagued by the curse of the closed loop. The authors provide details of closed loops in the criminal justice system (in the minds of both prosecutors and those who conduct interrogations), in politics, and even in romantic relationships. The same could be said about the closed loop of believing God exists because it's stated in the Bible, and believing that the Bible is true because it was inspired by God. It's a trap that's hard to get out of, since escaping it requires admitting that we formed beliefs or took actions without good reasons (see Tavris and Aronson 2020).

One more thing that is relevant here is self-justification when choosing majors. In my own experience, I've met students who were originally in a major under the Behavioral and Social Sciences Division (where I am housed) and who then switched to another division. Choosing a major is obviously a big decision; it can be nerve-racking and could define the course of the rest of your life. Because of this uncertainty, I've noticed the need for some students to make themselves more comfortable with their choice by ridiculing, attacking, or otherwise dissociating themselves from their past major. However, it is perfectly possible to be, say, a math major without denigrating, say, psychology. The fact that these students felt compelled to disparage their former major is, I think, due to the mind's need for self-justification.

 


 

 

The second reason I'll give for why intellectuals from the past have repeatedly refrained from using the Bible to defend their religious convictions is that intellectuals tend to be committed to logical consistency and, unfortunately, the Bible has some glaring inconsistencies. If one engages in serious study of the Bible, one runs into problems early on. For example, the Book of Genesis contains two incompatible creation stories. Why? Why would there be two accounts of the origins of the world, incompatible with each other, placed right next to each other? Even more problematic is that the creation stories conflict with our scientific understanding of the world. For example, in the first creation story, God creates day and night before he creates the sun (which is what determines when it is day and night). Obviously, the thinkers we've been covering were heavily invested in the astronomy of their day, and they would've recognized this little hiccup (in literally the first book of the Bible).

There are other problems. As we've seen before, the early Church had to deal with many (what would eventually be called) heretical versions of Christianity. We mentioned before the Arian heresy, the view that Jesus was not equal to God the Father but actually subordinate to Him. Reading the Bible closely, though, really does lead one to this sort of belief, a view sometimes called subordinationism. For example, if one looks at the Gospels, the genealogies of Jesus are very incriminating. If you have a Bible handy, take a look at Matthew's genealogy of Jesus, and compare it with Luke's. Two things are immediately obvious. First, they're not identical; Matthew's account is substantially shorter. Shouldn't there be agreement on this, though? Second, and perhaps more damning, is that the genealogies are patrilineal. In other words, they describe the male line of ancestry. They both end with Joseph and then Jesus. But, according to tradition, Joseph is the stepfather of Jesus; Jesus was born of Mary, who is said to have been a virgin all her life. And so, it doesn't make sense to give a patrilineal genealogy, since Joseph is not the father. Subordinationists used this to argue that Jesus was actually just a human and, at least in one version of the view, was adopted by God (see Ehrman 2005 and Ehrman 2015).1

Augustine of Hippo (354-430 CE).

One more issue with the Bible is not so much about the inconsistencies within it but rather has to do with the quality of the writing. Apparently, the register (the level of formality of a piece of writing) and accent that the Bible was written in were “embarrassing” to many Christian intellectuals (see Nixey 2018, chapter 10). This, however, is difficult for modern readers to appreciate, since the process of translation has smoothed over some of the problems in the original languages (Hebrew, Aramaic, and Greek). One would have to both learn these languages and find early copies of these texts to understand the complaint being made here. I, for one, don't read any of these languages, so I can't make this claim directly. However, it is telling that some early Christian intellectuals did feel the need to defend the way the Bible was written. In fact, even Saint Augustine (354-430 CE), considered one of the most important Church Fathers of the Latin Church, felt the need to defend it (ibid., 160-61). The poor style of the biblical writers was even more obvious when compared to the superior pagan literature, including writings from non-Christian philosophers, physicians, and even emperors.2

I'll give one more reason why the Bible is not often used by intellectuals to defend belief in God. This has to do with something that is taught in every class on public speaking or persuasive argumentation: know your audience. When one is defending religious convictions, it is typically against non-believers. Clearly, non-believers aren't going to be convinced by references to a book they don't believe is true. In short, no atheists are going to be convinced by a recourse to the Bible, so there's no sense in trying.

These are just some of the inconsistencies in and problems with the Bible; there are more. Rather than spending more time on this, though, I'll just say the following. If one is engaging in rational argumentation, books of supposed supernatural origin aren't going to have much explanatory power. Generally speaking, this strategy is a dead end.

And so, religious intellectuals from Anselm to Descartes have thought it best to steer clear of these biblical issues. Anselm and Descartes, instead of using the Bible in their argumentation, gave their different versions of an ontological argument. Aquinas and Paley argued from natural theology, as we've seen. The author we're covering today, though, went by an entirely different route.

“Let us now speak according to natural lights. If there is a God, he is infinitely incomprehensible, since, having neither parts, nor limits, He has no affinity to us [Ockham’s point]. We are therefore incapable of knowing either what He is, or if He is... Who then will blame Christians for not being able to give a reason for their belief, since they profess a religion for which they cannot give a reason?” (Blaise Pascal as quoted in Blackburn 1999: 186; interpolation is mine).

 

 

 

The human

Blaise Pascal was born in 1623. Due to his poor health, he did not enjoy regular schooling; rather, his father (an accomplished mathematician) taught Pascal a curriculum consisting primarily of classical languages and mathematics. He was never formally trained in theology or philosophy. At a very young age, he showed great promise in mathematics and, throughout his life, he met and corresponded with various influential mathematicians of the day, including some of the members of the Mersenne Circle, René Descartes, and Pierre de Fermat.

When meeting with Descartes, Pascal likely told him about an experiment he would be conducting soon, the results of which were published in Pascal's Récit de la grande expérience de l'équilibre des liqueurs in 1648. With few exceptions, however, most of Pascal's works weren't published until after his death. This is because, throughout his life, his attentions oscillated between mathematics and his devotion to a very strict sect of Christianity started by Cornelius Jansen. In fact, Pascal's contributions to probability theory (likely what he is most known for besides the argument we'll be covering in this lesson) were not known until the Swiss mathematician Daniel Bernoulli used Pascal's insights in the early 1700s.

Towards the end of his life, he focused exclusively on a theological defense of Catholicism. He left bundles of notes to be published, but gave no directions on the order in which they should be organized. This posthumously published notebook is known as the Pensées, and it is from this body of work that we are pulling his argument for "wagering for God." In this passage, we can also see Pascal's early insights into probability theory.

“In another landmark moment in this passage, he next presents a formulation of expected utility theory. When gambling, ‘every player stakes a certainty to gain an uncertainty, and yet he stakes a finite certainty to gain a finite uncertainty, without transgressing against reason.’ How much, then, should a player be prepared to stake without transgressing against reason? Here is Pascal’s answer: ‘…the uncertainty of the gain is proportioned to the certainty of the stake according to the proportion of the chances of gain and loss…’ It takes some work to show that this yields expected utility theory’s answer exactly, but it is work well worth doing for its historical importance” (Hájek 2018, Section 4).
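To make the arithmetic behind this passage concrete, here is a minimal sketch, with invented stakes and probabilities (none of the numbers come from Pascal or Hájek), of the expected-utility idea: weight the value of each possible outcome by its probability and sum the results.

```python
# A minimal sketch of expected utility; the stakes and probabilities are invented.

def expected_utility(outcomes):
    """outcomes: a list of (probability, value) pairs whose probabilities sum to 1."""
    return sum(p * value for p, value in outcomes)

# A fair-coin gamble: stake 10 units for a prize of 25 units.
stake, prize, p_win = 10, 25, 0.5
gamble = [(p_win, prize - stake), (1 - p_win, -stake)]

print(expected_utility(gamble))        # 2.5: a finite certainty staked for a finite uncertainty
print(expected_utility([(1.0, 0)]))    # 0.0: the expected utility of not playing at all
```

On this way of reckoning, a gamble is worth taking whenever its expected utility beats that of abstaining; Pascal's wager runs the very same calculation with an infinite prize.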

Pascal's life was characterized by ill health and lifelong pain. Some commentators even speculate that he suffered from manic depression (see the Stanford Encyclopedia of Philosophy's Entry on Blaise Pascal). He died in Paris in 1662 at the age of 39.

 

Important Concepts

 

Decoding Pascal

 

 

 

Objections

The "mathematical error" objection

Many students are initially put off by the use of ∞ in Pascal's mathematical argumentation. But this was a perfectly legitimate move in Pascal's time. The reality is that mathematical concepts have a messy history. Time and time again, new ideas and approaches to the study of mathematics were questioned, updated, debated, thrown out, and taken up again. For example, irrational numbers and non-Euclidean geometries were both looked upon suspiciously in the beginning. There's even a myth that some Pythagoreans, who believed that all numbers could be expressed as the ratio of integers, drowned the discoverer of irrational numbers (which can't be expressed as the ratio of integers). Similarly, in his survey of the cognitive science of mathematics, Dehaene reminds us that negative numbers were also heavily debated upon their introduction.

“For Pascal himself, the subtraction 0 - 4, whose result is negative, was pure nonsense”
(Dehaene 1999: 87).

This is, in fact, how real science is done. Real science involves debate, suspicion of one another's findings, conflicting interpretations of data, and even challenges to the experimental methodology of other scientists. It is far from a clean, heroic picture. Here are two examples of now-established scientific views that were initially regarded with wariness, and one widely held view that is now coming under attack:

  • Einstein's relativity was not universally accepted at first. His views were confirmed (for most) only in 1919 when an eclipse afforded scientists the opportunity to see how a great mass can bend light (Kennefick 2019).
  • Einstein himself could not accept quantum mechanics, always believing that there was a 'hidden variable' that would eventually be discovered and return something known as determinism to physics. (Stay tuned.)
  • Today there's a debate in the neuroscience of emotion between those who believe that there are universally built-in emotions (the classical view) and the view that emotions are socially-constructed in different ways by different people (the construction theory of emotion; see Barrett 2017).

All this is to say that Pascal likely had some good responses to this sort of criticism, even if they are lost to history. All we can say is that it was perfectly acceptable for him to argue in the way he did given the state of mathematics at the time. It's unreasonable to accuse him of committing a mathematical error.3

The Many Gods Objection

An extended decision matrix.

Another possible response to Pascal's wager comes from the admission that there are, in fact, many other competing religions, at least some of which deserve some probability in the decision matrix. However, the utility promised by some of these religions, such as Judaism and Islam, is also infinite. As such, if the probability that one of these religions is correct is non-zero, then wagering for it would also yield infinite expected utility. This implies that there are multiple routes to infinite expected utility. Choosing Christianity, then, seems arbitrary, and an arbitrary choice is an unjustified one.

The Zero Probability Objection

Perhaps the most damning—no pun intended—objection to Pascal's wager is the zero probability objection. The atheist might argue against premise 2; i.e., she might claim that the probability of God existing is actually zero. Perhaps this atheist takes the problem of evil to be valid and sound. Since the conclusion of that argument states that the very concept of God is incompatible with the world as we know it, it entails assigning a probability of zero to God's existence. In that case, the expected utility of wagering for God is 0 (since, on the standard convention used here, ∞ × 0 = 0). If this atheist is correct, it makes no sense to wager for God, since you're better off choosing another row in the decision matrix where there is at least a chance of an expected utility greater than zero. A lot rides on whether or not the problem of evil can be solved...
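Both of these objections can be made vivid with a little arithmetic. Here is a minimal sketch, with invented probabilities, that treats the payoff of a correct wager as infinite and adopts the convention above that a strictly zero probability yields an expected utility of zero.

```python
# A toy decision-matrix calculation; the probabilities are invented for illustration.
INF = float("inf")

def expected_utility(p, utility_if_true, utility_if_false=0):
    # Convention assumed in the lecture: a probability of exactly zero
    # cancels even an infinite payoff.
    if p == 0:
        return 0
    return p * utility_if_true + (1 - p) * utility_if_false

# Many Gods: any non-zero probability times an infinite payoff is still infinite,
# so several rows of the matrix tie at infinite expected utility.
print(expected_utility(0.10, INF))   # inf
print(expected_utility(0.01, INF))   # inf (choosing among these rows looks arbitrary)

# Zero Probability: if the problem of evil forces p = 0, the wager earns nothing.
print(expected_utility(0.0, INF))    # 0
```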

 

 

 

Executive Summary

  • The thinkers from the early modern period who held religious convictions typically did not have recourse to the Bible to defend their faith; they used rational argumentation instead.

  • Blaise Pascal gives an argument suggesting that believing in God might be rational despite the absence of a convincing argument for God's existence.

  • Pascal's argument, however, might be problematized by arguments like the Problem of Evil, which one can interpret as arguing that there is zero chance that God exists.

FYI

Suggested Reading: Alan Hájek, Pascal’s Wager (Note: Focus primarily on sections 1, 4, & 5.)

TL;DR: Crash Course, Indiana Jones and Pascal’s Wager

Related Material—

  • Video: TED, Dan Gilbert: Why We Make Bad Decisions
  • Podcast: Making Sense Podcast, What is and what matters: A Conversation with Rebecca Goldstein and Max Tegmark
    • Note: In this recording of a live event, Sam Harris and his guests Rebecca Goldstein (a philosopher) and Max Tegmark (a physicist) discuss how, in effect, it seems like science has a little philosophy built into it. This is because the interpretation of empirical data is theory-laden. Put into the language that we've been using, although scientists in a given field are all looking (roughly) at the same data, their interpretations sometimes differ. That is, scientists import a little bit of their worldview into their work. They interpret data using the assumptions of their worldview.

Advanced Material—

  • Note: In this lecture, we utilize decision theory to assess Pascal's argument. Here is an introduction to decision theory.

  • Reading: Katie Steele and H. Orri Stefánsson, Stanford Encyclopedia of Philosophy Entry on Decision Theory

 

Footnotes

1. Fun fact: A student once came up to me after this lesson and told me that a. she really enjoyed the lesson and was thinking a lot about the content, and b. she felt like just studying this was sinful and she felt the need to go to confession.

2. The fact that non-Christian literature appeared to many to be more sophisticated led to a process by which Christians began to Christianize and absorb the pagan literature, at least in part due to self-interest. Augustine is given credit for "christianizing" Plato. Aquinas "christianized" Aristotle.

3. Students, unfortunately, often come to higher education with overly simplistic views of science, in which people in lab coats simply produce facts. Many have an even more simplistic view in which scientific progress is basically the labor of just a handful of thinkers. Science—real science—is dynamic, combative, and competitive, and this is actually what makes it so effective. Its history—its real history—contains tens of thousands of views that were discarded because they didn't work out. The fact is that science, generally speaking, progresses through hundreds of small discoveries, made by hundreds of labs working independently (and competitively), fighting for the same grants (their sources of funding), and it is only time that definitively weeds out the non-productive theories. Part of the reason why students hold some of their more unsophisticated beliefs about science is that the history of science really is taught like a heroic tale, in which the good guys have the right idea, some others are naysayers, but in the end the good guys always win. Essentially, I'm blaming K-12 education. Again, the reality is much more convoluted than this. For example, in chapter 11 of his Not Even Wrong, Woit argues that most histories of 20th-century physics and mathematics—like his own—ruthlessly suppress and ignore research programmes that ultimately failed, making it seem as though most researchers were working on the theories that ultimately prevailed. However, Woit makes clear that the reality was that a majority of physicists weren't working in the "triumphant" research programmes; only a minority of thinkers were working on the projects that were actually successful. Let me say that in a different way to drive the point home. Most physicists working on particle physics in the time period Woit describes were wrong. They worked on theories that ultimately failed. Now this is (to me) heroic in its own way; but this is not the tale of science that our educational system imparts to our students. It wants to tell a neat tale of success, and this (unfortunately) obscures reality.

One last point that seems relevant here. We're beginning to see something peculiar. Time and time again, some intellectual insight is preceded by some mathematical breakthrough. Consider the relationship between mathematics and physics. Time and time again, mathematicians have created tools that would eventually become useful to physicists, even though the mathematicians never intended them to be. Non-Euclidean geometry, for example, pre-dated relativity theory; representation theory was eventually used in Schrödinger's equation; and Clifford algebra was used in Dirac's equation (which is basically Schrödinger's equation made compatible with relativity theory). This is strange. It's almost as if the world necessarily conforms to mathematical law, and so mathematicians, by simply engaging in pure mathematics, develop tools that will eventually be useful in understanding the world (even though this was not their intention). Is mathematics the key to understanding reality? And if it is, why is this the case? Stay tuned.

 

 

The Problem of Evil (Pt. II)

 

 

We are all atheists about most of the gods that societies have ever believed in. Some of us just go one god further.

~Richard Dawkins

 

PoE II

 

 


 

Executive Summary

  • The dynamic between intellectuals and faith is complex and constantly changing. In the early stages of Christianity, many non-Christian intellectuals were opposed to the faith. In the early modern period, many great scientific discoveries were actually fueled by religious devotion. Today, some intellectuals are once again attacking religion—but not all.

  • Much ink has been spilled by atheists in giving their numerous objections and responses to any conceivable argument for God's existence.

  • With regards to arguing against God's existence, a common argument from the atheist camp is known as the Problem of Evil. This argument states that the existence of an all-powerful, all-knowing, all-loving God is incompatible with the existence of the unnecessary suffering that we see in the world around us. Since it is unreasonable to deny that unnecessary suffering exists, the atheists argue, the only rational response is to abandon belief in an all-powerful, all-knowing, all-loving God.

  • There are various attempted solutions to the Problem of Evil that, although initially appealing, either concede far too much to the atheist or are ultimately too weak; e.g., deism, the claim that the devil causes unnecessary suffering, etc. One stronger and more popular potential solution is the free will solution. We turn to the topic of free will next.

 

 

 UNIT II

endgame1.jpg

Laplace's Demon

 

 

“I used to say of him that his presence on the field made the difference of forty thousand men.”

~Arthur Wellesley,
1st Duke of Wellington
speaking of Napoleon Bonaparte

War is hell

What is the best way to wage war? What do all great generals have in common that bad generals lack? What strategy will ensure that your forces win and the other side loses? In How Great Generals Win, military historian and Korean War veteran Bevin Alexander (2002) makes the case that what truly sets apart great generals is their disposition to attack only the flanks (sides) and rear of the opponent and the poise not to engage in a full frontal attack. Full frontal attacks, such as the kind that were repeatedly attempted in WWI, rarely lead to a decisive victory and instead simply increase the casualties on both sides. Alexander acknowledges that there are other requirements for success in war, such as having up-to-date armaments, getting to the theater of battle early and with sufficient forces, having reliable intelligence on the operations of the enemy, and mystifying and confusing your opponent as much as possible. But, all things being equal, Alexander argues that attacking the flanks and rear is what most successful generals have done in most of the battles that they've won. One example that Alexander gives of a general who employed this strategy is Napoleon Bonaparte (1769-1821).

Alexander's How Great Generals Win

Let's begin with some context. As you recall, the Thirty Years' War (1618-1648) desolated large portions of Western Europe and caused the deaths of about 8 million people, including about one-fifth of the German population. Thousands of towns and villages were burned and abandoned. At the conclusion of the conflict, the map of Europe had been altered, and absolute kings gained control of the devastated territories. The kings who ruled these territories were set on stopping the depredation, and they established professional standing armies that were kept separate from the general population. These professional armies had a lamentable combination of traits: they were both extremely expensive and extremely untrustworthy. They were expensive because they had to be kept on a war footing at all times; there was essentially never any demobilization, and they were always ready for war. The soldiers, however, were recruited from the dregs of society. They were violent and prone to desertion. The generals kept a constant watch on them, apparently not even letting them bathe without supervision. Because of their expense and their unreliable nature, these armies were kept small, and military tactics were formed around these constraints.

But things were about to change. The writings of thinkers like Jean-Jacques Rousseau (1712-1778) challenged the aristocracy and argued instead for democracy and freedom. Indeed, in one nation in particular these ideas led to the overthrow of the monarchy and the institution of a republic. This is, of course, France. Although the French Revolution deserves a whole course unto itself, here's what we can say about how it affected war. Since the soldiers were no longer merely subjects—but citizens—there was an increased loyalty to the state. Since they were, in a sense, the state, a type of worship of the state began. Nationalism was born. Now the soldiers could be trusted not to desert and to be committed to the cause of the nation-state. Armies could grow in size and become more maneuverable. Into these circumstances entered a military genius who would transform warfare.

A Gribeauval cannon from the Napoleonic era.

Per Alexander (2002, chapter 3), Napoleon's favorite tactic was the strategic battle. Napoleon would first attack frontally with enough determination to fool his enemy into thinking that he intended to break through. Napoleon would then send a second, large force to attack the flank of the enemy. This would cause the enemy to divert forces from their center to the flank being attacked. Since the enemy had to send reinforcements to the flank hastily, these were usually taken from the part of the center closest to the flank being attacked. This created a weak point in the center—again, the part closest to the flank being attacked. The third part of the strategy was to unleash an artillery attack that had been hidden opposite the part of the center that was now weak. Napoleon knew this portion would be weak, since he had planned on attacking the flank near it and hence knew the enemy would pull men from this region. After the artillery attack, Napoleon would concentrate his forces and push through the weakened part of the center. Speaking very loosely, Napoleon feinted a frontal attack, then feinted a flank attack, and then pushed through the weakened part of the center between the two.1

This was, of course, not the only strategy that made Napoleon militarily successful. Moreover, the strategic battle was described in its ideal form above; as the Prussian military commander Helmuth von Moltke said, "No plan survives first contact with the enemy." This appears to be true in all types of combat. Mike Tyson is quoted as saying, "Everybody has a plan until they get punched in the mouth." And so Napoleon's strategic battle was hardly ever executed in pristine form, but this is not the place to give those details. Lastly, it seems Napoleon did not always follow his own strategic recommendations. Alexander claims that, after he had become emperor, Napoleon no longer attempted to win battles through guile, speed, and deception; instead, he purchased victories at the cost of human lives.

Nonetheless, Napoleon is remembered for his genius in battle, even if it was inconstant throughout his life. It should come as no surprise that he was taught by some of the best. During his time in the École Militaire, he came into contact with one of the greatest mathematicians of the era: Pierre-Simon Laplace (1749-1827). Laplace was the first unambiguous proponent of a view called determinism. We'll cover this view later in this lesson.

But before leaving Napoleon, let's consider for a moment what we can learn about Napoleon's military skill when we use quantitative methods. In other words, let's look at what mathematical techniques can teach us about Napoleon. Apparently, he was really a force to be reckoned with.

“Using the methods developed by military historians… it is possible to do a statistical analysis of the battle outcomes. Taking into account various factors, such as the numbers of men on each side, the armaments, position, and tactical surprise (if any), the analysis shows that Napoleon as commander acted as a multiplier, estimated as 1.3. In other words, the presence of Napoleon was equivalent to the French having an extra 30 percent of troops... Most likely, all of these factors were operating together, and we cannot distinguish between them with data. We do know, however, that the presence of Napoleon had a measurable effect on the outcome” (Turchin 2007: 314-15).

The use of statistical methods was, in fact, popularized by Napoleon's teacher, Laplace, who did this by writing non-technical, popular essays on mathematical ideas. As Laplace's methods spread, scientists were fascinated by how much more predictable human actions become when viewed statistically; everything seemed to fall into accordance with a (statistical) natural law. To these scientists, as well as to many non-scientists, this pointed towards a concerning idea: that maybe humans don't have free will. Indeed, soon after Laplace, the field of social physics was proposed, a field that understood the regularities in things like suicide and crime rates as the products of social conditions. On this way of viewing things, criminals aren't doing things of their own free will; their actions are determined instead by their social conditions. Human actions, in other words, are just the product of natural law—not of human thoughts and desires. Stay tuned.

 

Carrying on
with the Cartesian project

“...Work on gravitation [by Newton, 1643-1727] presented mankind with a new world order, a universe controlled throughout by a few universal mathematical laws which in turn were derived from a common set of mathematically expressible physical principles. Here was a majestic scheme which embraced the fall of a stone, the tides of the oceans, the moon, the planets, the comets which seemed to sweep defiantly through the orderly system of planets, and the most distant stars. This view of the universe came to a world seeking to secure a new approach to truth and a body of sound truths which were to replace the already discredited doctrines of medieval culture. Thus it was bound to give rise to revolutionary systems of thought in almost all intellectual spheres. And it did...” (Kline 1985: 359; interpolation is mine).

When we last left off, we were exploring the ideas of Blaise Pascal, who died in 1662, and now we find ourselves suddenly catapulted to the early 19th century. Why? Well, the main reason is that we are attempting to salvage the Cartesian project. If we can somehow establish God's existence, as Descartes attempted, then perhaps we can defend foundationalism over Locke's indirect realism and Bacon-inspired positivism. However, the most pressing threat to belief in the existence of God comes from the problem of evil argument. And so, before advancing, we must solve the problem of evil. One frequently suggested solution to the problem of evil is the free will solution, the view that it is human free will that causes unnecessary suffering in the world. Although this solution doesn't really account for suffering caused by, for example, natural disasters—since it is nowadays unreasonable to assume that human actions cause, say, volcanic eruptions—there is a lot of suffering that is definitely caused by human actions. So the free will solution should be explored, and this is why we have jumped forward to the 19th century: a very credible threat to the free will solution manifests itself in 1814.

Before discussing the threat to free will, however, allow me to give you one bit of context. Now, in the 18th century, we find ourselves in a period called the Enlightenment. Just when this era began is disputed. Some argue that it begins with Descartes' writings, since Descartes was the first to unambiguously formulate the principle that is now known as Newton's first law of motion. Many (if not most) instead claim that the Enlightenment began when Isaac Newton published his Principia Mathematica in 1687. Regardless of when it started, this intellectual movement further weakened an already reeling Catholic Church, undermined the authority of the monarchy and the notion of the divine right of kings, and paved the way for the political revolutions of the late 1700s, including the American and French revolutions. Right in the middle of this intellectual upheaval came a man named Laplace.

Pierre-Simon Laplace (1749-1827) was a French polymath who made sizable contributions to mathematical physics, probability theory, and other subfields. He even suggested that there could be massive stars whose gravity is so great that not even light could escape from their surface. This is, of course, a black hole. In any event, we come to his work for one very important reason. In 1814, in a work titled A Philosophical Essay on Probabilities, Laplace wrote the following:

"We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes" (p. 4).

 

Pierre-Simon Laplace (1749-1827).

Two important ideas are found in this passage. The first is the unambiguous articulation of determinism, the view that all events are caused by prior events in conjunction with the laws of nature; i.e., the view that all events are forced upon us by past events plus the laws of physics. The second is the introduction of the idea that some sufficiently powerful intellect could predict and retrodict everything that will happen and everything that has happened. This intellect would, in fact, know every single action you'll ever take. This is Laplace's demon.
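To see what determinism amounts to in practice, here is a minimal sketch (the numbers and the law of motion are invented stand-ins, not Laplace's own mathematics): a fixed law applied to a fixed initial state yields the same future every single time, and running the law backwards recovers the past.

```python
# A toy deterministic "universe": one particle under constant acceleration.

def step(state, dt, g=-9.8):
    """Advance (position, velocity) by dt under a fixed law of motion."""
    x, v = state
    return (x + v * dt + 0.5 * g * dt**2, v + g * dt)

def evolve(state, dt, n):
    """Apply the law n times; a negative dt 'retrodicts' earlier states."""
    for _ in range(n):
        state = step(state, dt)
    return state

initial = (0.0, 20.0)               # the "present state of the universe"
future = evolve(initial, 0.1, 50)   # prediction: identical on every run
past = evolve(future, -0.1, 50)     # retrodiction: recovers the initial state (up to rounding)
print(future, past)                 # no chanciness enters anywhere
```

Nothing in the sketch depends on an observer or a chooser: given the state and the law, the future is fixed, which is exactly the feature that seems to leave no room for choices that come from us.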

Just as God's omniscience seemed to threaten free will, it now seems like the very nature of causation—the relationship between cause and effect—leaves no room for human free will. It's even more complicated, though. Whereas not everyone believes in God, it does appear that all rational persons must believe that there is a natural order to the universe; in other words, there is a "way the universe works". If this "way the universe works" is in fact captured by the notion of determinism, then there appears to be no way to defend the view that humans have free will. This is because if every event that ever happened had to happen, then this includes your choices. This means that your choices didn't really come from you; they are just the result of the laws of nature working on the initial state of the universe. Free will is an illusion.2

However, it certainly seems like free will exists. It certainly seems like when I go to the ice cream shop, the act of choosing mint chocolate chip comes from me—not from the Big Bang and the laws of nature.

Moreover, at least at first glance, it really does seem like human free will and determinism are incompatible. In other words, it does appear to be the case that if determinism is true, then human free will is an illusion. Alternatively, it seems that if humans do have free will, then determinism must be false (because at least some events, i.e., human choices, are not determined by the laws of nature and the initial state of the universe).

Our intuitions about the nature of the universe and our capacity to make free choices appear to be at odds with each other. This is, in a nutshell, the next problem we must tackle. This, then, is dilemma #4: Do we have free will?

 

Napoleon invades Russia

 

 

Important Concepts

 

Decoding Laplace's Demon

 

 

 

Question:
Is determinism true?

“The first flowering of modern physical science reached its culmination in 1687 with the publication of Isaac Newton’s Principia, thereafter mechanics was established as a mature discipline capable of describing the motions of particles in ways that were clear and deterministic. So complete did this new science seem to be that by the end of the 18th century the greatest of Newton’s successors, Pierre Simon Laplace, could make his celebrated assertion that a being equipped with unlimited calculating powers and given complete knowledge of the dispositions of all particles at some instant of time could use Newton’s equations to predict the future and to retrodict, with equal certainty, the past of the whole universe. In fact, this rather chilling mechanistic claim always had a strong suspicion of hubris about it” (Polkinghorne 2002: 1-2).

Non-Euclidean geometry

Although determinism was not the first threat to free will—recall the problem of divine foreknowledge—it did carry a lot of weight in the scientific community. As such, it was a threat that had to be dealt with. Much ink has been spilled on this topic. Alas, confidence in the truth of determinism began to wane in two distinct time periods.

Projecting a sphere to a plane.

First, in the same decade in which Napoleon was being dethroned (for the second time), two mathematicians (Carl Friedrich Gauss and Ferdinand Karl Schweikart) were independently working on something that would eventually be called non-Euclidean geometry. Neither published his results, but both continued to work on this new type of geometry. Several other famous mathematicians entered the picture, and by the 1850s there were several kinds of non-Euclidean geometry, including Bolyai-Lobachevskian geometry and Riemannian geometry. In all honesty, it is hard to convey the shockwave that non-Euclidean geometry sent through scientific circles (although this video might help). Instead, I'll just say that the discovery of non-Euclidean geometries forced many to rethink their understanding of nature. Here's how Morris Kline summarizes the event:

“In view of the role which mathematics plays in science and the implications of scientific knowledge for all of our beliefs, revolutionary changes in man’s understanding of the nature of mathematics could not but mean revolutionary changes in his understanding of science, doctrines of philosophy, religious and ethical beliefs, and, in fact, all intellectual disciplines... The creation of non-Euclidean geometry affected scientific thought in two ways. First of all, the major facts of mathematics, i.e., the axioms and theorems about triangles, squares, circles, and other common figures, are used repeatedly in scientific work and had been for centuries accepted as truths—indeed, as the most accessible truths. Since these facts could no longer be regarded as truths, all conclusions of science which depended upon strictly mathematical theorems also ceased to be truths... Secondly, the debacle in mathematics led scientists to question whether man could ever hope to find a true scientific theory. The Greek and Newtonian views put man in the role of one who merely uncovers the design already incorporated in nature. However, scientists have been obliged to recast their goals. They now believe that the mathematical laws they seek are merely approximate descriptions and, however accurate, no more than man’s way of understanding and viewing nature... The most majestic development of the 17th and 18th centuries, Newtonian mechanics, fostered and supported the view that the world is designed and determined in accordance with mathematical laws... But once non-Euclidean geometry destroyed the belief in mathematical truth and revealed that science offered merely theories about how nature might behave, the strongest reason for belief in determinism was shattered” (Kline 1967: 474-475).

Here's the way I like to summarize things. Mathematics had seemed to intellectuals like a gateway to the realm of truth, like an undeniable fact of reason, like something that could be known with certainty. But further developments in mathematics made the whole enterprise seem more like a set of systems that we invent, and we invent them in ways that are useful to us. So, some began to believe that it's not that mathematics is a gateway to undeniable truth; it's that we invent mathematics in ways that are so useful that it merely seems like a gateway to undeniable truth. For this reason, the idea that all physical events are governed by natural laws that could be expressed mathematically, i.e., determinism, just didn't seem like objective fact anymore. Physicists had simply constructed theories and mathematical formulations that made it seem that way.

 

Quantum mechanics

The second event that undermined determinism was the advent of quantum mechanics (QM). QM is even more complicated than non-Euclidean geometry, but DeWitt (2018, chapter 26) helpfully distinguishes three separate topics of discussion relating to QM:

  1. Quantum facts, i.e., experimental results involving quantum entities, such as photons, electrons, etc.
  2. Quantum theory, i.e., the mathematical theory that explains quantum facts.
  3. Interpretations of quantum theory, i.e., philosophical theories about what sort of reality quantum theory suggests.

I cannot possibly explain the details of QM here. What is most relevant is the following. First, there's this quantum fact: electrons and photons behave like waves, unless they are observed, in which case they behave like particles. This is called the observer effect. Also of relevance is the type of mathematics involved in QM. The mathematics used in predicting the movement of a cannonball is particle mathematics, the mathematics of discrete bodies; this is the kind of mathematics that Galileo and Descartes would've been familiar with. The mathematics used for quantum theory is wave mathematics. If this is all too much, don't worry. This is a puzzle that is still unresolved today. To understand why, please watch this cheesy but helpful video:

 

 

Why is this relevant? QM also undermines determinism. Physicist Brian Greene explains:

“We have seen that Heisenberg’s Uncertainty Principle undercuts Laplacian determinism because we fundamentally cannot know the precise positions and velocities of the constituents of the universe. Instead, these classical properties are replaced by quantum wave functions, which tell us only the probability that any given particle is here or there, or that it has this or that velocity” (Greene 2000: 341, emphasis added; see also Holt 2019, chapter 18).

In short, the dream of being able to predict future states of affairs with certainty, even given perfect knowledge, evaporates. This is because it looks like reality, whatever it is, has "chanciness" built into it. In other words, it looks like, try as we might, deterministic predictions are not possible when it comes to the tiny stuff that we're all made out of.
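To contrast this with the deterministic sketch given earlier in this lesson in the crudest possible terms, here is a toy illustration (the numbers are invented and this is not real quantum theory): the "law" now delivers only probabilities, so identical setups yield different individual outcomes, and only the statistics are predictable.

```python
# A toy indeterministic "measurement"; the probabilities are made up.
import random

# A wave-function-style description: probabilities for where a particle
# will be found if measured.
probabilities = {"left": 0.25, "middle": 0.5, "right": 0.25}

def measure(probs):
    """Sample a single outcome according to the given probabilities."""
    return random.choices(list(probs), weights=list(probs.values()))[0]

# Identical preparation every time...
outcomes = [measure(probabilities) for _ in range(10_000)]

# ...but only the frequencies, not any individual result, are predictable.
for place in probabilities:
    print(place, outcomes.count(place) / len(outcomes))
```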

 

And it doesn't even matter...

If you are rejoicing over the apparent downfall of determinism, you should hang on for a second. The version of causation that replaces determinism is strange, very open to interpretation, and seemingly random. Many thinkers feel that, whether it be determinism or quantum indeterminism, there is no theory of causation that leaves room for human free will.

The Dilemma of Determinism

  1. If determinism is true, then our choices are determined by factors over which we have no control.
  2. If indeterminism is true, then every choice is actually just a chance, random occurrence; i.e., not free will.
  3. But either determinism is true or indeterminism is true.
  4. Therefore, either our choices are determined or they are a chance occurrence; and neither of those is free will.

I close with an ominous quote:

“The electrochemical brain processes that result in murder are either deterministic or random or a combination of both. But they are never free. For example, when a neuron fires an electric charge, this either may be a deterministic reaction to external stimuli or it might be the outcome of a random event, such as the spontaneous decomposition of a radioactive atom. Neither option leaves any room for free will. Decisions reached through a chain reaction of biochemical events, each determined by a previous event, are certainly not free. Decisions resulting from random subatomic accidents aren’t free either; they are just random” (Harari 2017: 284).

 

 

 

To be continued...

FYI

Suggested Reading: A.J. Ayer, Freedom and Necessity

TL;DR: Crash Course, Determinism vs Free Will

Link: Student Health Center Info and Link

Supplementary Material—

Related Material—

Advanced Material—

 

Footnotes

1. Bevin Alexander makes two points that are of interest here. First, he claims that Carl von Clausewitz, a general and military theorist, completely misunderstood the main lesson of Napoleon's victories. This is important because Clausewitz's work was essentially the "bible" for German strategy during WWI, and this explains why the Germans used wrongheaded tactics during that conflict. Second, he points out that Napoleon didn't conceive of any of the strategies that he employed; rather, Napoleon combined the strategies of many military theorists who came before him. This reminds me a bit of the technology company Apple. Apple didn't invent GPS, or the touch screen, or the internet, or any of the features that make a phone "smart". Apple merely integrated all these technologies (see chapter 4 of Mariana Mazzucato's 2015 The Entrepreneurial State).

2. This is not the first time in history that the reality of free will has been called into question. Two thousand years earlier, philosophers were already doubting our capacity to make free choices. “Epicurus [341-270 BCE] was the originator of the freewill controversy, and that it was only taken up with enthusiasm among the Stoics by Chrysippus [279-206 BCE], the third head of the school” (Huby 1967: 358).

 

 

The Union Betwixt

 

Somos nuestra memoria,
somos ese quimérico museo de formas inconstantes,
ese montón de espejos rotos.

[We are our memory,
we are that impossible museum of shapes that constantly shift,
that pile of broken mirrors.]

~Jorge Luis Borges1

 

The Illusion of Conscious Will

Last lesson, we learned that new theories about how the universe works (determinism, indeterminism) have led some philosophers (and physicists, psychologists, neuroscientists, etc.) to question human free will. But the attacks don't only come from physics. One of the most forceful recent arguments against free will comes from psychologist Daniel Wegner. His book The Illusion of Conscious Will, originally published in 2002, was recently re-published (2018). In it, he gives a sophisticated account of how the brain produces our choices, as well as the subjective experience of choosing. His argument is complicated and surveys experimental data from many domains of psychology, so I can't give a summary that does justice to his writing. What I can say is the following. First, he clearly sides with the compatibilists. He believes that the will is "not some cause or force or motor in a person but rather is the personal conscious feeling of such causing, forcing, or motoring" (p. 3; emphasis added). He goes further and claims that if one is looking for the cause of our actions, then one will not find what the libertarian is looking for.

Wegner's apparent mental causation model.

Wegner believes that there are multiple systems associated with the production of an action. One psychological system "presents the idea of a voluntary action to consciousness and also produces the action" (p. 57). But this whole system operates in a non-conscious way, with the "self" playing no causal role. All our "self" is aware of is the idea of a voluntary action (which it didn't produce but passively received) and the perception of its body engaging in some behavior (which it also didn't produce but merely passively received). The interested student can see a diagram of Wegner's model pictured right (this model can be found in Wegner 2018: 63).

Wegner's model is intuitively plausible if one admits that the human brain is essentially an engine for generating predictions—that is its evolutionary function. And so, the system that produces the action (or some other related system) also produces the thought of performing that action and presents it to your consciousness. This is, in effect, the system's prediction of what the action will look like. And it is this coupling of the prediction of what the action will look like with the decision to engage in the action—neither of which came from the "self"—that gives the "self", the seat of consciousness, the feeling that it is in control. It feels like we intended to perform the behavior, and this intention makes us assume that the self is the cause of the action. But this little causal story is just a confabulation, or fabrication.

"The experience of will could be a result of the same mental processes that people use in the perception of causality more generally. The theory of apparent mental causation, then, is this: People experience conscious will when they interpret their own thought as the cause of their action. This means that people experience conscious will quite independently of any actual causal connection between their thoughts and their actions" (Wegner 2018: 60).

 

The Great Infidel

One of the most popular defenders of compatibilism was the Scottish philosopher David Hume. Hume began his philosophical career with the publication of his A Treatise of Human Nature, published in three volumes from 1739-1740. In this work, Hume sought to establish a brand new science of human nature that would undergird all other sciences, since—Hume reasoned—all other sciences rely on human cognition as part of their investigations. In other words, since all science begins with human experience and cognition, it is important to get our ideas straight about these first. But Hume makes the case that reason cannot truly take us to a complete knowledge of the world, as René Descartes famously believed. Rather, he believed that the experimental method is the only appropriate way. In other words, he made clear that sensory experience is at the root of all knowledge. In this belief, Hume himself acknowledged that he was not the first, counting John Locke, Bernard Mandeville, and Joseph Butler as predecessors. But Hume took it far further than they did.

Rasmussen's book on David Hume and Adam Smith

Hume concluded that if we reject the idea of courageous reasoning (that reason can take us to fundamental truth about reality, morality, etc.), then it turns out we can know very little about the world and ourselves with certainty. Hume's conclusions must've been extremely disconcerting to readers in the 18th century. For example, here are some things that we cannot be certain about, per Hume: the reality of the external world, the constancy and permanence of the self, and the reality of the laws of causation (cause and effect). In early drafts of the Treatise, Hume even had sections denying the existence of souls and arguing against the reality of miracles—sections he had to omit for fear of repercussions. In the end, Hume concluded, all that reason can come to know on its own are mathematical propositions and the axioms of pure logic.

The great diminution of the role of reason in Hume’s system correlates with an expansion of the roles of custom, habit, the passions (what today we would call emotion), and the imagination. In other words, what some philosophers explain through reason, Hume explains through the mundane habits of culture. Moreover, since Hume doesn’t include the supernatural as part of his explanatory scheme, the work is purely secular. Thus, Hume is implicitly making the case that God isn’t necessary when explaining human nature—an invitation to accusations of atheism that would cause trouble for Hume throughout his life.

In volume 3 of the Treatise, which was the volume that was added later, Hume gives his views on morality (also without need of God). Virtues are merely those character traits that we collectively have deemed to have utility (i.e., usefulness) in society and in our interpersonal relationships. We are predisposed somehow to find these agreeable, and to find vices disagreeable—an extremely interesting argument given that this is all before Darwin's theory of evolution. It is our passions that flare up when they perceive virtue or vice, feeling approbation (i.e., a feeling of approval) for the former and disapprobation for the latter.

Hume's Enquiry

Unfortunately for Hume, the Treatise fell “dead-born from the press”, failing to secure commercial success. Hume then became a tutor and later a secretary to a distant relative during a military campaign. He then returned to Ninewells, his family's estate, and began working on the Enquiry Concerning Human Understanding, a rewrite of Book 1 of the Treatise. Whereas Hume says he deliberately “castrated” the Treatise, ridding it of its most controversial sections, he left them in the Enquiry Concerning Human Understanding. And so we know his views on, for example, miracles. Hume also delivers his objection to intelligent design in this first Enquiry. Put simply, Hume says we cannot rationally infer from the imperfect nature of our world that there is a perfectly knowledgeable and loving creator—the lamentable aspects of this world are too numerous to justify this inference. Moreover, even if we did infer some intelligent designer, there is no way to move from this belief to other religious doctrines, such as the existence of heaven and hell. Again, a skeptic until the end, Hume does not argue that God and heaven don't exist; just that you can't know that they do, if they exist. They are untestable and, hence, useless hypotheses.

As is plain to see, if Hume wasn't a downright atheist, he was very close. And this is the intellectual framework into which his compatibilism fits. He saw free will as merely a harmony between our desires and our actions. There is no special causal power to free will, as the libertarians claim. It's just the sense that we are able to do what we want to do. Here it is in his own words:

"By liberty, then we can only mean a power of acting or not acting, according to the determinations of the will; that is, if we choose to remain at rest, we may; if we choose to move, we also may. Now this hypothetical liberty is universally allowed to belong to every one who is not a prisoner and in chains."

In short, as long as you can act on your desires, you have (compatibilist) free will, even if you don't choose your desires...

 

Decoding Compatibilism

 

 

Food for thought...

 

 

Summary of why compatibilism isn't what we want

Here are, once again, the three reasons why compatibilism isn’t a good option for us, given the goal of this course: escaping skepticism.

  1. First off, although compatibilism goes back to the Greek Stoics, the view was made famous by the 18th-century Scottish philosopher David Hume. This might not initially seem like a very good reason for rejecting compatibilism, but it will seem stronger after a bit of context. Hume provided some of the most forceful arguments in the history of philosophy for various skeptical conclusions. More importantly, compatibilism is part of his worldview—a worldview that challenged the Cartesian worldview at basically every turn. As a matter of fact, the way in which I have defined compatibilist free will is actually Hume's conception of free will, which is why I will refer to it as Humean-style free will from now on. In his A Treatise of Human Nature, Hume builds his arguments against several elements of the Cartesian worldview that we are currently trying to defend, such as deductive certainty about the world and the claim that reason is the foundation of all knowledge (a view otherwise known as rationalism). In short, if we are trying to see whether the Cartesian project can be successful, then Hume is not an ally. He is not only "in the other camp"; he is perhaps the most formidable foe on the enemy's side. Any view that Hume accepts and defends, a Cartesian—which is the position we are assuming for now—should be wary of.

  2. If free will is construed in the way that the compatibilists conceive of it, then there might be some counterintuitive implications—as we learned in the Food for Thought. Suppose, for example, that I found a way to remote control my friend Josh Casper into robbing a bank. Moreover, the way that I did this is by activating certain clusters of neurons in his brain such that he really wanted to rob the bank. In this case, Josh's desires and actions would be in alignment. In fact, it would be his desires which caused his behavior. To a compatibilist, this would be free will! However, although it's not clear that many would find Josh fully morally culpable—in other words, you probably wouldn't want to prosecute him very harshly in a court of law—it really doesn't seem like we should call this free will. There's something funny about it.

  3. Lastly, it's not clear that compatibilism actually solves the problem of evil. This is because what is needed to solve the problem of evil is to resolve the tension between the perceived unnecessary suffering in the world and the existence of an all-powerful, all-loving, all-knowing god who presumably would not allow unnecessary suffering to exist. If we try to say that unnecessary suffering is caused by human free will, then we need the kind of human free will that is actually causally efficacious. In other words, we need the kind of free will where humans are actually causing suffering in the world through their non-determined actions (libertarianism), not just having the subjective feeling like they are making choices which are causing suffering (compatibilism).

 

 

 

 

To be continued...

FYI

Suggested Viewing: Think 101, Know Thyself?

Supplementary Material—

Related Material—

Advanced Material—

 

Footnotes

1. Translation by instructor, R.C.M. García.

 

 

One Possibility Remains…

 

“Without metaphysical freedom, the universe is just a divine puppet show. If there is to be any real creaturely goodness, any new and creative act of love, rather than the merely mechanical uncoiling of a wind-up universe, if there are to be any real decisions other than those made in the divine will, then there must be metaphysical freedom, and such freedom brings with it the possibility of evil as well as the promise of goodness.”

~Augustine of Hippo

 

Will the real free will please stand up?

The basic distinction between compatibilism (the focus of our last lesson) and libertarianism (the focus of the present lesson) is this: for compatibilists, it is sufficient for free will that one's desires, choices, and actions are in alignment (even if they're all determined by something else); for libertarians, free will requires that you actually cause things to happen—things that wouldn't have otherwise happened. This can get a little confusing. So, perhaps we should think of compatibilism and libertarianism as positing different types of free will. Compatibilist free will is the kind of free will that you have whenever you act on your desires. The philosopher Mark Balaguer (2014: 50-52) argues that it's obvious we have this Humean-style compatibilist free will. Clearly, every adult human has had a desire and then acted on that desire. In other words, we've all wanted a cookie, chosen to eat the cookie, and then eaten the cookie. That means that everyone has compatibilist free will. The metaphysically interesting question, then, is whether or not we also have libertarian-style free will. That is, we want to know whether we can do more than just act on our desires; we want to know if we can cause things to happen that wouldn't have otherwise happened. The question is: Can we control our desires and truly make ourselves? When framed this way, libertarianism sounds like the "real" free will. Here is Balaguer on the subject:

“The eighteenth-century German philosopher Immanuel Kant called Humean compatibilism ‘petty word jugglery’ and a ‘wretched subterfuge’... And the nineteenth-century American philosopher William James said this: '[Compatibilism] is a quagmire of evasion under which the real issue of fact has been entirely smothered... No matter what the [compatibilist] means by [free will]... there is a problem, an issue of fact and not of words'... These are strong words. But notice that Kant and James are not saying that compatibilism is false. They’re saying it’s irrelevant. They’re saying that compatibilists are just playing around with words and evading the real issue... And that’s exactly what I’m saying” (Balaguer 2014: 53-4).

I'm not entirely sure that talking about the "real free will" is ultimately accurate, but I do think that it's helpful to distinguish these two types of free will in this way. Libertarian free will is, in a sense, special. It is, in a nutshell, the power to cause things. In this context, it is clear that when we use the notion of free will as a solution to the problem of evil, we are using it in the libertarian sense, not the compatibilist one. Thus, the question of whether or not we have libertarian free will is the important one. Our solution to the problem of evil hinges on the truth of libertarianism about free will. 

Decoding Libertarianism

 

Soon to be forgotten?

 

Lisa Feldman Barrett.

When libertarians, like Balaguer, accuse the compatibilists of just playing around with words, it makes it seem like libertarianism is the more dignified and serious approach. However, for all the clarity of their language, libertarians have often been accused of incoherence, as we saw in the last lesson.1 For example, what even is a non-determined choice? What does it mean to "make yourself"? Is there any view of causation that even allows for these libertarian choices? The truth is that, even if the idea of libertarian free will can be made coherent, it's not at all obvious that we have it, as Balaguer (2012) speculates in his Free Will as an Open Scientific Problem.

We've already seen that views that have been around for millennia can be discarded. As we speak, there are movements in several fields attempting to dethrone established theories. In the neuroscience of emotions, for example, the classical theory of emotion (the idea that the same basic emotions are built into all humans and that each emotion has a distinct pattern of physical changes in the face, body, and brain) is being challenged by the construction theory of emotion. Lisa Feldman Barrett (2017) argues instead that emotions are concepts that we learn from our caregivers and society, and that these emotion concepts can be realized in multiple ways by the brain (using different combinations of neurons). This is contrary to the classical view, which would suppose one dedicated neural structure for each emotion. More importantly, the findings so far seem to be on the side of the construction theory of emotion. We'll see what happens.

By the way, Barrett even calls into question the institution of trial by jury. Her basic point is that emotion isn't something that we recognize in someone, but something that we project onto them. Moreover, we are actually pretty bad at correctly inferring what someone's emotional state is. We are especially bad if they are of a different gender and/or race—which shouldn't be too surprising. This means that, in a court of law, juries are not very good at assessing key aspects of the defendant's emotional disposition, such as whether or not they are remorseful. Clearly, then, the perceptions and misperceptions of a jury can affect things like the verdict and the sentencing process. Barrett suggests that, since the Founding Fathers didn't know anything about the 21st-century neuroscience of emotion, they got key aspects of the jury system wrong. And so, the whole institution needs to be reformed (see Barrett 2017, chapter 11).

Along with the classical theory of emotion and trial by jury, might we see democracy go by the wayside too? Some are becoming less confident in the electorate's capacity to make good decisions about their elected officials. Brennan (2017), for example, makes the case that we should abandon democracy and replace it with an epistocracy, or rule by the learned. The basic idea is that voting would be a privilege awarded only to those who have proven that they know basic things about American government—which, unfortunately, many people don't. Consider the following:

 

 

Even one of my favorite broadcasters, Dan Carlin, in a show titled Steering into the Iceberg, suggested that people who believe in too many conspiracy theories shouldn't be allowed to vote. This is surprising coming from someone who refers to himself as "a real 'We the people' kinda guy". What will become of democracy?

Will something similar happen with the idea of human free will? Some scientists think so. Robert Sapolsky, a neuroendocrinologist at Stanford University, believes that our notion of free will is outdated and that the criminal justice system needs to be reformed. The people who commit crimes really weren't able to help themselves. He grants that we should sequester those who are dangerous and make sure they can't harm people, but he argues that it's not right to put them in inhumane prisons the way we currently do.

“People in the future will look back at us as we do at purveyors of leeches and bloodletting and trepanation, as we look back at the fifteenth-century experts who spent their days condemning witches... those people in the future will consider us and think, ‘My God, the things they didn’t know then. The harm that they did’” (Sapolsky 2018: 608).

 

 

 

Executive Summary

  • The classical problem of free will was generated as Newtonian mechanics grew dominant and the belief in determinism grew common in intellectual circles.

  • There are three views related to the problem of free will:

    • hard determinists, who claim that human free will and determinism are incompatible, accept the truth of determinism, and, thus, deny that humans have free will;
    • libertarians, who claim that some human actions are not determined and thereby deny the truth of determinism; and
    • compatibilists, who claim that human free will and determinism are not only compatible, but that some sort of determinism is actually required for real human free will.
  • Determinism came to be questioned at the turn of the 20th century with the dawn of quantum mechanics.

  • Quantum mechanics, however, does not itself seem to be compatible with libertarian free will.

 

Footnote

1. Balaguer even titles one of his papers "A Coherent, Naturalistic, and Plausible Formulation of Libertarian Free Will". The adjective coherent wouldn't be necessary if there weren't a widespread assumption that libertarianism is incoherent. In fact, about 60% of professional philosophers are compatibilists according to a recent survey (see Bourget and Chalmers 2014). That same survey showed that the pairing of belief in libertarian free will with belief in God was among the ten highest correlations between views (see Table 6). This raises the possibility that some philosophers only believe in libertarian free will because they need it to escape the conclusion of the problem of evil argument—a case of motivated reasoning. To further support the idea that this might be motivated reasoning, consider the following. About three-quarters of professional philosophers are atheists, and theism is strongly associated with specializing in Philosophy of Religion (see Bourget and Chalmers 2014, section 3.3). In fact, the combination of theism and specializing in Philosophy of Religion is the highest correlation between a particular view and a particular area of specialization (see Table 10). It is definitely a possibility that theists only believe in libertarian free will because they have to in order to rescue their theistic beliefs.

 

The Thing-In-Itself

 

Enlightenment is man’s emergence from his self-imposed immaturity. Immaturity is the inability to use one’s understanding without guidance from another. This immaturity is self-imposed when its cause lies not in lack of understanding, but in lack of resolve and courage to use it without guidance from another.

~Immanuel Kant

 

Age of Enlightenment: 1680-1790

The goal of the preceding lessons was to give you a taste of what is sometimes called the "crisis of the Enlightenment." What I'd like to do now is to give you a general overview of the Enlightenment, as well as say a few things about humanism. Then I'll discuss this "crisis" that I'm trying to get you to feel.

Although it is very difficult to summarize in a few paragraphs what the Enlightenment was all about, I can give you a few tidbits (see Robertson 2020). In general, this time period is characterized by a move away from superstition and toward beliefs that can be rationally defended, preferably with physical evidence and air-tight reasoning. So, instead of believing, for example, that lightning was a sign of God's anger—a belief that was fairly widespread and can be found in the writings of, among others, Martin Luther—it came to be understood as a natural phenomenon. Lightning, in other words, came to be seen as just a giant spark of electricity moving between two electrically charged regions of the atmosphere. In fact, one famous enlightener, Benjamin Franklin, demonstrated this with a (dangerous) experiment.

Benjamin Franklin and Kite

Another development of the time period is a renewed interest in the pursuit of happiness. This was, perhaps surprisingly, initiated by the re-discovery of a poem by Lucretius from the first century BCE. In it, Lucretius defended a view known as Epicureanism, a school of thought that includes belief in hedonism: the view that happiness is the only intrinsic good. This means that happiness is the only good that is good for its own sake. All other goods, in other words, are good because they bring you happiness. So, as Lucretius’ defense of Epicureanism began to take hold of European minds, thinkers moved toward the notion that the aim of life is happiness. That is, there was a resurgence of eudaemonism: the pursuit of a philosophy of life aimed at human flourishing. This led to all kinds of intellectual activity. Ways to reintegrate Christianity and hedonism were sought out. Governments, rather than merely being the means by which to preserve social order, were coming to be seen as the instruments through which widespread happiness was to be achieved.

Also during the Enlightenment, different conceptions of the power of reason were tried out. The rationalists, for example, attempted a geometric approach, where truths are built upon axioms (see Rationalism v. Empiricism). Others, following Niccolò Machiavelli, sought a system for effective statecraft: reason of state. Rationalism eventually gave way, albeit slowly, to empiricism. The empiricists made use of probabilistic reasoning, accepting that we can never experience the true essences of things, only their surfaces, and thus that our representations of the world are only approximate. There were also those, such as the philosophes, who advocated a general philosophically-minded way of thinking, one that required regular intellectual dialogue and reading. Finally, Kant, the subject of today's lesson, argued that enlightenment meant to stop being immature and to have the courage to reason for ourselves. And courageous reasoning is exactly what he engaged in.

 

What humanism is

Fountain, by Marcel Duchamp.

The Enlightenment is perhaps also the apex of the influence of humanism. Before humanism, religion and supernatural beings gave meaning and order to the cosmos. Now, humans do. In fact, this is one way to define humanism. In chapter 7 of Homo Deus, Yuval Noah Harari argues that humanism means that humans are the ultimate arbiter of meaning. To make his point, Harari discusses the radical shift of perspective and subject matter in art. It is no secret that religious expression used to dominate all art forms: painting, music, etc. Now, however, what counts as art is a purely secular matter. In other words, it's ultimately up to humans. Consider the sculpture pictured right. Is it art? Really, that's for humans to debate and decide. And this notion—that humans can give value to something merely by deciding that it has value—was inconceivable prior to humanism.

A further example is how we characterize war. Before humanism, war was seen "from above." The justification for armed conflict was divine, the soldiers were faceless, and the general was a genius. If one army defeated another, the general of the victorious army would claim that it was God's will that they won. (Otherwise, why would they have won?) This is why Genghis Khan called himself "the punishment of God". In this quote from the Great Khan, you can see how supernatural beliefs were woven even into the explanations for why one group defeats another: "I am the punishment of God. If you had not committed great sins, God would not have sent a punishment like me upon you." Even the losers in war sought answers from the divine. When the Mongols captured Baghdad in 1258, religious leaders found themselves asking why their people had lost favor with Allah. (Why had Allah abandoned them?) But now, after humanism, portrayals of war revolve around the individual soldier and their loss of innocence. Think of movies like Full Metal Jacket. We support the troops, not just the general. We critique our politicians if they send our young ones to die for no good reason. The perspective has clearly shifted.

It is, I think, impossible to put yourself into the frame of mind of someone prior to humanism. But perhaps you can approximate the thinking of someone in the transitional period. Perhaps you find meaning in the inquiries of science, as many did during this time period. This was also a period of rising literacy rates and the proliferation of non-religious literature, like novels (see Hunt 2007). If you find reading this kind of literature liberating, then you are approaching this way of thinking. In other words, you are approaching the worldview of Immanuel Kant.

The Crisis

Kant is best understood in the context of the crisis of the Enlightenment. This crisis was caused by a newfound confidence in reason, beginning with the groundbreaking work of Isaac Newton. Through the scientific method, thinkers were gaining a bewildering amount of knowledge about the natural world. Moreover, this new information conflicted with older views about humans and their planet. It conflicted, for example, with some of the dates given in the Bible. There were also the beginnings of what today we would call biblical criticism. For example, Hermann Samuel Reimarus, albeit a believer, went to town on the Old Testament. For one, he made estimates about how many Israelites crossed the Red Sea during their escape from Egypt (based on numbers given in the Bible) and inferred that it would be impossible for that number of people to cross that distance in one night. Newton himself engaged in biblical criticism. Although he never admitted it during his lifetime, it was eventually discovered that he did not believe in the doctrine of the Holy Trinity. And he argued for this view (in his private writings) using evidence and reason. He compared every available version of I John 5:7, the only biblical testimony to the Trinity, and argued that a later hand (he suspected St. Jerome) had deliberately corrupted the text to support the doctrine of the Trinity (the interpolated phrase is known as the Johannine Comma). All this was leading many to question various traditional authorities, such as the church clergy. These threats to tradition led to questions that we have covered in this course, including the matter of defining "knowledge", the question of whether or not God exists, the source of morality, and (of course) the problem of free will. Enter Kant.

 

 

Human Understanding

The Copernican Revolution (in perspective).

Prior to Kant's time, various thinkers had been trying to establish the foundations of the natural sciences: thinkers like Francis Bacon, René Descartes, John Locke, and Thomas Hobbes. Many found it unsatisfactory to justify science by saying simply that it works. As you might know, if all you know is that something works, but you don't know why it works, then you're going to be in a lot of trouble when it stops working. This is obviously because you'll have no idea how to fix it. And so thinkers were trying to establish the foundations of science for precisely this reason. But before even establishing these foundations, thinkers had to settle a more fundamental question: Do we perceive the world as it actually is? Each thinker had their own position, but Kant was not satisfied with their theories. In the Critique of Pure Reason, originally published in 1781, Kant engages in his Copernican revolution. Just as Copernicus explained the apparent movement of the other planets by taking into account that the Earth is simultaneously in its own orbit around the sun, Kant takes up the hypothesis that, when we perceive the objects in the world, our perceptual systems change them. In other words, Kant took into consideration how our cognitive systems give form to our perceptions, a form that isn't actually in the objects in the world.

This idea is profoundly modern. In his (1999) book The Number Sense, Dehaene gives an overview of the countless mathematicians who have wondered why mathematics is so apt for modeling the natural world. By this point, Dehaene has spent some 250 pages arguing that we have an innate module, programmed into us by evolution, that helps us visualize a number line and learn basic arithmetic concepts. And so Dehaene concludes that it is not the case that "mathematics is everywhere." Instead, we can't help but see mathematics everywhere. Our brains project a mathematical understanding onto the world.

"There is one instrument on which scientists rely so regularly that they sometimes forget its very existence: their own brain. The brain is not a logical, universal, and optimal machine. While evolution has endowed it with a special sensitivity to certain parameters useful to science, such as number, it has also made it particularly restive and inefficient in logic and in long series of calculations. It has biased it, finally, to project onto physical phenomena an anthropocentric framework that causes all of us to see evidence for design where only evolution and randomness are at work. Is the universe really 'written in mathematical language,' as Galileo contended? I am inclined to think instead that this is the only language with which we can try to read it" (Dehaene 1999: 252).

In the Critique, Kant wants to study the limits of abstract thought; in other words, he wants to know what he can know through reason alone. Can we know nothing of value from pure reason? Or can we discover fundamental reality with it, as Plato thought? It would be impossible for me to summarize Kant's most important argument in this work, the transcendental deduction, but I can give you his main conclusion: it is through human understanding that the laws of nature come to be. In other words, human understanding is the true law-giver of nature; it is through our cognitive systems, with their built-in ways of looking at the world, that we "formalize" the world and project onto it the laws of nature, very close to what Dehaene said above.

"Thus we ourselves bring into the appearances that order and regularity that we call nature, and moreover we would not be able to find it there if we, in the nature of our mind, had not originally put it there... The understanding is thus not merely a faculty for making rules through the comparison of the appearances; it is itself the legislation for nature, i.e. without understanding there would not be any nature at all" (Kant as quoted in Roecklein 2019: 108).

 

 

 

Kant's Metaphysics

 

 

 

To be continued...

 

FYI

Suggested Reading: Internet Encyclopedia of Philosophy, Immanuel Kant: Metaphysics

TL;DR: Philosophy Tube, Beginner's Guide to Kant's Metaphysics & Epistemology

Supplementary Material—

Advanced Material—

 

 

 

All Against All

 

A man always has two reasons for what he does—a good one and the real one.

~J. P. Morgan

 

The Puzzle of Prosociality

Cooperation is actually quite common in nature. This is perplexing to us today given that we are often taught that evolution means "survival of the fittest"; we're told that nature is red in tooth and claw. This conception of natural selection seems to be the exact antithesis of cooperation, and so it would seem that competition should be the norm and cooperation should be very rare. But this is not the case. Across the animal kingdom, animals live in both short-term cooperative groups (like herds, flocks of migrating birds, etc.) and long-term cooperative groups (troops of non-human primates, human societies). Cooperation is widespread (Rubenstein and Kealey 2010).

Evolutionary theory can account for cooperation in nature, as you will see below, but a naive conception of the theory will not lend itself to an understanding of the evolution of cooperation. And so, part of the reason why cooperation seems to be at odds with evolution is that evolution is poorly understood. There are many reasons for this. First off, and this shouldn't be surprising at this point, the teaching of evolutionary theory is often over-simplified in the USA during K-12 education. This is, at least in part, because evolutionary theory really is very complicated. It has been progressively mathematized since Darwin introduced it (Smith 1982), and there really are many conceptual issues—dare I say philosophical issues?—in biology that need to be worked out (Sober 1994). There's also the fact that, depressingly, as of 2019, 40% of Americans surveyed don't believe in evolution. And so, students often don't get a good grasp on evolutionary theory until college—if ever.

Even if one achieves a more nuanced understanding of evolutionary theory, however, the evolution of cooperation, also referred to as prosociality, is a complicated matter (Axelrod 1997). Cooperation arises for multiple reasons. For example, one reason for cooperation in nature is captured by the theory of reciprocal altruism (Trivers 1971). The theory is simple. Consider a small group of animals that has some natural predator. One animal, because he is perched in a good position to see what's going on, sees the group's predator approaching in the sky. This animal could alert his groupmates, but doing so would reveal his location to the predator. Should he take this risk, even if it's a small one, to alert his groupmates? Not alerting the group is in his own best interest in the short term. However, he runs the risk of not receiving help from his group in the future for failing to alert the group about the predator now. So in this situation, where social interaction is long-term and repeated, the individual does have an incentive to take a risk for the rest of the group. And so, he behaves cooperatively and alerts the group of the predator's approach. Obviously, the calculations are not linguistic or conscious in most animals; the behavior is programmed into their genes. That's reciprocal altruism.
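
The logic of repeated interaction can be made concrete with a toy model. Below is a minimal sketch (my own illustration, in the spirit of the iterated games Axelrod studied; the payoff numbers and strategy names are stipulations, not data from any cited source) showing that two reciprocators who trade favors over many rounds end up better off than two individuals who always refuse to help.

```python
# A toy model of reciprocal altruism as a repeated game (illustrative only).
# "C" = help a groupmate (e.g., give the alarm call), "D" = stay silent.

# Payoffs to the row player for a single interaction (assumed numbers):
# mutual help = 3, mutual refusal = 1, exploiting a helper = 5, being exploited = 0.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def always_defect(partner_history):
    """Never help, no matter what the partner has done."""
    return "D"

def tit_for_tat(partner_history):
    """Help on the first encounter, then copy the partner's last move."""
    return "C" if not partner_history else partner_history[-1]

def play(strategy_a, strategy_b, rounds=50):
    """Play repeated interactions and return the two cumulative payoffs."""
    seen_by_a, seen_by_b = [], []   # what each player has seen the other do
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

if __name__ == "__main__":
    print("Reciprocator vs reciprocator:", play(tit_for_tat, tit_for_tat))
    print("Defector vs defector:        ", play(always_defect, always_defect))
    print("Reciprocator vs defector:    ", play(tit_for_tat, always_defect))
```

If you run it, mutual reciprocity (150 points each over 50 rounds) beats mutual refusal (50 each), even though refusing to help is tempting in any single encounter. That gap is the engine behind reciprocal altruism.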

Honeypot ants.

Another form of cooperation is driven by the genetics of kin altruism. This is an evolutionary strategy that favors the reproductive success of one's relatives, and it can be seen in Hymenoptera (ants, bees, wasps), termites, and naked mole rats. In these creatures, the foundation of their ultrasocial cooperation is that colony members are all close relatives, and this can lead to surprisingly selfless behavior. For example, some members of one species of ant spend their lives hanging from the top of a tunnel offering their abdomens as food storage bags for the rest of the nest. This ultrasociality bred ultracooperation, which is what enables the massive division of labor seen in these species.
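
For those who want the textbook shorthand, biologists often compress the logic of kin altruism into a single inequality, Hamilton's rule. (The rule isn't discussed in the readings above, so treat this as a supplementary gloss.)

```latex
% Hamilton's rule: an altruistic act is favored by natural selection when
% relatedness (r) times the benefit to the recipient (b)
% exceeds the cost to the altruist (c).
\[
  r \, b > c
\]
% Example: full siblings share, on average, half their genes (r = 1/2),
% so a gene for helping a sibling can spread whenever the reproductive
% benefit to the sibling is more than twice the cost to the helper.
```

On this accounting, the honeypot worker hanging from the ceiling is still promoting copies of its own genes; the copies just happen to sit in its nestmates.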

Humans are also ultrasocial and ultracooperative, like ants. Unlike ants, though, we are not all relatives. Surely reciprocal altruism and kin selection play a role in our cooperative behavior, but there must be more to the explanation. Humans live in large-scale, hierarchical societies with a tremendous division of labor, and it doesn't seem like reciprocal altruism and kin selection alone can explain this. We love our social groups, and some willingly die for their groups. What is going on here?

“Humans invest time and effort in helping the needy within their community and make frequent anonymous donations to charities. They come to each other's aid in natural disasters. They respond to appeals to sacrifice themselves for their nation in wartime. And, they put their lives at risk by aiding complete strangers in emergency situations. The tendency to benefit others—not closely related—at the expense of oneself, which we refer to here as altruism or prosocial behavior, is one of the major puzzles in the behavioral sciences” (Van Vugt and Van Lange 2006: 237-8).

And so it seems that evolution can explain some forms of cooperation. But ultracooperation of the kind that humans engage in is more difficult to understand, and there are various proposed theories to explain this (e.g., Turchin 2015, Haidt 2012). However, it is unclear which theory is true. Call this the puzzle of prosociality.

 


 


Some students are initially unconvinced that humans are prosocial and cooperative at all. They argue that humans constantly go to war with each other, and that's not cooperation. War is actually an interesting example of the human capacity for ultrasociality. Just try to imagine one group of dogs getting together to go to war with another group of dogs: some dogs make weapons, others train to fight, others make plans, and still others work on all the necessary resources for waging war, like making uniforms and securing food and water supplies. The imagery here should make you smile. You can't even really imagine it. This is because the whole idea of organized warfare requires a psychological capacity that dogs don't have: one that turns the individual into a nameless soldier beholden to the orders of a general. Humans sacrifice their lives, and kill people they would otherwise never have met, for their group. In fact, waging war is one of the most groupish, as opposed to selfish, things that we do. Some (Turchin 2015) even think that war is what made the modern world. I won't get into Turchin's views here, but just think of this: human cooperation doesn't mean that everyone treats literally everyone with respect. It means that humans will sacrifice their own interests for the interests of their group. They'll even travel to faraway places and kill strangers for their group—true loyalty (or blind obedience?) to the group. More in the Storytime! below.

 


 

Storytime!

 

An Argument for Free Will via Moral Responsibility

In the last lesson, we saw the mounting threats to libertarian free will. We also saw that compatibilist free will is a non-starter if we want to solve the problem of evil. And so, in an attempt to preserve libertarian free will somehow, we turn to a discussion of morality. To many, the idea of morality seems real; it's not just an idea that humans made up. This is moral realism, also known as moral objectivism. This view, however, seems to only make sense if we have free will. And so, perhaps we can argue for libertarian free will by defending moral realism. Here's the argument:

  1. If humans do not have (libertarian) free will, then we cannot justifiably hold each other morally responsible for our morally wrong actions.
  2. But we do justifiably hold each other morally responsible for our morally wrong actions; i.e., some actions really are morally wrong and it is right to hold people accountable if they perform these actions.
  3. Therefore, it must be the case that we do have free will.
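
Before we evaluate this argument, it may help to see its logical skeleton laid bare. Here is one way to symbolize it, using F for "humans have libertarian free will" and M for "we justifiably hold each other morally responsible":

```latex
% Logical form of the argument above, with
%   F = "humans have libertarian free will"
%   M = "we justifiably hold each other morally responsible"
\[
\begin{array}{ll}
1. & \neg F \rightarrow \neg M \\
2. & M \\
\hline
3. & F
\end{array}
\]
% Line 3 follows from lines 1 and 2 by modus tollens: since M is true,
% the consequent (not-M) is false, so the antecedent (not-F) is false too.
```

The form is valid, so all the philosophical action is in the premises: premise 1 ties moral responsibility to libertarian free will, and premise 2 is, in effect, a statement of something like moral realism.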

Naturally, this argument relies on the truth of moral realism, or something like it. And so, we turn next to ethical theory, the subfield of philosophy concerned with arriving at a system for telling which actions are morally wrong and which are morally permissible.

 

 

Niccolò Machiavelli

 

The Afflicted City

We begin our survey of ethical theories in more or less the order in which they appeared historically. Although we'll be covering more modern versions of them, the inspiration for the two theories we are covering today is ancient. This is because thinkers far back in the Western tradition have been asking why we behave the way we do, why we build big cities and empires, why we sometimes work together and at other times slaughter each other. We know that Western thinkers had theories about this because the works of some very important thinkers have survived to tell us about them. One such thinker is Plato (circa 425 to 348 BCE).

Plato
Bust of Plato.

Plato wrote primarily in dialogue form. Typically his dialogues take the form of an at least partly fictionalized conversation between some ancient thinker and Plato's very famous teacher, Socrates. In Plato's masterwork, the Republic, the character of Socrates attempts to define justice while responding to the various objections of other characters, who express views that were likely held by some thinkers of Plato's time. In effect, this might be Plato's way of defending his view against competing views of the day, although there is some debate about this.

In the dialogue, after some initial debate, the characters decide to build a hypothetical city, a city of words, so that during the building process they can study where and when justice comes into play. At first they build a small, healthy city. Everyone plays a role that serves the others: there is a housebuilder, a farmer, a leather worker, and a weaver, so that the citizens have all the essentials. At this point, a character named Glaucon objects to the project. He argues that this is not a real city; it's a "city of pigs", a city where people would be satisfied with the bare minimum. A real city, with real people, would want luxuries and entertainment. So at Glaucon's behest, the characters expand the city to give its inhabitants the luxuries they would likely want. Soon after, the characters realize the city will have to make war on its neighbors; it will need an army and it will need rulers.

Glaucon's objection is rooted in a specific idea about human nature. This thread was picked up millennia later, during the early modern period, by thinkers like Bernard Mandeville and Thomas Hobbes. It's the notion that all human actions are rooted in self-interest. That is, some thinkers have put forward the thesis that you can explain all human action through self-interest: our behaviors, our institutions, our moral codes, everything. Although it goes by many names, we will refer to this view as psychological egoism, and it takes us to the first ethical theory we will cover.

 

Ethical egoism

So our first theory, then, will be ethical egoism: an action is right if, and only if, it is in the best interest of the agent performing the action. Here is a simple argument for the view.

  1. If the only way humans are able to behave is out of self-interest, then that should be our moral standard.
  2. All human actions are done purely out of self-interest, even when we think we are behaving selflessly (psychological egoism).
  3. Therefore, our moral standard should be that all humans should behave purely out of self-interest.

In a nutshell, this argument states that if all we can do is behave in a self-interested way, then that's all we should do. Premise 1 seems reasonable enough. A thinker that we will come to know better eventually, Immanuel Kant, argued that if we should do something, then that implies that we can do it. Although Kant did not believe in psychological egoism, we can accept his dictum that we must be able to do whatever we are required to do. This implies that if we can't help but act in a self-interested way, then that's the only rational standard we can be held to. In short, if psychological egoism is true, then ethical egoism is true as well.

Proponents of ethical egoism argue that psychological egoism can explain all human actions. It does seem to account for many of our behaviors. For one, sometimes people are selfish. Sometimes, however, people cooperate and behave in seemingly altruistic ways—that is, for the benefit of others. Egoists claim their view can account for this sort of behavior too, because it's possible that people behave this way only to get the benefits of working cooperatively, to enjoy moral praise (from themselves and others), or simply to avoid feeling guilt. Let's be honest: some of you don't lie or steal simply because you couldn't bear the guilt.

But does ethical egoism explain all human behaviors and institutions? Can self-interest alone really explain the full range of diverse actions we take? Can it explain the behavior of Mother Teresa? Well, egoists might argue that even Mother Teresa acted in a self-interested way. After all, if her faith was well-placed, she did get a reward for her life's work: eternal bliss in heaven—Pascal's infinite expected utility.

 

 

The Purge

Thomas Hobbes (1588-1679).

To transition to our second ethical theory (contractarianism), let's begin with a question. If you found yourself in a situation where there was a total breakdown of central authority (no cops, no government aid), as in movies like World War Z or The Purge, what would you do to stay alive? Would you lie and steal? Would you kill?1 If you think you would, then you might agree with Thomas Hobbes (1588-1679). Although it is difficult to disentangle his ethical theory from his political philosophy, it's safe to say that Hobbes had a dark ethical theory. Hobbes, assuming that psychological egoism is true, agrees with Glaucon that all prosocial behavior is merely a state of affairs we submit to purely out of self-interest. Morality is a convenient fiction. In short, we submit to an authority and give it a monopoly on violence because the alternative, the state of nature where everyone is at war with everyone else, is substantially worse. Justice and morality are mere social contracts; if society collapses, you can feel free to ignore these contracts. This looks really bad for moral realism...

“Hereby it is manifest that during the time men live without a common power to keep them all in awe, they are in that condition which is called war; and such a war as is of every man against every man... In such condition there is no place for industry, because the fruit thereof is uncertain: and consequently no culture of the earth; no navigation, nor use of the commodities that may be imported by sea; no commodious building; no instruments of moving and removing such things as require much force; no knowledge of the face of the earth; no account of time; no arts; no letters; no society; and which is worst of all, continual fear, and danger of violent death; and the life of man, solitary, poor, nasty, brutish, and short" (Thomas Hobbes, Leviathan, i. xiii. 9)

We'll learn more about Hobbes in the video below.

 

Decoding Hobbes

 

 

 

The State of Nature

LA riots, 1992.

Is Hobbes right? Certainly we have seen unthinkable acts of violence and theft when there is a breakdown in central authority: from the LA riots in 1992, to the involuntary euthanasia during the Hurricane Katrina disaster, and even more recently the looting after the George Floyd protests.2 We will give a more careful assessment of social contract theory (SCT) in the next lesson, but for now I want you to think about how the breakdown of central authority is at least correlated with some instances of immorality and blatant disregard for the law. Hobbes' theory, however, mostly rests on one assumption about human nature: psychological egoism. Is it truly the case that all human actions are driven by self-interest?

 

 

 

Executive Summary

  • Skepticism about libertarian free will, the only type of free will that will be considered moving forward, leads to skepticism about moral responsibility. However, it does appear that we justifiably hold people morally responsible for the morally wrong actions that they engage in. This suggests that perhaps a defense of free will can come from a defense of something like moral realism.

  • This lesson, however, introduces us to two anti-realist (or non-objectivist) ethical theories: ethical egoism and Hobbes' social contract theory.

  • Both theories ultimately depend on a view about human nature: that all human actions are rooted in self-interest. This view is called psychological egoism. The truth of psychological egoism, though, is ultimately an empirical question—one that scientists, not philosophers, will have to address.

FYI

Suggested Reading: Plato, The Republic, Book II

  • Note: Read from 357a to 367e.

TL;DR: The School of Life, POLITICAL THEORY: Thomas Hobbes

Supplementary Material—

Advanced Material—

 

Footnotes

1. I may as well tell you that I think all the Purge movies are awful and that they have led to confusion among young people about what the philosophical position of anarchism really stands for.

2. I recommend the documentary LA 92 on the social injustice and unrest leading up to the LA riots.

 

 

The Impossibility of Translation

 

New York City, 1934

 

No man ever looks at the world with pristine eyes. He sees it edited by a definite set of customs and institutions and ways of thinking.

~Ruth Benedict

1934

Our next ethical theory requires us to jump forward all the way to 1934. We will be walking through the halls of Columbia University in New York City. In particular, we will be visiting the anthropology department, where one of the characters in this story—Franz Boas—taught for 40 years. The reason for studying the ideas of Boas, as well as those of his students, will become apparent below. For now, let's prime our moral intuitions with an example.

Members of an uncontacted tribe photographed in Brazil, 2012.

In their popular (but controversial) book Sex at Dawn, Christopher Ryan and Cacilda Jethá (2010: 90-91) report on the sexual practices of some Amazonian tribes. In some of these tribes, pregnancy is thought to be a condition that comes in degrees, as opposed to something you either are or aren't. In other words, the members of these tribes believe you can be "a little" pregnant. In fact, all sexually active women are a little pregnant. This is because they believe that babies are formed through the accumulation of semen. In order to produce a baby, a woman needs a constant supply of sperm over the course of about nine months. Moreover, the woman is free to acquire the semen from any available men that she finds suitable. She may even be encouraged to do so in order for the baby to acquire the positive traits of each of the men who contributes sperm. As such, perhaps the woman will seek out a man who is brave, a man who is attractive, a man who is intelligent, etc. All told, up to twenty men might contribute their seed to this pregnancy, and all twenty are considered the father.

"Rather than being shunned... children of multiple fathers benefit from having more than one man who takes a special interest in them. Anthropologists have calculated that their chances of surviving childhood are often significantly better than those of children in the same societies with just one recognized father. Far from being enraged at having his genetic legacy called into question, a man in these societies is likely to feel gratitude to other men for pitching in to help create and then care for a stronger baby. Far from being blinded by jealousy as the standard narrative predicts, men in these societies find themselves bound to one another by shared paternity for the children they've fathered together” (Ryan and Jethá 2010: 92; emphasis in original).

This practice is called partible paternity, and it is not at all the way that paternity is viewed almost everywhere else, especially for us in the West. We know how pregnancy actually works, and we readily distinguish between, say, the biological father and an adoptive father. But notice something interesting here. In my experience teaching, it is very rare that students judge the behavior of these Amazonian tribes to be immoral. Typically, students claim that it is ok for them to practice that "over there", but "over here" we do things differently. If you feel this way, then you might be a relativist.

 

Franz Boas (1858-1942)

 

Boas and his students

Ruth Benedict (1887-1948).

The reason for this shift into the 20th century is that the seeds of cultural moral relativism, the version of relativism that we'll be focusing on, are found in the work of Franz Boas (1858-1942), a groundbreaking anthropologist who is often referred to as the "Father of American Anthropology." In chapter 2 of The Blank Slate, Steven Pinker explains how Boas' research led him to realize that the people of so-called "primitive" cultures were not in any way deficient. Their languages were as complicated as ours, allowing for complex morphology and neologisms (new words); they were also rich in meaning and could be updated rapidly, as when new numerical concepts were adopted as soon as a society needed them. Although Boas still thought Western civilizations were superior, he believed that all the peoples of the world could rise to this level.

Boas was himself likely not a relativist. He never expressed the particular views (which we will make explicit below) that are now known as cultural moral relativism. What he did express was a reluctance to definitively rank societies as either "more evolved" or "less evolved". Instead, he saw the members of different cultures as fundamentally the same kind of being, just with different systems of beliefs and ways of living. His students, however, took these ideas and morphed them. It was above all Ruth Benedict (1887-1948), who, like many other intellectuals, was thrown into a moral crisis after World War I, that developed what we now know as cultural moral relativism.

“The story of the rise to prominence of cultural relativism, [is] usually attributed to the work of Franz Boas and his students... Although Boas’s position on cultural relativism was in fact somewhat ambiguous, he laid the groundwork for the full elaboration of cultural relativism by redirecting anthropology away from evolutionary approaches... and by elaborating on Tylor’s notion that culture was an integrated system of behaviors, meanings, and psychological dispositions... The flowering of classical cultural relativism awaited the work of Boas’s students, including Ruth Benedict, Margaret Mead, and Melville Herskovits. Their articulation of a comprehensive relativist doctrine was appealing to intellectuals disillusioned by the pointless brutality of World War I, which undermined faith in the West’s cultural superiority and inspired a romantic search for alternatives to materialism and industrialized warfare… The ethnographer must interpret a culture on the basis of its own internal web of logic rather than through the application of a universal yardstick. This principle applies to everything from language and kinship systems to morality and ontology… Complementing the core principle of cultural coherence is insistence that societies and cultures cannot be ranked on an evolutionary scale. Each must be seen as sui generis [i.e., unique] and offering a satisfying way of life, however repugnant or outlandish particular aspects of it may seem to outsiders” (Brown 2008: 364-5; interpolations are mine).

As this idea of a specifically cultural moral relativism began to spread, other relativisms sprang up. The result is that the term relativism can now signify a variety of views—something we'll learn a little bit more about below.

 

 

 

Decoding Relativism

 

 

 

 

Cultural Practices from Around the World

Welcome to the intermission. In this section, we'll look at different cultural practices from around the world. Although there are many relativisms (as we learned in the video), we will focus once more on cultural moral relativism (CMR): the view that an act is right or wrong depending on the cultural context in which it is performed. The point of this activity is to challenge the cultural relativists, to make them see how difficult it really is to say that each culture can develop its own moral code. Warning: Some of the things you'll see here are graphic. However, this is the only way to show you the counterintuitive nature of CMR. If you are sensitive, you may skip this section.

 


 

A Tidong community wedding.

The Bathroom Ban of the Tidong (Indonesia)

An ancient custom of the Tidong tribe is to ban newlyweds from using the bathroom for three days after being married. Relatives go as far as staying with the newlyweds during this period to make sure that they don't eat or drink, or else they'll have to use the restroom. Breaking this taboo is considered bad luck for the marriage as well as for the families of the newlyweds. I might add that holding in your urine causes bacterial build-up, which is associated with urinary tract infections—and worse.

Is this morally permissible for them?

 


 

Living with the dead (Indonesia)

Is this morally permissible for them?

 


 

Dani Amputations (Western New Guinea, Indonesia)

Is this morally permissible for them?

 


 






WARNING: Some of the following images are graphic in nature and might be disturbing to some.
Sensitive viewers may skip to the next section.






 

 

Cannibalism (various)

There are various reports of cannibalism throughout history and even isolated tribes that still practice the eating of humans.

If cannibalism is performed as a way of honoring the dead, is this morally permissible for them?

 


 

 

Baby Throwing (India)

Both Muslim and Hindu parents engage in a baby throwing ritual in some parts of India. The baby drops about 30 feet, and is caught in a sheet by a group below. It is said to bring good luck. You can read more in this article or watch the video below.

Is this morally permissible for them?

 


Americans

If we are extending cultural relativism to include sub-cultures, this might include fringe groups. What would a relativist say about:

  • anti-vaxxers?
  • the Followers of Christ (Idaho), who reject modern medicine and rely solely on faith healing?
  • Evangelical homeschoolers, who teach their children only creationism?

 

 

 

And another thing....

Joshua Greene (2013) gives what he calls the meta-morality argument against cultural moral relativism: CMR answers the question of how morality works within a “tribe”, but it does not and cannot guide us on how morality should work between “tribes.” What Greene is pointing out is that CMR does not resolve our moral debates, and resolving moral disputes between groups is, of course, one of the most pressing problems of the 21st century. Of course the Taliban has a moral code that is different from that of most people living in Southern California. The question is: How do we resolve these disputes? How do we come to a moral agreement? CMR can't answer this question. In fact, it says that there is no answer. It thus fails to guide our actions, and, hence, one might argue that it fails as a moral theory.

This is just one objection to this view. We'll eventually see some more. Stay tuned.

 

 

Executive Summary

  • Cultural moral relativism is the view that an act is morally right if, and only if, the act is permitted by the code of ethics of the society in which the act is performed.

  • There are also other types of relativism, such as perceptual relativism, relativism about truth, and relativism about reason.

  • Tension rises between relativists and non-relativists when certain practices around the world appear to be unacceptable or, at the very least, suboptimal.

FYI

Suggested Reading: Gilbert Harman, Moral Relativism Explained

TL;DR: Crash Course, Metaethics

Supplementary Material—

Related Material—

Advanced Material—

 

 

A Fact of Reason

 

Morality is not the doctrine of how we may make ourselves happy, but how we may make ourselves worthy of happiness.

~Immanuel Kant

 

Morality, à la Kant

This is the sequel to The Thing-In-Itself, which covered Kant's metaphysics. In it, we learned about how Kant made room for free will by distinguishing between "two worlds." There's the world as we see it, and the world as it really is. There's, in other words, the thing we see and the thing-in-itself. Kant claimed that the laws of nature apply only to the world as we see it. In other words, the laws of nature are imposed on our experiences of the world by our faculty of understanding, our cognition. But this is just how we structure our experience of the world. Moreover, we (as rational human beings) exist as things independent of our experiences of the world. That is, each of us is a thing-in-itself. This means that the laws of nature don't apply to us at that level. In short, the threat to free will from determinism is no threat at all, according to Kant; this is because we exist fundamentally in the world as it really is, while determinism holds only in the world as we see it.

In his defense of (libertarian) free will, Kant makes the case that there is an objective morality. In fact, it is our realization that there is an objective morality that gets us to the realization that we are free. So, in one fell swoop, Kant argued for the freedom of the will and objective morality—not to mention the immortality of the soul and the existence of God(!). In this lesson, we will take a closer look at Kant's ethics.

Before we do, though, let's recall that Kant is arguing for an objective morality—a view that is sometimes referred to as moral realism. On this view, moral values are mind-independent; that is, they exist independently of human minds, rather than just being human constructs. This is important because we've already looked at a few competing approaches to morality, such as cultural moral relativism and Hobbes' social contract theory. These two ethical theories in particular embrace a non-objectivist view of moral values. Hobbes, you'll recall, argued that morality is completely subjective; there's only right for you and right for me, but no right in general. This invariably creates conflict. So, Hobbes thought that we should give all authority on moral and legal matters to a central power that will arbitrate for us, for the sake of peace and stability. Cultural moral relativism switches the frame of reference from the individual to the culture as a whole. According to this kind of relativist, an act is right or wrong only within the context of a cultural framework. Moreover, it is common for this type of relativist to argue that it is impossible to compare and/or juxtapose different cultural frameworks. Attempting to judge which cultural "sphere" is better is pointless, since there is no objective metric—that is, no objective value system—by which you can engage in this comparison. As you can see, these theories aren't holding out hope for an objective morality.

And so, Kant will have to convince us that the relativistic views we covered, along with any other non-objectivist competitors, are wrong. This is a tall order, since many find relativism alluring. Keep this in mind as we wade through Kant's ethics.

 

 

Important Concepts

 

Human Reason

In the last lesson, we focused on how the faculty of human understanding structures our perception of reality. Kant argued that the information we get from our sensory organs (eyes, ears, etc.) is given a structure by our understanding. Put in more modern parlance, our brain interprets the electrical signals it's getting from the outside world and creates a model of reality for us (Swanson 2016). Our brains, in other words, create a virtual reality that more or less matches the reality that exists outside of our skulls, and that's how we navigate through the world (Seth 2021).

This lesson is devoted to human reason. For Kant, understanding and reason are two different cognitive faculties. The basic distinction is as follows. Human understanding (through which we give order to the world, as seen in the previous lesson) contains forms of intuition, i.e., those built-in categories that shape the world when we cognize it.1 Human reason, on the other hand, does not. Reason does not depend on the peculiarities of human cognition; what is reasonable is reasonable to all intelligent beings, whether they be humans, angels, gods, or whatever. So, with human reason, we will discover those things that are rational across the board. And this, according to Kant, includes morality.

Kant's Critique of Pure Reason

Perhaps it will help if I put a slightly different gloss on this distinction. Human understanding is the means by which we understand the world as it appears to us. Human reason, on the other hand, is the means by which we consider how the world ought to be. So both faculties help us construct the world, although in different senses of the word. And just as we give ourselves the laws of nature through human understanding, we give ourselves the moral law through human reason.

We should be more specific about the kinds of laws that human reason gives us. There are two ways that reason commands us. A hypothetical imperative is the sort of imperative (or command) where: a. you have a particular desired outcome or consequence, so b. you do a particular action as a means to that end. For example, “Billy wants to get an A in the course, so he does all the homework and engages in class.” Also, “Wendy is thirsty, so she gets up to get some water.” Billy and Wendy have a desire, and reason comes up with a rational means by which to fulfill said desire. That's one way that reason commands us.

A categorical imperative is a command from reason that applies in any situation, no matter what you desire; i.e., it’s a set of rules you must follow, since they always apply. Put another way, there are some rules such that, if you don't obey them, you contradict yourself.2 Kant believes that morality is a categorical imperative. It is a moral law commanded of us by our own reason.

It is the recognition that there is a moral law, and that we can fail to abide by it, that makes us realize that we are free.3 That means we are Rational Beings: beings that can live according to principles. Non-human animals can't do this. This is why it is funny to see vegetarian sharks in Finding Nemo: it's not in a shark's nature to be able to choose to be vegetarian. But we can choose the principles by which we live. Kant argues that this is what gives us moral personhood (i.e., the status of having moral rights).

“The starting point of Kant’s ethics is the concept of freedom. According to his famous maxim that ‘ought implies can’, the right action must always be possible: which is to say, I must always be free to perform it. The moral agent ‘judges that he can do certain things because he is conscious that he ought, and he recognises that he is free, a fact which, but for the moral law, he would have never known’” (Scruton 2001: 74).

 

 

Decoding Kantianism

 

Some clarifications...

As previously stated, human understanding is for understanding the empirical realm (the world of phenomena that we perceive through our "forms of intuition"); human reason, then, helps us to come to know aspects of the transcendental realm (the realm of things-in-themselves), namely the general laws of logic that apply everywhere. Because Kant’s moral system is founded in this transcendental realm, he must rely solely on reason for making moral judgments. Indeed, Kant argued that we can arrive at fundamental moral truths through reason alone (or Pure Reason); in other words, we do not need to look at the consequences of an action (in the empirical realm) to see whether it is right or wrong. For example, you don't need to know what happened after someone betrayed a friend to know that betraying someone is wrong. This is why Kant develops a purely duty- or rule-oriented view.

What is freedom for Kant? Kant stresses that freedom is not just doing whatever you desire. This is because some desires are not genuinely coming from us. Desires have either biological or social origins. For example, our desires for food and sex have biological origins. Other desires, like the desire to have a bigger following on Instagram, clearly have a social origin. In any case, Kant argues that true freedom comes when you rid yourself of these non-rational desires. It is only when you allow yourself to be truly governed by reason that you are free. Getting rid of all your non-rational desires leads to pure practical reason. As previously mentioned, human reason is that through which we give ourselves the moral law. So once you've gotten rid of these non-rational desires, you can follow the moral law.

For the reasons outlined in the preceding paragraph, Kant argues that an action only has real moral worth (i.e., moral value) if it is done out of duty. Doing something out of duty is doing it because one is motivated by respect for the moral law, even if one doesn’t really want to do it. The moral worth of the act is derived not from the consequences of the act, but from the principle, or maxim, that motivated that act. For this reason, good will is the highest moral virtue. Good will is what allows you to follow the moral law. In fact, other virtues wouldn’t be as good without the possession of good will first. For example, being loyal is clearly a virtue. But if you are loyal to a tyrant, like Vlad the Impaler, you end up abetting many immoral things, like impaling people. If you have good will (towards others), you wouldn't be loyal to a tyrant like that. It is good will that allows loyalty to truly be a virtue.

 

 

 

Executive Summary

  • The Enlightenment, which is typically considered a good thing, also sent some intellectuals into a moral panic, since many age-old institutions were being questioned.

  • Immanuel Kant proposed an ambitious theory that aimed to restore the moral law, the freedom of the will, the immortality of the soul, and the justification for belief in God.

  • Central to these theories are the transcendental deduction, which sought to establish the existence of synthetic a priori claims, and the categorical imperative, Kant's duty-oriented, rule-based approach to ethics. I will typically refer to the categorical imperative as Kantianism.

  • Due to how ambitious it is, however, Kant's theory has attracted many criticisms. Since we are focusing here on his ethics, we can give two brief objections. First, Kantianism appears to be too strict. For example, it never allows for lying, even small, inconsequential lies. Second, Kant's categorical imperative is vague in some cases. For example, when duties conflict, it is unclear which course of action is the right one.

FYI

Suggested Readings:

TL;DR: Crash Course, Kant & Categorical Imperatives

Supplementary Material—

Advanced Material—

 

Footnotes

1. Although it is not directly relevant to this lesson, the "forms of intuition" that Kant argued are a part of how we see the world, but not a part of the world in-and-of-itself, are actually space and time. In other words, Kant believed space and time are not features of the world itself but are added to our sensory impressions of the world when we cognize it. Believe it or not, some physicists believe Kant is actually right: spacetime isn't objectively real; we construct it through our perceptual systems (see Rovelli 2018).

2. For example, here are some commands from reason: a. You may not conceive of a married bachelor; b. You may not conceive of a round square.

3. Roger Scruton puts it this way: “The law of cause and effect operates only in the realm of nature (the empirical realm). Freedom, however, belongs, not to nature, but precisely to that ‘intelligible’ or transcendental realm to which categories like causality do not apply” (Scruton 2001: 75).

 

 

Consequences

 

Like the other acquired capacities above referred to, the moral faculty, if not a part of our nature, is a natural outgrowth from it; capable, like them, in a certain small degree, of springing up spontaneously; and susceptible of being brought by cultivation to a high degree of development.

~John Stuart Mill

 

We've come so far...

We are a long way from the question that kicked off this quest: the question of how to define knowledge. We are currently knee-deep in ethical theories, and some might be puzzled as to how we got here. The primary reason for our survey of ethical theories is to explore the idea of moral realism. Moral realism, the view that there is an objective right and wrong, is—at least intuitively—only tenable if coupled with something like (libertarian) free will. I didn't really justify this "coupling" of moral realism and (libertarian) free will when I presented the idea. But after seeing the work of Kant, I hope you can now see that moral realism, belief in God, and certainty about the world seem to form a worldview that various thinkers subscribed to.

Why do we want to defend libertarian free will? Well, besides the obvious reason that it'd be nice to have it, many of our beliefs and institutions implicitly assume libertarian free will. For example, does voting really make much sense without libertarian free will? How about income inequality? How about capital punishment? It's hard to tell. My colleague David Reed has made the case to me that, in political science, libertarian free will must simply be assumed. This, however, is not a political science class. It could very well be the case that none of those aforementioned institutions really make sense, in light of modern science. The views of great thinkers, like Aristotle, have fallen after reigns lasting millennia. Why should ideas from political science be any different?

More than anything, though, in this class we have been trying to defend the Cartesian foundationalist project. Descartes thought he could reconcile science and faith, and his system utilizes the existence of God as the way to disprove skepticism. Since we cannot take the existence of God for granted, we explored arguments for and against God's existence. The main argument against God's existence was the Problem of Evil, and so we tried to solve it. One of the most popular proposed solutions is the free will solution, and that is why we are here.

We are six dilemmas into this quest. Today we come to know the seventh.

 

London, 1861

 

Kant v. the Utilitarians

Typically in an introductory ethics course, the view we learned about in the last lesson is taught in tandem with the view we are covering today: Utilitarianism (in particular the version of Utilitarianism advocated by John Stuart Mill, 1806-1873). I tried to move away from that while introducing Kantianism, but now that we are moving towards Utilitarianism, it's impossible to hide just how antagonistic these two views are to each other. It sometimes seems they are almost exact opposites in their approach to moral reasoning, and they disagree on almost every ethical issue of great import.

"John Stuart Mill" by G. F. Watts.

Why is this? There are many reasons. First, as you will learn in the Important Concepts, the Utilitarians are explicit moral naturalists. Moral naturalism is simply the view that moral properties are just natural properties (physical things we can see, touch, and/or study through standard scientific methods), and it is a central tenet of Utilitarianism. On this view, moral properties are not commands from God or social constructs, like the law. Instead, they are empirically discoverable, i.e., capable of being studied by science. How? Utilitarians believe that the moral property GOOD just is a positive mental state, namely pleasure. Pleasure, of course, is a natural phenomenon.1

Moreover, once they've opened up the natural realm as a candidate for moral properties, they argue that positive mental states, hereafter referred to as utility, are actually the only intrinsic good. This view is called hedonism. This is where their empirical approach comes in handy: they pose a challenge to non-Utilitarians. What do you really want other than happiness (or the avoidance of pain, i.e., negative mental states)? The more "base" desires are obviously linked to the pursuit of pleasure and the avoidance of pain; I'm speaking here of things like sex and food. You might then claim you want a good job, a family, and a house. But the Utilitarian would only inquire further, "Why do you want that?" Ultimately, you'd have to concede that having a bad job (or no job), no family, and no home would be considerably damaging to your mental wellbeing. Having them, however, would make you happy. For pretty much anything you desire, the Utilitarian can find a way to show that what you ultimately desire is the utility it brings you. This is why Mill considers hedonism to be an empirical truth: you can discover that this is what really drives people just by asking them. That is, check the drives of humans, and you'll find that hedonism is true.

“There is in reality nothing desired except happiness. Whatever is desired otherwise than as a means to some end beyond itself, and ultimately to happiness, is desired as itself a part of happiness, and is not desired for itself until it has become so... Those who desire virtue for its own sake, desire it either because the consciousness of it is a pleasure, or because the consciousness of being without it is a pain, or for both reasons united... If one of these gave him no pleasure, and the other no pain, he would not love or desire virtue, or would desire it only for the other benefits which it might produce to himself or to persons whom he cared for” (Mill 1957/1861: 48).

But the component of Utilitarianism that really puts it at odds with Kantianism is consequentialism. Recall that consequentialism is the view that an act is right or wrong depending on the consequences of that action. Kantianism, on the other hand, specifically stipulates that you need not check the empirical realm. The moral law must be transcendental, independent of context, argues Kant. This might seem like a small difference now, but you'll soon see that this puts these two theories worlds apart.

Making room...

Mill does not only differentiate his view from that of Kant; he also goes after other ethical theories. As you can see in the quote above, he argues that virtue theory doesn't hold any weight, since it doesn't give the right theory of moral value. He claims that those who pursue virtue for its own sake are actually just pursuing pleasure or the avoidance of pain. Similarly, Mill goes after social contract theory. While reviewing the problems of past approaches to moral reasoning, Mill argues that social contract theory merely put a band-aid on the whole matter by inventing the notion of a contract, and he dismisses the theory outright. You'll get a better idea of why in the next section.

“To escape from the other difficulties, a favourite contrivance has been the fiction of a contract, whereby at some unknown period all the members of society engaged to obey the laws, and consented to be punished for any disobedience to them, thereby giving to their legislators the right, which it is assumed they would not otherwise have had, of punishing them, either for their own good or for that of society… I need hardly remark, that even if the consent were not a mere fiction, this maxim is not superior in authority to the others which it is brought in to supersede" (Mill 1957/1861: 69; emphasis added).

 

 

 

Important Concepts

 

The Theory

The Formula

The theory itself is simple. The Principle of Utility is derived by combining hedonism and consequentialism. It is as follows: An act is morally right if, and only if, it maximizes happiness/pleasure and/or minimizes pain for all persons involved.
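To see how the formula is supposed to work in practice, here is a toy sketch of a Utilitarian calculus in Python. The acts, the affected parties, and the utility numbers are all invented for illustration (nothing here comes from Mill): score each available act by summing the pleasure (+) and pain (-) it produces for everyone affected, then pick the act with the highest net utility.

# A toy Utilitarian calculus. The utility numbers are made up for illustration;
# each value is the net pleasure (+) or pain (-) an act produces for one affected party.
acts = {
    "keep the promise":  {"you": -2, "your friend": +8, "bystanders": 0},
    "break the promise": {"you": +5, "your friend": -9, "bystanders": 0},
}

def total_utility(effects):
    """Sum pleasure and pain across everyone affected by the act."""
    return sum(effects.values())

for act, effects in acts.items():
    print(f"{act}: net utility {total_utility(effects):+d}")

# The morally right act, on this toy model, is whichever maximizes total utility.
best_act = max(acts, key=lambda act: total_utility(acts[act]))
print("Utilitarian verdict:", best_act)

On these made-up numbers, keeping the promise wins (+6 versus -4); change the numbers and the verdict changes. This is also part of why Mill's subordinate rules, discussed below, are practically necessary: we rarely have reliable numbers to plug in.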

Who counts as a person?

Here's another point of contention with Kantianism. Whereas Kant believed that personhood, i.e., moral rights, is assigned to anyone who is a Rational Being, i.e., able to live according to principles, Mill believed that all sentient creatures deserve rights. Sentience is the capacity to feel pleasure and pain.2

Subordinate Rules

Mill also endorses subordinate rules, or what we might call “common sense morality.” This is because it is unfeasible to always perform a Utilitarian calculus when making a moral decision in your day-to-day life. According to Mill, these are rules that tend to promote happiness. They’ve been learned through the experience of many generations, and so we should internalize them as good rules to follow. These rules include: "Keep your promises", "Don’t cheat", "Don’t steal", "Obey the law", "Don’t kill innocents", etc. However, note that if it is clear that breaking a subordinate rule would yield more happiness than keeping it, you should break said subordinate rule.

“Some maintain that no law, however bad, ought to be disobeyed by an individual citizen; that his opposition to it, if shown at all, should only be shown in endeavouring to get it altered by competent authority. This opinion… is defended, by those who hold it, on grounds of expediency; principally on that of the importance, to the common interest of mankind, of maintaining inviolate the sentiment of submission to law” (Mill 1957/1861: 54).

 

Famous Utilitarians

Perhaps you can figure out how you feel about Utilitarianism by reflecting on some famous figures who at least seemed to have been using utilitarian moral reasoning.

 

 

Decoding Utilitarianism

 

Objections (Recap)

Thought-experiments

Various thinkers have proposed thought-experiments that show that there is something counterintuitive about Utilitarianism. Enjoy the slideshow below:

 

 

Challenging moral naturalism

Some thinkers have challenged the utilitarian naturalist assumption that the property of moral goodness can be equated with the natural property of positive mental states. In Principia Ethica, G.E. Moore argued for moral non-naturalism, the view that moral properties cannot be studied with the natural sciences. He used various arguments (such as the naturalistic fallacy argument, which many think was insufficient), but the open question argument is the most often referenced. The argument goes something like this. If “good” just means “pleasure”, then we can express that as an identity claim.

E.g.,
BACHELOR = UNMARRIED MALE
GOOD = PLEASURE

But asking “Is a bachelor an unmarried male?” doesn’t seem to be the same kind of question as asking “Is good the same as pleasure?” The first is a silly, trivially closed question. The second remains open and seems to require an argument.
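To make the structure of the reasoning explicit, here is one standard reconstruction of the open question argument (a schematic sketch only; Moore's own presentation in Principia Ethica is more involved):

\[
\begin{array}{ll}
(1) & \text{If } \mathrm{GOOD} = \mathrm{PLEASURE} \text{ held by definition, then ``Is pleasure good?'' would be a closed}\\
    & \text{question, just as ``Is a bachelor an unmarried male?'' is.}\\
(2) & \text{But ``Is pleasure good?'' is an open question; a competent speaker can sensibly ask it.}\\
(3) & \text{Therefore, } \mathrm{GOOD} = \mathrm{PLEASURE} \text{ does not hold by definition.}
\end{array}
\]

At most, this shows that "good" and "pleasure" are not synonyms; whether goodness might still turn out to be identical to pleasure in some non-definitional way is a further question.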

Even moral skeptics, individuals who question whether there are any objective moral values (like someone who endorses the atheist version of DCT), are unimpressed by moral naturalism. Richard Joyce, a prominent moral skeptic, doesn't see the appeal in equating moral goodness with mental states. Clearly, this is an issue we'll have to revisit.

“When faced with a moral naturalist who proposes to identify moral properties with some kind of innocuous naturalistic property—the maximization of happiness, say—the error theorist [moral skeptic] will likely object that this property lacks the ‘normative oomph’ that permeates our moral discourse. Why, it might be asked, should we care about the maximization of happiness anymore than the maximization of some other mental state, such as surprise?” (Joyce 2016: 6-7).

The theory is too demanding...

Lastly, just like some object that Kantianism is too strict, some object that Utilitarianism is far too demanding. Consider the most famous case...

 

 

 

What now?

Utilitarianism, to many, is simultaneously the most plausible ethical theory and, paradoxically, the most dangerous one, i.e., the most likely to be misapplied. As they say, the road to hell is paved with good intentions. Now that we've looked at several ethical theories, we should step back and survey the mess we've made.

We've seen an ethical theory that relies on the view that it is human nature to behave in self-interested ways (Hobbes' social contract theory). We've also looked at a view that claims that it is our culture which determines what is right and wrong for us (cultural moral relativism). We looked at Kantianism and utilitarianism, theories which dragged us into their centuries-long war, where deontology (a duty-oriented, rule-based approach to morality) and consequentialism face off against each other. There are also views we didn't cover, like Aristotle's approach (known as virtue theory), which charges that good behavior is actually rooted in our dispositions to act (i.e., our virtues), and so we must train ourselves to behave well by developing certain virtues. There's also divine command theory: the view that an act is right or wrong depending on God's commands. All of these views are intuitive in some sense, but many (if not all) are incompatible with each other. What do we do?

Remember: we only got into this mess to save (libertarian) free will. So let's assess these views through that lens. First off, it's not clear that all of these views are compatible with libertarianism. Divine command theory includes God, and we've seen in the problem of divine foreknowledge that it isn't clear that the notions of God and (libertarian) free will are compatible. It doesn't look like psychological egoism, which is a primary assumption in Hobbes' social contract theory, is compatible with (libertarian) free will either. If all we can do is behave in our own self-interest, then we are not free to not behave in our own self-interest. (Let me remind you that Hobbes himself was a compatibilist.) Cultural moral relativism is pretty much orthogonal to (i.e., philosophically independent of) the free will question. It's only Aristotle's virtue theory, Kantianism, and Utilitarianism that are unambiguously compatible with libertarianism. Not surprisingly, these three views are (arguably) different forms of moral realism. So, if we want to continue to defend the Cartesian project, we'd (most likely) have to defend one of these views. Going with divine command theory and fixing the problem of divine foreknowledge is another option.

This is easier said than done, however. It is true that, per Bourget and Chalmers 2014, these are the most popular ethical theories. But just because they (potentially) fit with the Cartesian worldview, that doesn't make them correct. We still need an argument against psychological egoism (solving Dilemma #5), as well as an argument against cultural relativism (solving Dilemma #6). And even if we had those, Kantianism and Utilitarianism—the two theories most subscribed to—are clearly incompatible. We'd have to choose one and argue against the other. That is, we'd have to solve Dilemma #7: Kantianism or Utilitarianism?

But then again, maybe both Kantianism and Utilitarianism are wrong. Maybe we don't have libertarian free will. Maybe our institutions don't really make sense, and it's a wonder they've worked at all for so long. Maybe there is no God. Maybe there are no solid foundations for knowledge. Maybe, if we look down, we'll realize that the human project has no solid base, that we're teetering on a knife's edge with collapse on both sides. Maybe we're floating in midair. Or maybe, just maybe... we're falling.

 

 

 

Executive Summary

  • The last ethical theory we'll be covering is act-utilitarianism. It is the view that an act is morally right if, and only if, it maximizes happiness for all sentient beings involved.

  • The general utilitarian approach, namely its consequentialism, has the distinction of being one of the most popular approaches to morality. However, it faces stiff competition from deontological thinkers, like Kant and his intellectual descendants. In fact, Kantianism and Utilitarianism appear to be almost exact opposites in their approach to moral reasoning, and they disagree on almost every ethical issue of great import.

  • We covered four ethical theories in the moral realism camp: divine command theory, virtue ethics, Kantianism, and Utilitarianism. None are without problems.

  • We leave ethics behind here, realizing that the field of ethics raises more questions than answers. Among the questions covered were:

    • Dilemma #5: Do humans only act out of self-interest?
    • Dilemma #6: Is morality relative?
    • Dilemma #7: Kantianism or Utilitarianism?

FYI

Suggested Reading: John Stuart Mill, Utilitarianism

  • (Note: Read chapters I & II.)

TL;DR: Crash Course, Utilitarianism

Supplementary Material—

Advanced Material—

 

Footnotes

1. I might add that the English word pleasure does not convey the complexity of positive emotions that Mill meant by it. There is, Mill argues, a hierarchy of positive mental states. On the low end you might find the pleasure of sex, the satisfaction of a good meal, or the clarity of mind you have when you are well-rested. On the higher end you might find the fulfillment of a life well-lived or the equanimity of coming to terms with your own mortality. Mill famously put it this way: "It is better to be a human being dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied."

2. I should add here that the meaning of sentience being used here is one of several senses of the word. The word sentience is also used, for example, as a synonym for consciousness, more broadly speaking. We are not using the term in this way. When I use the word sentience it will exclusively refer to the capacity to feel pleasure and pain.

 

 

 UNIT III

endgame3.jpg

2 + 2 = 4(?)

 

Et même si ce n'est pas vrai, faut croire à l'histoire ancienne.

[Even if it's not true, you must believe in ancient history.]

~Léo Ferré

 

The man, the myth...

As you may have noticed, developments in mathematics have accompanied great intellectual breakthroughs in the past. Several of the philosophers we've covered were themselves accomplished mathematicians—or they at least tried to stay up-to-date with developments in the field. Moreover, a field of inquiry (i.e., a discipline one studies, such as astronomy, psychology, history, etc.) really does seem to mature as it becomes mathematized (Turchin 2018). And so, what we're about to do will seem paradoxical: we are going to question mathematics.

The Cult of Pythagoras, by Alberto A. Martínez

As a way of warming you up to this, I want to introduce you to the work of the mathematician and historian of mathematics Alberto Martínez (2012). In his book The Cult of Pythagoras, Martínez attempts to dispel many of the myths in mathematics. He begins with a mathematical legend: Pythagoras. Let's begin with a question. What do we actually know about the historical Pythagoras? Well, we know that Pythagoras was probably a real person. And we know that in the 6th century BCE various communities based on his teachings and beliefs originated and spread through Magna Graecia, which is how the Romans referred to southern Italy. We also know that, due to political instability in Italy, most Pythagorean communities had left Magna Graecia by 400 BCE, and that some of these refugees regrouped in Greece. And, according to Martínez, that's about it. All other "facts" that you think you know about Pythagoras are pure speculation.

Let's return to the Pythagorean communities (which we know with certainty existed) to see what Martínez means. By around 400 BCE, these communities were at least a century removed from the historical Pythagoras, and so there were disputes as to how to interpret his teachings. There were two main factions. One stressed renunciation of material wealth. The other faction, the mathēmatikoi, focused on Pythagoras' "scientific" claims: his interpretations of the world and how best to understand it. Both factions, however, appeared to believe incredible things about their teacher.

 


 

Myths attributed to Pythagoras:

  • He never laughed.1
  • He infallibly predicted earthquakes, storms, and plagues.
  • He was so charismatic that every day almost the entire city turned to him, as if he were a god.
  • He once spoke to a river while crossing it and the river responded, "Hail, Pythagoras!"
  • He was the son of Apollo.

See (Martínez 2012: 2).

 


 

Perhaps most famously, in addition to these myths, Pythagoras was said to have personally made various mathematical discoveries. But even this has now come to be questioned, not to mention the unbelievable myths described above.

“In the end, what can we attribute to the Pythagoras (as opposed to contemporaries who shared his name) with certainty in the history of mathematics? Nothing. As argued by historian Walter Burkert, ‘The apparently ancient reports of the importance of Pythagoras and his pupils in laying the foundations of mathematics crumble on touch, and what we can get hold of is not authentic testimony but the efforts of latecomers to paper over a crack, which they obviously found surprising’... Historian Otto Neugebauer briefly remarked that the stories of Pythagoras’s discoveries ‘must be discarded as totally unhistorical’ and that any connection between early number theory and Pythagoras is ‘purely legendary and of no historical value’” (Martínez 2012: 14).

There are various things that I should point out here. If your early education was anything like mine, you were taught that Pythagoras was a mathematician, and his religious teachings were seldom mentioned, if ever. At least in my case, I don't recall being taught these myths about the historical figure. I certainly don't think that I was told that his followers considered him divine. Moreover, something about this doesn't sit well. Mathematics seems linked in a fundamental way to science, not to religion. Nevertheless, we have seen this time and time again. Thinkers from the past have married their mathematical pursuits with their religious convictions. Newton, Copernicus, Brahe, Kepler, Pascal, Galileo, Descartes, and Leibniz were all believers, and some of these saw their mathematical work as a religious task: an attempt to speak the language of God. Do you see what's going on here? Mathematics seems to somehow be imbued with a supernatural element.

None of this, of course, is to say that mathematics is not a worthwhile endeavor. Mathematics is both fascinating and shockingly useful. It boggles the mind that, for example, linear algebra—developed long before digital computers—ended up being useful for machine learning (not to mention quantum mechanics). Ditto for Boolean algebra. Non-Euclidean geometry, developed in the 1800s, is the language of Einstein's relativity theory, developed in the 1900s. And one can go on and on. However, there is something mystical about mathematics, and it can be seen from the very beginning in these Pythagorean communities, which persisted and whose ideas spread...

 

 

The worship of numbers

Even though his 2012 book is only partially about Pythagoras, Martínez still named it The Cult of Pythagoras. This is because Martínez claims that mathematicians (like himself):

  • have a quasi-divinical approach to mathematics: they believe that they have a special sense that allows them to glimpse into the realm of mathematical truth, or they claim that some deity helps them discover mathematical theorems, or they believe that triangles are somewhat supernatural in that they existed before humans and will continue to exist after humans;

  • (like some religions) are guilty of embellishing their own history; and

  • (like some religions) have a track-record of brushing their past conflicts under the rug.

Martínez gives plenty of evidence for his claims from the history of mathematics, but I'd like to share two data points in particular. First, in 2001, the magazine Physics World ran a poll on the philosophical views of physicists. Among various questions (about the reality of electrons, genes, atoms, light waves, etc.), the survey also asked about beliefs regarding numbers. Two-thirds of the respondents claimed that real numbers (values that represent quantities along a continuous number line) are "real"; 43% claimed that imaginary numbers (like √-1) are real as well. Martínez also surveyed his students each semester from 2005 to 2010. “Out of 245 majors in mathematics and the sciences over those five years, 77 percent of the students wrote that triangles existed before humans and will continue to exist forever. Almost 22 percent disagreed, and only 3 students chose not to reply and wrote instead ‘maybe,’ ‘neither,’ or ‘no idea’” (Martínez 2012: xx). Clearly, students of mathematics and related disciplines believe there is something "special" about numbers, namely that they exist in a way that is independent of human minds.

Anyone with knowledge of the history of mathematics can recall other important figures who shared this quasi-divinical treatment of numbers. Take, for example, legendary mathematician Kurt Gödel...

“The most commonly cited remark of Gödel’s on this topic involves a direct claim that [mathematical] intuition is ‘something like a perception’ of mathematical objects… Gödel believed not just that human minds are immaterial… but that we are led to this conclusion by reflecting on mathematics’” (Balaguer 2001: 27).

John Nash, who made fundamental contributions to game theory—such as the concept of the Nash equilibrium—and who was played by Russell Crowe in A Beautiful Mind, similarly claimed a sixth sense for mathematical intuition, a sense that did not allow him to distinguish his mathematical ideas from the delusions that accompanied the onset of his schizophrenia.

“‘How could you’, began [Harvard Professor George] Mackey, ‘how could you, a mathematician, a man devoted to reason and logical proof… how could you believe that extraterrestrials are sending you messages? How could you believe that you are being recruited by aliens from outer space to save the world? How could you…?’... ‘Because’, Nash said slowly in his soft, reasonable southern drawl, as if talking to himself, ‘the ideas I had about supernatural beings came to me the same way that my mathematical ideas did’” (Previc 2009: 69).

Mathematician and historian of mathematics Jeremy Gray reminds us that many great mathematicians, including Carl Friedrich Gauss, were "addicted to the calculation of examples" (Gray 2003: 135). This is not unlike the repetitive nature of some chants and prayers, like Buddhist chants and the saying of the Rosary.

Lastly, Indian math savant Srinivasa Ramanujan (1887-1920) claimed to receive his mathematical intuitions from a goddess. Ramanujan is the subject of the film The Man Who Knew Infinity.

If these great thinkers thought mathematics worthy of this near-worship, should we do the same?

 

Food for thought...

 

 

Important Concepts

 

On truth

As you learned in the Important Concepts above, there is some disagreement on what truth is. In fact, we discussed this briefly all the way back in Lesson 1.2 Agrippa's Trilemma. Recall that we were distinguishing between epistemology, the branch of philosophy concerned with the nature and limits of knowledge, and metaphysics, the branch of philosophy concerned with the nature of reality. Recall also that there is considerable overlap between these branches. For example, a quintessential question in epistemology is: How do we come to know which statements are true? A related metaphysical question is: What is truth? Notice that the first question (“How do we come to know which statements are true?”) is equivalent to asking what the right method for acquiring true beliefs is, not what truth itself is. “How do we come to know which statements are true?” is a question about what the best belief-forming practices are. “What is truth?” is asking about what all true statements have in common. Clearly there is overlap between the two branches, but they’re still not the same question.

When we kicked off this course, we focused on the epistemic side of things; let's now circle back and think about metaphysics. The metaphysical question above is particularly strange. What is truth? The question almost seems poorly formed. I think that an approach to thinking about truth which was developed in the 20th century is helpful here. Some philosophers began to think about truth not as a physical thing (as in "the whole truth", whatever that means) or a place (as in "getting to the truth"), but rather as a property. In particular, it is a property of declarative sentences.

Some declarative sentences have this property called truth. For example, "Sacramento is the capital of California" enjoys the property of being true. Other declarative sentences, however, lack this property. For example, "R.C.M. García is a millionaire" is (unfortunately) not true. What do these true sentences have in common? Philosophers came up with the concept of a truthmaker, i.e., something that makes the statement true. What counts as a truthmaker ranges widely, depending on the context. For example, "Sacramento is the capital of California" is true due to a legal convention. If you go to Sacramento, you won't find the property capital anywhere; rather, Sacramento is the capital of California by virtue of an agreed-upon legal framework. Other statements, like "The cat is on the mat", depend on the state of physical things, such as whether or not whatever cat is being referred to is actually on whatever mat is being referred to. Sometimes you might say that there's a combination of truthmakers, as in "The cat is in the capital of California".

Be careful, though. Only a proposition, or the thought expressed by a declarative sentence, has a truthmaker. This might be tough to grasp at first, so let me give you some examples. Consider the following set of sentences:

  • "Snow is white."
  • "La neige est blanche."
  • "La nieve es blanca."
  • "Schnee ist weiß."

This set contains four sentences. That's clear enough. But how many propositions does it contain? In fact, it contains only one proposition, because it is the same thought expressed in four different languages. These sentences, moreover, are clearly different from sentences like "What’s a pizookie?", "Close the door!", and "AHH!!!!!". We simply cannot reasonably apply the label of true or false to a question ("What’s a pizookie?"), a command ("Close the door!"), or an exclamation ("AHH!!!!!") the way we can to a declarative sentence. As such, non-propositions are neither true nor false; they are said to lack truth-functionality.
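If it helps, here is a tiny illustrative sketch in Python (the mapping and the proposition labels are made up for this example, not a piece of serious semantics): many sentences can express one and the same proposition, and only propositions get a truth value.

# A toy model of the sentence/proposition distinction.
# Several sentences (strings) can express the same proposition, and only
# propositions are candidates for truth or falsity.
sentence_to_proposition = {
    "Snow is white.": "SNOW IS WHITE",         # English
    "La neige est blanche.": "SNOW IS WHITE",  # French
    "La nieve es blanca.": "SNOW IS WHITE",    # Spanish
    "Schnee ist weiß.": "SNOW IS WHITE",       # German
    "What's a pizookie?": None,                # questions express no proposition
    "Close the door!": None,                   # neither do commands
}

# One truth value per proposition, not per sentence.
truth_value = {"SNOW IS WHITE": True}

def is_true(sentence):
    """Return True/False if the sentence expresses a proposition; None if it is not truth-apt."""
    proposition = sentence_to_proposition.get(sentence)
    return None if proposition is None else truth_value[proposition]

print(len({p for p in sentence_to_proposition.values() if p is not None}))  # 1 proposition
print(is_true("La neige est blanche."))  # True
print(is_true("Close the door!"))        # None: not truth-apt

The four sentences collapse into a single proposition, and the question and the command simply fall outside the truth assignments, which is the point the paragraph above makes in prose.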

And so, with the preliminaries out of the way, here is the question: what is the truthmaker of true mathematical statements? In other words, what makes “2 + 2 = 4” true? Notice that we are not denying that “2 + 2 = 4” is true—at least not yet. What we are wondering about is what it is that makes the statement true. We are looking for its truthmaker.

Decoding PHIL of Math

 

 

 

The thing about mathematical truth...

It is difficult to overstate how different our modern conceptions of mathematical truth are from those of thinkers from the past, namely thinkers before the dawn of non-Euclidean geometry. This is not the place to get into the technical details of geometry, whether Euclidean or non-Euclidean, so instead I'll focus on what theorists thought about geometry. Perhaps the best example of pre-non-Euclidean thinking is none other than Immanuel Kant—again. Recall that Kant argued we can make synthetic a priori judgments, i.e., judgments whose justification is independent of experience but that are nonetheless objectively true of things-in-themselves (the world itself). And recall also that one of his principal examples is (Euclidean) geometry. In other words, not only is geometrical truth known with absolute certainty, but it allows us to discover new facts about the world (things-in-themselves).

“The mathematics of space (geometry) is based upon this successive synthesis of the productive imagination in the generation of figures. This is the basis of the axioms which formulate the conditions of sensible a priori intuition (from which it can be shown) that two straight lines cannot enclose space" (Kant as quoted in Gray 2003: 84).

However, Kant was not the only thinker who felt this way. Virtually every scientist, philosopher, and anyone else who had thoughts on geometry believed that geometry yielded absolute truth. And this belief was shattered in the 19th century, as non-Euclidean geometries began to be developed and accepted. No one puts the point more clearly than mathematician and historian of mathematics Morris Kline:

“In view of the role which mathematics plays in science and the implications of scientific knowledge for all of our beliefs, revolutionary changes in man’s understanding of the nature of mathematics could not but mean revolutionary changes in his understanding of science, doctrines of philosophy, religious and ethical beliefs, and, in fact, all intellectual disciplines...The creation of non-Euclidean geometry affected scientific thought in two ways. First of all, the major facts of mathematics, i.e., the axioms and theorems about triangles, squares, circles, and other common figures, are used repeatedly in scientific work and had been for centuries accepted as truths—indeed, as the most accessible truths. Since these facts could no longer be regarded as truths, all conclusions of science which depended upon strictly mathematical theorems also ceased to be truths... Secondly, the debacle in mathematics led scientists to question whether man could ever hope to find a true scientific theory. The Greek and Newtonian views put man in the role of one who merely uncovers the design already incorporated in nature. However, scientists have been obliged to recast their goals. They now believe that the mathematical laws they seek are merely approximate descriptions and, however accurate, no more than man’s way of understanding and viewing nature” (Kline 1967: 474-75).

What is non-Euclidean geometry? One example among the various types of non-Euclidean geometries is the geometry of curved space, which was eventually used in relativity theory. You can watch this lecture on the history of non-Euclidean geometry for more information. There's also a cool demo in the Related Material below.

All this is to say that the fact that nominalism/fictionalism has counterintuitive implications does not by itself refute the theory. Some thinkers explicitly endorse this view (Field 2016). They accept that mathematics is a formal language that we've invented. It's like a game with rules that we've specified, and we happen to have specified those rules such that they are useful to us in interacting with the physical world.

 

Comments on abstract objects

As we saw, some philosophers and mathematicians have a very different perspective on mathematical objects. They argue that: a. mathematical objects do exist objectively (i.e., they are real), and b. they are non-physical, abstract objects that exist independently of the mind. That is to say, mathematical objects exist in a non-physical, mind-independent way. On this view, mathematical objects existed before humans and will exist once we're gone too. These mathematical objects are said to exist in some other plane of existence; it wouldn't be too much of a stretch to say that they reside in another dimension. But, just as Gödel claimed, we can access these abstract objects through the use of reason (or some kind of mathematical intuition).

Non-physical, mind-independent objects are difficult to wrap your mind around. They are difficult to describe since they are not physical, but, importantly, they're also not things we made up—allegedly. If you want something to compare them to, an abstract object is like a soul—if you believe in souls. Another analogy is to God—if you believe in God. The idea here is that God isn't physical or bound by time. He's always existed and always will, which is what some mathematicians believe about numbers, triangles, etc. You'll learn more about these in the TL;DR.

How do we come to know these abstract objects? As previously mentioned, some mathematicians and philosophers claim that we have a special mathematical intuition for grasping mathematical truth. This view, I might add, isn't completely crazy. There are cases of extraordinary ability to perform mathematical calculations in minds that perform below-average in basically every other domain. In other words, there are people who perform amazing mathematical feats but are mentally disabled. In his book The Number Sense, Dehaene discusses these so-called "idiot savants." In one case, a subject named "Michael" is described as a "profoundly retarded autistic" young man. His verbal IQ cannot even be measured. Nonetheless, Michael can instantly see that 627 can be decomposed into 3×11×19. It takes Michael a little more than a second to assess whether a three-digit number is prime or not; it took a scholar with a math degree ten times that amount of time to perform the same task. Dehaene also reminds us that neurologist Oliver Sacks reports that he once caught two autistic twins exchanging very large prime numbers (see Dehaene 1999: 144-72). Can these individuals "see" numbers in a different way than the rest of us? Did Gödel "see" numbers in this way too? It is true that there is a tight link between mathematical and spatial aptitudes: the higher your spatial intelligence, the higher your aptitude for mathematics (Hermelin and O'Connor 1986). Do numbers exist independent of the human mind and are some minds capable of seeing into this mathematical realm?
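For contrast with Michael's instant intuition, here is what the same task looks like when done mechanically: a short Python sketch of trial-division factoring (the numbers are the ones mentioned above; the function names are mine).

def prime_factors(n):
    """Return the prime factorization of n using simple trial division."""
    factors = []
    divisor = 2
    while divisor * divisor <= n:
        while n % divisor == 0:
            factors.append(divisor)
            n //= divisor
        divisor += 1
    if n > 1:  # whatever remains is itself prime
        factors.append(n)
    return factors

def is_prime(n):
    """A number is prime exactly when it is greater than 1 and is its own only prime factor."""
    return n > 1 and prime_factors(n) == [n]

print(prime_factors(627))  # [3, 11, 19]
print(is_prime(627))       # False

The program confirms the decomposition 627 = 3 × 11 × 19; what is remarkable about the savants Dehaene describes is not the answer but that they appear to reach it without anything like this step-by-step search.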

The view that mathematical objects are non-physical and mind-independent, by the way, is called Platonism about mathematics. It's called this since it is the view of Plato...

 

 

 

To be continued...

 

FYI

Suggested Reading: Mark Balaguer, Stanford Encyclopedia of Philosophy Entry on Platonism in Metaphysics Sections 1, 2, 3, and 4.1

TL;DR:

Supplemental Material—

Related Material—

 

Footnotes

1. Interestingly, it's possible that what we think about the appropriate context and reasons for smiling and laughter is not at all what the ancients believed. Read Mary Beard's Laughter in Ancient Rome: On Joking, Tickling, and Cracking Up for a study of how modern-day humor differs from ancient humor.

 

 

The Master

 

“Let no one ignorant of geometry
enter here.”

~Inscription written
over the door of Plato's Academy

 

Math on Plato

Our introduction to Plato was through his philosophy of mathematics (in the previous lesson). I admit that this is not the conventional way of discussing Plato in philosophy courses. However, mathematics had a profound effect on Plato. He was so taken by the subject that he believed that all knowledge could be like mathematical knowledge. In other words, math literally shaped how Plato understood the world. For Plato, mathematics is a portal into the transcendental, reality as it really is.

I’m not alone in this assessment. Mathematician and historian of mathematics Morris Kline sees mathematics as central to Plato’s entire conception of knowledge, and he points out that Plato may even have been part of the quasi-religious sect of mathematicians headed by the intellectual descendants of Pythagoras (see Kline 1967: 62-63). There is definitely something to this idea. Notice, for example, the cryptic language that Plato uses when discussing the subject matter—that is, mathematical objects—of mathematicians:

"These very things that they [the mathematicians] model and draw, which also have their own shadows and images in water, they are now using as images in their turn, in an attempt to see those things themselves that one could not see in any other way than by the power of thinking"" (Republic, 510e-511a; emphasis added).

Agreeing that mathematics had a profound impact on Plato, Stewart Shapiro writes on how mathematics influenced Plato's views on knowledge:

“Plato’s fascination with mathematics may also be responsible for his distaste with the hypothetical and fallible Socratic methodology. Mathematics proceeds (or ought to proceed) via proof, not mere trial and error. As Plato matures, Socratic method is gradually supplanted. In the Meno Plato uses geometric knowledge, and geometric demonstration, as the paradigm for all knowledge, including moral knowledge and metaphysics... Plato finds things clear and straightforward when it comes to mathematics and mathematical knowledge, and he tries to extend the findings there to all of knowledge” (Shapiro 2001: 62-63).

Plato’s views on math, in other words, are a good introduction to his philosophical system. As such, let’s recap Plato’s views about mathematical objects before diving into his other views.

Plato on Math

Recall that if one searches for a truthmaker for "2+2=4", then one has a few options. One could side with physicalism, arguing that mathematical objects are just piles of physical stuff. But, unfortunately for the physicalist, physical objects can't account for mathematical concepts like CIRCULARITY and INFINITY, since there are neither perfect circles nor an infinite amount of stuff in the universe. You could also be a conceptualist, arguing that the truthmaker of "2+2=4" is a thought, presumably dependent on the firing of certain neural networks in one's brain. But if we all build our mathematical ideas independently and subjectively in our brains, then conceivably this would make mathematical errors impossible. One might say, in other words, that the way they've mentally constructed the number FOUR is as a prime, and there would be no fact of the matter to correct them. You could also be a nominalist/fictionalist about mathematical objects, claiming that they are just convenient fictions. But this leaves you having to say that there is no truthmaker for "2+2=4", and that kinda sucks. And so, Plato's idea suddenly becomes appealing. He argued that mathematical objects are abstract objects. That is, mathematical objects are non-physical, unchanging, eternal objects that exist independently of human minds. They exist in another realm, and we typically refer to this other plane of existence as Platonic Heaven (in honor of Plato). It is these objects in Platonic Heaven that serve as truthmakers for "2+2=4".

As a metaphor for how he conceived of the world and the objects within it, Plato gave us The Divided Line. What Plato has laid out for us is his hierarchy for the reality of objects. In other words, Plato is organizing types of objects from less fundamental to more fundamental. At the bottom are mere copies of things: reflections in the mirror, paintings, and the like. These are only copies of the real thing; for example, your reflection is a mere copy of the real you. The next level up is the realm of physical objects. This is where you and I live, along with all the physical things that we interact with on a day-to-day basis. We might think that this is the ultimate level of reality, but Plato disagrees. Upon reflection, we might come to think that Plato has a point. After all, the reality that we see with our senses can't be the ultimate reality. We know that physicists tell us about a world of atoms and smaller subatomic particles that we can't see with the naked eye.

Figure 3.1: The Divided Line, from Shapiro (2000).

As you can see in the diagram, the next level up consists of mathematical objects. This means that all mathematical objects (like numbers), as well as mathematical relations (like equality) and functions (like adding 1, squaring, and all the more complicated functions) exist in this realm, independent of all human thought. In other words, numbers are real and they are more fundamental than the reality we inhabit. This shouldn't sound too strange if you believe that mathematics has the power to help us understand our world: mathematics has this power because it is upon mathematics that our world is ordered. This is why mathematics is always involved in the sciences and in any other enterprise that involves knowing our world in a deeper way.

Above mathematical objects are The Forms, on which our reality is based. These are the actual properties of the universe upon which everything we see is based, according to Plato. To see this realm is to have a "god's-eye view". In other words, it is to understand reality as it really is. Try to imagine combining all the knowledge of all the disciplines and then extending that knowledge until it is complete, where everything that can be known is known. That is what it is to know The Forms. And of course, The Forms are ordered too, just like everything below them. At the top of the hierarchy is The Good.

And so Plato believed that this realm of The Forms could be understood via the realm of mathematical objects. In other words, to come to know The Forms, one must first know mathematics. Put contrapositively, if you don't go through mathematics, then you will not understand reality as it really is. All of our knowledge, then, will ultimately be based on mathematics; mathematics is the medium through which we can understand the basis of all reality (The Forms). I might add here that this is not at all a far-fetched idea. In his introduction, Turchin (2018) reminds us that academic disciplines mature as they are formalized, i.e., as mathematical modeling is incorporated into the discipline. Plato seems to have intuited that mathematics is fundamental.

Sidebar: Contemporary views

Was Plato right? Is there a primacy to mathematics such that it is more fundamental than our physical reality? What we can say with certainty is that many of the thinkers we've covered so far seem to have thought so. They readily relied on mathematics as the avenue through which to understand the world. So much so that, in his intellectual history, John Randall made the following point:

“Science was born of faith in the mathematical interpretation of Nature, held long before it had been empirically verified” (Randall 1976).

This is what I've been repeating over and over again in this course: we're jaded. We know science is the best method for answering empirical questions. But thinkers in the past didn't have science's track record to go on. Science as we know it today did not yet exist. And so, these thinkers from antiquity into the early modern period of philosophy tended towards a quasi-divinical, semi-mystical view of mathematics. They were, in other words, Platonists about mathematics.

I close this section with an interesting report. The cognitive scientist Donald Hoffman (2019) makes arguments, using evolutionary game theory, that imply that Plato was right—at least about mathematics.

 

 

Important Concepts

 

Decoding Plato

 

 

 

Why Plato?

Why cover Plato at this juncture? Well, of course, Plato is integral to intellectual history. No introductory course in philosophy would be complete without some Plato. Here are some reasons why it's important to know Plato:

  • Plato's views are an essential part of the history of Christianity and, for that reason, an essential part of Western history (see chapter 10 of Freeman 2007). By the middle of the second century CE, Christians began making use of Greek philosophy and “disentangling” the Christian teachings within it. Essentially, Christian intellectuals were selecting aspects of Greek philosophy that could be Christianized and discarding the rest. Clement of Alexandria, for example, claimed that God gave philosophy to the Greeks as a schoolmaster in order to pave the way for the coming of the Lord. Justin Martyr, a Platonist by training, believed he could draw from both scripture and Greek philosophy to argue for Christianity's positions.

    As it turns out, Platonism was ideally suited for providing the intellectual backbone for Christianity. Platonists were used to dealing with an immaterial world in which the Good is at the top of a hierarchy and where the material world is inferior to the immaterial world. These ideas became the Christian views that God is the creator of all and that the material world is full of sin. Platonists also developed the idea of a soul that was independent of the body—an idea that Christians readily accepted into their doctrine. Importantly, Platonists taught that only a few could glimpse and come to understand the reality of the immaterial world, and this gave backing to the church’s hierarchy, where bishops understood God’s message and the rest had to rely purely on faith—something Protestants would take issue with centuries later.

  • As I've been stressing, it appears that Plato's views on the power of mathematics have been influential throughout intellectual history. Even if he was anti-positivist, it could be the case that Plato's emphasis on mathematics did pay dividends further down the line. In chapter 14 of his Worldviews, DeWitt reports that it is likely that Copernicus was inspired to work out his complex model of the solar system due to his Neo-Platonist leanings.

  • In Shenefelt and White (2003, chapter 2), the authors make the case that Aristotle's work on logic—i.e., the study of validity—took off for reasons other than the brilliance of its author. In particular, the social conditions were ripe for a study of that kind. Why was Aristotle’s audience so receptive to the study of validity? Because 5th century BCE Athenians made two grave errors that 4th century BCE Athenians never forgave: 1. They lost the Second Peloponnesian War (431-404 BCE); and 2. They fell for “deceptive public speaking.” Aristotle's teacher, Plato, blamed both of those errors on a group of professional teachers of philosophy and persuasive speaking referred to as sophists.1 These teachers were more pervasive in Athens than elsewhere due to the opulence of the city. But once public opinion turned on the sophists, things got violent. One person accused of sophistry (Socrates) was even put to death. And so the populace was ready to restore the distinction between merely persuasive arguments and truly rational ones. The study of the elenchus (persuasive argumentation) was divided into rhetoric and rational argumentation largely thanks to Plato. It was one step from here to standardize rational argumentation and study its logical form, which is what Aristotle did. As we will learn later, logic was a key ingredient in the digital revolution that we are currently experiencing.

But, to be honest, this is not the main reason why I bring up Plato here. I bring him up because Plato defended the view that we have an immaterial soul that existed before we were born. This is where he fits into our story. We are still trying to solve the Problem of Evil, and it would be helpful to overcome the Problem of Free Will. And so, we will attempt to defend the existence of souls. After all, souls are non-physical, and so they cannot be affected by the laws of nature. As such, determinism (or quantum indeterminacy) has no bearing on our souls. Our souls are free. That is, if we have souls, then they are free. So, here it is... DILEMMA #8: Do we have souls?

 

 

 

Executive Summary

  • Although Plato's work ranges across many fields of inquiry, in this lesson we enter Plato's philosophical system through his views on mathematics.

  • With regards to mathematics, Plato believed that mathematical objects are more fundamental than physical reality itself. This means that mathematical objects are not mere ideas (conceptualism) and they are not merely physical objects (physicalism). Instead, mathematical objects exist independently of humans; they are eternal and unchanging. Mathematical objects, in fact, are the basis on which physical reality is ordered. There is one level of existence higher than that of mathematical objects: the third realm, the realm of The Forms. The study of mathematics is the gateway by which we understand reality as it really is. This is called Platonism about mathematics.

  • Although Platonism about mathematics might sound strange to some, many of the thinkers we've covered so far seem to have implicitly believed it.

  • We also covered various lenses by which one can interpret Plato, including as a solver of metaphysical puzzles, a math fanatic, a conservative reactionary, a mystic, an authoritarian, and a moral teacher.

FYI

Suggested Reading: Plato, Book VIII of The Republic

TL;DR:

 

Supplemental Material—

Related Material—

 

Footnote

1. The sophists were probably more scholarly and less mercenary than Plato makes them out to be. For instance, sophists generally preferred natural explanations over supernatural explanations (i.e., positivism) and this preference might’ve been an early impetus for the development of what would eventually be science. Nonetheless, sophists would often argue that matters of right or wrong are simply custom (nomos). Although this view—which is called subjectivism—is a respectable view in ethics and aesthetics today, the sophists posited it in a somewhat crude way.

 

 

The Mind/Body
Problem

 

Are you not ashamed that you give your attention to acquiring as much money as possible, and similarly with reputation and honor, and give no attention or thought to truth and understanding and the perfection of your soul?

~Plato

 

What are we?

At the heart of the debate that we'll be covering today is a simple question: what are we? Some of you, if you don't think about it too much (but still try to answer the question), might suggest a variety of physical things, like your brain or maybe a part of your brain (not the whole thing). You're unlikely, for example, to answer that you are your foot or your elbows. If you follow this line of reasoning, you probably believe that you are really equivalent to your mind, and that your mind is in some way related to your brain. Even if you agree with the analysis so far, there are various competing materialist positions, materialism being the view that the brain is a sophisticated material thing that produces consciousness. Some thinkers even think that you are your brain plus your environment. On this view, having a brain isn't enough for being a self. You must react to your environment to be a self. You are necessarily a relational self; you obviously wouldn't be you without the environment that shaped you. See Olson (2007) for a sense of just how many materialist positions there are.

"If we are indeed made of matter, or of anything else, we can ask what matter or other stuff we are made of. Most materialists say that we are made of all and only the matter that makes up our animal bodies: we extend all the way out to the surface of our skin (which is presumably where our bodies end) and no further. But a few take us to be considerably smaller: the size of brains, for instance. Someone might even suppose that we are material things larger than our bodies—that we are made of the matter that makes up our bodies and other matter besides" (Olson 2007: 4).

 

Diagram from Descartes' 1644 Principles of Philosophy
Diagram from Descartes' 1644
Principles of Philosophy.

But some of you reading this might take an entirely different position altogether. You think it is mistaken to look for your self in material objects like the brain. You believe that you are really equivalent to your soul. There is less disagreement, I suppose, on what a soul is. Better said, there's at least agreement on what a soul is not: it's not physical and it will not die when the body dies. If you believe that minds are really souls, in other words that there are physical things in the world but also non-physical souls, then you endorse a view called substance dualism. But dualism is not without its problems. Historically, the biggest challenge for (and debate within) the dualist camp has been to give a satisfactory account of how the soul interacts with the body (see the Suggested Reading). Because these debates originated at a time when most of the relevant thinkers believed that the soul was the mind, the two words were used interchangeably. This is why we refer to arguments that try to address the relationship between the soul and the body as arguments attempting to solve the mind/body problem.

Back to Plato...

Oftentimes, people fail to see the relevance of Plato to this course, so I'd like to make it clear now. Plato and his theory of Forms leave us with a puzzle: how can we ever come to know the Forms if we've never seen them before? Forms are non-physical, so we can't see them with our eyes, but the way we experience the world appears to be through our senses. We need to see things to come to know them, and we can only see physical things! But Plato argued that, through the use of reason, we could come to know the true nature of reality. How?

Plato argued that we recognize the Forms because our souls existed before we were born. They existed in the same way in which the Forms exist. Let's call this Platonic Heaven. That is when we came to know the Forms: in Platonic Heaven. When we are born, we forget what we knew. And it is through the power of reason that we can recollect our knowledge of the Forms. In other words, we already know the truth; we just have to remember. We call this the argument from recollection. But the key component in Plato's theory is the belief in the existence of souls.

Notice for a moment how convenient it would be for us if Plato's theory were true. If there really are Forms, then there really is a Form of the Good. This means we can solve Dilemmas #5, #6, and #7. These are all questions in the field of ethics, and once we've recollected the Form of the Good, we could dispense with these questions. We could also solve Dilemma #4 (Do we have free will?). This is because the main threat to (libertarian) free will we're covering comes from the laws of nature either determining our actions or making them random. But if we are fundamentally non-physical (i.e., if we are souls), then natural laws do not affect us. We are, in a sense, immune from the dictates of natural law. And so we are free. If we can defend (libertarian) free will, then that takes us a long way towards solving Dilemma #3 (Does God exist?). We could argue that many kinds of unnecessary suffering come from human free will. This would take much of the sting out of the Problem of Evil. If we could successfully defend God's existence, we could defend Descartes' view that reason is the foundation of all knowledge (Rationalism) over Locke's view that all knowledge comes from the senses (Dilemma #2). And lastly, we could solve Dilemma #1 (What is knowledge?); we can defend Descartes' foundationalism over Bacon's pragmatism and Locke's empiricism. We can finally escape the pit of skepticism.

 

The Board

 

What type of thing are we?

A valuable skill that you learn when studying philosophy is keeping track of the level of analysis you are working at. The question we are asking now has to do with what type of thing we are. Put more bluntly: are we physical or non-physical? Are we souls or not? As we've seen, an answer in the affirmative can significantly affect all the past problems covered in this course. This debate, then, will occupy us for the rest of the lesson. Call this Dilemma #8: Do we have souls?

 

Decoding Dualism

###video goes here

 

Sidebar

There are some responses to the mind/body problem that are worthy of mention, although I believe they are only of historical significance. What I mean by this is that they do seem to, in a way, "solve" the problem, but I don't think that they are terribly convincing to anyone nowadays. Here's the general response. The argument that we're calling the mind/body problem implicitly assumes interactionism about the soul and body. In other words, the primary assumption is that the soul and body can and do causally affect each other. But this isn't the only show in town. Another option is occasionalism. Occasionalism is the view that mental states (in the soul) are caused by God and physical changes in the body (desired by the soul) are also caused by God. In other words, there is no interaction between the soul and the body; God creates in our souls experiences appropriate to whatever situation our bodies find themselves in, and causes our bodies to do whatever our souls intend for our bodies to do. God is the causal go-between for body and soul.

But wait! There's more! According to parallelism, our mental and physical histories are coordinated so that mental events (in the soul) appear to cause physical events (and vice versa); but mind and body are like two clocks that are synchronized so that they chime at the same time. In other words, God doesn't have to constantly act as a go-between. On this view, the body and the soul were designed so that they would align for their entire existence. Each is "pre-destined" to do as it does—no need for interaction!

Obviously, there are quite a few problems here. With regards to parallelism, it doesn't seem to leave much room for (libertarian) free will. As far as occasionalism goes, it is not an appealing solution because it seems to limit God's omnipotence. Why would God have to constantly serve as the go-between for body and soul? Couldn't God have somehow made it so that the soul can interact with the body without the need for constant divine intervention?

 

 

Executive Summary

  • The existence of non-physical souls, a view called dualism, would be extremely convenient for the Cartesian foundationalist project we are attempting to defend in this course. It would both provide an avenue through which we can bypass the problem of free will and allow us to begin mounting a counterattack to the problem of evil—giving us a fighting chance in refuting this argument against the existence of God.

  • Dualism, however, is mired in problems. Our subjective consciousness seems to be very much correlated with our physical brain, lending little credence to the view that we are fundamentally non-physical souls. Moreover, dualism seems to be explanatorily inert in that it cannot provide a mechanism through which the soul exerts an influence over the body, a challenge known as the mind/body problem.

  • What we are calling materialism stands in opposition to dualism. It is the view that consciousness is a product of purely physical things, including (but not limited to) the brain, its sensory organs, the autonomic nervous system, etc.

FYI

Suggested Reading: Andy Clark, Some Backdrop: Dualism, Behaviorism, Functionalism, and Beyond

  • Note: This file includes the Introduction, Appendix, and Chapter 1 of Andy Clark’s book Mindware. The suggested reading is the Introduction and the Appendix. However, Chapter 1 may be of interest to some students.

TL;DR:

Supplemental Material—

 

 

 

Atoms and Void

 

By convention sweet is sweet, bitter is bitter, hot is hot, cold is cold, color is color; but in truth there are only atoms and the void.

~Democritus

 

What we are

According to the atomists, a school of thought that sprang up from the teachings of Leucippus and Democritus in the 5th century BCE, it's quite clear what we are. All there really is, fundamentally speaking, is indivisible bodies from which everything else is composed and the empty space in which these tiny little "atoms" swerve. Initially, these atoms swerved in a random and haphazard way. But over time, some regularities manifested themselves. And these patterns came to be more regular and more robust: they would persist over time. In more modern language, we might say that these bundles of atoms acquired the property of being self-organizing. Over time, these bundles became more and more complex. Eventually, there were animals, plants, rocks, and of course humans—along with their human minds. On this view, there is no need for talk of souls. It could all be explained with little tiny particles swimming in the void.

This view, that minds are the product of complex material bodies (rather than non-physical souls) is known as materialism. Here's a more formal definition. Materialism is the view that the only things that exist are things that occupy space and things whose existence depends on things that occupy space. Bodies, of course, occupy space. And our minds depend on our bodies. And that's it. Nothing else is accepted into the materialist ontology, as can be seen in the epigraph above.

As you can probably guess, materialism is typically associated with atheism (since God is presumed to be non-physical) and naturalism (the view that the only valid explanations of natural phenomena are natural explanations). So, we can see that various theories we're covering naturally coalesce into camps. Materialism, atomism, empiricism, compatibilism (or hard determinism), and atheism cluster together quite nicely. In the other camp, dualism, rationalism, libertarianism, and theism tend to be held in tandem. These are, in other words, worldviews that are at odds with each other. Which one is right?

Storytime!

The history of atomism (and materialism) is fascinating in its own right, regardless of whether you think these views are true or not. This is because atomism had its moment of glory during the Greco-Roman era, then went dormant, and then was re-discovered in the fifteenth century and helped usher in the Renaissance and Enlightenment (see Greenblatt 2011). I'll give you some of the highlights here.

Greenblatt's The Swerve

As already mentioned, atomism was first expressed by Leucippus and his student Democritus in the 400s BCE. As the idea developed, schools of thought sprang up that accepted atomism as a central tenet, as was the case with Epicureanism. The Epicureans believed not only in atomism but also in hedonism (the view that the only thing that is good for its own sake is pleasure) and, of course, materialism. This view was well-defended by various devotees of Epicurus. We actually have a philosophical poem written by Lucretius, On the Nature of Things, which is an elegant defense of Epicureanism in general.

But Epicureanism was quite controversial. This is because, among other things, the Epicureans were, by the standards of their day, very impious (or irreligious). In fact, Epicurus himself denigrated ancient pagan religions. He argued that, because pleasure is the only good, if there are gods, then they would only be concerned with the pursuit of pleasure and would not bother with the affairs of humans. Epicurus also provided a type of philosophical therapy aimed at living well and not concerning yourself with death—a set of teachings that probably filled a spiritual void in the lives of his followers. So, not only was Epicurus ridiculing the religious practices of his day, but he was providing an alternative way of life(!).

Then came the rise of Christianity. In their opposition to Epicureanism, Christians took an interesting tack: they distorted Epicurus’ views. This is the moment in history when Epicurus began to be depicted as living a life of indulgence: binging on wine and food, living only for today, etc. And so the prestige of Epicureanism, along with that of many other "pagan" philosophies, petered out. Epicureanism, along with atomism, lay dormant for a thousand years.

But things wouldn't stay that way. In 1417, Poggio Bracciolini discovered a copy of Lucretius' defense of Epicureanism—the poem I mentioned earlier. When this poem was rediscovered, interest in atomist philosophy resurged. Some scholars, in fact, attribute much of the spirit of the Renaissance and Enlightenment to a rekindled respect for this ancient "pagan" philosophy. This is not mere speculation. Various influential thinkers are known to have had copies of Lucretius' poem. For example, Niccolo Machiavelli, Thomas More, and Giordano Bruno all read the poem. Shakespeare had a copy of Lucretius. Michel de Montaigne was also very much influenced by Lucretius. Even Galileo’s work shows traces of atomism, as has been confirmed by recent analyses of his original texts (which are held in the Vatican archive). We owe a great deal to the atomist philosophers.

Decoding Materialism

###video goes here

Closing comments

Even though most scientists who study the mind assume something like materialism, there is still widespread disagreement about how we should understand the nature of our consciousness and, in particular, how it is that material brains can produce conscious mental states. This, by the way, is known as the hard problem of consciousness, and it is still an open question...

Is this to say that dualism still has a chance? It depends on what the standard is. If we want definitive proof that it's false, then dualism is still technically standing, since there is no definitive argument that souls do not exist. Instead, there are epistemic challenges to dualism that people who don't believe in souls think the dualists have failed to answer. If, instead, we are okay with a balance-of-evidence answer, then dualism isn't the favored theory. Most philosophers back something like materialism (see Brown and Ladyman 2019).

But the debate isn't over, and there are still people arguing on both sides. And out of this debate was born a new debate: a debate about the potential of creating artificial minds...

 

 

 

 

Executive Summary

  • What we are calling materialism stands in opposition to dualism. It is the view that consciousness is a product of purely physical things, including (but not limited to) the brain, its sensory organs, the autonomic nervous system, etc.

  • Materialism is not without its woes. In particular, it is unclear just how physical things produce consciousness, a problem which is referred to as the hard problem of consciousness. In this lesson, we covered behaviorism and the mind/brain identity theory—both to no avail.

  • We also looked at two higher level problems with materialism. It looks like their view of reality, that all there is to reality is atoms and void, might be too simple. In particular, recent developments in physics (quantum mechanics and relativity theory) make us wonder whether materialism can really carry the weight of a theory of consciousness.

FYI

Suggested Reading: Robin Gordon Brown and James Ladyman, Chapter 1 of Materialism: A historical and philosophical inquiry

 

 

Universal Machines

 

It is possible to invent a single machine which can be used to compute any computable sequence.

~Alan Turing

 

Timeline I: The History of
Logic and Computation

 

Entscheidungsproblem

Our story begins in 1936, but, as you must've learned by now, all history is prologue. Choosing a starting point is arbitrary since everything that happens is influenced in some way by what happened before it. To understand what happened in 1936, you have to understand what happened at the turn of the 20th century.

In 1900, the German mathematician David Hilbert presented a list of 23 unsolved problems in mathematics at the International Congress of Mathematicians. Given how influential Hilbert was, mathematicians began their attempts at solving these problems. The problems ranged from relatively simple ones that mathematicians knew the answer to but hadn't yet formally proved, all the way to vague and/or extremely challenging ones. Of note is Hilbert's second problem: the continuing puzzle over whether it could ever be proved that mathematics as a whole is a logically consistent system. Hilbert believed the answer was yes, and that it could be proved through the building of a logical system, also known as a formal system. More specifically, Hilbert sought to give a finitistic proof of the consistency of the axioms of arithmetic.1 His approach came to be known as formalism.

"Mathematical science is in my opinion an indivisible whole, an organism whose vitality is conditioned upon the connection of its parts. For with all the variety of mathematical knowledge, we are still clearly conscious of the similarity of the logical devices, the relationship of the ideas in mathematics as a whole and the numerous analogies in its different departments. We also notice that, the farther a mathematical theory is developed, the more harmoniously and uniformly does its construction proceed, and unsuspected relations are disclosed between hitherto separate branches of the science. So it happens that, with the extension of mathematics, its organic character is not lost but only manifests itself the more clearly" (from David Hilbert's address to the International Congress of Mathematicians).

As Hilbert was developing his formal systems to try to solve his second problem, he (along with fellow German mathematician Wilhelm Ackermann) proposed a new problem: the Entscheidungsproblem. This problem is simple enough to understand. It asks for an algorithm (i.e., a recipe) that takes as input a statement of first-order logic (like the kind developed in the PHIL 106 course) and answers "Yes" or "No" according to whether the statement is universally valid or not. In other words, the problem asks if there's a program that can tell you whether some argument (written in a formal logical language) is valid or not. Put another (more playful) way, it's asking for a program that can do your logic homework no matter what logic problem I assign you. This problem was posed in 1928. Alan Turing settled it in 1936.
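
To make the request concrete, here's a minimal sketch of my own in Python (nothing like this appears in the readings) of a decision procedure for the propositional fragment of logic, which, unlike full first-order logic, is decidable: just enumerate every assignment of True/False to the sentence letters and check whether the formula comes out true on all of them. The Entscheidungsproblem asked whether something analogous could be done for all of first-order logic.

```python
from itertools import product

def is_valid(formula, letters):
    """Check whether a propositional formula is valid (true on every
    truth-value assignment) by brute-force truth tables.

    `formula` is a Python function taking one truth value per sentence
    letter; `letters` just tells us how many letters there are.
    """
    for values in product([True, False], repeat=len(letters)):
        if not formula(*values):
            return False  # found a counterexample assignment
    return True

# Modus ponens, written as a single conditional: ((P -> Q) and P) -> Q.
# "P -> Q" is rendered as "not P or Q".
modus_ponens = lambda P, Q: not ((not P or Q) and P) or Q

# Affirming the consequent: ((P -> Q) and Q) -> P. Not valid.
affirming_consequent = lambda P, Q: not ((not P or Q) and Q) or P

print(is_valid(modus_ponens, ["P", "Q"]))          # True
print(is_valid(affirming_consequent, ["P", "Q"]))  # False
```

The truth-table trick works here only because propositional logic gives us finitely many assignments to check; what Turing showed is that no analogous recipe covers first-order logic as a whole.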

Alan Turing
Alan Turing.

Most important for our purposes (as well as for the history of computation) is not the answer to the problem, i.e., whether an algorithm of this sort is possible or not. Rather, what's important is how Turing solved this problem conceptually. Turing solved this problem with what we now call a Turing machine—a simple, abstract computational device intended to help investigate the extent and limitations of what can be computed. Put more simply, Turing developed a concept such that, for any problem that is computable, there exists a Turing machine. If it can be computed, then Turing has an imaginary mechanism that can do the job. Today, Turing machines are considered to be one of the foundational models of computability and (theoretical) computer science (see De Mol 2018; see also this helpful video for more information).
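
Here is a minimal Turing machine simulator in Python (my own illustrative sketch, not code from Turing or from the readings), just to show how little machinery the concept requires: a tape, a read/write head, and a finite table of rules of the form (state, symbol) to (symbol to write, direction to move, next state). The toy machine below merely flips every 0 to 1 and vice versa, then halts.

```python
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    """Simulate a one-tape Turing machine.

    `rules` maps (state, symbol) -> (symbol_to_write, move, next_state),
    where move is "L" or "R". The machine halts when it enters the state
    "halt" or when no rule applies to the current (state, symbol) pair.
    """
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if state == "halt" or (state, symbol) not in rules:
            break
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[position] for position in sorted(tape))

# A tiny machine that inverts a binary string, then halts at the first blank.
flip_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(flip_rules, "10110"))  # prints 01001_
```

Everything interesting lives in the rule table; swap in a different table and the very same simulator computes something else. That, in the spirit of the epigraph above, is the sense in which a single "universal" machine can run any program you feed it.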

A representation of a Turing machine
A representation of
a Turing machine.

What, then, was the answer to the Entscheidungsproblem? Turing proved that there cannot exist any algorithm that solves it; hence, mathematics will always contain undecidable (as opposed to merely unknown) propositions.2 This is not to say that Hilbert's challenge was for nothing. It was the work of the great mathematicians Hilbert, Frege, and Russell (in particular, Russell’s theory of types) in their attempts to show that mathematics is consistent that inspired Turing to study mathematics more seriously (see Hodges 2012, chapter 2). And if you value the digital age at all, you will value the work that led up to our present state.

Today, the conceptual descendants of Turing machines are alive and well in all sorts of disciplines, from computational linguistics to artificial intelligence. Shockingly relevant to all of us is that simple machines of this kind can be built to perform basic linguistic tasks mechanically. In fact, the predictive text and spellcheck featured in your smartphones use this basic conceptual framework. Pictured below you can see transducers, also known as automata, used in computational linguistics to, for example, check whether the words "cat" and "dog" are spelled correctly and are in either the singular or plural form. The second transducer explores the relationship between stem, adjective, and nominal forms in Italian.3 And if you study any computational discipline, or one that incorporates computational methods, you too will use Turing machines.
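
To give you a feel for the first automaton pictured below, here's a hand-rolled finite-state acceptor in Python (my own sketch; the actual diagrams were built with Foma, per the footnote). It recognizes exactly the words "cat", "cats", "dog", and "dogs"; a real spellchecker is, at bottom, a much larger table of states and transitions of the same kind.

```python
def accepts(word):
    """A tiny finite-state acceptor for the lexicon {cat, cats, dog, dogs}.

    States are plain strings; each transition consumes one letter. Ending
    in an accepting state with no letters left means the word is "spelled
    correctly" as far as this miniature lexicon is concerned.
    """
    transitions = {
        ("q0", "c"): "c1", ("c1", "a"): "c2", ("c2", "t"): "stem",
        ("q0", "d"): "d1", ("d1", "o"): "d2", ("d2", "g"): "stem",
        ("stem", "s"): "plural",
    }
    accepting = {"stem", "plural"}
    state = "q0"
    for letter in word:
        state = transitions.get((state, letter))
        if state is None:
            return False  # no transition: the word falls off the machine
    return state in accepting

for word in ["cat", "dogs", "cats", "dgo", "catss"]:
    print(word, accepts(word))  # True, True, True, False, False
```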

 

An automaton that can spell dog(s) and cat(s)

A much more complicated automaton

 

 

Timeline II:
Turing from 1936 to 1950

 

Important Concepts

 

Decoding Artificial Intelligence

###video goes here

 

 

Turing Test

A Turing test
A Turing test.

With no conceptual impediment to stop machines from being intelligent, Turing began to dream up ways of testing machines for this most human of traits. He was clearly influenced by the behaviorism of the age, as well as by some philosophical movements that were popular at the time. Most of the members of the Vienna Circle, for example, subscribed to the verification theory of meaning, which claimed that a statement is meaningful if and only if it can be proved true or false, at least in principle, by means of some sensory experience.4 And so, likely influenced by these trends in psychology and philosophy, Turing thought of a purely behavioral (and thus measurable) test for intelligence.

How will we know if a machine is intelligent? A Turing test is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. It is modeled after the imitation game. In the imitation game, one man and one woman are in one room, while a judge is in a second room. The man and the woman pass notes to the judge without being seen or heard by the judge. The judge can ask questions of both the man and the woman, and they must respond. In Turing's telling, the woman is supposed to be honest and to help the judge figure out who is the woman and who is the man. The man's role is to deceive: he must trick the judge into thinking that he's the woman. If he can do this, he wins the imitation game.

The Turing test is the machine version of this imitation game (see image). There is a judge who is communicating with both a human and a computer pretending to be a human. The human is supposed to be honest, saying things like, "I'm the human." The machine is lying. If the machine can trick the judge into thinking that it is human (at least a certain percentage of the time), then the machine passes the Turing test. Here's a question for you to ponder: If not via the Turing test, how would we know whether a machine is intelligent or conscious? If you're stumped, then you've just come face-to-face with what's known as the problem of other minds: since consciousness can only be experienced from the first-person perspective, you cannot know for certain that anyone else is conscious(!).
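
Since the Turing test is really just a protocol, it can be written down as one. Here is a toy sketch in Python (entirely my own; the `judge`, `human_reply`, and `machine_reply` functions are made-up placeholders you would have to supply): the judge questions two anonymous channels and then guesses which one is the machine, and the machine "wins" the round if the guess is wrong.

```python
import random

def run_turing_test(judge, human_reply, machine_reply, questions):
    """One round of a toy Turing test.

    The judge sees two anonymous channels, "A" and "B", gets a transcript
    of answers from each, and guesses which channel is the machine. The
    round is passed (by the machine) if the judge guesses wrong.
    """
    channels = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:  # shuffle so the labels carry no information
        channels = {"A": machine_reply, "B": human_reply}
    transcripts = {
        name: [(question, respond(question)) for question in questions]
        for name, respond in channels.items()
    }
    guess = judge(transcripts)  # the judge returns "A" or "B"
    truth = "A" if channels["A"] is machine_reply else "B"
    return guess != truth       # True means the machine fooled the judge

# A toy run: both respondents say the same thing, and a lazy judge always
# guesses "A", so the machine fools the judge about half the time.
human = lambda question: "I'm the human, I promise."
machine = lambda question: "I'm the human, I promise."
judge = lambda transcripts: "A"
print(run_turing_test(judge, human, machine, ["What are you?"]))
```

A real evaluation would repeat this over many judges, many conversations, and much harder questions, and then ask how often the machine was misidentified. Notice that nothing in the protocol ever peeks inside the machine, which is exactly the behaviorist spirit Turing inherited.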

 

 

Decoding Turing

 

Timeline III: Turing's end

 

Artificial Intelligence today...

In a recent study, researchers presented machine learning experts with 70 occupations and asked them how likely it is that each could be automated. The researchers then used a probability model to project how susceptible to automation an additional 632 jobs were. The result: it is technically feasible to do about 47% of jobs with machines (Frey & Osborne 2013).

The jobs at risk of robotization are both low-skill and high-skill jobs. On the low-skill end, it is possible to fully automate most of the tasks performed by truckers, cashiers, and line cooks. It is conceivable that the automation of other low-skill jobs, like those of security guards and parcel delivery workers, could be accelerated due to the global COVID-19 pandemic.

High-skill workers won't be getting off the hook either, though. Many of the tasks performed by a paralegal could be automated. The same could be said for accountants.

Are these jobs doomed? Hold on a second... The researchers behind another study disputed Frey and Osborne’s methodology, arguing that it isn't entire jobs that will be automated but rather separate tasks within those jobs. After reassessing on a task-by-task basis, their result was that only about 38% could be done by machines...

It gets worse...

 

 

 

Executive Summary

  • In 1936, Alan Turing proposed an abstract computational mechanism in order to solve a problem proposed by the German mathematicians David Hilbert and Wilhelm Ackermann. We now know this as a Turing machine, and it is a foundational concept in theoretical computer science.

  • Soon after Turing's 1936 paper, he (and others) began to think that Turing machines might be useful models for how the mind works. Then, rapid progress in computer science prompted many to contemplate whether it was possible to build a computer capable of thought.

  • Seeing no viable theoretical objections to the possibility of a machine capable of thought, Turing began to consider how to test for a machine's capacity to 'think'. Given the influence of behaviorism as well as that of a group of philosophers known as the Vienna Circle, Turing went with a purely behavioral test: check to see if a machine's behavior is indistinguishable from that of a human. We now call this the Turing test.

  • The field of artificial intelligence has taken off as of late and is showing signs that it might be both socially and economically disruptive.

  • Turing, despite being a war hero and a computer science pioneer, was arrested in 1952 for “gross indecency” (since acting on homosexual desires was illegal in the UK at the time). To avoid imprisonment, Turing accepted probation along with chemical castration (the administration of anaphrodisiac drugs to reduce libido and sexual activity). In 1954, Turing was found dead from an apparent suicide.

FYI

Suggested Reading: Alan Turing, Computing Machinery and Intelligence

TL;DR: Crash Course, Artificial Intelligence and Personhood

Supplemental Material—

Advanced Material—

 

Footnotes

1. It was Kurt Gödel, whom we covered in the lesson titled 2 + 2 = 4(?), who dealt the deathblow to this goal of Hilbert's. This was done through Gödel's incompleteness theorems, published in 1931. For more info, you can check out the Stanford Encyclopedia of Philosophy entry on Gödel's incompleteness theorems or this helpful video.

2. Part of the inspiration for Turing's solution came from Gödel's incompleteness theorem (see Footnote 1). 

3. Both transducers pictured were made by the instructor, R.C.M. García, using a program called Foma. 

4. The view the Vienna Circle put forward was dubbed logical positivism. Among their tenets was a critique of metaphysics. Metaphysical statements are not empirically verifiable and are thus forbidden: they are meaningless. This is a result of their verification theory of meaning, which states that a statement is meaningful if and only if it can be proved true or false, at least in principle, by means of experience. In other words, if a statement isn’t empirically verifiable (or a logical truth), then it is worse than false; it is meaningless. The only role of philosophy, according to most members of the Vienna Circle, is the clarification of the meaning of statements and their logical interrelationships via the building of linguistic frameworks, i.e., theories. For more information, see Karl Sigmund's (2017) Exact Thinking in Demented Times: The Vienna Circle and the Epic Quest for the Foundations of Science.

 

 

The Chinese Room

 

Achilles:
That is a weird notion. It means that although there is frantic activity occurring in my brain at all times, I am only capable of registering that activity in one way—on the symbol level; and I am completely insensitive to lower levels. It is like being able to read a Dickens novel by direct visual perception, without ever having learned the letters of the alphabet. I can't imagine anything as weird as that really happening.

~Douglas Hofstadter1

 

What is Artificial Intelligence?

This is actually a difficult question to answer. Various theorists lament the “moving goal posts” for what counts as artificial intelligence (e.g., Hofstadter 1979: 26). The basic complaint is that once some particular task which appears to be sufficient for intelligence (of some sort) has been mastered by a computer, then (all of a sudden) that particular task is no longer considered sufficient for intelligence. Hofstadter even has a name for this tendency, which he named after Larry Tesler—apparently the first person to point out the trend to him. Tesler's theorem states that AI is whatever hasn't been done yet (ibid., 621).

There are two things to point out here. First, the goal of AI research really has evolved over time. In the early days of AI, the goal really was to create a machine that was capable of thinking (even if people couldn't agree on what thinking meant). Today, AI researchers (in general) define the goal of their field much more narrowly. For example, Eliezer Yudkowsky (2015) defines intelligence, very abstractly, as the capacity to attain goals in a flexible way. So, AI researchers typically work within a narrow domain: perhaps they work on image recognition, or natural language processing, or sentiment analysis, etc. In other words, today's AIs are essentially good at one, or perhaps a handful, of tasks; they're not capable of solving problems in general (the way humans are). If researchers are working on developing an artificial intelligence that is good at solving problems generally, then they are said to be working on an artificial general intelligence. But, according to AI pioneer Marvin Minsky, most aren't working on that. Minsky, in fact, lamented the direction of the field of AI until the day he died in 2016 (see Sejnowski 2018, chapter 17).

Second, there certainly is a lack of coordination between different disciplines when it comes to standardizing their jargon. Take, for example, the dominant paradigm of AI research from the mid-1950s until the late 1980s: symbolic artificial intelligence. Symbolic artificial intelligence is an umbrella term that captures all the methods in artificial intelligence research that are based on high-level "symbolic" formal procedures. It is based on the assumption that many or all aspects of intelligence can be achieved by the manipulation of symbols, as in first-order logic; this assumption was dubbed the “physical symbol system hypothesis” (see Newell and Simon 1976). However, this is just the predecessor of a view advocated by the philosopher Jerry Fodor—a view he called the representational theory of mind, where Turing-style computation is performed on a language of thought. Notice the proliferation of labels(!): the physical symbol system hypothesis, the representational theory of mind. So, given this muddy conceptual landscape, it is perhaps not too surprising that a solid conception of what intelligence is was never settled on.

 

The rise and fall of the physical symbol systems hypothesis

AI pioneers at Dartmouth College
AI pioneers at
Dartmouth College.

The field of AI is said to have begun in the summer of 1956 at Dartmouth College. That summer, ten scientists interested in neural nets, automata theory, and the study of intelligence convened for six weeks and initiated the study of artificial intelligence. During this early era of AI, the practice typically involved refuting claims about what machines could not do in limited domains. For example, it was said that a machine could not construct logical proofs, and so AI practitioners wrote a program called the Logic Theorist, which proved most of the theorems of chapter 2 of Whitehead and Russell’s Principia Mathematica. It even produced one proof that was more elegant than Whitehead and Russell’s own.

During these early days, at least some AI researchers believed in the physical symbol system hypothesis, hereafter PSSH. In a nutshell, PSSH is the view that Turing-style computation over symbols is all that is needed in order to complete some intelligent task. Moreover, any system that could engage in this symbol-crunching to perform some task could be said to be intelligent (see Newell and Simon 1976: 86-90). In other words, any machine that could process the code that performs some task has some kind of intelligence. In fact, some practitioners who assumed the physical symbol system hypothesis (e.g., Newell and Simon) believed that once they programmed their machines to perform some particular task, then the code for that task is the explanation of the processes behind that task. In other words, if you want to understand how some task is performed, looking at the code for a machine that can perform that task is as good as any explanation you're going to find.
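
To see what "Turing-style computation over symbols" looks like in practice, here's a minimal sketch of my own in Python (it is emphatically not the Logic Theorist, just a toy in the same spirit): a handful of if-then rules and a loop that mechanically applies modus ponens until nothing new follows. On the PSSH, pushing symbols around in this way is, at least in principle, the kind of thing intelligence consists in.

```python
def forward_chain(facts, rules):
    """Derive everything that follows from `facts` by applying `rules`.

    `facts` is a set of atomic symbols (strings); `rules` is a list of
    (premises, conclusion) pairs read as "if all the premises hold, then
    the conclusion holds." Rules are applied until nothing new appears.
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)  # modus ponens, applied mechanically
                changed = True
    return facts

rules = [
    (["human"], "mortal"),
    (["mortal", "greek"], "discussed_in_philosophy_class"),
]
print(forward_chain({"human", "greek"}, rules))
# {'human', 'greek', 'mortal', 'discussed_in_philosophy_class'} (order may vary)
```

Note that the program doesn't understand "mortal" any more than your calculator understands arithmetic; whether this kind of symbol-crunching could ever amount to intelligence is exactly what Searle will challenge below.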

Clearly, though, reading the code that enables some task to be performed by a machine, say playing chess, does not actually help you understand the mental processes behind playing chess. It's a little like my handing you some brain scan printouts of your own brain and then asking you what you were thinking at that moment. Sure, your thoughts are had by your brain. But even if you could see all the different patterns of neurons being activated, you wouldn't really know what precise thought that activity corresponds to. Try it yourself. Can you tell what's being thought of in the image below? My guess is "no". So the advocates of the physical symbol system hypothesis were lacking when it came to what they meant by "explanation".

 

fMRI scans

 

There are various reasons why PSSH is no longer the dominant paradigm in AI research. One of them is practical. There was an upper limit to what could be accomplished under the physical symbol system hypothesis. Everything that a machine did had to be meticulously programmed by a team of researchers. This is a bit of an oversimplification, but basically the machine was only as smart as the programmers who coded it.

There were also philosophical/theoretical objections to the physical symbol system hypothesis. One of the most famous was made by John Searle. Searle asks you to imagine yourself in a room with a large rule book, receiving messages from the outside world through a little slot in the wall. All the messages are in Mandarin Chinese (and let's pretend, for the sake of the example, that you don't know anything about Mandarin). Your task is to receive a message from the slot, find the symbol you just received in the rule book, and then follow the arrows to see which symbol you are supposed to respond with. Once you've figured out which symbol should be output, you send that symbol out through the slot and wait for a new message. As long as you are performing your role correctly, any outsider would think that you are a well-functioning machine. But ask yourself this: would you have any idea what's going on? Would you understand a single word of the messages going in and out? Of course not.
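
Mechanically, the person in the room is doing nothing more than table lookup. Here is a deliberately silly sketch in Python (mine, not Searle's; the "squiggle"/"squoggle" entries are made-up placeholders standing in for Chinese characters): the program returns the "right" replies without anything in it understanding anything.

```python
# The rule book: shape received through the slot -> shape to send back out.
# The entries are placeholders standing in for Chinese characters.
RULE_BOOK = {
    "squiggle": "squoggle",
    "squoggle": "squiggle-squiggle",
    "blotch": "squoggle",
}

def chinese_room(incoming_shape):
    """Do exactly what Searle's occupant does: look the shape up in the
    rule book and return whatever it dictates, with zero comprehension."""
    return RULE_BOOK.get(incoming_shape, "squiggle")  # default reply if unmatched

print(chinese_room("squiggle"))  # squoggle
```

The intuition Searle is pumping is that scaling this table up, even to something that passes for fluent conversation, adds speed and coverage, not understanding.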

“It seems to me obvious in the example that I do not understand a word of Chinese stories. I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing. Schank’s computer, for the same reasons, understands nothing of any stories, whether in Chinese, English, or whatever…” (Searle 1980: 186).

What does Searle conclude? Searle clearly believes that machines can perform narrowly-defined tasks (i.e., weak AI). But Searle, through his thought-experiment, is denying the possibility of strong artificial intelligence under the assumption of the physical symbol system hypothesis. In other words, symbolic AI will never give you thinking machines.

“Whatever purely formal principles you put into the computer will not be sufficient for understanding, since a human will be able to follow the formal principles without understanding anything, and no reason has been offered to suppose that they are necessary or even contributory, since no reason has been given to suppose that when I understand English, I am operating with any formal program at all” (Searle 1980: 187).

 

Representation of Chinese Room Thought-experiment

 

 

But that was 1980...

Today, we are in the age of machine learning. Machine learning (ML) is an approach to artificial intelligence in which the machine develops its own algorithm from training data, rather than having every rule hand-coded by programmers. Sejnowski claims that ML became the dominant paradigm in AI in 2000. Since 2012, the dominant method in ML has been deep learning. The most distinctive feature of deep learning is its use of artificial neural networks, as opposed to the hand-coded, Turing-style symbol manipulation of symbolic AI. Please enjoy this helpful video to wrap your mind around neural networks:

 

 

To be clear, deep learning has been around since the 1960s (with ML in general being around since the 1950s) as a rival to the symbolic artificial intelligence paradigm. Interestingly, according to Sejnowski (2018, chapter 17), it was the work of an AI pioneer, Marvin Minsky, that steered the field of AI away from the statistically-driven approach of machine learning and towards deterministic Turing-style computation. In particular, Sejnowski argues that Minsky’s (erroneous) assessment of the capabilities of the perceptron (an early machine learning algorithm) derailed machine learning research for a generation. It was not until the late 1980s that the field of AI gave ML a second chance—and it has had radical success. ML, unlike symbolic AI, is based on subsymbolic statistical algorithms and was inspired by biological neural networks. In other words, AI researchers are plagiarizing Mother Nature: they are taking inspiration from how neurons in a brain are connected. If you've ever heard the phrase "Neurons that fire together wire together", that Hebbian idea is part of the inspiration behind these algorithms. This is why they are called artificial neural networks. Early neural networks had at most one hidden layer, but deep neural networks today have several hidden layers (see image). And the results are shocking...
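
To make the contrast with hand-coded symbolic rules vivid, here is a sketch of the perceptron mentioned above in plain Python (my own toy illustration, not Rosenblatt's original implementation, and certainly not a deep network). Nothing in the program is told what "or" means; it nudges its weights whenever it guesses wrong until its outputs match the training examples.

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Train a single perceptron with the classic perceptron learning rule.

    `examples` is a list of (inputs, target) pairs with targets 0 or 1.
    Weights and bias start at zero and get nudged whenever the guess is
    wrong: learning from data rather than from hand-written rules.
    """
    n = len(examples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            guess = 1 if activation > 0 else 0
            error = target - guess
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Learn logical OR purely from labeled examples.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
weights, bias = train_perceptron(data)
for inputs, _ in data:
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    print(inputs, 1 if activation > 0 else 0)  # 0, 1, 1, 1
```

The deep networks discussed above stack many layers of units like this one and use a cleverer update rule (backpropagation), but the core idea is the same: the "program" is a set of learned weights, not a rule book.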

 

 

Important Concepts

 

Decoding Connectionism

 

 

The Chinese game of Go

 

Defining AI (Take II)

The deep learning revolution is just beginning, and we'll have to wait and see just what will happen. However, given the dominance of deep learning, we must attempt to redefine what we mean by AI once more. For his part, Yudkowsky (2008: 311) argues that artificial intelligence refers to “a vastly greater space of possibilities than does the term Homo sapiens. When we talk about ‘AIs’ we are really talking about minds-in-general, or optimization processes in general” (italics in original, emphasis added). We'll be assuming this definition for the rest of the lesson: AI refers to optimization processes.

And so symbolic AI is no longer dominant. It is now referred to as Good Old-Fashioned AI (GOFAI). Deep learning is in. But the future is uncertain. And there is reason to long for the good ol' days...

 

Best case scenario...

 

 

Various forms of automation

 

 

Sidebar: Scientific Management

The threat of automation is sometimes downplayed by those who, correctly, remind us that machine learning is only successful within very narrow domains. In other words, ML-trained robots are essentially good at one, or perhaps a handful, of things. This argument, however, misses a key development in labor in the 20th century. Thanks to Taylorism and Fordism, many manufacturing and industrial labor roles are themselves confined to very narrow domains. In the push to increase efficiency, scientific management made laborers dumber (as Adam Smith argued) and also more like machines, in that they basically perform just a few skills repetitively, day after day. Of course, a machine can do that. Moreover, even partial automation should worry you... It’s already the case that efficiency algorithms are being used in the workplace, not to replace workers but to replace managers. This approach takes Taylorism to the next level. Here's the context...

The "evolution" of management...

Frederick Taylor, 1856-1915
Frederick Taylor,
1856-1915.

To understand this issue, we must first understand the history of management. Taylorism, also known as "scientific management", is a factory management system developed in the late 19th century by Frederick Taylor. His goal was to increase efficiency by evaluating every individual step in a manufacturing process and timing the fastest speed at which it could be performed satisfactorily. He then added up the times needed to perform a given worker's set of tasks and set that as their time goal. In other words, workers had to perform their assigned tasks as quickly as was feasibly possible, consistently throughout the day, with no allowance for naturally occurring human fatigue. Taylor, and later Ford, also pioneered the breaking down of the production process into specialized, repetitive tasks.

Scientific management, however, may in fact have deeper and more sinister roots. In his The Half Has Never Been Told, Edward Baptist devotes chapter 4 to various aspects of slave labor in the early 19th century. Most relevant here is that the push and quota system, where slaves were whipped if they didn’t reach their daily goal and the goal was progressively increased as time passed, was devised during this time period. With the exception of the whipping, this is not terribly dissimilar from factory management in the 20th and 21st centuries. Workers are pushed with the threat of getting their hours cut if they don't perform at a level satisfactory to the managers, and the quota system is what led to such scandals as the Wells Fargo quota scandal.2

This is not just hyperbole. The Cold War, which has had numerous effects on social life in the United States, unsurprisingly also had an effect on management. During the late 1960s there was widespread agitation and unrest among the working classes and disenfranchised racial groups in America. Their radical message, at least that of the working class, was reminiscent of the views of a romantic early Karl Marx, emphasizing an engaged revolutionary stance that sought creative release, not the technocracy of the Soviet Union. Their fundamental complaint was that the threat of hot war with the Soviet Union had ushered in a military-industrial welfare state, with an emphasis on continued high-pace industrialization under a strict managerial hierarchy, in which workers submitted to the scientific management philosophies of Taylorism and Fordism. The work was alienating, and although it might have been justifiable during an actual hot war with a major power, no such war had come to be. So, the activists fought for an end to this approach (see Priestland 2010, chapter 11).

 

Back to the 21st century...

How does partial automation make things worse for us today? When viewed in the context of its history, scientific management is a way of maximally exploiting an employee's labor power. Emily Guendelsberger (2019) gives various examples of how companies are using efficiency algorithms for scheduling and micromanagement, with adverse effects on workers. Think about it. Your boss might be overbearing enough as it is. Now imagine that your boss is an automated system that can keep track of your daily tasks down to the microsecond. It drives people to physical collapse. Click here for an interview with Guendelsberger.

 

Food for thought...

 

Does this mean war?

Is it possible that a growing and disaffected lower class will revolt and attack the upper classes? It wouldn't be the first time this happened in history...

Perhaps there's something that can be done, however, as a sort of "pressure release". Although some political candidates have recently advocated for a universal basic income to address income inequality, Kai-fu Lee (2018, chapter 9) lays out his vision of a social investment stipend. A universal basic income, Lee argues, will only handle bare minimum necessities and will do nothing to assuage the loss of meaning and social cohesion that will come from a jobless economy. This is where Lee's social investment stipend comes in. These stipends could be awarded to compassionate healthcare workers, teachers, artists, students who record oral histories from the elderly, service-sector workers, botanists who explain indigenous flora and fauna to visitors, etc. By rewarding and raising the social status of those who promote social cohesion and human empathy, we can build an empathy-based, post-capitalist economy.

Or else... What's the alternative?

 

 

 

To be continued...

FYI

Suggested Reading: John Searle, Minds, Brains and Programs

TL;DR: 60-Second Adventures in Thought, The Chinese Room

Supplemental Material—

Related Material—

 

Turing's Test

 

We cannot quite know what will happen if a machine exceeds our own intelligence, so we can't know if we'll be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it.

~Stephen Hawking

 

Important Concepts

 

Superintelligence

One thing that people are disappointed about when I talk to them about superintelligent AI is that I can't tell them how a superintelligent AI might harm them. But, by definition, a superintelligence's reasoning is beyond that of any human. We cannot know how or why a superintelligence might harm us. We don't even know whether a superintelligence would want to harm us. The only aspect of superintelligence that we can speculate about is the process by which it might come about.1

Diagram from Bostrom (2014:63)
Diagram from Bostrom 2014:63.

A recurring topic in this area of research is that of an intelligence explosion. This occurs when machines finally surpass human-level intelligence. Call this point the singularity. Once the singularity has been reached, the machines themselves will be best equipped to program the next generation of machines. And that generation of machines will be more intelligent than the previous generation, and, again, will be best equipped to program the next generation of machines. After a few generations, machines will be much smarter than humans. And at a certain point, machine intelligence will rise exponentially, thus creating an irreversible intelligence explosion. This is one of the reasons why author James Barrat calls superintelligent AI our final invention (see Barrat 2013).

Routes to superintelligence

Bostrom lays out multiple possible routes to machine superintelligence, some of which are detailed below. Bostrom also points out that the fact that there are multiple paths to machine superintelligence increases the likelihood that it will be reached...

Genetic algorithms

Genetic algorithms are algorithms that mimic Darwinian natural selection. Through this process, the "fittest" individuals ("fittest" being defined by whatever criteria the researcher desires) survive to "produce" offspring that will populate the next generation. It is already the case that natural selection has given rise to intelligent beings at least once. (Look in the mirror.) Although computationally intensive, this approach might serve to make minds again, albeit this time digital ones. Moreover, by applying machine learning to this process, we can expedite natural history. This is because machines might be intelligent enough to skip all the evolutionary missteps that nature might've taken. Before too long, we may be able to reach the singularity, and an intelligence explosion will ensue.
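
For a sense of the mechanics (though certainly not of how one would evolve a mind), here is a toy genetic algorithm in Python, entirely my own sketch: bit-strings play the role of organisms, the fitness function simply counts 1s, and each generation the fitter half reproduces with random mutation.

```python
import random

def evolve(pop_size=20, genome_len=12, generations=40, mutation_rate=0.05):
    """A toy genetic algorithm: evolve bit-strings toward all 1s.

    Fitness = number of 1s in the genome. Each generation, the fitter
    half survives and produces mutated offspring. Nothing here is
    intelligent; selection plus variation does all the work.
    """
    def fitness(genome):
        return sum(genome)

    def mutate(genome):
        return [1 - gene if random.random() < mutation_rate else gene
                for gene in genome]

    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        offspring = [mutate(random.choice(survivors)) for _ in survivors]
        population = survivors + offspring
    return max(population, key=fitness)

print(evolve())  # very likely a list of (nearly) all 1s
```

Researchers who take this route seriously are, of course, imagining genomes and fitness functions unimaginably richer than counting 1s; the sketch only shows the selection-and-mutation loop itself.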

AI seeds

This is an idea that Turing himself promoted. The goal here is not to create an intelligent adult mind, but rather a child-like mind that is capable of learning. Through an appropriate training period, researchers will be able to arrive at the adult mind. This will be faster than the training of humans, though, since the machine will not have natural human impediments (like the need for sleep, food, etc.). Once the machine reaches human-level intelligence, it is only a matter of time before the singularity is reached.

“Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s? If this were then subjected to an appropriate course of education, one would obtain the adult brain” (Turing 1950: 456).

Whole-brain emulation

Perhaps it is even possible to emulate human intelligence in a computer without really knowing the basis of human intelligence. This could be done by first vitrifying a human brain (i.e., preserving it in a glass-like state) and then using high-precision electron microscopy to map out the brain's entire neural architecture. A sufficiently powerful computer would be able to process the data yielded by this method and emulate the functions of a human brain. Building off this, the singularity would be near indeed...

 

 

Why would any governments fund these kinds of research programs?

Bostrom reminds us that heads of state reliably enact programs that enable their nation to be competitive technologically, or, in some cases, to achieve technological supremacy. Our history is littered with examples. For example, China monopolized silk production from 3000 BCE to 300 CE. China also monopolized the production of porcelain (~600-1700 CE). More recently, the USA achieved military supremacy (or something close to it) with its atomic weaponry (1945-1949).

Relatedly, we have reason to believe that a war between the Great Powers will occur in the 21st century. In Destined for War, Graham Allison argues that in most historical cases where there is an established hegemonic power (e.g., the U.S.A.) and a rising competitor (e.g., China), the result is war. In Climate Wars, Gwynne Dyer details the existing war plans of the major powers for scenarios in which the effects of climate change begin to take hold, rendering certain geographical regions agriculturally barren and displacing entire populations. As you can see, there is a real threat of war in the near future. Moreover, as you will see later in this lesson, artificial intelligence would be an invaluable asset to have during a conflict. It's also the case that artificial intelligence may converge with nanotechnology, which would almost ensure that the possessor of these technologies would have absolute military supremacy (if not a global dictatorship). So the answer is simple: governments invest in AI because other governments do. The race for mastery over AI may be the race for the mastery of war.

Our only hope...(?)

The potential threat of this line of research necessitates that we devote resources to answering questions surrounding the possibility of superintelligence. One of the main problems, of course, is figuring out a way to know if the singularity has been reached. But prior to this, we must figure out how to know if a computer is intelligent at all. Is the Turing test our best bet? Unlikely. In fact, it's already been passed.

Clearly we need a better test...

 

 

 

Atoms controlled
by superintelligence...

 

 

The end...

 

 

 

What is to be done?

Preparing for an intelligence explosion...

In his chapter of Brockman's (2020) Possible Minds, Estonian computer programmer Jaan Tallinn reminds us that dismissing the threat of AI would be seen as ridiculous in any other domain. For example, if you’re on a plane and you’re told that 40% of experts believe there’s a bomb on board, would you wait around until the other 60% agreed, or would you get off the plane? Many share the alarm that Tallinn expresses over the lack of political will to invest more time and energy in safety engineering for AI. In his chapter of the same volume, physicist and machine learning expert Max Tegmark dismisses the accusation that this is fear mongering. Just as NASA thought out everything that might go wrong during its missions to the moon, he argues, the risk analysis of AI is simply the basic safety engineering that must be performed on all technologies.

Having said that, our political system looks to be highly dysfunctional and certainly not capable of mustering the political will to begin legislating on AI safety engineering. Moreover, elected officials are unlikely to be knowledgeable enough in the relevant science to debate the issues intelligently; apparently the average age in the Senate a few years ago was 61. And even if they were up to date on machine learning, the gridlock of the two-party system would block progress. In fact, some (including Los Angeles mayor Eric Garcetti) think that political parties are doing more harm than good.2

The idea that the American political system is not poised to address complicated issues is nothing new. Physicist Richard Feynman (1918-1988), who took part in the building of the atomic bomb as part of the Manhattan Project and taught at Caltech for many years, had this to say about what he called our "unscientific age":

"Suppose two politicians are running for president, and one goes through the farm section and is asked, 'What are you going to do about the farm question?'' And he knows right away—bang, bang, bang. [Then they ask] the next campaigner who comes through. 'What are you going to do on the farm problem?' 'Well, I don't know. I used to be a general, and I don't know anything about farming. But it seems to me it must be a very difficult problem, because for twelve, fifteen, twenty years people have been struggling with it, and people say that they know how to solve the farm problem. And it must be a hard problem. So the way I intend to solve the farm problem is to gather around me a lot of people who know something about it, to look at all the experience that we have had with this problem before, to take a certain amount of time at it, and then to come to some conclusion in a reasonable way about it. Now, I can't tell you ahead of time the solution, but I can give you some of the principles I'll try to use—not to make things difficult for individual farmers, if there are any special problems we will have to have some way to take care of them,' etc., etc., etc. Now such a man would never get anywhere in this country, I think... This is in the attitude of mind of the populace, that they have to have an answer and that a man who gives an answer is better than a man who gives no answer, when the real fact of the matter is, in most cases, it is the other way around. And the result of this of course is that the politician must give an answer. And the result of this is that political promises can never be kept... The result of that is that nobody believes campaign promises. And the result of that is a general disparaging of politics, a general lack of respect for the people who are trying to solve problems, and so forth... It's all generated, maybe, by the fact that the attitude of the populace is to try to find the answer instead of trying to find a man who has a way of getting at the answer" (Richard Feynman, Lecture III of The Meaning of It All).

 

Automation

Regardless of whether there is ever an intelligence explosion, it is certainly the case that automation will continue. As stated in the Food for Thought in the last lesson, some possible solutions include a universal basic income, Kai-Fu Lee's social investment stipend, and even Milton Friedman's negative income tax, the last of which is a conservative proposal (although there is some conservative support for a universal basic income too). Only time will tell whether our political system will be able to pass legislation that moves in any of these directions.

 

García's Two Cents

My own views are not the point of this lesson or this course, so I'll keep my comments brief here. I've been making the case, for several years now, that one direction we ought to move in is expanding the information-processing power of the population. We should, I believe, boost human intelligence as far as possible. For example, we should ensure the population is well-nourished, we should eliminate neurotoxins (such as lead), and we should engineer safe and effective nootropics (substances that enhance cognitive functioning). At the very least, we should make college free for all. The idea here is not linked to socialism or any partisan framework. It is simply the case, I believe, that problems like global climate change and the threat of AI will only be solved through human intelligence. And so we should widen the net we cast for human talent. Currently, too many minds are not being given the chance to contribute because they are dealing with problems that, I think, a rich industrial country should have ameliorated by now: lack of housing, food insecurity, environmental toxins, racial and gendered prejudice, etc.3

 

 

 


 


Executive Summary

  • Early in the history of the field of artificial intelligence, the dominant paradigm was the physical symbol systems hypothesis (PSSH). In a nutshell, this is the assumption that many or all aspects of intelligence can be achieved by the manipulation of symbols according to strictly defined rules, an approach to programming that rigorously specifies the procedure a machine must follow in order to accomplish some task.

  • In the 1980s, PSSH was replaced by connectionism as the dominant paradigm in AI research. Connectionism is the assumption that intelligence is achieved via neural networks (whether artificial or biological), where information is stored non-symbolically in the weights, or connection strengths, between the units of the network. In this approach, a machine is allowed to "update itself" (i.e., update the connection strengths between its neurons) so as to improve at some task over a training period, much like human brains rewire themselves during learning (a minimal sketch of this idea follows this summary).

  • Machine Learning (ML), which utilizes neural networks, has been the dominant method in AI since around 2000, with deep learning being the dominant form of ML since about 2012. ML has led to breakthroughs in the field of AI, primarily on narrowly defined tasks. Nonetheless, these breakthroughs are accumulating such that many tasks and jobs that humans used to perform are now being automated. We also covered four possible futures that further progress in AI might bring about: a loss of meaning in human activities, a jobless economy, the merging of AI and nanotechnology for nefarious ends, and "unfriendly" AI.

  • As of now, there is still a dispute about what mental states are, what mental processes are, and what constitutes intelligence. The hard problem of consciousness is far from solved, there is little political will to solve large-scale existential risks for humanity, and the clock is ticking.
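As promised above, here is a minimal sketch of the connectionist idea in Python: a single artificial neuron whose "knowledge" lives entirely in its weights and bias, which get nudged during a training period rather than being written down as explicit rules. The task (learning logical OR) and all of the names here are my own toy choices for illustration, not any system discussed in this lesson.

```python
def predict(weights, bias, inputs):
    # Weighted sum of the inputs, passed through a simple threshold.
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if activation > 0 else 0

def train(examples, epochs=20, learning_rate=0.1):
    """Perceptron learning rule: nudge the weights a little after every mistake."""
    weights = [0.0, 0.0]   # the network starts out "knowing" nothing
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            error = target - predict(weights, bias, inputs)
            # The "updating itself" step: connection strengths change in proportion
            # to the error, with no symbolic rule for the task ever written down.
            weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
            bias += learning_rate * error
    return weights, bias

if __name__ == "__main__":
    # Toy task: learn logical OR purely from labeled examples.
    examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
    weights, bias = train(examples)
    for inputs, target in examples:
        print(inputs, "->", predict(weights, bias, inputs), "(target:", target, ")")
```

Notice that at no point do we specify the rule for OR; the network extracts it from the examples, which is the core contrast with the physical symbol systems approach.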

FYI

Supplemental Material—

Related Material—

Advanced Material—

 

Footnotes

1. In chapter 7 of Superintelligence, Bostrom gives some views about the relationship between intelligence and motivational states in superintelligent AI. He first defines intelligence as flexible but efficient instrumental reasoning, much like the definition we used last time of an AI as an optimization process. With this definition in place, if one holds that basically any level of intelligence can be paired with basically any final goal, then one subscribes to the orthogonality thesis. This means that one cannot expect a superintelligence to automatically value, say, wisdom, renunciation, or wealth. The orthogonality thesis pairs well with the instrumental convergence thesis, the view that any general intelligence might arrive at roughly the same set of instrumental goals while pursuing its final goals. These instrumental goals might include self-preservation, enhancing one's own cognition, improving technology, and resource acquisition. Personally, I reject the orthogonality thesis. There appears to be a necessary connection between the system that produces motivational states and the system that makes inferences (see Damasio 2005). In particular, it appears that the motivational system performs a dimensionality reduction on the possible inferences that can be made, an inference strategy known as bounded optimality (see the toy sketch after these footnotes). And so it is unlikely that any motivational system can be paired with any degree of intelligence. An intelligence equivalent to or greater than that of humans must in some way resemble human cognition, since it needs a motivational system that performs this kind of dimensionality reduction; absent that, it is not really an intelligence but a data cruncher, and data crunching is not intelligence, even at the scale of a supercomputer. As Bostrom admits, intelligence is efficient. This is the case in humans (Haier 2016), and it seems fair to assume that real machine intelligence will face a similar requirement.

2. The interested student can check out this episode of Freakonomics, which explores the idea that the two dominant political parties are essentially a duopoly strangling the American political system.

3. For the sake of fairness, I should add two criticisms of my view here, for those who are interested. First, it has been argued that boosting the population's information-processing power might actually lead to the aforementioned intelligence explosion, so that my solution produces the very problem I'm trying to solve. I don't think this is a good argument, but it has been made against my view. Second, as far as I know, there is no name for my set of political beliefs. I sometimes joke and call myself a neo-prudentist (a made-up political view). However, some friends and colleagues disparagingly (but lovingly) refer to my views as a form of techno-utopianism, somewhat like that of Francis Bacon.
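Here, finally, is the toy sketch referred to in footnote 1. It is purely my own illustration (the scoring functions and numbers are invented), meant only to convey the flavor of the claim: a cheap "motivational" valuation prunes an enormous space of options so that expensive deliberation is spent on just a handful of survivors, in the spirit of bounded optimality.

```python
import itertools

def affective_score(option):
    """Cheap, coarse valuation; stands in for the motivational system."""
    return -abs(sum(option))   # prefers "balanced" options; the details are arbitrary

def deliberate(option):
    """Pretend-expensive, fine-grained evaluation; stands in for explicit inference."""
    return sum(x * x for x in option) - abs(sum(option))

def choose(options, budget=10):
    # Step 1: motivation prunes the space down to a handful of candidates...
    shortlist = sorted(options, key=affective_score, reverse=True)[:budget]
    # Step 2: ...and only those survivors receive full deliberation.
    return max(shortlist, key=deliberate)

if __name__ == "__main__":
    # An intractably large option space in miniature: all length-8 vectors over {-1, 0, 1}.
    options = list(itertools.product([-1, 0, 1], repeat=8))
    budget = 10
    print(f"Full deliberation applied to only {budget} of {len(options)} options.")
    print("Chosen option:", choose(options, budget=budget))
```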

 

 

Self-less

 

You only lose what you cling to.

~Siddhartha Gautama

 

Important Concepts

 

Origins

 

Awakening

Upon enlightenment, the Buddha realized that duḥkha is ever-present in our lives; we live perpetually in a condition of dissatisfaction. In our natural state, our lives are permeated with desires, and we also have a desire to fulfill our desires (taṇhā). And so we are constantly searching for ways to fulfill them. But as soon as we fulfill one desire, another arises. Our quest for fulfillment is never completed; we spend our lives endlessly trying to quench our desires.

A person, perhaps in the grip of duḥkha.

Let's take a modern-day example to make this more real. You are, at this moment, studying and working towards completing this course. You are most likely doing this because acquiring a college degree is important to you, and you want to be able to pursue a meaningful career after graduation. This is all to say that you have a desire to complete this course, complete your studies, and start your career. Let's just say you accomplish all those feats. Are you finally rid of desire? Not at all. Next come the stresses of work life. Perhaps you want to advance in status or get a raise; maybe you want to balance your time at work with your family life. This just means that you have more desires: a desire for more status, more money, more time to spend with your family. Will you finally be satisfied when you get more status, more money, and more time with your family? Almost definitely not. You will spend much effort maintaining this position. You will have a desire to keep what you have. Perhaps on a more psychological note, it may be the case that once you enjoy some success, your desire for success will grow. You won't merely want to keep your position; you'll want more.

In this example, we are only considering the major components of your life: schooling, career, family. If you think about it, though, you will realize that even your day-to-day existence is just one desire after another. You desire to stay in bed. You desire to skip class. You desire food, sex, money. And when you don't fulfill these desires, you're left with the subjective feeling that something is off. And even if you do fulfill these desires, more desires will follow, so that you're never content. You are never truly satisfied. There's always something else. It never ends.

Going back to the Buddha: he realized that the cause of all this is our desire to fulfill our desires (taṇhā). If we could just stop trying to fulfill all of our desires, we would be satisfied. We would no longer be troubled. We can simply witness and accept that we have desires without being slaves to the never-ending cycle of trying to fulfill them. We would thus be released from this form of bondage. The Buddha summarized his philosophy in the Four Noble Truths.

  1. The truth of suffering: All life is duḥkha.
  2. The truth of the cause of suffering: Duḥkha is caused by taṇhā.
  3. The truth of the end of suffering: The cessation of duḥkha is possible (i.e., nirvana).
  4. The truth of the path that frees us from suffering: The way to accomplish this is the Eightfold Path.

 

The Eightfold Path

  • Right View- Buddha recommends that you do your best to see the world as it is. In other words, realize the Four Noble Truths.
  • Right Intention- Once you've reached the right view, dedicate your life to helping yourself (and others) reach enlightenment in whatever way you can.
  • Right Speech- Controlling your speech acts is one of the most straightforward ways you can work towards enlightenment. Do not hurt feelings, do not lie, do not use deceptive or intentionally confusing language, do not intentionally make people angry with your speech.
  • Right Action- Move on to your non-verbal behaviors and make sure your actions go towards helping and not harming.
  • Right Livelihood- Make sure what you do for a living is not causing suffering, but rather that it is helping others.
  • Right Effort- Make a continued effort to devote your waking energies towards the liberation of yourself and others.
  • Right Mindfulness- Make a continued effort to be present, living in the here and now, as opposed to living with regrets or neglecting those in front of you (for example, by spending too much time on your phone!).
  • Right Concentration- Practice meditation diligently.

 

 

 

No Self

So what happens if you live the Eightfold Path diligently? The Buddha claimed that you will eventually reach enlightenment. Enlightenment occurs when one awakens to the realization that there is no self; in other words, it's when you realize the self is just an illusion. To be clear, Buddha is not denying that you have a body. Limbs, organs, skin and nails all exist. Instead, he's arguing that whatever the word "I" refers to (when you say it) is an illusion.

Let's think about this a bit (in the first person). I am not my body, since my body does all sorts of things that I don't do: process my food, regulate my sleep cycle, beat my heart, etc. Or is it the case that I actually will those things to happen? It doesn't seem like it. So I'm not my body. Am I my brain? This seems unlikely too. My brain also does many things that I don't consciously do. To take one example, I don't process the input from my eyes; I passively receive the output from my visual cortex. It's almost like I'm sitting in a theater, relaxing, letting my brain process the data from my senses while I just receive the output. So then what do I really do? I don't even manufacture my desires, since those seem to arise independently of me. In fact, sometimes I wish I didn't have, say, my desire to sleep in. Instead, it seems like I'm also a passive recipient of my desires. I don't seem to do much of anything. The Buddha argues that this is because the "I" is an illusion (see Siderits 2007, chapter 3).

This might seem far-fetched at first... But the idea receives some backing from modern science, in particular neuroscience and psychology. For example, neuroscientist Lisa Feldman Barrett, whom we've met before, explicitly affirms it:

 

“The fiction of the self, paralleling the Buddhist idea, is that you have some enduring essence that makes you who you are. You do not. I speculate that your self is constructed anew in every moment by the same predictive core systems, including our familiar pair of networks (interoceptive and control), among others, as they categorize the continuous stream of sensations from your body and the world” (Barrett 2017: 191-2; see pages 190-194 for a general discussion).

 

Similarly, the dual-process theory of psychologist Daniel Kahneman, whom we've also met before, strongly lends itself to a Buddhist interpretation:

 

“You do not believe that these results apply to you because they correspond to nothing in your subjective experience. But your subjective experience consists largely of the story that your system 2 [your higher-cognitive faculties] tells itself about what is going on. Priming phenomena arise in system 1 [the automatic, emotion-based system], and you have no conscious access to them” (Kahneman 2011: 57; interpolations are mine; emphasis added).

 

In other words, Kahneman is saying that your subjective experience of yourself and the world is "largely" just a story that your system 2 tells itself. However, recall that system 2 doesn't call the shots—system 1 is usually in charge. And so this story is more fabrication than anything else.

Perhaps psychologist Bruce Hood puts the point more forcefully:1

 

 

 

 

Eastern Thought
and the Problem of Evil

Buddhists, like Christians, are very concerned with unnecessary suffering. Buddhists argue that humans are full of desire as well as the desire to fulfill their desires (taṇhā). As such, humans are attached to material objects. This leads to war, fighting, greediness, stealing, and the like. But with liberation, humans will be free of desire, and hence human wickedness will stop. That is how Buddhists plan on addressing unnecessary suffering. But can Eastern Thought solve the Problem of Evil? This is Dilemma #10.

The Yin/Yang Symbol.

As I mentioned in The Problem of Evil (Pt. II), students often attempt to respond to the Problem of Evil by using Eastern concepts like Yin/Yang or Karma. For example, some students argue that unnecessary suffering only appears to be unnecessary, since you can't have goodness without suffering (Yin/Yang). They've also argued that what appears to be unnecessary suffering could be the universe punishing that person or animal for past wrongs (the Law of Karma).

However, I'm not sure that Eastern Thought, or at least Buddhism, can help theists who are attempting to solve the Problem of Evil. Buddhism, at least in its original philosophical form, is a non-theistic worldview (see Siderits 2007: 6-7); Buddhists do not posit a God. In fact, some people deny that Buddhism is even a religion, arguing that it is more like a philosophy of life. It seems, then, that the Problem of Evil would never arise for Buddhists in the first place.

But the argument in the preceding paragraph only makes the case that Buddhists wouldn't need to address the Problem of Evil as we've been dealing with it. I have not yet argued that Eastern Thought can't help solve D#10. I will do that now. Here are the basic problems I will cover:

 

The Yin/Yang Paradox

Can God create moral goodness without moral wrongness? Good without evil? Pleasure without pain? If He can’t, then He’s not all-powerful. If He can, then He’s not all-loving (since He didn’t).

 

The Karma Paradox

If someone claims that the notion of karma helps solve the Problem of Evil, this only raises more questions. Why should we believe the Law of Karma even exists? If suffering is a result of karma, why doesn't God ameliorate it, since God is omnipotent? What's the point of hell, then? Is God bound by the Law of Karma? If He is, then He's not all-powerful. But how would we know anyway?

 


 

I further discuss all of this below...

 

 

 

 

Executive Summary

  • The founding Buddha, Siddhartha Gautama, argued that duḥkha, the subjective feeling that a basic and important aspect of our lives isn't right, permeates our lives; we live perpetually in a condition of dissatisfaction. The Buddha realized that, in our natural state, we are full of desire as well as the desire to fulfill our desires (taṇhā). He concluded that the path to liberation from duḥkha requires that we stop trying to fulfill all of our desires; only then would we be satisfied.

  • One of the most interesting aspects of Buddhist philosophy, which is backed by modern psychology and neuroscience, is the view that there is no robust self; i.e., that our sense of self is an illusion, a fiction that is fabricated by our brain.

  • Buddhism, at least in its early stages, was a non-theistic religion: Buddhists did not posit a god, and Siddhartha never claimed to be one. There is therefore no all-powerful, all-loving, all-knowing being in their philosophy, and so the problem of evil would never arise for them in the first place.

  • Even if one were to try to integrate concepts from Eastern Thought into the Western tradition, it is not clear how they would fit in. For example, what is the relationship between God and the law of Karma? Is God bound by the law of Karma? Did God create it? Are they co-equal? Why isn't the law of Karma mentioned in any Judeo-Christian sacred texts? In short, there are more questions than answers.

FYI

Suggested Reading: Mark Siderits, Buddhism as Philosophy: An Introduction, Chapter 2

  • Note: This PDF includes chapters 2 and 3. Only chapter 2 is the suggested reading, but some students may also have interest in chapter 3.

TL;DR: Crash Course, Buddha and Ashoka

Supplemental Material—

Related Material—

Advanced Material—

 

Footnotes

1. The interested student can take a closer look at Hood's work through his (2012) The Self Illusion.

 

 

The Labyrinth

 

There's no need to build a labyrinth when the entire universe is one.

~Jorge Luis Borges

 

 

 

 

 

 

Do we have souls?

Only about a quarter of the professional philosophers surveyed by Bourget and Chalmers (2013) still believe in dualism, and these are, with few exceptions, theists who also believe in Libertarian free will (see Table 6). To me, this looks a bit like motivated reasoning, or a conclusion in search of premises. But that's a matter for another course...

Of course, that fewer and fewer philosophers believe in souls does not prove that souls don't exist. However, the topic of dualism may be what some members of the Vienna Circle called a pseudo-problem: an empty question, a problem without a solution.

 

 

 

Kant or the Utilitarians?

The field of ethics is a mess. The debates between the ethical theorists covered in class have been amplified both by the entry of new ethical theories into the fray (for example, Scanlon's contractualism) and by attempts by scientists and scientifically-minded philosophers to "biologicize" ethics (see Ruse and Wilson 1986 and Kitcher 1994). Recently, in fact, some scientifically-minded philosophers (sometimes referred to as naturalists) have waged an all-out empirical attack on various classical ethical theories. For example, John Doris (2002) goes after virtue ethics. Most relevant to us are the fMRI studies of neuroscientist and philosopher Joshua Greene. Greene (2001) has used fMRI scanning to isolate the parts of the brain that are used for making Utilitarian- and Kantian-type judgments. He claims to have "debunked" both theories, arguing that although neither is, strictly speaking, true, we are better off being consequentialists.

 

 

 

Is morality relative?

One argument we might give against cultural relativism is that it is a dangerous view. For one thing, there are plenty of cases where what seem like different moral codes are actually just disagreements about basic matters of fact. Here's a little slideshow from my course on philosophical ethics:

 

 

Further, Rachels (1986) argues that accepting cultural relativism has some counterintuitive implications, such as:

  1. We could no longer say that the customs of other societies are morally inferior to our own. But clearly bride kidnapping is wrong.
  2. We could decide whether actions are right or wrong just by consulting the standards of our society. But clearly, if we had lived during segregation, consulting the standards of our society would have led us to believe that segregation was morally permissible (and that's obviously false).
  3. The idea of moral progress is called into doubt. For example, if we are relativists, then we could not say that the end of the Saudi ban on women driving is moral progress (but it clearly is).

It is a good principle to respect the cultural practices of others when there is no fundamental disagreement about the facts. But too many practices grounded in blatantly false beliefs (for example, that having sex with a virgin can cure HIV, or that women are not competent to represent their own interests) are protected by the invisible shield of cultural relativism. The notion is ludicrous...

“If only one person in the world held down a terrified, struggling screaming little girl, cut off her genitals with a septic blade, and sewed her back up, leaving only a tiny hole for urine and menstrual flow, the only question would be how severely that person should be punished and whether the death penalty would be a sufficiently severe sanction. But when millions of people do this, instead of the enormity being magnified millions fold, suddenly it becomes culture and thereby magically becomes less rather than more horrible and is even defended by some Western moral thinkers including feminists” (Pinker 2003: 273).

As if all this weren't bad enough for cultural relativism, the notion of relative truth appears to be self-defeating. By its own logic, the claim that truth is relative would itself have to be only relatively true. So if you believe that truth is relative, you can only really say that relativism is true for you. That's pretty bad.

And so the quest for the tree of knowledge of good and evil continues...

 

 

 

Do we only act from self-interest?

This view has been widely held for millennia, including by Niccolò Machiavelli (pictured above). Several disciplines have something to say on this topic. Please enjoy the slideshow below.

 

 

In sum, the consensus from various empirical disciplines is that psychological egoism is false.

 

 

 

Do we have free will?

It's tough to say... Some (e.g., Balaguer 2012) think this is an open question. Others, like neuroscientist Michael Gazzaniga, think that the 17th-century concept of free will doesn't survive the findings of the mind sciences; on his view, we have, at best, a very mitigated free will (see Gazzaniga 2012). Some thinkers go further than Gazzaniga: they argue that continuing to believe we are as free as Descartes thought we were is not only unscientific but even politically dangerous (see Harari 2018, chapter 3; see also this interview).

 

 

 

Does God exist?

It's tough to say... What we can say is that 73% of the professional philosophers surveyed are atheists and that most of the theists in Philosophy specialize in Philosophy of Religion (see Bourget and Chalmers 2013, section 3.3). In fact, the combination of theism and specializing in Philosophy of Religion is the strongest correlation between a particular view and a particular specialization (see Table 10). The same survey showed that the correlation between belief in Libertarian free will and belief in God is among the ten strongest (see Table 6). So it could be the case that some thinkers are guilty of motivated reasoning: they already had a constellation of beliefs (in God and in Libertarian free will), and they found a field that pretty much paid them to argue for that position.

Again, this is not to say that we have proof that God does not exist. However, some scientists and scientifically-minded philosophers have attempted to "debunk" belief in God by showing that it is a natural outgrowth of our cognitive capacities. For example, beginning with Dennett (2006), various cognitive scientists have tried to make the connection between one or several of our evolved traits and the belief in God. Dennett's work is certainly very interesting. You can see a lecture of his here. I obviously can't get into all of this here. Instead, I'll just give you a brief overview of some of my favorite recent empirical work on religion.

Food for Thought

 

 

 

Empiricism or Rationalism?

Locke was wrong (about some stuff, at least)...

In his 2003 The Blank Slate, Steven Pinker dispels the commonly held Lockean view that we are born with a blank slate. We, in fact, have several mental mechanisms built into us by evolution, such as a language acquisition device and an intuitive physics (see Pinker 2003: 220-9). Some thinkers go even further than the list given by Pinker. For example, Richard Joyce (2007) argues that we have an innate morality module that was programmed into us so that we can coordinate our behavior with each other, inform each other about who’s a good, say, foraging partner, and form more cohesive groups through shared norms and practices. In either case, we are not born with a blank slate.

Even closer to his own time, Locke's views came under attack. Famously, David Hume thought that Locke's preferred type of reasoning, induction, was unjustified. The basic idea behind induction is that what has happened in the past will likely happen again. For example, if copper dissolved in nitric acid today, then it will likely do the same tomorrow. But you only believe that because, in your experience so far, past regularities have continued to hold. In other words, the only justification for induction is more induction, which is circular reasoning. This is called the Problem of Induction.
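Schematically, the worry can be laid out like this (my reconstruction of the standard textbook form, not Hume's own wording):

```latex
\begin{align*}
&\text{(P1) In all observed cases, } F \text{ has been followed by } G.\\
&\text{(P2) Unobserved (e.g., future) cases will resemble observed cases.}\\
&\therefore\ \text{(C) The next case of } F \text{ will be followed by } G.
\end{align*}
```

The only apparent support for (P2) is that relying on it has worked so far, and that support is itself an inductive inference; the justification runs in a circle.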

 


Sidebar: Hume didn’t even believe we understood causation accurately. For Hume, causation is just a constant conjunction of a cause and an effect; he claimed we never actually witness causation and that it is just a habit of the mind. See this helpful video for more information.


 

It's even the case that Locke's views on how we represent the world are most likely false. Locke believed that our sense impressions very likely resemble the outside world itself; we can be reasonably sure that the world is somewhat close to the way our senses interpret it. Kant famously disagreed, saying (among other things) that we can never know the world as it really is with our senses alone and that we must use reason to attempt to learn something about the world-in-itself. Relatedly, Kant argued that time and space do not exist independently of human sensibility; time is part of how we interpret reality. Interestingly enough, it looks like Kant was right and Locke was wrong! According to Hoffman (2019), your senses do not represent reality as it is; they only track what is fitness-enhancing. Moreover, some physicists argue that spacetime will not be a feature of the fundamental equations of future physics (Rovelli 2018). In other words, time may really be a construct.

 

But the rationalists were wrong too...

With regard to Descartes and the rationalists: according to some theorists (e.g., Mercier and Sperber 2017), our capacity to reason has a social evolutionary origin. We evolved the capacity to reason not to think about and understand the world, but because it helped us win arguments. This is why unaided reason, such as Descartes' method of doubt, tends simply to confirm our pre-existing beliefs, i.e., confirmation bias. After all, isn't it telling that, during his Meditations, Descartes ended up discovering that Catholicism was true?

There are other theories about the function of our capacity to reason, but most of these theories also stress the social function of reason, not the intellectual function of reason (see also Tomasello 2018). In short, reason doesn’t do what the rationalists thought it did. This is why scientists and philosophers have moved away from this sort of rationalism (see Wilson's quote below).

“But history shows that logic launched from introspection alone lacks thrust, can travel only so far, and usually heads in the wrong direction. Much of the history of modern Philosophy, from Descartes and Kant forward, consists of failed models of the brain. The shortcoming is not the fault of the philosophers, who have doggedly pushed their methods to the limit, but a straightforward consequence of the biological evolution of the brain. All that has been learned empirically about evolution in general and mental process in particular suggests that the brain is a machine assembled not to understand itself but to survive” (Wilson 1998: 96).

The Second Great Schism

Just as the field of Philosophy loosely speaking broke up into two camps in the early modern period, more recently the field has split up again. Beginning in the late 1800s, some philosophers were impressed by the progress of science and sought to build their philosophies with input from the sciences. In other words, they wanted their work to be continuous with the sciences. They are generally referred to as analytic philosophers.1 Others decided that Philosophy was really autonomous, and that you can build sound philosophical theories without input from the sciences. They are generally referred to as continental philosophers. This schism continues today.

"[In the 19th century,] the ideal personage of the scientist was taking shape, and only then was Philosophy, for its part, forced to split into two camps. There were those who found this new figure of the scientist impressive and longed to share in his new cultural caché. Others, by contrast, found his purview, that of building upon, improving upon, and channeling the forces of the natural world, ‘hacking through nature’s thorns to kiss awake new powers,’ in James Merrill’s words, inadequate for the central task of Philosophy as it had been understood by one prominent strain of thinkers since antiquity: that of understanding ourselves, our interiority, and the gap between what we experience in our inner lives and what the natural world will permit to be actualized or known” (Smith 2019: 126).

 

 

 

 

 

Further Down the Spiral...

 

Footnotes

1. Not only am I in the analytic branch; I am considered a radical even within it. My position is officially referred to as either philosophical naturalism or neopositivism, but I've also been called an empirical philosopher, if the person is being kind, and a ruthless reductionist or logical positivist when they don't like my views very much.

 

 

The Circular Ruins

 

The end of his meditations was sudden, though it was foretold in certain signs. First (after a long drought) a faraway cloud on a hill, light and rapid as a bird; then, toward the south, the sky which had the rose color of the leopard's mouth; then the smoke which corroded the metallic nights; finally, the panicky flight of the animals. For what was happening had happened many centuries ago.

~Jorge Luis Borges

 

 

 

Descartes' Fate
and Closing comments

Generally speaking, Descartes' foundationalist project is considered a failure by philosophers. This was true in his own day, as can be seen in the case of Thomas Hobbes, and it remains true today, a time when newer epistemic theories are more prevalent. This is part of a general trend of moving more and more towards testable claims, a style of thinking that is generally referred to as the scientific worldview, although we've been calling it positivism in this class.

However, the transition to engaging primarily with testable (positivist) claims is incomplete. In fact, there has been much push-back. For example, Aristotelianism survived in biology up until the time of Charles Darwin and Alfred Wallace (see DeWitt, chapters 29 and 30; Barrett 2017). Today, some anti-empirical sociological views are very fashionable, even among non-academics (Campbell 2024). Clearly, some ideas die hard.

The same could be said for Descartes' notion that the emotions are lesser than the intellect. First of all, the work of neuroscientist Antonio Damasio dispels the notion that emotion and reason are completely separate, an error that is Kant's as much as it is Descartes'. Moreover, emotion may itself be a kind of information-processing (see Damasio 2005).

Intellectually, the road ahead will not be easy. This is a point touched on by both historian of science Richard DeWitt and theoretical physicist Roland Omnès: we are past the intuitive when it comes to scientific knowledge. We are analogy-less, metaphor-less. The findings of science will continue to be less and less intuitive. Some think this is necessarily a bad thing; I see it as an opportunity to restructure society, to change the culture so that we are all more scientifically literate.

Much as the people of the 17th century lived through a transitional period, we might be seeing the collapse of one or more worldviews. Oddly enough, it's tough to say which worldviews are at risk. We seem more fragmented than ever. Not only is there a complete lack of political unity; we can also mourn the lack of intellectual unity. Philosophy and science used to be connected, but the push for formalism in science rendered science so technical that, today, only the initiated understand it. Philosophers stopped looking over the shoulders of scientists, stopped interpreting and understanding science. The population became even more distanced from scientific understanding. Now belief in conspiracy theories is rampant, scientific literacy is low, it feels like we can't agree on what "truth" is, and it sometimes feels like we're going to tear each other apart... Interestingly enough, philosophy began during a time of social upheaval, crisis, and war. Philosophy might be dead now, but perhaps it will soon start up again.