Problems/Solutions
The past is a foreign country;
they do things differently there.
~L.P. Hartley
Philosophy is dead
Teaching an introductory course in philosophy requires more assumptions than this instructor is comfortable with. First off, you have to (at least provisionally) demarcate philosophy, that is, set some boundaries regarding what philosophy is and what it isn't. Unfortunately for this instructor, however, philosophy has been different things at different times and in different places.
For example, in classical Greece, it is difficult to distinguish philosophy from what today we would call the beginnings of natural science. Is philosophy the same as science, then? I think we can all agree that that's definitely not the case. But there is a bit of philosophy in science. Allow me to clarify.
Natural science has multiple, sometimes unexpected, origins, many of them going back to ancient Greece. For example, Freeman (2003: 7-25) discusses the competitive nature of Greek debate and the rejection of supernatural explanations by philosophers, beginning with Thales of Miletus. This competitiveness and rejection of non-natural causes are obviously an important part of what would eventually become science. But, and this is important, it's not the whole picture.
Here's another important ingredient. Chapter 4 of G.E.R. Lloyd's (1999) Magic, Reason, and Experience makes the case that practices in political debate made their way into the study of the natural world too. He shows how the Greek word for witness was the root of the word for evidence (specifically in scientific discourse). He also shows that the word for cross-examination of a witness is related to the language behind the testing of a hypothesis. So aspects of the legal tradition from Greece made their way into the study of nature.
We oftentimes want simple and intuitive explanations, but the complexity of the world does not allow for that. And so we should understand the origins of complex institutions to be multifactorial, and we should expect that the essential ingredients of complex institutions are sometimes spread thinly across time. The origins of natural science are, in fact, extremely variegated. There are some elements of science that came about in ancient Greece, yes. But there are some elements of science that were not fully formed until the early modern era, from about 1500-1800 CE. Other elements had to wait longer still before they were fully codified; this is something that we will see in this course.
And so, I would never for a second want to suggest that philosophy single-handedly gave birth to science; but I will say that it was there during its conception, it was present during science's long gestation period, and it played a non-negligible role in its birth. And yet, this doesn't get us any closer to defining philosophy, and anyone who is tasked with teaching it still has a problem on their hands.
Here's another problem that arises if someone is trying to teach an introductory course in philosophy: many are predisposed to think of philosophy as bullshit. Bullshit, by the way, is actually a technical term in philosophy. I'm not kidding. Harry Frankfurt (2009; originally published in 2005) argues that bullshit (a comment made to persuade, regardless of the truth) has been on the rise for the last several decades. How do you know a bullshitter? “The liar cares about the truth and attempts to hide it; the bullshitter doesn't care if what they say is true or false, but rather only cares whether or not their listener is persuaded” (ibid., 61).
Students/colleagues who instinctively think little of philosophy don't usually come out and say the word "bullshit", but it lurks somewhere in their minds. To be honest, I don't completely disagree with them. Some philosophy is bullshit. In fact, in a follow-up to Frankfurt's 2005 monograph, Oxford philosopher G. A. Cohen charges that Frankfurt overlooked a whole category of bullshit: the kind that appears in academic works. Cohen argues that some thinkers write in ways that are not only unclear but also unclarifiable. In Cohen's second definition, which applies to the academy, bullshit is the obscure that cannot be rendered unobscure. And so, there is bullshit in philosophy, because there are (unfortunately) some philosophers who write/wrote in a completely unintelligible way. It sounds like word salad to yours truly. They were completely indifferent to precision of meaning, and this is exceedingly regrettable for the discipline (see Sokal and Bricmont 1999). In fact, we're still trying to get past that caricature of what philosophy is.
But here's where the problem lies. It's not all bullshit. If you argue that much of philosophy is unnecessarily obscure and even pointless, consider me an ally. But if you think no philosophy is or has ever been worthwhile, then you have another thing coming.
This latter objection to philosophy, the notion that no philosophy is or has ever been worthwhile, is (I think) symptomatic of a profound anti-intellectualism that we find in our society. I'm sure you've met people that don't recognize when their beliefs are inconsistent (or self-contradictory). Perhaps you've met people who believe in completely incoherent conspiracy theories about new world orders, vaccines, and/or reptilian aliens. I've certainly met people who proudly claim that they read no books but the Bible, which (alarmingly) they believe to have been originally written in English (which is literally not possible). This, and I hope you share my sentiments, is not okay.
Decoding Philosophy
Sidebar
I actually have a theory as to why some people instinctually dislike philosophy. It has to do with something called the halo effect. First, take a look at the Cognitive Bias of the Day below.
Cognitive Bias of the Day
The halo effect, first posited by Thorndike (1920), is our tendency, once we've positively assessed one aspect of a person (or brand, company, product, etc.), to also positively assess other, unrelated aspects of that same entity (see also Nisbett and Wilson 1977 and Rosenzweig 2014).
Rosenzweig's analysis of this bias is particularly enlightening. He shows that this bias is rampant in the business world. Consider Company X. Let me first tell you that Company X has had record bonuses for higher management two years in a row. Now let me ask you a question: Do you think that management at Company X is exceptionally good? In order to make a truly educated guess, you'd have to look at more than the bonuses of the executives. You'd have to also look at retention rates, overall profits, how competitors are doing, etc. But the mind naturally wants to say, "Yes! There must be good management there, because why else would there be record bonuses for higher management?" This is the halo effect at work, and if you don't pay attention to how this bias influences your assessments of firms and market behavior, then you are going to make mistakes that will cost you money.
How does this explain why philosophy is looked down upon? My general sense is that, prior to a college introductory course, most students know very little or nothing about philosophy. The reasons for this are complicated, as stated above, but suffice it to say that people don't know how to feel about this discipline. Hence, the mind feels uneasiness about this subject. Uneasiness is not something that the human mind is built to endure. We naturally look for resolutions to any kind of cognitive dissonance (see Tavris and Aronson 2020). And so, when faced with a discipline we've never been exposed to before, we think to ourselves, "Had this subject been worth a damn, I would've already known about it." And so, this dissonance is resolved by deciding that the subject isn't really worthwhile.
It could also be the case that as students are choosing their majors, they lack deep and substantive reasons for choosing said majors. After all, they don't know much about the field yet. And so, some feel the need to denigrate other disciplines so as to ease cognitive dissonance. In other words, they feel a need to have good reasons for why they chose their discipline, they don't have them, and so they construct reasons why other disciplines "suck." The general idea is that the mind is doing something like this, "My discipline [which I don't know much about yet] is way better than these other disciplines since [insert whatever I can think of in the moment]."
That's at least part of my theory...
Solutions
None of the preceding, by the way, even remotely solves our demarcation problem, the setting of boundaries between what counts as philosophy and what doesn't. A standard "solution" to this problem is to use a question-based approach in introductory classes. In other words, the idea is to pose questions that are traditionally accepted as being philosophical, whatever that may mean, and look at the responses of professional philosophers. This is, more or less, the approach that I will take. However, I will not limit myself to philosophers. Being a voracious reader, I am able to add the input of thinkers from various disciplines to this conversation. As such, I will pose questions that are traditionally accepted as being philosophical and look at the responses of conventional philosophers (in a historical context) as well as the responses of psychologists, neuroscientists, mathematicians, and anyone else who might have something relevant to say.
Frankly, though, this instructor doesn't want to waste your time. To me, it wasn't enough to just cover traditional philosophical questions. So instead, I've decided to build a course that tells a story—actually, a history. I'm going to teach you a history that I think is worth knowing. It's not necessarily a history of philosophy. In a way, in fact, the philosophy is incidental. But don't worry. There'll be plenty of mind-bending philosophical ideas, and I'm bound to upset a few individuals (if I haven't already). Nonetheless, I think it's a story worth telling.
Collapse
The story that will be told in this course is, in a way, about collapse. If we take this negative perspective, we will be looking at how ideas can "collapse". But this is much too vague, especially after my tirade against some bullshit philosophers above. Let me begin with a more concrete example of collapse. Please enjoy your first Storytime! of this course:
It is difficult to imagine that our own civilization could collapse. It is even harder to imagine that our civilization could collapse and that humans centuries from now—if we still exist—would have no idea how our gadgets and technologies were made (although perhaps this is easier to imagine post-COVID). And yet, we know from the historical record that civilization has its highs and lows. At one point, Rome was the most splendid city in the world. Still, it fell (and tragically so). In fact, historian Ian Morris called the fall of the Roman Empire the single greatest regression in human history.
We can speculate as to why it's difficult to think of our collapse in any realistic way. We do seem to have an end-of-history illusion. This is the bias that makes us believe that we are, for all intents and purposes, done "growing". We believe that our experiences have resulted in a set of personal tastes and preferences which will not substantially change in the future. We pretty much believe we're "done". This is, of course, not true at all, but other biases come in to block us from realizing this. The recall bias, for example, does not allow us to remember previous events or experiences accurately, especially if they conflict with input we are actively receiving. So, if your preferences did change, you are likely to think you've always held those preferences and not even notice the change.2
And so maybe these biases are active not only in our assessments about ourselves, but also in how we view history itself. Perhaps we have a predisposition to thinking that history is "done". We're "it". Civilizational complexity won't dip any lower than we are, and it won't rise much higher either.
But reason compels us to dive deeper. I don't have to remind you that civilizational collapse is a real threat. Global warming, nuclear weapons, superintelligent artificial intelligence, and other existential risks might change your life dramatically. Rest assured: The world will change; the only question is whether it will change through wisdom or through catastrophe.
This is the nature of collapse. It's the flipside of progress, it seems. And just like civilizations can rise and fall, so can ideas. In fact, ideas are sometimes a causal actor in the collapse of some civilizations (Freeman 2003). In this class, we'll look at one such collapse.
Important Concepts
I've taken the liberty of isolating most of the important concepts in each lesson and giving them their own section. Please take a moment to review the concepts below.
Some comments
Although in an introductory philosophy class you will not dive deep into the nature of logical consistency, since that is reserved for a course in introductory logic, I believe that you intuitively (and non-consciously) care about logical consistency. If you've ever seen a movie with a plot hole and it bothered you, then you care about logical consistency. It's just that simple. Consistency just means that the sentences describing the events in the movie can all be true at the same time. But with plot holes, this is not possible. Some parts of the movie contradict other parts. And so, the movie contains inconsistencies, and we intuitively sense something is wrong.
Logical consistency will be a cornerstone of everything we will be doing. We will build up conceptual frameworks (i.e., philosophical theories) and we will be perpetually on the lookout to ensure that the ideas in question do not contradict each other. As such, it is important that you understand this concept well. Logical consistency only means that it's possible that all the sentences in a set are true at the same time. That's it. Don't equate consistency with truth. Two statements can be consistent while being false. For example, here are two sentences.
- "The present King of France is bald."
- "Baldness is inherited from your mother's side of the family."
"The present King of France is bald" is false. This is because there is no king of France. Nonetheless, these sentences are logically consistent. They can both be true at the same time, it just happens that they're not both true.3
Arguments will also be central to this class. Arguments in this class, however, will not be heated exchanges like the kind that couples have in the middle of an IKEA showroom. For philosophers, arguments are just piles of sentences that are meant to support another sentence, i.e., the conclusion. The language of arguments has been co-opted into other disciplines, such as computer science, but, although I'll cover this briefly, you'll have to learn about that mostly in a logic or computer science course.
Food for Thought...
Informal Fallacy of the Day
As you learned in the Important Concepts, there are two types of fallacies: formal and informal. A course in symbolic logic focuses on formal reasoning through the use of a specialized language. Formal fallacies are most clearly understood in this context. We won't be covering them here. There are also classes that focus more on the informal aspect of argumentation, such as PHIL 105: Critical Thinking and Discourse. Typically all the informal fallacies are covered in courses like these. In our case, we will only infrequently discuss some informal fallacies. Here's your first one!
Eurocentrism
On multiple occasions, well-meaning colleagues have questioned why I don't feature some non-Western philosophies in my introductory courses. Even some students have asked me about this. I'd like to now give you a summary of the reasons I have for focusing mostly on the Western Tradition in PHIL 101.
Three Reasons Why I Focus on the Western Tradition in PHIL 101
Although my friends and colleagues are well-meaning, there is a hint of a potentially pernicious implication in their line of questioning. It almost seems like they are asking, "Why don't you focus on material that is more yours?" The implication is that neither I, nor my students of non-European descent, are really Western. I hate to be the bearer of bad news, but Europeans (and the descendants of Europeans) have dominated much of the world since the middle of the 20th century, whether it be culturally, economically, politically, or militarily (as in the numerous American occupations after World War II). It's also true that European standards (of reasoning, of measuring, of monetary value, etc.) have become predominant in the world as part of a general (if accidental) imperial project (see chapter 21 of Immerwahr's 2019 How to Hide an Empire). Thus, those of us who are Latinx, Native American, of African descent, etc., are immersed in Western culture, whether we like it or not. It is our right to learn about Western culture, because it has become our culture, even if it was through conquest (or worse). It is our duty as good thinkers, however, to also be critical of this tradition, which is how we will progress.
Moreover, as I think you will come to see, whether you are of European descent or not, Western ideas really do permeate your mind. I believe this because students who should, according to my well-meaning colleagues, be more closely aligned with a non-Western ideology actually become very defensive when I question Western ideas. As we go through this class, you will notice that some of your most deeply held convictions are Western in origin. And it's going to bother you when I go after them. Fun times ahead.
The Europeans who gave rise to the Western tradition had some really good ideas! I'm not going to skip covering them just to assuage white guilt. These great ideas include, by the way, democratic governance, classical liberalism, humanism, and the modern scientific method. We must give credit where credit is due.
The Europeans who gave rise to the Western tradition also had some really bad ideas, and I want to cover them. But it is unfair, I think, to look at only the bad. In order to assess a tradition, we must approach it from all directions. I will tell you about the good and the bad. And if there's one thing you should know about yours truly it's that I don't pull punches. This will be, without a doubt, a critical introduction.
I promise I'll try to give you an even-handed assessment of this tradition. Know that I'm doing it this way because I don't want us to fall into the traps laid on us by our biases, from the halo effect to the end-of-history illusion. We can't just say, "Philosophy bad" or "Yay non-Western philosophy". The world is nuanced. Ideas are nuanced. In the days to come, you'll have to fight your urge to either completely accept or completely reject an idea. It's time to be ok with cognitive discomfort.
One last thing...
I designed this course so that it all revolves ultimately around one simple question. It may seem unlikely, but every single topic we'll be covering will, in the final analysis, be connected to this question. As such, I call this the fundamental question of the course. Stay tuned.
Anyone tasked with teaching introductory philosophy is faced with two problems: demarcating what philosophy actually is (and then teaching it), and overcoming the initial apprehension that some have about the discipline.
The main theme of the course is, depressingly(?), collapse. We will study the collapse of an idea.
The notion of logical consistency and the analysis of rational argumentation will play essential roles throughout the course.
The influence of the Western Tradition is pervasive. You will find that many ideas that are traditionally considered to be "Western" reside within your mind, whether you are of European descent or not.
Lifestyle Upgrade
As we learned in today's lesson, what philosophy is varies between time periods and even between thinkers within a given time period. In these sections, I'd like to focus on philosophy as a way of life, the so-called lifestyle philosophies of, for example, the Stoics. The Stoics, above all else, sought to live life in accordance with nature. For them, this meant living wisely and virtuously. This was done by using reason to help them achieve excellence of character: presence of mind, wisdom, seeing reality as it truly is, keeping their emotions in check, and cultivating generally desirable character traits.
What can we do in this course that is Stoic-approved? Well, in your current role, your task is to learn this material. So, a Stoic would emphasize dedicating yourself to fulfilling your duties.
How does one learn material like this? As it turns out, most students are completely new to the discipline of philosophy, since it is not usually taught at the high school level. This means you're not always sure how to go about studying. So, in what follows, I hope to make some recommendations that you can implement during the rest of this course.
To learn philosophy, you need to engage in both declarative and procedural learning. You might not know what that means, and that's ok, so let me just tell you what to do. The first step is to learn the relevant concepts. Before really diving into a lesson, you should skim it and look for all the concepts that are underlined, bolded, or that otherwise seem important. (In the next lesson, they are as follows: deduction (deductive arguments), induction (inductive arguments), validity, soundness, imagination method, modus ponens, modus tollens, epistemology, metaphysics, JTB theory of knowledge, skepticism, the regress argument, and begging the question.) Write them down and define them using the content in the lesson. Practice these words for a bit. Read the name of the concept, cover up the definition, and try to recall what it is that you wrote. This is called retrieval practice. Once you do this a few times, take a break. After a five-minute break or so, you're ready to read the lesson.
As you're reading the lesson, take careful notes on the topic under discussion, the different positions on that topic, and the arguments for and against those positions. This might take a while. I recommend you do this in 25-minute "bursts". Set a timer, put away all distractions (like phones), and start to work through the material. Once the timer goes off, take a break. Then repeat until you complete the lesson.
All these recommendations, by the way, come from the most up-to-date findings in educational neuroscience and should work for any class. You can check out A Mind for Numbers, How We Learn, and Uncommon Sense Teaching for more info on this. If you don't have time to read these at the moment, here's a TED Talk by the author of A Mind for Numbers and co-author of Uncommon Sense Teaching, Barbara Oakley:
I'll give you some more tips next time. Until then, remember that your mind is the most fundamental tool you have in life. All other tools are only utilized well once you have mastered your own mind. Training and taking control of your mind is, in a sense, literally everything.
FYI
Supplemental Material—
- Video: School of Life, What is Philosophy for?
- Philosophy Bites, What is Philosophy?
- Reading: Internet Encyclopedia of Philosophy, Entry on Fallacies
  - Note: Most relevant to the class are Sections 1 & 2, but Sections 3 & 4 are very interesting.
- Text Supplement: Useful List of Fallacies
Footnotes
1. For an introduction to the history and philosophy of science, DeWitt (2018) is definitely a good start.
2. An interesting example of the recall bias can be found in breast cancer patients. Apparently, getting diagnosed with breast cancer changes a woman's retrospective assessment of her eating habits. In particular, it makes patients more likely to remember eating high-fat foods, as compared to women who were not diagnosed with breast cancer. This is the case even in longitudinal studies where records of the subjects' food diaries suggest no discernible difference in eating patterns between the two groups. It is simply the case that cancer patients believe they ate more high-fat foods because they are sick. It is the reality of being faced with a deadly cancer (an active input) that predisposes their minds towards remembering actions and events that might've led to this sickly state, whether they actually happened or not (see Wheelan 2013, chapter 6).
3. Contrary to popular belief, apparently baldness is not all your mother's fault. At the very least, smoking and drinking have an effect.
Agrippa's Trilemma
Not to know what happened before you were born is to remain forever a child.
~Cicero
On the possibility of the impossibility of learning from history
There are so many cognitive traps when one is studying history. As we mentioned last time, we have biases operating at an unconscious level that don't allow us to perform an even-handed assessment of persons, ideas, products, etc. These biases might also keep us from properly assessing a culture from the past (or even a contemporary culture that is very different from our own). In addition to this, however, it appears that we sapiens are surprisingly malleable with regard to the tastes and preferences that culture can instill in us. Cultures, past and present, range widely on matters regarding humor, the family, art, when shame is appropriate, when anger is appropriate, alcohol, drugs, sex, rituals at the time of death, and so much more.1
Recognizing our limitations, we have to always be cognizant of the boundaries of our intuition and of our prejudices. More than anything, we need to keep in check our unconscious desire to know how to feel about something right away. Whether it be some person, some practice, some event, or some idea, our mind does not like dealing in ambiguities; it wants to know how to feel right away.
The psychological machinery that underlies all this is very interesting. The psychological model endorsed by Nobel laureate Daniel Kahneman, known as dual-process theory, is illuminating on this topic. Although I cannot give a proper summary of his view here, the gist is this. We have two mental systems that operate in concert: a fast, automatic one (System 1) and a slow one that requires cognitive effort to use (System 2). Most of the time, System 1 is in control. You go about your day making rapid, automatic inferences about social behavior, small talk, and the like. System 2 operates in the domain of doubt and uncertainty. You'll know System 2 is activated when you are exerting cognitive effort. This is the type of deliberate reasoning that occurs when you are learning a new skill, doing a complicated math problem, making difficult life choices, etc.2
How is this related to the study of history? Here is what I'm thinking. It appears that cognitive ease, a mental feeling that occurs when inferences are made fluently and without effort (i.e., when System 1 is in charge), makes you more likely to accept a premise (i.e., to be persuaded). In fact, it's been shown that using easy-to-understand words increases your capacity to persuade (Oppenheimer 2006). On the flip side, cognitive strain increases critical rigor. For example, in one study, researchers displayed word problems in a hard-to-read font, thereby causing cognitive strain in subjects (since one has to strain to see the problem). Fascinatingly, this actually improved their performance(!). This is due to the cessation of cognitive ease and the increase of cognitive strain, which kicked System 2 into gear (Alter et al. 2007).
It is even the case that cognitive ease is associated with good feelings. When researchers made images more easily recognizable to subjects (by displaying the outline of an object just before the object itself was displayed), they were able to detect electrical impulses from the facial muscles that are utilized during smiling (Winkielman and Cacioppo 2001).3
The long and short of it is that if you are in a state of cognitive ease, you'll be less critical; if you are in a state of cognitive strain, you've activated System 2 and you are more likely to be critical.
Thus, when you hear or read about cultural practices that are very much like your own, you often unquestioningly accept them, since you are in a state of cognitive ease. However, when you read about cultural practices (or ideas or whatever) that are very much unlike your own, this increases cognitive strain. Because of this, we are more likely to be critical about such practices (or ideas, etc.). Of course, it is ok to be critical, but we are often overly critical, applying a strict standard that we don't apply to our own culture. Moreover, this is compounded by the halo effect. We've found one thing we don't like about said culture, and so we erroneously infer the whole culture is rotten.
I'm not, of course, saying that this effect manifests itself in everyone all the time, or even in some people all the time. But it is a possible cognitive roadblock that might arise when you are going through the intellectual history that we'll be going through.
High-water mark
We'll begin our story soon, but let me give you two bits of historical context. The first has to do with the recent memory of the characters in our story. We will begin our story next time in the year 1600, but to begin to understand why the thinkers we are covering thought the way they did, you have to first know what they had seen, what their parents had seen, what they had been raised to accept. The 16th and early 17th centuries were, in my assessment, the high-water mark of religiosity in Europe. This period was characterized by a religious conviction that is only rivaled, to my mind, by the brief period of Christian persecution at the hands of the Romans under Nero.4
I am not alone in believing that this period stands out. In their social history of logic, Shenefelt and White (2013: 125-130) discuss the religious fanaticism of the 16th and 17th centuries, which are stained with wars of religion. The wars of this time period included the German Peasants' War of 1524-1525 (shortly after Martin Luther posted his Ninety-five Theses in 1517), the Münster Rebellion of 1534-1535, the St. Bartholomew's Day Massacre (1572), and the pervasive fighting between Catholics and Protestants that culminated in the Thirty Years' War (1618-1648). Shenefelt and White also discuss how these events inspired some thinkers to argue that beliefs needed to be supported by more than just faith and dogma. Thinkers became increasingly convinced that our beliefs require strong foundations. My guess is that if you lived through those times, you would've likely been in shock too. You would've longed for a way to restore order.
And so this is why I call this the high-water mark of religiosity. But notice that embedded in this claim is a very improper implication. By saying that this was the highest point of religious fervor in the West, and by grouping (most of) you into the greater Western tradition, I'm implying that these 16th and 17th century believers were more religious than you are (if you are religious). How very devious of me! Notwithstanding the outrage that many have expressed when I say this, I stand by my claim. It really does seem to me that the religious commitment of 16th and 17th century Christians was stronger than that of Christians today.
At this point, we could get into a debate about what religious commitment really is; or how the institution that one is committed to might change over the centuries so that commitment looks different in different time periods. I'd love to have those conversations. My general argument would be that the standards of what it meant to be a believer back then were higher than the standards of today (where going to a service once a week suffices for many). But without even getting deep into that, let's just look at the behavior of believers 400 years ago. Maybe once you learn about some of their practices, you'll side with me.
When I first wrote this lesson, I knew I needed something shocking to show you just how different things were in the early modern period in Europe. And, to be honest, I didn't think for too long. I knew right away what would drop your jaw. I'm going to describe for you an execution, in blood-curdling detail: an execution steeped in religious symbolism, one where the "audience" was sometimes jealous of the person being tortured and executed.
Stay tuned.
Important Concepts
Distinguishing Deduction and Induction
As you saw in the Important Concepts, I distinguish deduction and induction thus: deduction purports to establish the certainty of the conclusion, while induction establishes only that the conclusion is probable.5 So basically, deduction gives you certainty; induction gives you probabilistic conclusions. If you perform an internet search, however, this is not always what you'll find. Some websites define deduction as going from general statements to particular ones, and induction as going from particular statements to general ones. I understand this way of framing the two, but this distinction isn't foolproof. For example, you can write an inductive argument that goes from general principles to particular ones, like only deduction is supposed to do:
- Generally speaking, criminals return to the scene of the crime.
- Generally speaking, fingerprints have only one likely match.
- Thus, since Sam was seen at the scene of the crime and his prints matched, he is likely the culprit.
I know that I really emphasized the general aspect of the premises, and I also know that those statements are debatable. But what isn't debatable is that the conclusion is not certain. It only has a high degree of probability of being true. As such, using my distinction, it is an inductive argument. But clearly we arrived at this conclusion (a particular statement about one guy) from general statements (about the general tendencies of criminals and the general accuracy of fingerprint investigations). All this to say that for this course, we'll be exclusively using the distinction established in the Important Concepts: deduction gives you certainty, induction gives you probability.
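To see the "probability, not certainty" point in action, here's a toy simulation in Python. The failure rates below (a 1% chance that a fingerprint match is spurious, a 20% chance that the person seen at the scene is an innocent bystander) are pure assumptions for the sake of illustration:

```python
import random

# Toy model of the Sam argument with assumed, illustrative rates.
random.seed(0)
trials = 100_000
conclusion_false = 0
for _ in range(trials):
    spurious_match = random.random() < 0.01      # print match was a fluke
    innocent_bystander = random.random() < 0.20  # at the scene, but not guilty
    if spurious_match or innocent_bystander:
        conclusion_false += 1  # the evidence held, yet Sam isn't the culprit

print(conclusion_false / trials)  # roughly 0.21: probable, never certain
```

A valid deductive argument, by contrast, fails in exactly zero of the cases where its premises are true; that is the sense in which deduction "gives you certainty."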
In reality, this distinction between deduction and induction is fuzzier than you might think. In fact, recently (historically speaking), Axelrod (1997: 3-4) argued that agent-based modeling, a newfangled computational approach to solving problems in the social and biological sciences, is a third form of reasoning, neither inductive nor deductive. As you can tell, this story gets complicated, but it's a discussion that belongs in a course on Argument Theory.
Food for Thought...
Alas...
In this course we will focus only on deductive reasoning, due to the particular thinkers we are covering and their preference for deductive certainty. Inductive logic is a whole course unto itself. In fact, it's more like a whole set of courses. I should add that inductive reasoning may be important to learn if you are pursuing a career in computer science. This is because there is a clear analogy between statistics (a form of inductive reasoning) and machine learning (see Dangeti 2017). Nonetheless, this will be one of the few times we discuss induction. What will be important to know for our purposes, at least for now, is only the basic distinction between the two forms of reasoning.
Assessing Arguments
Some comments
Validity and soundness are the jargon of deduction. Induction has its own language of assessment, which we will not cover. These concepts will be with us through the end of the course, so let's make sure we understand them. When first learning the concepts of validity and soundness, students often fail to recognize that validity is a concept that is independent of truth. Validity merely means that if the premises are true, the conclusion must be true. So once you've decided that an argument is valid, a necessary first step in the assessment of arguments, you proceed to assess each individual premise for truth. If all the premises are true, then we can further brand the argument as sound.6 If an argument has achieved this status, then a rational person would accept the conclusion.
Let's take a look at some examples. Here's an argument:
- Every painting ever made is in The Library of Babel.
- “La Persistencia de la Memoria” is a painting by Salvador Dalí.
- Therefore, “La Persistencia de la Memoria” is in The Library of Babel.
At first glance, some people immediately sense something wrong about this argument, but it is important to specify what is amiss. Let's first assess for validity. If the premises are true, does the conclusion have to be true? Think about it. The answer is yes. If every painting ever made is in this library and "La Persistencia de la Memoria" is a painting, then this painting must be housed in this library. So the argument is valid.
But validity is cheap. Anyone who can arrange sentences in the right way can engineer a valid argument. Soundness is what counts. Now that we've assessed the argument as valid, let's assess it for soundness. Are the premises actually true? The answer is: no. The second premise is true (see the image below). However, there is no such thing as the Library of Babel; it is a fiction invented by a poet. So, the argument is not sound. You are not rationally required to believe it.
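Since validity is a matter of form alone, it can even be checked mechanically. Here's a minimal sketch in Python (my own illustration; the function and the brute-force approach aren't from any assigned text) that hunts for a countermodel to the Library of Babel argument, that is, an interpretation in which both premises are true and the conclusion is false. For simple forms like this one, searching small domains is enough:

```python
from itertools import product

def babel_argument_is_valid(max_domain_size=4):
    """Search for a countermodel to:
         P1: every Painting is InLibrary
         P2: dali_work is a Painting
         C : dali_work is InLibrary
       The argument is valid iff no interpretation makes P1 and P2
       true while C is false."""
    for n in range(1, max_domain_size + 1):
        domain = range(n)
        # Try every way of deciding which objects are paintings, which
        # are in the library, and which object the name "dali_work" picks out.
        for paintings in product([False, True], repeat=n):
            for in_library in product([False, True], repeat=n):
                for dali_work in domain:
                    p1 = all(in_library[x] for x in domain if paintings[x])
                    p2 = paintings[dali_work]
                    c = in_library[dali_work]
                    if p1 and p2 and not c:
                        return False  # countermodel found: invalid
    return True  # no countermodel anywhere: valid

print(babel_argument_is_valid())  # True
```

Notice that no amount of truth-value shuffling can tell you whether the premises are actually true, i.e., whether the Library of Babel exists. That's why soundness, unlike validity, requires looking at the world.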
Here's one more:
- All lawyers are liars.
- Jim is a lawyer.
- Therefore, Jim is a liar.
You try it!7
Pattern Recognition
The second bit
I said we needed two bits of historical context before proceeding. I gave you one: the high-water mark of religiosity. Here's the other.
In 1600, the Aristotelian view of science still dominated. Intellectuals of the age saw themselves as connected to the ancient ideas of Greek and Roman philosophers. It's even the case that academic works were written in Latin. So, in order to understand their thoughts, you have to know a little bit about ancient philosophy. Enjoy the timeline below:
Storytime!
There is one ancient school of philosophy that I'd like to introduce at this point. I'm housing this section within a Storytime! because very little is known about the founder of this movement. For all I know, none of what I've written in the next paragraph is true. Here we go!
Pyrrho (born circa 360 BCE) is credited as the first Greek skeptic philosopher. It is reputed that he travelled with Alexander (so-called "the Great") on his campaigns to the East. It is there that he came to know of Eastern mysticism and mindfulness. And so he came back a changed man. He had control over his emotions and had an imperturbable tranquility about him.
Here's what we do know. He was a great influence on Arcesilaus (ca. 316-241 BCE), who eventually became a teacher in Plato's school, the Academy. Arcesilaus' teachings were informed by the thinking of Pyrrho, and this initiated the movement called Academic Skepticism, the second Hellenistic school of skeptical philosophy. This line of thinking continued at least into the third century of the common era.
Skepticism is a view about knowledge, namely that we cannot really know anything. The branch of philosophy that focuses on matters regarding knowledge is epistemology. You'll learn more about that below in Decoding Epistemology.
Decoding Epistemology
The Regress Argument
Although Pyrrhonism is interesting in its own right, we won't be able to go over its finer details here. In fact, we will only concern ourselves with one argument from this tradition. In effect, the last piece of the puzzle in our quest for context will be the regress argument, a skeptical argument whose conclusion states that knowledge is impossible. The regress argument, by the way, is also known as Agrippa’s Trilemma, named after Agrippa the Skeptic (a Pyrrhonian philosopher who lived from the late 1st century to the 2nd century CE).
To modern ears, the regress argument seems like a toy argument. It seems so far removed from our intellectual framework that it is easy to dismiss. But, again, this is easy for you to say. You are, after all, reading this on a computer. You are assured that the state of knowledge of the world is safe. You didn't live through the Peloponnesian War, or the fall of the Roman Empire, or the Thirty Years' War. You are comfortable that science will progress, perhaps indefinitely. In other words, you don't really think that collapse is possible for your civilization. But thinkers of the past didn't have this luxury. They were concerned with basic distinctions like, for example, the distinction between knowledge and opinion.8
As such, try to be charitable when you read this argument. Today, epistemology, the branch of philosophy concerning knowledge, is more like a game that epistemologists play. But in the ancient world, when the notion of rational argumentation was still in its infancy, the possibility that perhaps we can never really know anything (i.e., skepticism) was a real threat.
The argument
- In order to be justified in believing something, you must have good reasons for believing it.
- Good reasons are themselves justified beliefs.
- So in order to justifiably believe something, you must believe it on the basis of an infinite chain of good reasons (each reason is itself a belief that requires a further reason, and so on without end).
- No human can have an infinite number of good reasons.
- Therefore, it is humanly impossible to have justified beliefs.
- But knowledge just is justified, true belief (the JTB theory of knowledge).
- Therefore, knowledge is impossible.
The general idea is quite simple. Consider a belief, say, "My dog is currently at home". How do you know that belief is true? You might say, "Well, she was home when I left the house, and, in the past, she's been home when I get back to the house." A skeptic would probe further. "How do you know that today won't be the exception?" the skeptic might ask. "Perhaps today's the day she ran away or that someone broke in and stole her." You give further reasons for your beliefs. "Well, I live in a safe neighborhood, so it's unlikely that anyone broke in" and "She's a well-behaved dog so she wouldn't run away" are your next two answers. But the skeptic continues, "But even safe neighborhoods have some crime. How can you be sure that no crime has occurred?" Eventually, you'd get tired of providing support for your views. Even if you didn't, it's impossible for you to continue this process indefinitely (since you live only a finite amount of time). If knowledge really is justified, true belief, then you could never really justify your belief, because every justification needs a justification. I made a little slideshow for you of the "explosion of justifications" required, where B is the original belief and the R's are reasons (or justification) for that belief. Enjoy:
According to Agrippa, you have three ways of responding to this argument (and none of them work):
- You can start providing justifications, but you’ll never finish.
- You could claim that some things don’t need further justification, but that would be a dogma (which is also unjustified).
- You could try to assume what you are trying to prove, but that’s obviously circular.
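The first horn, incidentally, can be "run" quite literally. Here's a playful sketch in Python (entirely my own illustration): a function that justifies a belief by producing a reason, which, being a belief, then needs justifying in turn.

```python
import sys

def justified(belief):
    """Agrippa's first horn: every good reason is itself a belief
    that needs a further good reason, so the chain never terminates."""
    reason = f"a reason for <{belief}>"
    return justified(reason)  # and so on, forever

# Uncomment to watch the regress in action. Python throws a
# RecursionError after about a thousand steps -- its way of saying
# that a finite system cannot supply an infinite chain of reasons.
# justified("My dog is currently at home")
print(sys.getrecursionlimit())  # 1000 by default
```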
The third possibility that Agrippa points out is definitely not going to work. In fact, that form of reasoning is considered an informal fallacy, which brings us to the...
Begging the Question
This is a fallacy that occurs when an arguer gives as a reason for his/her view the very point that is at issue.
Shenefelt and White (2013: 253) give various examples of how this fallacy appears "in the wild", but the main thread connecting them is the circular nature of the reasoning. For example, say someone believes that (A) God exists because (B) it says so in the Bible, a book which speaks only truth. They might also believe that (B) the Bible is true because (A) it comes from God (who definitely exists). Clearly, this person is reasoning in circles.
An even more obvious example is a conversation I overheard in a coffee shop once. One person said, "God exists. I know it, man." His friend responded, "But why do you believe that? How do you know that God exists?" The first person, without skipping a beat, said, "Because God exists, bro." Classic begging the question.
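If you like the mechanical view from earlier, circular justification has a tidy representation as a graph: record "A because B" as an arrow from A to B, and begging the question shows up as a loop. A minimal sketch (the dictionary and function are my own toy illustration):

```python
# "A because B" is recorded as reasons[A] = B.
reasons = {
    "God exists": "the Bible is true",
    "the Bible is true": "God exists",  # ...and around we go
}

def begs_the_question(claim, seen=()):
    """Follow the chain of reasons; revisiting a claim means the
    conclusion is ultimately offered as its own support."""
    if claim in seen:
        return True
    nxt = reasons.get(claim)
    return nxt is not None and begs_the_question(nxt, seen + (claim,))

print(begs_the_question("God exists"))  # True: the chain is circular
```

The coffee-shop version is just the one-step loop: reasons["God exists"] = "God exists".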
Two more things...
Now that you know about the high degree of religiosity and some tidbits about ancient philosophy, the setup is complete. We can begin to move towards 1600. Let me just close with two points. First, I know I left you hanging last time. Let me correct that now. The fundamental question of the course is: What is knowledge? I know it doesn't seem like it could take the whole term to answer this question, but you'd be surprised.
Second, that execution that I mentioned... Don't worry. It's coming.
There are two bits of context that are important for understanding the history being told in this class:
- The first century of the early modern period in Europe (1500-1800 CE) was characterized by a high degree of religiosity;
- There was still an active engagement between the thinkers of this early modern period and the philosophies of ancient Greece.
The distinction between deduction (which purports to give certainty) and induction (which is probabilistic reasoning) is important to understand.
The jargon (i.e., technical language) for the assessment of arguments, namely the concepts of validity and soundness, is essential to know.
Epistemology is the branch of philosophy that concerns itself with questions relating to knowledge.
The ancient philosophical schools of skepticism posed challenges to the possibility of having knowledge, challenges which early modern thinkers were still thinking about and working through.
The regress argument, one argument from the skeptic camp, questions whether we can ever truly justify our beliefs, thereby undermining the possibility of having knowledge (at least per the definition of knowledge assumed in the JTB theory of knowledge).
FYI
Suggested Reading: Harald Thorsrud, Ancient Greek Skepticism, Section 3
TL;DR: Jennifer Nagel, The Problem of Skepticism
Supplementary Material—
- Video: Steve Patterson, The Logic Behind the Infinite Regress
Related Material—
- Video: Nerdwriter1, The Death of Socrates: How To Read A Painting
Advanced Material—
- Reading: A. J. Ayer, What is Knowledge?
Footnotes
1. To add to this, there was a major philosophical debate in the 20th century over the possibility of translation (e.g., see Quine 2013/1960). Consider how, in order to translate the modes of thought and concepts of an alien culture, you need to first interpret them. But the very process of interpretation is susceptible to a misinterpretation—distortion due to unconscious biases. Perhaps the whole process of translation itself is doomed.
2. The interested student should consult Kahneman's 2011 Thinking, fast and slow or watch this helpful video.
3. Zajonc argues that this trait, to find the familiar favorable, is evolutionarily advantageous. It makes sense, he argues, that novel stimuli should be looked upon with suspicion, while familiar stimuli (which didn’t kill you in the past) can be looked on favorably (Zajonc 2001).
4. Emperor Nero took advantage of the Great Fire of 64 CE to build a great estate. Facing accusations that he deliberately caused the fire, he heaped the blame on the Christians, and a short campaign of persecution began. However, the Christians appeared to revel in the persecution. Martyrdom allowed many who were otherwise of lowly status or from a disenfranchised group (like women or slaves) to become instant celebrities and be guaranteed, they believed, a place in heaven. Martyrdom literature proliferated, and Christians actively sought out the most painful punishments (see chapter 4 of Catherine Nixey's The Darkening Age).
5. By the way, I'm not alone in using this distinction. One of the main books I'm using in building this course is Herrick (2013), who shares my view on this distinction.
6. Another common mistake that students make is that they think arguments can only have two premises. That's usually just a simplification that we perform in introductory courses. Arguments can have as many premises as the arguer needs.
7. This argument is valid but not sound, since there are some lawyers who are non-liars—although not many.
8. Interestingly, in an era of disinformation where there is non-ironic talk of "alternative facts" and "post-truth", the distinction between knowledge and opinion is once again an important philosophical distinction to make.
The Advancement of Learning
No fact is safe from the next generation of scientists with the next generation of tools.
~Stuart Firestein
Worldviews
The year 1600 marks a turning point, according to historian and philosopher of science Richard DeWitt. It was in this year that a worldview, and its set of accompanying beliefs and ideas, died. The Aristotelian worldview, which had dominated Western thought starting at about 300 BCE, finally imploded. In its wake, a deterministic and mechanical worldview came to permeate the minds of intellectuals, scientists, and philosophers.
Now, of course, the death of abstractions is always hard to pin down. The year 1600, more than anything, is perhaps best considered a convenient record of the hour of death. Even more elusive than determining the time of death of an abstraction may be coming to an agreement over its cause of death. Even though there is nothing analogous to, say, cardiac arrest for an abstraction, there is what we might call a natural death: some ideas simply run their course and are abandoned. This, we can say with considerable certainty, was not what happened to the Aristotelian worldview. Speaking crudely, this worldview lived past its prime and its usefulness, and so it was put down deliberately via the concerted effort of many of the most famous names in science and philosophy: Copernicus, Galileo, Descartes, Kepler, and Newton (to name a few). Its cause of death, in short, was violent.
To understand how a worldview dies, however, we should probably begin with some basics. What is a worldview, anyhow? And how do abstractions die? What was the Aristotelian worldview? What replaced it? We take these in turn.
Jigsaws
DeWitt begins his Worldviews by clarifying the notion of a worldview. He likens beliefs to a jigsaw puzzle. To have a worldview is to have a system of beliefs where the beliefs fit together in a coherent, rational way. In other words, the beliefs of our worldview should fit together like the pieces of a puzzle. We wouldn't, for example, believe both that the Earth is the center of the universe and that our solar system revolves around the center of the Milky Way galaxy. There can't, of course, be two centers. And so, at least when we are being our best rational selves, we develop consistent, rational jigsaw puzzles of beliefs.
Moreover, the central pieces tend to be more important. These are what allow the rest of our beliefs to fit in or cohere with each other. Without the central pieces, the outer pieces are just "floating", with no connection to the center or to each other. It may be that, as happens to be the case with me, when you are assembling a jigsaw, you sometimes change the location of the outer pieces. You thought a piece went in the lower-right quadrant, but it turns out it belongs in the upper left. Analogously, less central beliefs in our worldview might be updated or even discarded, but the central beliefs are integral to the system. The central beliefs give the system meaning and form.
The Aristotelian Worldview
Before proceeding, it should be said that what is called the Aristotelian worldview is not exactly what Aristotle believed. Rather, it takes as a starting point several beliefs held and defended by Aristotle and grows from there. Having said that, let's take a look at some of these beliefs:
- The Earth is located at the center of the universe.
- The Earth is stationary.
- The moon, the planets, and the sun revolve around the Earth in roughly 24-hour cycles.
- The region below the moon, the sublunar region, contains the four basic elements: earth, water, air, fire.
- The region above the moon, the superlunar region, contains the fifth element: ether.
- Each of the elements has an essential nature which explains their behavior.
- The essential nature of the elements is reflected in the way the elements move.
- The element earth has a tendency to move towards the center of the universe. (This explains why rocks fall down, since the Earth is the center of the universe.)
- The element of water also has a tendency to move toward the center of the universe, but this tendency is not as strong as that of earth. (This is why when you mix dirt and water, the dirt eventually sinks.)
- The element air naturally moves away from the center of the universe. (That’s why when you blow air into water, the air bubbles up.)
- Fire also tends to naturally move away from the center of the universe. (That is why fire rises.)
- Ether tends to move in circles. (This is why the planets, which are composed of ether, tend to move in circular motions around the Earth.)
Slow down there, Turbo...
I've sometimes encountered people, including people who should know better, who naively believe that people who held the Aristotelian worldview were "simple-minded" or even "stupid". I assure you, they had (roughly speaking) the same mental machinery that you and I have. In the case of Aristotle, Ptolemy, and others, I'm actually willing to wager that they were much smarter than even the average college professor. Moreover, they had what were—at the time—very good rational and empirical arguments for their beliefs. I'm sure that if you had been around in those days, you would've bought into these beliefs too.
DeWitt (ibid., 81-91) summarizes some arguments for the Aristotelian viewpoint from Ptolemy's Almagest (published ca. 150 CE). For example, it was known that the Earth was spherical since objective events, like eclipses, were recorded at different times in different places, and the difference in time was proportional to the distance between locations. By studying the regularity with which the sun rises in the East prior to more westerly locations, the ancients reasoned that the curvature of the Earth is more or less uniform. Having established the east-west spherical nature of Earth, which is compatible with Earth being a cylinder, Ptolemy then notes that some stars are only visible the further north one goes. The same goes when traveling south. This suggests that Earth is spherical in the north-south direction too, thereby establishing that Earth is spherical.
Even though you agree (I hope) that the Earth is spherical, could you have come up with those arguments? It's important to remember that our ancestors were not naive.
It's instructive also, I think, to look at beliefs you likely don't agree with. Ptolemy gives some common-sense arguments for geocentrism: the belief that the Earth is the center of the solar system and/or the universe. Here's one such argument. The ancients knew that the Earth's circumference was about 25,000 miles. Assuming that the Earth rotates on an axis would lead to the conclusion that a full rotation takes 24 hours. This would mean that, if we are standing at the equator, we would be spinning at over 1,000 miles per hour (since 25,000 miles / 24 hours = 1,041.7 mph). This speed would only be compounded by an Earth that is also orbiting the sun. But it doesn't even feel like we are moving at 1,000 mph, so there is no need to consider still greater speeds. Heliocentrism simply doesn't match our everyday experience, since it doesn't at all feel like we are on a sphere travelling at well over a thousand miles per hour.
Here's another argument. Objects don't typically move without some external force acting on them. Try it if you want to convince yourself. Earth, moreover, is a very large object. It stands to reason that Earth would move only if some very massive force were moving it. But no such massive force is immediately evident. So it is reasonable to infer that Earth is stationary.
The aforementioned arguments could perhaps be dubbed "common-sense" arguments. Ptolemy also gave more empirical arguments about the nature of falling objects and stellar parallax, but these are a little more technical than what is needed to prove the basic point I want to make: the ancients were not dumb. In fact, in section 7 of the preface to the Almagest, Ptolemy even briefly considers the mechanics of heliocentrism(!). Also, it is noteworthy that it took the combined efforts of some of the biggest names in science, as we've already mentioned, to dethrone geocentrism. Relatedly, Ptolemy's model of the solar system was unrivaled in predictive power for 1400 years. Lastly, most educated people beginning around 400 BCE correctly believed that the Earth was spherical, as we do. In sum, the ancients were no slouches.
How do worldviews die?
Recall the analogy between worldviews and jigsaw puzzles. It is the central pieces that carry most of the import. These central pieces connect what would otherwise be disunited and seemingly unrelated components. The center is, in short, the beating heart of the worldview. A worldview dies when its central tenets are no longer believed.1
The magicians
We find ourselves in London in the year 1600. We are in the tail end of the Tudor Period (1485-1603), which is generally considered to be a "golden age" in English history. Queen Elizabeth I is on the throne, as she has been since 1558, and what a reign it's been. Advancements in cartography and the study of magnetism have led to an increase in maritime trade. The British navy itself has been doing very well: in the summer months of 1588, the English defeated the Spanish Armada, which had previously been thought to be invincible. There have been no major famines or droughts, and literacy rates have risen. It's even the case that, towards the end of Elizabeth's reign, in the early 1590s, Shakespeare's plays were beginning to hit the stage.
It was during this time period that some pivotal steps towards modern science were taken. A few decades earlier, in the middle of the 1500s, there had been remarkable innovations in the making and use of tools of observation, as well as in ways of conceptualizing and categorizing one’s findings. Tools of observation and the systematizing of our beliefs (an inheritance from Aristotle), of course, are essential to science. But by 1600, one idea more than any other was catching on in certain intellectual circles: the idea of the controlled experiment.
Prior to this time period, when one said that one had experimented, what one really meant was merely that one had tried something. In fact, in some Spanish-speaking countries, the Spanish word for experiment (experimentar) is still used to mean something like "to see what something feels like." But the word experiment was shifting in meaning. It was, in some specialized circles, starting to mean an artificial manipulation of nature: a careful, controlled examination of nature. This would eventually lead to a complete paradigm shift. Lorraine Daston writes:
“Most important of these [innovations] was ‘experiment,’ whose meaning shifted from the broad and heterogeneous sense of experimentum as recipe, trial, or just common experience to a concertedly artificial manipulation, often using special instruments and designed to probe hidden causes” (Daston 2011: 82).
Who were these early experimentalists? Like all great movements, the experimentalist movement went through three stages: ridicule, discussion, adoption. Although this may pain those of us who are science enthusiasts, the early experimentalists were looked upon as... well... weird. They typically spent most of their free time engaging in experiments and would let other social and professional responsibilities lapse. They would spend an inordinate amount of their money on their examination of nature, and they would mostly socialize only with other experimentalists. Daston again:
“[O]bservation remained a way of life, not just a technique. Indeed, so demanding did this way of life become that it threatened to disrupt the observer’s other commitments to family, profession, or religion and to substitute epistolary contacts with other observers for local sociability with relatives and peers... French naturalist Louis Duhamel du Monceau depleted not only his own fortune but that of his nephews on scientific investigations. By the late seventeenth century, the dedicated scientific observer who lavished time and money on eccentric pursuits was a sufficiently distinctive persona in sophisticated cultural capitals like London or Paris to be ridiculed by satirists and lambasted by moralists” (Daston 2011: 82-3).
Stage 2: Discussion
Enter Francis Bacon (1561-1626). By 1600, Bacon had already served as a member of Parliament and had been elected a Reader, a lecturer on legal topics. But the aspect of Bacon's work that was truly novel was his approach to natural science. Bacon struggled against the traditional Aristotelian worldview as well as the mixture of natural science and the supernatural. He re-discovered the pre-Socratics and was fond of Democritus and his atomism: the view that the world is composed of indivisible fundamental components. He also had a clear preference for induction, contrary to many of his contemporaries.
In 1605, Bacon publishes The Advancement of Learning, in which he rejects many Aristotelian ideas. In book II of Advancement, he argues that we must cleanse ourselves of our “idols” to engage in empirical inquiries well. These idols include intuitions rooted in our human nature (idols of the tribe), overly cherished beliefs of which we can’t be critical (idols of the cave), things we hear from others but never verify (idols of the marketplace), and the ideological inheritance of accepted philosophical systems (idols of the theater).
Instead, Bacon argues that the only way to know something is to be able to make it, to control it. In other words, making is knowing and knowing is making. The way this idea is most often encapsulated is in the following phrase: Knowledge is power. As it turns out, it was this controlling of nature that so enthralled the early experimentalists. Bacon even referred to this applied science as "magic".
Regress? What regress?
In the last lesson, we were introduced to an argument from the skeptic camp, an argument that concluded that knowledge, as conceived of by Plato, is impossible. I joked that this is more like a game to contemporary epistemologists, but I wasn't completely kidding. At least some epistemologists do see it that way (e.g., Williams 2004). If it is a game, then we should be able to solve it. Bacon provides us with one possible approach: change the definition of knowledge.
Take a look once more at the regress argument:
- In order to be justified in believing something, you must have good reasons for believing it.
- Good reasons are themselves justified beliefs, which in turn require further good reasons, and so on.
- So in order to justifiably believe something, you must believe it on the basis of an infinite number of good reasons.
- No human can have an infinite number of good reasons.
- Therefore, it is humanly impossible to have justified beliefs.
- But knowledge just is justified, true belief (the JTB theory of knowledge).
- Therefore, knowledge is impossible.
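To see the structure the skeptic is exploiting, it may help to draw the chain (this schematic is mine, not part of the traditional presentation of the argument). Writing $B \leftarrow B'$ for "belief $B$ is justified by belief $B'$", the regress looks like this:

$$B_0 \leftarrow B_1 \leftarrow B_2 \leftarrow B_3 \leftarrow \cdots$$

Each belief demands a further belief behind it, so the chain of good reasons never terminates unless something can stop it. Keep this picture in mind: foundationalism, which we will meet later in the course, is precisely an attempt to give the chain a final, unshakeable link.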
Notice that the word "justification" is featured prominently throughout the argument. This is important because the argument, then, is assuming Plato's JTB theory: knowledge is justified, true belief. As such, one way to defuse this argument is to not assume Plato's JTB theory. This is precisely what Bacon is doing, and this is no accident.
Bacon rejects the Aristotelian tradition of what he calls the "anticipation of nature" (anticipatio naturae) and argues that we should instead interpret nature through the rigorous collection of facts derived from experimentation. Knowledge isn't "justified, true belief." Bacon couldn't care less whether or not his knowledge claims were justified in the eyes of Platonists. The only question for Bacon is, "Can you control nature?" If you can, that's enough to say that you know how nature works. This should be the true goal of science: to interrogate and ultimately control our natural environments and actively work towards bringing about a utopian transformation of society. Mathematician and historian of mathematics Morris Kline puts it this way:
“Bacon criticizes the Greeks. He says that the interrogation of nature should be pursued not to delight scholars but to serve man. It is to relieve suffering, to better the mode of life, and to increase happiness. Let us put nature to use... ‘The true and lawful goal of science is to endow human life with new powers and inventions’ ” (Kline 1967: 280).
Many students find Bacon's views intuitively appealing, and they very well may be. We will put off critiquing them until the next section. For now, let me stress how this relates to our overall project.
Dilemma #1: How do we solve the regress?
So far, we've been introduced to the branch of philosophy known as epistemology, which concerns itself with questions relating to knowledge. We've seen one possible definition of knowledge (the JTB theory) and one objection to that theory (given via the regress argument) from one camp of thinkers who believe that knowledge is impossible (the skeptics). Let's flag this spot in the conversation and call it Dilemma #1. The dilemma, in a nutshell, is this: how do we stop the skeptic's regress argument?
Bacon provides us with one solution: change the definition of knowledge. In the next section, we will introduce some problems with this solution. In the lessons to come, we will look at alternate solutions coming from different theorists. Only then will we be able to judge which is the best solution.
I might add that this will be the general trajectory of the course. There will be 10 dilemmas in all, and each will be associated with a number of different solutions. These dilemmas are all interrelated, as you'll see. They are also connected to fundamental ideas in Western thought. Fun times ahead.
Bacon's legacy
Bacon's public career ended in disgrace. Although it was an accepted practice at the time, Bacon received gifts from litigants in cases he presided over, and his political rivals used this to charge him with corruption. He was removed from public office, and he devoted the rest of his days to study and writing. He died in 1626.
His legacy lives on, however. Per Daston, “Baconians [those who subscribed to Bacon's ideas] played a key role in the rise of the terminology of observation and experiment in mid-seventeenth-century scientific circles” (Daston 2011: 83; interpolation is mine). Moreover, since the practice of science doesn’t come to fruition just from a common methodology, Bacon also suggested a way of establishing common ground and sharing insights. In New Atlantis (published posthumously in 1626), an incomplete utopian novel, Bacon discussed the House of Salomon, a concept that influenced the formation of scientific societies, societies which still live on today. Daston again: “[T]he scientific societies of the late seventeenth and early eighteenth centuries shifted the emphasis from observation as individual self-improvement, a prominent theme in earlier humanist travel guides, to observation as a collective, coordinated effort in the service of public utility” (Daston 2011: 90).
Decoding Pragmatism
Some comments
At this point we've introduced a second contender into the fray: pragmatism. We've also complicated our epistemic picture a little bit, so I'd like to make sure everyone's on board with this:
- We have two different definitions of what knowledge is: Plato's JTB theory and pragmatism.
- We have two different theories for justifying knowledge claims: the correspondence theory of truth and the coherence theory of truth.
- We have two different conceptions of the aims of science: realism and instrumentalism.
- Lastly, these different views tend to cluster together: JTB fits in nicely with the correspondence theory of truth and realism, while pragmatism tends to fit in nicely with coherentism and instrumentalism.
Putting on our historical lenses
It may be the case that you've already picked which view you agree with. That's fine. But now I want you to think about which view would've seemed more sensible at the turn of the 17th century. If you were there at Bacon's funeral, would you have bet on his ideas catching on and taking off? Would you have wanted them to?
It's hard for us to put ourselves in the right frame of mind to understand and to feel what someone from the 1600s might've thought and felt. We are, in a very real way, jaded. But we have to realize this: the great success of the natural sciences whose fruits we enjoy today had not yet been empirically validated in the 17th century. In other words, if you were there, looking at these "experimentalists" and "magicians", you could reasonably argue that society should not take the plunge. You would find yourself at a crossroads. Would you choose the worldview that had dominated Western thought for over a thousand years (i.e., the devil you know) or would you take a chance with this new natural science stuff? I'm sure that we'd like to think that we would've been early adopters of the latest ideas, but consider for a moment how uncertain this new worldview must've seemed.
All in pieces...
To be honest, to only focus on the nascent scientific method is to grossly underestimate the feeling of inconstancy that was gripping society in the 1600s. It was a time of intellectual and social upheaval, sometimes violent; and it had been for a few centuries.
My guess is that most of us would have felt the same feeling of precariousness. It all felt on the verge of collapse. I close with Morris Kline's words:
“It was to be expected that the insular world of medieval Europe accustomed for centuries to one rigid, dogmatic system of thought would be shocked and aroused by the series of events we have just described. The European world was in revolt. As John Donne put it, ‘All in pieces. All coherence gone’” (Kline 1967: 202).
Around the year 1600, the Aristotelian worldview, which had dominated Western thought since roughly 300 BCE, finally began giving way to a new, more mechanistic worldview.
During this time period, the concept of an experiment began to be developed, and experimentalists enthusiastically took to the analysis of nature.
The ideas of Francis Bacon were instrumental in standardizing and codifying the concepts of experiment, scientific societies, and eventually the scientific method itself.
Pragmatism is a competing conception of knowledge.
There are two constellations of views that fit well together:
- JTB theory + the correspondence theory of truth + realism about science
- pragmatism + coherence theory of truth + instrumentalism about science
FYI
Suggested Reading: Lorraine Daston, The Empire of Observation, 1600-1800
- Note: The suggested reading is only the first 11 pages of the document. Here is a redacted copy of the reading.
TL;DR: 60Second Philosophy, Who is Francis Bacon?
Supplementary Material—
- Reading: David Simpson, Internet Encyclopedia of Philosophy Entry on Francis Bacon
Related Material—
- Video: Smarthistory, Friedrich, Abbey among Oak Trees
- Note: This is an analysis of the work of Caspar David Friedrich, one of my favorite artists. I use his paintings throughout this course. One painting in particular, his Monastery Ruins in the Snow, is the image I use to represent the feeling of being trapped in the pit of skepticism.
Advanced Material—
- Reading: Francis Bacon, Novum Organum
- Note: See in particular Book II.
- Reading: Jürgen Klein, Stanford Encyclopedia of Philosophy Entry on Francis Bacon
Footnotes
1. DeWitt and I are both heavily influenced by the work of physicist/philosopher Thomas Kuhn (2012), as are many. This is not to say that DeWitt (or I) agree completely with Kuhn, though. Nonetheless, this whole way of speaking is reminiscent of Kuhn's The Structure of Scientific Revolutions, originally published in 1962. The interested student can find a copy of this work or watch this helpful video.
Cogito
If you would be a real seeker after truth, it is necessary that at least once in your life you doubt, as far as possible, all things.
~René Descartes
Aristotle's dictum
Up to this point, we've been using the analogy between worldviews and jigsaws, or between worldviews and webs of belief (which was philosopher W.V.O. Quine's preferred metaphor). Analogies, unfortunately, have their limitations. As it turns out, there are aspects of the downfall of worldviews that are decidedly not like a jigsaw puzzle (or a spider's web, for that matter). For example, once a worldview is torn apart, it is not uncommon for thinkers to parse through the detritus, find a pearl, and incorporate it into their new worldview. This would be as if someone, after breaking apart the completed jigsaw puzzle that decorated their living room coffee table, were to take a prized puzzle piece from that set and use it in a different jigsaw. Surely that doesn't make much sense. Nonetheless, this is how the passing of worldviews goes: not all beliefs are tossed. There are always some pearls of wisdom that can be salvaged, updated, or otherwise adapted to the new normal.
First, some context. If I painted too rosy a picture of the end of the Tudor period in the last lesson, you will forgive me—I hope. My perception might have been altered by my knowledge of what happened next. As if we needed a reminder that, in the past, societal advancements were often accompanied by backtracking, both nature and established institutions pushed back against intellectual progress and stability—with a vengeance. In 1600, for example, philosopher and (ex-)Catholic priest Giordano Bruno was burned at the stake for his heretical beliefs in an infinite universe and the existence of other solar systems. Even more tragic, if only because of the sheer magnitude of the numbers involved, was the Thirty Years' War (1618-1648), which claimed the lives of 1 out of 5 members of the German population. This began as a war between Protestant and Catholic states. However, fueled by the entry of the great powers, the conflict wrought devastation to huge swathes of territory, caused the death of about 8 million people, and basically bankrupted most of the belligerent powers.
As if human-made suffering weren't enough, Mother Nature often only compounds the agony. In 1601, due to a volcanic winter caused by an eruption in Peru, Russia went through a famine that lasted two years and killed about one third of the population. There were also intermittent resurgences of bubonic plague in Europe, northern Africa, and Asia throughout the 17th century. I, of course, don't have to tell you that a tiny virus 125 nanometers in size can cause major cultural, political, and economic disruptions.
But thinkers of the time did not yet know about germ theory, let alone climate science. The former would have to wait until the work of Louis Pasteur in the 1860s. Intellectuals instead focused on a domain that they felt they might be able to affect: our religious convictions. Thinkers like René Descartes (the subject of this lesson, whose name is pronounced "day-cart") and John Locke (the subject of the next lesson) noticed that religious fanatics could reason just as well as logicians. The problem, they thought, was that the fanatics began with religious assumptions that they took to be universal truths. From these dubious starting points the fanatics were able to justify the killing of heretics, the torture of witches, and their religious wars. In other words, these thinkers, like many others of their age, wanted to ensure that people reasoned well, that people would filter their beliefs and discard those that didn't stand up to scrutiny. A worthy goal, indeed.
How should we reason? Descartes believed that our premises must be more certain than our conclusions. In other words, we must move from propositions that we have more confidence in to those that we have less confidence in (at least initially, before argumentation). In fact, however, Aristotle had made this point in his Posterior Analytics almost two thousand years before (I.2.72.a30-34). As such, we will call this principle, that we should move from propositions that we have more confidence in to those that we have less confidence in, Aristotle's dictum. Remember: never throw out the baby with the bathwater.
Reasoning Cartesian style
Descartes gives the metaphor of a house. We must build the house on a strong foundation; otherwise it collapses. Similarly, we must move from certain (or near certain) premises on towards our conclusion. Aristotle, by the way, speaks in similar terms. Aristotle talks about how our premises are the “causes” of our conclusions, but he’s talking about sustaining causes, the supports that keep our conclusions from crashing down.
Descartes uses this epistemic principle to argue against circular reasoning. For example, recall the example we gave for the informal fallacy of begging the question. Say someone believes that (A) God exists because (B) it says so in the Bible, a book which speaks only truth; and they also believe that (B) the Bible is true because (A) it comes from God (who exists). Clearly, this person can’t find each of A and B more convincing than the other; this is circular reasoning (for a discussion, see Shenefelt and White 298: n11). The trouble with religious fanatics, Descartes reasoned, is that their premises are not known to begin with. And from these false beliefs they build an even more dubious superstructure. Realizing this, Descartes decided to use the method of doubt (or hyperbolic doubt), disabusing himself of anything he didn’t know with absolute certainty. Stay tuned.
Newton claimed that he stood on the shoulders of giants, but most people don’t really know which giants he spoke of. I’ll go ahead and tell you, at least with regard to Newton’s first law of motion. Galileo (1564-1642) came close to getting it right, but it was Descartes (1596-1650) who was the first to clearly state the principle. Finally, Newton (1642-1727), borrowing heavily from Descartes’ formulation, incorporated it into his own greater mechanical view of the universe (see DeWitt 2018: 100).
This goes to show that Descartes was much more than a philosopher; he was also a gifted mathematician and scientist. One of his most famous insights was the bridging of algebra and geometry (as in Cartesian coordinates). I might add, however, that Descartes doesn’t get sole credit for coordinate geometry. French lawyer and mathematician Pierre de Fermat arrived at the same broad ideas independently and concurrently.
Stragglers
As we saw in the last section, not all of the ideas from the Aristotelian worldview were discarded. For one, Aristotle's dictum was preserved and used by Descartes in an attempt to temper the religious fanaticism of the time. But there were other ideas that Descartes felt should be preserved. One of those was the hope for deductive certainty about the world.
Recall that by this point in history there was an alternative conception of knowledge: Bacon's pragmatist approach. Although we've already looked at problems with this view, I'd like to press these a little further now. Pragmatism, as we've seen, melds well with a coherence theory of justification. But coherentism leaves us with many unanswered questions. For example, is a statement true because it coheres with an individual’s web of beliefs? This, by the way, is sometimes referred to as alethic relativism. It seems like most individuals aren't reliable enough to be counted on to have a well-supported, coherent set of beliefs. Should we instead say that a statement is true only if it coheres with a whole group’s web of beliefs? If that's the case, how do we decide who’s a member of the group? Do they have to be experts? How do we decide who’s an expert?
Perhaps the most worrisome implication of pragmatism is the possibility that everything we currently know to be true could be overturned by empirical developments. In other words, it could very well be the case, according to the logic of pragmatism, that the worldview we hold in the 21st century could be completely overthrown in the decades and centuries to come. Moreover, its replacement might itself eventually be overthrown. It is possible that everything we think we know is actually false. This is called pessimistic meta-induction, and it's very worrisome for some people.
Descartes, it seems, was definitely not comfortable with this notion. The alternative appeared to be JTB theory + the correspondence theory of truth + realism about science. Now, it should be said that Descartes didn't use any of these labels. Instead, we are labeling Descartes' views after the fact, but it's worth it because it will help us understand a very important point. Stay tuned.
This alternative package of views (JTB theory + the correspondence theory of truth + realism about science) isn't without its problems. Both the correspondence theory of truth and realism about science face objections of their own. But let's just focus on the JTB theory. After all, we've already looked at a problem for this view: the regress argument. Until we solve the regress, we really can't move forward with this constellation of views. We know this, and Descartes knew it as well.
And so Descartes sought to establish solid foundations for science and common sense. He thought to himself that there must be some things we know with absolute certainty, things that are self-evident, things that cannot be denied. Since these beliefs cannot be doubted or questioned, they are regress-stopping beliefs. In other words, these are foundational beliefs, and upon these foundational beliefs we can build the rest of our beliefs. That's how Descartes planned on stopping the regress: by starting with beliefs that literally could not be questioned.
How do you discover foundational beliefs? For this goal, Descartes employs his Method of Doubt, also referred to as hyperbolic doubt. In a nutshell, Descartes would discard any belief that he did not know with absolute certainty. He would take as long as he needed to because, in the end, whatever was left over would be known with 100% confidence. In other words, at the end of that process, he should have his foundational beliefs. Upon those beliefs, he would build an explanation of the rest of the world.
Descartes' goals
Descartes' story is complicated and interesting, and we cannot do justice to his life here. One tantalizing detail that just can't be left out, though, is that he served as a mercenary for the army of the Protestant Prince Maurice against the Catholic parts of the Netherlands during the early stages of the Thirty Years' War. It was there that he studied military engineering formally. Although we do not know his actual course of study, he very likely studied the trajectories of cannonballs and the use of strong fortifications to aid in defense and attack. Can you see any analogies to his philosophical project?
In any case, the interested student can consult a biography of Descartes for more details. Here, I'd like to focus instead on what was happening to Descartes' contemporaries and how that might've influenced Descartes' thinking. Galileo Galilei (1564-1642) was an advocate of the heliocentric model. He was also thirty-two years Descartes' senior and a very public figure. Even before his astronomical observations, Galileo had invented a new type of compass to be used in the military. This compass, developed in the mid- to late-1590s, helped gunners calculate the position of their cannon and the quantity of gunpowder to be used. When the telescope was invented in 1608, Galileo set to observing the skies and made many important observations that would contribute to the downfall of the Aristotelian worldview. To add to his star power, Galileo became the chief mathematician and philosopher for the Medicean court.1 The Medicis were apparently flattered that Galileo named a group of four "stars" (in fact, the moons of Jupiter he had discovered) the Medicean stars. All this to say that if something happened to Galileo, intellectuals would notice.
Indeed, something did happen to Galileo, although he certainly played a hand in it. Galileo, a fierce advocate of heliocentrism, published his Dialogue Concerning the Two Chief World Systems in 1632. In it, he insulted the powers that be (the Roman Catholic Church) by putting the position of the Church (geocentrism) in the mouth of a character named Simplicio (roughly, "the simpleton"). This, of course, did not sit well with the pope. It should be added that at this point the Church was reeling from its conflict with the Protestants, and so the Church was a little touchy about challenges to its authority. And so, in 1633, the Inquisition condemned Galileo to house arrest, although it could've been worse. Already in his 60s, Galileo died under house arrest in 1642.2
Before leaving Galileo, it should be said that, although he was immensely important in the history of science for many reasons, one stands out. He was one of the first modern thinkers to clearly state that the laws of nature are mathematical. Morris Kline writes:
“Whereas the Aristotelians had talked in terms of qualities such as earthiness, fluidity, rigidity, essences, natural places, natural and violent motion, potentiality, actuality, and purpose, Galileo not only introduced an entirely new set of concepts but chose concepts which were measurable so that their measures could be related by formulas. Some of his concepts, such as distance, time, speed, acceleration, force, mass, and weight, are, of course, familiar to us and so the choice does not surprise us. But to Galileo’s contemporaries these choices and in particular their adoption as fundamental concepts, were startling” (Kline 1967: 289).
How is this related to Descartes? We must recall that in this time period religion gave meaning to every sphere of life. As it turns out, we don't have to travel far to see religious inspiration. The very same scientists who played a role in the downfall of Aristotelianism were motivated by religious fervor. Copernicus was motivated by neo-Platonism, a sort of Christian version of Plato's philosophy (DeWitt 2018: 121-123), and Kepler developed his system, which was actually right, through his attempts to "read the mind of God" (ibid., 131-136). Even Galileo was a devout Catholic; he just had a different interpretation of scripture than did Catholic authorities.3
Yuval Noah Harari reminds us just how thoroughly this was true up until the dawn of humanism.
“In medieval Europe, the chief formula for knowledge was: knowledge = scriptures × logic. If we want to know the answer to some important question, we should read scriptures and use our logic to understand the exact meaning of the text... In practice, that meant that scholars sought knowledge by spending years in schools and libraries reading more and more texts and sharpening their logic so they could understand the texts correctly” (Harari 2017: 237-8).
Religion gave meaning to art, music, science, war, and, if you recall, even death. The following contains graphic descriptions of executions which were practiced in Descartes' native France. Those who are sensitive can and should skip this video.
And so Descartes had to walk a fine line. He, obviously, wanted to continue to engage in scientific research, but he didn't want to end up like Galileo. He wanted to find a way to reconcile science and the Church. He needed to establish a foundation for science without alienating the faithful. But how? After a long gestation period, Descartes finally publishes his Meditations in 1641.
“Because he had a critical mind and because he lived at a time when the world outlook which had dominated Europe for a thousand years was being vigorously challenged, Descartes could not be satisfied with the tenets so forcibly and dogmatically pronounced by his teachers and other leaders. He felt all the more justified in his doubts when he realized that he was in one of the most celebrated schools of Europe and that he was not an inferior student. At the end of his course of study he concluded that all his education had advanced him only to the point of discovering man’s ignorance” (Kline 1967: 251).
Decoding foundationalism
Into the fray...
At this point, we have two different conceptions of knowledge competing with each other. Moreover, they are both constellations of views. They're like little worldviews: not as all-encompassing as the Aristotelian worldview, but still coherent systems of beliefs nonetheless. They are:
- A Cartesian view: JTB theory + correspondence theory of truth + foundationalism + realism about science.
- A Baconian view: pragmatism + coherence theory of truth + instrumentalism about science.
Notice that I didn't call them "Descartes' view" and "Bacon's view". Just as with the Aristotelian worldview, this isn't exactly what they believed. These labels are simply a means for us to talk about these concepts, and they may or may not have matched perfectly how Descartes and Bacon actually thought. These different approaches to knowledge are certainly, however, in the spirit of each of these thinkers.
For rhetorical elegance, I won't call them the Cartesian view and the Baconian view. I'll use shorter labels. For the Cartesian view, I will use the label foundationalism. This is a little clumsy in that it is both the name of a theory about how to stop the regress and the name for the overall constellation of views. I think that's ok, though. The reason is that foundationalism (as a means to stop the regress) is what holds the constellation of views together. It's sort of the "band-aid" that fixes the regress problem and keeps the JTB theory, the correspondence theory of truth, and realism about science alive. For the Baconian view, things are a little simpler. It already has a label. We will refer to it as positivism.4
When a worldview collapses, it is the central beliefs that tend to die off. However, not all beliefs associated with the worldview die off. For example, Aristotle's dictum, the view that you should move from premises that you are more certain of to conclusions that you are less certain of (at least initially), was still advocated as the Aristotelian worldview was collapsing and is still considered a good maxim for reasoning even today.
- Note: It might be added that a whole field that Aristotle started, the study of validity (a field known as logic), was also not discarded along with the Aristotelian worldview. This field made amazing leaps forward in the 19th century and is an integral part of computer architecture and computer science today. The interested student can take my symbolic logic course for more.
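To give a taste of this connection (the following sketch is my own illustration, not part of the course materials): an argument is valid just in case no assignment of truth values makes every premise true while the conclusion is false, and that definition can be checked mechanically. A minimal Python sketch, using modus ponens as the example:

```python
from itertools import product

def is_valid(premises, conclusion, variables):
    """An argument is valid iff no truth-value assignment makes
    every premise true while making the conclusion false."""
    for values in product([True, False], repeat=len(variables)):
        row = dict(zip(variables, values))
        if all(p(row) for p in premises) and not conclusion(row):
            return False  # this row is a counterexample
    return True

# Modus ponens: P, P -> Q, therefore Q.
# ("P -> Q" is logically equivalent to "not-P or Q".)
premises = [lambda r: r["P"], lambda r: (not r["P"]) or r["Q"]]
conclusion = lambda r: r["Q"]
print(is_valid(premises, conclusion, ["P", "Q"]))  # prints: True
```

Brute-force enumeration like this is exponential in the number of variables, but for classroom-sized arguments it makes the definition of validity completely tangible.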
Descartes decided to attempt to rescue deductive certainty in science as the Aristotelian worldview was collapsing. In other words, he wanted to find a way to establish the foundations of science such that the truths of science are known with certainty.
Descartes was also looking for a way to reconcile science and faith, as well as a method for pacifying religious fanaticism.
Descartes' foundationalist project was not without other motivations, either. Pragmatism carries with it the possibility that everything we think we know is actually false; this is called pessimistic meta-induction. Avoiding this would've been appealing to thinkers in the early modern period.
Using the method of doubt, Descartes arrived at his four foundational truths:
- He, at the moment he is thinking, must exist.
- Each phenomenon must have a cause.
- An effect cannot be greater than the cause.
- The mind has within it the ideas of perfection, space, time, and motion.
We now have two competing constellations of views: foundationalism and positivism.
FYI
Suggested Reading: René Descartes, Meditations on First Philosophy
- Note: Read only Meditations I & II.
- Note: Here is an annotated reading of Descartes’ Meditations, courtesy of Dan Gaskill at California State University, Sacramento.
TL;DR: Crash Course, Cartesian Skepticism
Related Material—
- Video: Nick Bostrom, The Simulation Argument
- Netflix: The interested student should also watch "Hang the DJ" (Episode 4, Season 4) of the Netflix series Black Mirror.
Advanced Material—
- Reading: The Correspondence between Princess Elisabeth of Bohemia & René Descartes
- Reading: Michael Williams (2004), Scepticism and the Context of Philosophy
Footnotes
1. At this point in time, the term philosopher meant what we would today call a scientist (see DeWitt 2018: 143). We talked about this in Lesson 1.1 Introduction to Course.
2. It is my understanding that Galileo got some revenge, sort of.
3. The interested student can read Galileo's Letter to Castelli, where he most clearly articulates his views on the matter.
4. I might add that positivism should be understood as an umbrella term, since there are various "flavors" of positivism.
Rationalism v. Empiricism
El original es infiel a la traducción.
(The original is unfaithful to the translation.)
~Jorge Luis Borges
Jigsaws, revisited
What do you think about when you hear that some physicists now believe that it's not only our universe that exists, but that there are innumerable—maybe infinitely many—universes in existence? It certainly is a strange idea, and I can imagine a variety of different responses. Some might be in awe. Others might be in disbelief. Some might think it is completely irrelevant to their life, especially if they are members of a historically/currently disenfranchised group. I'm reminded of the Russian communist Vladimir Lenin's response to the possibility that there is a 4th dimension of space. Apparently Lenin shot back that the Tsar can only be overthrown in three dimensions, so the fourth doesn't matter.
My guess, however, is that these possibilities were not open to people of the 17th century. The Aristotelian worldview that had just been dethroned was a worldview that had purpose built into it. Let me explain. Recall that in the Aristotelian worldview, each element had an essential nature. Moreover, each element's behavior was tied to this essential nature. This is called teleology: the explanation of phenomena in terms of the purpose they serve. These explanations are, if you were to truly believe them, extremely satisfying. And so, there is something psychologically alluring about Aristotle's teleological science. Losing this purpose-oriented explanation must've been accompanied by substantial distress, intellectual and otherwise.
The distress of uncertainty lasted almost a century. Thinkers were arguing about what would replace the Aristotelian vision. And then, in 1687, Isaac Newton publishes his Philosophiæ Naturalis Principia Mathematica, and a new worldview is established. Natural philosophy (i.e., what would eventually be called "science") went from teleological (goal-oriented or function-oriented) explanations to mechanistic explanations, which are goal-less, function-less, and purpose-less. This was, psychologically, probably worse for the people of the time. Not only did humans realize they were not living at the center of the universe, but the function-oriented type of thinking they had been using for a thousand years also had to be supplanted by cold, lifeless numbers.
And yet, as the saying goes, if you want to make an omelette, you gotta break some eggs. There were some unexpected benefits to the downfall of the Aristotelian worldview outside of the realm of science, philosophy, and mathematics. There were important social revolutions during this time period, and some thinkers link them to the end of teleological views:
“Likewise, the general conception of an individual’s role in society changed. The Aristotelian worldview included what might be considered a hierarchical outlook. That is, much as objects had natural places in the universe, so likewise people had natural places in the overall order of things. As an example, consider the divine right of kings. The idea was that the individual who was king was destined for this position—that was his proper place in the overall order of things. It is interesting to note that one of the last monarchs to maintain the doctrine of the divine right of kings was the English monarch Charles I. He argued for this doctrine—unconvincingly, it might be noted—right up to his overthrow, trial, and execution in the 1640’s. It is probably not a coincidence that the major recent political revolutions in the western world—the English revolution in the 1640’s, followed by the American and French revolutions—with their emphasis on individual rights, came only after the rejection of the Aristotelian worldview” (DeWitt 2018: 168).
Problems with Descartes' view
Let's return now to our story. Recall that Descartes' grand scheme was to discover foundational truths and build the rest of his belief structure upon them. However, one might lose confidence in the enterprise when one views, with justifiable dismay, the unimpressive nature of the foundational truths. Here they are reproduced:
- He, at the moment he is thinking, must exist.
- Each phenomenon must have a cause.
- An effect cannot be greater than the cause.
- The mind has within it the ideas of perfection, space, time, and motion.
These are the supposedly foundational beliefs. However, we can, it seems, question most if not all of these. Take, for example, the notion that an effect cannot be greater than its cause. Accepting that there is a wide range of ways in which we can define effect and cause, we can think of many "phenomena" that are decidedly larger than their "cause". Things that readily come to mind are:
- the splitting of an atomic nucleus and the resulting chain reaction that takes place at the detonation of an atom bomb,
- viruses, which are measured in nanometers, and the pandemics that they have caused in both antiquity and the modern age, and
- the potential of a lone gunman (Lee Harvey Oswald), who would otherwise be historically inconsequential, to bring about the assassination of a very historically important world leader (John F. Kennedy), with the ensuing political chaos and national mourning.
Reflecting on this last point, if this is a credible instance of an effect being far greater than its cause, it almost seems as if Descartes was vulnerable to one of the cognitive biases that many conspiracy theorists prove themselves susceptible to: the representativeness heuristic. The representativeness heuristic is the tendency to judge the probability that an object or event A belongs to a class B by looking at the degree to which A resembles B. People find it improbable that (A) a person they would've otherwise never heard about (Oswald) was able to assassinate (B) one of the most powerful people on the planet (JFK) precisely because Oswald and JFK are so dissimilar. Oswald is a historical nobody; JFK is a massive historical actor. And it is precisely this massive dissimilarity between Oswald and Kennedy that makes it so difficult for our minds to imagine their life trajectories ever crossing. And so one might judge the event to be improbable and, if you're prone to conspiratorial thinking, you'll begin searching for alternative theories involving organized crime, the Russians, Fidel Castro, etc.1
If it walks like a duck...
One very good criticism that one can make of Descartes' foundationalist project is that, if this is supposed to be an attempt to escape skepticism, then it hasn't gotten us very far. If one considers just the four foundational beliefs, it seems every bit as bad as not knowing anything at all. It might even be worse, since it gives rise to fears hitherto unconsidered. How do we know, for example, that other people have minds? How do we know we're not the only person in existence? Could it be that everyone else is a figment of our imagination or mindless robots pretending to be human? Perhaps most disturbing of all is the possibility that you are in an ancestor simulation where you are the only being that has been awarded consciousness; everyone else is a specter.2
It is precisely on this point, however, that Descartes picks up his project. As it turns out, Descartes argued that he could get out of skepticism, both of the Pyrrhonian variety as well as his self-imposed skepticism (also called methodological skepticism). In a nutshell, he argued that he could prove God exists and that such a perfect being wouldn’t let him be deceived about those things of which he had a “clear and distinct perception” (see Descartes' third meditation). His argument for God's existence, in an unacceptably small nutshell, was that he has an innate idea of a perfect God. An innate idea, by the way, is an idea that you are born with. Descartes, if you recall, identifies having the ideas of perfection, space, time, and motion as a foundational belief. He felt that he knew these with certainty, and that these ideas had been present in the mind since birth. Continuing with his argument, the idea of perfection, reasons Descartes, could only have been provided (or implanted?) by a perfect being, which is God. So, God exists!
There are two things that I should say here. First, I am temporarily holding off on a substantive analysis of arguments for and against God's existence; these will occur in the next portion of the course. Stay tuned.3
Second, Descartes is essentially claiming that God rescues him (and the rest of us) from skepticism. God wouldn't allow an evil demon to deceive us. God wouldn't allow you to think your perceptions of reality are hopelessly false. Think about this for a second. Descartes was able to link evil demons with skepticism and God with knowledge. This, then, accomplishes one of Descartes' goals: to reconcile science and faith.
Having said that, Descartes' argument for God's existence has been criticized since the early modern period. One interesting exchange on this topic occurred between Descartes and Elisabeth, Princess of Bohemia. The two exchanged letters for years, and on numerous occasions Elisabeth questioned whether we really know that God is doing all the things Descartes said God was doing, such as helping us to establish unshakeable knowledge of the world. Although I've not read all the letters, it is my understanding that Descartes never argued for his claims to Princess Elisabeth's satisfaction.
Princess Elisabeth is a nice respite from this otherwise male-dominated world. Nonetheless, due to the "greatest hits" nature of introductory philosophy courses, our next stop is another white male. However, he is one that you may have heard of before: John Locke. We turn to his take on knowledge next.
Two camps
The rift between Locke and Descartes is best understood in terms of a helpful if controversial division of thinkers from the early modern era into two camps: rationalism and empiricism. Rationalism is the view that all claims to knowledge ultimately rely on the exercise of reason. Rationalism, moreover, purports to give an absolute description of the world, uncontaminated by the experience of any observer; it is an attempt to give a God’s-eye view of reality. In short, it is the attempt to reach deductive certainty with regards to our knowledge of the world.
The best analogy for understanding rationalism is deductive certainty in geometry. In his classic mathematics text, Elements of Geometry, the Greek mathematician Euclid began with ten self-evident claims and then rigorously proved 465 theorems. These theorems, moreover, were believed to be applicable to the world itself. For example, if one is trying to divide land between three brothers, one could use some geometry to make sure that everyone gets exactly their share (see the sketch after the quotation below). This general strategy of starting with things that you are certain of and building from there should sound familiar: it is what Descartes did. If you're guessing that Descartes is a rationalist, you are correct.
“In Euclidean geometry… the Greeks showed how reasoning which is based on just ten facts, the axioms, could produce thousands of new conclusions, mostly unforeseen, and each as indubitably true of the physical world as the original axioms. New, unquestionable, thoroughly reliable, and usable knowledge was obtained, knowledge which obviated the need for experience or which could not be obtained in any other way. The Greeks, therefore, demonstrated the power of a faculty which had not been put to use in other civilizations, much as if they had suddenly shown the world the existence of a sixth sense which no one had previously recognized. Clearly, then, the way to build sound systems of thought in any field was to start with truths, apply deductive reasoning carefully and exclusively to these basic truths, and thus obtain an unquestionable body of conclusions and new knowledge” (Kline 1967: 149).
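To make the land-division example concrete (a toy illustration of my own, not anything from Euclid's text): given the corner coordinates of a plot of land, the so-called shoelace formula yields its exact area, and an exact three-way split follows by simple division. A minimal Python sketch, assuming a hypothetical rectangular plot:

```python
def polygon_area(points):
    """Shoelace formula: the area of a simple polygon whose
    vertices are listed in order around the boundary."""
    n = len(points)
    total = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap around to the first vertex
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

# A hypothetical plot, 300 by 120 paces:
plot = [(0, 0), (300, 0), (300, 120), (0, 120)]
area = polygon_area(plot)
print(area, area / 3)  # 36000.0 12000.0 -> each brother's exact share
```

The point of the analogy is the certainty: once the coordinates are granted, the shares follow deductively, with no appeal to experience.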
Empiricism, on the other hand, argues that knowledge comes through sensory experience alone. There is, therefore, no possibility of separating knowledge from the subjective condition of the knower. By default, empiricism is associated with induction, since empiricists reject the possibility of deductive certainty about the world. John Locke, the thinker featured in this lesson, was an empiricist.
Each of these camps embodies an approach to solving issues in epistemology during that time period, from 1600 to about 1800. However, we should add two details to give a more accurate (yet more complicated) view of these opposing camps. First, these thinkers never referred to themselves as rationalists or empiricists. These are labels placed on them after the fact by another philosopher, Immanuel Kant. Moreover, these labels have a different meaning in this context. Typically, today the word rationalism is applied to anyone who is committed to reason and evidence in their belief-forming practices. This is not how we are using the term in this class. Here, rationalism refers to the camp of thinkers who sought to establish deductive certainty about the world.
Second, it might even be the case that these thinkers would be hostile to the labels we've given them. Rationalists, predictably, were committed to mathematics. Descartes, as we've discussed, is famous for some of his mathematical breakthroughs. Gottfried Wilhelm Leibniz, another rationalist, had insights that led him to the development of differential and integral calculus independently of and concurrently with Isaac Newton. Nonetheless, any careful reader sees very different approaches between these two thinkers (Descartes and Leibniz), thinkers who are allegedly in the same camp. And while ascribing a joint methodology to the rationalists is suspect, it is even more contentious to put the “empiricists” into a single camp. This is because there are well-documented cases of “empiricists” denouncing empiricism (see in particular Lecture 2 of Van Fraassen 2008). In short, this is a useful distinction, but one should not fall into the trap of believing that the two sides are completely homogeneous.
“This convenient, though contentious, division of his predecessors into rationalists and empiricists is in fact due to Kant. Believing that both philosophies were wrong in their conclusions, he attempted to give an account of philosophical method that incorporated the truths, and avoided the errors, of both” (Scruton 2001: 21; emphasis added).
The human
John Locke (1632-1704) was a physician and philosopher and is commonly referred to as the "Father of Liberalism". In 1667, Locke became the personal physician to Lord Ashley, one of the founders of the Whig movement (a political party that favored constitutional monarchy, which limits the power of the monarch through an established legal framework, over absolute monarchy). When Ashley became Lord Chancellor in 1672, Locke became involved in politics, writing his Two Treatises of Government during this time period. Two Treatises of Government is a sophisticated set of arguments against the absolute monarchism defended by Thomas Hobbes and others. In 1683, some anti-monarchists planned the assassination of the king; this is known as the Rye House Plot. The plot failed and, although there is no direct evidence that Locke was involved, he fled to Holland. There, he turned to writing once more. In 1689, he publishes his Essay Concerning Human Understanding, the main subject of this lesson.
Decoding Locke
Signpost
At this point, we have three competing ways of conceptualizing knowledge: positivism, Cartesian foundationalism, and Lockean indirect realism.
These are incompatible perspectives. Moreover, Descartes and Locke have an added layer of conflict. Descartes clearly assumes that the foundation of all knowledge consists of foundational beliefs established through reason, not beliefs acquired through the senses. Locke states exactly the opposite view: the mind is empty of ideas without sensory experience. The ultimate foundation of knowledge is sensory experience. We will identify this debate about the ultimate foundation of knowledge as "Dilemma #2: Empiricism or Rationalism?".
Problems with Locke's view
Just as Descartes' view has its issues, Locke's view doesn't seem to really help us escape from skepticism either. First and foremost, Locke admits that there is no way we can ever check that our ideas of the world actually represent the world itself. We can be sure, Locke claims, that our simple ideas do represent the world, but an ardent skeptic would challenge even this. The only way you could know that your representations of the world correspond to the world is if you could compare them; but this is impossible on Locke's view. This is, in fact, the argument that a fellow empiricist, George Berkeley, leveled against Locke.
Speaking of empiricism, here's another challenge to Locke. The empiricist camp happens to feature one of the most prominent skeptics of all time: David Hume. During his lifetime, Hume's work was consistently denounced as works of skepticism and atheism. How did Hume arrive at his skeptical conclusions? Through what some would call his "radical empiricism". If the goal is to escape skepticism, do we really want to take the side of an empiricist, like Locke or Hume?
Those who are well-versed in contemporary psychology might notice that there are other problems with Locke's view that are empirical in nature (see Pinker 2002). However, given the time period we are covering, we should, for the moment, focus on philosophical objections about how this project apparently fails to help one escape from skepticism. The empirical objections will have to wait until the last lessons of the course. Stay tuned.
Moving forward
Now that you've been exposed to three competing approaches to knowledge available to you in the early modern period, which do you think is best? Let me rephrase that question. Had you lived through what someone living in the mid-17th century had lived through, which approach to knowledge do you think you'd prefer? Remember that we are jaded. We have 20/20 hindsight. We know how things are going to pan out. But if you were living at the time, my guess is that you would've sided with Descartes. Here are some reasons:
- Descartes and his rationalist/foundationalist project purports to be able to get us certainty, and this is very appealing. We have to remember that people were moving from the Aristotelian worldview, where objects were purpose-oriented and the Earth was the center of the universe(!), to a more mechanistic, cold, purpose-less Newtonian worldview. Humanity's self-esteem had taken a severe blow, and salvaging deductive certainty about the world seemed to at least be a comforting consolation prize.
- The Baconian empiricist movement was new and unproven. Sure, there had been advances in astronomy, but these had the effect of putting science into a state of crisis more than anything. Thinkers like Descartes desperately sought new foundations to replace the discarded old ones.
- Moreover, empiricism was suspect. Locke's inability to assure his readers that we really could know what the external world is like was worrisome. Later on, of course, Hume would use empiricism to lead us straight into the pit of skepticism. To many, empiricism didn't seem very welcoming.
And so, moving forward, we will further explore Descartes' rationalist/foundationalist project. The mere possibility of being able to achieve deductive certainty is extremely alluring. There is just one problem: Descartes uses God's existence as a means to escape skepticism. The next rational question is this: Does God actually exist?
Two approaches to scientific explanation were covered: teleological (goal-oriented or function-oriented) explanations and mechanistic explanations, which are goal-less, function-less, and purpose-less. The Aristotelian science that dominated Western thought for a thousand years was teleological. The universe was thought to be inherently teleological. The job of a natural scientist was to understand the teleological essences of categories of objects.
There were some unexpected social benefits associated with the downfall of the Aristotelian worldview.
There are various problems with Descartes' view, including concerns over his foundational truths not being very foundational and the perceived inability of his project to get us out of skepticism.
There is a division between two warring philosophical camps: rationalism and empiricism. The rationalists ground knowledge ultimately on reason, like Descartes. The empiricists, like Locke, use sensory experience as their starting point for acquiring knowledge.
Locke's view is distinguished from Descartes' in many ways, including Locke's reliance on the senses and his rejection of innate ideas.
Locke's view, however, also has its problems; most notably, it's hard to see how we can ever check that our representations of the world actually match the world itself (which seems just as bad as skepticism).
FYI
Suggested Reading: John Locke, An Essay Concerning Human Understanding
- Note: Read only chapters 1 and 2.
TL;DR: Crash Course, Locke, Berkeley, and Empiricism
Supplementary Material—
Related Material—
- Video: Closer to Truth, The Multiverse: What's Real?
Advanced Material—
- Reading: Matt McCormick, Internet Encyclopedia of Philosophy entry on Immanuel Kant: Metaphysics.
- Note: Most relevant are sections 1 and 2.
- Reading: Peter Markie, Stanford Encyclopedia of Philosophy Entry on Rationalism vs. Empiricism
Footnotes
1. Castro at least would've had a good reason for wanting Kennedy dead. In chapter 18 of Legacy of Ashes, Tim Weiner (2008) reports that Kennedy seemed to have implicitly consented to assassination attempts on Fidel Castro, in order to repair his image after the Bay of Pigs disaster. There were dozens, maybe hundreds of attempts on Castro's life.
2. This discussion reminds me of the Metallica song "One" that describes the plight of a patient with locked-in syndrome.
3. The interested student can also take Dr. Leon's philosophy of religion class.
Aquinas and the Razor
My name is Legion: for we are many.
~Mark 5:9
Dilemma #3: Does God exist?
The uncomfortable question that titles this subsection leads to the even more uncomfortable analysis of the arguments for God's existence that have been given throughout history. Equally discomforting is what I'm about to say: there's about a thousand years—the first thousand years, roughly speaking—of Christian history that will offer very little of value. Of course, my guess is that you will not consider this blanket statement to be a fair one. And so, I'll happily explain my reasoning in this lesson and the next.
First, however, I must make some preliminary comments. Recall that we are endeavoring to answer this question because we are attempting to find support for the Cartesian rationalist/foundationalist project. Descartes, it is clear from his Meditations, considered himself a devout Catholic. Descartes, moreover, believed that God, through His goodness, could be the solution to the problem of skepticism. And so, to keep the narrative flow of the course, we will look exclusively for proof of God as envisioned by the Catholic tradition. I hope this omission does not offend any students taking this course who align with Protestantism, the Church of Jesus Christ of Latter-day Saints, Islam, Hinduism, Wicca, etc.
Second, the approach will be, by and large, historical. This is, to repeat myself, first and foremost an intellectual history. As such, I want us to focus on arguments and ideas that would've been available to people in the early modern period in Europe. We can, at times, look at other arguments for and against God's existence that are more contemporary. But in the final analysis, these will have to be bracketed and put aside when assessing Cartesian foundationalism (although we can certainly discuss the meaning of contemporary arguments for and against God's existence for our own lives).
Third, I will make my very best attempt to give both sides of this debate. There are four lessons on the philosophy of religion in total. The first and the third will advance arguments for God's existence (and critique them); the second and fourth lessons will be given from the perspective of an atheist (and will discuss some of that perspective's limitations). This back-and-forth approach might be jarring at first, but I think it is a useful method nonetheless.
Fourth, given that we will exclusively be discussing Catholicism, it should be said at the outset that when I use terms like "believers", "the faithful", and other allusions to adherents of a particular religion, I mean adherents of Catholicism. Again, this is for the sake of the narrative and has nothing to do with excluding non-Catholics.
Lastly, these arguments are very much only the "greatest hits" of the philosophy of religion, something that is inevitable in an introductory course. A thorough review of the arguments for and against God's existence takes much longer than four brief lessons. The interested student can enroll in a philosophy of religion course. The references given in these lessons might also make valuable reading material.
The Closing of the Western Mind
I made a contentious claim in the last subsection: that about a thousand-year span of Christian history produced no worthwhile arguments for God's existence. I defend that claim now.
Although I cannot peer into the minds of people, my guess is that most believers think that Christian doctrine is and has always been clearly articulated, with no major dissension. Nothing could be further from the truth. The history of the early Church was littered with heated theological disputes, some of which became lethal. For example, in Lost Christianities, Bart Ehrman details the various competing interpretations of what Christianity meant. One group worthy of mention is the adherents of a view known as gnosticism. This Christian sect believed that spiritual knowledge trumped any authority the Church might have, which, you can imagine, did not sit well with early church leaders. There were also, according to this sect, two gods: a good one and an evil one.
Another dispute that shook the early Church was over what eventually would be called the Arian heresy: the view that Jesus, although divine, was created by God and was subordinate to God. What would eventually become the orthodox (or official) position was that Jesus, or God the Son, has always been co-equal with God the Father. There are, however, various Bible passages that support the Arian interpretation over the orthodox interpretation (e.g., Mark 10:18, Matthew 26:39, John 17:3, Proverbs 8:22; see Freeman 2007: 163-177 for a full discussion). The existence of these passages explains why it was so difficult to stamp out the Arian heresy, which persisted well beyond the 4th century.
There was even dissent between church leaders, such as the rift between St. Paul and St. Peter. Paul seems to have been difficult to work with and he distanced himself from those who had actually known Jesus (such as Peter). Paul's insecurity over his questionable authority caused him to stress the supernatural aspects of Jesus, such as his resurrection, as well as how he received a revelation, the implication being that knowing Jesus personally is not the only way to receive the good news (see Freeman 2007, chapter 9).
And as if all this wasn't bad enough, there was even discord within a single person's interpretation of events. To take the example of Paul again, it turns out Paul was inconsistent in his teachings. Initially, he said that faith alone would save you. But this was because he believed that the second coming was imminent and there was no time to change one's behavior. Only later, once the second coming failed to materialize, did he stress the need for charity (ibid.).
Then came Emperor Constantine. Realizing that persecuting Christians wasn't working, he decided to show them tolerance in the Edict of Milan, co-adopted with the Eastern emperor Licinius. (Note that by this point, the Roman Empire had multiple emperors at once to help rule the vast empire.) As a way to gain the favor of Christian communities that had been persecuted, Constantine gave the clergy exemptions from taxation and from the heavy burdens of holding civic office. However, he was surprised by the number of communities that called themselves "Christians", as well as by their conflicts with each other. Given that he had lavished gifts on the Christian clergy, there was some urgency in clarifying what "Christian" really meant. Constantine ended up allowing only the more pragmatic communities of "Christians" to keep their civic gifts and withdrew his patronage from the others. In doing so, he began to shape what would become the Catholic Church.
However, this did not end the disputes, which seemed to irritate Constantine somewhat. He thus convened the Council of Nicaea in 325 CE to try to establish what orthodox Christianity really was. The factions, however, did not come to an agreement, and Constantine had to settle theological matters by imperial decree. In other words, a non-Christian Roman emperor simply stipulated what the right view was. This happened time and time again in the history of the Church, with different emperors settling different theological disputes by decree. Not only does this seem less-than-divinely inspired, but it also imbued at least the Western half of the empire with a persistent and pervasive authoritarianism. Authority seemed like the way to solve problems. Moreover, once Christians had the backing of the state, they turned to the persecution of pagans. This persecution was literal, and it included the destruction of pagan temples, harassment, and murder (see Nixey 2018).
“’Mystery, magic, and authority’ are particularly relevant words in attempting to define Christianity as it had developed by the end of the fourth century. The century had been characterized by destructive conflicts over doctrine in which personal animosities had often prevailed over reasoned debate. Within Christian tradition, of course, the debate has been seen in terms of a ‘truth’ (finally consolidated in the Nicene Creed in the version of 381) assailed by a host of heresies that had to be defeated. Epiphanius, the intensely orthodox bishop of Salamis in the late fourth century, was able to list no less than eighty heresies extending back over history (he was assured his total was correct when he discovered exactly the same number of concubines in the Song of Songs!), and Augustine in his old age came up with eighty-three. The heretics, said their opponents, were demons in disguise who ‘employed sophistry and insolence’… From a modern perspective, however, it would appear that the real problem was not that evil men or demons were trying to subvert doctrine but that the diversity of sources for Christian doctrine—the scriptures, 'tradition', the writings of the Church Fathers, the decrees of councils and synods—and the pervasive influence of Greek philosophy made any kind of coherent ‘truth’ difficult to sustain... Both church and state wanted secure definitions of orthodoxy, but there were no agreed axioms or first principles that could be used as foundations for the debate... The resulting tension explains why the emperors, concerned with maintaining good order in times of stress, would eventually be forced to intervene to declare one or other position in the debate ‘orthodox’ and its rivals ‘heretical’” (Freeman 2007: 308-309).
Rediscovery
The preceding section showed that the early Catholic Church, rather than using reasoned debate and rational argumentation, relied on authority and the power of the state to establish Christian doctrine. After the fall of the Western Roman Empire in 476 CE, the region fell into its so-called dark age.1 Even prior to this, however, the nomadic tribes of the northern Arabian peninsula were progressively being unified into a meta-ethnicity (a coherent group composed of several ethnicities), perhaps surprisingly, by a shared language (particularly its high form, which was used for poetry). And then, in 570 CE, a man named Muhammad was born. After he received his revelations, the unification of the Arab peoples was expedited, this time by religion (see Mackintosh-Smith 2019).
The early Muslim conquests, 622-750, were lightning fast. In a historical blink of an eye, there was a new regional power. To show just how formidable the Muslim armies were, consider the Sassanid Persian Empire. The Sassanids had been at war with the Eastern Roman Empire (also known as the Byzantine Empire) for decades without one side truly defeating the other. Approaching the middle of the 7th century, though, the Sassanids began their military encounters with the Arab armies. The Sassanids, used to battle with the Byzantines, had a powerful but slow heavy cavalry. The Arabs, on the other hand, were light and quick-moving, and, through the combined use of camels and horses, were able to overwhelm the Persians. Not even war elephants could stop the Arabs, since some Arabs served as mercenaries during the Byzantine-Persian conflicts and knew how to deal with them. Eventually, compounded by bouts of plague and internal problems, the Sassanid Empire finally fell in 651.
The new Muslim Caliphates quickly became hungry for the intellectual insights of their conquered peoples and beyond. Historian of mathematics Jeremy Gray explains:
"When the Islamic empire was created, it spread over so vast an area so quickly that there was an urgent need for administrators who could hold the new domains together. Within ten years of Muhammad's death in AD 632 the conquest of Iran was complete and Islam stood at the borders of India. Syria and Iraq to the North were already conquered, and to the West Islamic armies had crossed through Egypt to reach the whole of North Africa. Since the Islamic religion prescribed five prayers a day, at times defined astronomically, and moreover that one pray facing the direction of Mecca, further significant mathematical problems were posed for the faithful... So it is not surprising that Islamic rulers soon gave energetic support to the study of mathematics. Caliph al-Mamun, who reigned in Baghdad from AD 813 to AD 833, established the House of Wisdom, where ancient texts were collected and translated. Books were hunted far and wide, and gifted translators like Thabit ibn Qurra (836-901) were found and brought to work in Baghdad... So diligent were the Arabs in those enlightened times that we often know of Greek works only because an Arabic translation survives" (Gray 2003: 41).
Although the Muslim conquests are fascinating in their own right, they are relevant in our story because of what happens next. In 1095, Pope Urban II called for the First Crusade. He instigated this Crusade by exaggerating the threat of Turkish aggression at Constantinople, as well as the poor treatment of Christians by Muslims (e.g., during pilgrimages). Most importantly, he demonized Muslims. And so the Crusades began, with the goal of taking back the Holy Land (see Asbridge 2011). Depending on what you count as a "Crusade", the story ends at different times. For our purposes, the Crusades ended when Acre fell to the Mamluk Sultanate in 1291.
How is this relevant to our story? It was during these centuries of prolonged contact with Islamic peoples that Europeans began rediscovering and reacquiring many ancient texts in mathematics, philosophy, and more. We will see this rebirth take place as we study two arguments for God's existence. The first will be by St. Anselm, written before the Crusades; the second will be by St. Thomas Aquinas, written towards the end of the Crusades.
Important Concepts
Some comments on metaphysics
I previously said that you can think of metaphysical questions as being of the sort "What is ______?" The reason for this shorthand is that metaphysics, much like philosophy itself, has been different things at different times. In classical Greece, metaphysics delved into the ultimate nature of reality and included some theories about cosmogenesis. By the 20th century, cosmogenesis was the purview of physics, and some philosophers, notably the members of the Vienna Circle, attacked metaphysics and demanded that the branch of philosophy be shut down. Today, given the state of theoretical physics, some philosophers (who are also trained in physics and/or mathematics) dive into the metaphysics of reality once more (see Omnès 2002). In short, defining metaphysics is tough, since it somewhat depends on the historical context.
Decoding Anselm
Decoding Aquinas
Ockham
Telling the story of Thomas Aquinas and his Five Ways is not complete without telling the story of William of Ockham. Ockham, as I'll refer to him, was a Franciscan friar who was born around 1287 and died in 1347. He is best known for endorsing nominalism, a view we'll cover in Unit III. He is relevant here due to his use of a methodological principle that bears his name: Ockham's razor. Ockham's razor, a.k.a. the principle of parsimony, states that given competing theories or explanations with equal explanatory power (i.e., theories that explain the phenomenon in question equally well), one should select the one with the fewest assumptions. Expressed another way: all else being equal, the simplest explanation is the most likely to be correct. Put a third way: don't assume things that add no explanatory power to your theory. In short, when explaining a phenomenon, don't assume more than you have to.
Ockham's razor is applicable in many different fields. In computer science, although one starts off just trying to get the code to do what it's supposed to do, a good practice to develop is to make your code simpler and more elegant so that it is easier for other developers to read (see Andy Hunt and Dave Thomas' The Pragmatic Programmer). In the social sciences, it is a feature of good theories not to assume more than what is necessary. If you can explain, for example, the Fall of the Western Roman Empire through disease, naturally-occurring climate change, social unrest, and external threats, then it's no good to also add in some fifth factor that doesn't add any explanatory value.
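To make the programming point concrete, here's a tiny sketch of my own (an illustration of the principle, not an example from Hunt and Thomas): two functions with identical behavior, where the razor favors the one with fewer moving parts.

```python
# Ockham's razor, applied to code: two functions with identical behavior.
# Given equal "explanatory" (here: computational) power, prefer the one
# that assumes less machinery.

def mean_convoluted(values):
    """Average of a sequence, with superfluous moving parts."""
    total = 0.0
    count = 0
    for index, value in enumerate(values):  # 'index' is never actually used
        total += float(value)
        count += 1
    if count == 0:
        raise ValueError("cannot average an empty sequence")
    return total / count

def mean_simple(values):
    """Average of a sequence: same behavior, fewer assumptions."""
    if not values:
        raise ValueError("cannot average an empty sequence")
    return sum(values) / len(values)

# Same inputs, same outputs; the razor says to keep the simpler version.
assert mean_convoluted([1, 2, 3]) == mean_simple([1, 2, 3]) == 2.0
```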
One domain where I wish people would use Ockham's razor is the realm of conspiracy theories. Oftentimes, when someone espouses a conspiracy theory, they must assume many more things than the non-conspiracy theorist: secret societies, space-alien races, hitherto unknown chemicals and weapons, etc. Moreover, these extra assumptions have zero explanatory power. If you are attempting to explain some event via, for example, a secret society, then you have effectively explained nothing, because the society is SECRET. That means you don't know their dark plan, their sinister methods, etc. It's the equivalent of attempting to explain something you don't understand by appealing to some other thing you don't understand. Utter nonsense.
The full story of Ockham can only be told in a class on the Middle Ages, but suffice it to say here that Ockham gave commentaries on theological matters much like Aquinas' natural theology. Mind you, Ockham was a believer, but he didn't think the arguments of his day proved God's existence. Ockham was not alone. Duns Scotus, another Franciscan friar who was about two decades Ockham's elder, found difficulties in reconciling God's freedom with any reasoned argumentation or proof of God's existence.
“Duns Scotus (d. 1308) gave open expression to the rejection of reason from questions of faith. God, he held, was so free and his ways so unknowable they could not be assessed by human means. Accordingly there could be no place for analogy or causality in discussing him; he was beyond all calculation. Duns, in the great emphasis he placed on God's freedom, put theology outside the reach of reason” (Leff 1956: 32).
What Scotus is saying is that God is so free and so powerful that he can change reason itself. The laws of logic can be bent by God, if God so chose. So any attempted proof of God, through reasoned argumentation, would fail, since it would assume something that God can destroy at will: the laws of logic.
"For the skeptics [of scholasticism], God, by his absolute power, was so free that nothing was beyond the limits of possibility: he could make black white and true false, if he so chose: mercy, goodness, and justice could mean whatever he willed them to mean. Thus not only did God’s absolute power destroy all [objective] value and certainty in this world, but his own nature disintegrated [in terms of the human capacity for understanding God through rational reflection]; the traditional attributes of goodness, mercy and wisdom, all melted down before the blaze of his omnipotence. He became synonymous with uncertainty, no longer the measure of all things” (Leff 1956: 34; interpolations are mine).
In other words, for Ockham and others, since God is all-powerful, He could change anything at any time, including our methods of reasoning and logic itself. Put another way, human reason and logic are insufficient to understand a being this powerful. And since all arguments are grounded in human reason and logic, all arguments are insufficient to establish the existence of God, let alone an understanding of Him.
Again, Ockham and Scotus were believers. Ockham's view on his faith is called fideism: the view that belief in God is a matter of faith alone, and that any attempt to prove God's existence is futile.
On account of his teachings, Ockham and others were called to Avignon, France, in 1324, to respond to accusations of heresy. After some time in Avignon, during which a theological commission was assessing Ockham's commentaries on theological matters, Ockham and other leading Franciscans fled Avignon, fearing for their lives. He lived the rest of his life in exile.
Ockham, however, plays an important role in the intellectual history of the Middle Ages. Indeed, Ockham serves as a breakpoint. Up to that point, the goal of philosophy in the Middle Ages was to affirm and reinforce the predominant theological views of the time. In other words, philosophy was just for defending existing religious doctrine. But Ockham saw the roles that philosophy and theology played as being distinct. In particular, philosophy could not really function to defend belief in a supernatural deity; it had, instead, its own separate function of exploring and defending philosophical positions (on politics, metaphysics, art, etc.). Historian of science W. C. Dampier writes of the intellectual milestone that is the work of William of Ockham:
“[T]he work of Occam marked the end of the mediaeval dominance of Scholasticism. Thenceforward philosophy was more able to press home its enquiries free from the obligation to reach conclusions foreordained by theology... [T]he ground was prepared for the Renaissance, with humanism, art, practical discovery, and the beginnings of natural science, as its characteristic glories” (Dampier 1961: 94-5).
- On account of Descartes' needing God to rescue us from skepticism, we are seeking an answer to the question of God's existence. Our focus will be on Catholicism.
- During the early part of the Middle Ages, reasoned argumentation was not at its best in the West. It was primarily through prolonged contact with Muslim caliphates that the West rediscovered the Greek style of argumentation.
- Thomas Aquinas' work marks the height of the attempted combination of reason and faith. He essentially argues for God's existence using Aristotelian reasoning, albeit a Christianized version.
- Approaches to theology like that of Aquinas, however, had some critics. We noted the work of William of Ockham.
- Ockham argued that human reason and logic are insufficient to understand God, since God (through his power) can simply change the laws of logic.
- Per historian of science W. C. Dampier, Ockham's work marks the end of Scholasticism's dominance in the West and paved the way for the Renaissance and more.
FYI
Suggested Reading: Gordon Leff, The Fourteenth Century and the Decline of Scholasticism
TL;DR: Sky Scholar, What is Occam's Razor? (Law of Parsimony!)
- Note: The Sky Scholar (Dr. Robitaille) is very cheesy. I love it.
Supplementary Material—
- Reading: Theodore Gracyk, Aquinas and the Five Ways
Advanced Material—
- Reading: Internet Encyclopedia of Philosophy, Entry on Ockham
- Note: Read Sections 1, 2, and 6
- Reading: William Cecil Dampier, Chapter 2 of A History of Science and its Relations with Philosophy and Religion
- Reading: Frank Thilly, Chapters 30-33 of A History of Philosophy
Footnotes
1. Interpreting what "dark age" means requires some clarification. In chapter 6 of his Against the Grain, James C. Scott urges us to reconsider the use of the phrase "dark age." Dark for whom? Is it merely decentralization? Is it merely that we can't study the period as well? Is it a real drop in health and life expectancy for the people? In many cases, it might've been a simple reversion to a simpler, more local way of life, without an imperial ruler unifying disparate peoples. Nonetheless, I will use the phrase here for rhetorical elegance.
The Problem of Evil
Utter trash.
~The Greek intellectual Celsus,
evaluating the Old Testament
An ever-changing dynamic
The relationship between intellectuals and religious faith is a complicated one. Some intellectuals, as will be shown below, had nothing but animosity toward the Christian faith. Others have been tremendously empowered by it and have used it as motivation for their intellectual breakthroughs. Taking the perspective of the atheist, then, requires some subtle maneuvering. Many atheists consider themselves to be on the side of reason (in an imagined competition between faith and reason), but claiming that side, as we'll see, is easier said than done.
Let's begin in the early days of the Christian faith. We'll start in the year 163 CE. The intellectual climate of Antiquity, the label given to the time period comprising the era of Classical Greece and Rome, was impressive to say the least. The most famous physician of the period, Galen (129-210 CE), was performing public vivisections of animals (for scientific purposes) which were well-attended and popular. And although the views held by Galen were not, strictly speaking, compatible with modern science—for example, he never used control groups—his approach was nonetheless more empirical than any previous approach to medicine. In short, it was progress.
It is also the case that the philosophy of atomism was in full swing. This rich tradition, which began in the 5th century BCE, held that the universe is composed of tiny, indivisible atoms, each having only a few intrinsic properties, like size and shape. Atomist philosophers, who built their own comprehensive view of the origins of the universe without teleological natural philosophy, were seen as competitors to the Aristotelian system.
It was even the case that the emperor of the day was a philosopher. This was none other than Marcus Aurelius (121-180 CE), who was deeply committed to Stoicism. Although most Stoic writings are lost to history, from the few texts and commentaries that have survived, we can discern the general view of the Stoics. The basic goal of Stoicism is to live wisely and virtuously. The Greek word used to label this state is arete, a term that is actually best translated not as "virtue" (as is commonly done) but as "excellence of character."
Something excels, according to this tradition, when it performs its function well. Humans excel when they think clearly and reason well about their lives. To achieve this end, the Stoic develops the cardinal virtues of wisdom, justice, courage, and moderation. The culmination of these virtues is arete, and this is the only intrinsic good, i.e., the only thing that is good for its own sake. External factors, like wealth and social status, are only advantages, not goods in themselves. They have no moral status. They, of course, can be used for bad aims, but the wise mind uses them to achieve arete. For a review of Marcus Aurelius' practice, see Robertson (2019).
It appears that early Christians rejected many of the intellectual movements of their time. For example, the atomists, whose belief system contradicted the notion of life after death, were targeted by early Christians. Apparently the atomist doctrine that human lives were just a “haphazard union of elements” conflicted starkly with the Christian notion of a soul (Nixey 2018: 38). Much later, Augustine was concerned that atomism weakened mankind’s terror of divine punishment and hell (ibid., 39). As such, texts that espoused atomist views, like those of Democritus, were actively denounced or neglected and were eventually lost to the West.
The Stoic movement was also negatively impacted by Christianity. As previously noted, Stoic writings, along with most of the literature of the classical world, are lost, and this process was initiated when Christians took control of the Roman empire in the 4th century CE (Nixey 2018). In fact, only about 10% of classical writings are still in existence. If one narrows the scope only to writings in Latin, it is only about 1% that remains (ibid., 176-177; see also Edward Gibbon's History Of The Decline And Fall Of The Roman Empire, chapters 15 and 16). And so, the Stoic logic that is studied today (in courses like PHIL 106) was actually reconstructed (or "rediscovered") in the 20th century, as modern logicians were formalizing their field (O'Toole and Jennings 2004).1
As the anti-intellectualism of Christianity was becoming evident, thinkers in Antiquity were becoming increasingly suspicious and critical. The Christians shirked military service and instead preached meekness (which must've sounded ludicrous to Roman ears). Later on, the public spending on monks and nuns was believed to have weakened the Roman Empire substantially. Some intellectuals felt the need to cry out.
Celsus was one of these thinkers. He openly mocked the virgin birth and the Christian creation myth. He wondered why Jesus' teachings contradicted earlier Jewish teachings (Had God changed his mind?) and why God waited so long between creation and salvation (Did he not care prior to sending Jesus?). He also wondered why God sent Jesus to a "backwater" (i.e., Bethlehem) and why he needed to send Jesus at all (Nixey 2018: 36-7). Celsus was most concerned with how willful ignorance, of which he accused the Christians, makes one vulnerable to believing things that could easily be dismissed by a more well-rounded reader. He pointed out what many of us today know all too well: people who get their information from only one source are at risk of bias, disinformation, and believing things that a little critical thinking could dispel.
“Lack of education, Celsus argued, made listeners vulnerable to dogma. If Christians had read a little more and believed a little less, they might be less likely to think themselves unique. The lightest knowledge of Latin literature, for example, would have brought the interested reader into contact with Ovid’s Metamorphoses. This epic but tongue-in-cheek poem opened with a version of the Creation myth that was so similar to the biblical one that it could hardly fail to make an interested reader question the supposed unique truth of Genesis" (Nixey 2018: 42).2
After Celsus, Porphyry waged an even more thorough attack on Christianity. His attack was so ferocious that his works were completely eradicated, a task begun under Constantine. Celsus' works, for their part, would have been lost if it weren't for the long passages that the Christian Origen quoted in his counterattack.
But then again, almost every major thinker from the early modern period that we've covered so far was a believer; Hume is the sole exception (so far). It's not the case that believing in Christianity necessarily negatively affects your information-processing capacities.
“The mathematicians and scientists of the Renaissance were brought up in a religious world which stressed the universe as the handiwork of God... Copernicus, Brahe, Kepler, Pascal, Galileo, Descartes, Newton, and Leibniz… were in fact orthodox Christians. Indeed the work of the sixteenth, seventeenth, and even some eighteenth-century mathematicians was a religious quest, motivated by religious beliefs, and justified in their minds because their work served this larger purpose. The search for the mathematical laws of nature was an act of devotion. It was the study of the ways and nature of God which would reveal the glory and grandeur of His handiwork” (Kline 1967: 206-7).
The atheist response
We are now going to take a look at the perspective of the atheist. You will soon discover that there are atheist responses to every kind of argument for God's existence. To understand this, however, we must first clarify what the different types of arguments are.3
Once again it is Immanuel Kant who categorizes for us. In the domain of religion, Kant argues that there are only three kinds of argument for God’s existence:
- the ‘cosmological’ category, as in Aquinas’ Argument from Efficient Causes,
- the ‘ontological’ category, as in Anselm’s and Descartes’ ontological arguments,
- and the ‘physico-theological’, also known as arguments from design.
The third category is of interest to us. We've not yet covered an argument of this type. Moreover, Kant, himself a believer, makes the case that this is the type of argument that is best understood by the human intellect. We'll look at one next.
“Kant says of the argument from design [the ‘physico-theological’ type of argument] that it ‘always deserves to be mentioned with respect. It is the oldest, the clearest, and the most consonant with human reason. It enlivens the study of nature, just as it itself derives its existence and gains ever new strength from that source’” (Scruton 2001: 66-7; interpolation is mine).
Argument from intelligent design
As we mentioned last time, Aquinas provided his Five Ways, five proofs of God's existence. The fifth of these five ways is an argument of the physico-theological type, an argument from intelligent design. Rather than using Aquinas' arguments again, though, I will use a more modern argument. We will look at William Paley's watch analogy.
In the early 19th century, there was a revival of natural theology. Natural theologians, recall, assumed God's existence and sought to discover the grandeur of God's handiwork by studying the natural world. This revival was primarily due to William Paley, who advocated natural theology as a method of discovering (and praising) God's work. It was perceived as an act of devotion. In fact, this is why Charles Darwin's father, realizing his son's waning interest in medicine (his first career choice), recommended that Charles take up natural theology instead. (Ironic, isn't it?) See chapter 1 of Wright's (2010) The Moral Animal for a short biography of Darwin which includes this anecdote.
The argument
1. The world displays order, function, and design.
2. Other things (e.g., watches) display order, function, and design.
3. Other things (e.g., watches) that display order, function, and design do so because they were created by an intelligent designer.
4. Therefore, the world displays order, function, and design because it was created by an intelligent creator (and this is God).
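For those who like to see the skeleton of an argument, here is one rough way to formalize it. This is my own reconstruction, not Paley's wording: read O(x) as "x displays order, function, and design" and D(x) as "x was created by an intelligent designer".

```latex
% A rough first-order sketch of the design argument (my reconstruction).
% O(x): x displays order, function, and design
% D(x): x was created by an intelligent designer
\begin{align*}
  &\text{P1: } O(\mathit{world}) \\
  &\text{P2: } O(\mathit{watch}) \\
  &\text{P3: } \forall x \, \big( O(x) \rightarrow D(x) \big)
      && \text{generalized from artifacts like watches} \\
  &\text{C: } \therefore D(\mathit{world})
\end{align*}
```

Notice that the conclusion goes through only if O(x) means the same thing when applied to watches and to the world; keep this in mind when we reach the equivocation objection below.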
Objections
The Regress Problem
If complexity implies a designer, then consider how complex God must be. It seems that God, being so complex, must also have had a designer. In fact, how do we know that God wasn't made by some even more powerful meta-god?
Compatibility with Polytheism
Even if this argument is sound, it does not necessarily prove the existence of a singular God. It’s possible that many gods collaborated to create the universe. In fact, this sort of makes more sense, since most complex enterprises are done by teams and not individuals.
Hume’s Objection
In his Dialogues Concerning Natural Religion, the Scottish philosopher David Hume made a point relevant to this argument (though he died before Paley's work was published). Hume argued that for an analogical argument to work, you have to know both of the things you are comparing. If you are comparing, let's say, life to a box of chocolates, then for the comparison to work, you'd have to know both things fairly well. We are, of course, alive. And most of us have had experience with those boxes of assorted chocolates, where some items are very tasty but some are filled with some kind of gross red goo. The box of chocolates takes you by surprise sometimes, just like life. The analogy works because you know both things.
So here's the problem that this poses for the teleological argument: maybe you've seen a watch get made, but you've never seen a universe get created. You're comparing a thing you know (a watch) to a thing you don't understand fully (the universe). So, Hume would say, the analogy doesn't work.
Argument from Ockham’s Razor (Atheist Edition)
The following is an argument made by George H. Smith in his book Atheism: The Case Against God. It makes use of Ockham's razor to disarm the argument from intelligent design. Smith argues that the only difference between the natural theologian, who uses empirical observation to try to prove God's existence, and the atheistic natural philosopher, who uses empirical observation to learn about the world, is that the former carries an extra belief in his or her worldview: belief in God. But belief in God offers no explanatory power, because positing the supernatural as an explanation for some natural phenomenon explains nothing. Supernatural things are, by definition, beyond natural explanation. Thus, the design hypothesis adds zero explanatory power, and by Ockham's razor, belief in God is superfluous.
Denying premise #1
Another, more general strategy is to deny that premise #1 is true. If successful, this objection would undermine the soundness of the whole argument. The argument might go like this. First off, we can say that the universe does not display purpose. Even though there are some regularities in the universe (like stable galactic formations and solar systems), none of these have any obvious purpose. What is the purpose of the universe? What is it for? These appear to be questions without answers, at least not definitive ones.
Some atheists (e.g., Firestone 2020) go further and attempt to dispel any notion that the universe might be well-ordered in any way. Firestone argues that the so-called regularities we do observe in the universe only appear to be regularities from our perspective. For example, we know that the early universe, soon after the Big Bang, was very chaotic (Stenger 2008: 121). Further, some parts of the universe are still chaotic (there are galaxies that are crashing into each other, black holes swallowing entire solar systems, etc.). We couldn't see much of that during Paley's time, but to continue to argue that the universe is well-ordered and displays function seems to be anachronistic (or out of sync with the times).
Some theists might respond to the objections above by arguing that some of the universe does have a function. Perhaps the function of our part of the universe is to harbor human life. If this is the argument, then there is a glaring problem with it. We must remind ourselves that human life on this planet is only temporary: life on Earth will become impossible somewhere between 0.9 and 1.5 billion years from now, as the sun grows steadily more luminous (see Bostrom and Cirkovic 2011: 34). Eventually, the sun will enter its red giant phase and expand; it might consume the planet, or it might simply heat our planet until complex life is impossible. In either case, harboring human life would not be a permanent function of Earth.
Lastly, even if we agree that there is some kind of order to the universe, it is not the same kind of order seen in a watch. Rather, it is merely the sort of pattern you would find in any complex system; any sufficiently complex system gives rise to perceived regularities. This is usually referred to as Ramseyian order (see Graham & Spencer 1990). This means that Paley is guilty of an informal fallacy: he used the word "order" with two different meanings (see Firestone 2020).
Equivocation is a fallacy in which an arguer uses a word with a particular meaning in one premise, and then uses the same word with a different meaning in another premise. For example, in the following argument, the word "man" is used in two different senses:
1. Only man is rational.
2. No woman is a man.
3. Therefore, no woman is rational.
In premise 1, "man" is (in sexist language) being used to refer to the human species. In premise 2, "man" is being used to refer to the male gender. Not cool, fam. Not cool.
Important Concepts
Atheists on the attack
The traditional approach that atheists take when critiquing belief in the Judeo-Christian God is to show how the very descriptions and traits ascribed to God render the whole notion of God incoherent. There are, of course, other ways to argue against God's existence, but, in this course, we'll focus on the method just described.4
The Problem of Divine Foreknowledge
The problem of divine foreknowledge is one such problem. It arises when one reflects on the apparent incompatibility between God's omniscience and human free will. If God knows everything that there is to be known, then God knows every single choice you will ever make. To God, your entire life is like a book; God already knows how everything is going to turn out. As such, your life seems set: it is already written and cannot change. And if your path is already determined, then you don't really have free will; your fate is sealed.
This problem gets even more troublesome when you realize that, in the Christian worldview, there is an existence after your earthly death—either eternal bliss in heaven or endless punishment in hell. But God already knows where you'll end up. So, it might be that some of us end up in hell, tortured for all eternity, but that we never exercised any free will to deserve being there. In other words, it seems that God punishes those who can't help but to have behaved in the way that they did, and this seems really unfair.
The Omnipotence Paradox
The attribute of omnipotence is itself probably incoherent. For example, can God make a boulder so big that God can't move it afterwards? If God can't, then God is not all-powerful, since there is something God can't make; if God can, then God is not all-powerful, since there would be something God can't move. Here's another one: can God make beings such that, afterward, God can't control them? There are numerous other examples, but two are enough.
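To see that this is a genuine dilemma rather than wordplay, it can help to write it out. The following is a minimal sketch of my own, where Can(g, t) means "God g can perform task t" and t1 is the task of creating a boulder that g cannot move:

```latex
% The omnipotence paradox as a dilemma (my own sketch).
% Can(g, t): God g can perform task t
% t1: creating a boulder that g cannot subsequently move
\begin{align*}
  &\text{Omnipotence: } \forall t \, \mathit{Can}(g, t) \\
  &\text{Case 1: } \neg \mathit{Can}(g, t_1)
      && \text{then } t_1 \text{ is a task } g \text{ cannot perform} \\
  &\text{Case 2: } \mathit{Can}(g, t_1)
      && \text{then a boulder could exist that } g \text{ cannot move} \\
  &\text{Either way: } \exists t \, \neg \mathit{Can}(g, t)
      && \text{contradicting omnipotence}
\end{align*}
```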
The Question of Miracles
Miracles occur when God bends the laws of nature. But why does God need to bend the laws of nature? Didn't God set everything in motion? Why didn't God plan everything correctly from the beginning? Did God make a mistake? Did God have a change of heart? Why would miracles be necessary for a being that is omniscient?
The Problem of Evil
The preceding problems each reflect on one of the attributes ascribed to God and argue that it conflicts with our commonsense intuitions about the world, about humans, or about the other attributes ascribed to God. The Problem of Evil (PoE), however, utilizes all of the attributes ascribed to God in order to mount a powerful argument against God's existence...
Context
To understand this argument, we need to first grant an assumption: there exists unnecessary suffering in the world. To me, there's hardly a better example of unnecessary suffering than the world wars that the Great Powers dragged the rest of the world into during the 20th century. These included the deaths of millions, starvation, exposure to the elements, grotesque injuries, genocide, etc. The horrors of World War I (WWI) will be relevant later in this course, so I will now give some details.
WWI was caused by a series of alliances and arms races among the European powers, as well as by a German nation that felt it could not exercise its imperial power, since the other imperialist powers had already colonized much of the world. Moreover, Germany felt increasingly surrounded by its rivals. And so Germany felt that a war against France and Russia could bolster its status as a regional power and grant it some colonies. All the powers that eventually became involved thought it'd be a short war—the generals believed the soldiers would be "home before the leaves fall." They were wrong.
Seven reasons why WWI was pointless and full of unnecessary suffering:5
1. Military uniforms were horribly outdated. For example, the French high command refused to trade in their traditional, easily-spotted red pants for newer, more camouflaged colors like field gray. One commander even said, "Le pantalon rouge c'est la France." [The red pants are France.] These colored pants simply painted a target on French soldiers; keeping them was a completely senseless policy.
2. Soldiers had insufficiently protective headgear. The Germans, for example, introduced the steel helmet only in early 1916, after almost two years of fighting. Countless deaths could be attributed to this shortsighted policy alone.
3. Even though some thinkers, like Norman Angell in his influential The Great Illusion (first published in 1909), argued that the economic cost of war was so great that it would be irrational to engage in conflict, the Great Powers nonetheless continued their arms race. This made war all but inevitable as well as irrational. In the end, nations were bankrupted, empires fell, and millions were dead.
4. The French had known about the German plan of attack, the Schlieffen Plan, through their intelligence channels. But they simply didn't believe it, and later thought it would work out in their favor. They were wrong. They could've stopped Germany earlier, but they didn't.
5. Highly ranked officers in the Russian army, on whom the French and British relied to attack Germany from the East, willfully ignored modern war tactics and were proud of it. This led to the continued practice of ordering frontline troops to charge machine-gun positions. This might've worked with older weaponry, but not with the mechanized warfare of WWI; soldiers were slaughtered. Completely inexcusable.
6. Although WWI was referred to as the "war to end all wars", it really was classic imperialism in disguise. The Allies, for example, were making private deals with countries that could help them win the war, such as Italy, while making public proclamations about how people have the right of self-determination. The most glaring example of this was the Sykes-Picot agreement. Per this agreement, after the Allies won, the Ottoman Empire was to be broken up and distributed among the allied victors. The same month that this was being negotiated, the British government made a public declaration guaranteeing a national home for Jewish people in Palestine, which was part of the Ottoman Empire. Britain primarily wanted this home for Jewish people in order to extend its sphere of influence. And none of this would've been known if it weren't for the Bolsheviks. Weeks after taking power, the Bolsheviks divulged the details of the Sykes-Picot agreement, showing that the Great Powers were engaging in imperialism, business as usual.
7. WWI was the final straw that broke the Romanov Dynasty in Russia. A few months after the collapse of this regime, the Bolshevik Communists took over, initiating one of the most awful political experiments in the history of civilization. The excess mortality in the Soviet Union during Stalin's reign alone was some 20 million. In other words, Stalin caused roughly as many deaths as WWI itself, and it was in large part WWI that allowed Stalin's party to take power.
The result:
About 20 million people died in WWI.
To be continued...
FYI
Suggested Reading: John Mackie, Evil and Omnipotence
TL;DR: Crash Course, The Problem of Evil
Supplementary Material—
- Video: Bart Ehrman, God and the Problem of Suffering
- Note: Bart Ehrman is a biblical scholar. His personal website is here.
Related Material—
- Reading: Burt Solomon, The Tragic Futility of World War I
- Reading: Brian Frydenborg, The Urgent Lessons of World War I
Advanced Material—
- Reading: Marilyn McCord Adams, Horrendous Evils and the Goodness of God
- Reading: Jeff Speaks, Speaks on Mackie
- Reading: Peter Van Inwagen, The Problem of Evil
- Note: This is a series of lectures for the interested student. Most relevant to the course is Lecture I.
- Reading: Randy Firestone, Paley's version of the Teleological Argument is Based on an Equivocation Fallacy: There is No Order in the Universe Which Resembles the Order of a Watch
Footnotes
1. Gibbon’s Decline and Fall of the Roman Empire was placed in the Vatican’s Index Librorum Prohibitorum, i.e., the "List of Prohibited Books".
2. Of course, an educated Christian would've also been able to tell that there were several other people preaching that they were the messiah, claiming divinity, and living lives of renunciation. Several even claimed that they had to die for the sake of humanity, and had followers who claimed they resurrected after their deaths. One such case was that of Peregrinus, whose biography mirrors that of Jesus of Nazareth to an alarming degree. In any case, educated Christians would not have known about Peregrinus for long: a book by Lucian describing the tale of Peregrinus was also banned by the Church. The Church dealt with pagan messiahs whose lives were similar to that of Jesus so often that there's a term for it: diabolical mimicry.
3. A note on the label "atheist": It is not universally agreed upon what the label "atheist" actually means. Does it mean an active denial of the Judeo-Christian God's existence, or is it merely non-belief? In this class, I will refer to the individual who actively denies the existence of some deity as an atheist and someone who just has no beliefs about deities as a non-theist; someone who believes in God is a theist. As for the agnostics, I won't define that category in a loose "spiritual" sense like some of my students have in the past. In this class, an agnostic is someone who believes knowledge of the supernatural is impossible; a gnostic is someone who believes knowledge of the supernatural is possible. According to this nomenclature, one can actually combine positions: one can be a theist, atheist, or non-theist, as well as either a gnostic or an agnostic. For example, Ockham was a theist agnostic, who believed in God but thought that knowledge of God was impossible. Aquinas was a theist gnostic, who believed in God and thought he could have knowledge of His existence. In this class, we will cover the position of the gnostic atheist: the person who actively denies the existence of the Judeo-Christian God and who believes they know that this supernatural being does not exist.
4. Another strategy used to argue against God's existence is sometimes referred to as "debunking". The general idea here is to tell the history of some religion, for example, Christianity, and then argue that there is nothing divine or supernatural about this belief system. So, there is the deductive approach which seeks to undermine the intelligibility of the notion of God and the historical inductive approach which undermines confidence in religious beliefs by explaining the phenomenon in purely natural terms. We will be using the first approach, i.e., the deductive approach, in this class.
5. For more on 1914, see Barbara Tuchman's The Guns of August. For a history of the imperial powers in the Arabian peninsula, see chapter 13 of Mackintosh-Smith (2019). Mackintosh-Smith has this to say about the Sykes-Picot agreement: “Some commentators have argued that their pact... ‘was a tool of unification, rather than the divisive instrument it is now commonly thought to have been.’ That is sophistry. The agreement did, in fact, accept the principle of eventual Arab independence, but on condition of the two powers having permanent influence. A prisoner is not free just because he is under house arrest instead of in jail” (Mackintosh-Smith 2019: 442-443).