
 UNIT I


Introduction to Course

 

 

The limits of my language mean the limits of my world.

~Ludwig Wittgenstein

 

Beginnings...

Students hardly know what to expect when they begin this course. It's completely understandable. Formal languages are hardly ever taught as formal languages. In other words, up until now, you've probably never been explicitly taught a formal language. But that is precisely what will happen in this course.

What is a formal language?

A formal language differs from a natural language in that natural languages tend to be much more fluid and, in a way, ambiguous. I sometimes marvel at how the phrase "Not bad" can be interpreted in multiple ways. If someone says this to you after you asked them how a movie is, then you can interpret them as saying, "The movie is good." If, however, you are debating how good a movie is, someone might say this phrase emphasizing the second half: "It's not bad." Now they're suggesting that, sure, maybe it's predictable or maybe it's slow in the beginning, but overall it's not (that) bad of a movie. And sometimes(!), if you say "Not bad" with a certain wink in the eye, you don't just mean that it's good; you imply that it's really good. A person who is attempting to learn American English as a second language would be forgiven if they can't quite grasp the meaning of the common expression "Not bad".

Natural languages also have funny syntax. Notice in the paragraph above that I included an exclamation point in parentheses in the middle of a sentence. It was placed there to signal that what follows is surprising. Notice that as you read it, you weren't confused about the overall meaning. You might've been slightly puzzled, but you didn't lose track of what was going on. I also inconsistently placed the period at the end of a quote: sometimes I put the period within the quotes and sometimes outside. But were you confused at any point? Probably not.

Formal languages lack these traits. For every string and operator in a formal language (terms which will be defined later), there is one and only one predetermined meaning. Moreover, if you violate a rule of grammar, as I did with the periods and quotation marks, the expression suddenly becomes incoherent. Formal languages are far stricter than natural languages. Unnaturally strict, in fact. Learning one will be difficult, but you'll gain insight into the way that many disciplines and technologies of the 21st century work; you'll learn the type of language that they use.

Technically, you've already learned a formal language, although your teachers likely didn't emphasize its formal aspects. As you were acquiring your mathematical knowledge, you were learning that certain operators work in certain ways and not in others; for example, the symbol "+" is for adding. You also learned that there are rules of grammar, or "syntax rules", in mathematics. You know intuitively that the expression "4 ÷=", perhaps read as "Four divided to the power of equal sign"(?), makes absolutely no sense.1

In this class, you'll be explicitly learning a formal language. In a nutshell, it is a language that was developed to answer the following question: If I know these things, do I also know this? Of course, you might want a more technical definition of logic. Let me try that now.

 

 

What is logic?

Here are three definitions from well-known textbooks:

  • According to Herrick (2013), logic is the systematic study of the standards of correct reasoning; i.e., it is the study of the principles our reasoning should follow (p. 3).
  • According to Hurley (1985), logic may be defined as the science that evaluates arguments; its aim is to develop a system of methods and principles that we may use as criteria for evaluating the arguments of others and as guides in constructing arguments of our own (p. 1).
  • According to Bergmann, Moor, and Nelson (2013), the hallmark of deductive logic is truth-preservation. Reasoning that is acceptable by the standards of deductive logic is always truth-preserving; that is, it never takes one from truths to a falsehood. Reasoning that is truth-preserving is said to be valid (pp. 1-2).

As if the preceding weren't complicated enough, these writers are focusing primarily on one type of logic: first-order logic. There are actually various other types of logic, but we won't be going into those. We'll only focus on first-order logic, and so I'll drop the "first-order" and just say "logic."

At this point, you might be getting a little nervous. Don't worry. I've paced this class so that it is doable if you put in the work. In other words, it is a challenging class, but if you do all the assignments diligently at the time that they are assigned, you'll be fine. In fact, many students are impressed with themselves at the end of the course. If all goes according to plan, you should be able to read the deduction below.

 

Derivation in SD

 

 

Important Concepts

Take a moment to go through the slides below. Make sure to memorize the important concepts of every lesson. It is on these concepts that the next lesson will build. Then the concepts from tomorrow will be the foundation for the concepts after that. Do not fall behind!

Also, although certain concepts won't make total sense right now, it's important to read the language used here to start to give your mind the tools it needs for understanding these concepts later. In chapter 1 of A Mind for Numbers, Barbara Oakley describes this activity as giving yourself "mental hooks" on which to hang tough concepts later. You'll be introduced to these tough concepts more formally later, when you are doing more focused thinking and learning.

Some comments

Logical consistency will be a cornerstone of everything we will be doing. We will define important concepts in this course, such as the notion of validity, in terms of logical consistency, so it is important that you understand it. Logical consistency only means that it's possible for all the sentences in a set to be true at the same time. That's it. Don't equate consistency with truth. Two statements can be consistent while being false. For example, here are two sentences.

  • "The present King of France is bald."
  • "Baldness is inherited from your mother's side of the family."

"The present King of France is bald" is false. This is because there is no king of France. Nonetheless, these sentences are logically consistent. They can both be true at the same time, it just happens that they're not both true.2

Arguments will also be central to this class. Arguments in this class, however, will not be heated exchanges like the kind that couples have in the middle of an IKEA showroom. For logicians, arguments are just piles of sentences that are meant to support another sentence, i.e., the conclusion. The language of arguments has been co-opted into other disciplines, such as computer science, as we shall soon see.

As you learned in the Important Concepts, there are two major divisions of logic: the formal and informal branches. We will be focusing on formal logic. There are other classes that focus more on the informal aspect of argumentation, such as PHIL 105: Critical Thinking and Discourse. Classes like PHIL 105 focus on the analysis of language and the critical evaluation of arguments in everyday (natural) languages. One aspect of these classes that students tend to enjoy is the study of informal fallacies. Again, we won't be studying these here, but here's one just for fun:

Food for Thought...

 

 

 

On the horizon

What's ahead? I like to take a historical perspective in all my classes. I also like to take a decidedly interdisciplinary approach. I consider this the best way to understand how ideas progress: how some ideas are updated and improved upon and how others are discarded. In fact, students who have taken a course with me before would be forgiven for thinking that I like history or cognitive science more than philosophy.3

Nonetheless, we will take every opportunity to look at how logic relates to other disciplines, in particular mathematics, computer science, and artificial intelligence. We will also begin at the very origins of logic and tell its full story, sort of... The origins of logic are actually a complicated issue. One type of logic originated in India in the 6th century BCE. This logic had discernible syntax rules and was used to answer questions about the physical universe and the nature of knowledge. In China also, beginning around the 5th century BCE, a system of logic was devised. However, we won't be focusing on these.4

In the 5th and 4th centuries BCE, in Greece, something truly special was happening. This is one of the reasons why we'll be focusing on the logic that originated in Greece, although there are others (see footnote 4). Here a new way of thinking was being developed. A student was allowed to disagree with his teacher, to criticize the teacher's argument, and to develop a theory of his own. This was a middle way between dogmatic obedience and crude deprecation. It was this way of thinking, when institutionalized centuries later, that gave science the property of self-correction. Although a full analysis of what was going on during the "Greek miracle" is far beyond this course, I can tell you that this period was undoubtedly influential on the rest of human history (see Rovelli 2017, chapter 1; see also Lloyd 1999), as you'll come to see.

 

The Parthenon, Athens (Greece)

 

Footnotes

1. I might add that in France, due to the influence of one of the greatest mathematicians of the 20th century, David Hilbert, some educational psychologists pushed for teaching mathematics from formal rules first, as opposed to the more intuitive, practical way of learning about numbers by counting objects, adding piles together, subtracting some from the pile, dividing up piles, etc. Thankfully, this episode is over (see Dehaene 1999: 140, 241-42).

2. Contrary to popular belief, apparently baldness is not all your mother's fault. At the very least, smoking and drinking have an effect.

3. Philosophers have criticized me for not being enough of a philosopher. What I really attempt to do is use the tools and findings of Cognitive Science (and other relevant disciplines) to try to solve philosophical problems. In a sense, I believe most philosophical problems are actually empirical problems that haven't been framed properly.

4. There are many reasons for focusing on the Western tradition, and so I'd like to preempt any accusations that the college is being Eurocentric in requiring this particular class. It is true, of course, that Europeans and the descendants of Europeans had come to dominate much of the world by the middle of the 20th century, whether culturally, economically, politically, or militarily (as in the numerous American occupations after World War II). It's also true that European standards (of reasoning, of measuring, of monetary value, etc.) have become predominant in the world as part of a general (if accidental) imperial project (see chapter 21 of Immerwahr's 2019 How to Hide an Empire). It's also the case that public education was being formulated and put into practice during this period of European dominance, and so there is a tradition in universities of following this model and emphasizing the European role. In fact, some scholars (e.g., Loewen 2007, Aoki 2017) have hypothesized that the function of education is at least partially to indoctrinate students into the dominant culture, typically one endorsed by the elites. All of this is both true and irrelevant for why we are focusing on the Western tradition in this course. In Shenefelt and White's (2013) If A then B, the authors juxtapose Aristotle's syllogistic with Indian and Chinese logic. Indian logic was apparently preoccupied with rhetorical force more than anything: it featured repetition and multiple examples of the same principle, so that its primary function was persuasion. Chinese logic apparently did study some fallacious reasoning, but (strangely) never its counterpart (i.e., validity). It was only Aristotle's logic that focused on logical force, the study of validity itself; the Greeks wanted to know why an argument's premises force the conclusion on you. In short, only Aristotle studied logical force unadulterated by rhetorical considerations, and this led to major historical events centuries (and millennia) later. The interested student should check out Shenefelt and White (2013), in particular chapters 1, 2, and 9.

 

Validity

 

 

Knowledge of the fact differs from knowledge of the reason for the fact.

~Aristotle

Origins...

Aristotle is the most famous student of Plato. He was prolific, writing on all the major topics of inquiry of his day. Like his teacher, he also founded a school: the Lyceum. He was the tutor of Alexander the Great, son of Philip II. He was also the founder of the first school of logic, a discipline he created and which is still studied when learning philosophy, computer science, and mathematics. You'll soon come to see why.1

"Aristotle was born in 384 BC, in the Greek colony and seaport of Stagirus, Macedonia. His father, who was court physician to the King of Macedonia [Amyntas III, father of Philip II], died when Aristotle was young, and the future founder of logic went to live with an uncle... When Aristotle was 17, he was sent to Athens to study at Plato’s Academy, the first university in Western history. Here, under the personal guidance of the great philosopher Plato (427-347 BC), the young Aristotle embarked on studies in every organized field of thought at the time, including mathematics, physics, cosmology, history, ethics, political theory, and musical theory... But Aristotle’s favorite subject in college was the field he eventually chose as his area of concentration: the subject the Greeks had only recently named philosophy… [And] Aristotle would write the very first history of philosophy, tracing the discipline back to its origins in another Greek colony, the bustling harbor town of Miletus... Thales of Miletus (625-546 BC) first rejected the customary Greek myths of Homer and Hesiod, with their exciting stories of gods, monsters, and heroes proposed as explanations of how the world and all that is in it came to be, and offered instead a radically new type of explanation of the world” (Herrick 2013: 8-9; interpolation is mine).

As you can see from the quote above (and as we mentioned last time), Aristotle was around during a time in which a new way of thinking was being devised. No, I'm not just talking about how this "super cool" new discipline of Philosophy was coming about. This so-called "Greek Miracle" is truly interdisciplinary and a historical landmark in intellectual history. Surprisingly, though, scholars have a hard time characterizing exactly what made this period so special. I'll give you two theories on this topic.

As we saw last time, some scholars stress the central role that debating took. Physicist Carlo Rovelli, when giving his history of quantum gravity, starts back in Ancient Greece, with the ideas of Democritus who, under the tutelage of Leucippus, developed atomic theory: things are composed of smaller objects which are indivisible.2 Rovelli stresses that students were allowed to debate their teachers, and that ideas had to be defended with more rigor. Historian of mathematics Luke Heaton agrees with Rovelli. Heaton notes the differences in the approaches taken by different cultures in the field of mathematics. For example, the ancient Egyptians performed calculations and inferences exactly in the ways of their ancestors. In fact, they literally believed this to be a matter of life and death. The whole cosmic harmony of their god Ma’at "could turn to chaos and violence if the ruler or his people did not adhere to their traditions and rituals." In other words, the ancient Egyptians did things the way they had always been done, and could not even imagine attempting new methods of mathematics. The Greeks, in contrast, debated mathematical truths. They emphasized the rigorous articulation of logical principles in argumentation (see Heaton 2017: 34-5; see p. 34 for quote).

Gate-level circuit
A gate-level circuit
(see footnote 1 for more info).

A second theory comes from the classicist Jean-Pierre Vernant. Vernant argues that, beginning in the sixth century BCE, there was a new "positivist" type of reflection concerning nature. This is to say that thinkers were no longer satisfied with divine explanations. One example that Vernant gives is that thinkers were no longer satisfied with explaining cosmogenesis through sexual unions (Vernant 2006: 219). And so they emphasized theories that were more testable and arguable. They sought ways of explaining the world that were amenable to examination. According to Vernant, the special aspect of Greek thought isn't just that they could argue; it's that they would limit their argumentation to domains where argumentation might lead to a resolution. In other words, the Greeks moved away from superstition and supernatural explanations (where one explanation is just as good as another) towards explaining reality in physical terms (which is a step towards modern science; see Vernant 2006: 371-80).

Whatever the right way of characterizing the Greek miracle might be, it's clear that there was something special going on. And Aristotle's ideas are a part of this tradition. Sure, Aristotle had some ideas that were very, very wrong. In fact, some of his ideas actually impeded progress, as in physics. And we can talk about those if you want. But paired with that must be the acknowledgement that some of his ideas are nothing short of intellectual breakthroughs, breakthroughs that were expanded and improved upon by others, breakthroughs that shaped the modern world. For example, Aristotle's works on botany and zoology taught us how to systematize, an important step in scientific classification. Most important, I think, and the subject of this course, is his development of logic. It appears that, although logical principles were routinely used in argumentation, no one had sat down to codify them. Aristotle did.

“[W]e can say flatly that the history of [Western] logic begins with the Greek Philosopher Aristotle... Although it is almost a platitude among historians that great intellectual advances are never the work of only one person (in founding the science of geometry Euclid made use of the results of Eudoxus and others; in the case of mechanics Newton stood upon the shoulders of Descartes, Galileo, and Kepler; and so on), Aristotle, according to all available evidence, created the science of logic absolutely ex nihilo [out of nothing]” (Mates 1972: 206; interpolations are mine).

 

 

 

Important Concepts

Distinguishing Deduction and Induction

As you saw in the Important Concepts, I distinguish deduction and induction thus: deduction purports to establish the certainty of the conclusion, while induction purports to establish only that the conclusion is probable.3 So basically, deduction gives you certainty; induction gives you probabilistic conclusions. If you perform an internet search, however, this is not always what you'll find. Some websites define deduction as going from general statements to particular ones, and induction as going from particular statements to general ones. I understand this way of framing the two, but this distinction isn't foolproof. For example, you can write an inductive argument that goes from general principles to particular ones, like only deduction is supposed to do:

  1. Generally speaking, criminals return to the scene of the crime.
  2. Generally speaking, fingerprints have only one likely match.
  3. Thus, since Sam was seen at the scene of the crime and his prints matched, he is likely the culprit.

I know that I really emphasized the general aspect of the premises, and I also know that those statements are debatable. But what isn't debatable is that the conclusion is not certain. It only has a high degree of probability of being true. As such, using my distinction, it is an inductive argument. But clearly we arrived at this conclusion (a particular statement about one guy) from general statements (about the general tendencies of criminals and the general accuracy of fingerprint investigations). All this to say that for this course, we'll be exclusively using the distinction established in the Important Concepts: deduction gives you certainty, induction gives you probability.

In reality, this distinction between deduction and induction is fuzzier than you might think. In fact, recently (historically speaking), Axelrod (1997: 3-4) has argued that agent-based modeling, a newfangled computational approach to solving problems in the social and biological sciences, is a third form of reasoning, neither inductive nor deductive. As you can tell, this story gets complicated, but it's a discussion that belongs in a course on Argument Theory.

Food for Thought...

Alas...

In this course we will only focus on deductive logic. Inductive logic is a whole course unto itself. In fact, it's more like a whole set of courses. I might add that inductive reasoning might be important to learn if you are pursuing a career in computer science. This is because there is a clear analogy between statistics (a form of inductive reasoning) and machine learning (see Dangeti 2017). Nonetheless, this will be one of the few times we discuss induction. What will be important to know for our purposes is only the basic distinction between the two forms of reasoning.

 

 

Assessing Arguments

Validity and soundness are the jargon of deduction. Induction has its own language of assessment, which we will only briefly cover next time. These concepts will be with us through the end of the course, so let's make sure we understand them. When first learning the concepts of validity and soundness, students often fail to recognize that validity is a concept that is independent of truth. Validity merely means that if the premises are true, the conclusion must be true. So once you've decided that an argument is valid, a necessary first step in the assessment of arguments, you proceed to assess each individual premise for truth. If all the premises are true, then we can further brand the argument as sound.4 If an argument has achieved this status, then a rational person would accept the conclusion.

Let's take a look at some examples. Here's an argument:

  1. Every painting ever made is in The Library of Babel.
  2. “La Persistencia de la Memoria” is a painting by Salvador Dalí.
  3. Therefore, “La Persistencia de la Memoria” is in The Library of Babel.

At first glance, some people immediately sense something wrong about this argument, but it is important to specify what is amiss. Let's first assess for validity. If the premises are true, does the conclusion have to be true? Think about it. The answer is yes. If every painting ever is in this library and "La Persistencia de la Memoria" is a painting, then this painting should be housed in this library. So the argument is valid.

But validity is cheap. Anyone who can arrange sentences in the right way can engineer a valid argument. Soundness is what counts. Now that we've assessed the argument as valid, let's assess it for soundness. Are the premises actually true? The answer is no. The second premise is true (see the image below). However, there is no such thing as the Library of Babel; it is a fiction invented by a poet. So, the argument is not sound. You are not rationally required to accept its conclusion.

Here's one more:

  1. All lawyers are liars.
  2. Jim is a lawyer.
  3. Therefore Jim is a liar.

You try it!5

La Persistencia de la Memoria, by Salvador Dalí

 

 

Decoding Validity

Closing Comments

As you might've noticed above, people need some time to calibrate their concept of logical consistency. It just means the statements can be true at the same time. Don't let blatantly false statements, especially politically charged ones, trip you up. If you do let yourself be tricked in this way, you can expect to have some problems on my test. You've been warned.

Also, be sure to distinguish between validity and soundness, a topic also covered in the video above. Validity is about the logical relationship between premises and conclusion, namely that the premises fully support the conclusion. Soundness is validity plus the premises actually being true. Finally, both validity and soundness are properties of arguments. Don't forget! Again, you've been warned.

Lastly, you might've noticed pictures of circles in this lesson. There'll be more to come. Why? You'll see...

 

 

FYI

Homework!

  • Validity and Soundness Handout
    • In this assignment, you will use the Imagination Method to assess arguments for validity; that is to say, you will simply think about the argument and attempt to figure out if it is valid or not.
  • The Logic Book (6e), Chapter 1 Glossary

     

Footnotes

1. A basic building block of computers is a logic gate, which is functionally the same as some of the truth-tables we'll be learning in Unit II. Through a combination of logic gates you can build gate-level circuits, and out of gate-level circuits you build modules, such as the arithmetic logic unit (a unit in a computer which carries out arithmetic and logical operations). Eventually, all the elements of a module were placed onto a single chip, called an integrated circuit, or an IC for short (see Ceruzzi 2012: 86-90). In the image you can see a gate-level circuit.
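To make this concrete, here is a minimal sketch in Python (my own illustration, not from Ceruzzi) of gates as truth-functions and a tiny gate-level circuit, a half-adder, built from them:

```python
# Logic gates as Boolean functions (truth-tables in disguise).
def AND(a, b): return a and b
def OR(a, b): return a or b
def NOT(a): return not a

def XOR(a, b):
    # Exclusive-or, wired up from the three primitive gates above.
    return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    # A tiny gate-level circuit: adds two one-bit numbers,
    # returning (sum bit, carry bit).
    return XOR(a, b), AND(a, b)

for a in (False, True):
    for b in (False, True):
        s, c = half_adder(a, b)
        print(f"{int(a)} + {int(b)} -> sum {int(s)}, carry {int(c)}")
```

An arithmetic logic unit is, at bottom, a very large composition of functions like these.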

2. Not to get into theoretical physics here, but some students make the erroneous inference that because we split the atom, atomic theory is false. Not at all. It means, among other things, that physicists used the label "atom" too soon; they misapplied it by using it on objects that actually are divisible. But more importantly, quantum theory leads to the conclusion that there really is a lower limit to the size of things in the world, and they can't get any smaller than that lower limit. In a sense, Democritus was right. Rovelli (2017) refers to this granularity as one of the lessons learned from quantum theory.

3. It is important to note that different thinkers distinguish between deduction and induction in different ways (see Wilbanks 2010). Depending on how you distinguish these two, abduction is either a third form of reasoning or a type of induction. I'm agnostic on this matter. However, one of the main books I'm using in building this course is Herrick (2013), and it is his way of characterizing deduction/induction that we will be using. On the account that we're using, there's only deduction and induction, with abduction being a form of induction (see chapter 33, titled "Varieties of Inductive Reasoning", of Herrick 2013 to learn about his conception of abduction, also known as "inference to the best explanation").

4. Another common mistake that students make is that they think arguments can only have two premises. That's usually just a simplification that we perform in introductory courses. Arguments can have as many premises as the arguer needs.

5. This argument is valid but not sound, since there are some lawyers who are non-liars (although not many).

     

 

Categorical Logic 1.0

It requires a very unusual mind to undertake the analysis of the obvious.

~Alfred North Whitehead

Recap

As you know by now, validity is an important concept in this course. Equally important, insofar as they help us wrap our minds around validity, are the notions of logical truth and logical falsity. Recall that a sentence is logically true if and only if it is not possible for the sentence to be false. Here are two examples of logically true sentences:

  • "Either Yuri is a tricycle or Yuri is not a tricycle."
  • "It’s false that I am a banana and I am not a banana."

In the first sentence, it is obvious that either it is the case that 'Yuri' refers to a tricycle or it's not. That's just it. There are no other options. Since or-sentences, formally called disjunctions, are true as long as at least one of the disjuncts is true, this sentence is true no matter what. In the second sentence, we have the denial of a contradiction. If I were to tell you that "I am a banana and I am not a banana", you'd know two things: 1. I'm probably losing my mind; and 2. it is not possible to both be a banana and not be a banana. This is a contradiction. It's literally not possible for this sentence to be true. In other words, it is a logically false sentence. But notice that the example bulleted above is the negation of this contradiction. This means that, in effect, the sentence is saying that this contradiction is false, which is true. It will always be true that a contradiction is false. This is why we label the full sentence, "It’s false that I am a banana and I am not a banana", logically true.
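If you like seeing this mechanically, here is a minimal sketch in Python (the encoding is mine, not the textbook's) that checks both bulleted sentences against every possible truth-value assignment:

```python
# One atomic claim per sentence:
# t = "Yuri is a tricycle", b = "I am a banana"

def excluded_middle(t):
    # "Either Yuri is a tricycle or Yuri is not a tricycle."
    return t or (not t)

def negated_contradiction(b):
    # "It's false that (I am a banana and I am not a banana)."
    return not (b and (not b))

print(all(excluded_middle(t) for t in (True, False)))        # True: logically true
print(all(negated_contradiction(b) for b in (True, False)))  # True: logically true
print(any(b and (not b) for b in (True, False)))             # False: the bare
                                                             # contradiction is
                                                             # logically false
```

Both sentences come out true on every assignment, which is exactly what it means for them to be logically true; the bare contradiction comes out true on none.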

The world, of course, is not this simple. If only all sentences were either always true or always false! In fact, most sentences are logically indeterminate: their truth or falsity does not hinge on the logical words, like "not" and "or", that they contain. In other words, logically indeterminate sentences are neither logically true nor logically false.

The last concept we'll recap here is that of logical entailment, for which we will use the symbol "⊨". This symbol, which is read as 'double turnstile', simply asserts that given a set Γ of sentences, Γ logically entails a sentence if and only if it is impossible for all the members of Γ to be true and that sentence false. I know this sometimes looks scary to students. Let me try that again.

Γ is simply the Greek letter gamma. It stands for a set of sentences like {“Ann likes swimming”, “Bob likes pudding”, “Carlos hates Dan”}.1 This set (Γ) entails the following sentence: “Carlos hates Dan”. Why? Clearly because it is a member of the set. It's right in there. We will learn about more complex cases of entailment, but this is the basic idea. A set of sentences logically entails a sentence if and only if it is impossible for all the members of the set to be true and that sentence false. Written out, it looks like this:

{“Ann likes swimming”, “Bob likes pudding”, “Carlos hates Dan”} ⊨ “Carlos hates Dan”
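For the programming-inclined, here is a toy sketch in Python (my own encoding) of the membership case we just saw. Note the hedge in the comments: membership is a sufficient condition for entailment, not a necessary one.

```python
# Gamma as a set of (English) sentences.
gamma = {"Ann likes swimming", "Bob likes pudding", "Carlos hates Dan"}

def entails_by_membership(premises, conclusion):
    # If the conclusion is itself a member of the set, the set cannot
    # have all-true members while the conclusion is false.
    # (Sufficient for entailment, but not necessary; entailment in
    # general requires semantics, which we develop later in the course.)
    return conclusion in premises

print(entails_by_membership(gamma, "Carlos hates Dan"))  # True
print(entails_by_membership(gamma, "Dan hates Carlos"))  # False: not a member,
                                                         # so this test says nothing
```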

 

 

Special cases of logical concepts

The Jargon of Induction

Before delving into deduction for good, we should briefly look at the language used to assess inductive arguments. An inductive argument is said to be strong when, if the premises are true, then the conclusion is very likely true. An inductive argument is cogent when a. it is strong, and b. it has true premises.

During the middle of the 19th century, there were great advancements in formal deductive logic (which will be covered in this course), as well as a revival in inductive logic spearheaded by the British philosopher John Stuart Mill (pictured below), who is mostly known for his utilitarian system of ethics. All this is to say that the study of inductive reasoning is an extremely worthwhile task. As previously mentioned, inductive logic will unfortunately not be covered in this course, but Mill’s collected works, including his A System of Logic, Ratiocinative and Inductive, are available online (see also Hurley 1985, chapter 9).

 

John Stuart Mill

 

 

Storytime!

Storytime

Previously, I had stated that Aristotle devised the science of logic ex nihilo. This is true, since no one in the Western tradition had explicitly worked out the study of logical concepts the way Aristotle did. This is not to say, however, that thinkers before this time weren't making use of logical concepts in their argumentation. All of us naturally use logical language. That is, we tend to make use of logical words such as "and", "or", "if...then", etc. Developmental and comparative psychologist Michael Tomasello even has a theory as to when sapiens developed the capacity to use logical language, a step which required new psychological capacities (see Tomasello 2014: 107).

Tomasello's A Natural History of Human Thinking

We all even naturally make use of the important concept of logical consistency. Simply think about a time when someone was blatantly contradicting themselves. It bothered you not merely because they were probably lying, but because inconsistencies (typically) trigger some mental discomfort, alerting us that something is wrong.

So it shouldn't surprise us that philosophers before Aristotle were making use of logical concepts even without the formal study of logic having been initiated. I could list basically any philosopher here, but, in an introductory course, there is no better example than Socrates.

Statue of Xenophon
Statue of Xenophon.

Most of what we know about Socrates comes from his two most famous students: Plato and Xenophon. From what we can piece together, Socrates clearly saw the discipline of philosophy as a way of life. He taught, contrary to the traditions of his time, that virtue wasn't associated with noble birth; rather, it was something that could be learned, a form of moral wisdom. His conversations with others, at least according to Plato, would help people search for the right answers to life's most fundamental question: how should I live? Repeatedly, Socrates stressed the need to develop moral strength, the requirement that we use our powers of reason to reflect on our lives and the lives of those around us, and, ultimately, that we devote ourselves to wisdom, justice, courage, and moderation.

In many cases, Socrates first had to get the people he conversed with to realize a deficiency in their view. Only then could he help them move towards sounder positions. In one of Plato's dialogues, Euthyphro, Socrates discusses the nature of piety with Euthyphro, who is mired in inconsistencies. This dialogue is sometimes painful to read since Euthyphro is clearly reeling. Socrates, as a character in Plato's dialogue, is making use of logical techniques for showing inconsistencies. All this, remember, was before the science of logic had been explicitly begun. This is a testament to the intellectual profundity of Socrates, a trait he was able to impart to his students, Plato and Xenophon.

 

Socrates

 

Important Concepts

Clarifications

As you learned in the Important Concepts above, Aristotle focused on the logical form of sentences and realized that there are only four types of categorical sentences. I'll reproduce them here:

The Four Sentences of Categorical Logic2

The universal affirmative (A): All S are P.
The universal negative (E): No S are P.
The particular affirmative (I): Some S are P.
The particular negative (O): Some S are not P.

Let's make some clarifications. Aristotle believed that all declarative sentences can be put into this form. Of course, there are other types of sentences with other linguistic functions, e.g., exclamations. But only declarative sentences can be true or false, and so those are the ones we are concerned with. Remember, logic is concerned with truth-preservation. Exclamations are neither true nor false. It would be a category mistake to label a sentence like "Golf sucks!" or "SUSHI!!!" or "¡Vamos Pumas!" as either true or false. They are better understood as expressions of emotion on the part of whoever uttered them. So, declarative sentences are what Aristotle had in mind when he was thinking of logical form.

Also, you must've noticed in the Important Concepts a clarification about the word "some". This is important to learn now, so you don't make mistakes later on. "Some" means "at least one" to logicians. Students often tell me that "some", to them, means "more than one". When you're assessing arguments for validity, though, you must use the logician's sense of "some".

Here's a related issue. What do you intuit if someone says, "There are some tamales that are spicy"? You might interpret this as saying both "There are some tamales that are spicy" and "There are some tamales that are not spicy". This might especially be the case if you don't like spicy food. You hear the first sentence as implying the second one. However, in this class, we will use a logically rigorous definition of implication, which we won't really dive into until Units III and IV. The long and short of it is that if you see a sentence like "There are some tamales that are spicy", this does not imply the sentence "There are some tamales that are not spicy". The only time you can assume that there are tamales that are not spicy is if you are given the sentence "There are some tamales that are not spicy".

 

 

Standard Form in Categorical Logic

With the clarifications out of the way, let's practice putting sentences into their proper logical form. We will refer to this as the standard form for sentences in categorical logic. The rules are simple (a small sketch after the list shows one way to represent them). Make sure that the premises/conclusion are in the following order:

  1. The quantifier (which is either "all", "no", or "some")
  2. Subject class (led by a noun)
  3. Copula (i.e., the word "are")
  4. Predicate class (also led by a noun)
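For those who like code, here is a minimal sketch in Python (the class and names are my own invention, not part of the course materials) of a standard-form categorical statement as a data structure, together with its A/E/I/O letter:

```python
from dataclasses import dataclass

@dataclass
class CategoricalStatement:
    quantifier: str        # "all", "no", or "some"
    subject: str           # subject class, led by a noun
    predicate: str         # predicate class, also led by a noun
    negated: bool = False  # True only for "Some S are not P" (the O form)

    def letter(self):
        # The traditional labels, from the Latin affirmo and nego.
        if self.quantifier == "all":
            return "A"
        if self.quantifier == "no":
            return "E"
        return "O" if self.negated else "I"

    def standard_form(self):
        copula = "are not" if self.negated else "are"
        return f"{self.quantifier.capitalize()} {self.subject} {copula} {self.predicate}."

s = CategoricalStatement("some", "tamales", "spicy foods")
print(s.letter(), "-", s.standard_form())  # I - Some tamales are spicy foods.
```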

Here are various examples. After a few of them, try to rewrite them yourself before you see how I put them into standard form. It's ok if our subject and predicate classes aren't the same, since we are likely to use different nouns to start off our categories.

 

 

Three Types of Categorical Reasoning

Although Aristotle's categorical reasoning has now been formalized into a robust mathematical framework (see Marquis 2020), he began with just three types of categorical reasoning. First, an immediate inference is an argument composed of exactly one premise and one conclusion immediately drawn from it. For example:

  1. All Spartans are brave persons.
  2. So, some brave persons are Spartans.

A mediate inference is an argument composed of exactly two premises and exactly one conclusion in which the reasoning from the first premise to the conclusion is “mediated” by passing through a second premise. This type of argument is also called a categorical syllogism. For example:

  1. All goats are mammals.
  2. All mammals are animals.
  3. Therefore, all goats are animals.

Lastly, a sorites is a chain of interlocking mediate inferences leading to one conclusion in the end. For example:

  1. All Spartans are warriors.
  2. All warriors are brave persons.
  3. All brave persons are strong persons.
  4. So, all Spartans are strong persons.

 

 

Looking ahead

The whole point of deductive logic at this early stage was to see which conclusions necessarily followed from their premises. In other words, it was simply the study of validity. Hence, Aristotle devised different ways for assessing the three types of categorical reasoning for validity. These will occupy us for the rest of this unit. We will begin with immediate inferences, or one-premise categorical arguments. After that, we will move on to mediate inferences, which are the main challenge of this unit. In fact, mediate inferences will be the reason why you might dream of circles in the nights to come. After that, we will take a brief look at assessing sorites for validity.

But before we can run, we have to crawl. We'll begin with immediate inferences. Aristotle developed two methods for assessing these one-premise arguments: 1. the Square of Opposition and 2. the laws of conversion, obversion, and contraposition. We will focus on the Square of Opposition. This will require that we learn how to use a "paper computer". Stay tuned.

 

One of Ramon Llull's diagrams

 

FYI

Homework!

Advanced Material—

 

Footnotes

1. Note that we use the "{" "}" symbols to capture the content of a set of sentences.

2. There is a reason for why the four sentences of categorical logic are labeled "A", "E", "I", and "O". The universal and particular affirmative are labeled "A" and "I" since these are the first two vowels in the Latin word affirmo, which means "I affirm". The universal and particular negative are "E" and "O" for the vowels in the Latin word nego, which means "I deny".

 

 

Homework for Unit 1, Lesson 3: Categorical Logic 1.0

Note: Click here for a PDF of this assignment.

Assessing Categorical Arguments for Validity: Intuitive Method Edition

Indicate whether the following arguments are valid or invalid. Use the Imagination Method.

  1. All philosophers are lovers of truth. No lovers of truth are closed-minded people. Thus, no philosophers are closed-minded people.

  2. No soldiers are rich. No rich persons are poets. Hence, no soldiers are poets.

  3. No scientists are poets. Some scientists are logicians. Therefore, some logicians are not poets.

  4. Some actors are sculptors. Some poets are not actors. So, some poets are not sculptors.

Complete the following arguments in such a way that each is valid:

  1. All cats are _______.

  2. Some _______ are pets.

  3. So, some _______ are _______.

 

  1. All _______ are _______.

  2. No mammals are _______.

  3. So, no _______ are _______.

Lastly, memorize the following terms, as well as the Square of Opposition diagram below:

Two statements are contradictories if and only if they cannot both be true simultaneously and they also cannot both be false simultaneously.

Two statements are contraries if and only if they cannot both be true simultaneously, but it is logically possible both are false.

Two statements are subcontraries if and only if they cannot both be false simultaneously but it is logically possible that both are true.

One statement P is a subaltern of another statement Q if and only if P must be true if Q is true.

One statement P is a superaltern of another statement Q if and only if P must be false if Q is false.


The Square of Opposition

 

One of Ramon Llull's diagrams

 

No, no, you're not thinking; you're just being logical.

~Niels Bohr

Level I: Immediate Inferences

So far we've only assessed arguments using the Imagination Method, basically thinking through the premises and intuiting whether the conclusion necessarily follows. Such informal methods are helpful with simple arguments, but their usefulness vanishes in more complicated cases. More formal, rigorous methods had to be devised. We'll begin, as promised, with short one-premise arguments.1

For reference, here are the steps for using the square of opposition. (A short sketch after the steps shows one way these relations can be encoded.)

  1. Write out the argument.
  2. Assess which type of sentence of categorical logic the statement is and write the letter beside the premise.
  3. Write the truth-value (either T or F) next to the sentence letter of both the premise and the conclusion. (Note: If the sentence is just asserted, that means it’s true.)
  4. Use the different logical relations in the square of opposition—such as superalternation, subalternation, etc.—to discover what the truth-value of the premise implies. In other words, use the square of opposition to figure out the truth-values of the rest of the sentence-types given the premise. But remember(!), use ONLY the premise.
  5. Check that the truth-value of the conclusion matches the truth-value for the relevant sentence-type on your square of opposition. If they do match, the argument is valid; if not, it is invalid.
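Here is the promised sketch in Python, a little "paper computer" for step 4 (the encoding is my own, invented for illustration). Given one sentence-type with a known truth-value, it derives what the traditional square settles about the other three:

```python
def infer(letter, value):
    """Apply the traditional square of opposition to one known
    truth-value. Letters missing from the result are undetermined."""
    known = {letter: value}
    contradictory = {"A": "O", "O": "A", "E": "I", "I": "E"}
    known[contradictory[letter]] = not value  # contradictories always flip
    if letter == "A" and value:      # true A: contrary E false, subaltern I true
        known.update({"E": False, "I": True})
    if letter == "E" and value:      # true E: contrary A false, subaltern O true
        known.update({"A": False, "O": True})
    if letter == "I" and not value:  # false I: superaltern A false, subcontrary O true
        known.update({"A": False, "O": True})
    if letter == "O" and not value:  # false O: superaltern E false, subcontrary I true
        known.update({"E": False, "I": True})
    return known

# "It is false that some cookies are turnips. So, no cookies are turnips."
# Premise: I is false. Conclusion: E is true.
print(infer("I", False).get("E"))  # True -> valid
```

If the conclusion's letter comes back with the matching truth-value, the argument is valid; if it comes back with the opposite value, or doesn't come back at all (undetermined), the argument is invalid.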

 

 

FYI

Assessing Categorical Arguments for Validity: Immediate Inferences Edition

Directions: Use the Traditional Square of Opposition to check if the following one-premise arguments are valid or invalid.

Note: Some of these statements may first have to be put into standard form.

  1. Some cookie-eaters are cookie monsters. So it is false that no cookie-eaters are cookie monsters.
  2. It is false that some cookies are turnips. So, no cookies are turnips.
  3. It is false that some cookies are not delicious. So, it is false that no cookies are delicious.
  4. It is false that all cookies are metallic. So, it is true that some cookies are metallic.
  5. Some cookies are not splurps. So, no cookies are splurps.
  6. Some cookie monsters are cookie lovers. So, all cookie monsters are cookie lovers.
  7. It is false that no dogs are cookie eaters; hence, it is false that all dogs are cookie eaters.

 

Square of Opposition

 

Square of Opposition

 

Tablet Time!

 

Square of Opposition

 

 

Storytime!

 

A caricature of Kant
A caricature of
Immanuel Kant.

In the late 18th century, the German philosopher Immanuel Kant wrote that logic “has not advanced a single step, and is to all appearances a closed and completed body of doctrine... There are but few sciences that can come into a permanent state, which admits of no further alteration. To these belong logic and metaphysics” (Kant as quoted in Haack 1996: 27). We can say definitively that Kant was wrong. He was about as wrong as one could be on this topic. Not only were there improvements to the logic of his time, but various other "logics" sprouted, and whole new disciplines arose which had at their foundation a form of logical analysis.

Although I'm not always terribly forgiving of Kant's views, in this case, I might be more lenient. From Kant's vantage point, it really might've looked as if logic were "closed and complete", at least in the West. In Susanne Bobzien's Stanford Encyclopedia of Philosophy entry on Ancient Logic, she writes that Aristotle's logic "was taught by and large without rival from the 4th to the 19th centuries CE" (see the section on Aristotle for the quote). This is surprising because, in Aristotle's time, there existed rival forms of logical analysis. We will meet the main challenger in the next unit. A question naturally arises: how did Aristotle come to dominate?

The more one studies history, the more one realizes that ideas don't always win out because they are better. Sometimes they get a little help. And other times it's just luck. In the case of Aristotle and his logic, it's a little bit of both. The story is complicated, but here's what we can say. Nearly all of the writings of the Stoics, the founders of the school of logic that was the main competitor to that of Aristotle, are lost. How they came to be lost is why this story is so complicated. I can't tell the full story here, but at least part of it is due to the rise of Christianity and the cleansing of pagan and heterodox literature. The interested student can read Freeman's (2007) The Closing of the Western Mind: The Rise of Faith and the Fall of Reason and Nixey's (2018) The Darkening Age: The Christian Destruction of the Classical World.2

George Boole
George Boole (1815-1864).

In any case, Aristotle did dominate, basically unquestioned. That is, until George Boole (1815-1864) published The Mathematical Analysis of Logic in 1847. Boole was a mathematician who spent his brief career at Queen's College, Cork in Ireland, where he has a monument (pictured above). To try to explain Boole's contributions to the modern world this early in the course would be futile. We will revisit his ideas later on in more detail. For now, we will say only two things. First, most historians of computer science agree that Boole laid down the foundations for the information age. Second, and more related to our current task, Boole made the first improvement to Aristotle's logic in over a millennium.

 

Some books by Bolaño

 

Hypothetically...

Consider the following proposition: “All dragons are things that are magical.”

Question: Is this proposition true or false?

You don't really want to say that it's true, do you? Wouldn't that imply that you think dragons are real? But you don't want to say it's false either. You want to say both that dragons are imaginary creatures and that, in the traditional conception of these imaginary creatures, magic is involved. There! We said it. Easy enough. But how would we represent this in categorical logic?

A young Roberto Bolaño
A young Roberto Bolaño.

Imagine someone asserts the sentence "All of Roberto Bolaño's books are at least 200 pages" (see image above). Put into standard form, "All books written by Roberto Bolaño are books with more than 200 pages" is an A-type sentence. As we learned in the previous section, a true A-type sentence implies a true I-type sentence: if "All books written by Roberto Bolaño are books with more than 200 pages" is true, then "Some books written by Roberto Bolaño are books with more than 200 pages" is also true. That's basic subalternation.

This is where the trouble begins. Imagine someone asserts the sentence “All dragons are things that are magical” or "All harks are things that are very scary". Harks are indeed scary, by the way (click here). However, to say "All harks are things that are very scary" is to imply "Some harks are things that are very scary"; and remember: to logicians "some" means "at least one". So, if you say "All harks are things that are very scary" is true, you are necessarily committing yourself to the existence of harks. Harks, though, obviously don't exist. But when we say "All harks are things that are very scary", it's pretty clear what we're trying to get at. If harks existed, then they would be very scary indeed.

So what George Boole did was stipulate that you can take either the existential viewpoint or the hypothetical viewpoint. If you are talking about things that don't exist, you simply take the hypothetical viewpoint: if F's exist, then they are G. Although this seems like a simple enough fix, there is an unexpected consequence: the Square of Opposition is gutted. Many of its connections are lost. And so we are left with the traditional Square of Opposition if we are taking the existential viewpoint, and the modern Square of Opposition when we are taking the hypothetical viewpoint (pictured below).
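Here is a toy Python illustration (the sets are my own stand-ins) of why the trouble arises. Read the quantifiers set-theoretically, give the subject class no members, and the A sentence comes out (vacuously) true while its supposed subaltern, the I sentence, comes out false:

```python
harks = set()                    # no harks exist
scary_things = {"lion", "bear"}  # a toy universe of scary things

# "All harks are things that are very scary."
all_harks_scary = all(x in scary_things for x in harks)   # vacuously True
# "Some harks are things that are very scary."
some_harks_scary = any(x in scary_things for x in harks)  # False: no witness

print(all_harks_scary, some_harks_scary)  # True False -> subalternation fails
```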

 

Modern Square of Opposition

 

 

Important Concepts

 

Next time...

Be sure to complete the homework for this lesson diligently. Catch up on any lessons you haven't covered. Review if necessary. Drink lots of coffee. Do whatever you need to do to get ready for the toughest part of this unit...

 

Venn diagram

 

FYI

Homework!

 

Footnotes

1. On a technical/historical note, Aristotle himself didn't use the image of the Square of Opposition that we are using. Rather, this image was developed during the Roman imperial age using Aristotle's observations. But almost all of the relations were already present in Aristotle's work. In De Interpretatione, Aristotle discusses contraries and contradictories; in Topics, he discusses subalternation. Subcontraries were, however, an idea provided by later logicians. The Square of Opposition first appears in a commentary on Aristotle's De Interpretatione by Apuleius of Madauros, written in the second century CE (Shenefelt and White 2013: 54n8).

2. I might add that this state of affairs was confined mostly to the West. As it turns out, other regions of the world were seeing a continued interest in mathematics, logic, philosophy, and the sciences, while Europe went into its Dark Age. For example, the Islamic empire grew so rapidly that they urgently needed administrators talented in mathematical and logical analysis to help hold the domains together. The Islamic rulers gave energetic support to these studies. "Books were hunted far and wide, and gifted translators like Thabit ibn Qurra (836-901) were found and brought to work in Baghdad... So diligent were the Arabs in those enlightened times that we often know of Greek works only because an Arabic translation survives" (Gray 2003: 41).

 

 

HOMEWORK for Unit I, Lesson 4: The Square of Opposition

Note: Click here for a PDF of this assignment.

Assessing Categorical Arguments for Validity: Immediate Inferences Edition

Directions: Use the Traditional Square of Opposition to check if the following one-premise arguments are valid or invalid.

  1. Some cookie-eaters are cookie monsters. So it is false that no cookie-eaters are cookie monsters.

  2. It is false that some cookies are turnips. So, no cookies are turnips.

  3. It is false that some cookies are not delicious. So, it is false that no cookies are delicious.

  4. It is false that all cookies are metallic. So, it is true that some cookies are metallic.

  5. Some cookies are not splurps. So, no cookies are splurps.

  6. Some cookie monsters are cookie lovers. So, all cookie monsters are cookie lovers.

  7. It is false that no dogs are cookie eaters; hence, it is false that all dogs are cookie eaters.

Categorical Logic 2.0

 

 

We endeavour to employ only symmetrical figures, such as should not only be an aid to reasoning, through the sense of sight, but should also be to some extent elegant in themselves.

~John Venn

From Athens to Great Britain

As discussed in the last lesson, Aristotle's logic was taught in the West basically without opposition (and without any major updates) until the 19th century. Insofar as logic is a method for assessing arguments, categorical logic didn't need much improvement. In fact, some of the updates made to it in the 19th century, as evidenced in the epigraph above, were driven by a desire to make inferences not only valid, but also aesthetically pleasing. That being said, there were some substantive changes on the horizon.1

The updates made to logic came from George Boole, whom we met last time, and John Venn, both of whom were British logicians and mathematicians. We'll be looking at these updates in the lessons to come. Today, however, we will move on to mediate inferences, or two-premise arguments of categorical logic. More commonly, these are referred to as categorical syllogisms, as you'll learn in the Important Concepts below. We will begin by standardizing these and then use ancient techniques to assess them for validity. These methods require us to discover the mood and figure of the syllogisms in question to see if they are valid or not, a method pioneered by Aristotle himself. Then we will move on to the more modern Venn diagrams.

Important Concepts

 

Mood and Figure

Here are the steps for assessing for validity using mood and figure (a short sketch after the steps shows how mood and figure can be read off a symbolized syllogism):

  1. Put each individual categorical proposition into standard form: quantifier, subject class, copula, and predicate class.
  2. Make sure the syllogism as a whole is in standard form. That is,
    1. the syllogism is composed of three standard-form categorical statements;
    2. the subject term of the conclusion (the minor term) must occur once and only once in the second premise, and not in the first premise;
    3. the predicate term of the conclusion (the major term) must occur once and only once in the first premise, and not in the second premise;
    4. the other term in the first premise (the middle term) must also occur in the second premise.
  3. Label the argument according to its mood and figure (see Important Concepts above).
  4. If an argument's mood and figure are on this table, then the argument is valid. If not, then the argument is invalid.
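Here is the promised sketch in Python (my own encoding; each statement is a triple of its letter, subject term, and predicate term, with the major premise listed first, per step 2 above):

```python
def mood_and_figure(p1, p2, conclusion):
    # Mood: the three letters in order (major premise, minor premise, conclusion).
    mood = p1[0] + p2[0] + conclusion[0]
    # The middle term is the one term shared by both premises.
    middle = ({p1[1], p1[2]} & {p2[1], p2[2]}).pop()
    if p1[1] == middle and p2[2] == middle:
        figure = 1   # M-P / S-M
    elif p1[2] == middle and p2[2] == middle:
        figure = 2   # P-M / S-M
    elif p1[1] == middle and p2[1] == middle:
        figure = 3   # M-P / M-S
    else:
        figure = 4   # P-M / M-S
    return mood, figure

# "All goats are mammals. All mammals are animals. So, all goats are animals."
# In standard form, the premise containing the major term ("animals") comes first.
p1 = ("A", "mammals", "animals")  # All mammals are animals.
p2 = ("A", "goats", "mammals")    # All goats are mammals.
c  = ("A", "goats", "animals")    # Therefore, all goats are animals.
print(mood_and_figure(p1, p2, c))  # ('AAA', 1) -- on the table, so valid
```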

 

 

Food for thought...

 

 

 

Circles, circles, circles...2

“Another revolutionary development in nineteenth-century logic was the discovery, by the English logician and philosopher John Venn (1834-1923), of a radically new way to show that a categorical syllogism is valid. Venn’s method allows us to visually represent the information content of categorical sentences, in such a way that we can actually see the relations between the sentences of a syllogism” (Herrick 2013: 176; emphasis added).3

Here are the steps for Venn diagrams that we'll be following (after the steps, a small sketch shows a way to cross-check your verdicts mechanically):

  1. Abbreviate. Abbreviate the argument, replacing each category with a single capital letter.
  2. Draw and Label. Label the circles (with capital letters) using the minor term (the subject term of the conclusion) for the lower left circle, the major term (the predicate term of the conclusion) for the lower right circle, and the middle term (the remaining class) on the middle circle.
  3. Decide on Order. If the argument contains a universal premise, enter its information first. If the argument contains two universal premises or two particular premises, order doesn’t matter.
  4. Enter Premise Information. Universal statements get shaded. Particular statements get an X.
    • Note: When placing an X in an area, if one part of the area has been shaded, place the X in an unshaded part. When placing an X in an area, if a circle’s line runs through the area, place the X directly on the line separating the area into two regions. In other words, the X must “straddle” the line, hanging over both sides equally. An X straddling a line means that, for all we know, the individual represented by the X might be on either side of the line, or on both sides; in other words, it is not known which side of the line the X is actually on.
  5. Remember to decide if you are taking the hypothetical viewpoint. Look at the two circles standing for the subject terms of your premises. If these terms refer to existing things, and if there is only one region unshaded in either or both circles, place an X in that unshaded region (thereby presenting the existential viewpoint). If these terms refer to things that do not exist or the arguer does not wish to assume exist, then you are finished (thereby presenting the hypothetical viewpoint). Repeat for the middle and predicate terms.
  6. Assess for validity. A categorical syllogism is valid if, by diagramming only the premises, we have also diagrammed the information found in the conclusion. A categorical syllogism is invalid if, when we have diagrammed the information content of the premises, information must be added to the diagram to represent the information content of the conclusion.
    • CAUTION: If no X or shading appears in an area, this does not say that nothing exists in the area; rather, it indicates that nothing is known about the area. Again, it only means we have no information about the area. Thus, for all we know, the area might be empty, or it might contain one or more things.
  7. Label the argument valid or invalid.
  8. Look over your work.
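
If you are curious about what the Venn test is really computing, here is a brute-force sketch in Python, assuming the hypothetical (Boolean) viewpoint: a syllogism is valid just in case no way of marking the eight S/P/M regions as empty or occupied makes the premises true and the conclusion false. The tuple encoding and function names are my own illustration, not notation from the course.

    from itertools import product

    REGIONS = list(product([False, True], repeat=3))  # (in S?, in P?, in M?)
    IDX = {"S": 0, "P": 1, "M": 2}

    def true_in(stmt, occupied):
        form, a, b = stmt  # e.g., ("All", "S", "M") for "All S are M"
        i, j = IDX[a], IDX[b]
        if form == "All":       # every region inside a but outside b is empty
            return all(not occupied[r] for r in REGIONS if r[i] and not r[j])
        if form == "No":        # every region inside both a and b is empty
            return all(not occupied[r] for r in REGIONS if r[i] and r[j])
        if form == "Some":      # some region inside both is occupied (an X)
            return any(occupied[r] for r in REGIONS if r[i] and r[j])
        if form == "Some-not":  # some region inside a, outside b, is occupied
            return any(occupied[r] for r in REGIONS if r[i] and not r[j])

    def valid(p1, p2, conclusion):
        for marks in product([False, True], repeat=len(REGIONS)):
            occupied = dict(zip(REGIONS, marks))
            if (true_in(p1, occupied) and true_in(p2, occupied)
                    and not true_in(conclusion, occupied)):
                return False  # a counterexample diagram exists
        return True

    # AAA-1 (Barbara): All M are P; All S are M; so, All S are P.
    print(valid(("All", "M", "P"), ("All", "S", "M"), ("All", "S", "P")))  # True
    # AAA-2 (exercise 6 below): All P are M; All S are M; so, All S are P.
    print(valid(("All", "P", "M"), ("All", "S", "M"), ("All", "S", "P")))  # False

Notice how this brute force mirrors the diagram test: shading a region corresponds to marking it empty, and placing an X corresponds to marking it occupied.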

 

 

 

Do stuff

 

FYI

Assessing Categorical Arguments for Validity: Venn Diagram Edition

Directions: Use either the mood and figure method or the Venn diagram method to determine whether the following categorical syllogisms are valid. (Here's the table for mood and figure.)
Note that some arguments may be out of order, placing the conclusion before the premises. Use conclusion indicators, like "therefore" and "so", to help you identify the conclusion. Premise indicators, like "since", will help you identify premises.

  1. All philosophers are lovers of truth. No lovers of truth are close-minded people. So, no philosophers are close-minded people.
  2. No dogs are cats, and no fish are dogs. So, some cats are fish.
  3. All dogs can sing, and no singers can swim. Therefore, no dogs can swim.
  4. All anteaters eat hymenoptera. Therefore some Myrmecophagidae are anteaters, since some eaters of hymenoptera are Myrmecophagidae.
  5. Some scavengers are fish, but some dogs aren’t fish. Therefore, some dogs aren’t scavengers.
  6. All goats are cute. All small mammals are cute. So, all small mammals are goats.
  7. Some actors are sculptors. Some poets are not actors. So, some poets are not sculptors.
  8. All A are B. All A are C. Thus, all C are B.
  9. Some A are not C, and all C are B. Thus, some A are not B.
  10. Some B are C. Some B are not A. Therefore, some A are C.

 

Tablet Time!

 

 

Footnotes

1. I might add that other thinkers of the time were also driven by an appeal to beauty as opposed to experimental data. Physicist James Clerk Maxwell (1831-1879), for example, famous for being the first to hypothesize that a changing electric field creates a magnetic field, arrived at this view not through experimental data, but because he was bothered by the thought that nature would otherwise be asymmetrical. In other words, it was a matter of aesthetics that drove Maxwell to his conclusion (see Wolfson 2000, Lecture 4).

2. The techniques used here are essentially those of Herrick (2013), chapter 9 in particular. I am also, however, using tips and verbiage found in Monge (2017).

3. I might add that John Venn truly stood on the shoulders of giants with regard to his discovery of another method of assessing for validity. As it turns out, none other than the Swiss mathematician Leonhard Euler had offered a series of diagrams, also composed of circles, to express the import of the four sentences of categorical logic. This work was done in the 18th century and was improved upon by the 19th-century French mathematician J. D. Gergonne (see Shenefelt and White 2013: 60-64). This, of course, takes nothing away from John Venn's aesthetically pleasing work, which is the culmination of these analyses.

 

 

HOMEWORK FOR UNIT I, LESSON 5: Categorical Logic 2.0

Assessing Categorical Arguments for Validity: Venn Diagram Edition

Directions: Use either the mood and figure method or the Venn diagram method to determine whether the following categorical syllogisms are valid. (Here's the table for mood and figure.)
Note: Some arguments may be out of order, placing the conclusion before the premises. Use conclusion indicators, like "therefore" and "so", to help you identify the conclusion. Premise indicators, like "since", will help you identify premises.

  1. All philosophers are lovers of truth. No lovers of truth are close-minded people. So, no philosophers are close-minded people. 

  2. No dogs are cats, and no fish are dogs. So, some cats are fish.

  3. All dogs can sing, and no singers can swim. Therefore, no dogs can swim.

  4. All anteaters eat hymenoptera. Therefore some Myrmecophagidae are anteaters, since some eaters of hymenoptera are Myrmecophagidae.

  5. Some scavengers are fish, but some dogs aren’t fish. Therefore, some dogs aren’t scavengers.

  6. All goats are cute. All small mammals are cute. So, all small mammals are goats.

  7. Some actors are sculptors. Some poets are not actors. So, some poets are not sculptors.  

  8. All A are B. All A are C. Thus, all C are B. 

  9. Some A are not C, and all C are B. Thus, some A are not B. 

  10. Some B are C. Some B are not A. Therefore, some A are C.

Logic and Computing

 

 

Persistence is often more important than intelligence. Approaching material with a goal of learning it on your own gives you a unique path to mastery.

~Barbara Oakley

Tips on Learning

Oakley's A Mind for Numbers

In her 2014 A Mind for Numbers, Barbara Oakley discusses the strategies that she and other accomplished instructors teach their students to prepare for STEM classes. These skills, however, could be applied to any challenging field. In fact, even chess players use these. If taken to heart, these techniques can help you succeed in acquiring new skills in challenging disciplines, including logic. As such, Oakley's work is invaluable for students who are starting their academic careers. Let's briefly review some of her recommendations.

In chapter 6, Oakley discusses the phenomenon of "choking": the feeling of being overwhelmed during a test, such that none of the information you need comes to mind. Many students report this happening to them when they have failed a test. Choking occurs when we have overloaded our working memory. To prevent this, we must take enough time to "chunk", or integrate one or more concepts into a smoothly connected working thought pattern. Give yourself several days to learn tough concepts, and use retrieval practice during your study. That is to say, read slowly and, after each paragraph (or less), pause to say out loud the main idea of what you just read. You can also do this with homework problems. Don't just rush to finish them. Explain to yourself the reasoning behind your solution.

In chapter 11, Oakley discusses the role of working memory. To excel in your studies (and life), use memory tricks to become proficient at remembering important information. We’re good at remembering physical environments, like our homes. Whenever possible, “link” what you’re learning to physical locations and things. If you're learning physiology, perhaps the layers of the skin could be the layers of your house. If you're learning logic rules, you can make analogies to everyday household items, like we will later with the analogy of the lock and key. You can also carry pictures of your flash cards with you when you work out, since working out regularly has been shown to improve memory. Take a glance before starting your training to prime your brain to think about those things in the background of your mind. It works!

Other tips from chapter 11 include utilizing our predilection for music as an aid in our studies. Little songs that you make up might help you remember some important concepts or equations. The sillier the better. Oakley reminds us not to worry about being weird; some of the most brilliant thinkers were not very normal people. Also, sometimes a location can evoke a certain feeling, and you can call up certain memories by evoking that feeling. If you studied a lot in the library, visualize yourself in the library before starting a test.

Other tips:

  • Make your own questions.
  • Apply the concepts you’re learning to your life.
  • Doodle visual metaphors for hard concepts.
  • Review just before sleep and upon waking.

 

 

 

Level III: Sorites

For reference, here are the two main steps for working out sorites. First, make sure the argument is in standard form. This means that you must make sure that:

  • All statements are standard-form categorical statements.
  • Each term occurs twice.
  • The predicate term of the conclusion appears in the first premise.
  • Every statement up to the conclusion has a term in common with the statement immediately following.

Once this has been done, follow these steps:

  1. Pair together two premises that have a term in common and derive an intermediate conclusion. This conclusion should have a term in common with one of the unused statements in the sorites.
  2. Pair together these two statements and draw a conclusion from this second pair.
  3. Repeat until all premises have been used.
  4. Evaluate each individual syllogism.

Rule: If each individual syllogism is valid, the sorites is valid. If even one syllogism in the chain is invalid, the sorites is invalid.

For more information, check out the video below.

 

 

Aristotle

 

 

Storytime!

 

 

Computation today...

I'd like to add a caveat to this happy history of logic and computation. As if we needed a reminder, the history of civilization doesn't always move in the "up" direction. Sometimes civilization takes a few steps back; sometimes it's several steps. Our experiments with computational devices are only just beginning, and there is no guarantee that there will be a happy ending. For example, in a recent study, researchers presented machine learning experts with 70 occupations and asked them about the likelihood that each could be automated. The researchers then used a probability model to project how susceptible to automation an additional 632 jobs were. The result: it is technically feasible to automate about 47% of jobs (Frey & Osborne 2013).

The jobs at risk of robotization are both low-skill and high-skill jobs. On the low-skill end, it is possible to fully automate most of the tasks performed by truckers, cashiers, and line cooks. It is conceivable that the automation of other low-skill jobs, like those of security guards and parcel delivery workers, could be accelerated due to the global COVID-19 pandemic.

High-skill workers won't be getting off the hook either, though. Many of the tasks performed by a paralegal could be automated. The same could be said for accountants.

Are these jobs doomed? Hold on a second... The researchers behind another study disputed Frey and Osborne’s methodology, arguing that it is not entire jobs that will be automated but rather separate tasks within each job. After reassessing, their result was that only about 38% of jobs could be done by machines...

It gets worse...1

 

 

The Road Ahead

In the Storytime! above, we introduced some historical figures who will play a role in the development of logic covered in this course, most notably Alan Turing. Historically speaking, though, the most important group to discuss at this juncture is the Stoic school of philosophy. It's true. In Aristotle's own time, there were criticisms of his approach to reasoning coming from a group of people who would sit at the stoa (porch) of the agora (marketplace). Stay tuned.

A hark, which is definitely not real.

For now, we can admit that Aristotle's logic has some weaknesses. First off, once Boole introduced the concept of the hypothetical viewpoint, there emerges an unappealing relativity: some arguments are valid or invalid depending on whether you're taking the existential viewpoint or the hypothetical viewpoint. Not only is there something inelegant about this, it also means that Aristotle's highly abstracted categorical statements (like "All S are P") are somehow "missing" information. How would we know whether "All S are P" refers to actually existing things or not? There must be a way of expressing which viewpoint we are taking within our symbolized statements, but Aristotle's logic has no convention for this.

Second, Aristotle believed that all propositions could be forced into standard form for categorical statements. But as you might've noticed, this is really counterintuitive in some cases. For example, consider the sentence “All except truckers are happy with the new regulations”. To accurately convey the information in this expression, one must use two standard form categorical statements:

  • All non-truckers are persons who are happy with the new regulations.
  • No truckers are persons who are happy with the new regulations.

It goes without saying that this is extremely counterintuitive, inelegant, and cumbersome.

And so we take the next step in our introduction to the history of logic. We now take a look at the chief rival to Aristotle’s Categorical Logic: truth-functional logic.

 

 

FYI

Related Material—

 

Footnotes

1. The interested student can refer to literature on the potential of artificial intelligence becoming an existential risk to humans. The most famous example of this is the work of Nick Bostrom, particularly his Superintelligence: Paths, dangers, strategies.

 

 

 UNIT II

logic2.jpg

TL (Pt. I)

 

 

Wise people are in want of nothing, and yet need many things. On the other hand, nothing is needed by fools, for they do not understand how to use anything, but are in want of everything.

~Chrysippus

Chrysippus: the second greatest logician of antiquity

The Stoic school of philosophy, generally speaking, was founded by Zeno of Citium in Athens in the early 3rd century BCE. The Stoics are mostly known for their ethical views, as you will learn in the Storytime! below. It was Chrysippus of Soli (279-206 BCE), who took charge of the Stoic school around 230 BCE, who initiated its formal studies in logic. In typical Greek intellectual fashion, Chrysippus didn't follow Aristotle's approach blindly; Chrysippus wanted to move past the writings of "the master" (as Aristotle was sometimes called). And so Chrysippus analyzed the notion of truth. Before diving into Chrysippus' insights, however, let's get some context.

So far, the emphasis of our logical analysis has been on categories. But the Stoic school of logic conceived of reasoning in a wholly different way. They broke down reasoning not to the level of categories, but to the level of complete declarative sentences. Chrysippus realized that some sentences have a special property: they are either true or false. Call sentences that have this property truth-functional sentences. It's important to note that not all sentences exhibit this property. It makes no sense, for example, to say that an exclamation is true. We noted this back in Unit I. Instead, it appears that declarative sentences, sentences that describe a state of affairs, are the only type of sentence that is truth-functional. This is why Chrysippus' logic is called truth-functional logic: it is the logical analysis of complete declarative sentences.

Truth-functional logic is also known as propositional logic. A proposition is the thought expressed by a declarative sentence. This might be tough to grasp at first, so let me give you an example. Consider the following set of sentences:

  • "Snow is white."
  • "La neige est blanche."
  • "La nieve es blanca."
  • "Schnee ist weiß."

This set contains four sentences. That's clear enough. But how many propositions does it contain? In fact, it contains only one proposition, because the same thought is expressed in four different languages. These sentences, moreover, are clearly different from sentences like "What’s a pizookie?", "Close the door!", and "AHH!!!!!". We simply cannot reasonably apply the label of true or false to a question, a command, or an exclamation, the way we can to a declarative sentence.

Another way of thinking about propositions relates to the language of thought, which is sometimes (jokingly) referred to as Brainish or Mentalese. Have you ever had a thought but couldn't quite find the words to express yourself? Some linguists, like Steven Pinker, might say that you have the proposition in the language of thought, and you are searching for the natural language—say, English—to "clothe" your proposition in so that you can communicate this idea.1

One last note that might be of interest here: it was actually the Stoics that gave logic its name:

“Aristotle might be the founder of logic as an academic discipline, but it was the Stoic philosophers who gave the subject its actual name. At the research institute founded by Aristotle, the Lyceum, the subject apparently had no formal title. After the death of Aristotle, his logic texts were bundled together as a unit known only as the ‘organon’ or ‘tool of thought.’ Early in the third century BCE, logicians in the Stoic school of philosophy, also located in Athens, Greece, named the subject after the Greek term logos (which could mean ‘word’, ‘speech’, ‘reason’, or even ‘the underlying principle of the cosmos’)” (Herrick 2013: 211).

 

Storytime!

 

 

Important Concepts

 

The Great Insight

We've been hinting at Chrysippus' great insight, and now it's time to make it clear. Here is what Chrysippus realized. Truth-values and truth-functions reside within compound sentences with truth-functional connectives. In other words, there are logical connectives—like "and" and "or"—that establish certain relationships between the component sentences as well as for the compound sentence as a whole. Let's take a look at these relationships so you can see what I mean.

Let's start with negations. A negation is a sentence, either simple or compound, whose main logical operator negates the whole thing. For example, take the simple sentence "James is home". To negate this sentence would be to say something like "It's not the case that James is home". Now here's a question: Is this statement, "It's not the case that James is home", true or false? That depends. If the sentence being negated is true, the negation as a whole is false; and if the sentence being negated is false, the negation as a whole is true. In other words, if "James is home" is true, then the whole compound ("It's not the case that James is home") is false. But if "James is home" is false, then the compound ("It's not the case that James is home") is true.

Let's move on to conjunctions. A conjunction is simply an and-statement, like "James is home, and Sabrina is at the store." In a conjunction, the component sentences are called conjuncts. The relationship established here is that only when both conjuncts are true is the whole compound true. In other words, the compound ("James is home, and Sabrina is at the store") is only true if both "James is home" is true and "Sabrina is at the store" is true.

A disjunction is an or-statement, with the component sentences being referred to as disjuncts. It is important to note that the disjunctions we will be covering use the inclusive "or". In other words, disjunctions, for our purposes, will be true when at least one of the disjuncts is true, including the case where both are true. Only when both disjuncts are false will the whole compound be false.

Conditionals are if/then-statements, like "If James is home, then Sabrina is at the store." The antecedent is the component sentence in the if-half of the sentence, in this case "James is home", and the consequent is the component sentence in the then-half of the sentence, "Sabrina is at the store." The truth-function of conditionals is a little counterintuitive, but here's the basic idea. When the antecedent is true and the consequent is false, the conditional as a whole is false. All other times, the conditional as a whole is true.

Sidebar

Students sometimes have trouble with this truth-function. Perhaps the way you can think about conditionals is as "rules". Here in the U.S.A., the legal drinking age is 21. So, generally speaking, we assume the following "rule": If someone is drinking, then they are 21 or older. How would we "falsify" or violate this rule? Well, if we saw someone drinking and learned they were 21 or older, all is well; the rule is not being violated. If we saw someone not drinking and learned that they were 21 or older, this wouldn't affect our rule in any way. Similarly, if we saw someone not drinking and learned that they were under 21, then the rule still stands. It's only in the case where it's true that someone is drinking and false that they are 21 or older that the rule is being violated. That's the basic intuition behind when a conditional is false: it's only when the antecedent is true and the consequent is false that the conditional as a whole is false.

The last truth-functional compound we will be looking at is the biconditional, or if-and-only-if-statement. This relationship is such that only when the truth-values match is the compound true. The analogy that I like to use when teaching this concept is that of a contract. Consider the sentence "You’re divorced if, and only if, you've signed the divorce papers." If one of the component sentences is true, then so is the other. Similarly, if one is false, then so is the other. So, only when the truth-values of the component sentences match is the whole compound true; if they don't match, the compound is false.
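
Since each connective is just a function from truth-values to truth-values, we can state the five relationships above very compactly. Here is a minimal sketch in Python, where True and False play the roles of the truth-values (the function names are my own illustration):

    def neg(p):        # negation flips the value
        return not p

    def conj(p, q):    # true only when both conjuncts are true
        return p and q

    def disj(p, q):    # inclusive "or": false only when both disjuncts are false
        return p or q

    def cond(p, q):    # false only when antecedent true, consequent false
        return (not p) or q

    def bicond(p, q):  # true just when the truth-values match
        return p == q

    # "It's not the case that James is home", where "James is home" is true:
    print(neg(True))          # False
    print(cond(True, False))  # False: the one way a conditional fails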

 

 

 

Leaving natural language behind...

The truth-functional relationships established above were explained using a natural language, namely English. A natural language is simply a language that evolved organically through time and which has only semi-fixed meanings. As noted in Unit I, words like "bad" can have different meanings in different contexts (like how "bad" came to mean "good" in the 1990s, or whenever it was). But this is no good for logical analysis. If we are going to get deductive certainty, we can't utilize notions with slippery meanings. So, instead of using natural language, we will be using a specialized formal language that we will call TL.

You'll be happy to hear that the use of TL will make it significantly easier to understand Chrysippus’ logic. Absent symbolization, it is extremely challenging to keep track of all the truth-functional relationships. Moreover, once you've mastered TL, you'll only be a few steps away from mastering PL, which is the formal language that will be utilized in Unit IV—and basically the entire point of this class.

I will have much more to say about formal languages in the lessons to come, but, for now, let's do two things. Let's look at some problems with natural languages (to begin to convince you that we really do need a formal language), and let's introduce you to the connectives of our special language.

Food for thought...

Food for thought

Problems with natural language

Although wonderful for day to day use, natural language definitely has some drawbacks. First of all, it is sometimes very ambiguous. It goes without saying that spoken language doesn't always carry the same meaning in written form. I'm sure you don't have to dig too deep before remembering a time when someone misinterpreted a text message that you sent, reading into it hostility when you meant it as a joke.

In fact, even in spoken language there is plenty of room for misinterpretation. Sometimes misinterpretations can cost lives. Consider that, apparently, the decision to drop the atomic weapons on Hiroshima and Nagasaki was at least partially motivated by a misinterpretation. When the ultimatum of "unconditional surrender" was imposed on the imperial government, reporters in Tokyo asked Japanese Premier Kantaro Suzuki about the plans of the Japanese government. Suzuki responded with the Japanese word mokusatsu, which he meant as "no comment". However, there is another meaning to mokusatsu: ignoring in contempt. This second meaning, which basically states that the ultimatum is not worthy of comment, is the one that reporters chose to go with. Obviously there is a world of difference between saying "I have no comment at this time" and "That is not worthy of comment". And so within ten days, assuming that the Japanese were fanatical to the end, the decision was made to drop "the bomb".

As if the ambiguity in natural language weren't enough, there are three other problems I'd like to mention. First, it actually is surprisingly difficult to define complex notions. For example, try to define intelligence, culture, or civilization.

Second, even if you do have a "working definition" of these notions, it's usually the case that conceptions of complex notions evolve over time. The meaning behind marriage, for example, has changed over time, as Stephanie Coontz demonstrates in her (2005) Marriage, a history: From obedience to intimacy.

Lastly, the fixing (or stipulation) of meaning and the capacity to reduce this meaning into a single symbol is a hallmark of formal languages, one that allows the human mind to go farther than it otherwise could. I'll have more to say on this next time, but, for now, consider the following. Would you be able to perform long division in Roman numerals (like the ones on the clock below)?

 

 

Logical Connectives

As I said early in this course, we will be learning a formal language. However, I lied. We will be learning two formal languages. This is because we must learn TL on our way to the more complicated language we will be working with by the end of the course. Consider this a bit of a stepping stone. Nonetheless, this is a surprisingly powerful language. We will take a look at its syntax rules next time, but right now I want to introduce you to its logical connectives, or operators. You should memorize these as soon as possible.

Sentence Connectives of TL

  • The tilde symbol "~" stands for "not".
  • The wedge symbol "∨" stands for the (inclusive) "or".
  • The ampersand "&" stands for "and".
  • The horseshoe "⊃" stands for "if,...then".
  • The triple bar "≡" stands for "if, and only if".

Again, I'd like to stress that you should memorize these as soon as possible. Next time, you'll learn a whole new language...

 

 

FYI

Homework!

  • Memorize all terms and truth-functions in this lesson, as well as the sentence connectives of TL.
  • Study this summary of common connectives. This will help when you are translating from English to TL.

 

Footnotes

1. This view is in opposition to a view called the Whorf-Sapir hypothesis, which states that our thoughts are dependent on and structured by the language that we learn as children, for example English versus Cantonese. Pinker (and other linguists who side with the views of Noam Chomsky) argues that this view is outdated, but I won't get into that here. The interested student can refer to Pinker (1994).

 

 

TL (Pt. II)

 

 

If the task of philosophy is to break the domination of words over the human mind [...], then my concept notation, being developed for these purposes, can be a useful instrument for philosophers... I believe the cause of logic has been advanced already by the invention of this concept notation.

Gottlob Frege, Preface to the Begriffsschrift, 1879

The Unexpected Virtue of the Esoteric

Academia features both disciplines with very practical aims and disciplines that are extremely esoteric (i.e., likely to be understood by only a small number of people with specialized knowledge or interest). On the practical end, it doesn't get much better than medical research. In fact, in a 2017 study, researchers attempted to quantify the difference between the social costs and social benefits of different employment types to see which jobs were the most socially valuable. Admitting that several factors taken into consideration might be considered subjective, the researchers produced a list that matches many of our intuitions. What job is the most socially valuable? In other words, what job has the highest social benefit and incurs the least social cost? You guessed it: medical researchers (see Lockwood, Nathanson and Weyl 2017).1

On the other hand, mathematicians themselves have been known to apologize for the "pointlessness" of some of their abstract work. Famously, G.H. Hardy "apologized" that pure mathematics was devoid of any possible applications (Hardy 1992, originally published 1940). In many cases, it was inquiry for the sake of inquiry that drove mathematicians to search for solutions to strange problems in their field. One might think that the problem that Gottlob Frege (1848-1925) was trying to solve is one of these problems that mathematicians/logicians need to "apologize" for. But they'd be wrong...

“[B]y the middle of the nineteenth century, mathematicians and logicians were pondering this question: What is the relationship between mathematics and logic? Some argued that mathematics is an outgrowth of pure logic. Others argued that logic is an outgrowth of pure math. The German mathematician Gottlob Frege developed the first formal logical language… as part of his quest to prove once and for all that mathematics is, in reality, just a branch of logic” (Herrick 2013: 469).

 

Gottlob Frege

 

Logicism

Some mathematicians in the mid-1800s had homed in on a question about the primacy of logic and mathematics. They wondered: Which is more fundamental—logic or mathematics? A view that became popular is that of logicism. Logicism is the view that mathematics can be derived out of logic; i.e., it is the view that logic is the root and mathematics is one of the branches. Put yet another way, this is the view that logic is more fundamental than mathematics.

Frege wanted to prove that logicism is true, but he faced the following problems:

  1. A deductive method of mathematical proofs had not yet been invented.
  2. Foundational theorems cannot be accurately expressed in natural language.

In the last lesson, we saw how natural language really is too ambiguous to achieve deductive certainty. Frege recognized this and knew it meant that he lacked the tools to prove logicism. In what I consider to be a stroke of genius, Frege solved his problem: he simply invented the world's first formal language for logic.2

A formal language is a language designed for a specific domain, like mathematics, logic, or computer programming, with fixed symbols, meanings, and rules of grammar. Frege planned to define the most basic mathematical computations in this formal language, thereby showing the primacy of logic: he would attempt to prove that mathematics can be derived out of logic. In merely attempting to perform this esoteric task, he solidified his position in the history of logic, mathematics, and computer science.

“Before the nineteenth century, logicians used few symbols in their theorizing... Frege’s ‘formal’ language allowed logicians to simplify expressions of extremely complex ideas, and this in turn led to enormous advances in logical theory. Incidentally, the new symbolic logic… also greatly advanced our understanding of language and also laid the conceptual foundations of computer science” (Herrick 2013: 423; emphasis added).

It's difficult to convey just what a paradigm shift Frege ushered in. I'll close this section with some quotes on how abstract symbols serve as an immensely powerful aid in reasoning:

  • “Many scholars of literacy would also argue that written language makes certain forms of reasoning, if not possible, at least more accessible” (Tomasello 2014: 142).
  • “Had it not been for the adoption of the new and more versatile ideographic symbols, many branches of mathematics could never have been developed because no human mind could grasp the essence of their operations in terms of the phonograms of ordinary language” (Lewis and Langford 1932: 4).

 

Diagram in one of Frege's works

 

 

Important Concepts

 

Working with TL

 

 

 

Practice Problems

Translations

Directions: Translate the following English sentences into TL using obvious letters for the sentence constants, e.g., use "S" for "Samantha is a singer."

  1. Either Samantha is a singer or Juan is a guitar player. 
  2. Jaclyn knows sign language and Jason has a dog. 
  3. If Olie plays video games then Irazema is a chef.
  4. Hugo has a pet banana if (and only if) Patricia is married to Tim. 
  5. If Amanda is a bartender, then RCG is a patron and Anabelle is the hostess.
  6. Either Tomás lives in Seattle and Lucía lives in Denver, or Tomás lives in Seattle and Mauricio lives in Cancún.
  7. It’s definitely not the case that Lucía lives in Denver and at the same time Mauricio lives in Cancún.
  8. If Sarah makes chicken pot pie then Lisa will make martinis.
  9. If Lisa makes martinis, Sarah will make chicken pot pie. 
  10. It’s not true that it’s not the case that Angie is an atheist.

Identifying the Main Operator

Directions: Identify the main operator in the following:

  1. ~[(O ∨ I) ⊃ (H ≡ P)]
  2. (~U & ~I)
  3. (([V ≡ J] & [D ≡ E]) ∨ [K ⊃ C]) ⊃ A
  4. ~[(O ∨ M) ⊃ (T ≡ M)]
  5. [(T & L) ∨ (T & M)] & ~[(L & M) ∨ Q]
  6. ~~~~~~O

 

Tablet Time!

 

Food for thought...

 

 

 

A glimpse at truth-tables...

Below is a slideshow introducing you to the concept of a truth-table. The general idea is to show the conditions under which different truth-functional propositions are either true or false. "P" and "Q" are metavariables that represent any simple or compound sentence. Hence, the way that you use these tables is: a. you decide what the main operator is for the sentence as a whole, and b. you use the corresponding truth-table to figure out its truth-conditions. For example, for the sentence "(A & B) & C", the main operator is the second "&". As such, you refer to the truth-table for the & connective. It turns out that this compound is only true when both conjuncts are true. We'll get more practice with these in the lessons to come. For now, it is best to memorize these tables. Have fun!
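
As a concrete illustration (the code is my own, not part of the slideshow), here is how one might mechanically generate the rows for "(A & B) & C" in Python:

    from itertools import product

    # Every possible truth-value assignment to A, B, and C, in the
    # standard T-before-F order, with the value of (A & B) & C at right.
    for A, B, C in product([True, False], repeat=3):
        print(A, B, C, "->", (A and B) and C)
    # Only the first row (all three true) comes out True.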

 

 

 

FYI

Homework!

  • Memorize the truth-tables for the connectives.
  • Memorize the truth-tables for the connectives.
  • Memorize the truth-tables for the connectives.
  • Complete the translations and identification of the main operator assignments.
  • Review all terms
  • And this!

 

Footnotes

1. Per the study mentioned, the job sector with the least social value, that is the one with the highest cost to society and which gives back the least, is the banking sector.

2. We actually will not be learning Frege’s notation style. This is because Bertrand Russell and Alfred North Whitehead’s Principia Mathematica (published in three volumes from 1910-1913) introduced a language that is easier to read and easier to write. We will be learning a language closely related to Russell and Whitehead’s notation method. It's important to emphasize, however, that all of these languages are descendants of Frege's first formal language for logic.

 

 

Pattern Recognition

 

 

Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law.

~Douglas Hofstadter

On things that take way longer than you expected

Some skills take a long time to develop. One lifelong musician that I know gives his students the following wisdom: "In about a year's time, you're going to sound pretty good; in about ten years' time, you'll master your instrument." Setting these very long timelines gives students a metric by which to compare their progress. Many people who take up learning an instrument want to be able to play a song within a few days; some even expect it on the first day. But this is not the way things go. And so expectations like those lead to premature frustration and ultimately to giving up on the whole enterprise. Moderating their expectations would've made all the difference.1

The same goes for adapting to higher levels of study in mathematics, computer science, and logic. Proving theorems and problem solving in these fields is often not the kind of thing you do in an afternoon. Some proofs can take days; others can take years. As you are learning the discipline of logic, however, you may be discovering that even the first steps in this field are challenging.

This is why I'd like to emphasize here that you do your "reps". I insist that going back and doing the same problems again (and again) will reinforce the skills that you need to move forward. Just like powerlifters don't deadlift 700lbs once in their life and say, "There, I did it", you should similarly not solve a problem once and think to yourself, "I'm done here." Core concepts should be regularly reviewed. Challenging problems should be worked out again, until you understand them so well that you can explain them to a classmate. With the very challenging problems of Unit III, you should solve each problem and then see if there's a second way to solve it. And then a third...

And so, I'm recommending that you do two things again in this lesson. One, go over the Food for Thought from the last lesson one more time. Two, re-do the practice problems from last time. I've provided an answer key below.

 

 

Food for thought...

 

Practice Problems

Translations

Directions: Translate the following English sentences into TL using obvious letters for the sentence constants, e.g., use "S" for "Samantha is a singer."

  1. Either Samantha is a singer or Juan is a guitar player. 
  2. Jaclyn knows sign language and Jason has a dog. 
  3. If Olie plays video games then Irazema is a chef.
  4. Hugo has a pet banana if (and only if) Patricia is married to Tim. 
  5. If Amanda is a bartender, then RCG is a patron and Anabelle is the hostess.
  6. Either Tomás lives in Seattle and Lucía lives in Denver, or Tomás lives in Seattle and Mauricio lives in Cancún.
  7. It’s definitely not the case that Lucía lives in Denver and at the same time Mauricio lives in Cancún.
  8. If Sarah makes chicken pot pie then Lisa will make martinis.
  9. If Lisa makes martinis, Sarah will make chicken pot pie. 
  10. It’s not true that it’s not the case that Angie is an atheist.

Identifying the Main Operator

Directions: Identify the main operator in the following:

  1. ~[(O ∨ I) ⊃ (H ≡ P)]
  2. (~U & ~I)
  3. (([V ≡ J] & [D ≡ E]) ∨ [K ⊃ C]) ⊃ A
  4. ~[(O ∨ M) ⊃ (T ≡ M)]
  5. [(T & L) ∨ (T & M)] & ~[(L & M) ∨ Q]
  6. ~~~~~~O

 

 

Some possible translations

  1. S ∨ J
  2. J & D
  3. O ⊃ I
  4. H ≡ P
  5. A ⊃ (R & H)
  6. (T & L) ∨ (T & M)
  7. ~(L & M)
  8. S ⊃ L
  9. L ⊃ S
  10. ~~A

Identifying the Main Operator

The main operator of each formula is identified below:

  1. ~[(O ∨ I) ⊃ (H ≡ P)]: the leftmost tilde
  2. (~U & ~I): the ampersand
  3. (([V ≡ J] & [D ≡ E]) ∨ [K ⊃ C]) ⊃ A: the rightmost horseshoe, just before the "A"
  4. ~[(O ∨ M) ⊃ (T ≡ M)]: the leftmost tilde
  5. [(T & L) ∨ (T & M)] & ~[(L & M) ∨ Q]: the ampersand between the two bracketed compounds
  6. ~~~~~~O: the leftmost tilde

 

 

Question: How do we assess for validity in truth-functional logic?

Stoic methods of assessing for validity

We will be covering multiple ways of assessing for validity using TL. Let's begin with the ancient method of the Stoics: pattern recognition. The Stoics, through their research, discovered certain “patterns of reasoning” that are always valid. If one memorizes these patterns and trains oneself to recognize them "in the wild", so to speak, then one can recognize valid arguments on sight. Relatedly, there are also certain patterns that are always invalid. And so one can come to know very quickly whether an argument is valid or invalid by identifying the underlying pattern of the reasoning.

Let's put labels on these ideas. A valid argument form is an abstract pattern of reasoning that an argument can take, regardless of the subject matter, that is always valid. A formal fallacy is an erroneous pattern of reasoning where the fault hinges on the logical words and how they connect the propositions. In other words, formal fallacies are poorly formed arguments in that their structure itself is flawed. We'll have more to say on that later.

Valid argument forms

Modus Ponens and Modus Tollens

A modus ponens is an argument that begins with a conditional and then reaches a conclusion by affirming the antecedent. Using "P" and "Q" to represent metavariables that can range over any simple or compound sentences and using the "∴" symbol as a conclusion indicator (like "therefore"), the basic structure of a modus ponens is as follows:

  1. P ⊃ Q
  2. P
  3. ∴ Q

A modus tollens is an argument that begins with a conditional and then reaches a conclusion by denying the consequent. Again, using "P" and "Q" to represent metavariables that can range over any simple or compound sentences, the basic structure of a modus tollens is as follows:

  1. P ⊃ Q
  2. ~Q
  3. ∴ ~P

These are actually very common forms of reasoning, even if people don't notice the underlying pattern when they use them. For example, here's an argument about why we need to mow the lawn.

  1. We had said that if it is sunny, then we will cut the grass.
  2. It is sunny.
  3. ∴ We will cut the grass.

The first premise is a conditional with "It is sunny" in the antecedent position and "We will cut the grass" in the consequent position. Then the argument moves forward by affirming the antecedent; in other words, it is affirmed that it is in fact sunny. And so, the argument rightfully concludes that the consequent must be true: we will cut the grass.

Sometimes students think that only simple sentences can occupy the place of the antecedent and consequent. This is not the case. In fact, whenever you find a conditional (a formula of TL where the ⊃ is the main operator), no matter how long it is, and you also have the antecedent by itself (the part to the left of the ⊃), then you have a modus ponens. For example:

  1. We had said that if it is sunny, and we're feeling energized, and we have a babysitter, then we will cut the grass, do laundry, and get groceries.
  2. It is sunny, and we're feeling energized, and we have a babysitter.
  3. ∴ We will cut the grass, do laundry, and get groceries.

In this case, we have a conditional in premise 1 with a conjunction embedded in a conjunction in both the antecedent position and the consequent position. Translating to TL would make premise 1 look something like this: "((S & E) & B) ⊃ ((C & L) & G)". Premise 2 is "(S & E) & B", which is the part to the left of the horseshoe. The conclusion is "(C & L) & G"—the part to the right of the horseshoe.

Modus tollens is very similar to modus ponens except that you deny the consequent and derive the negation of the antecedent. Here's an example:

  1. If the store is open, the lights will be on.
  2. The lights are not on.
  3. ∴ The store is not open.

Again, we begin with a conditional. In symbols it would be "O ⊃ L". We then deny the consequent, the part to the right of the horseshoe (plus a tilde). In symbols, this is "~L". Finally, we derive the negated antecedent, the part to the left of the horseshoe (plus a tilde). In symbols: "~O". Just like a modus ponens, a modus tollens can have very large antecedents and very large consequents; but as long as you have a conditional, the negation of its consequent, and the negation of its antecedent as the conclusion, then it is a modus tollens. For example:

  1. (([V ≡ J] & [D ≡ E]) ∨ [K ⊃ C]) ⊃ (~U & ~I)
  2. ~(~U & ~I)
  3. ∴ ~(([V ≡ J] & [D ≡ E]) ∨ [K ⊃ C])

 

 

Disjunctive Syllogisms and the "Not Both" Form

A disjunctive syllogism is an argument that begins with a disjunction, then draws a conclusion by denying the truth of one of the disjuncts. In symbols:

  1. P ∨ Q
  2. ~P
  3. ∴ Q

Just like the other valid argument forms, disjunctive syllogisms can take many forms. The disjuncts could be simple sentences or very long compound sentences. What matters is that the main operator is a wedge. If you have that, plus a negation of one of the disjuncts, you can derive the other disjunct as a conclusion. For example:

  1. We’ll eat at Dick’s Drive-In or we’ll eat at Spud’s Fish and Chips.
  2. We will not eat at Dick’s Drive-in.
  3. ∴ We will eat at Spud’s.

An argument in the “not both” form is an argument that begins with the negation of a conjunction, i.e., ~(P & Q), then draws a conclusion by affirming the truth of one of the conjuncts. In symbols:

  1. ~(P & Q)
  2. P
  3. ∴ ~Q

 

It is very important that one is careful in making sure that premise 1 is the negation of a conjunction. Sometimes students will use this technique on the negation of a disjunction, but that is not a valid inference.

 

In any case, as long as you have the negation of a conjunction (even if those conjuncts are very large compound sentences) and the assertion of one of the conjuncts, then you can infer the negation of the other conjunct. Here's an example:

  1. It is not the case that Iron Man is currently fighting Captain America and Tony Stark is having lunch with Pepper Potts.
  2. Iron Man is currently fighting Captain America.
  3. ∴ Tony is not having lunch with Pepper.

 

 

 

Formal fallacies

The following patterns of reasoning are always invalid. First, the fallacy of affirming the consequent:

  1. P ⊃ Q
  2. Q
  3. ∴ P

This might look a lot like modus ponens, but the two are worlds apart as far as their logical credentials are concerned. Take a look at this example.

  1. If there is fire, then there is oxygen present.
  2. There is oxygen present.
  3. ∴ There is fire.

Premise 1 is undoubtedly true: oxygen is a necessary constituent of the chemical reaction that is fire. However, if you are reading this, you are most assuredly somewhere where there is oxygen. The argument would suggest that you are also in a place where there's fire, but I really hope that's not the case.2

Another way of thinking about the fallacy of affirming the consequent is that it puts the cart before the horse. It is true that oxygen is necessary for fire—premise 1 is safe. But it's not true that everywhere where there is oxygen there is also fire present. In other words, even though the conditional is true, the only way to move forward with this argument is with the assertion of the antecedent, not the consequent.

Lastly, here is the fallacy of denying the antecedent. Much like affirming the consequent has a vague family resemblance to modus ponens, denying the antecedent looks a lot like modus tollens. Don't be deceived. Denying the antecedent is a pattern of reasoning that is always invalid. In symbols:

  1. P ⊃ Q
  2. ~P
  3. ∴ ~Q

Witness:

  1. If there is fire, then there is oxygen present.
  2. There is no fire.
  3. ∴ There is no oxygen present.
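
To see just how mechanical pattern recognition really is, here is a sketch in Python over a toy encoding of TL, where "P ⊃ Q" becomes ("cond", P, Q), "P ∨ Q" becomes ("or", P, Q), "P & Q" becomes ("and", P, Q), and "~P" becomes ("not", P). The encoding and the function name are my own illustration; you can check your classifications in the homework below against a matcher like this.

    def classify(p1, p2, conclusion):
        """Match a two-premise argument against the forms covered above."""
        if p1[0] == "cond":
            _, ant, cons = p1
            if p2 == ant and conclusion == cons:
                return "modus ponens (valid)"
            if p2 == ("not", cons) and conclusion == ("not", ant):
                return "modus tollens (valid)"
            if p2 == cons and conclusion == ant:
                return "affirming the consequent (invalid)"
            if p2 == ("not", ant) and conclusion == ("not", cons):
                return "denying the antecedent (invalid)"
        if p1[0] == "or":
            _, left, right = p1
            if p2 == ("not", left) and conclusion == right:
                return "disjunctive syllogism (valid)"
            if p2 == ("not", right) and conclusion == left:
                return "disjunctive syllogism (valid)"
        if p1[0] == "not" and isinstance(p1[1], tuple) and p1[1][0] == "and":
            _, (_, left, right) = p1
            if p2 == left and conclusion == ("not", right):
                return '"not both" form (valid)'
            if p2 == right and conclusion == ("not", left):
                return '"not both" form (valid)'
        return "no listed pattern"

    # The fire/oxygen argument: F ⊃ O; O; ∴ F.
    print(classify(("cond", "F", "O"), "O", "F"))
    # -> affirming the consequent (invalid)

    # The store/lights argument: O ⊃ L; ~L; ∴ ~O.
    print(classify(("cond", "O", "L"), ("not", "L"), ("not", "O")))
    # -> modus tollens (valid)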

 

 

FYI

Homework!

In addition to reviewing the truth-tables for the truth-functional connectives, translate the following into TL. Then, using the pattern recognition method, assess for validity. Lastly, identify the relevant valid argument form or formal fallacy.

  1. If there is no reliable way to tell that you are dreaming, you can’t be sure you’re not dreaming right now. There is no reliable way to tell you’re dreaming. Therefore, you can’t be sure you’re not dreaming right now.
  2. If you never read your textbooks, you should stop buying them. I stopped buying them. So it must be that I didn’t read them.
  3. Either you’re the teacher or you’re a student. It is not the case that you’re a student. So you must be the teacher.
  4. If God exists, then there would be no unnecessary suffering. But it is not the case that there is no unnecessary suffering. Therefore, God does not exist.

 

Footnotes

1. Anders Ericsson and colleagues (1993) present a theoretical framework for achieving expert performance in a given domain. The key is deliberate practice, which is a highly structured type of activity, with the explicit goal being to improve performance. "Specific tasks are invented to overcome weaknesses, and performance is carefully monitored to provide cues for ways to improve it further. We claim that deliberate practice requires effort and is not inherently enjoyable" (ibid., 368; emphasis added; see also this interview of Ericsson).

2. If you are somewhere where there is no oxygen, surely you have more pressing matters than learning about formal fallacies.

 

 

Truth-Tables
(Pt. I)

 

 

The biggest lie ever is that practice makes perfect.
Not true—practice makes you better.

~Barbara Oakley

Pattern Recognition Solutions

  1. If there is no reliable way to tell that you are dreaming, you can’t be sure you’re not dreaming right now. There is no reliable way to tell you’re dreaming. Therefore, you can’t be sure you’re not dreaming right now.
    1. ~R ⊃ ~D
    2. ~R
    3. ∴ ~D
    • This is a modus ponens.
  2. If you never read your textbooks, you should stop buying them. I stopped buying them. So it must be that I didn’t read them.
    1. ~N ⊃ ~B
    2. ~B
    3. ∴ ~N
    • This is the fallacy of affirming the consequent.
  3. Either you’re the teacher or you’re a student. It is not the case that you’re a student. So you must be the teacher.
    1. T ∨ S
    2. ~S
    3. ∴ T
    • This is a disjunctive syllogism.
  4. If God exists, then there would be no unnecessary suffering. But it is not the case that there is no unnecessary suffering. Therefore, God does not exist.
    1. G ⊃ ~U
    2. ~~U
    3. ∴ ~G
    • This is a very famous argument against God's existence called the Problem of Evil; it takes the shape of a modus tollens.

 

Tablet Time!

 

 

 

 

Question: How do we know the valid argument forms are actually valid?

 

Truth-table Analysis
(Pt. I)

Here are the steps for building truth-table matrices:

  1. Write down the following: the sentence constants (in alphabetical order) and the TL-symbolization of the sentence(s).
  2. Draw the table.
    1. Note: There should be enough columns for the truth-value assignments and all the sentence constants and connectives of the formula(s) in TL; there should be enough rows for all the possible truth-values of the sentence constants.
    2. Rule: If there is one letter constant, there should be three rows; if there are two constants, there should be 5 rows; 3 constants, 9 rows; etc. (In other words, N = 2^x + 1, where N equals the number of rows, counting the header row, and x equals the number of sentence constants.)
  3. Fill in all possible truth-value assignments.
    1. Rule: If there are two letter constants total, input T-T-F-F for the first column then T-F-T-F for the second; if there are three letter constants total, input T-T-T-T-F-F-F-F for the first column, then T-T-F-F-T-T-F-F for the second, then T-F-T-F-T-F-T-F for the third. And so on for matrices with a higher number of total sentence constants.
  4. Find the main connective(s).
    1. Note: When you build a truth-table matrix containing multiple sentences (as in the case of arguments), you have to find the main connective of each sentence.
  5. Calculate the values under the connectives with the smallest scope.
    1. Note: If there is a tie for which is the smallest scope, compute the leftmost operator first.
  6. Calculate the value of the main connective(s), i.e., the final column(s).
  7. Review your work.

If you are assessing an argument for validity using the truth-table method, here is the Rule for Validity Test: If there is any row on the truth-table that contains all true premises (or premise), but a false conclusion, then the argument is invalid. If the table contains no row showing true premise(s) and a false conclusion, the argument is valid.
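
Here is a minimal sketch, in Python, of this validity test: enumerate every truth-value assignment and hunt for a row with all true premises and a false conclusion. The helper names are my own; the example argument is number 4 from the practice problems below (P ⊃ Q; P; ∴ Q).

    from itertools import product

    def is_valid(constants, premises, conclusion):
        """Valid iff no row makes every premise true and the conclusion false."""
        for values in product([True, False], repeat=len(constants)):
            tva = dict(zip(constants, values))  # one row of the table
            if all(p(tva) for p in premises) and not conclusion(tva):
                return False  # found a counterexample row
        return True

    # Argument 4 below: P ⊃ Q; P; ∴ Q (a modus ponens).
    premises = [lambda v: (not v["P"]) or v["Q"],  # P ⊃ Q
                lambda v: v["P"]]                  # P
    print(is_valid(["P", "Q"], premises, lambda v: v["Q"]))  # True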

 

 

 

 

 

FYI

Homework!

  • Reading: Merrie Bergmann, James H. Moor, and Jack Nelson, The Logic Book, Chapter 3
    • Note: Read p. 69-76 and complete the exercise problems in 3.1E #1-2. Also, here is the student solutions manual for the chapter.
  • Practice Problems: Complete truth-table matrices for the arguments below.
    • Note: The solutions can be found in the next lesson.

Instructions: Use truth-table analysis to assess the following for validity. The first two statements, separated by a ";", are the premises. The conclusion comes after the "∴".

  1. P ⊃ Q; ~P; ∴ ~Q
  2. P ⊃ Q; Q; ∴ P
  3. ~(P & Q); P; ∴ ~Q
  4. P ⊃ Q; P; ∴ Q
  5. P ∨ Q; ~P; ∴ Q
  6. P ⊃ Q; ~Q; ∴ ~P

 

 

Supplemental Material—

 

 

Truth-Tables
(Pt. II)

 

 

Being active and engaged does not mean that the body must move. Active engagement takes place in our brains, not our feet. The brain learns efficiently only if it is attentive, focused, and active in generating mental models.

~Stanislas Dehaene

Truth-table Solutions from Truth-Tables (Pt. I)

 

 

 

Important Concepts

 

Set Theory

At this point, we need to become comfortable with the language of set theory, or the mathematical theory of well-determined collections. We denote sets with curly brackets, "{" and "}". We refer to anything within a set as a member or element of the set. We typically use the capital Greek letter gamma, "Γ", as a variable for talking about sets of sentences of TL. A set with only one member is the unit set. And a truth-value assignment (or TVA) is an assignment of truth-values to the simple (or atomic) sentences of TL. These assignments are laid out in the columns of our truth-tables that list all the possible truth-value combinations of the sentences of TL we are working with; in other words, the TVAs are given by the columns in green from the previous section.

We also have to discuss the concept of isomorphism. The general idea behind isomorphism is that there is a mapping between two structures that can be inverted. Isomorphisms can be helpful in that, if you have two isomorphic objects and you understand one in a mathematically rigorous way, you can understand the other just as rigorously. For example, Douglas Hofstadter (1999: 53-60) reminds us that there are some isomorphisms between mathematics and reality. There are countless examples where, if we understand the mathematics behind certain events in the real world, we can predict and control the actual events in the real world. In his 1999 book Gödel, Escher, Bach, Hofstadter speculates that as we discover the isomorphisms between mental processes and computer programs, we can begin to understand the mind better, since we have a firm understanding of computer programs (ibid., 568-73).

The reason why isomorphisms are important here is that we will now take our logical concepts of logical consistency, validity, logical truth, etc., previously expressed in the natural language of English, and we will express them in a mathematically rigorous way in the language of TL. Our formal language will shed greater light on our concepts expressed in natural language.

So, our concept of logical truth (which is that a sentence is logically true just in case it is not possible for it to be false) is isomorphic with the notion of truth-functional truth—a sentence of TL is truth-functionally true if and only if it is true on every truth-value assignment. Our concept of logical falsity (which is that a sentence is logically false just in case it is not possible for it to be true) is isomorphic with the notion of truth-functional falsity—a sentence of TL is truth-functionally false if and only if it is false on every truth-value assignment. Our concept of logical indeterminacy (which is that a sentence is logically indeterminate just in case it is neither logically true nor logically false) is isomorphic with truth-functional indeterminacy—a sentence of TL is truth-functionally indeterminate if and only if it is neither truth-functionally true nor truth-functionally false.

What does this mean for us in practical terms? It means that if the final column of a sentence of TL is all T's, then we know it is truth-functionally true. If the final column is all F's, then it's truth-functionally false. And if it has some T's and some F's, then it is truth-functionally indeterminate. This formal depiction allows us to understand more fully what is meant by the concepts of logical truth, logical falsity, and logical indeterminacy through the use of formal structures.
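
In even more practical, computational terms (the sketch below is my own, treating a TVA as a Python dictionary from sentence constants to True/False), the three classifications amount to inspecting the final column:

    from itertools import product

    def classify(constants, sentence):
        """Compute the final column over every TVA and inspect it."""
        column = [sentence(dict(zip(constants, row)))
                  for row in product([True, False], repeat=len(constants))]
        if all(column):
            return "truth-functionally true"
        if not any(column):
            return "truth-functionally false"
        return "truth-functionally indeterminate"

    print(classify(["A"], lambda v: v["A"] or not v["A"]))    # A ∨ ~A: true
    print(classify(["A"], lambda v: v["A"] and not v["A"]))   # A & ~A: false
    print(classify(["A", "B"], lambda v: v["A"] and v["B"]))  # A & B: indeterminate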

 

 

Food for thought...

Food for thought

Perhaps the most useful aspect of the isomorphism between logical concepts and their truth-functional expressions is that they really do illuminate what we mean when we use concepts like implication, equivalence, and logical consistency. Consider the following statements:

A: Rodrigo weighs exactly 200lbs.

B: Rodrigo weighs more than 100lbs.

In this set, A implies B; that is, if you know A, then you also know B. It is also intuitively obvious that they don't mean the same thing; i.e., they are not equivalent. Now consider the following set:

A: It is not the case that either Aristo or Blipo is home.

B: Aristo is not home and Blipo is not home.

In this set, there is (intuitively at least) implication and equivalence. But can we express this intuitive notion formally?1

 

 

 

Truth-functional equivalence and consistency

Here are some helpful rules:

  • Rule for Equivalence Test: If two formulas at the top of a table have matching final columns, then they are equivalent. If the final columns do not match, then the formulas are not equivalent.
  • Rule for Consistency Test: Given two (or more) formulas side by side at the top of a table, if there is at least one row where the main operator column of every formula is true, then the sentences are consistent. If there is no such row, then the sentences are inconsistent. (Both tests are sketched in code below.)
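If you like seeing things mechanically, here is a minimal Python sketch of both tests. The Boolean-function encoding is my own convenience, not part of TL; we simply enumerate every TVA. Note that the first check settles the Aristo/Blipo question from the Food for thought above.

```python
from itertools import product

def equivalent(s1, s2, n):
    # Matching final columns on every TVA
    return all(s1(*v) == s2(*v) for v in product([True, False], repeat=n))

def consistent(sentences, n):
    # At least one row on which every sentence is true
    return any(all(s(*v) for s in sentences)
               for v in product([True, False], repeat=n))

# "~(A v B)" and "~A & ~B" (the Aristo/Blipo pair) are equivalent:
print(equivalent(lambda a, b: not (a or b),
                 lambda a, b: (not a) and (not b), 2))  # True
# ...and the set {"A", "~A"} is inconsistent:
print(consistent([lambda a: a, lambda a: not a], 1))    # False
```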

 

 

Indirect truth-tables

 

 

FYI

Homework!

Supplemental Material—

  • Practice Problems: Merrie Bergmann, James Moor, and Jack Nelson, The Logic Book, Chapter 3
    • Note: The interested student can find more problems for determining whether a sentence is truth-functionally true, truth-functionally false, or truth-functionally indeterminate in 3.2E #1 (p. 85), problems for determining whether a pair of sentences is truth-functionally equivalent in 3.3E #1 (p. 91), problems for determining whether sets of sentences are truth-functionally consistent or truth-functionally inconsistent in 3.4E #1 (p. 94), and problems for assessing the validity of an argument using truth-table analysis in 3.5E #1 (p. 102-103). Also, here is the student solutions manual for the chapter.

    Footnotes

    1. We will be covering the truth-functional concepts of equivalence and consistency, but will not be covering implication. To perform truth-table analysis in order to see if one sentence of TL, P, implies another sentence of TL, Q, all one must do is draw a truth-table for a conditional statement with P in the antecedent position and Q in the consequent position. If the final column, that is, the column of the horseshoe, reads all T's, then P truth-functionally implies Q; if not, then it does not.
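For the curious, this implication test is easy to mechanize. Here is a minimal Python sketch, with sentences of TL once again encoded as Boolean functions (an encoding of my own, not official TL notation):

```python
from itertools import product

def implies(p, q, n):
    # The horseshoe column of "P > Q" must read all T's.
    return all((not p(*v)) or q(*v)
               for v in product([True, False], repeat=n))

print(implies(lambda a, b: a and b, lambda a, b: a, 2))  # True: "A & B" implies "A"
print(implies(lambda a, b: a, lambda a, b: a and b, 2))  # False: not vice versa
```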

     

 

 UNIT III

logic3.jpg

Rules of Inference
(Pt. I)

 

 

Logic and mathematics seem to be the only domains where self-evidence manages to rise above triviality; and this it does, in those domains, by a linking of self-evidence on to self-evidence in the chain reaction known as proof.

~W. V. O. Quine

Approaching modern logic...

At this point, you are familiar with both Aristotelian and Stoic logic, as well as some of their methods of assessing for validity. In addition to the informal Imagination Method (where you simply use your logical intuition to see if a conclusion is necessitated by the premises), you've also seen Aristotle's Square of Opposition at work on immediate inferences, as well as the method of assessing for validity which makes use of a syllogism's mood and figure, a method which Aristotle also pioneered. We've also seen the Stoics' method of pattern recognition, as well as truth-table analysis, which was developed in 1893 by C. S. Peirce (see Anellis 2012). The approach we've taken has been more or less historical, beginning with ancient techniques and then jumping to the 19th century, a sliding timeline we used both for Aristotle's categorical logic and for Stoic truth-functional logic.1

We are now moving towards a method that was first developed in the 1930s (see Pelletier 1999). As you will soon see, this method needed to wait until logicians had a formal language through which they could reduce very complicated expressions to a few symbols. We will be utilizing TL to this end. The method is called natural deduction.

Before we move into natural deduction, however, it is important to lay out some questions that might arise at this juncture. Some of these questions, although they seem trifling at first, tie into other innovations of the 20th century, innovations that changed the world. Logic's part in the digital revolution was just one of many, but a student of computer science will quickly recognize similarities between some of the techniques we'll be learning and the writing of computer code.

Here are some questions that we might ponder at this point:

  1. Aristotle or the Stoics? Once logicians had a formal language for logic, first-order logical systems were proven to be truth-functionally complete and truth-functionally consistent. But the question is: Under whose terms? Is the right approach to logic categorical or truth-functional? Both logics are "contained" within modern first-order logic, but who acquired whom?
  2. Can we symbolize Aristotle and Boole? At the end of Unit I we were left with the puzzle of logical relativity: some arguments were valid depending on whether we took the hypothetical viewpoint or the Aristotelian existential viewpoint. But we had no way of expressing this distinction in our symbols. Is there a way to do this?
  3. Is Logicism true? Recall that Frege's impetus for developing his formal language was to demonstrate that mathematics can be derived out of logic. Is this possible?
  4. How do we know Stoic valid argument forms are actually valid? We used the Stoic pattern recognition method in the last unit, but can we prove that they are actually always valid?

 

 

Actually...

We can answer one of the questions above right now: question #4. We've already done it with truth-tables, and, even though there are more formal methods, this shows us that there really is something special about these patterns of thought. They do appear to be always valid. Here's a slideshow to remind you of the tables in question.

 

 

We can also begin to address question #1 above. It really does seem like categorical logic and truth-functional logic are similar enough to unify. Here's one similarity. Notice that in modern categorical logic, we draw a Venn diagram for an argument and then populate it with information from the argument, namely the premises. In modern truth-functional logic, we draw a truth-table for an argument, then we populate it with information from the argument. Only then do we decide if the argument is valid or invalid.

Another similarity is that the validity tests for both are visual. If you've correctly populated the information from a valid argument into a Venn diagram, you can literally see the conclusion. Something similar occurs in truth-table analysis—you can see the validity (or non-validity) of the argument.

Lastly, both validity tests are mechanical. They are algorithmic. There are steps one must follow. In other words, when assessing for validity, we become like human computers. In fact, when we follow algorithmic processes to assess for validity, we are doing what modern computers do when they read parameters for customized programs. Tellingly, parameters are also known as arguments.

For these reasons it seems at least possible that we can unify categorical logic and truth-functional logic. But will it be under Aristotelian or Stoic terms? Stay tuned.

 

 

 

Important Concepts

 

Food for thought...

As previously stated, in Chapter 6 of A Mind for Numbers, Barbara Oakley reminds us that “choking” occurs when we have overloaded our working memory. To prevent this from happening, we must take enough time to “chunk”, or integrate one or more concepts into a smoothly connected working thought pattern. Put another way, chunking is the mental process of grouping together connected items, words, or even whole declarative sentences so that they can be stored or processed as single concepts.

Metaconcept diagram
Metaconcept diagram.

In order to become proficient in natural deduction, you'll need to truly integrate the rules of inference so that you can easily recognize them "in the wild". This will require that you learn how to chunk. The idea is simple enough. Take a look at the diagram on the right. The idea is that concepts are made of facts. For example, Concept 1 (C1) is composed of Fact 1 (F1), Fact 2 (F2), and Fact 3 (F3). Once you've chunked F1, F2, and F3 into C1, you've substantially lessened the load on your working memory. You can perform this feat at a higher level by chunking several concepts into a metaconcept. In the diagram, Metaconcept 1 is composed of Concept 1 (C1), Concept 2 (C2), and Concept 3 (C3).

In the days to come you must memorize the rules of inference. It is absolutely essential, since no student is successful in natural deduction without having these rules well-integrated into their working memory. To actually enjoy natural deduction, you'll need to be able to see how a combination of rules can help you complete a proof. In other words, these combinations will be like metaconcepts. I liken this to a combination of moves on the chessboard that you know are typically successful. You can also think about it as gears working together. One gear alone doesn't make a clock tick. But there is a beauty to the gears working together in concert to perform a function. But only the clockmaker knows the necessary pattern. It's time to start looking for patterns.

 

 

 

Rules of Inference

 

 


 

Storytime!


You might have noticed that one of the rules of inference discussed in the video was the Stoic argument form modus ponens. This is not all that the Stoics have handed down to us. In fact, the Stoics are mostly known for their ethical views.

Bust of Marcus Aurelius
Bust of
Marcus Aurelius.

In his inspiring 2019 book How to Think Like A Roman Emperor, Robertson reminds us that, although Academic Philosophy today is stuffy and esoteric, Philosophy in the ancient world was much more a way of life. It appears to be the case that you could identify philosophers by the way they looked. The Stoics were no exception. Even the most powerful Stoic to have ever lived, the Roman emperor Marcus Aurelius, only reluctantly wore his royal garb, and only after being much pressured by his advisors.

Although most Stoic writings are lost (see Footnote 1), we can discern their basic view: you just live in accordance with nature. This was synonymous with living wisely and virtuously. The Greek word arete, the ultimate aim of a Stoic's life, is actually best translated not as “virtue” but as “excellence of character.” Something excels when it performs its function well. Stoics believed that humans excel when they think clearly and reason well about their lives. The Stoic cardinal virtues are wisdom, justice, courage, and moderation, a set of virtues that was co-opted from the Platonic school. The Stoics stressed that arete is the only intrinsic good. External factors, like fancy clothes, wealth, and social status, are only advantages, not goods. They have no moral status. The wise mind uses them to achieve arete.

Perhaps the most important contribution of the Stoics to Western thought, logic notwithstanding, was the concept of logos. The Stoics theorized that there was an all-pervasive cosmic force that organized everything rationally. This is the logos. This notion was the reason that the Stoics were the first philosophers in history to argue for what today we would call human rights. The reasoning ran as follows: the logos is itself rational, and humans (at least sometimes) exhibit rationality; this must mean, the Stoics concluded, that each of us contains a spark of this divine logos. And so, we are all worthy of dignity and respect.2

 


 

 

FYI

Homework!

  • Memorize this.
  • Read this.
    • Note: Read from p. 146-152.

 

Footnotes

1. As previously noted, Stoic writings, along with most of the literature of the classical world, are lost, a process that was initiated when Christians took control of the Roman empire in the 4th century CE (Nixey 2018). In fact, only about 10% of classical writings are still in existence. If one narrows the scope only to writings in Latin, only about 1% remains (ibid., 176-177; see also Edward Gibbon's History Of The Decline And Fall Of The Roman Empire, chapters 15 and 16). And so, Stoic logic was actually reconstructed (or "rediscovered") in the 20th century, as modern logicians were formalizing their field (O'Toole and Jennings 2004). A key figure in this reconstruction was Benson Mates, whose 1972 Elementary Logic I cited when describing the history of logic back in Unit I.

2. So important was the notion of logos that it was even incorporated into Christianity. It appears early on in the Gospel of John, but it is developed further by the so-called Doctors of the Church. The reason for this is interesting. Apparently, the register and style that the various books of the Bible were written in tended to grate on educated ears. In short, it was “embarrassing” to many Christian intellectuals that texts allegedly inspired by God were written in a less than glorious style (Nixey 2018: 160-61). So full of grammatical mistakes and inelegant prose (at least compared to pagan writers) was the Bible that St. Augustine even felt the need to defend it (ibid.; see also Brown 1967: 458). Given the obvious superiority of pagan literature, argues Nixey, Christians began to Christianize and absorb it, at least in part out of self-interest: to raise the intellectual caliber of the Church. But they absorbed only the parts that could be fitted into existing Christian dogma. And so, some Greek concepts, like that of logos, found their way into Christian philosophy. Needless to say, writings that did not serve Christianity any useful function were either neglected or deliberately destroyed and eventually lost to history.

 

 

Rules of Inference
(Pt. II)

 

 

It is possible to invent a single machine which can be used to compute any computable sequence.

~Alan Turing

Important Concepts

 

Subderivational Rules

 

Clarifications

Using subderivational rules might take some getting used to. Students typically are unsure as to what the P's, Q's, and R's are supposed to represent. In addition to the assigned reading in the FYI section (in particular pages 155-163), I hope that this section will clarify some misconceptions.

Let's begin with an important label. The main scope is the far left bar. That is where your derivations begin and end. Whenever you use a subderivation, you should indent a bit. If you were to use a subderivational rule within a subderivational rule, you should indent again. We will eventually see nested subderivations. The main point here is that subderivations must be ended and the derivation must always conclude on the main scope.

While you are building your proof, you are allowed to cite any line above the line you are working on as long as it is on the main scope. If you are working in a subderivation, you are allowed to cite any line above your current line within the subderivation (including the assumption) as well as any line above the line you are working on from the main scope.

 

Image of schema for conditional introduction (⊃I)

 

Conditional and biconditional introduction

Let's take a look at conditional introduction (⊃I), which is pictured above. The general idea here is that if you want to couple two sentences of TL with a horseshoe, then all you need to show is that you can move from the antecedent (P) to the consequent (Q) within a subderivation. You place what you want the antecedent to be in the assumption of the subderivation and what you want the consequent to be at the bottom of the subderivation. Then you prove, through the use of the rules of inference, that you can in fact work your way from the assumption of the subderivation (the antecedent) to the consequent. Once you've done that, you can end the subderivation and write in your conditional (P ⊃ Q) on the main scope.

Take a look now at the proof below. Study in particular lines 2-4. This is a conditional introduction in action. During the proof we decided that we wanted to derive "A ⊃ B". The reasoning for this will become clear in the Getting Started (Pt. I) video. Once it has been decided that we want "A ⊃ B", then we have to reason about how to acquire it. Since it is a conditional, it is reasonable to assume that we can derive it using our conditional introduction rule. And so a subderivation is drawn out. Although what you see is the completed subderivation, the method of arriving at that conditional is simply to indent, draw scope lines for the subderivation, place the antecedent of the conditional you want to derive (which in this case is "A") at the top of the subderivation (as an assumption), and place the consequent of the conditional you want to derive (which in this case is "B") at the bottom of the subderivation. Once this setup is complete, your task is to show that you can indeed justify placing the consequent where you did. You will see how this is done in the video. What is important to realize here, however, is that once you've documented your justification of line 4, you can end the subderivation, return to the main scope, and write in your newly derived conditional there. That's how conditional introduction works.

 

Sample problem showcasing conditional introduction

 

Biconditional introduction is actually very similar to conditional introduction. Essentially, it's just two conditional introductions. This means that two subderivations are required. In the first subderivation, you have to show that, assuming the left part of the biconditional, you can derive the right part; and in the second subderivation, assuming the right part of the biconditional, you have to derive the left part. If you can do this, you can end both subderivations, return to the main scope, and write in your biconditional. You'll see an example of this in the video.

Negation introduction and elimination

Negation introduction and negation elimination work much in the same way. Let's begin with negation introduction. If you've decided that you want the negation of some sentence of TL, basically a sentence of TL plus a tilde to its left, then you should use negation introduction. What you do is: indent, draw scope lines for the subderivation, place the sentence of TL that you want minus the tilde at the top (as the assumption), and the sentence of TL that you want (now including the tilde) after the subderivation on the main scope (see image below). Your task then is to show that you can derive a contradiction "Q" and "~Q" within the subderivation somehow. This contradiction can be composed of any sentence of TL and its negation. For example, it could be "A" and "~A", or "B ∨ (C & ~D)" and "~(B ∨ (C & ~D))", etc. It is once you've established this contradiction that you can end the subderivation and return to the main scope with your newly acquired negated sentence, which should always be the negation of the assumption of that subderivation.

 

Image of schema for negation introduction (~I)

 

Negation elimination works exactly the same except that your initial assumption should be a negation and the sentence you derive after the subderivation should be the "un-negated" assumption, basically the assumption of the subderivation minus the tilde. In other words, you've eliminated the tilde.

Disjunction elimination

Lastly, there's disjunction elimination. This is probably the most cumbersome of the subderivational rules. It's so cumbersome, in fact, that I usually tell students to only use it as a last resort. Having said that, it is sometimes necessary to use it. It requires the following ingredients: 1. a disjunction (P ∨ Q); 2. a subderivation that goes from the left disjunct (P) to some desired sentence of TL (R); and 3. another subderivation that goes from the right disjunct (Q) to the same desired sentence (R). If this can be done, you can conclude with R on the main scope (or whatever scope the disjunction you began with is on).

The intuition behind disjunction elimination is the "damned if you do, damned if you don't" situations we sometimes get ourselves into. Here's an example.

  1. Either I'm going to my in-laws this weekend or I have to clean the attic.
  2. If I go to my in-laws, I'll be unhappy.
  3. If I clean the attic, I'll be unhappy.
  4. Therefore, I'm going to be unhappy.

Premise 1 is the disjunction. Premise 2 is the first subderivation going from the left disjunct to the sentence to be derived, R. Premise 3 is the second subderivation going from the right disjunct to R. The conclusion is R.
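Since we already have truth-table analysis, we can verify that this pattern really is valid before trusting the rule. Here is a brute-force check in Python (the letters I for "in-laws", C for "clean the attic", and U for "unhappy" are my own choices, and the Boolean-function encoding is mine as well):

```python
from itertools import product

premises = [
    lambda i, c, u: i or c,        # 1. I v C
    lambda i, c, u: (not i) or u,  # 2. I > U
    lambda i, c, u: (not c) or u,  # 3. C > U
]
conclusion = lambda i, c, u: u     # 4. U

# Valid just in case no TVA makes all premises true and the conclusion false.
print(all(conclusion(*v)
          for v in product([True, False], repeat=3)
          if all(p(*v) for p in premises)))  # True
```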

 

Image of schema for disjunction elimination (∨E)

 

Getting Started (Pt. I)

 

 

 

Storytime!

Although I've been making connections between computer science and logic since day one, in this section we will review some of the most explicit connections between the fields of logic, mathematics, and computer science. The main protagonist in this story is Alan Turing, but we'll have to return to the turn of the 20th century for some context.

In 1900, German mathematician David Hilbert presented a list of 23 unsolved problems in Mathematics at a conference of the International Congress of Mathematicians. Given how influential Hilbert was, mathematicians began their attempts at solving these problems. The problems ranged from relatively simple ones that mathematicians knew the answer to but that hadn't been formally proved, all the way to vague and/or extremely challenging ones. Of note is Hilbert's second problem: the continuing puzzle over whether it could ever be proved that Mathematics as a whole is a logically consistent system. Hilbert believed the answer was yes, and that it could be proved through the building of a logical system, also known as a formal system. More specifically, Hilbert sought to give a finitistic proof of the consistency of the axioms of arithmetic.1 This approach came to be known as formalism.

"Mathematical science is in my opinion an indivisible whole, an organism whose vitality is conditioned upon the connection of its parts. For with all the variety of mathematical knowledge, we are still clearly conscious of the similarity of the logical devices, the relationship of the ideas in mathematics as a whole and the numerous analogies in its different departments. We also notice that, the farther a mathematical theory is developed, the more harmoniously and uniformly does its construction proceed, and unsuspected relations are disclosed between hitherto separate branches of the science. So it happens that, with the extension of mathematics, its organic character is not lost but only manifests itself the more clearly" (from David Hilbert's address to the International Congress of Mathematicians).

As Hilbert was developing his formal systems to try to solve his second problem, he (along with fellow German mathematician Wilhelm Ackermann) proposed a new problem: the Entscheidungsproblem. This problem is simple enough to understand. It asks for an algorithm (i.e., a recipe) that takes as input a statement of a first-order logic (like the kind developed in this course) and answers "Yes" or "No" according to whether the statement is universally valid or not. In other words, the problem asks if there's a program that can tell you whether some argument (written in a formal logical language) is valid or not. Put another (more playful) way, it's asking for a program that can do your logic homework no matter what logic problem I assign you. This problem was posed in 1928. Alan Turing solved it in 1936.

Alan Turing
Alan Turing.

Most important for our purposes (as well as for the history of computation) is not the answer to the problem, i.e., whether an algorithm of this sort is possible or not. Rather, what's important is how Turing solved this problem conceptually. Turing solved this problem with what we now call a Turing machine—a simple, abstract computational device intended to help investigate the extent and limitations of what can be computed. Put more simply, Turing developed a concept such that, for any problem that is computable, there exists a Turing machine. If it can be computed, then Turing has an imaginary mechanism that can do the job. Today, Turing machines are considered to be one of the foundational models of computability and (theoretical) computer science (see De Mol 2018; see also this helpful video for more information).
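To make the idea concrete, here is a toy Turing machine simulator in Python. This is only an illustrative sketch (the rule format and the bit-flipping machine below are my own choices, not Turing's original formulation): a finite table of rules reads and writes symbols on an unbounded tape, one cell at a time.

```python
def run(tape, rules, state="start", blank="_"):
    tape, head = list(tape), 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        # Each rule maps (state, symbol) to (new state, symbol to write, move).
        state, write, move = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return "".join(tape)

# A machine that flips every bit and halts at the first blank.
flipper = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run("0110", flipper))  # 1001_
```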

A representation of a Turing machine
A representation
of a Turing machine.

What was the answer to Hilbert's second problem? Turing proved there cannot exist any algorithm that can solve the Entscheidungsproblem; hence, mathematics will always contain undecidable (as opposed to unknown) propositions.2 This is not to say that Hilbert's challenge was for nothing. It was the great mathematicians Hilbert, Frege, and Russell, and in particular Russell’s theory of types, who attempted to show that mathematics is consistent, that inspired Turing to study mathematics more seriously (see Hodges 2012, chapter 2). And if you value the digital age at all, you will value the work that led up to our present state.

Today, the conceptual descendants of Turing machines are alive and well in all sorts of disciplines, from computational linguistics to artificial intelligence. Shockingly relevant to all of us is that Turing machines can be built to perform basic linguistic tasks mechanically. In fact, the predictive text and spellcheck featured in your smartphones use this basic conceptual framework. Pictured below you can see transducers, also known as automata, used in computational linguistics to, for example, check whether the words "cat" and "dog" are spelled correctly and are in either the singular or plural form. The second transducer explores the relationship between stem, adjective, and nominal forms in Italian.3 And if you study any computational discipline, or one that incorporates computational methods, you too will use Turing machines.
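Here is a toy version of such a recognizer in Python. It is a sketch of the general idea only, not the instructor's transducers pictured below (the state numbering is my own). It accepts exactly "cat", "cats", "dog", and "dogs".

```python
def accepts(word):
    # Transition table: (state, letter) -> next state.
    delta = {
        (0, "c"): 1, (1, "a"): 2, (2, "t"): 3,  # c-a-t
        (0, "d"): 4, (4, "o"): 5, (5, "g"): 3,  # d-o-g
        (3, "s"): 6,                            # optional plural "s"
    }
    state = 0
    for letter in word:
        if (state, letter) not in delta:
            return False  # no transition: the word is rejected
        state = delta[(state, letter)]
    return state in {3, 6}  # 3 = singular accepted, 6 = plural accepted

print(accepts("dogs"), accepts("catz"))  # True False
```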

 

An automaton that can spell dog(s) and cat(s)

A much more complicated automaton

 

 

Timeline:
Turing from 1936 to 1950

 

Intelligent Machines

As the computation revolution was ongoing, Turing began to conceive of computational devices as potentially intelligent. He could not find any defensible reason as to why a machine could not exhibit intelligence. Turing (1950) summarizes his views on the matter. Important for our purposes is an approach to computation that Turing was already pondering.

"Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education one would obtain the adult brain" (Turing 1950: 456).

When considering the objection that machines can only do what they are programmed to do, Turing had a very interesting insight. Although Turing was brilliant, he knew that the results of his work owed something to what he had learned growing up. In particular, we've mentioned that the work of Hilbert, Frege, and Russell was very influential. Maybe that's the case with all of us. “Who can be certain that ‘original work’ that [anyone] has done was not simply the growth of the seed planted in him by teaching, or the effect of following well-known general principles" (Turing 1950: 450; interpolation is mine; emphasis added). As the quotation above shows, Turing was thinking about machine learning...4

 

 

Tablet Time!

 

FYI

Homework!

  • Reading: The Logic Book (6e), Chapter 5
    • Note: Read from p. 155-163 and 167-173.

    • Also do problems in sections 5.1.1E (p. 152-155) and 5.1.2E (p. 163-165).

    • Lastly, memorize all rules of inference, both subderivational and non-subderivational.

      • Note: Your success in this class depends on this.

 

Footnotes

1. It was Kurt Gödel who dealt the deathblow to this goal of Hilbert. This was done through Gödel's incompleteness theorem in 1931. The interested student can take a look at the Stanford Encyclopedia of Philosophy's Entry on Gödel’s two incompleteness theorems or this helpful video.

2. Part of the inspiration for Turing's solution came from Gödel's incompleteness theorem (see Footnote 1).

3. Both transducers pictured were made by the instructor, R.C.M. García.

4. According to many AI practitioners (e.g., Lee 2018, Sejnowski 2018), we are in the beginning of the deep learning revolution. Deep learning, a form of machine learning, may prove to be very disruptive economically. We'll have to wait and see (or start preparing).

 

The RCG-phi logo

 

Rules of Inference
(Pt. III)

 

 

"I'm just one hundred and one, five months and a day."

"I can't believe that!" said Alice.

"Can't you?" the Queen said in a pitying tone. "Try again: draw a long breath, and shut your eyes."

Alice laughed. "There's no use trying," she said: "one can't believe impossible things."

"I daresay you haven't had much practice," said the Queen. "When I was your age, I always did it for half-an-hour a day. Why, sometimes I've believed as many as six impossible things before breakfast."

~Lewis Carroll
Through the Looking-Glass, and What Alice Found There1

Absurdity and Truth

Food for thought

Question: Have you ever wondered why contradictions are so frowned upon? Of course, contradictions are bad. Everyone knows this. But can you prove that they are absurd? Now you can.

If you begin with a contradiction as an assumption, you can literally deduce anything. For example, in the argument below, let's take "A" to represent "RCG plays piano". Let's also blatantly contradict ourselves and open with the assumption "A & ~A". Using the simple tilde elimination rule, you can easily establish that "B" naturally follows from this assumption (see derivation below). But "B" could be anything!

Proof with contradiction

This shows that if someone accepts even one contradiction as true, then any sentence can validly follow. In other words, if even one contradiction is true, then every sentence is true. This collapses the distinction between truth and falsity. But this is completely absurd. The functionality of our language, both TL and our natural languages, requires the basic distinction between truth and falsity. The only solution is to stipulate that contradictions cannot be true. This is called the Law of Noncontradiction (for analysis, see Herrick 2013: 388-9).
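You can watch the collapse happen semantically, too. In the Python sketch below (the Boolean-function encoding is, again, my own convenience), the validity test is passed vacuously: there is no row on which "A & ~A" is true, so any conclusion whatsoever "follows".

```python
from itertools import product

rows = list(product([True, False], repeat=2))
premise = lambda a, b: a and (not a)  # "A & ~A" is false on every row

# Both "B" and "~B" count as valid conclusions -- explosion.
for conclusion in (lambda a, b: b, lambda a, b: not b):
    print(all(conclusion(a, b) for a, b in rows if premise(a, b)))
# True
# True
```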

 

 

 

Important Concepts

 

Clarifications on the different flavors of natural deduction...

As you learned in the Important Concepts above, we will be building various types of proofs. Fundamental to understanding these is the notion of derivability, the general idea being that a sentence P of TL is derivable from a set of sentences Γ if you can derive P from the members of Γ by the use of a finite number of inference rules.

One type of derivation we can perform is to demonstrate that an argument is valid. The setup for this is to set the premise(s) of the argument as the initial assumption(s) and then derive the conclusion using valid inference rules. If this can be done, then the argument is valid. In this unit, I will only give you valid arguments. Your task will be to show the valid inference rules that take us from the premises to the conclusion.

You will also be asked to prove theorems. These are derivations in which there are no initial assumptions. Put more formally, a sentence P of TL is a theorem in SD if, and only if, P is derivable in SD from the empty set. In practice, this means that you must draw your main scope (as always), place the theorem to be proved at the bottom of the scope line, and then begin immediately with some subderivational rule. A good rule of thumb is to look at the main operator of the theorem to be proved and use a rule relating to that operator. For example, if the theorem is a conditional, you're most likely going to use conditional introduction. If all else fails, use one of the tilde rules. When you are asked to do problems of this sort, rest assured that the sentences in question will in fact be theorems. Your task will be to prove that they are.

One last derivation that we will be performing is the proof of inconsistency. The general idea is that you will show some set of sentences Γ to be inconsistent by deriving a contradiction on the main scope. When you are asked to do problems of this sort, you will be assured that the set is in fact inconsistent. Your task will be to derive a contradiction.

Note on symbols

I should add one more thing. Previously we had been using the double turnstile ("⊨") to mean logical consistency. Now that we've developed our formal language, we can be more precise. We will use the "⊨" symbol to mean semantic entailment. For example:

{"Rodrigo weighs 200lbs."} ⊨ "Rodrigo weighs more than 100lbs."

In this example, the meaning of the sentence in the set ("Rodrigo weighs 200lbs.") entails the meaning of "Rodrigo weighs more than 100lbs." In other words, accepting the truth of the meaning (or semantic content) of the first guarantees that you will accept the truth of the semantic content in the second.

This same notion should carry over to our formal language TL. We will use the single turnstile ("⊢") to mean syntactic entailment. These two concepts, we hope, should be isomorphic. That is to say, relationships of semantic entailment in natural language should carry over as syntactical entailment in formal languages. Here's an example of a set syntactically entailing another sentence in TL:

{~(A & B), A} ⊢ ~B

Put in the simplest terms, we'll be using "⊨" when using natural language and "⊢" when using TL.
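And since "⊨" is a semantic notion, we can check the example above by brute force. A minimal Python sketch (encoding mine): on every TVA where both members of {~(A & B), A} are true, "~B" is true as well.

```python
from itertools import product

# {~(A & B), A} semantically entails ~B: no counterexample row exists.
print(all(not b
          for a, b in product([True, False], repeat=2)
          if (not (a and b)) and a))  # True
```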

 

 

 

Slow burn...

We have now entered the most frustrating (but rewarding!) part of the course. Believe it or not, the method of proof that we are using is much more user-friendly than previous systems. That's actually why it's called "natural deduction." But these "natural deductions", our derivations, take some getting used to either way. You must learn to "see" the patterns of reasoning, follow the inference rules rigorously, and double check your work to ensure you have not made any illegal moves or syntax errors. To be honest, it will be tough in the beginning. Here are some tips.

In chapter 2 of A Mind for Numbers, Oakley gives the following tips for working on tough math problems:

  • When doing focused thinking, only do so in small intervals (at first). Set aside all distractions for, say, 25 minutes. Use a timer.
  • Focus not necessarily on solving the problem, but do work diligently to make progress.
  • Then when your timer goes off, reward yourself.
  • Try to complete 3 of these 25-minute intervals on any given day.

In chapter 3, Oakley emphasizes that breaks are essential. Surf the web, take a walk, work out, switch to a different subject. Your mind will naturally keep working on the problem in the background. But(!) have paper and pen handy. Solutions may come at any time. And don’t stay away from a problem for longer than a day. You might lose your progress, just like in physical fitness.

Studying with phone
Distracted "studying" is
basically a waste of time.

I have to reiterate that these techniques really work. Students of mathematics, computer science, and logic would all benefit from truly setting aside 25-minute intervals (with no phones, no laptops, etc.) to focus on some problems. Getting rid of all distractions gives your mind the time it needs to solve the problem. It may seem like you're spending a lot of time on a problem, but if you're letting yourself be distracted, more and more of your mental energy is subtracted from the task and you may never reach peak focus.

In chapter 7 of How We Learn, Dehaene (2020) reminds us that the ability to focus is one of the hallmarks of animal intelligence, human or otherwise. Interestingly, our ability to focus has some drawbacks. In particular, when you focus on something, you lose focus on everything else. Inattentional blindness is an example of this. Check out the following video:

 

 

Dehaene also discusses the myth of multitasking. In multiple experiments, subjects demonstrate a significant delay in processing information when they are “multi-tasking.” However, because subjects cannot assess how rapidly they are performing a task until they’ve switched their attention to that task, subjects are unaware of the delay in processing. And so, subjects often believe there is no delay in processing, even though objectively there is. Dehaene adds that perhaps one can multitask if they undergo intensive training in one of the tasks, but this is very rare. In short, you can't multitask while learning logic.

Frustrated student
This is what happens if you don't
learn your inference rules.

I might add that if you haven't memorized the rules of inference, you should focus on that first. Chunking will be necessary for natural deduction to actually be natural. You need to know those rules well. Practice writing them out without looking at them. Do this multiple times a day. Re-write the ones you got wrong until you can produce them correctly. Eventually, you have to be able to recognize these patterns intuitively. If you don't, you will only become frustrated as you try to continue with the lessons. More importantly, you cannot be successful in this class without having mastered the inference rules, plain and simple.

But I don't want to close this section on a negative note. So let me mention this. Chunking is a powerful technique. Hofstadter (1999, ch. 12) reminds us that chunking is precisely what chess grandmasters do, which explains why they are able to memorize hundreds of games. If you teach yourself this technique, you can apply it to the rest of your studies. This might be the difference between a successful journey through academia and a not so successful one. Remember: train smart, not hard.

Getting started (Pt. II)

 

 

FYI

Homework— The Logic Book (6e), Chapter 5

  • Re-do problems in 5.1.2E (p. 166).

  • Re-read p. 167-173.

  • Do problems in 5.1.3E (p. 173-174).

  • Read p. 174-209.

  • Do problems in 5.3E #1, a-i (p. 209).

  • By the way, here is the student solutions manual.

 

Footnotes

1. Lewis Carroll, born Charles Lutwidge Dodgson, was not only a talented writer in the genre of so-called literary nonsense; he was also a logician and mathematician.

 

 

Replacement Rules
(Pt. I)

 

 

Important Concepts

 

SD+

By now you should be comfortable with the general notion of natural deduction, even if you are still working out some kinks. Today we introduce another logical system: SD+. As you learned in the Important Concepts above, SD+ has more inference rules and some handy replacement rules. These new tools will grant you the capacity to make your derivations radically shorter in some cases. A complete list of the rules that are found in SD+ but not in SD can be found here. Let's make sure you understand some of the most important rules.

DeMorgan's Rule (DM)

I find DeMorgan's to be one of the most handy tools of SD+. The general idea is that if you have a negated conjunction, you can turn it into a disjunction where both disjuncts are negated. It's important to not shy away from double negations when using this rule. For example, "~(A & ~B)" should be replaced by "~A ∨ ~~B".

DM also applies to negated disjunctions, which you can replace with a conjunction where both conjuncts are negated. For example, "~(A ∨ ~B)" can be replaced by "~A & ~~B".
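Because replacement rules trade a sentence for a truth-functionally equivalent one, we can verify DM mechanically. A quick Python sketch (the Boolean-function encoding is mine) checks both examples above on every TVA:

```python
from itertools import product

pairs = [
    (lambda a, b: not (a and (not b)),         # ~(A & ~B)
     lambda a, b: (not a) or (not (not b))),   # ~A v ~~B
    (lambda a, b: not (a or (not b)),          # ~(A v ~B)
     lambda a, b: (not a) and (not (not b))),  # ~A & ~~B
]
for lhs, rhs in pairs:
    print(all(lhs(a, b) == rhs(a, b)
              for a, b in product([True, False], repeat=2)))
# True
# True
```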

Double Negation (DN) and Disjunctive Syllogism (DS)

Let's continue with the sentence from the preceding example: "~A ∨ ~~B". Let's suppose that you are trying to derive "B" and you have "A" as one of your assumptions. You want to do a disjunctive syllogism using "~A ∨ ~~B" and "A". The ingredients are right, but they need a little preparation. Recall that DS requires a disjunction and a negation of one of the disjuncts. In "~A ∨ ~~B", "~A" is the left disjunct and "~~B" is the right disjunct. In order to truly have a negation of the left disjunct, you need "~~A". Luckily, with DN, you can do just that. First, using DN, replace "A" with "~~A". Now you can use "~~A" and "~A ∨ ~~B" to infer "~~B", using DS. Lastly, recall that you are looking to derive "B". Simply apply DN to "~~B" and obtain "B". See the derivation below.

 

Sample of proof in SD+
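As a semantic sanity check on the derivation above, we can confirm by brute force (in the same hedged Python encoding used earlier) that "B" really does follow from "~A ∨ ~~B" together with "A":

```python
from itertools import product

# On every TVA where "~A v ~~B" and "A" are both true, "B" is true as well.
print(all(b
          for a, b in product([True, False], repeat=2)
          if ((not a) or (not (not b))) and a))  # True
```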

 

Hypothetical Syllogism (HS) and Implication (Imp)

HS is a valuable tool for working with conditionals. The idea is simple. If you have two conditionals and there is a "middle man", where some sentence of TL is both the consequent of one conditional and the antecedent of another, you can get rid of the middle man and form just one conditional. For example, if you have "A ⊃ B" and "B ⊃ C", then you can infer "A ⊃ C". Similarly, if you have "(M ∨ (N ≡ O)) ⊃ ~(R & S)" and "~(R & S) ⊃ ((T & U) & V)", then you can infer "(M ∨ (N ≡ O)) ⊃ ((T & U) & V)".

Equally useful for working with conditionals is Imp. All you need is a disjunction where the left disjunct is negated, and you can eliminate the tilde and replace the wedge with a horseshoe. For example, "~A ∨ B" can be replaced with "A ⊃ B". Similarly, "~(R & S) ∨ ((T & U) & V)" can be replaced with "(R & S) ⊃ ((T & U) & V)".

These, in combination with association, commutation, and distribution, should go a long way towards making your proofs shorter and more elegant.

 

Remember!!!

Replacement rules work both ways!

 

Tablet Time!

 

Final comments

As mentioned in Tablet Time!, you can use the replacement rules on subsentential components, i.e., smaller parts of a larger sentence. For example, take a look at the following sentence:

(A & B) ∨ (C ⊃ D)

As we know by taking a look at our reference sheet for the rules of replacement, the rule known as commutation allows us to shift about the elements of a conjunction and/or a disjunction. The sentence above contains a conjunction as a subsentential component, namely the left disjunct. As such, we can apply commutation on it like this:

(B & A) ∨ (C ⊃ D)

With that step completed, we can also apply commutation to the whole thing:

(C ⊃ D) ∨ (B & A)

Now look at the following sentence:

~~~~~[(~B & ~C) ≡ (D ∨ E)]

We can apply a replacement rule just to the left part of the bracketed biconditional:

~~~~~[~(B ∨ C) ≡ (D ∨ E)]

Can you tell which rule it was? (Hint: It was DM.)

 

 

FYI

Homework— The Logic Book (6e), Chapter 5

 

 

Replacement Rules
(Pt. II)

 

 

Getting Started (Pt. III)

 

 

FYI

Homework— The Logic Book (6e), Chapter 5

 

 

 UNIT IV

tunnel.jpeg

Predicate Logic

 

 

But every error is due to extraneous factors (such as emotion and education); reason itself does not err.

~Kurt Gödel

Goals for Unit IV

At the beginning of Unit III, I posed some lingering problems that still needed to be worked out. We actually dealt with one pretty swiftly, demonstrating that Stoic valid argument forms really were always valid. I reproduce that list here:

 

  1. Aristotle or the Stoics? Once logicians had a formal language for logic, first-order logical systems were proven to be truth-functionally complete and truth-functionally consistent. But the question is: Under whose terms? Is the right approach to logic categorical or truth-functional? Both logics are "contained" within modern first-order logic, but who acquired whom?
  2. Can we symbolize Aristotle and Boole? At the end of Unit I we were left with the puzzle of logical relativity: some arguments were valid depending on whether we took the hypothetical viewpoint or the Aristotelian existential viewpoint. But we had no way of expressing this distinction in our symbols. Is there a way to do this?
  3. Is Logicism true? Recall that Frege's impetus for developing his formal language was to demonstrate that mathematics can be derived out of logic. Is this possible?
  4. How do we know Stoic valid argument forms are actually valid? We used the Stoic pattern recognition method in the last unit, but can we prove that they are actually always valid?

 

With regard to #1, our goal will be to unify Aristotle's categorical logic with Stoic truth-functional logic. Only then will we realize whose logic is more fundamental. With regard to #2, we will develop a mechanism through which we can distinguish between the Aristotelian existential viewpoint and the Boolean hypothetical viewpoint. Lastly, as we complete our survey of the history of first-order logic, we will discover whether or not logicism is true.

All this, however, begins with PL (short for predicate language), the logical language featured in this unit.1

 

Important Concepts

 

 

 

 

Symbolizing atomic sentences

Let's begin mastering PL. As you learned in the Important Concepts above, PL breaks down natural language into just two components: singular terms and general terms. And yet, all propositional content can be expressed with just these two ingredients. In other words, propositions are simply a combination of one or more singular terms and one or more general terms. Let's begin with an atomic sentence, i.e., a sentence with a singular term as a subject and one (or more) general term(s) in the predicate.

Take as an example, "The tallest building in Chicago has more than 90 floors.” Recall that singular terms can be either proper names or definite descriptions. In this example, the singular term "the tallest building in Chicago" is a definite description, since there is only one thing in the world that "the tallest building in Chicago" could refer to. The second part of the sentence "____ has more than 90 floors" is a general term. This is because this predicate could be applied to many buildings in the world, such as the Empire State Building and Shanghai Tower (both of which have over a hundred floors). And so this atomic sentence is composed of one singular term ("the tallest building in Chicago") and one general term ("____ has more than 90 floors").

How would we symbolize this? Here are the steps:

  1. Replace the singular term(s), i.e., the subject part, with an individual constant. (Remember that you can use any lowercase letter a-v, with or without a subscript; and don't forget that x, y, and z are reserved for variables.)
  2. Replace the general term(s), i.e., the predicate part, with a predicate letter. You can use any capital letter A-Z, with or without subscript.

 

Rule: When combining an individual constant (or variable(s)) with a predicate letter, the predicate letter is placed to the left of the individual constant(s) (or variable(s)).

 

Here are some examples:

“Steve has a backpack.”

Using "s" to signify the singular term "Steve" (a proper name) and "B" to signify the predicate term "____ has a backpack", we can symbolize this as:
Bs

“Mariela is under 6 foot.”

Using "m" to signify the singular term "Mariela" (a proper name) and "U" to signify the predicate term "____ is under 6 foot", we can symbolize this as:
Um

“Pat is a pro golfer.”

Using "p" to signify the singular term "Pat" (a proper name) and "G" to signify the predicate term "____ is a pro golfer", we can symbolize this as:
Gp

Symbolizing truth-functional compounds

Scaling up, we can see that it is easy to join atomic sentences together to form truth-functional compounds. For example, say we wanted to translate the following sentence into PL: "Steve has a backpack, and Mariela is under 6 foot." Well, we've already translated each of the atomic sentences within this conjunction above. Now we just need to join them with the corresponding logical operator, which is an ampersand:

Bs & Um
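If it helps, here is a rough Python analogue of what an interpretation does (the domain and extensions below are invented for illustration): individual constants name objects, and predicate letters pick out sets of objects.

```python
# A toy interpretation: constants name objects, predicates are sets.
B = {"steve"}    # the things that have a backpack
U = {"mariela"}  # the things that are under 6 foot

s, m = "steve", "mariela"  # individual constants

print(s in B)             # "Bs" is true on this interpretation
print(s in B and m in U)  # so is "Bs & Um"
```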

Food for thought...

 

How would you symbolize the sentence “Yuri is Russian”?

"Ry"?

SyntaxError! Remember that ‘y’ is not a symbol that can stand for the singular term “Yuri”. Instead, we need an individual constant (i.e., a-v). We should symbolize this sentence instead as "Ru".

 

 

Translating to PL (Singular Terms)

The following practice problems are taken from Herrick (2013; p. 476). Also, be sure to watch the Tablet Time! below before starting.

  1. Grandpa Munster banks at the blood bank. 
  2. Wimpy likes hamburgers and Popeye likes spinach. 
  3. If Moe is happy, then Curly is happy and Larry is happy. 
  4. It is not the case that if Frege lectures on logic then Russell will attend. 
  5. The first man on the moon was an American astronaut. 

 

Tablet Time!

 

 

 

 

New symbols

The universal and existential quantifiers of PL are perhaps the most intimidating aspect of the language. However, by standardizing the way that we "read" these, we can go a long way towards mastering their use.

The universal quantifier: (∀x)

(∀x) should be understood as “For all x...” To take an example, the sentence “Everything is good” is symbolized as: (∀x)Gx. This could be read in two ways. Using the predicate letter "G" to stand in for the general term "____ is good", the first way to read this is “For all x, x is G.” Another way to read this is “Every x is such that x is G." The two readings are equivalent. The sentence literally says that the predicate “____ is good” is true of everything in the universe.

Here are two more examples:

  • “Everything is not good" is translated as (∀x)~Gx
    • This is read as "For all x, it is not the case that x is good."
  • “All dogs have fleas” is translated as (∀x)(Dx ⊃ Fx)
    • This is read as "For all x, if x is a dog, then x has fleas."

The existential quantifier: (∃x)

(∃x) should be read as “There exists at least one x such that...” To take an example, “Some things are green” is symbolized as: (∃x)Gx. This is read as “There exists at least one x such that x is G.” It literally states that there is at least one thing in the universe of which “____ is green” is true.

Remember: For logicians, "some" means "at least one"!

Here are two more examples:

  • “Some cars are noisy” is translated as (∃x)(Cx & Nx).
    • This is read as "There exists an x such that x is car and x is noisy."
  • “Some students do not do their homework” is translated as (∃x)(Sx & ~Hx).
    • This is read as "There exists an x such that x is a student and x does not do their homework."
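For readers who know a little programming, the quantifiers have a natural Python analogue: over a finite domain, "(∀x)" behaves like all() and "(∃x)" like any(). This is a hedged sketch; the toy domain and extensions below are my own invention.

```python
domain = ["rex", "fido", "beetle"]
D = {"rex", "fido"}  # ____ is a dog
F = {"rex", "fido"}  # ____ has fleas
C = {"beetle"}       # ____ is a car
N = {"beetle"}       # ____ is noisy

# "(∀x)(Dx ⊃ Fx)": for all x, if x is a dog, then x has fleas.
print(all((x not in D) or (x in F) for x in domain))  # True
# "(∃x)(Cx & Nx)": there exists at least one x such that x is a noisy car.
print(any((x in C) and (x in N) for x in domain))     # True
```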

 

Storytime!

Mergers and acquisitions

As logicians were developing first-order logic, and simultaneously re-discovering Stoic logic (see O'Toole and Jennings 2004), they asked a question regarding primacy: which is the more fundamental logic—Aristotle's categorical logic or Stoic truth-functional logic?

In addressing this question, we should first note that both types of logic are "contained" within PL. Beginning with Stoic logic, convince yourself that any wff of TL counts as a wff of PL, but not every wff of PL counts as a wff of TL. This is what we mean when we say that PL "contains" TL. But TL is simply the language that we learned in order to understand Stoic truth-functional logic and their main method of assessment for validity: pattern-recognition. This means that, isomorphically, Stoic logic is contained within PL. Next, note that any categorical proposition can also be expressed within PL. Enjoy the slideshow below:

 

 

So, categorical propositions and any wff of TL are expressible in PL. Hence, Aristotle’s categorical logic and Stoic truth-functional logic are both “contained” within PL. This means, in other words, that the two forms of logic have been unified.

Now that the general notion of containment is understood, the question is as follows: who acquired whom? There are two ways of understanding this. First, note that the expressions of categorical logic can be expressed in PL without much effort. However, the expressive power of PL cannot in any way be matched by the categorical approach. That is, a propositional approach to reasoning can express the categorical approach, but not vice versa. Aristotle's categorical approach, as we've noted before, has some expressive ambiguities. Just to give one example, recall the problem of logical relativity. Clearly, the truth-functional approach subsumes the categorical approach.

Here's one more way of thinking about it. What type of logic does PL look more like: Stoic pattern-recognition or Aristotelian categories? Clearly, PL is a lot like the language that we used when learning Stoic pattern-recognition (TL). It looks like breaking down reasoning to the level of complete declarative sentences (and focusing on truth-functional compounds using logical operators) is the more fundamental approach. Historians of logic William and Martha Kneale summarize it this way:

“The logic of propositions, which [the Stoics] studied, is more fundamental than the logic of general terms, which Aristotle studied... Aristotle’s syllogistic takes its place as a fragment of general logic in which theorems of primary logic are assumed without explicit formulation, while the dialectic of Chrysippus appears as the first version of primary logic” (Kneale and Kneale, 1984: 175-76; emphasis added).

Down goes Aristotle...

 

 

FYI

Homework— The Logic Book (6e), Chapter 7

  • Read section 7.1 (p. 262-267) and 7.2 (p.268-274).
  • Finish in-class practice problems (from the Do Stuff section)!

 

Footnotes

1. If you're wondering about the theme behind the images for this unit, I'll fill you in on the secret: it's about perspective.

 

 

Quantifiers

 

 

If the task of philosophy is to break the domination of words over the human mind [...], then my concept notation, being developed for these purposes, can be a useful instrument for philosophers [...] I believe the cause of logic has been advanced already by the invention of this concept notation.

~Gottlob Frege

Natural language reconsidered

One of the historical antecedents to the development of formal logical languages was the widely held view that natural language is ambiguous and lacks a logical structure. As we've noted before, there certainly are some ambiguities in natural language, as when the Japanese Premier Kantaro Suzuki's words were interpreted as more hostile than they truly were meant to be (see Lesson 2.1). To add to this, there is the major philosophical debate over the possibility of translation (e.g., see Quine 2013/1960). Consider how, in order to translate the modes of thought and concepts of an alien culture, you need to first interpret them. But the very process of interpretation is susceptible to a misinterpretation—distortion due to unconscious biases. Perhaps the whole process of translation is doomed.

If one takes seriously this skepticism about translation, then one might begin to harbor similar beliefs about simply interpreting the thoughts and beliefs of people within our own culture. How do you know when you've interpreted someone's statements correctly? You may perhaps be able to clarify with them; but how will you know that the clarification was interpreted correctly? Even if you wanted to use—for example—the tools of neuroscience, how would you know that some particular thought is translated as some particular brain state?

It is no surprise that the same thinkers who pondered these types of questions were also logicians who were deeply invested in the development of non-ambiguous logical languages. For example, W.V.O. Quine (who was cited above) was deeply preoccupied with both of these tasks. And so, driven by these sorts of beliefs, logicians toiled away at the development of rigorous logical languages with the hopes of giving an example of what language should be like.

“The development of the predicate calculus and its theory of quantification was one of the triumphs of the modern logical tradition in Europe... Many of the philosophers and logicians who took part in this development, as I mentioned before, took the view that natural languages were far too obscure, ambiguous, and ill-structured to be amenable to the kind of treatment that they were proposing in their treatment of logic. And some took the view that they were providing something like an instrument for exhibiting what the true logical form for language should be, if it were only well behaved. Thus, in a way, their attitude toward language was revisionist” (Bach 1989: 40).

But what if natural language already does have a logical structure—one that has been hitherto unnoticed? In fact, due in large part to the work of Richard Montague (1973), many linguists now think that natural language can be expressed in terms of formal languages, a view which has led to the development of the subfield known as formal semantics. This line of reasoning (perhaps) supports a nativist theory of language like that of Noam Chomsky. Although this is now getting far beyond the topic of this course (and into some really thorny issues), the main point here is that our logical language PL does indeed seem to have the capacity to help us to analyze our natural language more fully than we could've done without a formal language (see Tomasello 2014: 142).1

But first, we have to learn how to use it...

Important Concepts

 

Clarifications

One of the main points of the Important Concepts above is to give you the tools to distinguish between sentences of PL and mere formulas of PL. To be a sentence of PL, the formula cannot have any free variables. In other words, for any variable(s) in the formula there must be a quantifier which ranges over said variable(s). For example, “(∃x)(Nx & Cx)” is a sentence of PL; “(Nx & Cx)” is not a sentence of PL.

Another important point is to recognize which of the logical operators has the largest scope. Whereas TL only had the five logical connectives serving as logical operators, PL uses those same five logical connectives plus(!) the two quantifiers. The quantifiers range over the expressions in parentheses immediately to their right (much the way tildes do). You should be able to identify which logical operator, whether it be a logical connective or a quantifier, is the operator with the largest scope. For example, consider the following sentences:

  • (∀x)(Fx & Gx)
  • (∀x)(Fx) & (∀x)(Gx)
  • Um

Notice that in the first sentence, the universal quantifier applies to itself and to what's inside the parentheses to its right, which is the rest of the formula. This means that in this sentence, the quantifier has the largest scope; i.e., this sentence is a quantified sentence. In the second sentence, it is actually the ampersand that has the largest scope. This means that the second sentence is a truth-functionally compound sentence. The last sentence, where "U" is a predicate letter and "m" is an individual constant, is an atomic sentence, since it is a sentence of PL but there are no quantifiers or truth-functional connectives.2
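
If it helps to see scope in action, here is a minimal Python sketch. The assumption (mine, not anything official) is that we model the quantifiers with all() and any() over a small finite domain, and F and G are made-up predicates:

    # Hypothetical finite model: F is true of 1 and 3; G is true of 2 only.
    domain = [1, 2, 3]
    F = lambda x: x in (1, 3)
    G = lambda x: x == 2

    # (∀x)(Fx & Gx): the quantifier's scope is the whole conjunction.
    s1 = all(F(x) and G(x) for x in domain)

    # (∀x)(Fx) & (∀x)(Gx): the ampersand has the largest scope.
    s2 = all(F(x) for x in domain) and all(G(x) for x in domain)

    # With "&" these two happen to be logically equivalent, but swap in "∨"
    # and the difference in scope becomes a difference in truth value:
    wide = all(F(x) or G(x) for x in domain)                         # (∀x)(Fx ∨ Gx)
    narrow = all(F(x) for x in domain) or all(G(x) for x in domain)  # (∀x)(Fx) ∨ (∀x)(Gx)
    print(wide, narrow)  # True False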

 

 

Translating to PL (Quantifiers)

The following practice problems are mostly taken from Herrick (2013; p. 485). Also, be sure to watch the Tablet Time! below either before starting or if you get stuck.

  1. All human beings are intrinsically valuable creatures. 
  2. Some human beings are interested in poetry and some human beings are not.
  3. All things that are made of matter are tangible. 
  4. All visible things are material things. 
  5. All persons are immaterial souls animating material bodies. 
  6. All atoms are cloud-like entities. 
  7. Every person possesses dignity and equal moral worth. 
  8. Every human being deserves to be treated respectfully.
  9. Some human beings commit punishable offenses.

 

Tablet Time!

 

 

 

Practice

There's an old joke where a visitor to New York asks a local how to get to Juilliard. "Practice, practice, practice" is the response. Similarly, the only way to master PL is to look at many examples and translate as many sentences as you can.

 

 

 

FYI

Homework— The Logic Book (6e), Chapter 7

  • Do 7.2E #2 (p. 275).

  • Read section 7.3 (p. 276-294).

  • Do 7.3E #1 (p. 294).

  • Review FAQ.

 

Footnotes

1. For more on Chomsky's nativist theory of language, Pinker (1994) is an excellent introduction and defense.

2. If there are any lingering confusions, be sure to do the readings from the last lesson as well as from this one.

 

 

Translations

 

 

Important Concepts

 

As you learned in the Important Concepts above, today we are working with relational (or dyadic) predicates. Although this becomes intuitive with practice, initially the order of the individual constants and predicate letters might throw you off. The idea, however, is simple. In natural language, some predicates establish a relation between two (or more) singular terms (or variables), which means that, in PL, some predicate letters will have two (or more) individual constants (or variables) attached to them.

The following slideshow gives numerous examples that will begin to get you accustomed to using relational predicates.

 

 

Reflexive sentences

Be sure to be careful about reflexive sentences, i.e., sentences where the predicate establishes a relationship from an individual constant to that same individual constant. For example, consider the sentence “Narcissus loves himself”. If we let "Lxy" stand for “x loves y” and let "n" stand for “Narcissus”, the sentence would be symbolized as: Lnn. Take a moment to convince yourself that this is the case.
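
For the programmers among you, one (unofficial) way to picture this is to treat the predicate as a set of ordered pairs in a made-up model:

    # Toy model: "Lxy" as a relation, "n" for Narcissus.
    loves = {("n", "n")}           # Narcissus loves himself
    print(("n", "n") in loves)     # Lnn is true in this model: True
    print(("n", "m") in loves)     # Lnm, for some other individual m: False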

 

 

The following practice problems are taken from Herrick (2013; p. 511-12). See Tablet Time! section for help getting started.

  1. Pam is taller than Sue but Sue is older than Pam. 
  2. Archie Bunker does not like any liberal.
  3. Archie Bunker does not like every liberal.
  4. Wimpy, the hamburger man, respects himself. 
  5. Someone knows himself. 
  6. Everybody knows Bill Gates. 
  7. All elephants are larger than Nathan’s pet mouse. 
  8. Lorraine likes any horse. 
  9. Somebody knows Katie. 
  10. Matt is a friend of Elliot’s. 
  11. Sam dislikes somebody. 
  12. Sam dislikes everybody.

 

 

 

Mastering PL

Overlapping Quantifiers

In order to truly appreciate how expressive PL can be, we must take a look at instances of overlapping quantifiers. If one quantifier appears immediately to the right of another quantifier, then the scope of the left quantifier is that quantifier itself plus the scope of the right quantifier. For example, the sentence "(∀x)(∃y)Tyx" is a universally quantified sentence. This is because the universal quantifier has the largest scope. It applies to itself and to what is immediately to its right, namely the existential quantifier (which itself applies to the rest of the sentence).1

The sentence "(∃y)(∀x)Tyx" is an existentially quantified sentence. In this sentence, it is the existential quantifier that has the largest scope.
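
To convince yourself that the order of overlapping quantifiers genuinely matters, consider a small Python sketch. Everything here is a made-up model: a two-object domain and an arbitrary two-place predicate T:

    # Toy model: each object bears T to itself and nothing else.
    domain = ["a", "b"]
    T = lambda y, x: y == x

    # (∀x)(∃y)Tyx: for each x there is some y or other such that Tyx.
    s1 = all(any(T(y, x) for y in domain) for x in domain)

    # (∃y)(∀x)Tyx: a single y bears T to every x.
    s2 = any(all(T(y, x) for x in domain) for y in domain)

    print(s1, s2)  # True False: the two sentences are not equivalent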

Here are some more examples of sentences of PL with overlapping quantifiers:

 

 

Universe of Discourse

One other characteristic of PL is that it can limit the range of objects that a variable can accept as values. Put more formally, the domain of a variable is the set of things the variable can take as values. When we specify the universe of discourse for a sentence containing a variable, we are stating the domain of the variable; i.e., we are specifying what it ranges over. If the domain of a variable is unrestricted (i.e., where the variable ranges over absolutely everything), then we call this the universal domain. However, if the domain is confined to a set of things within the universe, we call this a restricted domain.

Restricting our domain allows us to simplify our expressions in PL. For example, say we wanted to translate the following sentence:

"All humans have moral rights."

Paraphrasing this as "For all x, if x is a human, then x has moral rights", our sentence in PL would be something like:

(∀x)(Hx ⊃ Mx)

However, suppose we stipulate that the domain is restricted only to persons; i.e., we stipulate that the variable x ranges only over persons. The result is that we can symbolize the sentence above as:

(∀x)Mx

This is because we no longer have to specify that the predicate letter H applies to the variable x, since it is a given that H is true of everything in this restricted domain. This convention will allow you to express more complex sentences with fewer symbols and in a way that is more human-readable.
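
Here is the same point as a Python sketch, with made-up predicates (H for "x is a human", M for "x has moral rights") and made-up domains:

    everything = ["alice", "bob", "rock"]   # an unrestricted (toy) domain
    persons = ["alice", "bob"]              # the restricted domain
    H = lambda x: x in persons
    M = lambda x: x != "rock"

    # Unrestricted domain: (∀x)(Hx ⊃ Mx), with "⊃" unpacked as not-H-or-M.
    unrestricted = all((not H(x)) or M(x) for x in everything)

    # Domain restricted to persons: (∀x)Mx.
    restricted = all(M(x) for x in persons)

    print(unrestricted, restricted)  # True True: same claim, fewer symbols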

Putting it all together

Restricting domains can be combined with overlapping quantifiers to produce sentences featuring just a few symbols but that stand for propositions with complex semantic content. For example, consider the sentence “Someone knows someone.” If the universe of discourse is restricted to persons, we can translate this as:

(∃x)(∃y)Kxy
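
As a quick sketch (again a made-up model, with the domain restricted to persons and K as "x knows y"):

    persons = ["pam", "sue"]
    K = lambda x, y: (x, y) == ("pam", "sue")   # suppose only Pam knows Sue

    # (∃x)(∃y)Kxy: someone knows someone.
    print(any(any(K(x, y) for y in persons) for x in persons))  # True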

 

 

The following practice problems are taken from Herrick (2013; p. 518). See Tablet Time! section for help getting started.

Universe of Discourse: Human Beings

  1. There is a person who is universally respected. 
  2. There is a person who respects everybody. 
  3. Everybody respects someone or other. 
  4. Some people do not know anybody. 
  5. If everybody knows somebody, then somebody knows everybody. 
  6. If someone helps someone, then God is pleased. 
  7. Anyone who loves no one is to be pitied. 
  8. Someone is not known by anyone.

 


 

Food for thought...

Some students might wonder if the increased complexity of PL, when compared to TL, is justified, given that PL seems to do a lot of what Stoic logic was already able to do, such as make explicit certain patterns of reasoning and emphasize the truth-functionality of our argumentation. However, PL does feature a significant improvement over TL, and it has to do with relational predicates. Relational predicates are what allow PL to do what neither categorical nor Stoic truth-functional logic can: express n-place predicates. Some predicates, it turns out, relate singular terms to each other. You hear it all the time. "Dan is taller than Pedro." "Trish graduated college before Micah." "Sally makes more money than her husband." In short, we use relational predicates all the time. Moreover, unsurprisingly, these are oftentimes a part of our arguments. And so we need to be able to express these in a logically precise way. TL doesn't do a very good job at this. In TL, these sentences would be something like "D", "T", and "S", respectively. But that doesn't tell you much of anything. Categorical logic, as studied by Aristotle, does even worse. You have to invent all sorts of weird categories, like "persons who are taller than Pedro", "persons who graduated college before Micah", and "persons who make more money than their spouses". But PL can do all this with ease: "Tdp", "Gtm", and "Msh".

These are just two-place relational predicates, by the way. In other words, these are just predicates to which you attach two singular terms. You can have any number of singular terms attached, however. For example, here's a sentence with a three-place predicate: "Devin met Andy at the coffee shop." Here, the predicate is "_____ met _____ at _____", and the singular terms are "Devin", "Andy", and "coffee shop". Here's a sentence with a four-place predicate: "Devin met Andy at the coffee shop to talk about calculus". And on and on.
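
If you like thinking in code, an n-place predicate can be pictured (unofficially) as a set of n-tuples; here is a quick sketch with made-up relations:

    # Two-place and three-place predicates as relations in a toy model.
    taller_than = {("dan", "pedro")}                   # Tdp
    met_at = {("devin", "andy", "coffee shop")}        # a three-place predicate

    print(("dan", "pedro") in taller_than)             # True
    print(("devin", "andy", "coffee shop") in met_at)  # True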

The point here is that PL is a quantum leap in our capacity to express highly-detailed propositions in a logically precise way. If you are a student of computer science, you may already recognize why this is so important. I'll keep it brief here. Just like the Industrial Age encouraged thinking in mechanical terms which served as an inspiration to Boole and DeMorgan, the increased precision of our logical languages in turn inspired visionaries like Alan Turing, Claude Shannon, and John von Neumann to build machines which we could "speak" to using a rigorous formal language (see Shenefelt and White 2013, chapter 9). The result? It's a brave new world.2

 

 

 

Tablet Time!

 

 

 

FYI

Homework— The Logic Book (6e), Chapter 7

  • Do 7.3E #2 (p. 294-5).
    • Here is the portion of the student solutions manual for the relevant chapter.
  • Read section 7.4 (p. 296-310), in particular p. 300-310.

Related material—

 

Footnotes

1. Perceptive students might notice that this is much the same way as how the tilde works. For example, in the sentence "~~~K", it is the leftmost tilde that is the main operator, just like how in "(∀x)(∃y)Tyx" it is the leftmost quantifier that is the main operator.

2. One context in which you can see how logical languages like PL influenced researchers in the nascent field of computer programming is at the command line, where not only do you use truth-functional operators like "&&" and "||", but you also issue commands that require two arguments, much like a two-place predicate. For example, the mv command requires two arguments. For an intro to one of my favorite shells (i.e., the one I learned on), click here; see also Shotts (2019).

 

 

Quantifier Rules
(Pt. I)

 

 

Making Progress

At the beginning of Unit IV, we laid out some queries that still needed to be addressed. We have dealt with two of those. One, we know that Stoic truth-functional logic is the first version of primary logic even though it was Aristotle who first initiated the study of validity. Two, we also know that Stoic valid argument forms do grasp at an underlying logical form that is always valid.

We can ponder a bit about this second point. Why is it that logical truths are true? What makes logical truths, like mathematical truths, be (or appear to be) necessary? Some people have thought that it is simply a function of our language. Although this idea predates the 20th century, one very famous 20th-century philosopher, Ludwig Wittgenstein, argued forcefully for this thesis, claiming that logical truths are true simply by definition. This, in effect, makes logical truth a consequence of our language.

But a moment's reflection tells you that this can't be. This is because linguists have come to learn that our language already follows an internal logic. In other words, languages—whether we recognize it or not—appear to have a rule-driven organizing scheme to them. In short, logic is what makes our language possible and not the other way around (see Shenefelt and White 2013: 65-70).

Tegmark's Our mathematical universe: My quest for the ultimate nature of reality

The inquiry into why logico-mathematical truths seem to be necessary and timeless has driven some to believe that there is an underlying logico-mathematical reality that is more fundamental than the physical reality that we see. Famously, Plato believed that the study of mathematics is a pathway to understanding the true nature of reality. Recently, relatively speaking, even Darwinian processes—which are instantiated in physical, living organisms—have been explained in purely (non-physical) mathematical terms (see Smith 1982). More ambitiously, physicist Max Tegmark (2015) argues that reality itself is a mathematical construct, and cognitive scientist Donald Hoffman, in his own way, argues that mathematics has a primacy over physical reality. Is reality based on a logico-mathematical foundation? How would we know?

These questions are far beyond this course, obviously; and they are perhaps even past the realm of what can be known by humans. We can only hope to make slow, steady progress in our own limited understanding.

The Modern Square of Opposition

One thing we can solve at this moment is the puzzle of logical relativity. Recall that some arguments were valid depending on whether one took the hypothetical viewpoint or the Aristotelian existential viewpoint. Recall also that, when we posed this puzzle, we still had no way of expressing this distinction in our symbols. But now we do: "(∀x)" and "(∃x)".

Note that if we want to represent the existential viewpoint, we simply need to use the existential quantifier. Note also that if we make a universal claim, like "All F are G", we use a conditional: (∀x)(Fx ⊃ Gx). This, of course, is the hypothetical viewpoint: if F's exist, then they are G. And thus, we can represent both viewpoints in our symbolizations. The puzzle of logical relativity is solved.
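
A small Python sketch makes the difference vivid. Assume (my toy model, nothing official) a domain in which nothing is F:

    domain = [1, 2, 3]
    F = lambda x: False   # no F's exist in this model
    G = lambda x: True

    # Hypothetical viewpoint: (∀x)(Fx ⊃ Gx) is vacuously true when no F's exist.
    hypothetical = all((not F(x)) or G(x) for x in domain)

    # Existential viewpoint: add the claim that F's exist, i.e., (∃x)Fx.
    existential = hypothetical and any(F(x) for x in domain)

    print(hypothetical, existential)  # True False

That gap, between True and False on an empty predicate, is precisely what the puzzle of logical relativity was tracking.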

 

 

 

Important Concepts

 

Clarifications

Universal elimination (∀E)


∀E is a very intuitive and easy-to-use rule. The general notion is that you can move from a universally quantified sentence to a sentence where the universal quantifier is removed and the variable(s) that the quantifier was ranging over is/are replaced with individual constant(s) (a-v). For example, we can legally move from "(∀x)(Px)" to "Pa". This is because the first sentence states that the property P is true of everything. That's why the universal quantifier is used. It literally says, "For all x, x has the property P." And so, if everything in the universe of discourse has the property "P", certainly the individual named by "a" has this property. And so we can legally infer "Pa".

Here are some other examples:

"(∀y)(Hy ∨ Ty)" → "Hb ∨ Tb"

"(∀w)(Rww)" → "Rcc"

"(∀z)(Mz & Nz)" → "Md & Nd"

 

Existential introduction (∃I)


∃I is also fairly intuitive and easy to use. An ∃I occurs when you move from a predicate letter paired with one (or more) individual constants to that same predicate letter now paired with one (or more) variables and a matching existential quantifier. The basic intuition is that if you know that some individual constant has some property, then you know that something has that property. For example, if you know "Socrates is a philosopher", then you know "Someone is a philosopher". You are basically abstracting away from a more descriptive statement containing an individual term to a more general statement utilizing an individual variable.

Here are examples:

"Ra ⊃ Qa" → "(∃x)(Rx ⊃ Qx)"

"Kss" → "(∃y)(Kyy)"

"(Lu & Mu) & Ou" → "(∃w)((Lw & Mw) & Ow)"

 

Universal introduction (∀I)


∀I allows you to move from a predicate letter paired with one (or more) individual constants to that same predicate letter now paired with one (or more) variables and a matching universal quantifier. This move is legal to use as long as the transition meets the following qualifications:

  1. The individual constant you are replacing with a variable does not appear anywhere in the assumptions.
  2. The individual constant you are replacing with a variable does not appear in the resulting universally quantified sentence; i.e., you must replace every occurrence of that constant.

The general idea of the second point is that, if you get rid of some individual constant, you have to get rid of it in the entire sentence. For example, you can't move from "Faa" to "(∀x)(Fxa)". Clearly "a" still appears in the resulting sentence, and that's not allowed.
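
A quick Python counterexample shows why this restriction is not optional. In the made-up model below, "Faa" is true but "(∀x)(Fxa)" is false, so the bad move is not truth-preserving:

    domain = ["a", "b"]
    F = lambda x, y: (x, y) == ("a", "a")   # only a bears F to a

    print(F("a", "a"))                      # Faa: True
    print(all(F(x, "a") for x in domain))   # (∀x)(Fxa): False, since Fba is false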

I suggest that you read p. 474-490 of Chapter 7 of The Logic Book for more explanations, analysis, and examples. The link is in the FYI section below.

 

Existential elimination (∃E)


∃E is a subderivational rule. In fact, it is the most powerful subderivational rule. The setup, however, is a little tricky. You have to begin, obviously, with an existentially quantified sentence (much like how ∨E has to begin with a disjunction). Then you make a subderivation with the assumption being an instance of the existentially quantified sentence. What this means is that you have to replace the variable(s) in the existentially quantified sentence with an individual constant.

However, the constant that you use has to meet the following qualifications:

  1. The individual constant cannot appear anywhere in the assumptions of the derivation, i.e., the premises.
  2. The individual constant cannot appear anywhere in the existentially quantified sentence that you are basing your subderivation on.
    • This is the existentially quantified sentence that you have to begin with. Sometimes this sentence appears in the premises and so this criterion is redundant. Nonetheless, it's important to check for this.
  3. The individual constant cannot be in the sentence you are trying to derive.

The power behind ∃E is that anything you can derive in that subderivation you can place on the main scope line. So, typically, if you use ∃E, your subderivation will get you your conclusion. Once you've arrived at your target sentence, you simply end the subderivation and then write the last line of your subderivation on the main scope line, justifying it with the existentially quantified sentence you began with and the subderivation.

This is a tough one to understand, so I suggest you watch the portion of the video where I explain this a few times. Also, note that the name ∃E is a little deceptive. This is because what you end up with might still be an existentially quantified sentence. This makes it seem like you didn't eliminate anything. However, the elimination portion of the rule occurs when you move from the existentially quantified sentence you are beginning with to the assumption for the subderivation. That's why it's called existential elimination.
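
To see why these freshness restrictions matter, here is a Python sketch of the invalid inference you would license by recycling a constant from the premises (the model and names are made up):

    domain = [1, 2]
    P = lambda x: x == 1
    Q = lambda x: x == 2
    b = 2                                  # "b" already names an object in the premises

    premise_1 = any(P(x) for x in domain)  # (∃x)Px: True
    premise_2 = Q(b)                       # Qb: True

    # Instantiating (∃x)Px with the old constant "b" would let us "derive"
    # (∃x)(Px & Qx), but that sentence is false in this model:
    bad_conclusion = any(P(x) and Q(x) for x in domain)
    print(premise_1, premise_2, bad_conclusion)  # True True False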

Again, I insist that you take a look at the readings for this lesson for more clarifications.

 

Getting Started (Pt. IV)

 

Tablet Time!

 

 

 

FYI

Homework— The Logic Book (6e), Chapter 10

  • Read p. 474-490.
  • Do 10.1E #1-2, p. 490-1.
    • Here is the portion of the student solutions manual for the relevant chapter.

 

 

Quantifier Rules
(Pt. II)

 

 

All men can see these tactics whereby I conquer, but what none can see is the strategy out of which victory is evolved.

~Sun Tzu

Strategies

 

Quantifier Negation (QN)

QN rounds out PD+ for us. This is the last technical concept you'll be learning. QN is a replacement rule, which means that it goes both ways (much like DM, Imp, DN, etc.). Here it is:

Quantifier Negation (QN)

~(∀x)P ←→ (∃x)~P

~(∃x)P ←→ (∀x)~P

The variable is bold to signify that it is a metavariable; i.e., it could stand for variables w-z (with or without subscripts). The "P" is also bolded since it can stand for any truth-functionally compound sentence, another quantified sentence, or an atomic sentence of PL. You'll see this rule at work in the Getting Started (Pt. V) video below; see also The Logic Book, pages 521-24.
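
QN is, in spirit, a quantified version of De Morgan's laws, and you can spot-check it on any finite model. Here is a Python sketch with an arbitrary made-up predicate:

    domain = [1, 2, 3, 4]
    P = lambda x: x % 2 == 0   # an arbitrary stand-in predicate

    # ~(∀x)P ←→ (∃x)~P
    assert (not all(P(x) for x in domain)) == any(not P(x) for x in domain)

    # ~(∃x)P ←→ (∀x)~P
    assert (not any(P(x) for x in domain)) == all(not P(x) for x in domain)

    print("Both QN equivalences hold in this model")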

Getting Started (Pt. V)

 

 

 

FYI

Homework— The Logic Book (6e), Chapter 10

  • Read p. 492-500, 521-24.
  • Do 10.2E #1-2, p. 518-19.
    • Here is the portion of the student solutions manual for the relevant chapter.

 


 

 

Conclusions

 

Gottlob Frege

 

Your discovery of the contradiction caused me the greatest surprise and, I would almost say, consternation, since it has shaken the basis on which I intended to build arithmetic.

~Frege, in a letter to Russell, 1902

Is logicism true?

Frege's Begriffsschrift

When we last spoke of Frege, we had mentioned that his goal in creating the world's first logical formal language was to show that logicism was true. Logicism, recall, is the theory that mathematics can be derived out of logic. Thus, Frege believed that logic is the root and that mathematics is one of the branches. And so Frege labored furiously first in formalizing logic, giving us symbolic logic, and then in systematically defining all the concepts and relations of arithmetic in his "concept notation". The goal was that, once he was done, proving logicism to be true would be a purely mechanical task.

You might remember that this goal of Frege's was shared with David Hilbert. Hilbert, much like Frege, wanted to prove that mathematics is complete and consistent. In other words, he wanted to formalize mathematics. If you recall, Hilbert popularized this issue in 1900, at a mathematics conference, by framing it as a problem that 20th century mathematicians should attempt to solve (see Rules of Inference (Pt. III)). If logicism could be proved, Hilbert reasoned, then this problem would be that much closer to getting crossed off his list.

Frege's work was obviously very esoteric, as we've mentioned before, but at least one important thinker paid close attention when Frege published his Begriffsschrift in 1879. This person was none other than Bertrand Russell. In 1902, while the second volume of his Grundgesetze der Arithmetik (The Basic Laws of Arithmetic, 1893, 1903) was in press, Frege received a letter from Russell, who claimed to have found an inconsistency in one of Frege’s axioms.

Paradoxes entail a contradiction, and a contradiction could doom Frege's whole enterprise of proving logicism. The paradox, though, didn't appear in Frege's formalization of logic (Lucky us!), but it was clearly present in his attempted formalization of arithmetic. In a kind letter, Frege responded to Russell, a portion of which can be seen in the epigraph above.

What was this paradox? It is a technical detail which is difficult to summarize. Suffice it to say that Frege wanted to define arithmetical numbers in terms of classes (or sets). But this raises the possibility of there being sets that contain themselves. So the paradox is this: does the set of all sets that do not contain themselves as members contain itself? If it does, then by its own membership condition it does not; and if it does not, then it does. Either way, contradiction. This is much easier to understand when explained by Russell through his barber paradox (see also Irvine 2016, section 2), but here's what you need to know: Frege lost hope.

But Russell didn't. In fact, Russell revered Frege's work, and so he set out to make the necessary amendments and complete the task himself. He eventually, along with Alfred North Whitehead, published Principia Mathematica, the first volume of which appeared in 1910. Although Principia had problems, Russell and Whitehead made a heroic attempt to continue Frege's work. In fact, the language that we used in this course is closely related to the one used by Russell and Whitehead. Logicism seemed to have hope.

And then, in 1931, Gödel (pictured below) published On Formally Undecidable Propositions. Although Gödel's incompleteness theorems cannot be done justice here, we can summarize his main findings. Gödel's first incompleteness theorem states that:

  • in any consistent formal system F within which a certain amount of arithmetic can be carried out,
  • there are statements of the language of F which can neither be proved nor disproved in F.

In other words, no formal system that can do a certain amount of arithmetic can be both complete and consistent. Hofstadter summarizes it this way:2

 

 

If a logical system that performs arithmetic cannot be complete, then Frege and Russell's attempts will always be futile (see Shapiro 2005, chapter 5 and Nagel & Newman 2001). Most mathematicians and logicians agree. Logicism is false.

Does this mean it was all for nothing? Was Frege's life work pointless? Not at all. Frege's work is an essential component of our shared intellectual history. Without him, logic and mathematics wouldn't have been the same.

"The formalization of arithmetic—and other branches of mathematics too—still rests largely on techniques that Russell and Frege had forged" (Shenefelt and White 2013: 143).

But that's not all...

 

Kurt Gödel (1906-1978)

 

 

The Digital Revolution

Geography and ideas

An interesting approach to the writing of history that has become increasingly popular is to emphasize the role of geography on historical events. For example, Jared Diamond's (2017; originally published in 1997) Guns, Germs, and Steel makes the case that what enabled some Eurasian and North African civilizations to thrive and conquer other less-developed civilizations has nothing to do with greater intellectual prowess or inherent genetic superiority (as much as some white supremacists would love to believe). Rather, the gap between these civilizations, which is manifested in more complex organizational structures and advanced technology and weaponry, originated mostly due to environmental differences. The interested student can refer to Diamond's book or see the documentary series that was inspired by the book.

Of course, Diamond isn't without his critics. But many of the criticisms of his view also emphasize the role of environmental factors. In other words, they are disagreeing with his findings but not with his method. For example, in chapter 1 of their If A then B, Shenefelt and White make the case that, more than anything, it was the ubiquitous access to water routes, which allowed for easy travel from city-state to city-state, that led to the flourishing of Greek philosophy. This is because the ease of travel served as a means of escape should a thinker's ideas become too heterodox. In short, thinkers were allowed to push the envelope, challenge authority, explore new ideas—all of which are essential ingredients in intellectual breakthroughs—with more peace of mind than elsewhere. They could just leave if things were getting too hot for them!

Politics and ideas

Representatives of Athens and Corinth at the Court of Archidamas, King of Sparta

But it's not only geography that influences the history of ideas; politics and war are influential in subtle ways that are difficult to see at first. In the first lesson of this course, in a footnote, I argued that Aristotle truly did develop a field that was unique and could not be found in other civilizations in the same form. For example, Indian logic was preoccupied with rhetorical force, i.e., the persuasive characteristic of argumentation. It featured repetition and multiple examples of the same principle. But Aristotle's logic focused on logical force, the study of validity itself. The Greeks wanted to know why an argument's premises force the conclusion on you. Interestingly, Chinese logic apparently did study some fallacious reasoning, but never its counterpart (i.e., validity). In short, it was only Aristotle who studied logical force unadulterated by rhetorical considerations.

But this was not enough. In order for an idea, like the idea that studying validity is a worthwhile endeavor, to take hold, you need not only an innovative thinker, i.e., Aristotle, but you also need a receptive audience. So here's the puzzle: Why was Aristotle’s audience so receptive to the study of validity? Because 5th century BCE Athenians made two grave errors that 4th century BCE Athenians hadn't forgotten and would never forgive:

  1. They lost the Second Peloponnesian War (431-404 BCE); and
  2. They fell for “deceptive public speaking.”

In other words, 4th century BCE Athenians knew that the people from a century earlier had really messed up. And Plato, for one, blamed both of those errors on the sophists. The sophists were paid teachers of philosophy and rhetoric (or persuasion). Plato in particular makes them seem like intellectual mercenaries who would argue for any position if you paid them. In truth, the sophists were probably more scholarly and less mercenary than Plato makes them out to be. For instance, sophists generally preferred natural explanations over supernatural explanations, and this preference might've been an early impetus for the development of what would eventually be science. Nonetheless, sophists would often argue that matters of right or wrong are simply custom (nomos). Moreover, these first teachers were more pervasive in Athens than elsewhere due to the opulence of the city. And so, it was easy to place the blame on them. Once public opinion turned on the sophists, things got violent. One person accused of sophistry (whose name was Socrates) was even put to death.

And so it was for these reasons—remembering the mistakes of the past and the violent turn against the sophists—that the populace was ready to restore the distinction between merely persuasive arguments and truly rational ones. The study of the elenchus (argument by contradiction) was divided into rhetoric (persuasion) and rational argumentation, thanks to Plato. From here, it was just one step further to begin to standardize rational argumentation and study its logical form. Enter Aristotle. But without the preceding turmoil, it is possible that Aristotle's logic would've fallen on deaf ears (see Shenefelt and White 2013, chapter 2).

Technology and ideas

In a previous lesson, we looked at how the rules of logic, in particular Boole's algebra, were realized physically in computer circuits. For a refresher on the connection between Boolean algebra and computer circuits, you can see this in the slideshow below (see also this helpful video).

 

 

A Roberts loom in a weaving shed in 1835

It turns out that even the 19th century logicians that we covered, George Boole and Augustus De Morgan, were inspired by their environment. Let me explain. The Industrial Revolution began in the late 1700s in Britain. This was a time when mechanics were developing a new generation of steam engines (fired by coal), there was a boom in the construction of public works, clothing began to be manufactured on large industrial machines, and, interestingly enough, these new developments were themselves stimulated by improvements in agricultural science. It was also during this time period that diesel power, photography, telegraphs, telephones, and petroleum-based chemicals were invented. In sum, mechanization was in full swing.

Enter George Boole and Augustus De Morgan. When the Industrial Revolution was fully underway, Boole and De Morgan published important treatises on mathematical logic. In the year 1847, both published the first major contributions to logic (arguably) since Aristotle. This is when Boole published his Mathematical Analysis of Logic, which we discussed briefly back in Unit I. According to Shenefelt and White, it was the Industrial Revolution itself that inspired these two thinkers and many others:

“The Industrial Revolution convinced large numbers of logicians of the immense power of mechanical operations. Industrial machines are complicated and difficult to construct, and unless used to manufacture on a large scale, they are hardly worth the trouble of building. Much the same can be said of modern symbolic logic. Symbolic logic relies on abstract principles that are difficult, at first, to connect with common sense, and they are particularly complicated. Nevertheless, once a system of symbolic logic has been constructed, it can embrace a vast array of theorems, all derived from only a few basic assumptions, and suitably elaborated, it can supply a clear, unequivocal, and mechanical way of determining what counts as a proof within the system and what doesn't. The Industrial Revolution convinced many logicians from the nineteenth century onward that complicated mechanical systems are truly worth building” (Shenefelt and White 2013: 205-206).

Hints of the complicated story of computation

It appears that the history of computation, just like logic itself, is influenced by environment, politics, and technology.

First off, as we've seen, logic itself was integral to computer science:

“Just as machines first gave rise to symbolic logic in the nineteenth century, so, in the twentieth and twenty-first centuries, symbolic logic has given rise to a new generation of machines: digital computers, which are essentially logic machines whose programming languages are analogous to the symbolic languages invented by logicians” (Shenefelt and White 2013: 206).

One individual responsible for radical new developments in the theory of computation was Alan Turing. Turing's contributions to mathematics, logic, and computer science are difficult to summarize, and I mentioned only a few of them in Lesson 3.2. Nonetheless, here's a refresher of Turing's insights:

 

 

Colossus, the world's first single-purpose electronic computer

Turing, of course, was instrumental in bringing about the world's first large-scale electronic computer, Colossus, in 1944. However, this computer was special-purpose—it was for cryptanalysis. But once World War II was over, theorists wanted to make a general-purpose electronic computer.

Enter John von Neumann. Von Neumann was a mathematician, physicist, and computer scientist who was, by all accounts, (frighteningly) brilliant. As a child, he would memorize entire pages of the telephone directory, to the amusement of his parents' houseguests. As the first general-purpose computers were being built, other researchers would ensure that the computer's calculations were correct by checking with von Neumann (who would do the calculations in his head!). In fact, he was so brilliant that we might actually give him too much credit for some of his ideas (since others were also involved; see Ceruzzi 2003: 50).3

In any case, whether he deserves the whole credit or not, we know that von Neumann was important to the development of the stored program principle. The general idea is to store both the programs and data in a common high-speed memory (or storage). This allowed programs to be executed at electronic speeds, and it allowed programs to be operated on as if they were data. This second innovation is what allowed the blossoming of high-level languages like Python and Java. Computers that follow these general architectural guidelines are sometimes called "von Neumann machines."

Was von Neumann connected at all with the field of logic? Yes, and in a stunning way. He was a former research assistant to David Hilbert (who posed the question that Frege had already been working on, that Russell continued to work on, and that inspired Turing to focus on mathematics). It's all connected.4

Also, I would be remiss if I didn't also mention the pioneering work of Claude Shannon, whose master's thesis(!) laid the foundations for digital circuit design (he would later go on to found information theory as well). In the resulting 1938 paper, he showed that switching circuits could solve any problem that Boolean algebra could solve. The interested student can check out this helpful video.

Left to right: Julian Bigelow, Herman Goldstine, J. Robert Oppenheimer, and John von Neumann at Princeton Institute for Advanced Study, in front of MANIAC I.

And so this is how logic, which itself was influenced by environment, politics, and technology, helped to bring about the digital revolution. But logic alone did not make the digital revolution possible. It seems that politics and technology also played a role. With regards to politics, Jim Holt reminds us that the digital universe and the hydrogen bomb were brought into existence at the same time, by the same people. In fact, the first task of MANIAC I was to do the calculations for the hydrogen bomb (see Holt 2018, chapter 9). War, an extension of politics according to Carl von Clausewitz, was the catalyst. Moreover, it appears that without this heavy government investment (which founded and continually funded the research of countless computer science departments across the country, etc.), the computing revolution would not have happened (see Mazzucato 2015, chapter 3). Lastly, with regards to technology, developments in lithography (which is a printing process) enabled computer scientists to develop better integrated circuits, which affects everything from computational power to the competitiveness of the computing industry.5

It boggles the mind that so many things had to fall into place for the computing revolution, each with its own story. You at least now know one of those stories: the story of logic. Having said that, there's no guarantee that this story has a happy ending...

 

 

 

The Threat

In Superintelligence (2014), Nick Bostrom lays out his case for why AI might be an existential risk to humankind. He does this by describing several problems that might arise and for which we are not preparing. For example:

  1. The value alignment problem has to do with ensuring that when an AI obeys one of our commands, it does not do so by exploiting a loophole or otherwise accomplishing the task in a way that goes against its human master's values.
  2. The motivation selection problem involves the worry about what sort of motivational states the AI will have. The concern is that we should begin to attempt to make "friendly" AI.

To get into the issue of AI as an existential risk is far beyond the scope of this course. This is lamentable since global catastrophic risks are some of my favorite things to think about—weird, I know. Nonetheless, here I'll simply say: 1. This is a real problem; and 2. The clock is ticking. Enter logic. In his chapter of Possible Minds (2020), Stuart Russell, who has worked on the synthesis of first-order logic and probability theory, discusses the value alignment problem. After discussing the risks behind AI, he gives a possible solution: expressing human problems in a purely formal way and then designing AI to be solvers of these carefully specified problems. This means, of course, re-defining what AI is; but Russell thinks this is necessary for our survival. In short, we must not merely focus on problem-solving but on building machines that are provably beneficial to humans. And the primary tool for this would be—you guessed it—a formal language.6

 

 

 

That's it!

You've made it. This puzzle is solved. Go solve some more.

FYI

Related Material—

Advanced Material—

 

Footnotes

1. We also mentioned that Frege's work, along with that of Russell and Hilbert, were what inspired Alan Turing to study mathematics more rigorously.

2. Interestingly, Hofstadter believes that our cognitive capacity to construct categories and the capacity for self-reference are what make consciousness possible; see his (2008) I am a strange loop.

3. For a great overview of the time period and von Neumann's multiple roles in it, from the birth of general-purpose computers to nuclear strategizing, see Poundstone (1993).

4. Turing almost became a research assistant to von Neumann, who was ten years Turing's elder. Turing, however, opted to do the patriotic thing and went back to the UK in 1938. There was, of course, a world war brewing, and Turing played an important role in ensuring his country was on the winning side.

5. I hasten to add that simple "and" and "or" logic is even useful in exploitation, i.e., hacking, and that hacking (in a way) boosts software security. This is because hackers find holes and vulnerabilities in existing software, which prompts firms to patch up those holes and fortify their products. This creates a cycle of tit-for-tat with one side improving their exploitation techniques and the other their software (see Erickson 2008: 451-52).

6. The interested student can find Stuart Russell's essay in Brockman (2020, chapter 3).