
“It must not be forgotten that although a high standard of morality gives but a slight or no advantage to each individual man and his children over the other men of the same tribe, yet that an increase in the number of well-endowed men and an advancement in the standard of morality will certainly give an immense advantage to one tribe over another… and this would be natural selection.
At all times throughout the world tribes have supplanted other tribes; and as morality is one important element in their success, the standard of morality and the number of well-endowed men will thus everywhere tend to rise and increase.”
~Charles Darwin, The Descent of Man
The Tree of Knowledge...
To accept as a theme for discussion a category that one believes to be false always entails the risk, simply by the attention that is paid to it, of entertaining some illusion about its reality.
~Claude Lévi-Strauss
Systematic error

Welcome to this introductory course on philosophical ethics. Although I'm very excited about teaching this material, I'd like to begin with an admission: this is a very difficult class to teach. This is because philosophical ethics has, despite its two-millennia-long history, failed to produce a dominant theory. To be honest, this is to put the point charitably. Nick Bostrom makes the point a little more forcefully: since there is no ethical theory (or meta-ethical position) that holds a majority position (Bourget and Chalmers 2013), it follows that most philosophers subscribe to an ethical theory that is false (Bostrom 2017: 257). Put differently, imagine that a theory that we will eventually cover—say, utilitarianism—turns out to be true. Per the survey by Bourget and Chalmers, only about a third of ethicists subscribe to this view. That means that, at least in our little hypothetical scenario, two-thirds of ethicists are wrong (since they don't subscribe to utilitarianism). The same could be said of any ethical theory, since about a third is the most support any ethical theory gets. In other words, the field of philosophical ethics is characterized by systematic error. How is one expected to teach an introductory course in a field where most theorists are wrong?
What I've decided to do is frame the entire course within the context of systematic error. My assumption is that, if systematic error is the state the field is in, then a proper introduction to the field should leave you in utter moral disarray. What I'd like to do in this first lesson is give you a theory as to why the field of ethics is in a state of utter chaos—a theory I'll present in the next section. I'd also like to make three important points today in the hopes that, by the conclusion of the course, you will be able to see how these points are manifested in the way that I've taught this class. Here are the three points:
- The field of philosophical ethics is characterized by systematic error.
- The field of philosophical ethics has many famous theorists who themselves proposed morally abhorrent positions.
- The field of philosophical ethics cannot credibly make the case that ethical reflection has improved the world.
Let's cover the first point in this section. First off, there is no dominant view in ethics. As mentioned before, per the survey by Bourget and Chalmers, about a third of ethicists are consequentialists (endorsing some sort of utilitarianism). Another third lean towards deontology, which we will cover in its Kantian form—that is, as it came from the mind of Immanuel Kant. Lastly, the virtue tradition (popularized by Aristotle) comes in a distant third, with less than a fifth of ethicists subscribing to the view. Other views are influential in other subfields; for example, newer versions of social contract theory seem to dominate in political philosophy (Mills 2017). But at least in the subfield of ethics, there is no majority view. You have, at best, a tie for a plurality. That looks like a whole field of inquiry that has failed to achieve consensus.

Francis Crick (1916-2004).
Second, the failure of philosophers to arrive at a dominant view in ethics is glaringly obvious to other disciplines. In her recent book Conscience, Patricia Churchland relates an anecdote about attending an ethics lecture with the famous molecular biologist Francis Crick, who was dismayed that the focus was on pure reason—on attempting to arrive at moral truth through reason alone. Clearly, Crick argued, our genetics play a role in our sociality, including our moral behavior, and philosophers need to discuss that. Churchland reports that Crick essentially made an argument in the style of a theorist that we will eventually cover: David Hume. Crick's point was very Humean indeed—reason cannot motivate action; morality is about action; thus, reason cannot drive moral behavior.
Crick's disbelief brings me to the third point. Even worse than the recognition by biologists, among others, that ethicists have failed at their job is the fact that ethicists fight back when others try to do what they couldn't. Although there has been a push for more empirical approaches in ethics, i.e., more incorporation of findings from the sciences into our ethical theories, there has been strong and consistent pushback from non-empirical philosophers. In fact, some philosophers would be horrified to see that I teach this course the way that I do—devoting all of Unit III to an empirical assessment of some popular ethical theories. But my reasoning is simple. I wanted to show ethics as it really is: with all ethical theories running into problems.1
Lastly, even empirical approaches to understanding our moral behavior have not panned out. One camp of theorists that uses the natural sciences to fuel their arguments—a group we will refer to as moral skeptics—agrees that moral objectivism is wrong, i.e., that there is no such thing as objective moral values, but its members don't know where to go from there (see Garner and Joyce 2019). Their general options appear to be: conservationism (keeping moral language and moral reasoning intact), moral fictionalism (pretending morality is real while acknowledging that it is a fiction), and abolitionism (discontinuing the use of moral reasoning and moral language). The most prominent skeptic, Richard Joyce, chooses moral fictionalism, but his own intellectual descendants disagree with him. This is pervasive disagreement—a likely indicator of systematic error.2
Food for Thought...
Moral disarray

Before moving on to the second and third points that I want to make about philosophical ethics, it might be good for us to consider why it is so difficult to define the word good, at least in its moral sense. To this end, I will enlist the aid of Alasdair MacIntyre, whose After Virtue and A Short History of Ethics were instrumental in building this course. Here's his point in a nutshell. MacIntyre argues that moral discourse is in disarray and that the only discipline (history) that is poised to discover this fact, as well as its cause, was not codified as an academic discipline until after (or during) the period in which normative language was irreversibly disrupted. Clearly, there's a lot to unpack here.
What MacIntyre is getting at is that we have lost track of the very meaning of moral, and then we forgot that we lost track of it. Per his research, MacIntyre has found that there have been distinct moral discourses (with their own moral logics) throughout history. In the ancient period of the Western tradition—think Ancient Greece—moral terms were only used in a means-end context. This means that you only used moral terms in an "if you want this, then you should do that" sense. For example, in one theory we'll be covering, what is moral is simply what God commanded. So the moral logic is this: if you don't want to go to hell, then follow these commandments (and that's what good is). In another theory we'll be covering, the whole point of morality is to shape ourselves so that we'll respond to the right situations with the right actions; this is called the virtue tradition. Its moral logic is this: if you want to flourish in your social role (whatever that may be), then develop those virtues that will lead to success in that role (and that's what good is).
What happened after this, according to MacIntyre, is increased contact between radically different social orders. To continue with our Greek example: due to conquest and trade, the Greeks came into contact with many different peoples and cultures. Once they learned of their norms and customs, they began to debate which actions were done simply out of custom and which actions were truly right for everyone. And this is when moral debate began to get messy.

The state of philosophical ethics today.
But it didn't stop there. There was only more contact between different peoples as new empires rose and fell. Eventually, thanks in part to the printing press (and fast-forwarding substantially), ideas—including ideas about morality—could be exchanged with far greater ease. And so in the 18th century, Enlightenment thinkers continued this debate about what moral terms mean and pushed it to its logical extreme. This pushing of the envelope culminated in the work of Immanuel Kant, who defined moral terms in the most absolutist sense possible: morality is independent of desires, of context, and of consequences. What is moral, according to Kant, is what is commanded by reason, done for no other reason than that it must be done.
By this point, there were many moral discourses and many moral logics. And the debate continued. Sometimes we use the word good in the sense that what is moral is simply what God commanded. Sometimes we use it more in the sense that a virtue ethicist would. Sometimes we use it the way Kant uses it. And sometimes we use it in a culturally-dependent way. This is why MacIntyre characterizes modern philosophical ethics as interminable. The debates are, or appear to be, endless. Moreover, there seems to be no method by which to resolve disagreements. Why? Because we all use moral terms in different senses at different times, moral terms have lost their previously fixed meanings, and we've completely forgotten this whole history. Utter linguistic anarchy.3
Per MacIntyre, the only moral logic that really makes sense is that of the virtue tradition. This is because to use the term good correctly, one must have some role in mind. It doesn't make sense to simply say, "So-and-so is good." The rest of us would rightfully ask, "Good at what?" So there is no good in general; there is only being a good manager, a good athlete, a good teacher, etc. Only when contextualized by a social role do moral terms mean anything at all.4
What's been swept under the rug...
Let's now consider the second point I want to make: the field of philosophical ethics has many famous theorists who themselves proposed morally abhorrent positions. I will only discuss one ethicist's morally abhorrent positions, but you'll likely agree with me that it's enough. The thinker I have in mind is none other than Immanuel Kant, champion of deontological ethics, as you will soon learn. Ethicists usually sidestep the various empirical claims Kant made that are false, arguing that his ethical theory is independent of these claims (see Mills 2017: 97-102). This is very convenient, since these claims are not only false but, one can easily argue, dangerous too. In fact, Kant was a pioneering theorist of “scientific” racism. Eze (1997) cites various “findings” in Kant’s “anthropology.” Be warned: these quotes are disturbing.5
I obviously don't sweep these under the rug. We gain nothing by failing to expose white supremacy when we see it. I'll go further: philosophy as a field is guilty of downplaying the racism of many of its most famous theorists. Political philosopher Charles W. Mills gives us a quick overview below, and I've taken the liberty of bolding those theorists that we will cover.
“John Locke invests in African slavery and justifies aboriginal expropriation; Immanuel Kant turns out to be one of the pioneering theorists of ‘scientific’ racism; Georg Hegel’s World Spirit animates the (very material and non-spiritual) colonial enterprise; John Stuart Mill, employee of the British East India Company, denies the capacity of barbarian races in their 'nonage' to rule themselves” (Mills 2017: 6; emphasis added).
The field itself still might harbor silent animosity towards non-whites. Jorge Gracia (2004) discusses how the stereotype of the philosopher excludes the mannerisms of various Latin American cultures. This effectively means that if a Latin American person wants to be a philosopher, they have to drop their cultural mannerisms, including such personal characteristics as humor (since philosophers are supposed to be serious), speed of conversation (since philosophers are supposed to be slow and methodical in their speech), and even their accent.
“To have a British accent is an enormous asset, particularly in philosophy. Some American philosophers actually adopt one after they visit Britain… But the situation is different with other accents, a fact Italian, Irish, and Polish immigrants know only too well. For Hispanics, matters are even worse because our accent is not perceived as being European—it is associated with natives from Latin America, Indians, primitive people! For this reason, there is a strong predisposition among American philosophers not to take seriously anything said by Hispanics with an accent” (Gracia 2004: 305).
No proof of effectiveness
Let's close with the third point I want to make in today's lesson: philosophical ethics cannot credibly make the case that ethical reflection has improved the world. Sure, there are some thinkers, like Steven Pinker, who argue that the decrease in interpersonal violence and warfare over the last few centuries is a by-product of Enlightenment values. In The Better Angels of Our Nature, Pinker argues that part of the reason for the dawn of Enlightenment values was a coherent moral philosophy. Take a look at the passage below (again with the thinkers we will be covering in bold).
“I am prepared to take this line of explanation a step further. The reason so many violent institutions succumbed within so short a span of time was that the arguments that slew them belonged to a coherent philosophy that emerged during the Age of Reason and the Enlightenment. The ideas of thinkers like Hobbes, Spinoza, Descartes, Locke, David Hume, Mary Astell, Kant, Beccaria, Smith, Mary Wollstonecraft, Madison, Jefferson, Hamilton, and John Stuart Mill coalesced into a worldview that we can call Enlightenment Humanism. It is also sometimes called Classical Liberalism” (Pinker 2012: 180; emphasis added).

Although Pinker might be partially right, there are other viable explanations for this decrease in violence. Just as we will see Kyle Harper explain the rise of Christianity without assuming its truth (see the Death in the Clouds series), we can explain our improved moral behavior and outlook without assuming that moral judgments actually have this world-changing force. One example of this might be found in how the age of colonialism came to an end.
Today colonialism is regarded as morally abhorrent. We lament the genocide of the Native Americans, the eradication of thousands of native cultures and languages, and the practice of occupying a territory only to drain it of its natural resources, as was done in Latin America (by Spain, Portugal, and France), in Africa (by Britain, France, Germany, Portugal, Belgium, and Italy), and in Asia (by Britain, France, Portugal, Spain, the Netherlands, and the US). We are righteously upset over the overthrow of governments (sometimes democratically elected ones) by both the US and the USSR during the Cold War. We now acknowledge that this was all bad (even if it is ongoing in some parts of the world). The question is: why did imperialists change their minds? Was it a moral awakening? Or something else?
In the slideshow below, Immerwahr explains why the US gave up many of its colonies in the middle of the 20th century. And it has nothing to do with being morally enlightened...
In fact, my friends, there is literally no evidence that studying ethics makes you a more ethical person (see Schwitzgebel 2011).
But then again...

It might seem like this class is all for nothing, but let me say one thing that might change your mind. We’ll be looking primarily at explanations and theories as to why some things are morally right or morally wrong. At this level, there is much disagreement. But at the surface level, the waters today are much calmer. Ethicists actually do agree on several things. Devoting your time and effort to helping others and to social movements is almost universally encouraged. Ethicists agree that we should all be more charitable. Although not everyone agrees that being vegan is morally necessary, most ethicists acknowledge that our current animal agriculture needs to be reformed. Ethicists agree that gender and racial equality must be strived for. And ethicists nearly unanimously claim that we need to take better care of our planet, both as individuals and as a collective, for ourselves, our friends, and our descendants.
And so perhaps studying ethics can at least help us imagine a better tomorrow. Who knows? Maybe one day our grandchildren, or our great great great grandchildren, will live in a world free from gender and racial injustice, free from animal cruelty, free from human self-destructiveness. Maybe they'll have to go to museums to see what life was like when the world was full of unnecessary suffering. Their history classes will teach them about how sapiens used to spend so much time and effort in fighting and war, and they'll find it strange that poverty existed at all. Racism, sexism, and homophobia won't even make sense to them. And the history courses that they will take surveying all of the injustices of the past can end with the following words... "And then there were none."
To be continued...
Footnotes
1. Two points that I should add here. First, it's also the case that I teach the course the way that I do because I am a so-called analytic philosopher. In fact, I am not only in the analytic branch of Philosophy (which seeks to make its theories continuous with the natural sciences), but I am considered a radical even within this branch. My position is officially referred to either as philosophical naturalism or neopositivism, but I've also been called an empirical philosopher, when the person is being kind, and a ruthless reductionist or a logical positivist (which is supposed to be an insult, since that view was refuted) when they don't like my views very much. Second, there is actually an increasing number of empirical philosophers, but they tend to publish in disciplines besides Philosophy, such as the Cognitive Sciences—Daniel Dennett, for example. I wanted to make sure we included them in this introductory class, and that is another reason why Unit III is the way it is.
2. On a personal note, one of my primary philosophical interests is the philosophy of artificial intelligence. An important philosopher who works in this field is Nick Bostrom (mentioned above). Bostrom argues that the possibility of domain-general artificial superintelligence poses an existential risk to humankind; the interested student can see the lesson titled Turing's Test from my 101 course. Among the many problems he challenges us to consider, an important one for me is the motivation selection problem: the question of how to make "friendly" AI—AI with a friendly disposition that wouldn't be inclined to harm humans. Perhaps another way of posing this problem is to ask how to make AI that follows moral rules such as, "Don't destroy all of humanity." As previously mentioned, Bostrom argues that since there is no ethical theory (or meta-ethical position) that holds a majority position, most philosophers subscribe to an ethical theory that is false. As such, they are no help when attempting to build “friendly” AI. They can't help us decide whether artificial minds deserve moral rights. They can't even tell us whether morality can be programmed for at all.
3. In chapter 2 of After Virtue, MacIntyre also makes the interesting point that if his hypothesis is true, it will seem completely false. This is because the very function of the moral and evaluative terms we use is corrupted and in disarray, and so we do not have the language by which to point out the corruption and disarray. Anarchy indeed.
4. MacIntyre makes several arguments against other moral discourses and other moral logics. The problem with each theory is unique to itself. For example, the problem with a view called divine command theory is that it relies on the existence of God—something which MacIntyre wouldn't bet on. The problem with, say, utilitarianism, is that it relies on naturalism about ethics, the view that moral properties (like moral goodness) are actually natural properties (like pleasure). The interested student should refer to MacIntyre's After Virtue for a full analysis, although you should be warned that it is a very challenging text.
5. Kant also made accurate empirical predictions. We will cover those in time.
...Of Good and Evil
It has often and confidently been asserted, that man's origin can never be known: but ignorance more frequently begets confidence than does knowledge: it is those who know little, and not those who know much, who so positively assert that this or that problem will never be solved by science.
~Charles Darwin
Back to the beginning...

Let's turn the clock back—way back... A genus is a biological classification of living and fossil organisms. This classification lies above species but below family. Thus, a genus is composed of various species, while a family is composed of various genera (the plural of genus). The genus Homo, of which our species is a member, has been around for about 2 million years. During that time there have been various species of Homo (e.g., Homo habilis, Homo erectus, Homo neanderthalensis, etc.), and these have overlapped in their existences, contrary to what you might've thought. And so, just as today there are various species of ants, bears, and pigs all existing simultaneously, H. erectus, H. neanderthalensis, and H. sapiens all existed concurrently, at least briefly. Now, of course, they are all extinct save one: sapiens (see Harari 2015, chapter 1).
The species Homo sapiens emerged only between 300,000 and 200,000 years ago, relatively late in the 2 million year history of the genus. By about 150,000 years ago, however, sapiens had already populated Eastern Africa. About 100,000 years ago, Harari tells us, some sapiens attempted to migrate north into the Eurasian continent, but they were beaten back by the Neanderthals that were already occupying the region. This has led some researchers to believe that the neural structure of those sapiens (circa 100,000 years ago) wasn’t quite like ours yet. One possible theory, reports Harari, is that those sapiens were not as cohesive and did not display the kind of social solidarity required to band together and collectively overcome other species, such as H. neanderthalensis. But we know the story doesn't end there.
Around 70,000 years ago, sapiens migrated out of Africa again, and this time they beat out the Neanderthals. Something had changed; something allowed them to outcompete other Homo species. What was it? Well, here's one theory. It was this time period, from about 70,000 to 40,000 years ago, that constitutes what some theorists call the cognitive revolution—although other theorists (e.g., von Petzinger 2017) push the start date back as far as 120,000 years ago.1 Regardless of the start date, it is tempting to suggest that it was the acquisition of advanced communication skills and the capacity for abstract thinking and symbolism, which evolved during this period, that allowed sapiens to build more robust social groups, via the use of social constructs, and dominate their environment, to the detriment of other Homo species (see Harari 2015, chapter 2). In short, sapiens grew better at working together, collaboratively and with a joint goal.2
The idea that sapiens acquired new cognitive capacities that allowed them to work together more efficiently is fascinating. It is so tempting to see these new capacities as a sort of social glue that allowed sapiens to outcompete, say, H. neanderthalensis. As anyone who has played organized sports knows: teams that work well together are teams that win. What makes this idea even more tempting is that this leap in the capacity for large-scale cooperation happened again. Between 15,000 and 12,000 years ago (the so-called Neolithic), sapiens' capacity for collective action increased dramatically once more, this time giving rise to the earliest states and empires. These were multi-ethnic social experiments with massive social inequalities that somehow stabilized and stayed together—at least sometimes. What is this social glue that allows for the sort of collectivism displayed by sapiens?
Two puzzles arise:
1. What happened ~100,000 years ago that allowed the successful migration of sapiens?
2. What happened ~15,000 years ago that allowed sapiens to once again scale up in complexity?

Perhaps evolutionary theory has the answer. Although many find it counterintuitive, the forces of natural selection have not stopped affecting Homo sapiens. Despite sapiens today being more or less anatomically indistinguishable from sapiens of 200,000 years ago, there have been other changes under the hood, so to speak. In fact, drawing on genomic surveys, Hawks et al. (2007) calculate that over the last 40,000 years our species has evolved at a rate roughly 100 times faster than in its earlier history. Homo sapiens has been undergoing dramatic changes in its recent history.
It was, in fact, the father of evolutionary theory, Charles Darwin (1809-1882), pictured left, who first suggested that it was an adaptation—an addition to our cognitive toolkit—that allowed sapiens to work together more collaboratively and in more complex relationships. He tended to describe this new capacity as making those "tribes" who have it more "well-endowed", giving them "a standard of morality", and, interestingly, he also posited that this wasn't an adaptation that occurred at the individual level but rather at the level of the group.3 Here is an important passage from Darwin's 1874 The Descent of Man:
“It must not be forgotten that although a high standard of morality gives but a slight or no advantage to each individual man and his children over the other men of the same tribe, yet that an increase in the number of well-endowed men and an advancement in the standard of morality will certainly give an immense advantage to one tribe over another… and this would be natural selection. At all times throughout the world tribes have supplanted other tribes; and as morality is one important element in their success, the standard of morality and the number of well-endowed men will thus everywhere tend to rise and increase” (Darwin 1874 as quoted in Wilson 2003: 9).
And so the intellectual heirs of Darwin's conjecture (e.g., Haidt 2012, Wilson 2003) suggest that these cognitive revolutions are likely responsible for what I'll be referring to as increases in civilizational complexity. For our purposes, civilizational complexity will refer to (a) an increased division of labor in a given society and (b) growing differences in power relations between members of that society (hierarchy/inequality), despite (c) a constant or perhaps even increased level of social solidarity (cohesiveness). With this definition in place, we can see that hunter-gatherer societies were low in civilizational complexity (very little division of labor, very egalitarian), while a modern-day democracy, say, Canada, is high in civilizational complexity (a practically uncountable number of different forms of labor, various social classes). Intuitively, the more egalitarian societies would seem more stable, yet, with our new cognitive additions, sapiens can find social order in high-diversity, massively populated, and massively unequal societies.
Does this solve our puzzles? Quite the contrary. Many more questions now arise. Why did sapiens develop this new capacity for complex communication? What kinds of communication and what kinds of ideas are available to sapiens now that weren't available to them before the cognitive revolution? What specific ideas led to the growth in civilizational complexity? Are there any pitfalls that accompany our new cognitive toolkit? Obviously, we're just getting started.
What's ethics got to do with it?
The puzzles above, which I hope you find engaging, are to some theorists clearly related to the phenomenon of ethics. In a tribe, a team, or a society, there are right and wrong ways to behave, and the more people behave in the right ways, the more stable that society will be. Others argue that the evolutionary story of our capacity for competitive, complex societies, although interesting, is unrelated to ethics. I guess ultimately that really depends on what you mean by ethics. For starters, many people assume a distinction between ethics and morals. A simple internet search will yield funny little distinctions between these two. For example, one website4 claims that ethics is the code that a society or business might endorse for its members to follow, whereas morals are an individual's own moral compass. But this distinction already assumes so much. We'll be covering a theory in this class that tells you that the moral code society endorses is the only thing you have to abide by; this renders "your own morals" superfluous. We'll also look at a view that argues that the only thing that matters is what you think is right for you, so what society claims doesn't matter. We'll even cover a view that argues that both what society endorses and what you think is right are irrelevant: morality comes from reason. So this distinction is useless for us, since it assumes that we've already identified the correct ethical theory.

So then what is the study of ethics? Lamentably, there is no easy answer to this question. For many philosophers, to consider the origin of our cognitive capacities and how they allow us to work together more effectively is completely irrelevant to the field of ethics. For them, ethics is the study of universal moral maxims, the search for the answer to the question, "What is good?". For Darwin, as we've seen, it was perfectly sensible to call sapiens' capacity for collective action a 'moral' capacity. And, unfortunately, there are even more potential answers to the question "What is ethics?"
Thankfully, I have it on good authority that we don't need to neatly cordon off just what ethics is at the start. One of the most influential moral philosophers of the 20th century, Alasdair MacIntyre, begins his A Short History of Ethics by making the case that there is in reality a wide variety of moral discourses; different thinkers at different times have conceived of moral concepts—indeed the very goal of ethics—in radically different ways. It is not as if when Plato asked himself “What is justice?” he was attempting to find the same sort of answer that Hobbes was looking for when reflecting on the same topic. So, it is not only pointless but counterproductive to try to delimit the field of inquiry at the outset. In other words, we cannot begin by drawing strict demarcation lines around what is ethics and what is not.
If MacIntyre is correct, then the right approach is a historical one. We must place moral discourse in its historical context to understand it correctly. Moreover, this will allow us to see the continuity of moral discourse. It is the case, as you shall see, that one generation of ethicists influences the generation after them, and so one can see an "evolution" of moral discourse. This is the approach we'll take in this course. For now, then, when we use the word ethics, we'll be referring to the subfield of philosophy that addresses questions of right and wrong, the subfield that attempts to answer questions about morality.

You might be wondering what the field of philosophy is all about. I have a whole course that tries to answer that question. Let me give you my two sentence summary. There are two approaches to doing philosophy: the philosophy-first approach (which seeks fundamental truths about reality and nature independently and without the help of science) and the science-first approach (which uses the findings of science to help steer and guide its inquiries). The philosophy-first approach (Philosophy with a capital "p") was dominant for centuries but fell into disrepute with the more recent success of the natural sciences, thereby making way for the science-first approach (the philosophy with a lowercase "p" that I engage in)—although many philosophy-first philosophers still haven't gotten the memo that science-first is in. Obviously the preceding two sentences are a cartoonish summary of the history of philosophy, a history that I think is very instructive. The interested student should take my PHIL 101 course.
Important Concepts
Course Basics
Theories
What we're looking for in this course is a theory, one that (loosely speaking) explains the phenomenon of ethics/morality.5 Now it's far too early to describe all the different aspects of ethics/morality that we'd like our theory to explain, so let's begin by just wrapping our minds around what a theory is. My favorite little explanation of what a theory is comes from psychologist Angela Duckworth:
“A theory is an explanation. A theory takes a blizzard of facts and observations and explains, in the most basic terms, what the heck is going on. By necessity, a theory is incomplete: it oversimplifies. But in doing so, it helps us understand” (Duckworth 2016: 31).
There are some basic requirements to a theory, however; these are sometimes referred to as the virtues of theories. What is it that all good theories have in common? Keas (2017) summarizes for us:
“There are at least twelve major virtues of good theories: evidential accuracy, causal adequacy, explanatory depth, internal consistency, internal coherence, universal coherence, beauty, simplicity, unification, durability, fruitfulness, and applicability” (Keas 2017).
These theoretic virtues are best learned during the process of learning how to engage in science—a process I suspect that you are beginning since you are taking a class in my division, Behavioral and Social Sciences. However, we can highlight a few important theoretic virtues right now.

Symbol for the Illuminati, protagonists in several preposterous grand conspiracies.
For a theory to have explanatory depth means that it describes more of the chain of causation around a phenomenon. In other words, it explains not only the phenomenon in question but also various nearby phenomena that are relevant to the explanation of the phenomenon in question. Another important theoretic virtue is simplicity: explaining the same facts as rival theories, but with less theoretical content. In other words, if two theories explain some phenomenon but one theory assumes, say, secret cabals intent on one-world government while the second does not, you should go with the second, simpler theory. (Why assume crazy conspiracies when human incompetence can explain the facts just as well?!) This principle, by the way, is also known as Ockham's razor. Lastly, a good theory should have durability; i.e., it should be able to survive testing and perhaps even accommodate new, unanticipated data.
Can we find a theory that explains the phenomenon of ethics/morality and has the abovementioned theoretic virtues? We'll see...
Philosophical jargon
For better or worse, some of the first inquiries into ethics/morality came from the field of philosophy. So, in order to learn about these first attempts to grapple with ethics, we'll have to learn some philosophical jargon. Thankfully, the theoretic virtues of philosophical theories often overlap with the theoretic virtues of social scientific theories. As such, let's begin with a theoretic virtue that is a fundamental requirement of any theory: logical consistency—or, as Keas puts it, internal consistency. As you learned in the Important Concepts, logical consistency just means that the sentences in the set you are considering can all be true at the same time. In other words, none of the sentences contradict or undermine each other. This seems simple enough when it comes to theories that are only a few sentences long. But we'll be looking at theories with lots of moving parts, and it won't be obvious which theories have parts that conflict with each other. Thus, we'll have to be very careful when assessing ethical theories for logical consistency.
One more thing about logical consistency: you care about it. I guarantee you that you care about it. If you've ever seen a movie with a plot hole and it bothered you, then you care about logical consistency. You were bothered that one part of the movie conflicted with another part. This is being bothered by inconsistency! Or else think about when one of your friends contradicts him/herself during conversation. If this bothers you as much as it bothers me, we can agree on one thing: consistency matters.
How do we know if one philosophical theory is better than another? For this, we'll have to look at one of the main forms of currency in philosophy: arguments. In philosophy, an argument is just a set of sentences given in support of some other sentence, i.e., the conclusion. Put another way, it is a way of organizing our evidence so as to see whether it necessarily leads to a given conclusion. You'll become very familiar with arguments as we progress through this course. For now, take a look at this example, comprised of two premises (1 & 2) and the conclusion (3). If you believe 1 & 2 (the evidence), then you have to believe 3 (the conclusion).
- All men are mortal.
- Socrates is a man.
- Therefore, Socrates is mortal.
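To make the notions of consistency and validity a little more concrete, here is a minimal sketch of how they can be checked mechanically for simple propositional claims. This is my own illustration, not something from the readings; the variable names and the way the Socrates argument is encoded are assumptions made purely for the example.

```python
from itertools import product

def truth_assignments(variables):
    """Yield every possible True/False assignment to the given variable names."""
    for values in product([True, False], repeat=len(variables)):
        yield dict(zip(variables, values))

def consistent(claims, variables):
    """A set of claims is logically consistent if some assignment makes them all true at once."""
    return any(all(claim(a) for claim in claims) for a in truth_assignments(variables))

def valid(premises, conclusion, variables):
    """An argument is valid if no assignment makes all premises true and the conclusion false."""
    return not consistent(premises + [lambda a: not conclusion(a)], variables)

# Toy encoding of the Socrates argument using two propositions:
#   m = "Socrates is a man", d = "Socrates is mortal"
premises = [
    lambda a: (not a["m"]) or a["d"],  # "All men are mortal" (here: if a man, then mortal)
    lambda a: a["m"],                  # "Socrates is a man"
]
conclusion = lambda a: a["d"]          # "Socrates is mortal"

print(valid(premises, conclusion, ["m", "d"]))                    # True: accepting 1 & 2 commits you to 3
print(consistent(premises + [lambda a: not a["d"]], ["m", "d"]))  # False: denying 3 contradicts 1 & 2
```

The point is simply to illustrate the definitions above: consistency asks whether a set of sentences can all be true together, and validity asks whether accepting the premises forces you to accept the conclusion.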
Food for Thought...
Roadblocks: Cognitive Biases
For evolutionary reasons, our cognition comes with built-in cognitive biases (see Mercier and Sperber 2017). These biases are wide-ranging and can affect our information processing in many ways. Most relevant to our task is the confirmation bias: our tendency to seek, interpret, or selectively recall information in a way that confirms our existing beliefs (see Nickerson 1998). Relatedly, the belief bias is the tendency to rate the strength of an argument on the basis of whether or not we agree with its conclusion. We will see various cases of these in this course.
I'll give you two examples of how this might arise. Although the first happened long ago, it still stands out in my memory. One student felt strongly about a particular ethical theory. This person would get agitated when we would critique the view, and we couldn't have a reasonable class discussion about the topic while this person was in the room. I later found out that the theorist highlighted in that theory worked in the discipline of anthropology, the same major that the student in question had declared. But the fact that a theory's main proponent works in your field is not a good argument for the theory. In fact, I can cite various anthropologists who don't side with the theory in question. As a second example, take the countless debates that I've had with vegans about the argument from the Food for Thought section. There is an objection to that example every time I present it. Again, this is not to say that veganism is false or that animals don't have rights, or anything of the sort(!). But we have to be able to call bad arguments bad. And that is a bad argument. I'll give you good arguments for veganism and animal rights. Stay tuned.
As an exercise, try to see why the following are instances of confirmation bias:
- Volunteers given praise by a supervisor were more likely to read information praising the supervisor’s ability than information to the contrary (Holton & Pyszczynski 1989).
- Kitchen appliances seem more valuable once you buy them (Brehm 1956).
- Jobs seem more appealing once you’ve accepted the position (Lawler et al. 1975).
- High school students rate colleges as more adequate once they’ve been accepted into them (Lyubomirsky and Ross 1999).
By way of closing this section, let me fill you in on the aspect of confirmation bias that makes it a truly worrisome phenomenon. What's particularly worrisome, at least to me, is that confirmation bias and high knowledge are intertwined—and not in the way you might think. In their 2006 study, Taber and Lodge gave participants a variety of arguments on controversial issues, such as gun control. They divided the participants into two groups: those with low and those with high knowledge of political issues. The low-knowledge group exhibited a solid confirmation bias: they listed twice as many thoughts supporting their side of the issue as thoughts going the other way. This might be expected. Here's the interesting (and worrisome) finding. How did the participants in the high-knowledge group do? They found so many thoughts supporting their favorite position that they gave none going the other way. The conclusion is inescapable. Being more informed—i.e., being more knowledgeable in a given domain—appears only to amplify our confirmation bias (Mercier and Sperber 2017: 214).
Ethical theory
The first step in this journey is to look at various ethical theories. In this first unit, we will be focusing on seven classical ethical theories from the field of philosophy. Given how influential they are in contemporary ethical theory, you will likely feel some affinity for aspects of these theories. In fact, you might be convinced that each theory we cover is the right one, at least at the time we are covering it. This might be even more so the case with the first one, which is an ambitious type of theory that attempts to bridge politics and ethics. Moreover, central to this view is the notion that humans naturally and instinctively behave in a purely self-interested way, a view that many find intuitively true.
It's time to take a look at the pieces of the puzzle...
FYI
Supplemental Material—
- Reading: Internet Encyclopedia of Philosophy, Entry on Ethics, Section 2
- Reading: Internet Encyclopedia of Philosophy, Entry on Fallacies
Note: Most relevant to the class are Sections 1 & 2, but sections 3 & 4 are very interesting.
- Text Supplement: Useful List of Fallacies
Related Material—
- Video: TEDTalk, Stuart Firestein: The pursuit of ignorance
- Podcast: You Are Not So Smart Podcast, Interview with Hugo Mercier
- Note: Transcript included.
Footnotes
1. Von Petzinger (2017) makes the case that her field, paleoanthropology, can further the study of the evolution of symbolism and the capacity for abstract thinking in human cognition. Throughout her book, she details how, contrary to early paleoanthropological theories, the capacity for symbolism didn't start around 40,000 years ago but much earlier. Her work, as well as that of others, shows that consistently utilized, non-utilitarian abstract geometric patterns can be seen from at least about 100,000 years ago, and perhaps as far back as 120,000 years ago(!), in Africa. Her argument is that there was a surprising degree of conformity and continuity in the drawing of different signs across these time periods and across vast geographic distances. It's even the case that some patterns waxed and waned in popularity. This shows that sapiens were already cognitively modern by then.
2. What brought about the cognitive revolution is actually hotly disputed (see Previc 2009). In fact, some theorists argue that it doesn’t even strictly-speaking exist (see Ramachandran 2000).
3. The theory of group selection is far beyond the scope of this course. However, the interested student can refer to Wilson (2003) for a defense of it, as well as an application of the theory to the question of the evolutionary origins of religion.
4. The website in question is diffen.com. If the page I visited is representative of the quality of distinctions that it makes, then you should not at all trust this website.
5. In chapter 4 of Failure: Why Science Is So Successful, Firestein makes the case that the concepts of hypothesis and theory are outdated and not actively used by any scientists he knows—he is a biologist himself. Instead, they've replaced the words hypothesis and theory with the concept of a model, which has less of an air of finality; it's more of a work in progress. This is because the process of forming a model is more in line with how science is actually done(!), as opposed to the "scientific method"—which Firestein rails against.
The Mind's I
“A man always has two reasons for what he does—
a good one and the real one.”
~J. P. Morgan
The Afflicted City
We begin our survey of ethical theories—or, as I'll sometimes call them, moral discourses—in more or less the order in which they appeared historically. The thing about some moral discourses is that they tend to come in and out of fashion, as you'll see in this lesson. Although we'll be covering more modern versions of them, the inspiration for the two theories we are covering today is ancient. This is because thinkers far back in the Western tradition have been asking why we behave the way we do, why we build big cities and empires, why we sometimes work together and at other times slaughter each other. We know that Western thinkers had theories about this because the works of some very important thinkers have survived to tell us about them. One such thinker is Plato (circa 425 to 348 BCE).
Plato wrote primarily in dialogue form. Typically, his dialogues take the form of an at least partly fictionalized conversation between some ancient thinker and Plato's very famous teacher, Socrates. In Plato's masterwork Republic, the character of Socrates attempts to define justice while responding to the various objections of other characters, who express views that were likely held by some thinkers of Plato's time. In effect, this might be Plato's way of defending his view against competing views of the time, although there is some debate about this.
Barley bread.
In the dialogue, after some initial debate, the characters decide to build a hypothetical city—a city of words—so that during the building process they can study where and when justice comes into play. At first they build a small, healthy city. Everyone plays a role that serves the others: there is a housebuilder, a farmer, a leather worker, and a weaver, so that the inhabitants have all the essentials. At this point, a character named Glaucon objects to the project. He argues that this is not a real city but a "city of pigs", a city where people would be satisfied with the bare minimum. A real city, with real people, would want luxuries and entertainment, and its people would be dissatisfied with eating barley bread as their primary source of sustenance. So, at Glaucon's behest, the characters expand the city to give its inhabitants the luxuries they would likely want. Soon after, the characters realize the city will have to make war on its neighbors; it will need an army, and it will need rulers.
Ground Rules
The way in which Plato's story proceeds after this point won't be covered here.1 Suffice it to say that Republic is one of the most influential documents in the history of political philosophy—and the story doesn't at all go in the direction you might think it does. In any case, what is relevant to us is the view endorsed by Glaucon, which we will take a closer look at in a moment. Before we do so, however, it is important to lay down some ground rules for assessing ethical theories. I've made a checklist of some basic desiderata, i.e., things that we want from an ethical theory. Let's review it briefly.

Ethical Theory Checklist
First off, we'd like an ethical theory that fits in with our moral intuitions. We don't want an ethical theory that mysteriously suggests that murder is ok. That would be a huge red flag. Next, we want an ethical theory that explains how we actually form moral judgments. If, for example, an ethical theory claims that we make our moral judgments by flipping a coin, then we know that ethical theory isn't very strong. Sure, maybe some people have done this, but making moral decisions seems to be more deliberate than that; it seems like we are usually very conflicted about how to proceed and wouldn't be satisfied by the outcome of a coin toss. Next, ideally we want an ethical theory that can resolve our moral debates. For example, some people think that capital punishment is morally abhorrent, while others think it is the proper thing for a society to do, to punish those who break its most fundamental laws. We'd like a theory that convincingly shows that one of these positions is right and the other is wrong. Lastly, we hope that the theory in question will help us shed light on how and why civilizational complexity has increased in the way that it has.
Important Concepts+
Glaucon's Challenge
Picking up where we left off, the characters in Plato's dialogue Republic begin to discuss how a luxurious city, one in which citizens would enjoy delicacies and splendor, would come into being. The need for this arose from what I will call Glaucon's challenge—a challenge I'm not sure Socrates ever satisfactorily answered. The challenge has to do with the question of who is happier: the perfectly just person or the perfectly unjust person. In other words, Glaucon wants to know whether it is better to be good or bad. He suggests that it might be better to be perfectly unjust, and he uses the story of the ring of Gyges to show just how advantageous being bad can be. In this story, a man comes into possession of a ring that makes him invisible, and before long he has overthrown a king and taken over his kingdom. Glaucon's challenge is this: show me that I'm wrong; show me that it is better to be just.
It is important to note that the sort of answer that Socrates, Glaucon, and the others were looking for might not be what you would look for in a more modern context. MacIntyre (2003, chapter 8) draws a basic distinction between Greek ethics and modern ethics: Greek ethics is concerned with the question, “What am I to do if I am to fare well?” Modern ethics is concerned with the question, “What ought I to do if I’m to do right?” In other words, for the Greeks, being good and being happy were either the same thing or at least siblings. For many of us, though, doing good is not always what will make us happiest.

The Death of Patroclus.
To supplement this point, consider the history of the Greek word agathos, which is the ancestor of our word good. In Homeric times, the word was used to describe noblemen who were successful in battle and in ruling. It also implied that they had the wealth and status to train so that they could be successful in those pursuits. These were the highest aims of a Greek. The attribution of agathos to others was inextricably linked to such descriptive facts: success in battle, wealth, and so on. Notice, though, that these are not our highest aims now—at least for most of us, I think. For us, doing the right thing is often very far from what is most personally advantageous. And so, what the Greeks meant by good is fairly distant from what most of us mean by it.
With this context in place, you can see that Socrates had his work cut out for him. Could he convince Glaucon that being just is better than being unjust? Well, as I've said, you'll have to read Republic on your own to find out. For our purposes, there are just a few things to note. First off, Glaucon seems to be assuming that pleasure is the only intrinsic good and that it would be rational to maximize our own pleasure. Moreover, he adds an important psychological insight—one that modern science has confirmed (Tversky and Kahneman 1991). It is this: we feel the bad more than the good. In psychological jargon, this is called loss aversion. Out of these assumptions, Glaucon builds his theory. Maybe we only band together to avoid even worse suffering. Maybe a long time ago we just agreed not to commit injustices against each other because that would be, on balance, better than constantly being at risk of having injustice committed against us. We came to the realization that it is better to live and let live than to always have to be on guard against others. This is Glaucon's case against justice: justice isn't an intrinsic good; it's just the best we can do.
“Hence, those who have done and suffered injustice and who have tasted both—the ones who lack the power to do it and avoid suffering it—decide that it is profitable to come to an agreement with each other neither to do injustice nor to suffer it. As a result, they begin to make laws and covenants; and what the law commands, they call lawful and just. That, they say, is the origin and very being of justice. It is in between the best and the worst. The best is to do injustice without paying the penalty; the worst is to suffer it without being able to take revenge. Justice is in the middle between these two extremes. People love it, not because it is a good thing, but because they are too weak to do injustice with impunity” (Republic, 359a).
Sidebar
Hobbes' Leviathan
As I mentioned in the Sidebar, moral discourses tend to go in and out of fashion. Just as Glaucon gave an account of justice that was based on a social contract, nearly two thousand years later, Thomas Hobbes (1588-1679) would arrive at much the same conclusion.
Hobbes, like other philosophers of the period, had an interesting profession: he was a sort of live-in scholar for noblemen. The role left little time for marriage and family life, as was the case with both Hobbes and Locke, but it allowed for intellectual and philosophical pursuits. Hobbes' role as servant to his first employer, William Lord Cavendish, entailed functioning as a secretary, tutor, financial agent, and general advisor.
Hobbes' views on ethics/politics will seem very similar to the views of Glaucon, if Glaucon indeed held these views. Both assume that hedonism and psychological egoism are true, and both claim that prosocial behavior is merely a state of affairs we submit to purely out of self-interest. Morality, they say, is a convenient fiction. In short, we submit to an authority and give it a monopoly on violence because the alternative—the state of nature, where everyone is at war with everyone else—is substantially worse. But remember: justice and morality are mere social contracts; if society collapses, you can feel free to ignore these contracts. We'll call this view social contract theory, or SCT for short.
“Hereby it is manifest that during the time men live without a common power to keep them all in awe, they are in that condition which is called war; and such a war as is of every man against every man... In such condition there is no place for industry, because the fruit thereof is uncertain: and consequently no culture of the earth; no navigation, nor use of the commodities that may be imported by sea; no commodious building; no instruments of moving and removing such things as require much force; no knowledge of the face of the earth; no account of time; no arts; no letters; no society; and which is worst of all, continual fear, and danger of violent death; and the life of man, solitary, poor, nasty, brutish, and short” (Thomas Hobbes, Leviathan, i. xiii. 9)
Hobbes and SCT
Is Hobbes right? Certainly we have seen unthinkable acts of violence and theft when there is a breakdown in central authority—from the LA riots in 1992 to involuntary euthanasia during the Hurricane Katrina disaster, and, more recently, the looting that accompanied the George Floyd protests.2 We will give a more careful assessment of SCT in the next lesson, but for now I want you to think about how the breakdown of central authority is at least correlated with some instances of immorality and blatant disregard for the law. Hobbes' theory, however, mostly rests on one claim about human nature: psychological egoism. Is it truly the case that all human actions are driven by self-interest?
Food for Thought...
Ethical egoism
So our first theory, SCT, is based primarily upon the assumptions of hedonism and psychological egoism. Both have been challenged and we will look at those objections. However, before moving on to that, I'd like to introduce you to a close cousin of Hobbes' SCT: ethical egoism. Ethical egoism makes the same starting assumptions as Hobbes, but it is—perhaps we can say—less ambitious. It does not seek to explain how societies coalesce, or how the laws could stabilize social order, or any of Hobbes' more grandiose claims. Ethical egoism is much simpler: an action is right if, and only if, it is in the best interest of the agent performing the action. Here is a simple argument for the view.
- If the only way humans are able to behave is out of self-interest, then that should be our only moral standard.
- All human actions are done purely out of self-interest, even when we think we are behaving selflessly (psychological egoism).
- Therefore, our moral standard should be that all humans should behave purely out of self-interest.

Mandeville's The Fable of the Bees.
In a nutshell, this argument states that if all we can do is behave in a self-interested way, then that’s all we should do. Premise 1 seems reasonable enough. A thinker that we will come to know eventually, Immanuel Kant, argued that if we ought to do something, then that implies that we can do it. Although Kant did not believe in psychological egoism, we can accept his dictum that we should be able to do what we are required to do. This implies that if we can't help but act in a self-interested way, then that's the only rational standard we can be held to.

Ayn Rand.
If it were up to me, I wouldn't even bother covering ethical egoism (EE for short). Hobbes' SCT almost seems to 'contain' EE, but is much better argued for. But I cover the view for two reasons. One is that, like SCT, this type of moral discourse has cropped up in history multiple times. In the early 18th century, for example, Bernard Mandeville advocated something very much like this view in his 1714 The Fable of the Bees. Then, in the 20th century, novelist and philosopher Ayn Rand endorsed greed as being good—a sentiment that is very much in line with ethical egoism.3
In any case, the proponents of ethical egoism argue that psychological egoism can explain all human actions. We can admit that it does seem to account for many of our behaviors. For one, sometimes people are selfish. Sometimes, however, people cooperate and behave in a seemingly altruistic way, which is to say for the benefit of others. Egoists claim their view can account for this sort of behavior too, because it’s possible people behave this way only to get the benefits of working cooperatively, to enjoy moral praise (from themselves and others), or simply to avoid feeling guilt. Let's be honest: some of you don't lie or steal simply because you couldn't bear the guilt.
Does ethical egoism meet our desiderata? Well, it might fit with some of our moral intuitions. Most of us think that murder is wrong. An ethical egoist might agree, given that the chances of being caught are pretty good, and being caught for murder is definitely not in our best interest. Does it reflect how we form our moral judgments? Maybe for some of us, but it's hard to say definitively that everyone reasons in this way about moral matters. The egoist can push back, though. They might argue that even Mother Teresa acted in a self-interested way. After all, if her faith was well-placed, she did get a reward for her life's work: eternal bliss in heaven. Does it resolve our moral debates? Hardly. This is a definite setback for ethical egoism. Lastly, does it solve the puzzle of human collective action? Not really. It doesn't even address it.
And so we have our first two ethical theories: Hobbes' social contract theory and ethical egoism. I think you should begin to associate the views with the name of the relevant thinker, so try to think of Hobbes when I discuss SCT from now on. As for EE, you can either think of Mandeville or Rand, but, to be honest, I always think of the lead character from the show Breaking Bad, Walter White. SPOILER ALERT: Early on, he said he did it for his family. But anyone that got to the end of the series knows the truth: he did it for himself. And this brings me to the second reason why, despite my better judgment, I do indeed cover EE: some of you actually like the view.
The ethical theories covered today took as their starting point two assumptions. Hedonism is the view that pleasure is the only intrinsic good. In other words, a hedonist believes both that good is equivalent to pleasure and that there is no other thing that qualifies as good for its own sake. Psychological egoism is the view that all human actions are rooted in self-interest.
Thomas Hobbes, sounding very much like Glaucon in Plato's Republic, argued that all prosocial behavior is merely a state of affairs we submit to purely out of self-interest. In his social contract theory, morality is merely a convenient fiction. In short, we submit to an authority and give it a monopoly on violence because the alternative, the state of nature where everyone is at war with everyone else, is substantially worse.
Reflecting on Hobbes' theory for how we form moral judgments is illuminating. For Hobbes, moral judgments are formed by our feelings and emotions, which are themselves influenced by self-interest. Since the self-interest of individuals will drive them towards conflict with each other sooner rather than later, Hobbes suggests that a set of stabilizing laws—a sort of fictional but useful morality—be established so as to avert catastrophe.
Ethical egoism, flavors of which seem to have been advocated by Bernard Mandeville and Ayn Rand, is simply the view that an act is right if, and only if, it is in the best interest of the person doing the action. Under this way of thinking, many things are technically morally permissible, including stealing if you can get away with it and helping others all your life so you can get into heaven.
FYI
Suggested Reading: Plato, The Republic, Book II
- Note: Read from 357a to 367e.
TL;DR: The School of Life, POLITICAL THEORY: Thomas Hobbes
Supplemental Material—
- Video: Peter Millican, Introduction to Thomas Hobbes
Advanced Material—
- Reading: Stanford Encyclopedia of Philosophy, Entry on Egoism, Sections 1 & 2
- Reading: Thomas Hobbes, On the Social Contract
- Reading: Plato, The Republic, Book IX
- Book: James Coleman, Foundations of Social Theory
- Note: This is a more modern treatment of what is now dubbed “rational choice theory.”
Footnotes
1. I cover Plato's Republic in some detail in my PHIL 105: Critical Thinking and Discourse.
2. I recommend the documentary LA 92 on the social injustice and unrest leading up to the LA riots.
3. Rand is best known for her novels, Atlas Shrugged and The Fountainhead. She called her ethical theory objectivism; however, we are using this label for another theory later on. What she meant by objectivism was that our lives are governed by some central pursuit, i.e., some objective. We will be using the label, however, to denote the view that moral terms are objective and mind-independent, i.e., existing independently of humans.
Eyes in the Sky
An unjust law is no law at all.
~St. Augustine
Making room...
Last time we covered two ethical theories: Hobbes' social contract theory (SCT) and ethical egoism (EE). Of these two, SCT is clearly the more ambitious theory. It seeks to explain both the origins of civil society and our moral judgments; and it grounds the theory in an account of human nature. This last point is, according to MacIntyre (2003: 121-145), Hobbes' lasting contribution to moral discourse: to point out that a theory of ethics has to be grounded in a valid account of human nature. Note, however, that MacIntyre is not agreeing with Hobbes that all of our behaviors are driven by our self-interested nature; rather, he is noting that if we are to produce a viable ethical theory, it must come in tandem with a theory of human nature.
SCT, at least in my way of looking at things, has an edge over EE in another way. In his influential Ethics: Inventing Right and Wrong, John Mackie (1990) discusses how pervasive social contract theory has been, spanning back millennia. It appears, then, that SCT is intuitively appealing. This is because something about human society, with its built-in complexity and its hierarchy, seems out of place in nature; no other animals have achieved this level and kind of cooperation (although some insects have come close).1 Mackie acknowledges that it looks as if there was a sudden organizing principle given to humans. Law, justice, and morality abruptly appear as mutual agreements to work together, a type of cooperation which is not seen elsewhere in the animal kingdom.
“This [SCT] is a useful approach, which has been stressed by a number of thinkers. There is a colourful version of it in Plato’s dialogue Protagoras, where the sophist Protagoras incorporates it in an admittedly mythical account of the creation and early history of the human race. At their creation men were, as compared with other animals, rather meagerly equipped. They had less in the way of claws and strength and speed and fur or scales, and so on, to enable them to find food and to protect them from enemies and the elements.... Finally Zeus took pity on them and sent Hermes to give men aidōs (which we can perhaps translate as ‘a moral sense’) and dikē (law and justice) to be the ordering principles of cities and the bonds of friendship” (Mackie 1990: 108).
But SCT has its detractors. Despite acknowledging its influence, many thinkers do not believe SCT is accurate or tells the whole story. For example, linguist and developmental psychologist Michael Tomasello (2014) believes that even if SCT is intuitively appealing, it is wrong in at least two ways. First off, Tomasello argues that complex social arrangements in human groups could not have arisen due to contracts, since the very act of making contracts itself requires various social conventions. In other words, Tomasello thinks that Hobbes' SCT puts the cart before the horse. Contracts can't be the origin of civilizational complexity, since they are only possible once a certain degree of civilizational complexity has already been achieved.
“Some conventional cultural practices are the product of explicit agreement. But this is not how things got started; a social contract theory of the origins of social conventions would presuppose many of the things it needed to explain, such as advanced communication skills in which to make the agreement” (Tomasello 2014: 86).

Second, Tomasello argues that certain psychological capacities would've needed to evolve before any contracts could be made. However, these requisite psychological capacities could not have arisen—or at least did not arise—in a species with purely egoistic motives. According to Tomasello's theory, the evolution of our capacity for language required that we first evolve the capacity to act for the benefit of others, at least sometimes. This is why human language is, more often than not, informative: we evolved to share truthful information with each other. In non-human primates, on the other hand, communication typically takes the form of imperatives, i.e., commands. And so, humans first evolved the capacity for non-egoistic motives along with language, and only then did humans start making contracts. In other words, a pillar of SCT, psychological egoism, might turn out to be false.
Tomasello's point is well-taken. But it gets worse if we look at data from other empirical disciplines. As Singer (2011: 3-4) points out, fossil finds show that five million years ago our ancestor Australopithecus africanus was already living in groups. So, Singer concludes, since Australopithecus did not have the language capacities and full rationality requisite for contracts, contracts cannot be a necessary antecedent to social living, contrary to what Hobbes, Rousseau, and others have argued.
Another criticism of SCT comes from the study of history. New evidence is leading scholars of deep history to the conclusion that the earliest states could not hold on to their populations; they had to use coercion to replenish their pool of subjects. Scott (2017) attempts to dispel the notion that states came first and that state bureaucracy then facilitated agriculture. Nothing, Scott argues, could be further from the truth. He notes that there are instances of agriculture that pre-date the formation of states by as much as 4,000 years, thereby discrediting the notion that state formation is what facilitated agriculture. And so, although the timeframes varied by region, in general there was a several-thousand-year period during which humans were practicing agriculture but there was not yet a centralized authority.

Scott explains that earlier scholars, not unreasonably, projected the aridity of present-day Mesopotamia back onto ancient times and figured that only states could've facilitated the irrigation agriculture would have required. In other words, when we look at where the earliest states formed (Mesopotamia), we see how dry it is now and assume that it's been dry all along. We naturally infer that only state bureaucracies could've arranged for the coordination needed to irrigate those lands. However, we now know, through climate science, that the region was not arid back then; we can see more clearly that agriculture came first, and then (for some other reason) came the earliest states.
If true, this suggests that Hobbes' account of how early states formed is false. It was not a mutual agreement to establish a common authority; instead, it was more like a coalition of warlords forcing subjects to settle in their lands. We now call these warlords the government.2
“If the formation of the earliest states were shown to be largely a coercive enterprise, the vision of the state, one dear to the heart of such social contract theorists as Hobbes and Locke, as a magnet of civil peace, social order, and freedom from fear, drawing people in by its charisma, would have to be re-examined. The early state, in fact, as we shall see, often failed to hold its population. It was exceptionally fragile epidemiologically, ecologically, and politically, and prone to collapse or fragmentation. If, however, the State often broke up, it was not for lack of exercising whatever coercive powers it could muster. Evidence for the extensive use of unfree labor, war captives, indentured servitude, temple slavery, slave markets, forced resettlement in labor colonies, convict labor, and communal slavery (for example, Sparta’s helots) is overwhelming” (Scott 2017: 25-9; emphasis added).
All this to say that SCT has some problems. As of now, we are leaving the status of the truth of psychological egoism as an open question. And so we move towards another ethical theory, this one having to do with supernatural beings...
Divine Command Theory
Divine Command Theory (DCT) is easy to summarize. Just as with SCT, the moral discourse behind DCT goes back millennia. We will be covering the version of DCT that was most popular in the Middle Ages, as embodied in the work of William of Ockham (ca. 1287-1347). Nonetheless, it should be clear that there are hints of this view far back in the history of philosophy, as evident in Plato's dialogue Euthyphro, as well as long after Ockham, as in the work of Martin Luther. The following is taken from Keele's (2010) intellectual biography of Ockham, which begins with the historical and ideological context into which Ockham was born.
Per Keele (2010: 27), DCT typically entails the following theses:
- God is the source of moral law.
- What God forbids is morally wrong.
- What God allows is morally permissible.
- The very meaning of “moral” is given by God’s commands.
In a nutshell, the divine command theorist argues that morality has no cause but God. Morality simply is what God has stipulated it to be.
The Strangeness of DCT
Some people initially don't see how counterintuitive DCT really is. They think they understand it, but they haven't thought the whole thing through. To show you the strange implications of DCT, we can look at an ancient debate between Socrates and Euthyphro from Plato's dialogue Euthyphro. The setup of the dialogue is not terribly important (although you can watch this video if you are interested). The important part is that, in a conversation on what piety means, Socrates corners Euthyphro into a strange dilemma.

Stealing, which, according to divine command theorists, is only wrong because God said it is wrong.
To better understand this dilemma, let's do two things. First, let's not talk of "gods". The Greeks were polytheists, but most people reading this are probably not. We'll assume monotheism (belief in one god) here.3 Second, let's replace the word piety with morality. The dilemma that Euthyphro finds himself in, then, is that either he defines morality as being invented by God or he defines morality as independent of God (but still being a code that God wants you to follow).
The first option makes morality very strange. It makes it so that some actions are wrong only because God said so. Had God not said anything, they would've been permissible or even morally required. But it seems obvious some things, like murder, are wrong no matter what. The second option doesn't actually define morality. It just tells you that God isn't directly involved with its creation but that God still wants you to follow moral rules. The divine command theorist bites the bullet and chooses the first option.
“[For Ockham and other divine command theorists], God, by his absolute power, was so free that nothing was beyond the limits of possibility: he could make black white and true false, if he so chose: mercy, goodness, and justice could mean whatever he willed them to mean. Thus not only did God’s absolute power destroy all [objective] value and certainty in this world, but his own nature disintegrated [in terms of the capacity for rational reflection]; the traditional attributes of goodness, mercy and wisdom, all melted down before the blaze of his omnipotence” (Leff 1956: 34; interpolations are mine).
There are two more ways in which DCT is strange. The second strangeness is that it seems to trap believers in a maze of circular reasoning. How so? Notice that the divine command theorist argues that moral values can only be grounded in God’s commands. However, they also (typically) argue that God is good. But(!) the very meaning of good was established by God. And so, it seems as if God basically defined his way into being good. There appears, then, to be no non-circular way of arguing that God is in fact good.

The third way in which DCT is strange is in what it makes moral properties out to be. Recall that for Hobbes and Glaucon, good could be equated with pleasure. Although this is questionable—as we will see when we cover David Hume—there is something appealing in this view, which we've been referring to as hedonism. What's appealing about hedonism is that it is a form of naturalism: the view that moral properties are natural properties. Why is this appealing? Because it certainly is the case that we at least know what pleasure is. We know it has to do with neurotransmitters in the brain, we know the kinds of things that give rise to pleasure (calorie-rich foods, sex, etc.), and we can come to know even more about it with the help of science. However, now think about what DCT says moral properties are: they're actually God's commands. They are the proclamations of an all-powerful, supernatural being. Can you even begin to picture that? Can you really wrap your mind around understanding a divine command? If naturalism is true, then all is well and good; I can picture, say, dopamine. I even know its chemical makeup (see image). But if DCT is true, then moral properties become a little less intelligible. Moral properties, in other words, turn out to be non-natural properties: non-physical, abstract, and invulnerable to being probed by the sciences.
Sidebar
An argument for DCT from moral properties
Interestingly enough, the strangeness of moral properties that DCT endorses could actually be used to motivate an argument in support of DCT. The argument is that non-naturalism gives the best account of why morality must be followed. Let me give you an example...
Imagine something you really think is morally wrong. Ok, now what if I ask you, "Why am I not allowed to do that thing?" What if you say, "Well, it's because it would make someone unhappy." If I were a real psychopath, I could say, "I kinda don't care." You try saying other things too. Maybe you'd tell me that it's not in my best interest, or that it is against the law. I can respond in the same way. "That doesn't matter to me." But what if instead you told me that there is an all-powerful being who told all of us that we cannot perform the action in question. Moreover, if I break this rule, I will suffer punishment for all eternity, I would live and re-live unthinkable horrors in a lake of fire. THAT might get me to think, "Hmmm... maybe this action shouldn't be done after all." There's something about moral predicates (like "______ is morally wrong") that assumes something supernatural is going on. It seems like the property of moral wrongness has not-to-be-done-ness built into it. It's hard to express this, but the idea is that, if you really believe something is morally wrong, you're likely not to do it. And it's hard to explain this with something simple and natural, like pleasure or the law. It's something else. Something divine.
Strange bedfellows...

Perhaps the best support for DCT I can give you is to inform you that even some atheists buy into this view. Let me explain. Some thinkers (e.g., Joyce 2016) both reject the notion of a supernatural being (atheism) and also argue that there is something descriptively accurate about DCT. To be clear, normative claims are those claims which prescribe how something or someone should be. For example, "You should wash your hands" is a normative claim. It's a claim about what you ought to do. Descriptive claims are those claims which merely purport to describe how something or someone actually is. For example, "Your hands are dirty" is a descriptive claim.
Joyce argues that DCT is normatively false, because there is no God, and so God doesn’t actually command anything. In other words, there isn't really something you ought to do because there are no supernatural beings actually commanding you to do it and threatening you with eternal damnation if you don't. But DCT is descriptively accurate, in that the force of moral concepts does seem to come from supernatural beings (as if they actually existed). In the words of a thinker with views similar to that of Joyce's: the world is, of course, entirely natural, but we have non-naturalistic ways of thinking about it, such as moral predicates (see Gibbard 2012: 20-1).
Let me put this in the simplest terms. Some people believe that morality comes from God. Call this DCT (theist version). Other people believe God doesn't exist. But they also believe that the idea of God had to be invented at some point, and right around when that happened the idea of morality was also born. And so DCT is sort of right: God did invent morality. It just happens to also be the case that we invented God; neither God nor morality are actually real. Call this DCT (atheist version). You can find more support for both theories in the Food for Thought below.
Food for thought...
DCT & Non-Naturalism
Problems with DCT (theist version)
The Contradiction Argument
This argument comes from Randy Firestone's Critical Thinking and Persuasive Argument (see his chapter on religion). We've been assuming thus far that religions have a coherent, non-contradictory moral code. But, for example, the moral precepts in the Bible form an inconsistent set; i.e., they contradict each other.4 Hence, it is unclear just what God’s Law is. Consistency, or internal coherence, as we've seen, is a theoretic virtue, and DCT seems to lack it.
The Moral Argument
This argument also comes from Firestone's Critical Thinking and Persuasive Argument. The argument is this: It appears that some of the moral precepts in the sacred scriptures of some religions, for example the Bible, can only be described as morally abhorrent. Hence, DCT renders itself extremely counterintuitive. For example, the Bible seems to advocate:
- genocide (see Deuteronomy 7),
- the prevalence of capital punishment (for example, see Leviticus chapters 20-21),
- misogyny (see 1 Timothy 2:11–14),
- strange marriage customs, as when Abraham (the common patriarch in Judaism, Christianity, and Islam) marries his half-sister (see Genesis 20:12), and
- a prescription for abortion if a woman has been unfaithful (see Numbers 5:11-31).5
Problems with DCT (atheist version)
Lingering Empirical Problems
The problems for DCT (theist version) all apply to the atheist version as well. Why would the invention of an inconsistent moral code get people to behave more cooperatively? Moreover, there are deeper empirical problems. Why should you feel closer to someone who has similar supernatural concepts? To simply posit that the invention of “Big Gods”, i.e., gods that are morally concerned and threaten to punish those who break moral and social norms, led to greater cooperation in society fails to detail just which cognitive mechanisms lead to this prosocial behavior and why. In other words, the "Big Gods" hypothesis has some challenges, and it's not clear that it can overcome them (see Boyer 2001: 286). Stay tuned.
Hobbes' social contract theory is vulnerable to several criticisms. For example, a. it appears that complex—and perhaps even cooperative—social life predates the linguistic and intellectual capacities required for contracts; and b. it is unclear whether psychological egoism is true.
Divine command theory (DCT) is the view that God is the arbiter of morality; i.e., what is moral is what God has commanded.
Bringing DCT into the mix ushers in a new distinction: moral naturalism and moral non-naturalism. Moral naturalism is the view that moral properties are actually natural properties. Hedonism is an example of this view. Moral non-naturalism is the view that moral properties are non-natural (i.e., non-physical, like souls or God). DCT appears to imply that moral properties are non-natural.
In this course, we will be covering two versions of DCT: the theist and the atheist version. The theist version is straightforward in that it is simply the view that a. God exists, and b. God stipulated a moral code for everyone to follow. The atheist version is a. the denial that God exists, but b. the admission that belief in supernatural beings (and the desire to obey their commands) is an integral part of moral discourse.
FYI
Suggested Reading: Steven Cahn, God and Morality
TL;DR: CrashCourse, Divine Command Theory
Supplementary Material—
- Reading: Michael Austin, IEP Entry for Divine Command Theory
- Video: Transliminal, Interview of Ara Norenzayan
Related Material—
- Reading: Charles Tilly, Warmaking and statemaking as organized crime
Advanced Material—
- Reading: Plato, Euthyphro
- Reading: Internet Encyclopedia of Philosophy, Entry on Ockham
- Note: Read Sections 1, 2, and 6
- For historical context on William of Ockham, the interested student can also read Gordon Leff’s The Fourteenth Century and the Decline of Scholasticism
- Book: Ara Norenzayan, Big Gods
Footnotes
1. Some are a little too quick to compare ant colonies with human civilization. I am an admirer of ants, I should admit; but there are important differences. First off, ants are not conscious in a way even remotely approaching that of humans. Second, the cooperation you see in ant colonies makes evolutionary sense since the ants are all genetically related; they're siblings. Sure, this is not the case in all species of ant, but there are better explanations for human cooperation that don't involve the analogy to ants. Stay tuned.
2. The views of James C. Scott are very interesting. You can see a lecture by him here. In this talk, he discusses the view that the planting of grains domesticated sapiens, and he refers to the earliest states as "multi-species re-settlement camps". Also of interest might be the work of Charles Tilly; see the Related Material section.
3. My apologies to the Wiccans.
4. I've linked an easily accessible compendium of the inconsistencies in the Bible here. Of course, there are more academic sources. I'd recommend the work of biblical scholar Bart Ehrman, in particular Misquoting Jesus and Lost Christianities.
5. Sometimes students object that these are all coming from the Old Testament. This is not the case. The First Epistle to Timothy is written by Paul and is housed in the New Testament. Let's not ignore the obvious fact that some people who object this way don't know which books of the Bible belong in the Old and New Testaments. But more importantly, this doctrine (known as Supersessionism) isn't without its critics. Why would the whole first half of a book not be valid once the second half is written? In any case, this topic is best covered in a Philosophy of Religion course.
Virtues and Vices
My glory will not be forgotten.
~Homer's Iliad
Making room...
Last time we covered two versions of one ethical theory: divine command theory. Towards the end of the lesson we considered some objections to the theory that stem from the many open questions surrounding the phenomenon of religion. There is, of course, the question of whether God exists at all, but we'll put that aside for now. The question that we considered was: Why should you feel closer to someone who has similar supernatural concepts? This question came from the work of Pascal Boyer.

In Minds Make Societies, Pascal Boyer (2018) continues the discussion on the complications behind trying to understand the phenomenon of religion. Boyer, by the way, is considering religion as a natural phenomenon; that is, he is taking the atheistic perspective which seeks to explain religion as having arisen from our social construction of reality. Despite the apprehension that some might feel about this approach to religion, his insights into religion are fascinating. Related to the issues concerning this course, Boyer discusses the historical fact that religious beliefs endured a radical shift in focus between 600 BCE and 100 CE.
Boyer points out that what we commonly take to be necessary features of religion (like a doctrine, clergy, and the cultivation of a soul that must be saved) are recent developments that only appeared with the development of large-scale state societies with an extensive division of labor (see Boyer 2018: 108). This connects to the work of Ara Norenzayan, whom we covered last time. It looks like it was around the same time that civilizational complexity started increasing that religions of the type that we recognize today started cropping up. In fact, the notions of souls and salvation—which are, intuitively, fundamental components of any "real" religion—did not emerge until the time period that Boyer focuses on, 600 BCE - 100 CE. The philosopher Karl Jaspers calls this the Axial Age.
"These new movements emphasized cosmic justice, the notion that the world overall is fair, [and] they described the gods themselves as interested in human morality... The most important theme, which to this day shapes our understanding of religious activities, is the notion of the soul, as a highly individual component of the person that could be made better or purer and, crucially, could be ‘saved.’ The doctrines centered on the many ways one could eschew corruption or perdition of the soul... So the Axial Age matters, because the movements that appeared at that point in history had a considerable influence on subsequent religions. Indeed, the so-called world religions of today are all descendants of these movements” (Boyer 2018:108-110; emphasis added).
Clearly, if religion prior to the Axial Age is essentially unrecognizable relative to the way we conceive of religion today, this paradigm shift needs an explanation. So if we want to use religion and “Big Gods” as an explanation for collective action, then we’d have to understand why “Big Gods” (who care about your soul) arose during the Axial Age. This, however, is an open question. Moreover, if we are taking the theist's perspective, we have to wonder why God waited until that time period for revelation, given that humans had been around for 200,000 years. If we are taking the atheistic perspective, we have to wonder why humans, through cultural evolution, invented the notion of "Big Gods" around that time.1
All this to say that DCT has some problems. And so we open up towards another ethical theory, this one coming from the mind of Aristotle...
The Stagirite
"Aristotle was born in 384 BC, in the Greek colony and seaport of Stagirus, Macedonia. His father, who was court physician to the King of Macedonia [Amyntas III, father of Philip II], died when Aristotle was young, and the future founder of logic went to live with an uncle... When Aristotle was 17, he was sent to Athens to study at Plato’s Academy, the first university in Western history. Here, under the personal guidance of the great philosopher Plato (427-347 BC), the young Aristotle embarked on studies in every organized field of thought at the time, including mathematics, physics, cosmology, history, ethics, political theory, and musical theory... But Aristotle’s favorite subject in college was the field he eventually chose as his area of concentration: the subject the Greeks had only recently named philosophy” (Herrick 2013: 8-9).
Aristotle is the most famous student of Plato. He was prolific, writing on all major topics of inquiry established during the time that he was alive. Like his teacher, he also founded a school: the Lyceum. He was the tutor of Alexander the Great, son of Philip II. He was also the founder of the first school of Logic, a discipline he created and which is still studied when learning philosophy, computer science, and mathematics.2
“[W]e can say flatly that the history of [Western] logic begins with the Greek Philosopher Aristotle... Although it is almost a platitude among historians that great intellectual advances are never the work of only one person (in founding the science of geometry Euclid made use of the results of Eudoxus and others; in the case of mechanics Newton stood upon the shoulders of Descartes, Galileo, and Kepler; and so on), Aristotle, according to all available evidence, created the science of logic absolutely ex nihilo [out of nothing]” (Mates 1972: 206; interpolations are mine).
Important Concepts
The Gist
Aristotle's theory has some similarities and differences with the views of his famous teacher. Plato famously believed that the study of mathematics could grant you access to the fundamental nature of reality. The details aren't relevant here. The relevant aspect of Plato's thought for us is that he emphasized that living a virtuous life is living a good life; in other words, the person who is moral and just is happier than the person who is not. This view isn't completely original to Plato. Many people in the classical world had a preoccupation with achieving excellence. The word that they used was the Greek word arete. Arete, much like agathos, is closely associated with personal success, although what counts as success is characterized in different ways by different people. Arete was stressed perhaps most strongly by Plato's teacher, Socrates.

Arete requires practice, focus, and commitment.
Aristotle agrees with Plato and Socrates that being moral is part of a well-lived life. But he disagrees with Plato's assumption that only rigorous intellectual training gives you a glimpse into what goodness is. Other views that were around at the time also did not reflect Aristotle's thought. Whereas the precursors of divine command theory and social contract theory might correlate the concept of right action with commands from the gods or social conventions, Aristotle denied that you can define right action in a vacuum like that, without considering the social environment one finds him/herself in. Instead, he emphasized the role of habit. We must, Aristotle thought, train ourselves to respond in the right way to certain situations. We must have good reasons for acting in the ways that we do, and we must train ourselves to not feel any internal conflict about our actions. In other words, we must achieve intellectual and emotional excellence. We have to perform the right actions, in the right situation, at the right time, for the right reasons, and feel the right way about it, i.e., have no internal conflict. This, of course, takes lots of practice. Richard Kraut puts it this way:
“Therefore practical wisdom, as he conceives it, cannot be acquired solely by learning general rules. We must also acquire, through practice, those deliberative, emotional, and social skills that enable us to put our general understanding of well-being into practice in ways that are suitable to each occasion."
It is important to notice that both Plato and Aristotle had a tremendous degree of confidence in reason. Plato literally believed that through the power of reason we could come to know fundamental reality. Aristotle thought that through reason we could train ourselves to perform the right actions in a particular situation. It appears to be the case that this is a new role given to reason. During this time period, we could say that science and philosophy were invented, re-invented, and refined. We hardly think about this today, but standards about what is rational had to be established. The distinction between fact and opinion had to be analyzed. Magic and superstition had to be rejected (see Lloyd 1999 and Vernant 2006). This was the time period that Aristotle was living through and so he relied heavily on the faculty of reason—our power to make inferences. This leads us to the Food for Thought...
Food for thought...
The Virtue Theory Tradition
Just as with Aristotle's studies in logic, there isn't just one version of virtue ethics. After Aristotle initiated the study of logic, competing schools of logic cropped up, notably the Stoic school. Similarly, Aristotle gave the first account of virtue ethics, but competing accounts of virtue have cropped up since Aristotle's time. This is why we refer to virtue ethics as a tradition; there are various accounts of what it means to be virtuous. We will cover two views that are well-known in this tradition: Aristotle's own account and that of Virginia Held. We will also cover two approaches to ethics that can be interpreted as virtue theories.
Aristotle

Aristotle.
How did Aristotle arrive at his account of virtue? Aristotle argued that a virtue is an intermediate state between two conditions that aren't conducive to human flourishing. In other words, virtues are the mean (or middle point) between two vices. For example, being a coward is obviously a vice. One cannot advance in life if one is paralyzed by fear. On the other hand, though, some people are beyond brave: they are rash. They are too quick to try to be a hero, and it doesn't always work out for them. Between cowardice and rashness lies courage. Courage is a virtue. A deficiency of courage (i.e., cowardice) is a vice; an excess of courage (i.e., rashness) is also a vice. Train yourself to be at the middle point, and you're on your way to virtue. You can see a list of Aristotle's virtues in the slideshow below.
Ethics of Care

Virginia Held.
Virginia Held's view takes Aristotle's virtues and expands on them. As you can tell from the slideshow below, Aristotle's virtues are geared towards social excellence and not harming others. In other words, Aristotle's virtues are designed for an aristocrat in training. This is no secret, of course, since he was the tutor of Alexander the Great. Virginia Held wanted an account of virtue that isn't just about being a great statesman. For Held, it's not enough to simply not harm people; you should strive to actively help others. And so Held gives an account of virtue ethics that shifts from Aristotelian virtues (which saw the culmination of eudaimonia in aristocratic ideals) towards more interpersonal virtues, with a focus on how we can care for each other.
Buddhist Ethics

Buddha statue.
Typically, Buddhist ethics is associated with a view that we haven't covered yet: utilitarianism. But Matt Lawrence, author of Like a Splinter in Your Mind: The Philosophy Behind the Matrix Trilogy, sees Buddhist ethics as having many similarities with the virtue tradition, although emphasizing different virtues. You can see a list of "Buddha virtues" in the slideshow.
Nietzschean Virtues
Lastly, the views of Friedrich Nietzsche can also be seen through a virtue ethics lens. Recently, Randy Firestone made a comparison between Nietzsche's Übermensch and Aristotle’s virtuous person. We won't dive into this paper here, but the interested student can access the article through the link.
Aristotle and Arete
Problems with Virtue Ethics
Which is the right set of virtues?
There are many different accounts of just what virtue is because there are many different accounts of what eudaimonia is. How do we decide which is the right account? Wouldn’t this require another code of ethics just to decide which is the best one? How would we know what human flourishing really is? Do all humans flourish in the same way? These are not trivial questions. Excellence can't be a goal if you don't know what it is. Without an account of just what eudaimonia and arete are, virtue theory can't really get off the ground.

Alexander the Great.
Moreover, it seems that different theorists have defined excellence in egregiously subjective ways. For example, it is no secret that Aristotle's virtues were conjured up to ensure the success of a middle-class Athenian statesman. Remember, personal success was, to many, the goal of ethics in the ancient world. And so, Aristotle's virtues fit only within that social order, and this becomes alarmingly evident once you consider his views on, say, work. Like other middle- and upper-class Athenians, Aristotle saw work as punishment, since it took up so much time that one was not left free to think and improve oneself. Many of us, however, see dignity in work. It is an outrage that, say, someone like Jamie Dimon makes the annual income of one of his lowest-paid workers in about 2 hours and 12 minutes.
MacIntyre (2013, chapter 7) comments on this. He raises the question of whether or not Aristotle successfully defends his view of eudaimonia and his ideas about the purpose (or telos) of human life. He reminds us that Aristotle was a speculative philosopher with an exceptional degree of wealth and with links to the regional superpower, Macedonia. (Recall that he was Alexander the Great's tutor.) His account of the good life, MacIntyre suggests, reflects this position: it is clearly not a telos for humanity itself but a telos for a particular kind of life—the life of a wealthy Athenian. Aristotle’s audience was clearly a leisured minority. To modern ears, MacIntyre admits, Aristotle seems like a supercilious prick. Even Aristotle's admiring biographer, Jonathan Barnes, can only say this much: “As a man, he was, perhaps, admirable rather than amiable” (Barnes 2000: 1).
But to argue that Aristotle's views on human flourishing are wrong does not automatically mean that, say, Virginia Held's views about caring are the right ones. By questioning what human flourishing amounts to, we've opened a can of worms. It's unclear whether any truly universal account of eudaimonia can be given.
What about the puzzle of collective action?
Although Aristotle argues that citizens must actively participate in politics if they are to be happy and virtuous, this theory doesn’t directly address how collective action is possible. It does not give us any hints as to how and why civilizational complexity took off during the Axial Age and earlier. In other words, it fails to meet one of our desiderata from our checklist.
How will we know when we’re virtuous?
We can look to virtuous people to help us, but how will you know someone is virtuous if you don’t know what virtue is? This is a puzzle that Plato himself considered. How do you know what the right way to live is? How do you find something if you don't know what you're looking for? Related to this predicament is the Cognitive Bias of the Day.

The Dunning–Kruger effect is a cognitive bias in which people of low ability have illusory superiority and mistakenly assess their cognitive ability as greater than it is (Kruger & Dunning 1999). In other words, the more unskilled someone is in a given domain, the more unaware they are of just how unskilled they are. You can easily think of examples of this. You probably have a friend who claims to be an outstanding driver but is actually a real danger behind the wheel. There are various other examples. See Kruger and Dunning's article for more.
How is this related? This again brings up the problem of not knowing when we've achieved virtue. You might think that you're a good person, and you might even justify your actions to yourself. But if it's the case that the less virtuous you are the more you'll think you're virtuous, how will you know you are actually good?
Looks like virtue theory leaves us with more questions than answers...
There's a tradition in ethics, popularized by Aristotle, which focuses on eudaimonia (human flourishing) and arete (excellence).
Aristotle argued that if someone developed their virtues, achieving intellectual and emotional excellence, then the right actions would simply flow from them.
There are various competing accounts of what virtue is, e.g., Aristotle's version and Virginia Held's ethics of care. Some even see Buddhist ethical practices and Nietzsche's philosophy as a form of virtue theory.
There are various questions that arise with virtue theory. For example, it is unclear how one is supposed to know which set of virtues is the "right" set of virtues, since virtue theorists neither give a theory of right action nor convincingly argue for what human flourishing actually consists in.
FYI
Suggested Reading: Julia Annas, Virtue Ethics
- Note: Read at least pages 1-14.
TL;DR: The School of Life, Aristotle
Supplemental Material—
- Reading: Richard Kraut, Aristotle's Ethics
Advanced Material—
- Reading: Aristotle, Nicomachean Ethics
- Reading: Aristotle, Virtues and Vices
- Reading: Internet Encyclopedia of Philosophy, Entry on Virtue Ethics, Introduction and Sections 3 & 4
- Reading: Virginia Held, The Ethics of Care
Footnotes
1. Questions surrounding the Axial Age are complicated. Some historians even think the Axial Age hypothesis is overly ambitious. For example, have a chat with Jason Suarez, and he'll tell you some problems with Jaspers' analysis. As for Boyer, his views don't rely on the Axial Age hypothesis. He's examining changes in religious ideologies, and these changes are supported independently of Jaspers' theory. Boyer even hints at why religions might have endured this radical shift: “A striking aspect of this development is that religious innovations appeared in the most prosperous societies of the time, and among the privileged classes in these societies. Gautama was a prince, Indian and then Chinese Buddhism spread primarily among the aristocracy, and Stoicism, too, was an aristocratic movement” (Boyer 2018: 110).
2. A basic building block of computers is the logic gate, which is functionally the same as some of the truth-tables learned in an introductory symbolic logic course. Through a combination of logic gates you can build gate-level circuits, and out of gate-level circuits you build modules, such as the arithmetic logic unit (the unit in a computer which carries out arithmetic and logical operations). Eventually, all the elements of a module were placed onto a single chip, called an integrated circuit, or IC for short (see Ceruzzi 2012: 86-90). In the image you can see a gate-level circuit.
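To make the gate idea a bit more concrete, here is a minimal sketch in Python (my own illustration, not drawn from Ceruzzi): each gate is just a function that reproduces a truth table, and wiring gates together yields a tiny gate-level circuit, in this case a half-adder of the sort that arithmetic logic units are built from. The function names are mine and purely illustrative.

```python
# Minimal illustration: logic gates as truth-table functions on one-bit inputs.

def AND(a: int, b: int) -> int:
    """1 only when both inputs are 1 (logical conjunction)."""
    return a & b

def OR(a: int, b: int) -> int:
    """1 when at least one input is 1 (logical disjunction)."""
    return a | b

def XOR(a: int, b: int) -> int:
    """1 when exactly one input is 1 (exclusive disjunction)."""
    return a ^ b

def half_adder(a: int, b: int) -> tuple[int, int]:
    """A tiny gate-level circuit: adds two one-bit numbers.
    The sum bit is XOR(a, b); the carry bit is AND(a, b)."""
    return XOR(a, b), AND(a, b)

if __name__ == "__main__":
    # Print the circuit's truth table, just as in a symbolic logic course.
    for a in (0, 1):
        for b in (0, 1):
            s, c = half_adder(a, b)
            print(f"a={a} b={b} -> sum={s} carry={c}")
```

Running the script prints the half-adder's truth table; chaining such compositions is, in rough outline, how gate-level circuits grow into modules like the arithmetic logic unit mentioned above.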
Patterns of Culture
No man ever looks at the world with pristine eyes. He sees it edited by a definite set of customs and institutions and ways of thinking.
~Ruth Benedict
1934
Our next ethical theory requires us to jump forward all the way to 1934. We will be walking through the halls of Columbia University in New York City. In particular, we will be visiting the anthropology department, where one of the characters in this story—Franz Boas—taught for 40 years. The reason for studying the ideas of Boas, as well as those of his students, will become apparent below. For now, let's prime our moral intuitions with an example.

Members of an uncontacted tribe photographed in Brazil, 2012.
In their popular (but controversial) book Sex at Dawn, Christopher Ryan and Cacilda Jethá (2010: 90-91) report on the sexual practices of some Amazonian tribes. In some of these tribes, pregnancy is thought to be a condition that comes in degrees, as opposed to either being pregnant or not. In other words, the members of these tribes believe you can be "a little" pregnant. In fact, all sexually active women are a little pregnant. This is because they believe that babies are formed through the accumulation of semen. In order to produce a baby, a woman needs a constant supply of sperm over the course of about nine months. Moreover, the woman is free to acquire the semen from any available men that she finds suitable. She may even be encouraged to do so in order for the baby to acquire the positive traits of each of the men who contributes sperm. As such, perhaps the woman will seek out a man who is brave, a man who is attractive, a man who is intelligent, etc. All told, up to twenty men might contribute their seed to a single pregnancy, and all twenty are considered the father.
"Rather than being shunned... children of multiple fathers benefit from having more than one man who takes a special interest in them. Anthropologists have calculated that their chances of surviving childhood are often significantly better than those of children in the same societies with just one recognized father. Far from being enraged at having his genetic legacy called into question, a man in these societies is likely to feel gratitude to other men for pitching in to help create and then care for a stronger baby. Far from being blinded by jealousy as the standard narrative predicts, men in these societies find themselves bound to one another by shared paternity for the children they've fathered together” (Ryan and Jethá 2010: 92; emphasis in original).
This practice is called partible paternity, and it is not at all the way that paternity is viewed most everywhere else, especially for us in the West. We are privy to the way pregnancy actually works, and we readily distinguish between, say, the biological father and an adoptive father. But notice something interesting here. In my experience teaching, it is very rare that people judge the behavior of some of these Amazonian tribes to be immoral. Typically, students claim that it is ok for them to practice that "over there", but "over here" we do things differently. If you feel this way, then you might be a cultural relativist.
Boas and his students

Ruth Benedict (1887-1948).
The reason for this shift into the 20th century is that the seeds of classical cultural relativism, the version of cultural relativism that we'll be studying, are found in the work of anthropologist Franz Boas (1858-1942). Boas was a groundbreaking anthropologist who is often referred to as the "Father of American Anthropology." In chapter 2 of The Blank Slate, Steven Pinker explains how Boas’ research led him to realize that the people of more primitive cultures were not in any way deficient. Their languages were as complicated as ours, allowing for complex morphology and neologisms (new words). Their languages were also rich in meaning and could be updated rapidly, as when new numerical concepts were adopted as soon as a society needed them. Although Boas still thought Western civilizations were superior, he believed that all the peoples of the world could rise to this level.
Boas himself was likely not a cultural relativist. He never expressed the particular views (which we will make explicit below) that are now known as classical cultural relativism. What he did express was a reluctance to definitively rank societies as either "more evolved" or "less evolved". Instead, he saw the members of different cultures as fundamentally the same kind of being, just with different systems of belief and ways of living. His students, however, took these ideas and morphed them. It was above all Ruth Benedict (1887-1948), who like many other intellectuals was thrown into a moral panic after World War I, that developed what we now know as cultural relativism.
“The story of the rise to prominence of cultural relativism, [is] usually attributed to the work of Franz Boas and his students... Although Boas’s position on cultural relativism was in fact somewhat ambiguous, he laid the groundwork for the full elaboration of cultural relativism by redirecting anthropology away from evolutionary approaches... and by elaborating on Tylor’s notion that culture was an integrated system of behaviors, meanings, and psychological dispositions... The flowering of classical cultural relativism awaited the work of Boas’s students, including Ruth Benedict, Margaret Mead, and Melville Herskovits. Their articulation of a comprehensive relativist doctrine was appealing to intellectuals disillusioned by the pointless brutality of World War I, which undermined faith in the West’s cultural superiority and inspired a romantic search for alternatives to materialism and industrialized warfare… The ethnographer must interpret a culture on the basis of its own internal web of logic rather than through the application of a universal yardstick. This principle applies to everything from language and kinship systems to morality and ontology… Complementing the core principle of cultural coherence is insistence that societies and cultures cannot be ranked on an evolutionary scale. Each must be seen as sui generis [i.e., unique] and offering a satisfying way of life, however repugnant or outlandish particular aspects of it may seem to outsiders” (Brown 2008: 364-5; interpolations are mine).
Comments on historical recurrences
Like many of the other moral discourses we've seen before, classical cultural relativism has its antecedents far back in history, as you'll see in the Storytime! below. Perhaps this is a good time to take note of these recurrences. Some form of social contract theory, also known as contractarianism, can be found, for example, in Glaucon's speech in Plato's Republic (4th century BCE), in Hobbes' views on ethics and politics (17th century), and in Rawls' works in the 20th century.1 Ethical egoism also pops up in various guises. It's in the work of Bernard Mandeville (early 18th century) and the writings of Ayn Rand (20th century). Divine command theory can be found in the character of Euthyphro in Plato's dialogue of the same name (4th century BCE), in the work of theologians and philosophers of the medieval period such as Thomas Aquinas and William of Ockham (13th and 14th century), and in recent work by Robert Adams. The virtue tradition also has been construed in various ways, beginning with Aristotle in the 4th century BCE and recently in the work of Virginia Held and her ethics of care.
This is not intended to be an exhaustive list of moral discourses. In fact, there are two very important theories that we've yet to cover. Also, it would be a mistake to assume that all theorists who can be lumped together into, say, social contract theory, all have identical views. Rather the point I want to make here echoes the point made by MacIntyre (2013, chapter 11): the moral discourses covered thus far can be distinguished by the sort of justification given for moral rules and by the type of moral logic that they use. For Plato and Aristotle, for example, the virtuous life is tied to the pursuit of certain goods (arete), which lead to a good life (eudaimonia). The key concept is good and the key judgments are about how well-fitted people are for certain roles. For divine command theorists, the sort of backing that is given to moral rules is that a divine being will reward you for following said rules and punish you for not. The key concept is thou shalt, and the key judgments express the consequences of reward and punishment. For contractarians, the backing for moral rules is that they are the best way to get most of what society needs—a view not only endorsed by Hobbes, but by ancient skeptics and sophists. The key concepts, as well as the judgments, have to do with means-end reasoning; i.e., moral behavior is simply a means to an end (that end being social order).
And so, by now it should be no surprise that the cultural relativism of the early 20th century that we'll be studying has hints far back in history, for example in the works of Herodotus, Protagoras, and Zhuangzi. Moreover, cultural relativism has its own view on moral justification (i.e., culture) and a distinctive take on moral logic (i.e., the relativistic notion of truth). Stay tuned.
Storytime!
Important Concepts
Decoding Relativism
Some comments
Perhaps the most important thing to clarify here is that classical cultural relativism assumes the relativistic notion of truth. It's not just the claim that different cultures do things differently. In other words, it's not merely the descriptive claim that different cultures have different moral codes. Rather, it is the claim that the very same action can truly be morally permissible relative to one culture's moral code and impermissible relative to another's. This implies that the sentence "Eating guinea pig is morally permissible", which contains the moral predicate morally permissible, is true relative to some cultures (e.g., Peruvian culture, where guinea pig is a traditional dish) and false relative to others.
Relatedly, this is why accusing relativists of self-contradiction is a strawman argument. Relativists aren't saying that eating guinea pig is both ok and not ok. They're saying that eating guinea pig is ok relative to some cultures and not ok relative to others. Perhaps we can say that they are pluralists about truth, i.e., they believe that not all sentences are true in the same way. Although I don't want to get into pluralistic notions of truth here, the main point is this: even if one is opposed to the relativistic notion of truth, it's not obvious that the accusation of self-contradiction will stick. There are some problems with relativistic truth, however. We won't cover them just yet. Stay tuned.
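To see the logical move more concretely, here is a minimal sketch of my own, not anything drawn from the relativist literature, that treats moral permissibility as a two-place relation between an act and a culture's moral code rather than a one-place property of the act. The example codes simply reuse the course's guinea pig example in simplified form.

```python
# Toy model of relativized moral truth: permissibility is evaluated
# against a culture's moral code, not absolutely. The codes below are
# simplified placeholders based on the course's guinea pig example.
moral_codes = {
    "Peru": {"eating guinea pig": True},
    "United States": {"eating guinea pig": False},
}

def morally_permissible(act, culture):
    """Two-place predicate: an act is assessed only relative to a culture's code."""
    return moral_codes[culture].get(act)

# No contradiction arises: the same sentence gets different truth values
# relative to different moral codes.
print(morally_permissible("eating guinea pig", "Peru"))           # True
print(morally_permissible("eating guinea pig", "United States"))  # False
```

The design choice to make, and the one the relativist makes, is simply that there is no one-place version of the predicate to appeal to.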

Cultural Practices from Around the World
Welcome to the intermission. In this section, we'll look at different cultural practices from around the world. The point of this activity is to challenge the cultural relativists, to make them see how difficult it really is to say that each culture can develop its own moral code. Warning: Some of the things you'll see here are graphic. However, this is the only way to show you the counterintuitive nature of CR. If you are sensitive, you may skip this section.
The Bathroom Ban of the Tidong (Indonesia)

A Tidong community wedding.
An ancient custom of the Tidong tribe is to ban newlyweds from using the bathroom for three days after being married. Relatives go as far as staying with the newlyweds during this period to make sure that they don't eat or drink, since otherwise they would need to use the restroom. Breaking this taboo is thought to bring bad luck to the marriage as well as to the families of the newlyweds. I might add that holding in your urine causes bacterial build-up, which is associated with urinary tract infections—and worse.
Is this morally permissible for them?

Living with the dead (Indonesia)
Is this morally permissible for them?
Dani Amputations (Papua New Guinea)
Is this morally permissible for them?
Baby Swinging (Russians in Egypt)
One of my mentors, a Russian physicist/philosopher, alerted me to this practice, which is thought to be a form of therapy in Russian alternative medicine circles. We can, it seems, refer to these alternative medicine practitioners as a sub-culture. I found a video of some Russians living in Egypt—part of a ploy to establish greater Russian influence in the region, according to some—engaging in this practice. See the video below:
Is this morally permissible for them?
WARNING: Some of the following images are graphic in nature and might be disturbing to some.
Sensitive viewers may skip to the next section.
Cannibalism (various)
There are various reports of cannibalism throughout history and even isolated tribes that still practice the eating of humans.
If cannibalism is performed as a way of honoring the dead, is this morally permissible for them?
Baby Throwing (India)
Both Muslim and Hindu parents engage in a baby throwing ritual in some parts of India. The baby drops about 30 feet, and is caught in a sheet by a group below. It is said to bring good luck. You can read more in this article or watch the video below.
Is this morally permissible for them?
Americans
If we are extending cultural relativism to include sub-cultures, this might include fringe groups. What would a relativist say about:
- anti-vaxxers?
- the Followers of Christ (Idaho), who reject modern medicine and rely solely on faith healing?
- Evangelical homeschoolers, who teach their children only creationism?
One problem....

Joshua Greene's
Moral Tribes.
Ultimately, though, CR does not address all of our concerns. In particular, it does not meet all of our desiderata for an ethical theory. Here is how. Joshua Greene gives what he calls the meta-morality argument: cultural relativism answers the question of how morality works within a "tribe", but it does not and cannot guide us on how morality should work between "tribes." What Greene is pointing out is that CR does not resolve our moral debates. This inability to adjudicate between tribes is, in addition to being the most pressing problem of the 21st century, a failure to meet one of the desiderata on our checklist. It should be stressed that our desire that an ethical theory resolve moral debates isn't merely a trivial nicety. We need to resolve the moral conflicts of our age. From the crisis of liberal democracies to globalization and the resurgence of fundamentalist religious factions, it is evident that we need a resolution on moral matters. CR fails to guide our actions on this front and, hence, fails as a moral theory.
Classical cultural relativism is the view that an act is morally right if, and only if, the act is permitted by the code of ethics of the society in which the act is performed.
Classical cultural relativism is distinguishable in that it relies on a relativistic notion of truth, which states that some things are true not in an absolute sense but relative to some group or individual.
Another important tenet of classical cultural relativism is that there is usually no disagreement on the facts but only on their moral value. In other words, classical cultural relativists make the case that there is no empirical disagreement between most cultures; it's only a disagreement about values. For example, Americans and Peruvians (in general) agree on empirical facts about guinea pigs, but they disagree (in general) on the moral question of whether or not it is ok to eat them.
Tension rises between relativists and non-relativists when different practices around the world appear to be unacceptable or, at the very least, suboptimal.
FYI
Suggested Reading: Gilbert Harman, Moral Relativism Explained
TL;DR: Crash Course, Metaethics
Supplementary Material—
- Reading: Theodore Gracyk, Relativism Overview
- Reading: James Rachels, The Challenge of Cultural Relativism
Related Material—
- Video: Steven Pinker: Linguistics as a Window to Understanding the Brain
- Note: A common analogy used by relativists is the language analogy: Moral systems vary as widely as language systems. This video introduces the viewer to some core concepts in linguistics.
Advanced Material—
- Video: Noam Chomsky on Moral Relativism and Michel Foucault
- Related Video: The School of Life, Michel Foucault
- Reading: Kenneth Taylor, How to be a Relativist
- Note: This is a novel, psycho-functional approach to relativism. It is also a very challenging read. It's mainly here to demonstrate that there are various approaches to cultural relativism.
- Video: Common Sense Society, Roger Scruton on Moral Relativism
Footnotes
1. It's interesting to note that Hobbes also had a defense of relativism embedded in his views. Recall that Hobbes makes the case that we all decide what's right for ourselves when in the state of nature. In other words, we're all the arbiters of our own moral code. This is typically referred to as moral subjectivism. The view that we're covering today differs in that it is not individuals but cultures that determine the moral code we ought to follow.
Endless Night (Pt. I)
Man naturally desires, not only to be loved, but to be lovely; or to be that thing which is the natural and proper object of love. He naturally dreads, not only to be hated, but to be hateful; or to be that thing which is the natural and proper object of hatred. He desires, not only praise, but praiseworthiness; or to be that thing which, though it should be praised by nobody, is, however, the natural and proper object of praise. He dreads, not only blame, but blame-worthiness; or to be that thing which, though it should be blamed by nobody, is, however, the natural and proper object of blame.
~Adam Smith
The Great Infidel
Whereas classical cultural relativism has hints far back in history, the theory covered in the next two lessons has its origins in the middle and late 18th century in the work of, happily enough, two best friends: David Hume and Adam Smith. This theory is known as moral sentimentalism, the view that moral judgments and preferences are rooted in our emotions and desires. I can't avoid framing this lesson in the context of the friendship between Hume and Smith, since the view really did grow out of their discussions of and elaborations on each other's work. Moreover, their friendship is simply too charming not to mention in this course. So, in this lesson, we will look at the theory as it originated in the work of David Hume and was further developed by Adam Smith. To this end, I will rely heavily on Rasmussen's (2017) The Infidel and the Professor. In the sequel to this lesson we will look at the views that are the descendants of moral sentimentalism—very radical views indeed.
David Hume (1711-1776).
Hume began his philosophical career with A Treatise of Human Nature, published in three volumes from 1739 to 1740. In this work, Hume sought to establish a brand new science of human nature that would undergird all other sciences, since—Hume reasoned—all other sciences rely on human cognition as part of their investigations. Importantly, Hume makes the case that reason cannot truly take us to a complete knowledge of the world, as René Descartes famously believed it could. Rather, he believed that the experimental method is the only appropriate way. Hume himself acknowledged that he was not the first to use this method, counting John Locke, Mandeville (see The Mind's I), and Joseph Butler as predecessors. But Hume took it far further than they did.
Hume concluded that if we reject the idea of courageous reasoning, then it turns out we can know very little about the world and ourselves with certainty. Hume's conclusions must've been extremely disconcerting to readers in the 18th century. For example, here are some things that we cannot be certain about per Hume: the reality of the external world, the constancy and permanence of the self, and that the laws of causation (cause and effect) are real. In early drafts of the Treatise Hume even had sections denying the existence of souls and arguments against the reality of miracles—sections he had to omit for fear of repercussions. In the end, Hume concluded, all that reason can come to know on its own are mathematical propositions and axioms of pure logic.1
Rasmussen points out that the great diminution of the role of reason in Hume’s system correlates with an expansion of the roles of custom, habit, the passions (what today we would call emotion), and the imagination. Moreover, Rasmussen notes that since Hume doesn’t include the supernatural as part of his explanatory scheme, the work is purely secular. Thus, Hume is implicitly making the case that God isn’t necessary when explaining human nature—an invitation to accusations of atheism that would cause trouble for Hume throughout his life.
In volume 3 of the Treatise, which was published after the first two, Hume gives his views on morality (also without need of God). Virtues are merely those character traits that we collectively have deemed to have utility (i.e., usefulness) in society and in our interpersonal relationships. We are somehow predisposed to find these agreeable, and to find vices disagreeable—an extremely interesting claim given that this is all before Darwin's theory of evolution. It is our passions that flare up when they perceive virtue or vice, feeling approbation (i.e., a feeling of approval) for the former and disapprobation for the latter.
Unfortunately for Hume, the Treatise fell "dead-born from the press", failing to secure commercial success. Hume then worked as a tutor and later as a secretary to a distant relative during a military campaign. He then returned to Ninewells, his family's estate, and began working on the Enquiry Concerning Human Understanding, a rewrite of Book 1 of the Treatise. Whereas Hume had deliberately "castrated" the Treatise, ridding it of its most controversial sections, he left them in the Enquiry Concerning Human Understanding. And so we know his views on, for example, miracles. Hume also delivers his objection to intelligent design in this first Enquiry. Put simply, Hume says we cannot rationally infer from the imperfect nature of our world that there is a perfectly knowledgeable and loving creator—the lamentable aspects of this world are too numerous to justify this inference. Moreover, even if we did infer some intelligent designer, there is no way to move from this belief to other religious doctrines, such as the existence of heaven and hell. Again, a skeptic until the end, Hume does not argue that God and heaven don't exist, just that you cannot know that they exist. They are untestable and, hence, useless hypotheses.
It was apparently during his time at Oxford in the 1740s that Adam Smith encountered the work of Hume—work that had been gaining notoriety the more Hume published. Humorously, it appears that Smith was even caught by the inquisitor at Oxford reading the scandalous Treatise. In any case, despite what the inquisitor might've said to Smith, it seems that Smith did take up some of Hume's ideas. As evidence, Rasmussen cites Smith's earliest surviving essay, The principles which lead and direct philosophical inquiries. In it, Smith, just like Hume, downplays the role of reason, both as a motivating force and in its capacity to deliver certain knowledge. According to Smith, science is primarily driven by an attempt to alleviate the disquiet caused in our minds by complex, unexplained phenomena; scientific explanations relieve us of the tension caused by perceived randomness in the world. Moreover, scientific explanations are inventions of the imagination, not known with certainty to correspond to the world itself. Consequently, every scientific theory must remain perpetually open to revision, on the chance that a better theory might eventually come along. "In Smith's view, then, science is a permanently open-ended activity, one that is prompted by our passions and forged by the imagination" (Rasmussen 2017: 42).2

Adam Smith (1723-1790).
The astute student might wonder why Smith's controversial views (see especially Footnote 2) didn't relegate him to the same state of opprobrium that Hume suffered. This is because Smith's Principles stayed in his notebook until after his death. He worked on it periodically and made sure it was only published posthumously. But from our vantage point, we know the truth. Both Hume's and Smith's first works are generally skeptical of the power of reason, an unpopular view at the height of the Enlightenment. Moreover, both had a deflationary account of natural phenomena and human pursuits, fueled by unflattering psychological accounts—unflattering, that is, relative to the accounts of more reason-centered Enlightenment thinkers.
But the two had not yet met. Rasmussen guesses that it was in 1749, while Smith was giving freelance lectures in Edinburgh, that Smith and Hume met. He also speculates as to their first impressions of each other. Hume, although associated with heretical principles, was disarming, jolly, and charismatic. He loved food, drink, and games, especially the card game whist. He was also famously stout but was the first to make a crack about his girth. Smith was tall and slender, and he was reported by many to be absentminded. He also would mumble and laugh to himself, even during church services. In general, Smith was socially awkward, although he improved on that front with age. Obviously, we don't know what they thought of each other when they first met. But we know that they quickly became friends.
The Professor
Smith took up the chair of moral philosophy at the University of Glasgow in 1752; he had been appointed to the chair of logic a year earlier but didn't begin lecturing right away due to scheduling conflicts in Edinburgh. His course had four parts: natural theology, ethics (which was eventually incorporated into his Theory of Moral Sentiments), jurisprudence, and political economy (which was eventually incorporated into his Wealth of Nations). He immediately turned some heads, for multiple reasons. Despite his awkwardness, he was apparently an excellent professor. He asked to be excused from beginning each lecture with a prayer; the request was denied. He also discontinued the Sunday religious discourses that Francis Hutcheson, a previous holder of the chair, had given. Rasmussen reports that, as a result, some considered his principles to be lacking, both for his failure to follow tradition and for the company he was beginning to keep—that of David Hume.

Smith's vacating of the chair of logic prompted several of Hume's friends to suggest that Hume apply for the position. But, for the second time, the clergy vigorously opposed Hume's appointment and he was denied. Hume never tried again to secure a professorship. Instead, he was offered a librarian position in Edinburgh, where he could do research for his History of England, which would earn him fame and wealth. It was around this point that Hume published his Enquiry Concerning the Principles of Morals, a reworking of his ideas from the third book of the Treatise, as well as his Political Discourses, in which he advanced many arguments similar to those Smith would eventually publish in Wealth of Nations.
Hume's second Enquiry, published in 1751, was, if anything, even more scandalous to the clergy and the devout than the Treatise. Hume included more examples of virtues and vices, so as to make his case clearer, and he argued that many of the qualities prized by the devout were actually vices: celibacy, fasting, self-denial, humility, and other "monkish" virtues. These qualities, Hume argued, do not render someone a more valuable member of society and are thus vices. So, Hume once more made the case that religion is superfluous, not being needed at all for explaining natural phenomena; to this he added that it is also pernicious. During all this, Smith seems to have recommended several works by Hume to his students, as we can gather from surviving student notes. So, for all Hume's notoriety, Smith didn't hide their friendship.
Hume and Smith's friendship grew through the 1750s, with Hume insisting that his friend spend whole breaks from classes with him in Edinburgh, where Hume could make available all the books Smith wanted. During this time, Smith's maid would also insist that Smith keep his own key to his apartment so that she wouldn't have to wait up for him, since the two friends liked to talk until well past midnight. During the 1750s, Hume's notoriety for irreligiosity grew, and he was even blocked from taking part in some scholarly societies. The culmination of this growing notoriety was the attempted expulsion of Hume from the Church of Scotland, an attempt that did not succeed.

Smith published much less than Hume—only two books—and he lamented that, unlike his friend, he never got better at writing. Although today he is associated mostly with his Wealth of Nations, Smith himself considered his Theory of Moral Sentiments, in which he expanded upon Hume's second Enquiry (albeit in a much less provocative way), his best work. Smith argued that it is neither reason nor a god-given moral intuition that leads us to consider one action virtuous and another vicious. Rather, it is our moral sentiments that give rise to moral judgments. We have been predisposed by Nature to feel approbation towards prosocial behaviors and character traits and to feel disapprobation towards antisocial ones.
Detractors sometimes call this the "boo/hurray" theory of morality, since it seems to reduce morality to saying "Boo!" or "Hurray!" about certain behaviors. This is far from what Smith actually meant. First off, once we have formed our moral preferences, reason does play a role in deciding how to practically bring about the virtues we wish to cultivate. Moreover, since we are prone to becoming overly emotional and biased, Smith argues that we are supposed to take the perspective of an objective observer when forming our moral judgments; that is, we are supposed to empathize with the moral sentiments of an impartial spectator so as to know what counts as virtue and vice in a given social context. So it's not as simple as saying that whatever you feel is right is right. One must generalize and try to surmise what our collective moral sentiments, or what Hume calls the general point of view, are pointing towards. Those are the true virtues.
“When I endeavour to examine my own conduct, when I endeavour to pass sentence upon it, and either to approve or condemn it, it is evident that, in all such cases, I divide myself, as it were, into two persons; and that I, the examiner and judge, represent a different character from that other I, the person whose conduct is examined into and judged of. The first is the spectator, whose sentiments with regard to my own conduct I endeavour to enter into, by placing myself in his situation, and by considering how it would appear to me, when seen from that particular point of view. The second is the agent, the person whom I properly call myself, and of whose conduct, under the character of a spectator, I was endeavouring to form some opinion. The first is the judge; the second the person judged of. But that the judge should, in every respect, be the same with the person judged of, is as impossible, as that the cause should, in every respect, be the same with the effect” (Smith 2009/1759: 135-36).
It is important to note that, despite their broad agreement, Smith diverges from his friend's views on four issues, which will be covered below.
After the publication of The Theory of Moral Sentiments, Hume wrote a letter to Smith, assessing the latter’s first book. It is one of the most playful letters in the history of philosophy. Hume repeatedly promises to give Smith his assessment but keeps inserting distractions and tangents. In the end, of course, Hume was elated over the success of his friend and told him as much. Moreover, Hume, a true friend, went to great lengths to promote Smith’s book. He sent it to several influential scholars and wrote an anonymous (obviously positive) review.
Moral Sentimentalism
True Friendship
In the 1760s, Smith and Hume spent some time together in London, from which we have some humorous reports of Smith's absentmindedness. For example, it appears that he once accidentally made tea out of his buttered toast. Hume, however, was called back to work for the state and had to leave London. It was at this point, in 1767, that Smith began work on his Wealth of Nations in Kirkcaldy.

Kirkcaldy, Scotland
The process of writing Wealth of Nations was long and arduous for Smith. Meanwhile, Hume improved his cooking skills and threw dinner parties in Edinburgh, with Smith sometimes attending (since Hume always had a room ready for him). Several anecdotes about Hume survive from this period. Once, while he was crossing a bog, the bridge gave way and he got stuck. Some women found him but, after recognizing him, wouldn't help him out until he recited the Lord's Prayer. Another time, in 1771, when Hume moved to a new house on a street that didn't yet have a name, a woman named Nancy Ord (whom Hume might've considered proposing marriage to) wrote "Saint David's Street" on the wall of the house, an obvious nod to Hume's reputation for impiety. The street retains the name to this day. It was during this time that Hume decided to retire from writing, arguing that he was "too old, too fat, too lazy, and too rich." Nonetheless, he prodded his friend to continue his writing career, providing papers and sources that Smith might need in writing Wealth of Nations.
Hume undoubtedly read the drafts of what was to become Smith's Wealth of Nations. Moreover, it is almost certain that the two discussed and improved some portions of it. This is because several of the most important points that Smith makes in Wealth of Nations are points made by Hume in his Political Discourses (published in 1752). Smith, however, greatly expanded on Hume's points. Here are the highlights:
- Smith argues that commercial society is unequivocally preferable to the alternatives, seeing the downfall of feudalism and the rise of personal liberty as good things. Commerce, then, is seen as inextricably linked to liberty. The way to maintain this high degree of personal freedom is to ensure that commerce remains healthy and unobstructed. Relatedly, Smith argues that the fall of feudalism was linked to the opulence of the lords, much as Hume had argued.
- Mercantilism, the view that international trade is zero-sum (i.e., my win is your loss), is attacked vigorously by Smith, who argues that nations actually benefit when other nations become wealthier, since they can purchase each other’s goods.
- Smith also argued against the view that precious metals are wealth. He argued a point that is obvious to us today: true national wealth is an abundance of goods and services. Moreover, personal freedom is the way to assure goods and services will proliferate.
It is important to note here that neither Hume nor Smith was a free-market absolutist: both argued that the state should intervene for the sake of national defense, the administration of justice, and the provision of public works. Moreover, they argued for a strong state that could preserve order. They made the case that it was a weak central government that made the feudal era the sad, unfree spectacle that it was. It is only in market matters, Hume and Smith argued, that politicians should not interfere, since such interference will be either ineffectual or counterproductive. So, if a libertarian ever cites Smith as a free-market absolutist, kindly tell them they are mistaken.
It should also be added, however, that Hume and Smith did differ in opinion on some topics. For example, Smith discussed more openly the disadvantages of commerce. Hume did discuss some disadvantages of the commercial societies of his day: namely their imperialistic tendencies and their rapidly mounting public debts. Moreover, Hume argued that the latter caused the former. But Smith took these ideas and developed them further. For example, Smith adds points that Hume didn’t come close to making. In Book IV of Wealth of Nations, Smith argues that labor is toil and oftentimes unenjoyable, and that we spend our wages on trifles that provide only fleeting satisfaction. Smith continues to diverge from Hume by arguing that not only is the pursuit of wealth the engine that drives the economy, but that the public is deluded in thinking that wealth will actually provide happiness. So, the economy is driven by affective forecasting errors, errors about what will make us happy (see The Trolley (Pt. II)).3
Smith goes further still. Smith, unlike Hume, also argues that merchants have a tendency to collude to enrich themselves and hurt the public interest. In fact, Rasmussen reports that some Smith scholars notice a "pathological suspicion" of merchants. Smith also discusses the damaging effects of the division of labor, not just the good ones. For example, he discusses how working for many decades at only one task can make one as stupid as it is possible to be, leave one unable to carry on a conversation, and even harm one's physical well-being. This leads Rasmussen to cite another scholar who wonders why it wasn't Hume, rather than Smith, who became the poster child of capitalism. Indeed, some quotes by Smith (even with context!) seem to come straight out of Marx and Engels's Communist Manifesto!4
Lastly on the topic of the drawbacks of commercial society, and perhaps the biggest mismatch between the popular conception of Smith and the actual Smith: Smith claims that commercial society breeds inequality. He argues that for every very rich man there must be 500 very poor ones. Rasmussen also reports that in an early version of the manuscript, Smith writes that it is those who labor the most who get the least; he also notes that the poor bear the weight of commercial society on their backs. Rasmussen adds a point that Smith made in Theory of Moral Sentiments: that the perceived utility of wealth and status leads the public to want to emulate the rich and powerful and, as a result, to end up despising the poor. So, the poor not only make commercial society function; they also occupy a position of opprobrium within it. In the sixth edition of The Theory of Moral Sentiments, Smith added that the emulation of the rich is not a good idea, since they are typically not good people, and that this emulation is one of the principal causes of moral degradation.

Smith’s Wealth of Nations was an immediate hit, and he received many letters full of praise. In multiple letters, though, there was a melancholy addendum. The message was clear... “Go see your friend. He is dying.”
As Hume's death approached, he hosted a dinner party for his friends on July 4th, 1776. Naturally, Smith was there. Since Hume was by this point quite a famous and notorious infidel, there was widespread curiosity over his demise: would Hume repent at the last minute? By all accounts (e.g., James Boswell, Smith, Hume's brother), Hume remained a skeptic until the very end, dying in as cheerful and tranquil a way as is possible. Though Hume and Smith had seen very little of each other in the three years leading up to Hume's death, while Smith was writing Wealth of Nations, they tried to make up for lost time; Smith spent most of the rest of the year at Hume's home. Near the end, Hume admitted to only one regret: leaving good friends behind. On 25 August 1776, around 4pm, David Hume, The Great Infidel, died.
Soon after Hume's death, Smith published what has come to be known as the Letter to Strahan, which was to appear along with Hume's autobiography, as indicated in Hume's last will and testament. In it, Smith praised his friend, who had died an infidel. Consequently, scorn was heaped upon Smith. What most incensed religious readers was Smith's claim that Hume was about as virtuous as a person could be (and that he accomplished this feat completely without religion!). Moreover, Smith stressed that Hume was tranquil until the very end, never recanting his skepticism. Despite the scorn he received, Smith never retracted the letter, nor did he ever express regret over its publication. It was his last homage to his best friend.
To be continued...
FYI
Suggested Viewing: Then and Now, Introduction to Hume's Moral Philosophy
Supplementary Material—
Video: The School of Life, David Hume
Video: The School of Life, Adam Smith
Related Material—
Video: Complexity Explorer, Agent-Based Modeling: What is Agent-Based Modeling?
Video: Khan Academy, Feudal system during the Middle Ages
Advanced Material—
Reading: Rachel Cohon, Stanford Encyclopedia of Philosophy Entry on Hume's Moral Philosophy
Reading: Samuel Fleischacker, Stanford Encyclopedia of Philosophy Entry on Adam Smith’s Moral and Political Philosophy
Reading: Antti Kauppinen, Stanford Encyclopedia of Philosophy Entry on Moral Sentimentalism
Footnotes
1. Hume adds, however, that human nature itself does not allow us to live in a perpetual state of skepticism, rationally justified though the skepticism may be. The very nature of our cognition makes it so that we can only be skeptics when we have our philosopher hats on, but we return to blissful non-skepticism as soon as we engage in healthy everyday activities. Moreover, there's no reason to despair, claimed Hume, since we can gain some probabilistic knowledge about matters of fact, assuming the external world is real, through the experimental method.
2. Interestingly, contra widely held assumptions about Smith's religiosity, Smith explains the origin of religions as being similar to the origin of science: both are searches for explanations that relieve our tensions about the world. By implication, gods were invented by human beings—not the other way around. Gods are the result of human ignorance and lack of explanatory power. Smith conjectures that what gave rise to monotheism was the desire to unify the different explanations of natural phenomena into a coherent whole. This is, by the way, exactly what Wright argued for in The Evolution of God. Smith implies but does not explicitly say that the desire for a unified theory is an artifact of the imagination and its desires; that there is no real reason to assume that a unified theory actually exists (as disappointing as this may be to some theoretical physicists).
3. Since we are discussing differences between these friends, we should note that Smith also diverged from Hume on the topic of religion. Like many other intellectuals of the age, both undoubtedly wanted to ensure that something like the wars of religion didn't happen again. However, Hume thought that the best policy was to have the state sponsor a church, on the grounds that this monopoly would render fanaticism improbable: the favored church wouldn't have to denigrate other factions, since those would be either weak or non-existent. Smith, on the other hand, presented the very modern view of complete separation of church and state—a novelty at the time.
4. Rasmussen hastens to add that Smith believed the state should intervene to educate the children of the poor, who would otherwise be condemned to the same harmful division-of-labor-type employment as their parents. Again, if a libertarian tells you that Smith would've advocated for only private schooling, tell them to actually read Wealth of Nations.
The Kingdom of Ends
Kant stands at one of the great dividing points in the history of ethics. For perhaps the majority of later philosophical writers, including many who are self-consciously anti-Kantian, ethics is defined as a subject in Kantian terms. For many who have never heard of philosophy, let alone of Kant, morality is roughly what Kant said it was... But at the outset we have to note one very general point about Kant. He was in one sense both a typical and supreme representative of the Enlightenment; typical because of his belief in the power of courageous reasoning and in the effectiveness of the reform of institutions... supreme because in what he thought he either solved the recurrent problems of the Enlightenment or reformulated them in a much more fruitful way.
~Alasdair MacIntyre
Moral logic
Let's focus for a moment on the first five ethical theories we've covered: ethical egoism (EE), social contract theory (SCT), divine command theory (DCT), virtue ethics (VE), and cultural relativism (CR). These have been presented, more or less, in the order in which their root ideas came about. For example, theories in which human self-interest plays the dominant role, such as EE and Hobbes' SCT, have been around since the beginning of the Western tradition, back in the time of Plato. As such, they were presented first. It should be said, however, that some of these theories are difficult to place on firm historical ground. Take, for instance, CR. We said last time that there were "hints" of relativism far back in history, but the sort of relativism we're covering was only really conceived of in the 20th century. The same goes for DCT. There was a polytheistic version of DCT far back in history, at least since the time of Plato. But the DCT we're covering is not polytheistic—it's actually Catholic—and it is largely the sort of DCT that two figures from the Middle Ages would subscribe to. These figures are St. Thomas Aquinas and William of Ockham.1
The more important thing to note about the first five theories is that they all share a particular moral logic. Despite the fact that the ideas that gave birth to these theories are "smeared" across history, these are all originally forms of means-end reasoning (MacIntyre 2003, 2013). This is the point I made in Patterns of Culture when discussing moral discourses. Let me reiterate it here. Rather than focusing on specific ethical theories, it might be helpful to focus on moral discourses. From this perspective, the moral discourses covered thus far can be distinguished by the sort of justification given for moral rules and by the type of moral logic that they use (see Comments on historical recurrences from the Patterns of Culture). For example:
Social Contract Theory's moral logic:
if you want to avoid the condition of a war of all against all → give authority on moral/legal matters to a central power

The Virtue Tradition's moral logic:
if you want to flourish in a particular social order → develop the virtues that are conducive to excellence in that social order

Divine Command Theory's moral logic:
if you want to avoid eternal damnation → follow the rules set forth by the deity
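Purely as a study aid, and as my own illustration rather than anything MacIntyre himself formalizes, the shared means-end structure of these discourses can be rendered as data: each one pairs an end you are presumed to want with the means it prescribes.

```python
# Toy rendering of the shared means-end (conditional) structure of the
# moral logics listed above. The wording of each entry is taken from the
# examples in this lesson; the data structure itself is just an illustration.
moral_logics = {
    "Social Contract Theory": ("avoid the condition of a war of all against all",
                               "give authority on moral/legal matters to a central power"),
    "Virtue Tradition": ("flourish in a particular social order",
                         "develop the virtues conducive to excellence in that social order"),
    "Divine Command Theory": ("avoid eternal damnation",
                              "follow the rules set forth by the deity"),
}

for theory, (end, means) in moral_logics.items():
    print(f"{theory}: if you want to {end}, then {means}.")
```

Notice that every entry has the same conditional shape. As we'll see shortly, Kant's categorical imperative is precisely a command that refuses to take this shape.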
According to MacIntyre (2003, 2013), only a historical study of ethics allows you to realize this. This is because most ethicists today, as well as regular folk, tend to use moral terms in a more absolutist sense. In other words, people tend to think of morality not as means-end reasoning (i.e., if you want this, then do that), but as something that must be done simply because it is the right thing to do. Again, this is not how most people in, say, Ancient Greece thought of morality. Interestingly enough, we can pinpoint when this shift in the use of moral terms occurred. It has to do with the massively influential ethical theory we begin to cover today, one that transformed our way of conceiving of moral discourse itself. As it turns out (and as you read in the epigraph above), the way in which most people think about ethics today is the way that Immanuel Kant thought about ethics. But to understand his view, you have to understand his mind; and this mind was shaped by a revolution in thinking known as the Age of Enlightenment—a time period when the philosophy of humanism took hold.
The high water mark of religiosity
“In medieval Europe, the chief formula for knowledge was: knowledge = scriptures × logic. If we want to know the answer to some important question, we should read scriptures and use our logic to understand the exact meaning of the text… In practice, that meant that scholars sought knowledge by spending years in schools and libraries reading more and more texts and sharpening their logic so they could understand the texts correctly” (Harari 2017: 237-8).
Up until the dawn of Enlightenment era humanism, which began in the early 18th century, it was religion that gave meaning to every sphere of life. Although it doesn't always seem like it, the role of religion has been radically reduced since its high-water mark prior to the Enlightenment era. Religion gave meaning to art, music, science, war, and even death.2 Since it is hard to intuit this radical shift, I want to give you an example of the role that religion used to play in an institution that is still around today: punishment. Once you see how punishment used to be conceived of and compare that with how punishment is seen today, you'll have a better understanding of the radical shift that occurred. The following contains graphic descriptions of a kind of execution which was practiced in Kant's native Kingdom of Prussia, as well as throughout Western Europe, namely breaking on the wheel. Those who are sensitive can and should skip this video. For the rest, here is some Food for Thought...
Food for Thought
What humanism is

Fountain, by Marcel Duchamp.
As we've seen, before humanism, supernatural beings gave meaning and order to the cosmos. Now, humans do. In fact, this is one way to define humanism. In chapter 7 of Homo Deus, Yuval Noah Harari argues that humanism means that humans are the ultimate arbiters of meaning. To make his point, Harari discusses the radical shift of perspective and subject matter in art. It is no secret that religious expression used to dominate all art forms: paintings, music, etc. Now, however, what counts as art is a purely secular matter. In other words, it's ultimately up to humans. Consider the sculpture pictured at right. Is it art? Really, that's for humans to debate and decide. And this notion—that humans can give value to something merely by deciding that it has value—was inconceivable prior to humanism.
A further example is how we characterize war. Before humanism, war was seen "from above." The justification for armed conflict was divine, the soldiers were faceless, and the General was a genius. If one army defeated another, the General of the victorious army would claim that it was God's will that they won. (Otherwise, why would they have won?) This is why Genghis Khan called himself "the punishment of God". In this quote from the Great Khan, you can see how supernatural beliefs were imbued even into the explanation of why one group defeats another: "I am the punishment of God. If you had not committed great sins, God would not have sent a punishment like me upon you." Even the losers in war sought answers from the divine. When the Mongols captured Baghdad in 1258, religious leaders found themselves asking why their people had lost favor with Allah. (Why did Allah abandon them?) But now, after humanism, portrayals of war revolve around the individual soldier and their loss of innocence. Think of movies like Full Metal Jacket. We support the troops, not just the general. We critique our politicians if they send our young ones to die for no good reason. The perspective has clearly shifted.
It is, I think, impossible to put yourself into the mindset of someone prior to humanism. But perhaps you can approximate the thinking of someone who lived through the transitional period. Perhaps you find meaning in the inquiries of science, as many did during this period. It was also a time of rising literacy rates and the proliferation of non-religious literature, like novels (see Hunt 2007). If you find reading this kind of literature liberating, then you are approaching this way of thinking. In other words, you are approaching the worldview of Immanuel Kant.
Storytime!

The Capitol Hill Putsch,
6 January 2021.
We are almost ready to begin looking at Kant's worldview. There is one important difference between our perspective and his that needs to be discussed first: you have the benefit of hindsight. You know that Kant's time was a transitional period. At the time, however, the loss of trust in traditional institutions must have been extremely concerning. Perhaps another way to look at it is this. Just as you are living through a transitional period—one where there is loss of trust in political institutions, one where there is uncertainty over the future of employment, one where the US is transitioning from being the global superpower to being second behind China—Kant was seeing the world around him in a state of crisis. When you think of Kant, you have to bring this feeling of uncertainty into the picture. So, let's do a quick Storytime!, and then we'll be on our way...
Important Concepts
Human Understanding

The Copernican Revolution
(in perspective).
Prior to Kant's time, various thinkers had been trying to establish the foundations of the natural sciences. These are thinkers like Francis Bacon, René Descartes, John Locke, and Thomas Hobbes (see Footnote 2). Many found it unsatisfactory to justify science by saying simply that it works. As you might know, if all you know is that something works, but you don't know why it works, then you're going to be in a lot of trouble when it stops working. This is obviously because you'll have no idea how to fix it. And so thinkers were trying to establish the foundations of science for precisely this reason. But before even establishing these foundations, thinkers had to settle a more basic question: Do we perceive the world as it actually is? Each thinker had their own position, but Kant was not satisfied with their theories. In the Critique of Pure Reason, originally published in 1781, Kant engages in his Copernican revolution. Just as Copernicus' heliocentric theory takes into consideration that the Earth is itself in orbit around the sun when explaining the movement of the other planets, Kant takes up the hypothesis that, when we perceive objects in the world, our perceptual systems change them. In other words, Kant took into consideration how our cognitive systems give form to our perceptions, a form that isn't actually in the objects in the world.
This idea is profoundly modern. In his (1999) book The Number Sense, Dehaene gives an overview of the countless mathematicians who have wondered why mathematics is so apt for modeling the natural world. By this point, Dehaene has spent some 250 pages arguing that we actually have an innate module programmed into us by evolution that helps us visualize a number line and learn basic arithmetic concepts. And so Dehaene concludes that it is not the case that "mathematics is everywhere." Instead, we can't help but see mathematics everywhere. Our brains project a mathematical understanding onto the world.
"There is one instrument on which scientists rely so regularly that they sometimes forget its very existence: their own brain. The brain is not a logical, universal, and optimal machine. While evolution has endowed it with a special sensitivity to certain parameters useful to science, such as number, it has also made it particularly restive and inefficient in logic and in long series of calculations. It has biased it, finally, to project onto physical phenomena an anthropocentric framework that causes all of us to see evidence for design where only evolution and randomness are at work. Is the universe really 'written in mathematical language,' as Galileo contended? I am inclined to think instead that this is the only language with which we can try to read it" (Dehaene 1999: 252).
In the Critique, Kant wants to study the limits of abstract thought; in other words, he wants to know what he can know through reason alone. Can we know nothing of value from pure reason? Or can we discover fundamental reality with it, as Plato thought? It would be impossible for me to summarize Kant's most important argument in this work, the transcendental deduction, but I can give you his main conclusion: it is through human understanding that the laws of nature come to be. In other words, human understanding is the true law-giver of nature; it is through our cognitive systems, with their built-in ways of looking at the world, that we "formalize" the world and project onto it the laws of nature, very close to what Dehaene said above.4
"Thus we ourselves bring into the appearances that order and regularity that we call nature, and moreover we would not be able to find it there if we, in the nature of our mind, had not originally put it there... The understanding is thus not merely a faculty for making rules through the comparison of the appearances; it is itself the legislation for nature, i.e. without understanding there would not be any nature at all" (Kant as quoted in Roecklein 2019: 108).

Human Reason
In case you haven't noticed, the preceding section was devoted to human understanding. This section is devoted to human reason. For Kant, these are two different cognitive faculties. The basic distinction is as follows. Human understanding (through which we give order to the world, as seen in the previous section) contains forms of intuition, i.e., those built-in categories that shape the world when we cognize it. Human reason, on the other hand, does not. Reason does not depend on the peculiarities of human cognition; what is reasonable is reasonable to all intelligent beings, whether they be humans, angels, gods, or whatever.
Human understanding is the means by which we understand the world as it appears to us. Human reason, on the other hand, is the means by which we consider how the world ought to be. So both faculties help us construct the world, although in different senses of the word. And just as we give ourselves the laws of nature through human understanding, we give ourselves the moral law through human reason.
We should be more specific about the kinds of laws that human reason gives us. There are two ways that reason commands us. A hypothetical imperative is the sort of imperative (or command) where: a. you have a particular desired outcome or consequence, so b. you perform a particular action as a means to that end. For example, "Billy wants to get an A in the course, so he does all the homework and engages in class." Also, "Wendy is thirsty, so she got up to get some water." Billy and Wendy had a desire, and reason came up with a rational means by which to fulfill that desire. That's one way that reason commands us.

A great white shark,
not known for vegetarianism.
A categorical imperative is a command from reason that applies in any situation, no matter what you desire; i.e., it's a set of rules you must follow, since they always apply. Put another way, there are some rules that, if you disobey them, you contradict yourself.5 Kant believes that morality is a categorical imperative. It is a moral law commanded by our own reason.
It is the realization that there is a moral law, and that we can fail to abide by it, that makes us realize that we are free.6 That means we are Rational Beings; we are beings that can live according to principles. Non-human animals can't do this. This is why it is funny to see vegetarian sharks in Finding Nemo: it's not in a shark's nature to be able to choose to be vegetarian. But we can choose the principles by which we live. Kant argues that this is what gives us moral personhood (i.e., the status of having moral rights).
“The starting point of Kant’s ethics is the concept of freedom. According to his famous maxim that ‘ought implies can’, the right action must always be possible: which is to say, I must always be free to perform it. The moral agent ‘judges that he can do certain things because he is conscious that he ought, and he recognises that he is free, a fact which, but for the moral law, he would have never known’”(Scruton 2001: 74).
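To make the contrast with the earlier means-end logics vivid, here is another small sketch of my own (not Kant's formulation or anything from the readings): hypothetical imperatives bind an agent only given a relevant desire, while a categorical imperative binds every rational agent regardless of what that agent happens to desire.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    desires: set = field(default_factory=set)

def hypothetical_imperative(required_desire, action):
    """Binds an agent only if the agent happens to have the relevant desire."""
    return (lambda agent: required_desire in agent.desires), action

def categorical_imperative(action):
    """Binds every rational agent, whatever the agent happens to desire."""
    return (lambda agent: True), action

billy = Agent("Billy", desires={"get an A"})
wendy = Agent("Wendy", desires={"quench thirst"})

do_homework = hypothetical_imperative("get an A", "do all the homework")
do_not_lie = categorical_imperative("do not lie")

for agent in (billy, wendy):
    for binds, action in (do_homework, do_not_lie):
        print(f"{agent.name} is {'bound' if binds(agent) else 'not bound'} to: {action}")
# Billy is bound by both commands; Wendy is bound only by the categorical one.
```

The point of the toy model is just this: the categorical command takes no input from the agent's desires at all, which is what Kant means when he says it applies no matter what you want.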

Kant's Metaphysics
Some clarifications...
As previously stated, human understanding is for understanding the empirical realm (the world of phenomena that we perceive through our "forms of intuition"); human reason, then, helps us come to know aspects of the transcendental realm (the realm of things-in-themselves), namely the general laws of logic that apply everywhere. Because Kant's moral system, along with his defense of free will, is founded in this transcendental realm, he must rely solely on reason for making moral judgments, since only human reason can grasp that realm. This is why Kant argued that we can arrive at fundamental moral truths through reason alone (or Pure Reason); in other words, given his views on metaphysics and the problem of free will, Kant was compelled to argue that we do not need to look at the consequences of an action (in the empirical realm) to see whether it is right or wrong. This is why Kant develops a purely duty- or rule-oriented view—consequences operate only in the realm of cause and effect, the empirical realm, which is the domain of human understanding; moral truth can only be arrived at through human reason.
What is freedom for Kant? Kant stresses that freedom is not just doing whatever you desire. This is because some desires do not genuinely come from us. Desires have either biological or social origins. For example, our desires for food and sex have biological origins. Other desires, like the desire to have a bigger following on Instagram, clearly have a social origin—and they have to do with sophisticated algorithms that design the user experience to be maximally addictive (see Lanier 2018).7 In any case, Kant argues that true freedom comes when you rid yourself of these non-rational desires. It is only when you allow yourself to be truly governed by reason that you are free. Getting rid of all your non-rational desires leads to pure practical reason. As previously mentioned, it is through human reason that we give ourselves the moral law. So once you've gotten rid of these non-rational desires, you can follow the moral law.
For the reasons outlined in the preceding paragraph, Kant argues that an action only has real moral worth (i.e., moral value) if it is done out of duty. To do something out of duty is to do it because one is motivated by respect for the moral law, even if one doesn't really want to do it. The moral worth of the act is derived not from the consequences of the act, but from the principle, or maxim, that motivated the act. For this reason, good will is the highest moral virtue. Good will is what allows you to follow the moral law. In fact, other virtues wouldn't be as good without the possession of good will first. For example, being loyal is clearly a virtue. But if you are loyal to a tyrant, like Vlad the Impaler, you end up doing many immoral things, like impaling people. If you had good will (towards others), you wouldn't be loyal to a tyrant like that. It is good will that allows loyalty to truly be a virtue.
We're finally ready to see Kant's ethical theory. The categorical imperative is...
To be continued...
FYI
Suggested Reading: Onora O’Neill, A Simplified Account of Kant’s Ethics
TL;DR: Crash Course, Kant & Categorical Imperatives
Supplementary Material—
- Reading: Tim Jankowiak, IEP Entry on Immanuel Kant, Section on Moral Theory
Advanced Material—
- Reading: Christine Korsgaard, Kant's Formula of Universal Law
- Reading: Michael Rohlf, Stanford Encyclopedia of Philosophy Entry on Immanuel Kant
- Book: Immanuel Kant, Critique of Pure Reason
Footnotes
1. St. Thomas Aquinas and William of Ockham did both believe in divine command theory, but, in other respects, they were really more like each other's archnemeses. Their debates can't be covered here. For a brief survey of their debates, you can take my PHIL 101 course. For a more complete analysis, you can take PHIL 107: Philosophy of Religion or PHIL 111: History of Ancient and Medieval Philosophy.
2. Some students sometimes ask how religion motivated scientific inquiry. As it turns out, the very same scientists who played a role in the downfall of Aristotelianism and the ushering in of the new Newtonian worldview in the 17th century were motivated by religious fervor. Copernicus was motivated by neo-Platonism, a sort of Christian version of Plato's philosophy (DeWitt 2018: 121-123), and Kepler developed his system, which was actually correct, through his attempts to "read the mind of God" (ibid., 131-136). Even Galileo was a devout Catholic; he just had a different interpretation of scripture than did Catholic authorities; the interested student can read Galileo's Letter to Castelli, where he most clearly articulates his views on the matter. Interestingly enough, the only thinker from this period that seemed to have advocated a genuinely secular worldview was none other than Thomas Hobbes. "It should be said, however, that Hobbes (despite his own pleas) has rarely been seen as the key theoretician of modern science—which illustrates the dubiousness of the notion that there was a historical link between modern science and the process of secularization. Scientists in the 17th and 18th centuries in fact disowned the one true secular philosophy of science on offer to them [i.e., that of Hobbes], preferring instead the elaborate theological speculations in which Newton indulged" (Tuck 2002: 60).
3. Sure, there are still people who are profoundly religious. But since humanism came about, even the profoundly religious often don't find meaning exclusively in the religious sphere.
4. Although it is not directly relevant to the course, the "forms of intuition" that Kant argued are a part of how we see the world, but not a part of the world in-and-of-itself, are actually space and time. In other words, Kant believed space and time are not actually features of the world but are added to our sensory impressions of the world when we cognize it. Believe it or not, some physicists believe Kant is actually right: spacetime isn't objectively real; rather, we construct it through our perceptual systems (see Rovelli 2018).
5. For example, here are some commands from reason: a. You may not conceive of a married bachelor; b. You may not conceive of a round square.
6. Roger Scruton puts it this way: “The law of cause and effect operates only in the realm of nature (the empirical realm). Freedom, however, belongs, not to nature, but precisely to that ‘intelligible’ or transcendental realm to which categories like causality do not apply” (Scruton 2001: 75).
7. I suppose that, in addition to Lanier's Ten Arguments for Deleting Your Social Media Accounts Right Now, I'd also recommend Netflix's The Social Dilemma. However, I should add that the dramatizations in this Netflix documentary are somewhat misleading and hyperbolic. In the case of the algorithms powering social media and the profit model behind it, it is scary enough to just describe these as they are without resorting to hyperbole. I suppose, though, that the dramatizations are necessary to convey the message without getting bogged down in technical details.
Common Sense
Enlightenment is man’s emergence from his self-imposed immaturity. Immaturity is the inability to use one’s understanding without guidance from another. This immaturity is self-imposed when its cause lies not in lack of understanding, but in lack of resolve and courage to use it without guidance from another.
~Immanuel Kant
Universality
You are forgiven if you did not fully understand the ideas from the last lesson or even the quote above. Kant's ideas form a doctrine that is notoriously difficult to understand. You can see why. Kant was trying to find a foundation for all those traditional views that had been pushed aside by thinkers who were blazing forward with the scientific method. In a way, then, you can see Kant as a counter-revolutionary force; he was trying to preserve the classic views of belief in an immortal soul and the freedom of the will. Despite all this, there is still something revolutionary in his view: the hypothesis that we do not see reality as it is. Because he was trying to establish a foundation for timeless entities, like souls and morality, he looked at the world in a fundamentally different way. This is his so-called Copernican Revolution. So important was this thesis that thinkers more than a century later were still engaging with his work, as Einstein apparently did (see Rovelli 2017, chapter 3).1
Kant's metaphysics is not the topic of this lesson, though. The main lessons to draw from the last lesson are a. that it is human understanding that gives rise to the perceived order of the universe we see and b. that human reason similarly governs our actions. In other words, just like the universe operates according to a universal law, our will must also operate according to some law. But our will is not in the empirical realm, with all the other objects we perceive; our will is in the transcendental realm. So the law that it must conform to does not come from human understanding but from human reason. Human reason allows us to discover those laws that any rational will must conform to, regardless of the consequences. Robert Johnson and Adam Cureton, in their entry on Kant's Moral Philosophy in the Stanford Encyclopedia of Philosophy, put it this way:
“If x causes y, then there is some universally valid law connecting Xs to Ys. So, if my will is the cause of my φing, then Φing is connected to the sort of willing I engage in by some universal law. But it can’t be a natural law, such as a psychological, physical, chemical or biological law. These laws, which Kant thought were universal too, govern the movements of my body, the workings of my brain and nervous system and the operation of my environment and its effects on me as a material being. But they cannot be the laws governing the operation of my will; that, Kant already argued, is inconsistent with the freedom of my will in a negative sense. So, the will operates according to a universal law, though not one authored by nature, but one of which I am the origin or author. Thus, Kant argues, a rational will, insofar as it is rational, is a will conforming itself to those laws valid for any rational will” (Johnson and Cureton, entry on Kant's Moral Philosophy in the Stanford Encyclopedia of Philosophy, section 10).
In short, if the moral law is to be a law, then it needs to apply to everyone, at all times, in any given context, just like gravity applies to everyone at all times. This moral law, then, is a command from reason, a law that any rational will must conform to, which Kant called the Categorical Imperative (CI).
Important Concepts
Kant's Ethics
Comments on the significance of Kant's work
Kant is perhaps the most important thinker we've covered so far. This is not because I believe he's right (I don't) or that his arguments are extremely convincing (sometimes they're not). He's important because most thinkers after Kant, even those who claim to be non-Kantian, take the subject matter of ethics to be just what Kant defined it as (MacIntyre 2003: ch. 14). Even those who have never heard of philosophy or Kant conceive of ethics in the way that Kant conceived of ethics. Consider how Kant defines moral terms and the logic of his moral discourse: what is moral is simply what one must do. Why must one do it? Reason commands it, even if there are negative consequences, even if you don't want to, even if it is against your interests—because this must be.
Let me try to convey the importance of Kant's work in another way. As MacIntyre argues, once you see Kant's line of reasoning and way of defining morality, other moral discourses seem almost absurd. Of course morality is what you must do and not merely what you'd like to do or what is in your best interest. But this simply wasn't obvious before Kant said it. This brings us to the cognitive bias of the day.

If you ever needed a reminder of how we don't truly understand ourselves, there's always hindsight bias, also known as the I-knew-it-all-along effect. As you can imagine, this bias leads us to mistakenly assume that past events were actually more predictable than they really were. For example, in one study, subjects' opinions on a controversial topic, e.g., capital punishment, were measured. The subjects then viewed persuasive material either for or against said policy. Then subjects had their opinions measured again. The subjects were usually closer to the persuasive message they had just viewed. But(!), here's the stunning find. Subjects had great difficulty reconstructing their original opinion(!). In other words, they were in the grip of their current beliefs, the ones altered by the persuasive message. And so, they often substituted their updated beliefs for their original opinions (Nisbett and Wilson 1977). This just goes to show that if you just learned something and you feel like you knew it all along, you should probably second-guess that feeling.
Hindsight bias also leads us to overestimate our ability to predict events. In another study, subjects reported the probabilities of fifteen possible outcomes regarding Richard Nixon's foreign policy objectives while he was traveling on a diplomatic trip. When Nixon returned to the US and it was known which of those outcomes had actually occurred, the subjects were asked to recall the probabilities they had originally assigned. Subjects tended to exaggerate the probabilities they had assigned to events that did come to pass, and to shrink the probabilities of the events that did not. This was all done non-consciously, of course, since the subjects did not have their original probabilities in front of them—and hence couldn't make a direct comparison (Fischhoff and Beyth 1975).2
And so, just like Darwin's theory of evolution seems obvious once he lays it out for you, Kant's conception of moral discourse makes all previous moral discourses seem weak. This is not to say that Kant's ethical theory is obviously true, but rather that Kant's way of thinking about morality—as rules that we have to abide by—seems very intuitive. In other words, it seems like the right way to think about morality.

Why is Kant's moral discourse intuitive? In chapter 4 of The Expanding Circle, Peter Singer gives an explanation for how reason plays the essential role. He first makes the case that early ethical systems were probably marked less by explicit moral reasoning and more by habit and custom, habits and customs which were themselves stabilized by our social instincts of kin altruism and reciprocal altruism. In other words, early societies simply found social norms that "worked" and those societies that didn't find social norms that "worked" fell apart—much like evolutionary theory would predict.3 In fact, Singer reminds us that our moral language shows the relics of this. The word "ethics" comes from the Greek ēthos, which normally means "character" but can, in the plural, mean "manners" and is related to the Greek word for custom, ethos (short e). The word "moral" comes from the Latin mos and mores, which mean "custom" (Singer 2011: 94).
As societies grew in complexity and encountered other social orders, their members had to face various moral disputations: debates about what's the right thing to do, what's best for society, which norms are outdated, etc. This is where moral reasoning enters the picture. In order to be persuasive, moral reasoning would have to take the shape of a disinterested defense of one's conduct. It couldn't, for example, just be me defending what is good for me. Successful moral reasoning and argumentation would persuade large numbers of people by appealing to the interests of large numbers of people. But, Singer explains:
“Reasoning is inherently expansionist. It seeks universal application. Unless crushed by countervailing forces, each new application will become part of the territory of reasoning bequeathed to future generations. Left to itself, reasoning will develop on a principle similar to biological evolution. For generation after generation, there may be no progress; then suddenly there is a mutation which is better adapted than the ordinary stock, and that mutation establishes itself and becomes the base level for future progress. Similarly, though generations may pass in which thinkers accept conventional limits unquestionably, once the limits become the subject of rational inquiry and are found wanting, custom has to retreat and reasoning can operate with broader bounds, which then in turn will eventually be questioned” (Singer 2011: 99-100).4
The bottom line
As Singer argues, moral reasoning gave birth to the idea of impartiality, i.e., making decisions based on objective criteria.5 Through a long evolution of moral discourses, from early hunter-gatherer societies to the Enlightenment, the logic of moral reasoning became more impartial, stricter, and less constrained by context and circumstances. By the time Kant got to it, it was ready to be fully universalized. Morality became absolute. In this way, Kant is the embodiment of the Enlightenment, having supreme confidence in the power of courageous reasoning to be the ultimate foundation for irrefutable moral truth. To discover the moral law, Kant argued, we must use reason; only reason can guide us into finding those precepts that are to be followed by all rational beings. In arriving at these commandments from reason, Kant clearly distinguishes between hypothetical imperatives and categorical imperatives. Hypothetical imperatives, as you may notice, are the kind of imperatives that Aristotle, Hobbes, Mandeville, and the divine command theorists relied on. But Kant's categorical imperative is of the kind "You ought to do X just because you ought to do X." The most universalist moral discourse is born. Its justification: reason commands that it must be.
Other formulations of the Categorical Imperative
Kant argued that there are multiple ways of formulating the categorical imperative and that these ways are all ultimately equivalent, meaning that they will arrive at the same moral conclusions in every case. The following formulations are from Johnson and Cureton's entry on Kant's Moral Philosophy in the Stanford Encyclopedia of Philosophy.
The Humanity Formulation
The second formulation of the Categorical Imperative that Kant gives us is the humanity formulation: Act in such a way that you treat humanity, whether in your own person or in the person of another, always at the same time as an end and never simply as a means to an end. Perhaps a simpler way of putting it might be the basic rule of not treating others merely as a means to an end. In a nutshell, don't use people.

Being rude to fast food workers is clearly prohibited by the humanity formulation.
Since Kant believes that the various formulations are equivalent, the humanity formulation should help us arrive at the same perfect and imperfect duties as the universal law formulation. Thankfully, this seems pretty straightforward. When you steal from someone, you are definitely using them. At the very least, you exploit the resources they spent acquiring whatever it is that you stole from them. When you lie to someone, you similarly use them. You feed them false information so that they'll do or think whatever you find it convenient for them to do or think. When you abuse drugs, you are using yourself. And so, without too much effort, you can duplicate the perfect duties arrived at using the universal law formulation with the humanity formulation.
The Autonomy Formulation
Kant's third formulation is the autonomy formulation: Act so that through your maxims you could be a legislator of universal laws. The idea behind this formulation is to stress the source of human dignity: that we are the lawgivers. And to be universal lawgivers we must be maximally impartial, or fair.
The Kingdom of Ends Formulation
Kant's final formulation is the kingdom of ends formulation: Every rational being must act as if he were by his maxims, at all times, a lawgiving member of the universal kingdom of ends. In other words, imagine a society where everyone treated each other as ends in-and-of-themselves. Further, imagine that all your actions were immediately turned into laws in this special society. What sorts of laws would you enact? Presumably, at least, you'd say that lying, stealing, and murder are wrong.
Sidebar
Criticisms
First off...
Before I give objections to Kant's ethical theory, I should give one quick aside. The characterization of Kantian ethics that I've given, although mainstream, does have some critics. For example, Barbara Herman warns that Kant’s views are too often characterized as “rule-fetishism,” and that we should instead focus on how “moral rules give shape to the agent’s desire to be a moral person” (Herman 1993: 27). To be honest, when I read Kant, I really do get that rule-fetishist vibe. But I wanted to make sure that you all know that my interpretation isn't the only one.
Nonetheless...
There are some problems. First, since Kant developed his ethical system in the 18th century and tried his best to ground it in a transcendental reality, advances in mathematics (non-Euclidean geometry) and physics (relativity) yield various empirical problems for his view. I might add, though, that some of his theses turned out to be right, and others turned out to be influential for major thinkers of the 20th century (see footnote 1). We won't focus on these here because a. we are banishing all empirical problems until Unit III, and b. Kant's view is so much more ambitious than the others we've covered that, in a sense, it is unsurprising that it faces more obvious empirical problems.6

Some representatives of the new atheist movement.
Second, MacIntyre (2003) closes his chapter on Kant by pointing out that, in completely severing morality from self-interest and positive consequences, Kant has also severed the connection between morality and happiness. Indeed, in his 1788 Critique of Practical Reason, Kant states that "Morality is not properly the doctrine of how we may make ourselves happy, but how we may make ourselves worthy of happiness." Kant recognized that this is an extremely unsatisfactory state of affairs, and he tried to remedy it by including in his worldview God's existence, freedom of the will, and the immortality of the soul. Given this worldview, God could have the power to ultimately crown virtue with happiness, albeit in the next life. However, this isn't going to work if one is taking a secular point of view. In effect, by grounding his ethical system in God's existence, the freedom of the will, and the immortality of the soul, Kant has put his whole theory on shaky ground. This is because a. God's existence, b. human free will (see Laplace's Demon and The Person and the Situation), and c. the existence of souls have all been increasingly called into question.
Third, some scholars think Kant is too reliant on reason. Around the same time as Kant's writing, there was a strong tradition of "sentimentalist philosophers", such as David Hume and Adam Smith, who built their moral theories on the moral emotions (e.g., see Prinz and Nichols 2012). It must be remembered that Kant wrote as a conservative reaction to the crisis of the Enlightenment. He was simultaneously horrified by the traditions being cast aside and, since he was influenced by the Enlightenment, confident that reason could put things back in order. This was perhaps the height of faith in reason, a time when people believed that reason could do more than it really can. In fact, anthropologist Robin Fox claims that academia is still being harmed by a quasi-divine treatment (i.e., worship) of reason that began in the Enlightenment.
“The previous paragraphs are taken, in fact, from a previous book. There also I said that any of these diatribes are only contributions to a larger project, the aim of which is to free us from the intellectual shackles of the Enlightenment faith in reason, the romantic passion for the individual, and the nineteenth-century worship of progress. But it is worth saying over and over again because no one gets it the first time” (Fox 1989: 233-4).
Fourth, and on a more practical note, even if we could resolve all the issues mentioned above, there's still the very mundane fact that Kantianism appears to be far too strict. The theory states, for example, that you have a perfect duty not to lie. In other words, you can never lie—even when lying would lead to a better outcome, even when lying could save a life (as Kant stated in an essay titled On a Supposed Right to Tell Lies from Benevolent Motives), and even when lying can prevent some greater harm. To many, not only are some lies innocuous, but consequences clearly matter. If you could, say, save a life by lying, then some argue you definitely should lie.
A ticking time bomb.
Relatedly, what happens when one duty conflicts with another? For example, what happens if you have to lie to save a life? We have a perfect duty both to not lie and to protect life. Here's another example. Suppose a terrorist has planted a ticking time bomb in a shopping mall, though you don't know which mall. You've captured the terrorist, but he is not cooperating. One might suggest that torture is a good way to get the terrorist to reveal where the bomb is. Kantianism, however, would be opposed to this sort of treatment, since it would be treating the terrorist merely as a means to an end. Should he be punished? Sure. But not tortured. But obviously, we have a duty to save lives. Which duty has primacy: saving lives or not torturing? In this regard, Kantianism is too vague, at least when duties conflict; put more precisely, Kantianism is not action-guiding in the case of conflicting duties.
The preceding points bring up a distinction between two approaches to moral reasoning. Kantian ethics is characterized by its duty-oriented perspective. We have, Kant argues, certain perfect duties that we must always abide by, regardless of context. But other thinkers think that consequences are effectively all that matter. Duties are important, sure, but producing the best consequences possible is the only real moral mandate.
This brings us to a topic I had been deliberately avoiding. As it turns out, modern-day Kantians are mired in a brutal debate with another school of ethics. This is, in fact, how ethics classes typically introduce Kant's categorical imperative: in competition with a consequence-driven approach to ethics. I didn't quite do that, but I was only delaying the inevitable. Perhaps it's true. Perhaps the best way to understand Kantianism is to view it in comparison with its most bitter rival. I suppose it's true that only the dead have seen the end of war. And this, the war between Kantianism and the utilitarians, is a war that is still being waged. Much ink has already been spilled. We're about to add a little more...
Immanuel Kant is an immensely influential figure in the history of ethics in that most people, philosophers and non-philosophers alike, conceive of moral terms in the way in which Kant defined them: as being absolute and universal—or so argues Alasdair MacIntyre.
Kant's ethical system is grounded in his metaphysical system, and both of them are notoriously difficult to understand. His central claim is that we can be assured a priori that all of our experience will be law-governed due to the character of our cognition. As a consequence of this, however, we have no way of grasping causal relations outside of experience. And so, we cannot infer from experience alone the existence of God or that his creation is a good one. To discover the moral law, then, we must use reason to find those precepts that are to be followed by all rational beings. In arriving at these commandments from reason, Kant clearly distinguishes between hypothetical imperatives and categorical imperatives. Hypothetical imperatives involve means-end reasoning and are the kind of moral reasoning that Aristotle, Hobbes, Mandeville, and divine command theorists engaged in. But Kant's categorical imperative is of the kind "You ought to do X just because you ought to do X."
There are several issues with Kantianism, including the dubiousness of his metaphysical assumptions (the existence of God, the immortality of the soul, etc.), an over-reliance on reason, an overly-strict fetishism about rules, and a failure to be action-guiding when perfect duties conflict.
FYI
Suggested Reading: Barbara Herman, Integrity and Impartiality
TL;DR: Marianne Talbot, Deontology: Kant, duty and the moral law
Supplementary Material—
- Video: The School of Life, Immanuel Kant
Advanced Material—
- Reading: Robert Johnson and Adam Cureton, Stanford Encyclopedia of Philosophy Entry on Kant’s Moral Philosophy, Section 10
- Note: This section stresses Kant’s argument that freedom must be a necessary idea of reason. This notion is the strongest link to the first major work of Kant’s critical philosophy, the Critique of Pure Reason.
- Reading: Kant, Groundwork for the Metaphysic of Morals
Footnotes
1. Einstein was apparently fond of reading the work of philosophers. Per Rovelli, Einstein did not arrive at his views through the experimental method or through mathematical modeling. His revelations came in a series of thought-experiments, which were only later formalized into a mathematical language using Riemann's non-Euclidean geometry and much later proved experimentally. Also worthy of mention here is that some of Kant's theses have been vindicated by physics (see Rovelli 2018).
2. Note that the hindsight bias makes it extremely difficult to properly evaluate a decision that you've taken. This is evident in a study in which two groups were asked whether a given city should pay to have a full-time bridge monitor to protect against the risk that debris will block a local river, potentially causing floods. One group was given all the evidence available at the time of the city's decision; the other group was given all that evidence plus the information that debris did in fact block the river and cause a flood. Additionally, they were instructed to not allow hindsight to distort their judgment. Nevertheless, only 24% of subjects from the first group judged that the city should get a bridge monitor, while 56% of subjects from the second group judged that the city should get a bridge monitor (Kamin and Rachlinski 1995). This makes one wonder if moral decisions are affected in a similar way. Nobel laureate Daniel Kahneman does claim that this bias can lead us into believing that business moguls who take massive risks "knew it all along," rather than accepting the more practical belief that they just got lucky (Kahneman 2011: 202). This bias might influence issues relating to moral luck.
3. One example of Darwinian processes happening at the level of a society is the proliferation of Protestantism throughout Europe in the 16th century. During this time period, various experimental communities cropped up, each having its own interpretation of Protestant Christianity. But mainstream Protestantism eventually took shape only from those communities that survived. In other words, communities with a social order that could not hold together did not pass their particular social orders, i.e., their memes, on to the next generation. This could be seen as a form of natural selection (see Wilson 2003).
4. Michael Tomasello (2014) calls this the cultural ratchet effect, a cumulative process in which one culture's discoveries get passed on to the next generation, making it such that progress comes more easily and naturally and people don't find themselves constantly having to reinvent the wheel.
5. The idea of impartiality, which was born through moral reasoning, was originally limited to the members of one's in-group. Singer reminds us that in the Bible the Israelites were instructed to enslave only non-Israelites (Leviticus 25:39-46), a message that was echoed by Plato, who suggested that Greeks enslave only non-Greeks and not fellow Greeks. But once this idea was born, i.e., the idea of impartiality, it took on a logic of its own—even if it took a while. “[O]nce reasoning has got started it is hard to tell where it will stop. The idea of a disinterested defense of one’s conduct emerges because of the social nature of human beings and the requirements of group living, but in the thought of reasoning beings, it takes on a logic of its own which leads to its extension beyond the bounds of the group” (Singer 2011: 114).
6. Nonetheless, even for moral anti-realists, like John Mackie (1990), there is something intuitively true about the conjecture that moral maxims are supposed to be universalizable and accessible to everyone (via reason or some other faculty)—another sign of the legacy of Kant.
The Trolley
(Pt. I)
That bodily pain and pleasure, therefore, were always the natural objects of desire and aversion, was, he thought, abundantly evident. Nor was it less so, he imagined, that they were the sole ultimate objects of those passions. Whatever else was either desired or avoided, was so, according to him, upon account of its tendency to produce one or other of those sensations. The tendency to procure pleasure rendered power and riches desirable, as the contrary tendency to produce pain made poverty and insignificancy the objects of aversion. Honour and reputation were valued, because the esteem and love of those we live with were of the greatest consequence both to procure pleasure and to defend us from pain. Ignominy and bad fame, on the contrary, were to be avoided, because the hatred, contempt and resentment of those we lived with, destroyed all security, and necessarily exposed us to the greatest bodily evils.
~Adam Smith, discussing Epicurus
Kant v. the utilitarians
Typically in an introductory ethics course, the view we learned about in the last two lessons is taught in tandem with the view we are covering today: utilitarianism (in particular the version of utilitarianism advocated by John Stuart Mill, 1806-1873). I tried to move away from that while introducing Kantianism, but now that we are moving towards utilitarianism, it's impossible to hide just how antagonistic these two views are to each other. It sometimes seems they are almost exact opposites in their approach to moral reasoning, and they disagree on almost every ethical issue of great import.
Why is this? There are many reasons. First, as you will learn in the Important Concepts, the utilitarians are explicitly moral naturalists. This is simply the view that moral properties are just natural properties. They're not commands from God or social constructs like the law. Instead, moral properties are empirically discoverable, i.e., capable of being studied by science. How? Utilitarians believe that the moral property GOOD just is a positive mental state, namely pleasure. Pleasure, of course, is a natural phenomenon.1

A dopamine molecule, which is discoverable through science and is very much unlike the synthetic a priori truths of geometry.
Moreover, once they've opened up the natural realm as a candidate for moral properties, they argue that positive mental states, hereafter referred to as utility, are actually the only intrinsic good. If you recall, ever since the lesson on Hobbes we've been referring to this view as hedonism. This is where the utilitarians' empirical approach comes in handy: they pose a challenge to non-utilitarians. What do you really want other than happiness (or the avoidance of pain, i.e., negative mental states)? The more "base" desires are obviously linked to the pursuit of pleasure and the avoidance of pain; I'm speaking here of our desire for things like sex and food. You might then claim you want a good job, a family, and a house. But the utilitarian would only inquire further, "Why do you want that?" Ultimately, you'd have to concede that having a bad job (or no job), no family, and no home would be considerably damaging to your mental wellbeing. Having them, however, would make you happy. For pretty much anything you desire, the utilitarian can find a way to show you that ultimately there is a desire for the utility that it brings you. This is why the utilitarians consider hedonism to be an empirical truth: you can discover that this is what really drives people just by asking them. That is, check what actually motivates humans and you'll find that it is the pursuit of pleasure and/or the avoidance of pain; in other words, hedonism is true. I might add that the utilitarians and Hobbes are far from the only hedonists. This view was famously advocated by Epicurus, as you can see in the epigraph above.
The combination of moral naturalism and hedonism already puts the utilitarians at odds with Kant, as well as with basically every other theorist covered so far. Recall that for Kant, the moral law was a synthetic a priori truth, a proposition that isn't merely an analytic truth (that's true by definition) but is justified independent of any sensory experience. In other words, for Kant, moral truth is arrived at through the power of reason. Reasoning is surely involved in utilitarianism too, but the justification will ultimately be a posteriori—based on some sensory experience.
Hedonism puts the utilitarians at odds with virtue theorists too. If pleasure is the only intrinsic good, then virtues cannot be goods in-and-of-themselves. Virtues, according to the utilitarian, are only pursued for the pleasure that they bring (or because being without them is distressing). Hear Mill:
“There is in reality nothing desired except happiness. Whatever is desired otherwise than as a means to some end beyond itself, and ultimately to happiness, is desired as itself a part of happiness, and is not desired for itself until it has become so... Those who desire virtue for its own sake, desire it either because the consciousness of it is a pleasure, or because the consciousness of being without it is a pain, or for both reasons united... If one of these gave him no pleasure, and the other no pain, he would not love or desire virtue, or would desire it only for the other benefits which it might produce to himself or to persons whom he cared for” (Mill 1957/1861: 48).2
In any case, the component of Utilitarianism that really puts it at odds with Kantianism is consequentialism. Recall from the lessons on Kant that consequentialism is the view that an act is right or wrong depending on the consequences of that action. Kantianism, on the other hand, specifically stipulates that you need not check the empirical realm. The moral law must be transcendental, independent of context, argues Kant. This might seem like a small difference now, but you'll soon see that this puts these two theories worlds apart—literally (according to Kant).

Much like classical cultural relativism wasn't explicitly formulated until the 20th century but had "hints" of it in ancient times, utilitarianism wasn't explicitly formulated until the work of Jeremy Bentham in the late 18th and early 19th centuries but had antecedents both in ancient times (in the work of Epicurus) and in the 17th and 18th centuries in the works of Richard Cumberland (1631–1718), John Gay (1699–1745), Anthony Ashley Cooper, the 3rd Earl of Shaftesbury (1671–1713), and David Hume (1711–1776). It is interesting to note that ages of great intellectual accomplishment appear to be associated with utilitarianism. In the first place, and as we've already mentioned, the 17th and 18th centuries are associated with the Scientific Revolution and the Enlightenment, a time period that influenced the abovementioned thinkers.
With regard to the period during which Epicurus lived, as well as the period during which Epicureanism flourished, there is also evidence that these were times of intellectual breakthrough. In chapter 6 of The Closing of the Western Mind, Freeman discusses the intellectual merits of the first two centuries of the common era, as well as the increasingly pro-social role of the Roman emperor as the "parent" of the people. (Recall that by this point, Rome had conquered huge swaths of territory including most of Western Europe, as well as parts of the Mediterranean, North Africa, and the Middle East.) Freeman also discusses how it was a time of theological fusion during which the Romans co-opted foreign gods into their pantheon, the consequence being that across the empire there was a recognized hierarchy of gods—a supreme god at the head of other lesser deities. Freeman also discusses popular philosophies of the time: Epicureanism, Stoicism, and Platonism.
A map of the Roman Empire in 125 CE, during the reign of the Emperor Hadrian.
In addition to noting that utilitarianism and its tenets (such as hedonism) can be associated with "enlightened periods", we might also note that the moral discourse of modern times nonetheless differs from that of ancient times. Recall MacIntyre's distinction between Greek ethics and modern ethics. Greek ethics is concerned with the question, "What am I to do if I am to fare well?"; modern ethics is concerned with the question, "What ought I to do if I'm to do right?" As we learned last time, this transition was completed in the work of Immanuel Kant. The transition from the Greek to the modern approaches, though, can actually already begin to be seen after Aristotle. Aristotle and Plato, among others, had words for optimal behaviors in their particular social orders (e.g., arete). However, their social orders eventually fell prey to others. First, Athens fell to Macedon and eventually all were subsumed by the Roman Empire. And so, the words for these optimal behaviors remained but the behaviors themselves no longer led to thriving, since the social orders they were designed for no longer existed. This is actually what led to movements like Stoicism and Epicureanism, movements that sought to redefine agathos under the new social order. The condition of the Greeks was now one of subordination. Unsurprisingly, these movements stressed a lifestyle that was conducive to either self-control with regard to one's private life (Stoicism) or withdrawal from excess desire (Epicureanism). These are no longer moral systems for being a successful statesman. These values are values for a conquered people. This is excellent evidence that moral norms change as social orders change, and that, depending on the social order, different moral norms stabilize and different moral discourses are shaped.
Making room...
Mill does not only differentiate his view from that of Kant; he also goes after other ethical theories. As you can see in his quote above, he argues that virtue theory doesn't hold any weight since it doesn't give the right theory of moral value. He claims that those who pursue virtue for its own sake are actually just pursuing pleasure or the avoidance of pain. Similarly, Mill goes after social contract theory. While reviewing the problems of past approaches to moral reasoning, Mill argues that social contract theory merely put a bandaid on the whole matter by inventing the notion of a contract. But he dismisses the theory outright. You'll get a better idea of why in the next section.
“To escape from the other difficulties, a favourite contrivance has been the fiction of a contract, whereby at some unknown period all the members of society engaged to obey the laws, and consented to be punished for any disobedience to them, thereby giving to their legislators the right, which it is assumed they would not otherwise have had, of punishing them, either for their own good or for that of society… I need hardly remark, that even if the consent were not a mere fiction, this maxim is not superior in authority to the others which it is brought in to supersede" (Mill 1957/1861: 69; emphasis added).
Important Concepts
The Theory
The Formula
The theory itself is simple. The Principle of Utility is derived by combining hedonism and consequentialism. It is as follows: An act is morally right if, and only if, it maximizes happiness/pleasure and/or minimizes pain for all persons involved.
Who counts as a person?
Here's another point of contention with Kantianism. Whereas Kant believed that personhood, i.e., moral rights, is assigned to anyone who is a Rational Being, i.e., anyone able to live according to principles, Mill believed that all sentient creatures deserve rights. Sentience is the capacity to feel pleasure and pain.3
It should be added that setting the boundary for the moral community at sentience, the view endorsed by utilitarians, is not without controversy. Much like Kant's views are critiqued for not including non-human animals, the utilitarian perspective is also seen as too narrow by some. For example, some environmental ethicists believe we should extend the boundaries of the moral community to include bodies of water, the air, and perhaps even the planet as a whole (see Shrader-Frechette 2010/2003: 194-198).
For his part, Singer (2011: 123-124) believes the boundary of sentience is the most rational boundary. This is because our moral reasoning cannot be impartial about non-sentient things like trees, rivers, and rocks. If we were to try to imagine what it would be like from the perspective of the tree, there would just be a blank. And so, practically speaking, there is no difference in our moral reasoning whether we take the perspective of the tree or not. This is not to say that ecological considerations shouldn't be incorporated into our reasoning. In fact, Singer is a staunch advocate of environmentalism. However, Singer argues that it is most practical for us to allow our intuitions about environmental ethics to be guided by inhabiting the perspective of all sentient beings. By doing so and acting for the benefit of all sentient creatures, we would indirectly take care of the environment.
Subordinate Rules
Mill also endorses subordinate rules, or what we might call “common sense morality.” This is because it is unfeasible to always perform a Utilitarian calculus when making a moral decision in your day to day life. In any case, according to Mill, these are rules that tend to promote happiness. They’ve been learned through the experience of many generations, and so we should internalize them as good rules to follow. These rules include: "Keep your promises", "Don’t cheat", "Don’t steal", "Obey the Law", "Don’t kill innocents", etc. However, note that if it is clear that breaking a subordinate rule would yield more happiness than keeping it, you should break said subordinate rule.
“Some maintain that no law, however bad, ought to be disobeyed by an individual citizen; that his opposition to it, if shown at all, should only be shown in endeavouring to get it altered by competent authority. This opinion… is defended, by those who hold it, on grounds of expediency; principally on that of the importance, to the common interest of mankind, of maintaining inviolate the sentiment of submission to law” (Mill 1957/1861: 54).
Support

John Stuart Mill and Harriet Taylor Mill, co-authors of The Subjection of Women.
One of the reasons why Mill was dissatisfied with Hobbes' SCT is the reliance on psychological egoism, a view even earlier utilitarians had held. Mill, for his part, thought he could explain more with his view, namely the entire history of social improvement (see Mill 1957/1861: 78). In fact, you can devise a utilitarian justification for many of the great moral leaps forward in our history. For example, take the abolition of slavery. Sure, in the United States slaveholding states had to be defeated in a brutal civil war before American slavery was abolished. But in other countries, slavery was abolished in much less violent ways. An argument might be made for why slavery is wrong. It is wrong for one person (the master) to enjoy profit at the expense of the misery of several others (the slaves). The misery of the latter (the slaves) outweighs the utility enjoyed by the former (the master), so slavery is wrong. Similar arguments could be made for women's suffrage, the abolition of child labor, and the adoption of a vegetarian or vegan lifestyle (see Singer 1995/1975 for an argument for animal liberation).
You can even make utilitarian arguments against practices such as environmental contamination. Say that there is a paper mill which dumps its toxic waste into a nearby river. The owner could dispose of the waste more responsibly, but it would cost him a share of his profits. However, the river is the water source for a nearby town, the members of which are becoming sick due to the contaminated water. The argument here would be that the utility enjoyed by the owner is outweighed by the pain felt by the sick and dying townspeople. This means that the practice of dumping toxic waste into the river is wrong.
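To see the structure of this calculus more concretely, here is a minimal sketch in Python that formalizes the paper mill example. The utility figures and the party labels are invented purely for illustration; nothing here comes from Mill, who never suggested that wellbeing can literally be reduced to a handful of numbers.

```python
# A minimal sketch of a utilitarian calculus for the paper mill example.
# The utility numbers below are made-up assumptions for illustration only.

def net_utility(effects):
    """Sum the (positive or negative) utility received by each affected party."""
    return sum(effects.values())

# Option A: dump the waste in the river (owner profits, townspeople get sick).
dump_in_river = {
    "mill owner": +50,                # extra profit retained
    "townspeople (aggregate)": -500,  # illness from contaminated water
}

# Option B: dispose of the waste responsibly (owner loses some profit).
dispose_responsibly = {
    "mill owner": -10,                # cost of proper disposal
    "townspeople (aggregate)": 0,     # no harm done
}

options = {
    "dump in river": dump_in_river,
    "dispose responsibly": dispose_responsibly,
}

# The Principle of Utility picks whichever act maximizes net utility.
best = max(options, key=lambda name: net_utility(options[name]))
for name, effects in options.items():
    print(f"{name}: net utility = {net_utility(effects)}")
print(f"Morally required (on this toy calculus): {best}")
```

The point of the sketch is only the shape of the reasoning: tally the effects of each available act on everyone affected, then pick the act with the highest net utility.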
An interesting use of utilitarian moral reasoning is featured in Mill's feminism—a topic that will arise again in Unit II:
"Like his eighteenth-century predecessor [Mary Wollstonecraft, (see footnote 4)], nineteenth-century philosopher John Stuart Mill argued that women need to become like men in order to become fully human persons. Mill noted that society's high praise of women's virtue does not necessarily serve women's best interests. To praise women on account of their gentleness, compassion, humility, unselfishness, and kindness is, he said, merely to compliment patriarchal society for convincing women 'that it is their nature to live for others', but particularly for men (Mill & Mill 1869/1911: 168). Thus, Mill urged women to become autonomous, self-directing persons with lives of their own; and he demanded that society give women all the rights and privileges it gives men" (Tong 2010/2003: 221).
Utilitarianism
Thought-experiments4
Challenging moral naturalism
Some thinkers have challenged the utilitarian naturalist assumption that moral goodness can be equated with a natural property, namely positive mental states. In Principia Ethica, G.E. Moore argued for moral non-naturalism, the view that moral properties cannot be studied with the natural sciences. He used various arguments (such as the naturalistic fallacy argument, which many think was insufficient), but the open question argument is the most often referenced. The argument goes something like this. If "good" just means "pleasure", then we can express it like an identity claim:
E.g.,
BACHELOR = UNMARRIED MALE
GOOD = PLEASURE
But asking "Is a bachelor an unmarried male?" doesn't seem like asking "Is good the same as pleasure?" The first is a silly question. But the second demands an argument in response. In essence, although some people consider moral naturalism to be intuitive, it still requires an argument in order to be truly established. This is a basic criterion that we have for all theories, whether they be scientific or philosophical.
Even moral skeptics, individuals who question whether there are any objective moral values (like someone who endorses the atheist version of DCT), are unimpressed by moral naturalism. Richard Joyce, a prominent moral skeptic, doesn't see the appeal in equating moral goodness with mental states. Clearly, this is an issue we'll have to revisit.
“When faced with a moral naturalist who proposes to identify moral properties with some kind of innocuous naturalistic property—the maximization of happiness, say—the error theorist [i.e., the moral skeptic] will likely object that this property lacks the ‘normative oomph’ that permeates our moral discourse. Why, it might be asked, should we care about the maximization of happiness anymore than the maximization of some other mental state, such as surprise?” (Joyce 2016: 6-7).
The theory is too demanding...
Lastly, just like some object that Kantianism is too strict, some object that Utilitarianism is far too demanding. For example, one might have to accept extremely unsavory social orders. Just like we saw Hobbes endorsing a form of enlightened despotism, where the ruler governs for the wellbeing of the subjects, Mill similarly endorses despotism as a legitimate mode of government in dealing with "barbarians", as long as it ultimately leads to their improvement (see Mill 1989: 13). And so, Mill justified the British occupation of India, although he objected to imperial misgovernment. In order to be justified, colonialism had to actually benefit the colonized peoples.
Enlightened despotism aside, consider now the most famous case of utilitarian dilemmas...
Two approaches to moral reasoning are naturally antagonistic to each other: deontology and consequentialism. Whereas Kant is an excellent representative of deontology, John Stuart Mill and his utilitarianism are a prime example of consequentialist thinking.
For Mill, what was morally required was what maximized happiness for all sentient beings involved, including non-human animals.
Mill, a member of Parliament, worked tirelessly in advocacy for women's rights, for the abolition of slavery, and for securing freedom of speech.
Mill's utilitarianism is not without controversy, however. If there are ultimately more positive consequences than negative, some unsavory actions might be justified, such as colonialism.
FYI
Suggested Reading: John Stuart Mill, Utilitarianism
(Note: Read chapters I & II.)
TL;DR: Crash Course, Utilitarianism
Supplementary Material—
- Video: Julia Markovits, Ethics: Utilitarianism, Part 2
Advanced Material—
- Reading: Julia Driver, Stanford Encyclopedia of Philosophy Entry on The History of Utilitarianism
- Reading: Margaret Kohn and Kavita Reddy, Stanford Encyclopedia of Philosophy Entry on Colonialism
- Reading: Internet Encyclopedia of Philosophy, Entry on Utilitarianism, Section 3
Footnotes
1. I might add that the English word pleasure does not convey the complexity of positive emotions that J.S. Mill and other utilitarians mean by it. There is, Mill argues, a hierarchy of positive mental states. On the low end you might find the pleasure of sex, the satisfaction of a good meal, or the clarity of mind you have when you are well-rested. On the higher end you might find the fulfillment of a life well-lived or the equanimity of coming to terms with your own mortality. Mill famously put it this way in chapter 2 of his Utilitarianism: "It is better to be a human being dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied."
2. This conflict between hedonism and other moral systems has been obvious since ancient times. In The Theory of Moral Sentiments (part 7, section 2, chapter 2), Smith closes the chapter by contrasting the systems of Plato, Aristotle and Zeno with that of Epicurus. All agree, Smith shows, that virtue is the most suitable manner of attaining the objects of natural desire. In other words, they all have the same moral discourse and moral logic of the ancient world where being good is for the purpose of living well. For Epicurus, however, the only objects of natural desire were pleasure and the avoidance of pain; the others, by contrast, found that knowledge, friends, country, etc., were valuable for their own sakes as well. Lastly, according to Smith, Epicurus did not make the case that virtue was an intrinsic good, i.e., good for its own sake, but the rest did.
3. I should add here that the meaning of sentience being used is one of several senses of the word. The word sentience is also used, for example, as a synonym for consciousness, more broadly speaking. We are not using the term in this way. When I use the word sentience it will exclusively refer to the capacity to feel pleasure and pain.
4. The thought-experiment that I am referring to as the Near and Dear Argument is just a stylized version of a thought-experiment made by the 18th century anarchist philosopher William Godwin in his Enquiry Concerning Political Justice. In this work, Godwin proposes that if you could only save one person, either a famous author or your father, that you should choose the famous author, since that person would make more people happy over the long run. On a historical note, Godwin was married to the feminist philosopher Mary Wollstonecraft and was the father of Mary Shelley, author of Frankenstein. Wollstonecraft's brand of feminism will be covered in Unit II.
Endless Night (Pt. II)
The time has come for ethics to be removed temporarily from the hands of the philosophers and biologicized.
~E. O. Wilson
After Darwin...

About a decade after the publication of John Stuart Mill's Utilitarianism, Charles Robert Darwin published two books which sent shock waves across popular and intellectual circles. Darwin had already revolutionized science with his publication of On the Origin of Species in 1859, the founding document of evolutionary biology. But now he was taking things a step further, a very uncomfortable step for many of his readers. As Phillip Sloan reports in his Stanford Encyclopedia of Philosophy entry on Darwin, Darwin's own position on humans had remained unclear. But in the late 1860s, Darwin moved to explain human evolution in terms of his theory of natural selection. The result was the publication of The Descent of Man in 1871.
First, let's briefly summarize Darwin's theory of natural selection. The most—dare I say—gorgeous summary of the theory of natural selection that I've ever read comes from evolutionary biologist David Sloan Wilson:
“Darwin provided the first successful scientific theory of adaptations. Evolution explains adaptive design on the basis of three principles: phenotypic variation, heritability, and fitness consequences. A phenotypic trait is anything that can be observed or measured. Individuals in a population are seldom identical and usually vary in their phenotypic traits. Furthermore, offspring frequently resemble their parents, sometimes because of shared genes but also because of other factors such as cultural transmission. It is important to think of heritability as a correlation between parents and offspring, caused by a mechanism. This definition will enable us to go beyond genes in our analysis of human evolution. Finally, the fitness of individuals—their propensity to survive and reproduce in their environment—often depends on their phenotypic traits. Taken together, the three principles lead to a seemingly inevitable outcome—a tendency for fitness-enhancing phenotypic traits to increase in frequency over multiple generations” (Wilson 2003: 7).
The radical position that Darwin was advancing in The Descent of Man was that human traits are all explicable through evolution. This, I might add, is still (unfortunately) a controversial position today among certain groups of people—despite the shocking amount of confirmatory evidence (Lents 2018). I won't attempt to persuade you of the truth of Darwinism here (although it's been said of me that accepting the truth of evolution is a prerequisite to even have a conversation with me). Rather, I'll connect Darwinism to the puzzles we've been looking at in this course.

Recall what we've been calling the puzzle of human collective action: that despite our selfish tendencies, humans also have the capacity to cooperate on a massive scale with other humans who are non-kin and are from different ethnic and racial groups. This seems completely at odds with the popular (but erroneous) conception of evolution as "survival of the fittest." However, as Wilson notes in his Darwin's Cathedral, this puzzle only becomes more perplexing when one first learns evolutionary theory. Here is what he calls the fundamental problem of social life: individuals who display prosocial behavior do not necessarily survive and reproduce better than those who enjoy the benefits without sharing the costs. Put another way, “[g]roups function best when their members provide benefits for each other, but it is difficult to convert this kind of social organization into the currency of biological fitness” (Wilson 2003: 8). In short, at face value, cooperation doesn't seem to translate well to passing on one's genes to the next generation; so, it seems like an evolutionary dead end.
Befitting his fame, Darwin was the first to propose a solution to the fundamental problem of social life. Darwin argued that even if a prosocial individual does not have a fitness advantage within his/her own group, groups of prosocial individuals will be more successful than groups whose members lack prosociality. In other words, perhaps it's the case that, within a group, individual cooperators die without passing on their genes. But(!) groups of cooperators beat groups of non-cooperators. Thus, cooperation spreads.
“It must not be forgotten that although a high standard of morality gives but a slight or no advantage to each individual man and his children over the other men of the same tribe, yet that an increase in the number of well-endowed men and an advancement in the standard of morality will certainly give an immense advantage to one tribe over another. A tribe including many members who, from possessing in a high degree the spirit of patriotism, fidelity, obedience, courage, and sympathy, were always ready to aid one another, and to sacrifice themselves for the common good, would be victorious over most other tribes; and this would be natural selection. At all times throughout the world tribes have supplanted other tribes; and as morality is one important element in their success, the standard of morality and the number of well-endowed men will thus everywhere tend to rise and increase” (Darwin as quoted in Wilson 2003: 9).
In a nutshell, Darwin was making the case that phenotypic variation, heritability, and fitness consequences applied to groups just as much as they applied to individuals. Moreover, this could explain human cooperation, including what we call morality.
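To get a feel for the arithmetic behind this argument, here is a toy simulation in Python. All of the parameters are invented for illustration; this is a sketch of the general multilevel-selection logic that Wilson describes, not a model taken from Darwin or Wilson themselves.

```python
# A toy sketch of the group-selection logic described above.
# All parameters are made up purely for illustration.

BENEFIT = 5.0    # public benefit generated per cooperator, shared by the group
COST = 1.0       # cost a cooperator pays for helping
BASELINE = 10.0  # baseline number of offspring per individual

def next_generation(cooperators, defectors):
    """Reproduce one group. Everyone benefits from the group's cooperators,
    but only cooperators pay the cost, so defectors do better *within* the group."""
    size = cooperators + defectors
    share = BENEFIT * cooperators / size   # public good received per member
    coop_fitness = BASELINE + share - COST
    defect_fitness = BASELINE + share
    return cooperators * coop_fitness, defectors * defect_fitness

# Two groups: one mostly cooperators, one mostly defectors.
groups = [(90.0, 10.0), (10.0, 90.0)]

for gen in range(5):
    total_coop = sum(c for c, d in groups)
    total = sum(c + d for c, d in groups)
    print(f"generation {gen}: global cooperator frequency = {total_coop / total:.3f}")
    groups = [next_generation(c, d) for c, d in groups]
```

Running the sketch shows the global frequency of cooperators rising over the first several generations, even though defectors out-reproduce cooperators inside every single group, because the cooperator-heavy group out-produces the defector-heavy group as a whole. (In a model this simple, defection eventually erodes cooperation unless groups periodically disperse and re-form, which is one reason the details of group structure matter so much in this literature.)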
Important Concepts
Food for thought...
Legacy
Recall that in Endless Night (Pt. I), we saw the theory of moral sentimentalism put forward by David Hume and Adam Smith. However, this theory was hamstrung, even if the two thinkers who conjured it up did not realize it. What Hume and Smith lacked was a rationale for why and how nature would instill in us a tendency to prefer prosocial cooperative behavior, as well as the tendency to dislike antisocial behavior. Of course, it took another towering genius (Darwin) to provide a theory about the mechanisms by which a preference towards prosocial behavior (at least towards one's ingroup) could be developed and perpetuated: natural selection and group selection. It was the confluence of these two strains of thought, moral sentimentalism and evolutionary theory, that gave rise to the view known as moral nativism: the view that evolutionary processes programmed into us certain cognitive capacities that allow for moral thinking and behavior.
Cognitive scientists, who work in an interdisciplinary field that traces its origins back to the work of David Hume, refer to our evolved cognitive functions as modules, "programs" that perform some specific cognitive function, like language acquisition, our number sense, and our intuitive physics (see Food for thought). Moral nativists, then, can be understood as hypothesizing the existence of an innate morality module, and there are many views on just what this morality module entails. Some think that we come pre-loaded with complete moral judgments, like "Don't kill innocents!", while others think that we only come with a general tendency to learn from our environment what is accepted and what is not, what is "right" and what is "wrong" (see Joyce 2007). According to this second theory, much like the universal grammar that allows us to learn languages effortlessly when we are young, the morality module lets us learn moralities in the same way. We "grow" a morality by internalizing and synthesizing the collective views of those we interact with.
So far this sounds much like cultural relativism, and there are thinkers who use evolutionary psychology to defend relativism (for example, Jesse Prinz). Moral nativism, however, is most definitely not a kind of relativism, a distinction we will turn to in the next section. It might be noted here that very few thinkers apply any kind of relativistic theory when defending their own moral convictions; in fact, they typically seem to prefer utilitarianism or Kantianism (see Unit II). Instead, what we will discuss here is the following: if there is a morality module, it is very strange indeed. This is something neither Smith nor Hume nor Darwin could've foreseen—just how buggy our moral cognition is.
Idiosyncrasies

Assuming we do have an innate morality module, our innate moral tendencies do appear to be strange and even inconsistent. For example, Tomasello (2016: 71) hypothesizes that we have an intuitive sense of just rewards but(!) it only kicks in after collaborative activity. In a series of experiments, he tested for a sense of justice in children. In one experimental setup, two children would each get a treat (e.g., candy), but one child would get more than the other. The children in this setup tended to not share. In a second experimental setup, two children would each do some task (i.e., work) and then be rewarded with a treat. Again, one child would get more than the other. Again, children would mostly not share. But in a third experimental setup, Tomasello was able to get the children to share. What was the difference? In this setup, the two children had to work together to achieve the task. Only then would the children share when one child was given more candy than the other. There's something about working together that gives rise to our moral feelings about justice and fairness; absent this cooperation, we seem fine with unfairness.

Our capacity for violence also depends on our social environment. Blair (2001) suggests that we have a violence inhibition mechanism that suppresses aggressive behavior when distress cues (e.g., a submission pose) are exhibited. This is why armies do all they can to train soldiers to override their innate dispositions against violence. This is where sociological factors come in. Various sociological factors can facilitate violent behavior, such as creating a group mentality and creating distance (both physical and emotional) between the enemy and your group (Grossman 2009). So, it might be innately difficult to harm someone right in front of you, but if you have a group egging you on or if you are looking at "the enemy" through a rifle scope or on a computer screen (as in drone attacks), then it becomes a lot easier. Moral inhibitions, it seems, can be weakened by salient sociological features of the environment.
It's even the case that the types of policies we favor might be affected by our intuitions about what humans can reasonably be capable of—an example of the tail wagging the dog. For example, Sowell (1987) hypothesizes that our intuitions about human nature and about our capacity to predict complex human interactions give rise to our different attitudes towards politics and society. If you take the pessimistic side, you might believe that the complexities that arise from, say, raising the minimum wage are too difficult to predict, and so the best policy is to not attempt to regulate the economy with such a heavy hand. A good exemplar of this view is F. A. Hayek. On the optimistic side, you might believe that humans can positively influence complex systems like the economy and raise the standard of living for all. Perhaps Marx and Engels are good examples of this optimism; Keynes might fit the bill as well. The important thing to note here is that it is possible that these thinkers' innate tendencies towards either pessimism or optimism are what led to their particular political dispositions (see also Pinker 2013, chapter 16). This is once again the power of our innate dispositions rearing their heads in the moral and political realm. I might add that these innate dispositions are often accompanied by positive and negative feelings. In fact, it is possible that the feelings drive the moral conclusion and our reasoning capacities only invent a rationale for them afterwards (see Kahneman 2011). So, even when you think you have good reasons for your moral conclusions (economic models included!), that very decision to vote for or against the government helping the least well off might be a product of non-conscious intuitions.

Our method of evaluating whether things are good or bad, a mental faculty that is surely relevant to moral evaluation, appears to be non-rational: it doesn't follow traditional linear reasoning. Take the following as an example. The halo effect, first posited by Thorndike (1920), is our tendency, once we’ve positively assessed one aspect of a person, brand, company, or product, to positively assess other unrelated aspects of that same entity (see also Nisbett and Wilson 1977 and Rosenzweig 2014). This means that once you know one positive thing about a company, say that its stock is up on the New York Stock Exchange, you are more likely to believe that the company has other positive traits, like good managers and a positive workplace culture, even though you don't have any information on these other traits(!). It just so happens that once we believe in one positive trait, our minds tend to "smear" the positivity onto other aspects of that company. This happens with products, people, and ideas too, FYI (see Rosenzweig 2014).
Philosopher Peter Singer comments on how our moral intuitions are only tuned to those scenarios for which they evolved and can't seem to be readily activated in more modern social contexts:
"Our feelings of benevolence and sympathy are more easily aroused by specific human beings than by a large group in which no individuals stand out. People who would be horrified by the idea of stealing an elderly neighbor's welfare check have no qualms about cheating on their income tax; men who would never punch a child in the face can drop bombs on hundreds of children; our government—with our support—is more likely to spend millions of dollars attempting to rescue a trapped miner than it is to use the same amount to install traffic signals which could, over the years, save many more lives" (Singer 2011: 157).
My two favorite examples of the strangeness of our innate morality module (if it exists) are the following. First, Merritt et al. (2010) argue that we are prone to moral licensing. In other words, once we’ve done one good deed, we feel entitled to do a bad one (click here for more info). Second, several studies (e.g., Grammer and Thornhill 1994) show that humans have an innate preference for symmetrical faces, judging these to be more beautiful. This might explain why attractive defendants on trial are acquitted more often and get lighter sentences (see Mazzella and Feingold 1994; click here for a real-world example). The morality module, if it exists, is a fickle faculty indeed.
More(!) Important Concepts
The offspring
Non-cognitivism

A child being spanked.
In addition to moral nativism, the confluence of moral sentimentalism and evolutionary theory has inspired some meta-ethical positions. This is to say that, while one can accept some version of moral sentimentalism (plus an account of a morality module) as one's ethical theory, one might also hold several meta-ethical positions inspired by that ethical theory of choice. One such view is non-cognitivism, the view that sentences containing moral judgments do not have truth-functionality (i.e., are not propositions) but instead express emotions or attitudes rather than beliefs. In truth, there are various "flavors" of non-cognitivism, but let me get at what they have in common. A sentence like "Spanking your children is wrong" sounds suspiciously like a belief—namely, the person uttering that sentence seems to be saying that they believe the act of spanking children features the property of moral wrongness, just like LeBron James has the property of being 6'9". But, the non-cognitivist argues, it's actually not a belief. It's just an expression of emotion or perhaps a command. In other words, when you say "Spanking your children is wrong", the real linguistic function is something like "BOO SPANKING CHILDREN!" or "NO, DON'T SPANK YOUR CHILDREN", either an emotive expression or a command, respectively. The main point of non-cognitivism is this: moral judgments aren't true or false. They're not the kind of thing that can be true or false. So, if you've been thinking this whole course that, say, the sentence "Capital punishment is morally permissible" is false, you're wrong—that sentence isn't the kind of thing that can be true or false. You're just saying "BOO CAPITAL PUNISHMENT!"

Emma Darwin (1808-1896).
Sometimes it's easier to understand non-cognitivism when juxtaposed with moral relativism. The first thing to point out, if we want to understand non-cognitivism, is how strange the notion of relative truth is. Relativism is the view that some things have some property in some contexts but not in others. Let's take an example to make this clearer. If we are relativists about beauty, then we believe that Emma is beautiful to Charles but not beautiful to Alfred. Think about how strange that is: Emma both has the property of being beautiful and doesn't have it, depending on who's looking at her. In any other context, we would rightly say that is ludicrous. I, for example, am not both under and over six feet in height. I can be either one or the other, and that's just it.
So perhaps a better way of understanding beauty is to think of it more as an expression of one's feelings. In other words, using the jargon learned in the Important Concepts above, judgments regarding beauty are not propositions but instead something more like exclamations. When someone says "Emma is beautiful", it looks like they are giving a description of Emma (one that can be either true or false). But maybe what they're really doing is saying something like "EMMA! WOW!", in a grammatically misleading way. It looks like they're giving a truth-functional description, but all they're saying is "YAY EMMA!" (which is not truth-functional).
So, the non-cognitivist argues, the notion of relative truth is out, and cultural relativism goes out with it. It's absolutely ludicrous, they say, to think that some action (for example, arranged marriage) is perfectly morally permissible in one case, but is absolutely morally abhorrent in another. The notion of relative truth is just too strange. A better position, the non-cognitivist argues, is to say, "This culture says, 'HURRAY ARRANGED MARRIAGE!' and some other cultures say 'BOO ARRANGED MARRIAGE!'" This gets the same sentiment across without meddling with strange theories of truth. Moral judgments, then, are simply expressions of one's feelings.
Moral error theory
While non-cognitivism is a theory about the linguistic function of uttering sentences containing moral judgments, moral error theory is a position about what moral properties are: moral properties, if they really are the way that Kant and others say they are, are non-physical, non-natural, abstract objects. In other words, moral properties (according to moral objectivists) are mind-independent; they've existed independent of humans for all eternity. To this the moral error theorist says "Baloney!" In other words, the moral error theorist focuses on the metaphysical claim that moral objectivists are making, and they say there's just no way those things actually exist. So, it's not the notion of relative truth that's the deal-breaker; it's the weirdness of moral properties (see Mackie 1990).

Let's take an example to make this clearer. Think about the sentence "The number of cups in the cupboard is 5." The truthmaker for this sentence is pretty easy to conceptualize. In fact, you can even visualize it! The truthmaker is: five cups in a cupboard. Any more or any less would make the statement false. This is an easy case because all the elements of the sentence (cups, cupboards, the quantity of five) are perfectly intelligible. Now think of the sentence "Stealing is morally wrong." What is the truthmaker for that? Can you picture it? All I can picture is stealing (sorta). I picture someone running away from a bank with a bag with a dollar sign on it. Where is the wrongness in there? It's not in the bag, right? Money is (to me) morally neutral. It's not in the whole action, since that could easily be a scene on a movie set (with a fake bank, fake bills, etc.), and pretending you're stealing doesn't seem morally wrong. What thinkers like Mackie (1990) claim is that the reason you can't picture moral wrongness is that it isn't physical. And it isn't just an idea either. Whatever it is, Mackie says, it is really weird. Again, just try to imagine the concept of moral wrongness. Whatever it is, it somehow has the property of not-to-be-done-ness built into it. It sounds strange. It sounds, in other words, completely made up—or so says the moral error theorist. Every moral judgment you've ever made is false because the thing that would make it true (i.e., some moral property) doesn't exist. This is systematic moral error.
“Plato’s Forms give a dramatic picture of what objective values would have to be. The Form of the Good is such that knowledge of it provides the knower with both a direction and an overriding motive; something’s being good both tells the person who knows this to pursue it and makes him pursue it. An objective good would be sought by anyone who was acquainted with it, ...because the end has to-be-pursuedness somehow built into it... How much simpler and more comprehensible the situation would be if we could replace the [non-natural] moral quality with some sort of subjective response which could be causally related to the detection of the natural features on which the supposed quality is said to be consequential” (Mackie 1990: 28-29; interpolation is mine).1
Justification skepticism
A third meta-ethical position inspired by moral sentimentalism and evolutionary theory is justification skepticism, the view that moral objectivism simply cannot be satisfactorily defended. The justification skeptic's argument is simple. Here's what you have to ask yourself. Is it possible that evolution somehow predisposed you to make moral judgments and to feel that they are objectively true (even though they're not)? Here's another way to put it. What's more likely: that morality is real and you can use reason (like Kant says) or divine revelation (like some divine command theorists say) to know what's right or wrong? Or that your concepts of right and wrong actually have perfectly natural evolutionary origins and have been expanded upon by cultural evolution? Moreover, even if one accepts that evolution imparted in us a morality module and that moral properties exist, the justification skeptic argues that there's no guarantee our morality module actually targeted and acquired the "right" set of moral values. Sharon Street puts it this way:
“The (moral) realist must hold that an astonishing coincidence took place—claiming that as a matter of sheer luck, evolutionary pressures affected our evaluative attitudes in such a way that they just happened to land on or near the true normative view among all the conceptually possible ones” (Street 2008: 208-9).
Moral skepticism
Hume and Smith developed an ethical theory that gave the dominant role in moral judgment to moral sentiments. This view is called moral sentimentalism.
After Darwin, evolutionary theory began to be applied to human affairs, such as the human activity of moralizing.
Hume and Smith's moral sentimentalism has several descendants that arose in the 20th century, including non-cognitivism, moral error theory, and justification skepticism.
Non-cognitivism, moral error theory, and justification skepticism are all forms of moral skepticism, the view that denies that moral knowledge is possible. These are often paired with moral nativism, which posits the existence of an innate morality module that was imparted on us by evolution. Combining all of these views renders a radical moral skepticism that seeks to explain morality in a purely naturalistic, deflationary, anti-realist way.
FYI
Suggested Viewing: John Vervaeke, Cognitive Science Rescues the Deconstructed Mind
Supplemental Material—
Video: Closer to Truth, Donald Hoffman on Computational Theory of Mind
Video: Dan Ariely, Our Buggy Moral Code
Reading: Kevin DeLapp, Internet Encyclopedia of Philosophy Entry on Metaethics, Sections 1 and 2
TL;DR: Crash Course, Metaethics
Related Material—
Audio: Freakonomics, Does Doing Good Give You License to Be Bad?
Link: Jonathan Haidt, The Moral Roots of Liberals and Conservatives
Video: TEDTalks, Yuval Noah Harari, What explains the rise of humans?
Advanced Material—
Reading: Paul Thagard, Stanford Encyclopedia of Philosophy Entry on Cognitive Science
Reading: John Mackie, The Subjectivity of Values
Podcast: Science Salon, Michael Shermer with Dr. Michael Tomasello
Footnotes
1. Mackie’s moral skepticism has been defended and further developed by Richard Joyce in various works, including The Myth of Morality (2001), The Evolution of Morality (2006), and Essays in Moral Skepticism (2016). Throughout, Joyce follows a very similar theme: moral objectivism is just too strange to actually be true; there must be a natural way of explaining it.
What Could've Been
My first act of free will shall be to believe in free will.
~William James
Free Will and Morality1
- There is an interesting question regarding whether or not human free will is required in order for the notion of moral responsibility to be intelligible.
- Recently, some neuroscientists, including Robert Sapolsky and David Eagleman, have questioned whether the notion of moral blameworthiness really makes sense. These thinkers advocate for criminal justice reform.
Footnote
1. Any student interested in the problem of free will can refer to the following lessons from my PHIL 101 course:
- Lesson 2.1—Laplace's Demon,
- Lesson 2.2—The Union Betwixt, and
- Lesson 2.3—One Possibility Remains.
Playing God
Life can only be understood backwards; but it must be lived forwards.
~Søren Kierkegaard
Playing God
FYI
Suggested Listening: Radiolab, Playing God
- Note: Follow the link for a transcript of the audio from this podcast.
Material on Mosquito Annihilation—
- Audio: RadioLab, Kill ‘Em All
- Reading: Megan Molteni, Here's the Plan to End Malaria With Crispr-Edited Mosquitoes
- Reading: Hope Reese, Mosquitoes might be humanity’s greatest foe. Should we get rid of them?
Towards Kallipolis
Educate the children and it won't be necessary to punish the men.
~Pythagoras
Where to begin?
If one is discussing normative questions relating to children and childhood, it's difficult to not begin with the topic of abortion. This is because if children really do have full-fledged moral rights, a view that we will explore in the next section, we must begin the discussion by isolating the origin point of those moral rights. In other words, we have to figure out when moral rights start, the point at which one becomes a member of the moral community.

Before beginning, perhaps a few things should be said about the concept of moral personhood itself. Recall that the two most explicit accounts of moral personhood we've explored in this course are those that attribute personhood to individuals with certain intellectual capacities, as Kant does, and those that award moral rights to those with the capacity for certain feelings, such as pleasure and pain on the utilitarian account. Although the former account, the one having to do with intellectual capacities, is most closely associated in our course with Immanuel Kant, Kant is not the only advocate of this approach. In his Essay Concerning Human Understanding, the philosopher John Locke gave his account of what a 'person' is. He wrote that a person is a "thinking intelligent being, that has reason and reflection, and can consider itself the same thinking thing, in different times and places" (Locke as quoted in Harris and Holm 2010: 115). As you can see, Locke clearly emphasizes our capacity to recognize ourselves as the same person throughout time and in different contexts if we are to be deserving of moral rights. It is interesting to note that, on this account, perhaps even machines can in principle be awarded moral rights, if they have the right capacities—a topic near and dear to my heart. Also interesting is how this view of personhood explains the mechanics behind moral wrongs: the wrong done to an individual when their existence is ended prematurely is the wrong of depriving that individual of something that they value (ibid., 116).
With regards to abortion, an advocate of a view somewhat like this is Mary Anne Warren. Warren's views seem to have changed over time, and she will be mentioned again in the lesson titled The Jungle. However, in her 1973 article, where she argues against the opponents of legalizing abortion, she advocated a view somewhat similar to that of Locke.1 Her argument is that opponents of abortion commit one of two errors in reasoning. Either they presume that a fetus has rights without proving it (a fallacy called begging the question), or they argue for fetal rights through equivocation, which is when you use a word with one meaning in one premise and then use the same word with a different meaning in another premise. Here is the standard pro-life argument, according to Warren:
- It is wrong to kill innocent human beings.
- Fetuses are innocent human beings.
- Therefore, it is wrong to kill fetuses.
Warren argues that the phrase ‘human beings’ in premise 1 is intended to mean ‘moral persons’, while in premise 2 ‘human beings’ refers to ‘genetically human entities.’ If you were to replace the phrase 'human beings' in each premise with the intended meaning, the argument would no longer "flow"; it would be clearly invalid. (Try it!) Warren then makes the point that pro-lifers need to give an account of what gives humans moral personhood that includes fetuses. If they cannot, their argument fails. She concludes by giving her own criteria for moral personhood, which are as follows:
- Sentience (defined in the utilitarian sense)
- Emotionality
- Reason
- The capacity to communicate
- Self-awareness
- Moral agency, i.e., the capacity to make moral decisions
Warren stresses that fetuses at early stages of gestation have none of the relevant criteria; hence they are not moral persons; hence they have no rights. In later stages of pregnancy, fetuses perhaps meet one of the criteria (sentience), but it would be unreasonable to insist, she argues, that this gives the fetus the same moral status as a full-fledged human. And thus, through an account of moral personhood that stresses the greater weight of cognitive capacities, she defends the pro-choice position.
There are several problems with this line of thinking about moral personhood. The first is that this account of personhood seems to allow for infanticide, the practice in some societies of killing unwanted children soon after birth. This is because, by Warren's own criteria, there are not many morally relevant differences between a late-term fetus and a newborn infant; newborns still seem to have, at most, only one or two of Warren's criteria for personhood. Moreover, consider humans who are in a permanent vegetative state or have severe cognitive disabilities. These, on Warren's account, would also (it seems) not have moral rights (Harris and Holm 2010: 117).

Chinese anti-infanticide tract (circa 1800).
There is, of course, the account of personhood advocated by the utilitarians—one which utilizes the boundary of sentience. In other words, you are a part of the moral community just in case you have the capacity to feel pleasure and pain. Although there is disagreement about when the capacity for fetal pain begins (Harris and Holm 2010: 126), with some (Rokyta 2008) theorizing that the 26th gestational week is the best candidate for the origin of fetal sentience, it is agreed that fetuses do acquire sentience at some point during pregnancy. As such, on the simplest conceivable utilitarian account, fetuses become a part of the moral community when they acquire sentience, and hence their moral rights must be weighed when considering abortion. Generally speaking, then, abortion would be impermissible once the fetus acquires sentience. However, as was mentioned in The Trolley, some utilitarians (e.g., Singer 1993) argue that abortion and even non-voluntary euthanasia (after birth) are permissible for incurably ill or severely disabled individuals whose life experiences would net more suffering than pleasure. As is always the case with consequentialist moral reasoning, things get complicated since it all depends on context.2
There are other accounts of moral personhood, although none seem to be very persuasive—at least to me. There is the religiously-motivated view that we are imparted with a divinely-sent immortal soul. This is a metaphysically dense assumption that has little to no proof and is hardly convincing for anyone who is not already a believer. Moreover, one would have to still make the case for when the soul enters the fetus (see Death in the Clouds (Pt. II)). There's also the notion that all humans, by virtue of being humans, have moral rights. But, as Harris and Holm (2010: 118-19) point out, the assumption that 'our kind' has moral priority is morally problematic, as it has been throughout history. There's also the potentiality argument: that a fetus has moral rights because it (generally) has the potential to become a complex, intelligent, self-conscious human. The problem with this view, though, is that there are countless "potential humans." For example, if you are a woman of child-bearing age and have a suitable sexual partner nearby, there is a "potential human" right there—if you catch my drift. But do you really feel morally obligated to bring about this potential human?
Lafollette (1980)
Who has rights over and duties towards children?

At the end of his essay, Lafollette (1980: 196) suggests that the aversion towards parenting licenses comes from a deeply ingrained belief that parents either own or have an absolute sovereignty over their children. This brings us to the debate over who has rights over children. There are, it should be said, some advocates of what seems to many an extremely radical position: that children are being wronged by being maintained in an artificial state of dependence upon adults. Let's call this position child liberationism. Child liberationists argue that we should extend to children all the rights that adults have (Archard 2010: 94-99). I don't take this view very seriously since the empirical facts of child development suggest that children simply don't have the capacity to acquire the independence of adults if "given the chance." Becoming independent clearly requires the acquisition of skills that take time to build and that even adults sometimes fail to fully develop. Laura Purdy puts it well:
“Granting immature children equal rights in the absence of an appropriately supportive environment would be analogous to releasing mental patients from state hospitals without alternative provision for them” (Purdy as quoted in Archard 2010: 98).
There are, however, some more generally-accepted positions on who has rights over children. First, there is the view of Thomas Hobbes. Let's call this the absolute subjection view. According to Hobbes, parental power is total. Parents may "alienate them... may pawn them for hostages, kill them for rebellion, or sacrifice them for peace" (Hobbes as quoted in Archard 2010: 100).
To 21st century ears, this view is horrifying, or at the very least sounds anachronistic. John Locke, who was mentioned in the previous section, had a tamer view of parents' rights over children. Locke argued that the authority of parents over children is natural, although it is not the absolute control that Hobbes argued for. The parents' task is to rear the children well, for that is the social function of marriage. During this time, parents can be said to be the proprietors of the children, but they can't pawn them or sacrifice them—contra Hobbes. Call this the parental proprietorship view.
Another perspective is that of Robert Nozick, whose "experience machine" we encountered in The Trolley. His view can be referred to as the extension view, since it claims that children are an extension of the parent. Children, says Nozick, "form part of a wider identity you have" (Nozick as quoted in Archard 2010: 101). In other words, due to the procreative relationship between parents and children, parents have the right to bring up the child with the beliefs, views, values, and ways of life of the parent. The children are "organs" of the parents (ibid.).

Robert Nozick (1938-2002).
The extension view contrasts nicely with the "right to an open future" view. Thinkers like Nozick see it as important that children acquire certain values and inherit (or continue) a certain identity (Archard 2010: 100). Those who advocate children's "right to an open future", however, value the autonomous (or chosen) life of children. Put another way, advocates of children's "right to an open future" want children to be equipped with the reasoning faculties to make decisions for themselves, without having some identity or value system forced on them. For the advocate of children's "right to an open future", parents have rights over but also duties towards their children: parents must maximize the capacities of the future adult so that that adult can be maximally free, exercising their own free and autonomous choices.
The preceding is all complicated enough. So we might as well introduce a further complication: the role of the state. There appears to be a spectrum of views with regards to the role of the state in the rearing of children. On one side of the spectrum is the "family state" view. This is the view that, ideally, the state should take exclusive responsibility for the rearing of children. The most famous advocate of this view is Plato. Plato (in)famously defended the notion that male-female pair-bonds should be abolished and that the Guardians (those who had been groomed to rule the state since they were children) would select which male-female pairs would mate and then hand their children to professional childcare specialists. The parents would never know their biological offspring. On the other side of the spectrum is the "state of families" view, which (once again) John Locke advocated. This is the view that parents not only have rights over their children (the parental proprietorship view) but also have a right to privacy to rear their children without interference from and observation by the state.
Food for thought...
How much do we owe?
State intervention
Now that the "right to an open future" view has entered the picture, we must consider just how far state interventions should go if this view is true. But in a liberal society like ours—liberal in the classical sense that stresses civil liberties, the rule of law, and economic freedom—there appears to be a trilemma, i.e., an uneasy situation in which there are three competing options. James Fishkin identifies this problem for us (see Archard 2010: 105-06). Liberal societies tend to promote three principles. The first is the principle of merit. This is simply the view that positions and wealth within society should be allocated only on the basis of qualifications and productivity. The principle of equal life chances states that children with the same potential should have the same life prospects for social advantage. In other words, if two children have equal potential, the chances of one of them shouldn't be hampered just because she grew up in, say, a predominantly Mexican neighborhood with poorly-funded schools. Lastly, the principle of family autonomy states that the government (i.e., the state) should not interfere with family life other than to provide the minimum conditions for the child's eventual participation in adult life, such as through compulsory public schooling.

Student living out of car.
These three principles seem plausible enough. The difficult choices come when the principles begin to conflict with one another. In particular, it appears that the principle of equal life chances and the principle of family autonomy might be incompatible in certain respects. This is because it is fairly obvious that differences in socioeconomic status (SES) affect children's prospects for academic and social success. Food and housing insecurity and lack of reliable academic guidance are just two factors that could negatively impact the academic prospects of children in low SES households. So what should be done?
The tendency appears to be to want to avoid Plato's totalitarian state, where the children are reared by the state—although reared very well, one might add. Even though, in theory, the children are raised well, maximizing each of their potentials, there are many unsavory aspects to Plato's social order: complete submission to authoritarian rulers, the abolition of marriage, etc. However, the condition of having no state intervention is equally undesirable. Not only would this mean that low SES families are given no assistance, essentially ensuring that society never capitalizes on these children's potential talents, but some families might instill in their children ideas that are contrary to academic and social success, not to mention the public good (see the Food for thought). Ideally, there should be some middle position, but it is difficult to know where to draw the line. Should all families be given the basic necessities and resources to rear children? Should something like a universal basic income be instituted? Should all children spend part of their lives in something akin to boarding schools, where the state can provide the necessary housing, environment, and sustenance for a good education? Not to sound like Plato but—can we really trust parents with something as important as a child's education? It seems fitting to end this subsection with the words of J.S. Mill:
“It is in the case of children that misapplied notions of liberty are a real obstacle to the fulfillment by the state of its duties. One would almost think that a man's children were supposed to be literally, and not metaphorically, a part of himself, so jealous is opinion of the smallest interference of law with his absolute and exclusive control over them” (Mill as quoted in Archard 2010: 107).
Ideological freedom?
Note that the preceding discussion made one very big assumption without any real defense: that the knowledge that the state imparts is actually valuable (or preferable to what parents can impart). This is not the place to enter into a full analysis of the educational system (although I do dive into this in my teaching trilogy in my PHIL 105 course). Let's concentrate instead on one possible effect that the curriculum has on children: influencing their values.
One K-12 teacher (B. Hernández) that I reached out to for information about ideology in the classroom put it bluntly, "There’s no apolitical classroom, there’s no neutral classroom, it’s impossible." Hernández made the case that ideology influences education at every conceivable stage, beginning with what's chosen for inclusion in the curriculum as well as what's excluded. She stressed the importance of teaching in a socially inclusive way:
“If we choose to include diverse stories and perspectives in our curriculum and we have conversations about bias and racism, obviously we’re impacting our students ideologically. If we don’t have those conversations, we’re also impacting them ideologically—just in the opposite way... [So] we need to be conscious about the messages students are getting, based on what we do and teach in our classrooms” (Hernández, personal communication).

Inclusivity is one value that students can have imparted on them. Another is blind patriotism. In his Lies My Teacher Told Me, James Loewen argues that the American K-12 history curriculum has been sanitized and distorted so as to primarily emphasize positive aspects of the nation's history, while deliberately ignoring or "Disneyfying" aspects of American history that are either morally problematic, like slavery and relations with Native Americans, or do not fall in line with the political status quo. For example, most students know about Helen Keller overcoming her disabilities, but they don't know that she was a radical socialist. Most students also know that Martin Luther King Jr. was the country's most famous agitator for civil rights, but they don't tend to know how vociferous King was against American aggression abroad, calling the United States government "the greatest purveyor of violence in the world today."
Perhaps even more important than teaching (or not teaching) inclusivity and patriotism, we should consider whether the educational system, as it is currently constituted, actually imparts critical thinking skills to students. To be honest, the available evidence does not make it apparent that the educational system is at all effective in this regard (Caplan 2018). I don't need to tell you that this is bad. Many teachers are worried. My colleague and friend Josh Casper shares these sentiments:
“Education that focuses on rote memorization and preparation for standardized tests fails, not only our students, but our society by creating generations of memorizers and test-takers and not generations of critical thinkers. History never fully repeats itself, but it rhymes with itself; therefore, an education system that fails to imbue students with the requisite tools of critical thinking will create a society founded on popular docility, mass apathy, and oppression” (Casper, personal communication).
Should the state take initiative in influencing children's values so that they are more inclusive? Should we make students more patriotic? Perhaps a consequentialist argument can be made for one of these positions, arguing that more inclusive or patriotic students would be (overall) better for the public good. Having said that, this might fly in the face of the principle of family autonomy. (What if the children's families shun inclusivity and/or patriotism?) Should we make students better critical thinkers? Would this be better for society? That's a legitimate question that rational people might actually disagree about. This is a trilemma indeed.
Segregation(?)
As one final example of how the state might intervene to maximize children's educational potential, perhaps the right way to move forward in K-12 education is to eliminate co-ed education, opting for same-sex classrooms only:
“According to difference feminists [see The Gift], co-ed classrooms are one of the primary places where men's gender dominance over women is created and maintained. For example, a significant percentage of teachers (female as well as male) tend to praise boys more than girls for equal-quality work; encourage boys to try harder; and give boys more interactive attention (Hall and Sandler 1982: 294). Thus, girls and women need same-sex classrooms, not because female students are less academically capable than male students, but because such classrooms constitute an environment friendly to women's concerns, issues, interests, needs, and values” (Tong 2010: 226).
Coming of age...
The trajectory of this unit is pretty straightforward. Now that the normative questions surrounding childhood have been explored, we will move forward in the life cycle to discuss the various ethical choices one makes as one gets older. Once compulsory schooling is over (and sometimes before it's actually over), American students face the choice of whether or not to enlist in the armed forces. We'll take that question up—and much more—in Thucydides' Trap. Another important set of decisions young adults have to make surrounds love and marriage; these will be discussed in The Gift. Concurrently with deciding whom to marry and whether or not to enlist, young people are making decisions at the ballot box. In Seeing Justice Done, we'll consider the moral dimensions of policing and punishment—topics you may have to vote on. After that, we'll think about our relationship to animals (The Jungle), money (The Game), and drugs (Prying Open the Third Eye). We close with a reflection on death in The Troublesome Transition.
The topic of abortion is broached primarily in order to discuss theories about moral personhood. The views explored include theories that ascribe personhood to those who have certain intellectual capacities (Kant and Locke) and those that are sentient (utilitarianism), as well as less popular views (religiously-motivated views and the potentiality view).
Lafollette (1980) argues that, because parenting is an activity potentially very harmful to children, society ought to require parenting licenses of all those who want to have and raise children.
There are various views with regards to who has rights over children, as well as whether we have duties and obligations to children beyond ensuring survival. We covered the child liberationist view, the absolute subjection view, the parental proprietorship view, Nozick's extension view, and the "right to an open future" view.
We closed with different views on how the state can intervene to ensure children have a maximally good future. Topics covered included state interventions to correct differences in socioeconomic status, values being taught in the classroom, and the abolition of co-ed classrooms.
FYI
Suggested Reading: Mary Anne Warren, On the Moral and Legal Status of Abortion
Supplemental Material—
- Reading: Don Marquis, Why Abortion is Immoral
- Reading: Judith Jarvis Thomson, A Defense of Abortion
- Reading: Hugh Lafollette, Licensing Parents
Related Material—
- Podcast: Freakonomics, Abortion and Crime, Revisited
- Audio: Fresh Air, Interview with Richard Rothstein on his book The Color of Law.
- Reading: Noa Yachot, History Shows Activists Should Fear the Surveillance State
- Video: James Loewen, Interview on Lies My Teacher Told Me
Advanced Material—
- Reading: Richard Rokyta, Fetal Pain
Footnotes
1. Note that 1973 is the same year that Roe v. Wade was decided, after two years of arguing and re-arguing.
2. There's also some interesting empirical data suggesting that legalizing abortion may have contributed to later declines in crime rates.
Thucydides' Trap
The humanising of war? You might as well talk about the humanizing of Hell!... The essence of war is violence! Moderation in war is imbecility!... I am not for war, I am for peace! That is why I am for a supreme Navy... The supremacy of the British Navy is the best security for peace in the world... If you rub it in both at home and abroad that you are ready for instant war... and intend to be first in and hit your enemy in the belly and kick him when he is down and boil your prisoners in oil (if you take any), and torture his women and children, then people will keep clear of you.
~John Fisher
Hellbound
Young men and women today have to make the same decision that countless others have had to make throughout history—with differing levels of coercion applied, I'm sure. The decision is whether or not to participate in a practice that is disagreeably human: war. Today there are many ways to participate in this lethal custom. One can, of course, enlist to be on the frontlines. But one can also participate in warfare from, say, a US base in Germany, where one can carry out assassination missions using drones. One can also be a part of the weapons industry, which receives million- and billion-dollar contracts from the US government. Or, at least in the United States, one can abstain from all of the above.
One important ethical study of war comes from Michael Walzer's Just and Unjust Wars. In this work, Walzer defends a view that is generally referred to as just war theory. In a nutshell, this is the view that war can be justified if the morally-weighted goods outweigh the bads. In defending this view, Walzer considers other possible perspectives on war and argues against them. We'll take a look at each of those in turn.
Walzer juxtaposes his view with other ethical perspectives, all of which he finds lacking. Take, for example, absolute pacifism. This is the view that war and violence are always wrong; i.e., there is no such thing as justified state violence. According to this view, “even military action aimed at protecting people against acute and systematic human-rights violations cannot be justified” (Fox 2014: 126). Although the view appears to be perfectly coherent, it is interesting to note that it is hard to find actual historical figures who advocated it. Martin Luther King Jr. appears to have done so. Arguably, Jesus of Nazareth did as well, at least according to the traditional conception of Jesus. However, some scholars argue that the real historical Jesus was actually surprisingly conventional—at least according to the earliest Gospel accounts, as opposed to the later, more embellished accounts (Wright 2010, chapter 10). For example, Jesus didn’t seem to preach universal love and he wasn’t very divine in these early accounts. He was just another apocalyptic prophet. It was only as time passed and the people who might actually have remembered Jesus died that the accounts of his life became more moral and the pacifism he is known for was made manifest.1
There are some pretty clear objections to absolute pacifism. First off, absolute pacifism does not allow for self-defence. But it seems intuitively true that humans have the right of self-defence. Most theorists covered in this course, e.g., Kant, Aristotle, Hobbes, and Mill, explicitly defend the right of self-defence. Moreover, even though absolute pacifism is often associated with some religious standpoint, e.g., Martin Luther King Jr., even divine command theorists seldom subscribe to this view. In fact, Christianity has had links to war from the very start, as Constantine was baptized in his dying days (Freeman 2007, chapter 11).2

Albert Einstein (1879-1955).
Another view that Walzer considers is contingent pacifism. This is the view that, under certain conditions (for example, self-defence), war is permissible (perhaps even necessary), but that one is still able to reject, on principle, most other military aggression. Some good exemplars of this view might be Albert Einstein (the famous physicist) and Bertrand Russell (whom you will meet in The Gift). Both of these thinkers were opposed to World War I—Einstein because he saw the war as a product of the German militarism, racism, and nationalism that he opposed; Russell because he saw the turmoil in Europe as contrary to the interests of greater civilization. Despite their opposition to WWI, both of these individuals felt differently about World War II. They argued that Hitler's overabundance of aggression, genocidal rhetoric, and overt racism were threats to the entire world. War may be evil, they reasoned, but it is the lesser of two evils in this case. The world must fight Hitler in what is essentially self-defence.
With regards to justifying her view, the contingent pacifist has a much easier time than does the absolute pacifist. “Some (contingent) pacifists use the second formulation of the categorical imperative to support their position by claiming that war treats persons as means and does not respect them as ends in themselves” (Fiala 2018). However, since the autonomy of the citizenry must be protected, states do have a duty to wage war in self-defence. Thus, from a Kantian perspective, most wars are unjustified—but a war of self-defence is.
Another moral perspective on war can be referred to as the “peace through strength” view. This is the view that, in order to promote peace, one’s own nation must become militarily supreme so that no other state powers would dare invade or transgress in any way. The logic behind this view can be seen at various levels. Some drug dealers report that they have to be ruthless in the drug trade, never letting anyone short them or cross them, since this would be a sign of weakness—a sign which could put their lives in peril or cause a turf war (see Hari 2015, chapters 4 and 5). According to Thucydides, the Athenians argued to the people of Melos that they had to punish them brutally, since otherwise all the other Greek city-states would think Athens was weak and attempt to subordinate it. Perhaps some American exemplars of this view are Ronald Reagan and Theodore Roosevelt. Just as in the epigraph above, the idea is simple: make the prospect of war so terrible that no one dares to dive into the abyss.
This is not to say that the "peace through strength" logic is without its critics. Historically, the more militarily powerful a state is, the more wars it engages in (Dyer 2005: 290); so, the logic might be faulty. In fact, it appears that this aggression-eliciting effect even occurs at the individual level, since the mere presence of a weapon increases aggressiveness in subjects (Berkowitz and LePage 1967).3
Just war theory is the final perspective to consider. Just war theorists go further than the contingent pacifists in arguing that, not only is war justified in the case of self-defence, but states can use their militaries to actively do good in the world. Here are Walzer’s criteria for deeming a war to be just:
- There is a just cause, namely a response to aggression.
- The war is initiated by a legitimate authority, i.e., the nation-state.
- The intention behind the war is the right intention; i.e., it is a response to aggression and not an opportunity to grab more land or natural resources.
- There are reasonable prospects of success; i.e., there is a good chance that the waging of war will be effective in completing the stated mission.
- The conflict as a whole exhibits proportionality, i.e., the morally weighted goods achieved by the war outweigh the morally weighted bads that it will cause.
- All other alternatives besides war have been attempted and have failed; i.e., war is the last resort.

Rwandan genocide, 1994.
Walzer is fond of making an analogy between the nation-state and the individual. He argues that, just like an individual, a national community is allowed to, for example, intervene against aggression. In other words, just like an individual can intervene when someone is being, say, beaten up by someone else, the state can likewise justifiably intervene in similar situations. This can take the form of an intervention during crimes against humanity. For example, had some state taken the initiative to stop the Rwandan genocide, this would've been justified for the just war theorist. Intervention can also come in the form of a preemptive strike against another nation-state that poses a credible threat, as was arguably the case in the Six-Day War between Israel and the Arab coalition led by Egypt. Returning to the individual-state analogy, nation-states are also justified in defending themselves; i.e., wars of self-defence are clearly permissible, just as someone defending themselves against aggressors seems morally unproblematic. Lastly, nation-states, like individuals, can defend others from unfair odds. In other words, if two great powers are set to conquer a weaker power, a nation-state can justifiably intervene against the great powers. Of course, empirical research is required to say whether any particular war produces more net positive or negative political consequences. However, if there is evidence that waging war would yield more net positives, the just war theorist argues that waging war would be morally permissible or even necessary. This is, of course, consequentialist moral reasoning.
Food for thought...
Sidebar
McPherson (2007)
On how to decrease terrorism

Malala Yousafzai, 2014.
As of this writing, the United States is still fighting the so-called War on Terror, a project which began after the 9/11 attacks. This is, of course, an atypical conflict since it is unclear just how to win a war on terror. (Who signs the armistice documents?) In any case, it appears that the current goal is simply a reduction of terror attacks. It may be the case that, just as some think there are "natural" rates of crime and unemployment (Brayne 2020: 59-60), there is a "natural" rate of terrorism. In other words, if you live in a free society—with minimal intrusion by the state into the private lives of citizens—you can expect a certain number of acts of terror to be committed. However, some thinkers are not so sure about this. Most notably, Noam Chomsky and Malala Yousafzai have claimed that US violence and militarism abroad are what cause some to be radicalized, feeding terror networks with new converts and causing what the CIA calls blowback, the unintended results of American actions abroad. Malala, in fact, turned heads when she explained her views directly to President Obama while the cameras were rolling.
There does appear to be some evidence that particular policies of both the Bush and Obama administrations have led to the unnecessary deaths of non-combatants, including medical personnel from Médecins Sans Frontières (Doctors Without Borders) and journalists. This comes from leaked documents that were published by The Intercept, an online newspaper that focuses on investigative journalism. Excerpts from the relevant documents and the stories that were written about them were compiled into a book called The Assassination Complex. Here are some of the key findings:
Because so many factors are involved in assembling the terror watchlist, it is easy to get on it and hard to get off. First off, terrorism is defined extremely broadly, such that it includes destruction of property, which means even animal rights activists can be deemed terrorists. Moreover, “reasonable suspicion” is not defined rigorously in the guidelines, making what counts as "reasonable" extremely flexible. In addition, people linked to a person of interest ("network nodes") can also be put on the list, even if they have no history of terror-related activities. This means that the dragnet is expanded and ordinary, non-criminal individuals sometimes end up on the list, including elected officials such as Ted Kennedy and Evo Morales. Under Obama, who relaxed the guidelines for inclusion, the terror watchlist grew tenfold to almost 50,000 names.
Both Presidents Bush and Obama personally approved high-value targets to be added to the kill list, primarily through the use of “baseball cards”, i.e., condensed data sheets on a target. Obama, in particular, amplified the drone assassination program.4
The intelligence on which baseball cards are built is extremely unreliable. It comes primarily from signals intelligence, a type of intelligence that cannot easily distinguish between a high-value target and, say, his mother. In particular, most drone strikes are executed on targets ascertained through metadata, which is often unreliable (killing someone who borrowed the phone) or can easily be scrambled (such as when certain members of groups trade SIM cards by mixing them up in a bag and handing them out randomly). Importantly, metadata analysis does not include the content of mobile device communications, so proof of guilt is difficult to come by. The result is that, per the Bureau of Investigative Journalism, it is estimated that in the first five years of the Obama presidency, more than 200 innocent civilians were killed in drone strikes.
Strikes often kill many more than the intended target. For example, during Operation Haymaker in Afghanistan, only 35 of the 200 killed were actually intended targets.5
The military labels unknown people it kills “enemies killed in action", thereby undercounting civilian deaths.
There are clear discrepancies between the reported number of people on the kill list from a given country and the number of people killed via drones in that country. In other words, there are far more people actually killed via drones than there are people that were targeted to be killed by drones.
Some targets from the hit list could not easily be struck since they were in countries that the US was not officially at war with, thereby making the development (or locating) of the target more difficult. Analysts called this the “tyranny of distance.” As a result, there has been a proliferation of bases from which to deploy drones across the African continent—a new type of colonial imperialism (see Immerwahr 2019, chapter 22).

A police stingray device.
Given the information reported by The Intercept, it may very well be the case that US military and covert operations abroad do fuel terrorism, as Malala claimed. If this is so, we can make a consequentialist argument against these US operations. To all this we could add that the same technology that was devised to assist in the drone assassination program and in war zones is now being used by local law enforcement, as you will see in Seeing Justice Done. This augmented consequentialist argument might be that current practices are morally intolerable not only because they cause the death of innocents abroad, but because they fuel terrorism (which leads to the death of US citizens) and they provide the technology which is increasingly being utilized domestically to surveil the citizenry.
Storytime!
Walls

A young person may well forgo military service in the United States. But another heavily militarized profession they might enter is that of border patrol. As of this writing, borders and immigration are hot-button issues. This is in no small part due to the tough-on-immigration tactics of Donald Trump. However, Trump did not devise these tactics himself, nor is he the first to use them—although he might be the most egregious example of tough-on-immigrants politics. These policies were actually initiated by Democrats, for example Pete Wilson at the state level (California) and Bill Clinton at the federal level. Other Democrats that signed on include Joe Biden, Dianne Feinstein, and Hillary Clinton (see Frey 2019).
It was Bill Clinton who signed into law the bill that placed border walls in cities and left the desert regions unprotected. This policy had the effect, whether intended or not, of funneling desperate migrants into the desert, where their risk of death is much greater. However, harsh immigration policies actually had the effect of increasing the number of undocumented residents. This is because, prior to a militarized border, it was easy for many Mexicans to cross the border for work and return to their homes in México at night. After militarization, though, those who managed to cross tended to stay rather than risk their lives crossing again. After Clinton initiated the era of tough-on-immigrants politics, George W. Bush began to heavily militarize the border, and Barack Obama further fortified it, initiating the practice of family detention facilities. For a time, in fact, Obama was even called “Deporter-in-chief.” For this reason, Donald Trump’s actions, although morally suspect to say the least, cannot be prosecuted: they are technically legal.
But the US southern border does not only see migrants from México attempting to cross; there are also many people fleeing violence in Central and South America. Part of the reason is that the US gun market plays a vital role in destabilizing the regions from which these migrants come. In chapter 5 of his recent Blood Gun Money, Grillo begins by discussing how porous the border actually is, despite rhetoric about building walls and securing the homeland. Greater border security, Grillo argues, doesn’t make crossing the border impossible—just more expensive. And this is the case whether one is crossing the border with contraband (e.g., drugs) or taking part in illegal immigration. Why is it so expensive? Because much illegal immigration is now orchestrated by the Mexican drug cartels. These cartels, which control smuggling at the border, simply expanded their trade to include smuggling humans—a task to which they easily adapted. This in turn gives them another form of revenue, which makes them more powerful, which makes them more feared, which makes Americans more alarmed, leading to more militarization of the border, which leads to more money for the cartels.

In addition to giving more power to Mexican drug cartels, militarizing the border puts all the focus on the flow of goods and people traveling north, neglecting the flow of goods and people traveling south. To correct this, Grillo discusses how both Americans and Mexicans use the private-sale loophole in American gun laws, which allows private sellers to sell weapons without a background check, to buy weapons in the US and take them to México. These weapons, of course, end up in the hands of the now more powerful drug cartels. Why do guns flow south? The reasons someone might participate in this practice vary. Grillo reports that some undoubtedly like the excitement. Some like the culture. Most, including many veterans, the recently unemployed, and those on disability, just really need the money. Those who purchase weapons on behalf of others are known as straw buyers.
As it turns out, no one knows how many guns have been smuggled south across the US-México border. That’s the nature of the black market. One estimate, however, is that over 250,000 weapons per year were smuggled from the US to México during the 2010s, earning the gun industry more than $125 million per year. The study is called The Way of the Gun. The US firearm black market—or, as Grillo calls it, the Iron River—makes its way into more than 130 countries. It gets into the hands of gangs and guerrillas in Central and South America, destabilizing those regions and strengthening the gangs by equalizing the firepower of the police and the gangs, which in turn lets gangs commit more crimes with impunity. This violence drives people to leave their countries, becoming refugees and ultimately making their way to the American border.
There are obviously various normative questions around this complicated dynamic. The one we can mention here is that any attempt to simply militarize the US southern border will likely fail. This is because the root cause of the immigration is, at least in part, the unregulated nature of the US gun market and gun industry. As such, it may be that the suffering caused by militarizing the border is needless and immoral, since it produces mostly negative consequences with very little to show in the way of positive results. Alternatively, one can make a deontological argument by demonstrating that the southward flow of US weapons weakens the autonomy and legitimacy of nation-states in Latin America.
There are various ethical perspectives on war: absolute pacifism (war and violence are always wrong), contingent pacifism (war and violence are almost always wrong, unless war is waged in self-defense), just war theory (just wars can be waged if certain criteria are met), and the "peace through strength" view (militarism is the best method for securing peace).
McPherson makes the case that just war theory is unable to clearly articulate why state-sponsored violence is permissible. As such, he makes room for the possibility that non-state-sponsored violence, i.e., terrorism on the part of non-state actors, is morally permissible in certain situations.
Due to the large number of innocent deaths caused by the US drone strike program, a consequentialist argument can be made that drone strikes overall have negative consequences (since they fuel terrorism) and are thus morally wrong.
With regards to the militarized border, both consequentialist and deontological arguments can be made about how the current policy (militarizing the border) is likely immoral and that a more ethical stance would be to regulate the gun industry so as to stop the flow of weapons from the US to Latin America.
FYI
Material on War—
Related Video: Gwynne Dyer, The Road to Total War
Related Video: Democracy Now, Interview with Gwynne Dyer on his book Climate Wars
Audio: The Lawfare Podcast, Interview with Graham Allison, author of Destined for War: Can America and China Escape Thucydides's Trap?
Note: There is no transcript for this audio file.
Material on Terrorism—
Reading: Lionel K. McPherson, Is Terrorism Distinctively Wrong?
Reading: Michael Walzer, Terrorism and Just War
Reading: Isabelle Duyvesteyn, How New Is the New Terrorism?
Reading: C. A. J. Coady, The Morality of Terrorism
Reading: Leonard Weinberg, Ami Pedahzur & Sivan Hirsch-Hoefler, The Challenges of Conceptualizing Terrorism
Video: Democracy Now, Noam Chomsky on ISIS
Note: Chomsky is himself an anarchist, although a non-violent anarchist. This is a video of Chomsky discussing his political positions.
Material on Immigration—
Reading: Michael Huemer, Is There a Right to Immigrate?
Reading: Robert D. Putnam, E Pluribus Unum: Diversity and Community in the Twenty-first Century (The 2006 Johan Skytte Prize Lecture)
Reading: Garrett Hardin, Living on a Lifeboat
Podcast: Letters and Politics, Interview with John Carlos Frey
Note: This is an interview of John Carlos Frey, the author of the book discussed in class: Sand and Blood: America’s Stealth War on the México Border.
Podcast: Radiolab Presents: Border Trilogy
Footnotes
1. Where did the Christian message of love come from if not from Jesus? It was Paul. The ambitious Paul was setting up franchises of Christianity across the Roman empire, and he needed to standardize everything. He used love as his example of what to do and what not to do: if you love your fellow man, don’t speak in tongues during the service, for example. Moreover, the Roman Empire was a vortex that drew people from the countryside to the big cities. This left people in need of familial affection, much as during the industrial revolutions. The Church offered this (see Wright 2010, chapter 11). Combine this with the fact that plague eventually came around and that the ethic of Christians was to heal the sick, and you can see why Christianity grew.
2. It is difficult to argue that Constantine's conversion was genuine. He never once went to a church event. He was instead simply a successful, if brutal, emperor who secured (and even expanded) the borders of his empire and unified the people under one ruler once more. In doing so, he gave Christianity a link to success in war, and this in turn made the Church acknowledge the overriding power of the State, something that can be seen from Aquinas to the 20th century (see Freeman 2007, chapters 11 & 12). In other words, the Catholic Church reliably sides with powerful states and attempts to justify their numerous wars.
3. A nearby relative to the "peace through strength" view is realpolitik, which states that moral considerations aren’t helpful when thinking about war, and we should be practical instead.
4. Per the reporting from The Intercept, Obama’s strategy for the war on terror seemed to be to strike at budding hotbeds of terrorist cells, such as Yemen. However, the US was not at war with Yemen, ruling out troops on the ground—a politically unattractive move either way, since the American public was weary of war. And so a strategy of assassination by drone was chosen. Dennis Blair, director of national intelligence, made the case that drone strikes were more attractive than any of the alternatives, since they were low cost, resulted in no US casualties, and gave the appearance of toughness. The result: when Obama took office, only one drone strike had taken place in Yemen; during his time in office, there was a drone strike in Yemen about every six days, on average. By 2015, almost 500 people had been killed in Yemen.
5. The drone strike program in Yemen is actually worse than the others since it lacks an element that was present in both the Iraq and Afghanistan theaters: domex (document and media exploitation). That’s because assassination by drone leaves no agents on the ground to recover documents and no captives to extract information from. So, it’s an intelligence dead end. A leaked study called “ISR Support to Small Footprint CT Operations — Somalia/Yemen”—ISR standing for intelligence, surveillance, reconnaissance—states bluntly that sigint (signals intelligence) is inferior to human intelligence, which is inferior still to tactical intelligence (intelligence coming from seized documents and interrogations). So, the already unreliable drone program is made more unreliable by the dearth of intelligence gathering in the region.
The Gift
Yo sé que tú lo dudas que yo te quiera tanto.
Si quieres me abro el pecho
y te entrego el corazón.
[I know that you doubt that I could love you this much.
If you'd like, I'll tear open my chest
and surrender my heart.]
~Julio Jaramillo
Love and marriage
Some of the most consequential decisions you'll ever make in your life revolve around the institution of marriage. Should you get married? To whom? What will the power relations be within the relationship? How will you address disagreements? These are all questions that you'll have to endeavor to answer, and many of the answers will have a normative dimension. In other words, when answering many of these questions, you'll regularly have to think about what's right—in every conceivable meaning of the word—for you and for your loved one(s). As if this weren't complicated enough already, I'd like to give you a historical survey of different philosophical perspectives on marriage, just to add even more choices in the multiple-choice test that is life. As you will come to see, the institution of marriage has had its fair share of fans and detractors (Almond 2010).

One detractor of marriage and family life was none other than the master: Plato. In his masterwork, the Republic, Plato made the case that the institution of marriage should be abolished. This is because, unlike other thinkers of his time, Plato recognized the potential of women to become rulers of the state. Moreover, Plato insisted that the ideal state requires that only those most suited to rule become rulers. So, if a woman were the most suited for the task, she should become a ruler. But, Plato noticed, family life took up most of women's time. As such, it was necessary to completely overhaul the family structure so as to give women the opportunity to rule (should they be qualified to). What would be the new social order? Male-female pairings would be abolished. Of course, the population still had to be replenished. So, during certain festivals, pairs of males and females would be selected, they would copulate, and (eventually) a child would be born. But this child would be raised in nurseries by individuals trained specifically in the art of child-rearing. The biological parents would not remain together as a pair-bond, and they would never know their offspring—the cost of living in an ideal state.
If you'd like a more "traditional" take on marriage, look no further than Aristotle and Saint Thomas Aquinas. Aristotle believed that men were more fit to command, and thus should take control in marital relations (Politics 1253b, 1259b, 1260a). Even so, Aristotle had a more lax take on marriage than the Christian philosophers. It was thinkers like Saint Paul, Saint Augustine, and Saint Thomas Aquinas who argued for the Christian perspective that marriage is the only acceptable context in which one can have sex—a restriction the ancients were not exactly known for. Aquinas in particular spoke of the many pernicious elements of sexual lust, as you will see in the next section. He advises that sex be confined to marriage and primarily for procreation, with only occasional copulation to protect against temptation—the so-called "marriage debt."
John Locke had an interesting take:
“The English political philosopher John Locke (1632-1704) took a narrow and relatively limited view of this, seeing the roles of husband and wife as social roles that could in theory be abandoned when the purpose for which the marriage was entered into—having and raising children—had been completed” (Almond 2010: 78).
Kant, unsurprisingly, disagreed with Locke. For Kant, the juridical status of the partners was that of a lifelong contract which included an element of something like property rights. In other words, the spouses had something like property rights with respect to one another—especially with respect to one another's sexual organs. But it's much more than that. "Marriage prevents us from using others merely as instruments for fulfilling our sexual appetites, for marital partners satisfy their sexual desires as part of a lasting relationship in which each treats and regards the other as a human being" (Tuana & Shrage 2010: 17). Kant will also be covered in the next section.

Mary Wollstonecraft
(1759-1797).
In A Vindication of the Rights of Woman, English feminist Mary Wollstonecraft (1759-1797) rejected the idea that one should have a companion for life. She also insisted on keeping a separate residence from that of her husband, the anarchist William Godwin (1756-1836). We've met Godwin before in this course: the thought experiment in The Trolley which we referred to as the near and dear argument is simply a stylized version of an argument by Godwin. Godwin and Wollstonecraft's daughter also acquired literary fame. Her name was Mary Shelley (1797-1851), and she was the author of Frankenstein. Her husband, the poet Percy Bysshe Shelley (1792-1822), took his father- and mother-in-law's views seriously. He wrote a short essay against legal marriage but insisted on a romantic institution of sexual union fueled by feeling and attraction. Surely he loved Mary very much (Almond 2010: 79).
Harriet Taylor, the wife of utilitarian John Stuart Mill, argued that economic independence, through paid work outside the home, was essential if women were ever to stand equal to men (ibid.). Since we will cover this view in more detail later in this lesson, we should give it a name. Call this sameness feminism.
Of course, Karl Marx and his collaborator Friedrich Engels had an opinion on this matter too:
“[Marx and Engels] saw the family as a device for perpetuating and making possible capitalist patriarchy—a system that benefited men by enabling them to hand down their property to offspring who could be identified as their own, but that was ultimately exploitative of women. In his influential work The Origins of the Family, Private Property and the State, Engels described wives and children as a proletariat within the domestic economy of the family, with husbands and fathers playing the role of the bourgeoisie” (Almond 2010: 79).
More recently, there is opposition to marriage from the polyamorous community. A person who practices polyamory engages in multiple sexual or romantic relationships simultaneously. Someone with this predilection might critique the various justifications for marriage. For example, some philosophers argue that sustaining a love relationship requires maintaining sexual exclusivity; in other words, staying in love requires only having sex with the person you're in love with. But some, like Bertrand Russell, have argued that extra-marital sex could actually strengthen otherwise difficult marriages. In other words, an open marriage (in which one has the explicit consent of one's partner to have sexual relations with other people) might actually improve the working relationship of the married couple. Perhaps this occurs because polyamorous relationships have communication built into them, and hence there is greater honesty and openness than in exclusive arrangements (see Elizabeth Brake's entry on Marriage and Domestic Partnership in the Stanford Encyclopedia of Philosophy for more).
Nonetheless, marriage does have some measurable benefits:
“[R]esearch from most sources is agreed in finding that formal marriage is more stable than cohabiting, and that unmarried couples are three or four times more likely than married ones to split up... Nor is the potential to split up diminished by the presence of children. Indeed cohabitees with children are between four and five times more likely than married couples to split up. Of course, some cohabiting relationships are resolved by marriage rather than separation, but, in general, cohabiting represents a positive preference for lack of commitment, rather than, as is often supposed, a serious preliminary to it” (Almond 2010: 73).
Important Concepts + Sidebar
Pessimism vs Optimism
Pornography
Although the topic may be uncomfortable in a college undergraduate course, several ethicists have raised alarms about the effect of pornography on society. The following discussion is taken from Tuana & Shrage (2010: 29-35). One important thinker in this debate is Catharine MacKinnon. In her Feminism Unmodified, she denounces the sale of women's sexuality for male entertainment, e.g., in pornography. Moreover, she argues that when pornographic materials depict women as aroused and sexually fulfilled by aggressive and violent sexual treatment by men, this mis-educates men and distorts their sexual expectations. This leads to a normalized objectification of women, which, in the language of the categorical imperative, is to treat women not as ends in themselves but merely as means to an end.
Ann Garry challenges MacKinnon's Kantian-flavored feminist critique. After reviewing the social science literature on the effect of pornography on men, she concludes that it's probably not the case that pornography itself causes men to objectify women; rather, the two have a joint cause. Adding nuance, this is to say that it is partly the case that pornography does harm women, but an important factor in this is that our society treats sex as "dirty." As such, depicting someone as a sex object is degrading. In other words, it is because society fails to treat women as fully-fledged persons and because society treats sex as "dirty" that, when women are sexually objectified, they are done more social damage than if men were sexually objectified in the same way. Said counterfactually, if society were non-sexist, then a non-sexist pornography would be possible; it is the social asymmetry between men and women that impedes this. Here is Garry's depiction of non-sexist pornography:
“[N]on-sexist pornography would treat men and women as equal sex partners. The man would not control the circumstances in which the partners had sex or the choice of positions or acts; the woman's preference would be counted equally. There would be no suggestion of a power play or conquest on the man's part, no suggestion that 'she likes it when I hurt her'. Sexual intercourse would not be portrayed as primarily for the purpose of male ejaculation—his orgasm is not 'the best part' of the movie” (Ann Garry as quoted in Tuana & Shrage 2010: 31).

Of course, there are some who enjoy sex of a kind that many (or most?) find—shall we say—unpalatable. I am referring here to sadomasochism, the practice of deriving sexual gratification from the infliction of physical pain and/or humiliation. Patrick Hopkins responds that participants in sadomasochistic (SM) sex are not eroticizing sexual violence because these scenes are only 'simulations'. He informs us that in SM there is attraction, negotiation, playfulness, and, most important of all, the power to halt the activity. It is also possible to switch roles, such that the "dom" becomes the "sub". Lastly, there are typically "safe words" and other forms of attention to safety. And so, Hopkins concludes, SM does not involve real subordination or terror, although it may involve some physical pain (although that's sort of the point).
Prostitution
Another practice that is regularly condemned by many feminist ethicists is prostitution (Tuana & Shrage 2010: 32-34). For example, Elizabeth Anderson argues, like many other ethicists (in my estimation), that there are some activities that should simply not be part of a market system, since the norms of the market degrade and corrupt these activities, e.g., selling human organs and (of course) sex. Margaret Radin takes Anderson's argument further and asks us to imagine the deforming effects that the market would have on sex:
“What if sex were fully and openly commodified? Suppose newspapers, radio, TV, and billboards advertised sexual services as imaginatively and vividly as they advertise computer services, health clubs, or soft drinks. Suppose the sexual partner of your choice could be ordered through a catalog... If sex were openly commodified in this way, its commodification (sic) would be reflected in everyone's discourse about sex, and in particular about women's sexuality. New terms would emerge for particular gradations of sexual market values” (Radin as quoted in Tuana & Shrage 2010: 33).

Other ethicists are not so sure about Anderson and Radin's analysis. Marxist feminist Harriet Fraad (2017) argues that it is the exploitation involved in prostitution (e.g., by the pimp, by abusive customers, etc.) that is inherently wrong; the sex acts themselves are simply a form of labor. As such, Fraad envisions conceivable arrangements in which a union of sex workers organizes a means by which to safely practice their sexual and emotional labors in a worker-owned collective enterprise. In other words, if sex work were legalized, worker-owned brothels could provide a regulated, safe, non-coercive environment in which sex workers could provide their labor without any exploitation. There would be no pimp (since the enterprise is worker-owned), bad customers would be ejected by security staff, and safety measures would be regularly ensured for both workers and customers.
There are still more complicated and nuanced positions. Debra Satz (1992) argues for the decriminalization of prostitution despite holding that, in our cultural context, prostitution is morally wrong since it perpetuates inequality between men and women. Put another way, her own view is that “prostitution [in our society] represents women as the sexual servants of men. It supports and embodies the widely held belief that men have strong sex drives which must be satisfied—largely through gaining access to some woman’s body” (78). But she concludes that, despite its being morally wrong, prostitution should be decriminalized because the current policy eliminates a possible means of income for those who need it the most. In other words, it is wrong in our society, but to keep it illegal is to harm those (mostly female) members of society for whom prostitution is their primary means of income.
Food for thought...
Gendered discrimination
Clearly, one recurring theme in the discussion so far is the lack of social parity between men and women. So, it appears that a discussion about potential solutions to gender discrimination is a fitting way to close this lesson.

Eighteenth century fashion.
In her helpful summary, Rosemarie Tong (2010) reviews the three major stages (or "waves") of feminist thought, as well as some interesting recurring debates within these waves. Beginning with the latter, feminists have been decidedly split over the sameness-difference-dominance-diversity debate. Feminists on the sameness side argue that women have to become the same as men in order to become men's equals, with what "same" means varying from thinker to thinker. Those on the difference side deny that women have to become the same as men. Yet another perspective comes from the dominance thinkers, who make the case that equality for women consists neither in becoming the same as men nor in maintaining their differences; rather, equality has to do with liberating women from the dominance of men. As such, women need to "identify and explode those attitudes, ideologies, systems, and structures that keep women less powerful than men" (Tong 2010: 220). Lastly, proponents of the diversity view believe that all the other approaches are incomplete. The equality problem for women won't truly be addressed until the disparities between rich women and poor women, white women and minority women, young women and old women, as well as between men and women, are also eliminated.
During the first wave of feminist thought, which is historically located in the eighteenth and nineteenth centuries, it appears that the advocates of the sameness view, which emphasized self-development and self-sufficiency, gained the upper hand. Mary Wollstonecraft is an excellent exemplar of this view:
“In her 1792 monograph, A Vindication of the Rights of Women, the philosopher Mary Wollstonecraft noted that whereas men are taught 'morals, which require an educated understanding', women are taught 'manners', specifically a cluster of traits such as 'cunning', 'vanity', and 'immaturity' that offend against real morals. Denied the chance to become moral persons who have concerns, causes, and commitments over and beyond their own personal convenience and comforts, women become hypersensitive, extremely narcissistic, and excessively self-indulgent individuals. So disgusted was Wollstonecraft by her female contemporaries' 'femininity' that she reasoned women would never become truly moral unless they learned to be, think, and act like men” (Tong 2010: 221).1
In contrast to Wollstonecraft's perspective, Catharine Beecher argued that women were actually insulated from the marketplace and the political forum. So, they don't have the same temptations and proclivities for the acquisition of wealth and power that men have. In other words, "women remain 'purer' than men and, therefore, [are] more capable of civilizing, indeed 'Christianizing' the human species" (ibid.). As such, it is women's differences from men that ought to be maintained and even celebrated. Beecher is thus an advocate of the difference view.
The second wave came in the 20th century. This wave is characterized by the rise of the dominance view. Per Tong, the rise of the dominance view came after a split between second-wave feminists. On one side of the divide were liberal feminists who believed that female subordination came from a set of social and legal constraints; in other words, they believed that society failed to provide women with the educational and occupational opportunities it provides for men. On the other side were revolutionary radical feminists. These feminists believed that the problem was much more deep-seated. They believed that the subordination of women came from hierarchical power relationships that were built into the system itself. The only solution was to destroy it.

Androgynous model.
Things get more complicated from here. The revolutionary feminists were themselves split into various camps, which Tong condenses into three basic perspectives. The radical-libertarian feminists targeted gender roles and puritanical approaches to sex. They advocated androgyny (the combination of masculine and feminine characteristics into an ambiguous form) and "perverse" sex (a kind of sex in which dominance roles alternate, with each partner taking turns being passive, receptive, vulnerable, etc.). Radical-cultural feminists urged women to abandon values traditionally associated with masculinity (such as assertiveness, aggressiveness, and emotional restraint) and embrace values associated with femininity (gentleness, supportiveness, empathy, nurturance). The third camp, radical-dominance feminists, agreed with the radical-cultural feminists that feminine virtues were excellent, but they cautioned that these virtues are not necessarily women's 'best friends', since they can also set women up for exploitation and misery. They advocated that women disconnect as much as possible from men, looking within for their self-definition and self-respect. It may even be the case, per some radical-dominance feminists, that the best way to resist gender oppression is to not get married, since marriage often forces women into a dependence role while simultaneously isolating women from each other (Tong 2010: 240).
The third wave is much more recent and is characterized by the diversity view. This view, when broadened to include racial discrimination, is also referred to as intersectionality. This is a complicated perspective that cannot be done justice here. What can be said is this. According to this perspective, achieving true gender equality is unimaginable without simultaneously achieving racial and class equality, since, for many, gender is not the primary source of their oppression; it's their race or their socioeconomic status.
Although the different waves of feminism are interesting in their own right, both as intellectual history and for their ethical prescriptions, the sameness-difference-dominance-diversity debate is most important for our purposes, since it prescribes how to proceed—how to move toward eliminating gender and sexual discrimination. Defending one of these views above the others will clarify many thorny normative questions. For example, consider the double standard when it comes to sexual behavior. Society appears to be less condemnatory of men who are 'players' than of promiscuous women, who are condemned as 'whores'. Sameness feminists, of course, argue that women's sexual desires should be given as much 'free play' as men's. Difference and dominance feminists agree that society unfairly restricts women's sexual urges but add that heterosexual relationships are still the paradigm of domination-subordination relationships. Thus, they are much more cautious about advocating traditional relationships, even though they wouldn't necessarily have to caution men in the same way—another example of the difference between the difference perspective and the others.
There are various perspectives on marriage in the history of philosophy, including the view that it should be abolished (Plato), that the male should be in charge (Aristotle and Aquinas), that it should be temporary (Locke), that it should be updated so as to be more equitable between the partners (Wollstonecraft, Taylor, Marx and Engels), and that it should be open (Brake).
One can divide perspectives on sexual morality into two camps: those who believe that sexual acts inhibit us from greater human functions (metaphysical sexual pessimists) and those who believe that sex is not necessarily pernicious but can even enrich our lives (metaphysical sexual optimists).
Embedded in the three waves of feminism is the sameness-difference-dominance-diversity debate, in which different approaches to establishing parity between men and women are argued for.
FYI
Suggested Reading: Alan Soble, Internet Encyclopedia of Philosophy Entry on Philosophy of Sexuality, Introduction and Sections 1, 2, & 3
Supplementary Material—
Video: CrashCourse, Natural Law Theory
Video: The School of Life, Thomas Aquinas
Audio: Freakonomics, The Fracking Boom, a Baby Boom, and the Retreat From Marriage
Video: Father Albert, Interview with Ann Coulter
Advanced Material—
Reading: Elizabeth Brake, Marriage and Domestic Partnership
Reading: Debra Satz, Markets in Women’s Sexual Labor
Reading: John Celock, Donald Pridemore, Wisconsin Legislator, Says Single Parenting Leads To Abuse
Footnotes
1. John Stuart Mill expressed similar sentiments; see The Trolley.
Seeing Justice Done
It is more necessary for the soul to be cured than the body;
for it is better to die than to live badly.
~Epictetus
On the fragility of social order
James C. Scott's Against the Grain is, I believe, essential reading for anyone who wants to understand state formation, the process by which a central authority amasses political domination over a given territory. First and foremost, Scott recognizes that the story of states is one told by states; as such, we should expect some bias in the narrative of history—a history whose writing has been sponsored by states. For example, there is a continual dismissal, denigration, and demonization of nomadic peoples. One of Scott's chief aims is to dispel the notion that early nomadic peoples were like a subspecies of humans. To even think of nomadic peoples as "less civilized" is, Scott argues, to accept the propaganda of the earliest states denigrating non-settled peoples. As it turns out, the growing animosity between the political elite of settled peoples and nomads can even be seen in different versions of the Epic of Gilgamesh. In an earlier version of the epic poem, the nomad character is just that: a nomad. In later versions, though, he is more savage and, for some reason, needs to have sex with a human female to be tamed and made more human. This denigration of the nomadic lifestyle, Scott argues, is all wrong.

Scott's Against the Grain.
In chapter 2 of Against the Grain, Scott details what happens to an animal species as it is being domesticated. He does this, he admits, with the aim of eventually arguing that our plants and our animals have, if you squint at the problem in a certain way, domesticated us. So here's what happens to a species as it is domesticated. Domesticated animals, relative to their wild counterparts, have reduced dimorphism. This is to say that the two sexes of the same species exhibit fewer and fewer differences besides their different sexual organs. In other words, the males and females of a domesticated species begin to look more and more alike. Also, domesticated species have a higher infant mortality rate, a greater reproduction rate, greater exposure to a disease pool, and are less capable of living outside of their ecological niche.
With these points in place, Scott muses about how these are all things that happened to humans as they transitioned from a nomadic existence to a sedentary one. Humans, Scott argues, are enslaved to their ecological environments, agricultural practices, and animals (both in the form of pets as well as in our animal agriculture). We spend an unsustainable amount of resources on these, as well as lots of time and energy. In fact, prior to the modern era, we would choreograph our whole daily and yearly lives to the needs of our animals and crops, which is why some of the oldest holidays tend to mark different stages of the planting cycle, e.g., Spring equinox, harvest time, etc. Today, home gardeners still labor away to make a utopian environment for their, say, potatoes.
Scott's most illuminating insight, though, might be his realization that the earliest states were shockingly fragile. As it turns out, the norm is that states collapse. In chapter 6, Scott details the various factors that could lead to state collapse, including pandemics, ecocide (e.g., resource depletion, salinization of the soil, soil exhaustion, etc.), population leakage (as when subjects abandon, or escape, from the state), and, of course, war with another state. Given Scott's conclusion that states actually decreased the quality of life of humans, he moves to make the case for normalizing collapse. Scott wagers that it may be that state collapse actually provided a respite from authoritarian control, an improvement in health, a more localized cultural life, and no significant drop in population. Indeed, he provides some scattered evidence of this. He speculates that state collapse is seen as a calamity in large part because researchers aren’t able to study non-state peoples as well, since there are, of course, no great city walls, artifacts are de-centralized, and languages become more fluid—and hence more difficult to study—without a central authority enforcing the use of some particular language.

Given that states were fragile and that their subjects tended to want to escape, i.e., state leakage, it is no surprise that the earliest states and the first multi-ethnic empires relied heavily on state terrorism to force their subjects to fall in line. That is to say, states had to scare their subjects into submitting to the existing social order, paying their taxes, and not escaping. It's hard to find a better example of this than the Roman Empire. For instance, consider the Third Servile War, in which the famous gladiator/slave/general Spartacus led an army of over 100,000 slaves in rebellion. The details of the battles are not important for our purposes. What is important is how the rebellion ended. Once the slaves were defeated, the Roman elite had to ensure that something like that wouldn't happen again. So they utilized a form of punishment that was meant specifically for enemies of the state: crucifixion. The Roman state crucified 6,000 slaves and displayed their bodies along the Appian Way, the road that led to Rome. States, because of their fragility, have to police their subjects, and, when those subjects break the rules and threaten the social order, states have to punish them.
Predict and surveil
Policing, as we've seen, appeared to be a necessary constituent of the earliest states. But even some societies of the ancient past have seen something undesirable about policing, even if it is not quite what some people today find morally objectionable about it. For example, the Greeks knew that policing was necessary, but they didn't want to police themselves. So, since during the Persian Wars many Scythians (nomadic horse archers that often served as mercenaries) were captured and taken into Athens as slaves, the Greeks tended to use them as civil police who would aid in forced voting as well as make arrests. Apparently, Greeks found performing those tasks themselves distasteful, considering the work to be more suited to barbarians than to their refined selves (see Cunliffe 2019, chapter 2).

There are many normative questions surrounding the issue of policing that could be covered here. For example, in his call for police abolition, Vitale (2017) points out various morally objectionable aspects of policing. He starts with their very inception. He argues that the basic nature of the law and the police is to be a tool for managing inequality and maintaining the status quo. In other words, inequality and asymmetrical power relations (where some have most of the power and others have very little or none) are built into the fabric of our social order; and the role of police has been to maintain that asymmetry. He argues that no police reforms could ever address this reality—only abolition would.
Even if one does not buy into Vitale's abolitionist argument, there are other aspects of policing that Vitale discusses that are morally objectionable on either consequentialist or deontological grounds. In chapter 10, Vitale (2017) stresses that the role of the police has repeatedly been to suppress dissident opinions, opinions that citizens of a free country have a right to express. Police, Vitale reminds us, began as a method of quelling labor unrest. More recently, in the 20th century, police targeted anarchists and communists, anti-war activists, and civil rights leaders (such as Martin Luther King Jr. and Malcolm X), and, even more recently, anti-police-violence groups, environmentalists, Muslims, Occupy Wall Street protesters, animal rights activists (who violate ag-gag rules), and even anti-death-penalty activists. Both federal agencies and the local police departments that help them are, one could argue, violating the categorical imperative, since they appear to be enforcing the law not impartially but preferentially. Alternatively, one could argue that targeting protesters has the negative consequence of suppressing awareness of societal problems.
The topic of police abolition can't be done justice here. To fully flesh out this idea, it would be necessary to separate out the various tasks that police perform, discuss which ones are necessary and which ones are not, and figure out which institutions—whether they be already existing or new ones that would have to be created—could take up those tasks deemed necessary. All of this, of course, would require taking a careful look at reams of data from the social sciences. I hasten to add that some have done this, and they did not come to an agreement. Vitale, as we've seen, is for police abolition. Pinker (2013) is not (see especially chapter 2 of The better angels of our nature).
So here's a normative question we can discuss: What's a defensible hit rate? In other words, if we provisionally accept that policing is necessary, what is the appropriate ratio of convictions to the number of people the police actually stop (or surveil or arrest)? So, for example, if the police stop 100 people, but release 99 without charge and only arrest 1, that would be a 1% hit rate. Is this justifiable? How would you like to be one of the 99 who were stopped and questioned even though it was shown that you hadn't done anything wrong (or at least it couldn't be proved that you had)?2
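To make the arithmetic explicit, here is a minimal sketch in Python. The numbers are invented for illustration; only the 100-stops-1-arrest scenario comes from the paragraph above.

```python
# Purely illustrative numbers; not real policing data.

def hit_rate(stops: int, arrests: int) -> float:
    """Fraction of police stops that end in an arrest (or conviction)."""
    return arrests / stops if stops else 0.0

print(f"{hit_rate(100, 1):.1%}")   # the scenario in the text: 1.0%
print(f"{hit_rate(100, 25):.1%}")  # a hypothetical, more targeted policy: 25.0%
```

The code only makes the ratio explicit; the normative question is where on this continuum a free society ought to sit.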

As we've seen, being impartial in enforcing the law is an important moral requirement. One recent trend towards making policing more objective is referred to as predictive policing, a concept pioneered by William Bratton right here in Los Angeles at the LAPD (Brayne 2020: 22). What is predictive policing? It is the use of machine learning and other computational methods for crime pattern recognition via the analysis of data from different databases (both public and private) available to law enforcement so as to guide the deployment of police officers. In short, it is automating certain aspects of law enforcement through Big Data. The idea is that, since the algorithms used are "just math", this approach to policing will be more objective than previous efforts.
In her recent Predict and Surveil, sociologist Sarah Brayne takes issue with the claims that predictive policing is actually more objective. Instead, she argues, we must recognize that data collection is itself sociologically influenced, and so the data collected might actually be biased, although this bias is techwashed away by talk of computational techniques and algorithms. For example, one method of data collection is through FI cards (Field Interview cards). These are generated through a point system in which chronic offenders are assigned a point value and ranked: five points for a violent crime, five points for a known gang affiliation, five points for prior arrests with a handgun, five points for being on parole, one point for every police contact (e.g., police stop), etc. This data is then input into one of the many databases that law enforcement agencies use when generating their predictions. But there is a clear problem here:
“On balance, the points system can quickly turn into a feedback loop and produce a ratchet effect wherein individuals with higher point values are more likely to be stopped, thus increasing their point value, justifying their increased surveillance, and making it more likely that they will be stopped again in the future... Moreover, the point system is not used in lower-crime areas of the city [Los Angeles]. Therefore, individuals living in low-income, minority areas like South LA have a higher probability of their 'risk' being quantified than those in more advantaged neighborhoods where the police are not conducting point-driven surveillance” (Brayne 2020: 69-70).
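To see how a point system like this can feed back on itself, here is a minimal, purely illustrative sketch. The point values (five for a violent crime, gang affiliation, a handgun arrest, or parole; one per police contact) come from the passage above, but the rule that higher point totals mean more stops is my own hypothetical simplification of the ratchet effect Brayne describes, not any agency's actual algorithm.

```python
# Hypothetical sketch of the "ratchet effect"; not any agency's actual system.

POINTS = {"violent_crime": 5, "gang_affiliation": 5,
          "handgun_arrest": 5, "parole": 5, "police_contact": 1}

def score(history: dict) -> int:
    """Total point value of a person's recorded history, per the values above."""
    return sum(POINTS[key] * count for key, count in history.items())

def stop_probability(points: int) -> float:
    """Invented rule for illustration: more points -> more likely to be stopped."""
    return min(0.05 + 0.02 * points, 0.90)

person = {"police_contact": 1}  # one prior stop, no criminal history at all
for year in range(5):
    expected_stops = round(52 * stop_probability(score(person)))
    person["police_contact"] += expected_stops  # each stop adds a point...
    # ...which raises the stop probability for the following year.
    print(year, score(person), round(stop_probability(score(person)), 2))
```

The person in this sketch commits no crime at any point; the score climbs solely because past stops make future stops more likely, which is exactly the confirmation-bias worry raised in the quotation.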
Overall, Brayne concludes that data-driven policing, at least as it is currently being practiced, is exacerbating inequality in four different ways. The first is the ratchet effect already discussed, which appears to be an algorithmic form of confirmation bias (ibid., 109). The second is that data-driven policing has led to the incorporation of non-criminals into police databases.
“Even if you do not consent to providing your information, by virtue of your network tie, you might be included in data systems. To be gathered up in what I call the 'secondary surveillance network', individuals do not need to have any police contact or have engaged in criminal activity; they simply need a data link to the central person of interest... [and] minority individuals and individuals in poor neighborhoods have a higher probability of being in this secondary surveillance net than those in higher-income neighborhoods” (Brayne 2020: 111).
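A toy sketch may help illustrate the "secondary surveillance network" idea: anyone with a data link to a person of interest is swept in, whether or not they have had police contact. The names, links, and one-hop inclusion rule below are all invented for illustration; Brayne's account of the real data systems is far more complex.

```python
# Invented illustration of a "secondary surveillance network"; not a real database.

links = {  # hypothetical data ties: shared address, phone contact, social tie, etc.
    "person_of_interest": ["roommate", "cousin", "coworker"],
    "roommate": ["roommate_partner"],
    "cousin": [],
    "coworker": ["coworker_friend"],
}

def secondary_network(target: str, hops: int = 1) -> set:
    """Everyone within `hops` data links of the target, regardless of criminal history."""
    included, frontier = {target}, {target}
    for _ in range(hops):
        frontier = {n for person in frontier for n in links.get(person, [])} - included
        included |= frontier
    return included

print(sorted(secondary_network("person_of_interest", hops=1)))
# ['cousin', 'coworker', 'person_of_interest', 'roommate']
print(len(secondary_network("person_of_interest", hops=2)))  # 6 -- the net widens per hop
```

None of the added names needed any police contact to end up in this hypothetical database; a data tie alone was enough, which is the worry the quotation raises.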
The third way that data-driven policing exacerbates inequality is through the phenomenon of false discovery in policing (i.e., false positives) and wrongful conviction. As Brayne reports, "Black people are seven times more likely than white people to be wrongly convicted of murder" (ibid., 113). Lastly, the fourth way data might exacerbate inequality is that affected communities might be conditioned to avoid leaving digital traces. This means that they will avoid surveilling institutions like hospitals, banks, schools, and unions, hence suffering poorer health, less financial security, and a lack of upward mobility (ibid., 114-15).
From a deontological perspective, it is easy to see that this bias is morally objectionable. The law is not being meted out impartially.
“Researchers have gained access and analyzed the magnitude of racial disparities in hit rates under stop-and-frisk [in NYC], finding that 80-90 percent of individuals were released without charge and hit rates were higher for whites than Blacks. Specifically, this confirmed that, controlling for precinct variables and race-specific baseline crime rates, Blacks and Hispanics were stopped more frequently than whites, even though whites who were stopped were more likely to be carrying weapons or contraband than were Blacks.” (Brayne 2020: 104).3
A consequentialist would also object to these practices, since there are measurable negative consequences to the aforementioned policies. Consequentialism, however, is always a little more complicated. In essence, if it could be shown that predictive policing could be improved to something like what we see in the film Minority Report, it is unclear whether a consequentialist would object to the practice. This brings us to punishment...
Storytime!
"Blot him out"
Arguments against the death penalty
The LWOP alternative
It appears that roughly one in eleven of those currently on death row had a prior conviction for criminal homicide, per the US Bureau of Justice. Given this, it could be argued that had this fraction of current death row inmates been executed the first time around, several hundred innocents would never have been killed. In other words, had the state executed these offenders after their first conviction, they would never have gotten out and committed murder again, i.e., recidivated. But there is no way to tell in advance which prisoners convicted of criminal homicide will recidivate. It would be unreasonable, one could argue, to kill all of them knowing that only some will recidivate. So, the most rational approach is life imprisonment without parole (LWOP).
Too expensive
Research by several different investigators provides compelling evidence that the modern death penalty system in the USA is far more expensive than an alternative system of LWOP. This is due to the cost of trials, appeals, etc. Moreover, LWOP would mitigate recidivism as much as execution. So, capital punishment should be supplanted by LWOP (Bedau 2010: 727).
The risk of executing the innocent argument
During the 20th century in the USA, it is estimated that about 0.3 percent of executions were of innocent people. Although this is a small percentage, each such case is a tragedy for the legal system. Moreover, LWOP could eliminate recidivism just as well as execution, while providing those who are wrongfully convicted with an opportunity to appeal (rather than running the risk of being executed). So, as a protection for the innocent, capital punishment should be abolished.
Bedau's "best argument"
1. Governments ought to use the least restrictive means—that is, the least severe, intrusive, violent methods of interference with personal liberty, privacy, and autonomy—sufficient to achieve compelling state interests.
2. Reducing the volume and rate of criminal violence—especially murder—is a compelling state interest.
3. The threat of severe punishment is a necessary means to that end.
4. Long-term imprisonment is less severe and restrictive than the death penalty.
5. Long-term imprisonment is sufficient to accomplish (2).
6. Therefore, the death penalty—more restrictive, invasive, and severe than imprisonment—is unnecessary; it violates premise (1).
7. Therefore, the death penalty ought to be abolished (Bedau 2010: 728-29).
Food for thought...
On what our descendants may think of us
There are various aspects of past societies that we look back on with horror. What comes to mind most easily for me is the Roman games, where humans and animals were slaughtered for the enjoyment of the crowds. But we need not go so far into the past. Even child labor conditions little over a century ago were horrifying. We like to think that the ban on child labor, to say nothing of the end of Roman-style games, is a form of moral progress. But this leads to another question: What present-day practices will our descendants be horrified by? Some believe that the way we treat our criminals will be one thing our descendants look back upon with shame. Stanford neuroendocrinologist Robert Sapolsky feels this way:
“People in the future will look back at us as we do at purveyors of leeches and bloodletting and trepanation, as we look back at the fifteenth-century experts who spent their days condemning witches… those people in the future will consider us and think, ‘My God, the things they didn’t know then. The harm that they did’” (Sapolsky 2018: 608).
Sapolsky, in fact, goes further. He does not only believe that we should reform the criminal justice system to abolish capital punishment; he believes that some things currently considered crimes need to be reconsidered. This is not to say that the behaviors he discusses aren't dangerous or that humans who commit them shouldn't be sequestered. However, he does say that perhaps we should have some sympathy for certain people—people who can't help but have horrible impulses.
“Writing under the provocative heading ‘Do pedophiles deserve sympathy?’ James Cantor of the University of Toronto reviewed the neurobiology of pedophilia. For example, it runs in families in ways suggesting genes play a role. Pedophiles have atypically high rates of brain injuries during childhood. There’s evidence of endocrine abnormalities during fetal life. Does this raise the possibility that a neurobiological die is cast, that some people are destined to be this way? Precisely. Cantor concludes, ‘One cannot choose to not be a pedophile’” (Sapolsky 2018: 597).
I don't want to weigh in on this topic at this point. I present it more so to show you that intellectuals today are thinking critically about many aspects of society that we take for granted—that we assume will just continue to be the way they've always been. But some are agitating for change—not just for our imprisoned population, but for all those who are treated in inhumane ways. For example, animals...
An important normative consideration on the topic of policing is finding an acceptable hit rate, the rate at which a police-stop renders an arrest. Ideally, the hit rate should be high, so that you are only stopped when there is a high likelihood that you have actually committed or are involved in a crime. Low hit rates are associated with police states, where everyone (or nearly everyone) is surveilled regularly. Sociologist Sarah Brayne, however, reports that current policing methods are disproportionately targeting minorities. This is even the case with predictive policing, which she argues is just an algorithmic form of confirmation bias, at least in the way that it is currently being practiced. This leads to both deontological and consequentialist worries that policing is currently being done in a morally problematic way.
With regards to capital punishment, there are various arguments for the practice, including van den Haag's argument about instilling fear in criminals, Berns's argument that capital punishment alleviates the anger that arises from criminal activity and restores dignity to the law, Mill and Bentham's incapacitation and deterrence arguments (where it is assumed that capital punishment will reduce crime, i.e., a positive consequence), and Kant's retributivist argument (where it is assumed that criminal activity deserves punishment and it is right to provide this punishment, preferably in a way that "fits" the crime).
There are various arguments against capital punishment, including the claim that life imprisonment without parole is more rational, either because it is less expensive or because we don't run the risk of executing the innocent. There is also Bedau's argument that the state should be as unintrusive as possible when pursuing state goals (like reducing crimes); Bedau argues that life imprisonment without parole is optimal for this end.
FYI
Suggested Reading: Anton Chekhov, The Bet
Supplementary Material—
Reading: John Stuart Mill, Use of the Death Penalty
Related Material—
Audio: NPR, All Things Considered, Botched Lethal Injection Executions Reignite Death Penalty Debate
Short Story: Franz Kafka, In the Penal Colony
Podcast: Dan Carlin’s Hardcore History, Painfotainment
Footnotes
1. Scythians were also objects of derision. For example, being called a descendant of a Scythian was a political insult (see Cunliffe 2019, chapter 2).
2. It's important to note that the lower the hit rate, the closer a society is to something like a police state. In other words, if police work consists largely of stopping everyone and seeing who's guilty of something (a low hit rate), rather than questioning only likely suspects (a high hit rate), then one could argue that low hit rate societies are less free than high hit rate societies, since in low hit rate societies surveillance is the norm.
3. Researchers have not been able to get access to data necessary to conduct a similar analysis in Los Angeles (Brayne 2020: 104).
The Jungle
The fear of you and the dread of you shall be upon every beast of the earth and upon every bird of the heavens, upon everything that creeps on the ground and all the fish of the sea. Into your hand they are delivered. Every moving thing that lives shall be food for you. And as I gave you the green plants, I give you everything.
~Genesis 9: 2-3
They had done nothing to deserve it; and it was adding insult to injury, as the thing was done here, swinging them up in this cold-blooded, impersonal way, without a pretense of apology, without the homage of a tear. Now and then a visitor wept, to be sure; but this slaughtering machine ran on, visitors or no visitors. It was like some horrible crime committed in a dungeon, all unseen and unheeded, buried out of sight and of memory.
~Upton Sinclair
The traditional perspective
At least some parents—parents that I personally know—have had to deal with their child's meltdown when the latter finds out that meat comes from animals. I'm not sure how frequent this scenario is, although more than a handful of students have reported to me that they themselves had such a breakdown. Typically, children resume their consumption of meat after an appropriate period of adjustment. We can gather as much since less than 10% of Americans are either vegan or vegetarian. Nonetheless, we can pause here and reflect on that meltdown, and consider the chaotic mix of emotions and attempted justifications that were surely expressed. Today's topic is the human use of non-human animals.
There are many ways in which humans make use of animals. In this lesson, however, we will focus on two: animal experimentation and animal consumption. Let's begin, though, by distinguishing between two very different approaches to animal ethics. There are those who believe that animals don't have moral rights at all; let's call them rights deniers. On the other hand, there are those who claim that animals are a part of the moral community; let's call them rights affirmers. Throughout history, going back into antiquity, the norm has been to be a rights denier (as can be seen in the first epigraph above). It is only recently that animal rights affirmers have begun to become more numerous. An early example of this can be found in the work of Upton Sinclair, who published The Jungle in serial form throughout 1905 (see the second epigraph above).1

René Descartes (1596-1650).
You can find rights deniers at the very beginning of the Western Tradition. For example, Aristotle in his Politics (Book I, Part VIII) wrote that plants are created for the sake of animals and that animals are for the sake of humans, both for food and for clothing. This perspective on animals was remarkably persistent. More than a thousand years later, in his Summa Theologiae (Part 2, Question 64, Article 1), Saint Thomas Aquinas wrote that animals were intended purely for human use. Hence, he argued, it is not wrong for humans to make use of them, whether by killing them for food or in any other way whatsoever. And then, a few centuries after Aquinas, René Descartes argued that animals were just automata (machines designed for some function), and so they could not feel pain (see Harrison 1992 for analysis).
Moreover, it was not just the philosophers who were rights deniers. It appears that religious institutions were also denying non-human animals entry into the moral community. Frey (2010: 168-9) reports that the traditional justification for the human use of non-human animals was rooted in the Judaic/Christian ethic itself. On this topic, Harari (2017) reminds us that religious institutions as we know them today do not at all resemble these same religious organizations at their inception.
“How did farmers justify their behaviour? Whereas hunter-gatherers were seldom aware of the damage they inflicted on the ecosystem, farmers knew perfectly well what they were doing. They knew they were exploiting domesticated animals and subjecting them to human desires and whims. They justified their actions in the name of the new theist religions, which mushroomed and spread in the wake of the agricultural revolution. Theist religions maintained that the universe is ruled by a group of great gods—or perhaps by a single, capital ‘G’ god (‘Theos’ in Greek). We don’t normally associate this idea with agriculture, but at least in their beginnings theist religions were an agricultural enterprise. The theology, mythology and liturgy of religions, such as Judaism, Hinduism and Christianity, revolved at first around the relationship between humans, domesticated plants and farm animals. Biblical Judaism, for instance, catered to peasants and shepherds. Most of its commandments dealt with farming and village life, and its major holidays were harvest festivals. People today imagine the ancient temple in Jerusalem as a kind of big synagogue where priests clad in snow white robes welcomed devout pilgrims, melodious choirs sang psalms and incense perfumed the air. In reality, it looked much more like a cross between a slaughterhouse and a barbecue joint [as opposed to a modern synagogue]. The pilgrims did not come empty-handed. They brought with them a never-ending stream of sheep, goats, chickens and other animals, which were sacrificed at the god’s altar and then cooked and eaten” (Harari 2017: 90-1; see also Numbers 28, Deuteronomy 12, and I Samuel 2).
The 20th century, though, saw a proliferation of ethicists advocating for animal rights (Rossi & Garner 2014: 480). This is not to say, however, that all ethicists agree that animals can't be used at all. The issue is, in fact, complicated. No other issue shows how complicated it really is, I think, like the topic of animal experimentation.
Human benefit

Stephen Hawking, likely the most famous patient with ALS.
It is unquestionable that humans gain from experimentation on animals. For example, through genetic engineering, scientists can insert certain genes into a non-human animal's DNA, thereby creating a "new" animal that can contract the same illnesses that humans contract. Once this is done, the goal is to study the progress and pathology of these illnesses in the "new" creatures and learn something about how to treat them in humans. So, by inserting a human gene into, say, a mouse, scientists can get a deeper understanding of dreadful diseases such as amyotrophic lateral sclerosis (ALS), Huntington's disease, and Tay-Sachs disease. If you're lucky enough not to know what these diseases are like, you at least likely know about Stephen Hawking, perhaps the most famous patient with ALS. All in all, it is undeniable that these animal models are invaluable to research into disease, the mind, and much more (Kandel 2018).
It doesn't end there. There is also the interesting topic of xenografts, or cross-species transplantations. Scientists could genetically engineer a type of, say, baboon or pig, with human organs bred into them for the express purpose of transplanting these organs to human patients who need them to survive (Frey 2010: 164-65).
"The very similarities of primates to ourselves genetically, physiologically, pathologically, metabolically, neurologically, and so on make them the model of choice for research into numerous human conditions. With stem cells from rhesus monkeys and marmosets isolated in 1994, we took a step towards the possibility, through inserting into the monkey the gene correlated with a certain disorder, of genetically engineering monkeys that had amyotrophic lateral sclerosis, cystic fibrosis, and so on. It is precisely because of their similarities to ourselves in the ways indicated that the ability to produce primates that have human illnesses stirs the medical community" (Frey 2010: 165).
Of course, there is a purely non-moral drawback to using animals for research in pathology: they're not human. In other words, they're still not quite identical to humans, so any lessons learned from their pathologies won't always directly transfer to humans. But this puts us in a predicament. If experimentation on animals can yield some positive results, albeit results imperfectly transferrable to humans, wouldn't experimenting on humans yield even more positive results? In other words, if positive results are what we're after, then why draw the line at animals? Why not experiment on humans too? This, at least according to Frey (2010), is the central problem to be addressed in the debate on animal experimentation: the appeal to human benefit cannot stand alone as a justification of animal experimentation, since it would also justify human experimentation.

A roborat2.
I should say two things at this point. First, if the proposal to experiment on humans sounds ridiculous to you and it makes no sense to even entertain the idea, I sympathize with you. That was my exact reaction when first reading Frey. But(!), he's right in that if one is thinking from a purely consequentialist perspective, obviously experimenting on humans could potentially lead to some medical breakthroughs that would enhance our life expectancy and quality of life. In other words, if one is a. not a rights denier (and hence a rights affirmer), but b. wants to use consequentialist moral reasoning to justify experimenting on animals, then c. one needs a good justification for drawing a solid line between non-human animals and humans, since d. experimenting on humans really would be even more beneficial for science—or so it seems. Second, drawing this solid line between non-human animals and humans is harder than you think.
Suppose, for example, that one wants to draw a line between humans and animals as determined by a certain set of cognitive abilities. For instance, Kant famously awards moral status only to Rational Beings. Here's the problem with this tactic: there are a number of humans who fall outside of this protected class. In other words, there are some humans with cognitive deficiencies who cannot truly be said to be Rational Beings. So, if we were to use Kant's criterion for personhood, it is not clear that we could rule out experimentation on certain humans.
Say, instead, that we can draw the line between humans and animals by making the case that humans have a greater potential for richer and higher quality life experiences than the life experiences of, say, a mouse (ibid., 178-79). Again, the response here is that humans vary greatly in their potential for meaningful, rich life experiences. If the potential for rich life experiences of some humans were to dip below the level of, say, a mouse, then wouldn't this make it permissible to experiment on them? The demarcation problem is a terrible problem indeed (ibid., 182-84).
“In short, the search for some characteristic or set of characteristics, including cognitive ones, by which to separate us from animals runs headlong into the problem of marginal humans (or, in my less harsh expression, unfortunate humans). Do we use humans who fall outside the protected class, side effects apart, to achieve the benefits that animal research confers? Or do we protect these humans on some other ground, a ground that includes all humans, whatever their quality and condition of life, but no animals, whatever their quality and condition of life, a ground, moreover, that is reasonable for us to suppose can anchor a moral difference in how these different creatures are to be treated? But then what on earth is this other ground?” (Frey 2010: 172-73).
Eating animals
Regan (1983/1986)
Animal agriculture and climate change
Perhaps we can obviate the need for wandering into the thorny territory of defending some theory of moral rights by instead discussing the negative consequences of animal agriculture. For example, we know that raising livestock is fueling global climate change. In fact, by some estimates, animal agriculture accounts for 9% of total carbon dioxide emissions, 40% of total methane emissions, and 65% of total nitrous oxide emissions. So, if global climate change continues unabated, the ongoing droughts in parts of the world will make raising livestock unsustainable. Using this information, one can make a Kantian-type argument. Essentially, our current practices are unsustainable. If everyone consumed and emitted greenhouse gases the way the West currently does, the practice would soon have to be discontinued; the maxim cannot be universalized without undermining itself. In other words, this seems to violate the categorical imperative, and a Rational Being would not engage in this type of behavior.3
Relatedly, by opening up the discussion to include the effect of animal agriculture on the environment, we find ourselves with a new problem. A new line has to be drawn—one between our consumption practices that are acceptable and those that are not. According to the EPA, about 75% of our greenhouse gas (GHG) emissions come from our transportation sector, our production of electricity, and our industrial sector. A plurality of it is actually from our transportation sector. As such, it appears that our consumerist lifestyle contributes to GHG emissions, since the products we buy have to be manufactured (using natural resources and electricity) and then delivered to us or our local retail establishment (transportation). Thus, if the rationale for not eating meat is the negative environmental factor of animal agriculture, then it would be inconsistent to not also put an end to unnecessary consumerist lifestyle choices.4

Marion Hourdequin.
So now we have to draw a line, and this line is as hard to draw as the line between non-human animals and humans on the question of experimentation. Do we have to do everything in our power as individuals to not contribute to GHG emissions? Or is there an acceptable amount of emissions that we can contribute? Which behaviors that emit GHG are acceptable? Baylor Johnson (2003), for his part, argues that under current circumstances, individuals do not have obligations to reduce their personal contributions to GHG emissions, only to fight for policy that mandates collective action. This is because only the coercive apparatus of the state is powerful enough to actually bring about effective change. Individual consumer choices simply do not have the desired effect; becoming vegan is, on Johnson's view, completely ineffectual.
On the other hand, Hourdequin argues that “we have moral obligations to work toward collective agreements that will slow global climate change and mitigate its impacts, [but] it is also true that individuals have obligations to reduce their personal contributions to the problem” (2010:444). In other words, you have to both make individual changes as well as push for the state to use its coercive powers to compel the population to change. How does she argue for this? Interestingly, at least to me, she argues from the perspective of Confucian philosophy:
“Confucian philosophy does not understand the individual as an isolated, rational actor. Instead, the Confucian self is defined relationally. Persons are constituted by and through their relations with others… The Confucian model is, further, one in which individuals look to one another as examples, learning from one another what constitutes virtuous behaviour. Confucius believes that moral models have magnetic power, and virtuous individuals can effect moral reform through their actions by inspiring others to change themselves” (Hourdequin 2010: 452-3).
Clearly, only a multinational effort to curb climate change will work, so we need states, along with their coercive powers, to launch campaigns for green energy. However, Hourdequin makes a good point: we also need to start at the level of the community. By making more virtuous choices, we can influence our friends and neighbors. It does seem like we should start at home. But how far do we have to go?
Sidebar
Animal agriculture and the effect on workers

Although we cannot really get into this issue here, it appears that animal agriculture has negative effects on its workers. The little that I know about this is pretty horrifying. For example, denied bathroom breaks, some US poultry workers wear diapers on the job. It's also the case that, according to some, factory farming often has very questionable working conditions, which some argue lead to a higher incidence of crime, although this is disputed. Here's what we can say, though. Regardless of the precise details, it's not terribly controversial to say that farm workers are generally exploited (Linder 1992, ch. 1). So, even if you're a vegan, and hence are not directly exploiting animals for sustenance, unless you grow your own food, you are still relying on the exploited labor of farm workers. It appears that it's almost impossible to eat without exploitation entering the picture at some point. Surprisingly, or perhaps unsurprisingly, even fine dining is suspect. In Kitchen Confidential, Anthony Bourdain (2000: 55-63) reminds his readers that an overwhelming majority of line cooks at fine dining restaurants are non-European immigrants, whether you're eating French, Italian, or Japanese cuisine—immigrants who work under the table and don't typically get very many benefits. This begins to suggest that questions regarding the ethics of animal consumption extend all the way "up the food chain" to fine dining restaurants and even grocery stores. Is it morally permissible to purchase products at stores that make any kind of profit from animal agriculture? What about restaurants that serve animal products? Aren't you still contributing to their profit even when you take the vegan option? There's also the question of whether or not it is ethical to dine at any restaurant that has unfair practices with regard to its staff. After all, if you're taking great pains to make sure you don't participate in any non-human animal suffering, doesn't it seem like you should do at least as much for your fellow humans?
Go big or go home...
Why are things looking so dire in the animal agriculture sector? Part of it, unsurprisingly, is money...
“From an economic standpoint, [industrial farm animal production] is characterized by farms that are corporate-owned and/or corporate-controlled, instead of farms that are both owned and managed by individuals or families. In a process known as ‘vertical integration,’ distinct phases of the agricultural supply chain, such as crop growing, feed formulation, animal breeding, raising animals, slaughtering animals, and food processing and distribution, are increasingly controlled by large corporate integrators. In addition to vertical integration, there has been significant economic consolidation within the agricultural sector, meaning that fewer companies control ever more of the market share. Across all of agriculture, the largest 10% of U.S. farms now account for more than two-thirds of the total value of production... Four companies control 80% of the meatpacking industry. Perhaps most significantly, while large corporate integrators own only a small percentage of farms, many farmers are now contract growers, meaning that while they may own their land and buildings, they sign a contract with a large integrator (e.g., Tyson or Smithfield) to raise animals that are owned by the integrator. The integrator controls all aspects of how the animals are bred and raised, and sets the price that the grower will receive. Many contract growers report no open-market alternative to their contract. In addition, increases in scale and mechanization have resulted in significantly fewer farmworkers as compared to pre-industrial agriculture. In 1870, approximately 50% of the U.S. population lived and worked on farms; today, that number is less than 2%. Farm laborers are increasingly unskilled, low-wage earners, and many live below the poverty line” (Rossi & Garner 2014: 484).
Including animals as part of the moral community is a relatively recent development. Both divine command theorists, like Thomas Aquinas, and other thinkers, like Aristotle and Descartes, tended to believe that animals are purely for use by humans.
On the issue of animal experimentation, the central problem (per Frey 2010) is to find a justified way to demarcate why it is permissible to experiment on animals but not on humans—a distinction that appears to be difficult to make.
The topic of animal agriculture is just as problematic. Some thinkers believe that no type of animal consumption is permissible. For example, Regan grants animals rights based on a cognitive capacity that they have, namely being the subject-of-a-life. This can be construed as a neo-Kantian perspective. But his arguments are not without detractors. Warren responds to Regan with a version of utilitarianism.
Also discussed is the effect of animal agriculture on the climate, a topic that took us into a conversation about our duties towards the environment. Johnson argues that we need not concern ourselves with individual actions to curb climate change, instead only fighting for a policy of collective action, via a type of social contract theory. Hourdequin responds to Johnson with a Confucian-inspired virtue ethics, arguing that both individual and collective actions should be taken to reduce our GHG emissions.
FYI
Suggested Reading: Tom Regan, The Case for Animal Rights
Supplementary Material—
Video: CrashCourse, Personhood
Video: Great Ideas of Science and Philosophy, Interview with Peter Singer
Video: Practical Ethics Channel, Peter Singer tackles the best objections to vegetarianism
Advanced Material—
Reading: Jan Narveson, The Case Against Animal Rights
Reading: Marion Hourdequin, Climate, Collective Action and Individual Ethical Obligations
Related Material—
Video: Big Think, How Healthy Is Vegetarianism...Really?
Other Material—
Material on Animal Treatment
Reading: Nita Rao, In the Belly of the Beast
Material on Animal Agriculture and Climate Change
Reading: Maurice E. Pitesky, Kimberly R. Stackhouse, and Frank M. Mitloehner, Clearing the Air: Livestock’s Contribution to Climate Change
Material on the Dietary Impact of Climate Change
Reading: Damian Carrington, Eat insects and fake meat to cut impact of livestock on the planet – study
Material on the Exploitation of Agriculture Workers
Reading: Oxfam Report on denial of bathroom breaks in poultry industry
Reading: Thomas Dietz and Amy J. Fitzgerald, Slaughterhouses and Increased Crime Rates
Related Reading: Georgeanne M. Artz, Peter F. Orazem, and Daniel M. Otto, Measuring the Impact of Meat Packing and Processing Facilities in the Nonmetropolitan Midwest: A Difference-in-Differences Approach
Note: This is a response to Dietz and Fitzgerald.
Reading: Marc Linder, The Pillars of an Inexhaustible Supply of Cheap Labor
Footnotes
1. Interestingly enough, Sinclair's primary goal in The Jungle was to advance the cause of socialism by exposing the hardship of the workers in the meat industry. However, many of his readers focused instead on the industry's health violations and unsanitary practices.
2. I discuss roborats in Laplace's Demon, from my PHIL 101 course.
3. There may be some other food-related perils down the road. The price of meat will soar by 2050, due to population growth. Moreover, even vegetarians will be affected, since rising temperatures will also hurt production of staple crops like maize and wheat.
4. I might add that it is also the case that consumerist/materialistic values are associated with a decrease in prosocial behaviors, an increase in apathy towards environmental issues, and an increase in feelings of depression and lack of fulfilment (Kasser 2002; see also this video based on Kasser’s work).
The Game
Life is a game.
Money is how we keep score.
~Ted Turner
The Root of All Evil
Some of the most important normative questions that you will have to answer as an adult are questions involving money: how to acquire it and what to do with it. You may also have to consider questions about whether the economic system that we live under is just. In other words, you'd have to think about how far we can depart from capitalist forms of the free market, in the name of justice (e.g., reparations, redistribution of wealth, investment in the wellbeing of future generations, etc.), without losing the market's efficiency advantages (Wolff 2010). Moreover, if we want to intervene in market processes in the name of justice, there is still the matter of what metric we should use when attempting to address inequities. Should we base our interventions on, say, need? Or how about by making sure everyone has the same basic primary resources, like housing, food, and schooling, regardless of individual needs? To complicate things further, we need to ask ourselves where the boundary of our interventions should lie. Should we draw the line at our nation-state? If so, then we would be neglecting far needier peoples beyond the borders of our country. Do their needs have less moral urgency simply because they are not geographically near us or don't form part of the same political entity?
In any case, questions regarding the just or unjust nature of economic systems are best covered in a course on political philosophy. What we can cover in this lesson are issues surrounding both the acquisition of money and what to do with it after we've acquired it. There is, however, one issue that just barely crosses over into political philosophy. This has to do with the social organization of the workplace, a topic that brings us to the controversial philosophical position labeled Marxism.
Important Concepts
The capitalist class-process

An admittedly oversimplified account of Marx's capitalist class-process.
Again, Marxism really ought to be covered in a political philosophy class. However, one aspect of Marxism directly relates to both of the matters we are covering in this lesson: how we acquire money and what to do with it. So, let's begin with some context. For Marx, class doesn't necessarily have to do with how much money you earn; rather, your class has to do with your role in the production process. Those who work for private industry, like maybe you do, labor to produce profits for their company. This labor may be in the form of manufacturing some product, of selling some product, or perhaps it has to do with transporting some product so that it can be sold. Whatever the case may be, we know that, for workers, the value that they produce with their labor is more than what they get paid. In other words, if you work in the private sector, you only have a job because there is some value (that you create with your labor) that your boss doesn't pay you for. That's simply how it works. So, there's the value that you produced with your labor that you actually get paid for, and there's the value that you produced with your labor which your boss keeps. Marx has labels for these. According to Marx, necessary labor is performed during the portion of the day in which workers produce goods and services the value of which is equal to the wages they receive (i.e., the value that you produced with your labor that you actually get paid for). Then there's surplus labor: labor performed during the portion of the day in which workers continue to work over and beyond the paid portion of the day (i.e., the value that you produced with your labor which your boss keeps).
With this set of concepts in place, Marx defines what capitalism is. The capitalist class-process—that is to say, the class-process that is prevalent in the United States—is the legalized yet ‘criminal’ activity in which the products of the laborers’ creative efforts are appropriated (i.e., taken) by those who have nothing to do with their production and who return only a portion of those fruits to the workers (wages), keeping the remainder (the surplus) for themselves. Put another way, the capitalist class-process is the state of affairs in which employers regularly and reliably exploit their employees by paying them less than the value they produce. This conception of capitalism stands in stark contrast with the way most people use the word: many use the term capitalism as a synonym for "the economy" itself. But, with this definition, Marxists have fragmented the economy so that capitalism is a part of the economy and not the whole. For Marxists, the capitalist class-process is simply the relationship between employers and employees—one in which the employers routinely and legally appropriate the surplus labor of the employees. Marx's contention is that this is exploitation. So, the capitalist class-process must be abolished.
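To restate the necessary/surplus labor distinction arithmetically, here is a toy sketch in Python. The dollar figures are hypothetical placeholders, not anything drawn from Marx or from real wage data; the point is only to show how a day's labor splits into a paid and an unpaid portion.

```python
# Toy illustration of necessary vs. surplus labor (hypothetical figures).

value_produced_per_day = 400.0  # value added by one worker's labor in a day (hypothetical)
daily_wage = 150.0              # what the worker is actually paid (hypothetical)

surplus_value = value_produced_per_day - daily_wage      # kept by the employer
necessary_share = daily_wage / value_produced_per_day    # portion of the day worked "for the wage"
surplus_share = surplus_value / value_produced_per_day   # portion worked beyond the paid portion

print(f"surplus value appropriated: {surplus_value:.2f}")
print(f"necessary labor: {necessary_share:.0%} of the day")
print(f"surplus labor:   {surplus_share:.0%} of the day")
```

On these made-up numbers, the worker is paid for roughly 38% of the value produced; the remaining 62% is the surplus that, on Marx's account, the employer appropriates.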
Sidebar1
Storytime!
What did Marx want?
University of Massachusetts at Amherst, home of the Amherst School of Marxism.
 
When Marx makes the case against capitalism, what he really means is the capitalist class-process—the exploitative relationship between employers and employees. That, at least, is the way that the Amherst School of Marxism interprets Marx. This school, which pays special attention to volumes 2 and 3 of Marx’s Das Kapital, claims that the fundamental Marxist argument is that firms should be run by employee-owners (see Burczak, Garnett & McIntyre 2017). The way to do this is to convert society into one in which the workers are the owners of the enterprises in which they labor; in other words, build society so that the norm is worker-owned cooperatives.2
Worker cooperatives are a type of firm where the workers play two roles: 1. their normal labor function; but also 2. an administrative function which allows them to vote on what the firm makes, where the firm makes it, and what the firm does with the profits. In a word, this is democracy at work. These firms, in case you didn't know, already exist. In these enterprises, surplus labor is still created, but it is appropriated by those who created it: the workers themselves. There could still be pay differentials, with some getting paid more than others, but the key difference is that workers have a say in establishing those pay differentials—through their vote. The result would be that, instead of the authoritarianism that we submit to at our places of work today, we would achieve democracy at work.3
What this means for you
There are both acknowledged positive consequences and an intuitive fairness to worker-owned firms (see footnote 2). If this truly is a means of reducing exploitation in the workplace, then we can argue on both consequentialist and deontological grounds that, when purchasing goods and services, we should attempt to select only (or primarily) worker-owned firms. In other words, we would be voting against exploitation with our dollar—something which may be morally required of us. We should also, one could argue, vote in favor of any measures or political parties that might make worker-owned cooperatives more commonplace.4
Food for thought...
Singer (1971): Famine, Affluence, and Morality
Effective altruism
The Basics
Although Peter Singer has been immensely influential and is one of the most recognizable utilitarians in philosophy, new faces are popping up in the field of applied ethics. One of those new faces that's made quite a splash is William MacAskill, Associate Professor at Oxford University. MacAskill is one of the founders of the effective altruism movement.

The PlayPump.
In Doing Good Better, MacAskill (2016) gives a snapshot of what effective altruism is all about: trying to figure out, using scientific tools, how to help as many people as possible. In the introduction, MacAskill distinguishes between what we might term unreflective altruism and effective altruism by juxtaposing the Roundabout PlayPump fiasco with a deworming campaign in Kenya. The PlayPump was a failed method of getting clean water to rural parts of South Africa. The pumps were fun for investors but actually created more work for communities, since they were less efficient than the standard pumps they replaced. One statistic says it all: to provide enough water for the community, a Roundabout PlayPump would have to be pumped for 27 hours per day—which is, of course, impossible.
On the other hand, in Kenya, a group of researchers were trying to figure out how to improve schools. In doing so, they tried something that was, at least at the time, hardly utilized in the development sector: randomized controlled trials. Via their controlled trials, the researchers concluded that getting new textbooks to schools didn’t have an effect. Funding schools so that they could hire more teachers, thereby allowing for smaller class sizes, also didn't help. Neither did workbooks and a variety of other interventions. What did work? Deworming—providing students with an anthelmintic drug to rid them of parasites such as roundworms, flukeworms and tapeworms. Deworming allowed children to focus more in school, do better in their studies, and earn more after their schooling was complete. The project even paid for itself with increased tax revenue once the first cohort was of working age. In short, even though it isn’t a "sexy" cause, deworming is effective altruism.
So, in short, effective altruism uses aspects of the scientific method, such as randomized controlled trials, causal models, decision matrices, and other tools, to see which interventions and charities actually have measurably positive consequences. The key questions of effective altruism are the following:
- How many people benefit and by how much?
- Is this the most effective thing you can do?
- Is this area neglected?
- What would happen otherwise?
- What are the chances of success and how good would success be?
See MacAskill's Doing Good Better for more information.5
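As a rough illustration of the kind of comparison these tools support (footnote 5 below describes the QALY-per-dollar metric), here is a minimal sketch in Python. The intervention names and figures are hypothetical placeholders, not MacAskill's or GiveWell's actual estimates.

```python
# Minimal sketch: ranking interventions by QALYs produced per dollar donated.
# All names and numbers are hypothetical, for illustration only.

interventions = {
    "deworming_program": {"qalys": 500.0, "dollars": 25_000.0},
    "textbook_program": {"qalys": 40.0, "dollars": 25_000.0},
}

def qalys_per_dollar(qalys: float, dollars: float) -> float:
    """Cost-effectiveness: quality-adjusted life-years gained per dollar spent."""
    return qalys / dollars

ranked = sorted(interventions.items(),
                key=lambda item: qalys_per_dollar(**item[1]),
                reverse=True)

for name, figures in ranked:
    print(f"{name}: {qalys_per_dollar(**figures):.4f} QALYs per dollar")
```

The point of a ratio like this is simply that, for a fixed budget, the intervention with the higher QALYs-per-dollar figure does more good per donation.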
Where should you work?
You might now be thinking to yourself that charity is all well and good, but you're not there yet. You're not at the stage of your lifecycle where you have expendable income to give to charitable causes or international aid organizations. That's fine. MacAskill argues that effective altruism can still guide you in one of the most important decisions you'll ever make: how to make money.

So... where should you work? MacAskill (2016, ch. 9) discusses how one should select a career. He first makes the point that “following your heart” is terrible advice in basically any given domain. For starters, the fact that you’re passionate about something typically means that the activity is a worthwhile activity, which means that other people pursue it. This means, of course, that it will be difficult to find a job in this field given that there’s so much competition. This isn’t mere speculation. MacAskill cites a Canadian study that shows that undergraduates with interests in either sports, music, or art represented more than 80% of the student population. However, using census data, one could see that less than 3% of jobs are in sports, music, or art. To follow your heart, then, is a recipe for disappointment for most people.
Second, our interests and passions change over our lifecycle. MacAskill cites a study by a group of psychologists (including Daniel Gilbert, author of Stumbling on Happiness) that concludes that our passions change so much that we reliably overrate their importance. So, what you are passionate about today is likely not what you'll be passionate about twenty years from now.
Third, the best predictors of job satisfaction are actually features of the job itself, not how much your passions link up with the job. For example, what can be referred to as engaging work is one of the strongest predictors of job satisfaction. What is engaging work? Jobs that qualify as engaging work typically feature five traits. First off, it is work that features independence. That is to say, workers have a say in how they spend much of their day: the rational planning of the day's tasks. Engaging work also gives employees a sense of completion. They aren't merely parts of a big assembly line, such that they never get to see the end product of their labor. Engaging work allows workers to see products through to completion. Engaging work also involves variety; i.e., it isn't the same task over and over again. Another important feature of engaging work is regular job feedback. This makes intuitive sense: getting reliable feedback on how well you are doing your job meshes well with the other traits of engaging work, such as a sense of completion and independence. Lastly, engaging work gives workers a sense of contribution to society. Chances are that if you find a job in which you participate in engaging work, the passion will follow—rather than the other way around.
So, given that one aspect of engaging work is the sense that you are making a meaningful difference, let's assume that you want to make a career choice that will allow you to help as many people as possible. Some intuitively think that choosing to be a doctor would fulfill this desire. However, MacAskill discusses the work of physician/researcher Greg Lewis, work that might change your mind. Here's the key insight that Lewis had: becoming a doctor doesn't increase the number of doctors in a country. As all of those attempting to go to medical school know, spots in medical schools are very limited. That's why some start preparing for medical school applications in middle school. Let's just say you do make it to med school. By becoming a doctor, you don't increase the raw number of doctors in the industry, just who the doctors are—in particular, you made sure one of them was you. So, you're not actually increasing the number of doctors who are helping others. Put another way, you're not helping anyone who wouldn't have been helped anyway. What then should you do if you want to maximize your impact? One option is to earn-to-give. This is, in fact, what Greg Lewis decided to do. In other words, select a realistic career path for yourself where you can earn the highest income you can get. Then, donate as much as possible to charitable causes that have been rated as effective by a charity evaluator such as givewell.org. MacAskill goes even further, though. Given his analyses, he wagers that going into, say, law might not be the best option if what you want to do is help the greatest number of people. The jobs that do have the most impact might surprise you. He's set up an organization that helps people decide how they should spend the 80,000 hours of their life that they will spend working. It's called 80000hours.org (see also the talk by Benjamin Todd in the Related Material section below).
Reparations(?)
Reparations are common after wars. For example, it was French reparations awarded to Germany (after the former lost to the latter in the Franco-Prussian War) that funded the growth of Berlin, both industrially and intellectually. A few decades later, Germany was required to pay reparations to the Allied Powers after its defeats in World War I and World War II. Similarly, Japan was required to pay reparations to the Allies after World War II. These reparations are intended to make amends for unjust harms that were inflicted on innocents. If reparations between states are seen as a legitimate exchange, then it might be the case that states can offer reparations to harmed groups as well, especially if these states were the ones that inflicted the harm on those groups.
Perhaps the most obvious groups of people that might qualify for reparations in the United States are African Americans, especially the descendants of slaves, and Native Americans (Corlett 2018). However, in this section, I'd like to focus on more recent instances of the state disadvantaging a certain group of people for the sake of the white majority. For example, in The Color of Law, Richard Rothstein reviews how government housing policies in the 20th century explicitly discriminated against African Americans to the benefit of whites. Here's the context.
After the Bolshevik Revolution, in which Lenin and his group of revolutionaries took power in Russia, the Wilson administration thought it could stave off communism at home by getting as many white Americans as possible to become homeowners—one of the many ways in which the fear of communism shaped the United States during the 20th century. The idea was that if you own property, you will be invested in the capitalist system. And so, the “Own Your Own Home” campaign began.

The program, however, was largely ineffectual and the housing crisis only grew. In the 1930s, then, as part of his New Deal, Franklin D. Roosevelt subsidized various aspects of homeownership, such as insuring mortgage loans and construction of new housing, through the Federal Housing Administration (FHA). But since segregation was still on the books, these programs necessarily had a racial bias. Due to unfounded worries about African Americans' inability to pay loans and “community compatibility”, African Americans were denied housing in white areas and were denied loans, even when they met all the non-racial qualifications.
This continued after the conclusion of World War II. Several subsidized housing projects with racial restrictions arose as WWII veterans were returning to the mainland. California natives will recognize some of these communities: Lakewood, CA; Westchester, CA; and Westwood, CA.
It's not just the FHA that is culpable. During these time periods, local law enforcement enabled, or was at least complicit in, the terrorizing of black citizens by whites, which took the form of harassment, protests, lynchings, arson, and assault. To see just how egregious this negligence by law enforcement was, Rothstein reminds us that this was during a time period when federal and local agencies were expending considerable effort to surveil and arrest leftists and organized crime syndicates. In other words, they certainly had the capability to investigate and infiltrate dissident groups. So, it is telling that law enforcement did not extend this effort to curtailing attacks against African Americans; they mostly just let it happen.
It's even the case that the federal government was guilty of suppressing wages for African American workers. When minimum wage requirements were rolled out, they were not uniform across all industries. Namely, no minimum wages were mandated in industries where African Americans predominated. It gets worse. During WWII, FDR mandated that American factories be converted into military factories, manufacturing war materiel. At first, these factories employed white men, due to segregation. After white male manpower had been exhausted, the factories began to let women into the workforce. It was only when white women were not numerous enough to fill the workforce that African American men were recruited. African American women were last. Until they were hired on at these factories, African Americans had to labor in low-wage industries. And so, in government-run factories with decent wages, there was a policy of delaying the employment of African Americans as long as possible.

The Battle of Chavez Ravine.
As I'm reminded every time I do my taxes, owning a home pays dividends in more ways than one. Homeowners may deduct both mortgage interest and property tax payments, as well as certain other expenses, from their federal income tax if they itemize their deductions. And so, by being denied housing, African Americans were denied one of the primary methods for acquiring wealth and passing it on to their descendants. In other words, whites were privileged specifically by disadvantaging African Americans. The limited amount of housing available to African Americans meant that they regularly paid above-market prices for their homes. The majority who couldn't pay these prices had no choice but to rent, but rents were also reliably hiked up for African Americans. Consequently, a greater percentage of African American paychecks went to housing than was the case for whites. Costs were further compounded by commuting, since African Americans couldn't get housing close to their work. As a result, African Americans, even when they were not outright blocked from buying a house, could not save enough for a down payment. Reparations, then, seem in order. Oliver and Shapiro summarize the case:
“Whites in general, but well-off whites in particular, were able to amass assets and use their secure economic status to pass their wealth from generation to generation. What is often not acknowledged is that the accumulation of wealth for some whites is intimately tied to the poverty of wealth for most blacks. Just as blacks have had ‘cumulative disadvantages,’ whites had had ‘cumulative advantages.’ Practically, every circumstance of bias and discrimination against blacks has produced a circumstance and opportunity of positive gain for whites. When black workers were paid less than white workers, white workers gained a benefit; when black businesses were confined to the segregated black market, white businesses received the benefit of diminished competition; when FHA policies denied loans to blacks, whites were the beneficiaries of the spectacular growth of good housing and housing equity in the suburbs. The cumulative effect of such a process has been to sediment blacks at the bottom of the social hierarchy and to artificially raise the relative position of some whites in society” (Oliver and Shapiro 2006: 51).
Another set of events from recent history might qualify a group of people, primarily Mexican Americans, for reparations. This has to do with Chavez Ravine, an area primarily occupied by Mexican homeowners in the first half of the 20th century and which now houses Dodger Stadium.
For want and need of money...
Money links to the topic of the next lesson—drugs—in many ways. For starters, it was pharmaceutical companies driven by the profit motive that played a role in the opiate epidemic. Relatedly, as Quinones (2015) reports, some med students went into pain management precisely because they knew that's where the money was, and some even openly said that they “want a Bentley.” After people became addicted to prescription opiates, many would have their prescriptions cut off. This is when the Xalisco Boys, a decentralized network of black tar heroin dealers, enter the picture and provide a new source of opiates to addicts. What motivated the Xalisco Boys? They wanted to show off their money at the annual Feria del Elote in their hometown of Xalisco. Sometimes it really does seem like money is the root of all evil...
Utilizing Marxism of the kind advocated by the Amherst School, we saw an argument for conceiving of the capitalist class-process as a form of legalized exploitation. The solution advocated is to transition to a society of worker-owned enterprises. On both consequentialist and deontological grounds, one could argue that, when purchasing goods and services, we should attempt to select only (or primarily from) worker-owned firms.
Peter Singer makes the case that the relative affluence that the citizens of first-world nations enjoy makes it a moral requirement that they donate to international charities up to the point of marginal utility, that is, the point where giving any more would cause harm to themselves or their dependents.
William MacAskill, one of the founders of the effective altruism movement, uses the scientific method to assess how effective charities and aid organizations are. His organizations advise people both on which charities are the most effective and on which forms of employment will allow one to make the most impact with their donations by earning-to-give.
From a deontological perspective, one can make the case that it is a perfect duty to repair past injustices by awarding reparations to certain groups in the United States.
FYI
Suggested Reading: Peter Singer, Famine, Affluence, and Morality
Supplementary Material—
- Video: Richard Wolff, Understanding Marxism
- Video: Crash Course, Poverty & Our Response to It
- Video: TEDTalks, Will MacAskill | What are the most important moral problems of our time?
- Audio: Fresh Air, Interview with Richard Rothstein on his book The Color of Law.
Related Material—
- Video: Thomas Sowell, The Problems of Marxism
- Video: Talks at Google, Peter Singer on Famine, Affluence, and Morality
- Video: Richard Wolff, Workers’ Self-Directed Enterprises
- Audio: Making Sense with Sam Harris, Doing Good: A Conversation with William MacAskill
- Video: TEDx Talks, Benjamin Todd | To find work you love, don't follow your passion
Advanced Material—
- Reading: David Ruccio, Strangers in a Strange Land
- Reading: Erik Olsen, Class-analytic Marxism and the Recovery of the Marxian Theory of Enterprise
Footnotes
1. The concept of Stalinism, Priestland admits, is not homogenous. Even Stalin had to become more pragmatic, like Lenin. Stalin, for example, ended the war against managers that began after the revolution. He also restored unequal pay, since he argued that Russia was not ready for full communism and was still in "developmental socialism." Priestland says these are features that exemplify mature Stalinism.
2. In her chapter in González-Ricoy & Gosseries (2016), Virginie Pérotin makes the case that, contrary to popular opinion, worker cooperatives are larger than conventional businesses, are not less capital intensive, survive at least as long as other businesses, have more stable employment, are more productive than conventional businesses (with staff working “better and smarter” and production organised more efficiently), retain a larger share of their profits than other business models, and exhibit much narrower pay differentials between executives and non-executives.
3. In a lesson from my 101 course I discuss primarily the risk of automation. However, in the Sidebar of this lesson, I discuss scientific management—a practice with clear authoritarian roots.
4. For reasons that are completely beyond me, Ronald Reagan once advocated for worker-owned enterprises.
5. One of the most interesting aspects of MacAskill's effective altruism is his use of mathematical techniques for calculating which actions are most likely to make the most positive impact. He uses various tools for this. For example, he measures the good that an aid organization produces through a metric called a quality-adjusted life-year (or QALY for short). This metric allows us to compute the impact of different aid organizations and compare them. It also allows us to measure the effectiveness of an aid organization by calculating a ratio of the QALYs they produce per dollar amount they are given. See the FYI section for interviews of William MacAskill.
Prying Open the Third Eye
It's not a War on Drugs. It's a War on Personal Freedom is what it is.
Okay?
Keep that in mind at all times.
~Bill Hicks
On why we are addicted
The war on drugs as we know it began in the early 1930s. It was then that Harry Anslinger, Commissioner of the Federal Bureau of Narcotics from 1930 to 1962, shifted the focus of his agents from alcohol to drugs, since the Prohibition era had ended with the passage of the 21st Amendment and, hence, his department was at risk of being defunded or closed. But Anslinger had a political strategy in mind. He would associate drugs with what the majority of Americans feared the most: minorities and communists. By associating cannabis with Hispanics and opium with the Chinese, by making false claims of increased defiance in African Americans who take drugs, by alleging that cannabis was key in how minorities seduced white women, and by recounting (false) anecdotes about how cannabis drives people mad, Anslinger acquired support for his crusade (see Hari 2015, chapters 1-3).

When one hears about how Anslinger acquired support for his war on drugs, using primarily anecdotal stories as opposed to actual social science, one wonders how it is that someone like him acquired so much power and held on to it for so long. Upon reflection, however, part of the reason might be due to the general public's understanding of what causes addiction in the first place. Much like Anslinger himself, most of the public probably thought that addiction to drugs is caused by a moral failure. In other words, people who are addicted to drugs lack the will to do what is right and refrain from evil substances. Perhaps they are lazy or perhaps they just don't care, but, ultimately, they choose to do drugs and continue to do drugs because they are morally lacking. In Kantian terms, they fail to treat themselves as ends-in-themselves and instead use themselves as a means to an end: they use their bodies to pursue pleasure. Perhaps we can also express the problem in Aristotelian terms by saying that drug addicts are lacking virtue and full of vice.1
Today we have a slightly more nuanced view of addiction, to say the least. For example, in chapter 12 of Chasing the Scream, Hari juxtaposes the pharmaceutical theory of addiction, which states that certain chemical compounds have chemical hooks that hijack the reward system of drug users, with Gabor Maté’s susceptibility theory of addiction, which states that certain individuals have been made vulnerable to addiction by facing more hardship than they can bear. Maté argues that prevention starts during the prenatal period, by carefully monitoring stressors on the fetus and mother and diminishing them as much as possible. There's also the work of Bruce Alexander, which Hari discusses in chapter 13. Alexander argues that some become addicted to drugs because they are both predisposed to it due to adversity in life (susceptibility)—essentially agreeing with Maté—and because they lack meaningful avenues to express themselves in society. Now, it is important to note that Alexander's initial study was rejected by two major academic journals, Nature and Science. However, subsequent studies showed that there is some support for this view (Solinas et al. 2008, Nader et al. 2015), although the theory might oversimplify addiction—in addition to environment, genes also play an important role (see Petrie 1996). In any case, Maté's and Alexander's claims seem to be at least partially right: an enriched social environment seems likely to be a part of the solution to addiction. For example, supporting Maté's views, we know that adversity early in life produces an adult organism more vulnerable to drug and alcohol addiction (Oswald et al. 2014, Hensleigh and Pritchard 2014, Karkhanis et al. 2014).

Perhaps the belief that drug addiction is a moral failure can be explained by a cognitive bias called the narrative fallacy. As it turns out, we prefer narratives to complex descriptions, base rates, and other statistical information. In other words, given some complicated phenomenon, our minds naturally prefer a nice, easy-to-digest little story over a complex explanation with lots of factors and complicated relationships. In short, we like stories over complicated theories, even when the stories are built off of shoddy information (see Kahneman 2011).
One example of this can be found in chapter 8 of Gilovich's How We Know What Ain't So. Gilovich makes the case that people believe in some non-Western, holistic medicine because they, at least sometimes, seem plausible—even though there's typically zero empirical evidence in their favor. For example, Gilovich discusses how one holistic practitioner, Dan Dale Alexander, argued the cure for arthritis is to basically oil your joints, by consuming lots of oils and not consuming water during meals high in oil (since they don’t mix). Another practitioner, D. C. Jarvis, suggested ingesting vinegar (a mild acid) to break up calcium deposits, since plumbers use an acid to break up clogs. Obviously, though, the body transforms that which we ingest so that it is not the same as it is outside of our bodies. Nonetheless, these remedies have a metaphorical appeal, even though they have no empirical (or even logical) validation.

Harry Anslinger (1892-1975).
How does this explain why people see drug addiction as a moral failure? Well, to be honest, drug addiction is extremely complicated. Not only do we have to take into account the individual hardships that might make people more susceptible to addiction, we also need to look at employment rates, availability of the drugs, whether or not certain governments have been complicit in allowing drugs to flow to a region, genetic factors, culture, and so much more. For example, the opiate epidemic that the US is going through as of this writing has many contributing factors. As Macy (2018) details, this drug epidemic was a confluence of events, including the relaxation of regulations on prescription drugs, Purdue Pharma's aggressive and irresponsible marketing ploy aimed at boosting the rate at which doctors prescribed its drugs, the negative economic effects of NAFTA on Virginia and similar regions (which went from being hubs for industry and manufacturing to having 20% unemployment), and OxyContin’s high potency. There's also what journalist Ioan Grillo (2021) calls gunonomics: the complicated flow of weapons from American firearm markets down to Mexico, as well as Central and South America. This flow of weapons creates a vicious cycle in which the drug cartels can act with impunity: cartels have better weapons than the police, which means they can engage in the drug trade more effectively, which means they become stronger and more profitable, and so they can illegally purchase even more weapons, and so on. That's a lot of moving parts to keep in your head. It's much easier to just say, "Drug addicts are full of vice and brought it upon themselves." This kind of lazy thinking is, of course, falling prey to the narrative fallacy.

Sidebar—The Eleusinian Mysteries
In chapter 12, Hari (2015) muses about the ubiquity of substance use in humans. He notes that Ronald Siegel, a psychopharmacologist on faculty at UCLA, calls the desire to change one’s consciousness the fourth drive, after the desires for food, water, and sex. Hari notes that, in fact, there is evidence that people changed their consciousness in every culture from the Andes to China; heck, even Shakespeare got high (cannabis and hallucinogens). Even at the dawn of Western civilization, famous philosophers (including Plato and Aristotle), mathematicians, politicians (like Alcibiades), and artists, along with farmers and craftspeople, got turnt up (i.e., partied while extremely high) at a ten-day festival of the Eleusinian Mysteries every year in September, at the Temple of Eleusis (see image below). This temple was destroyed during the rise of Christianity (see Nixey 2018).2
May I?
Generally speaking...
The ethics of prosecuting the drug war has taken center stage over the last few decades, pushing aside the question of whether or not it is morally permissible to use drugs recreationally. Before turning to the drug war itself, then, let's first take up the question of personal use of recreational drugs.
Per the United Nations Office on Drugs and Crime, only about 10% of all those who have used drugs end up developing a substance abuse problem. I say "only about" because I personally always assumed that number was much higher. In any case, given this information, we can make a consequentialist argument for the moral permissibility of recreational drug use:
- If certain people can use certain drugs recreationally without any adverse effects on themselves and others, then recreational drug use is permissible in those cases.
- It seems reasonable to assume that this is possible in some—perhaps most—cases.
- Therefore, recreational drug use is permissible in the aforementioned cases.
On the other hand, Kant was a little more selective. Kant argued that moderate use of fermented beverages that are low in alcohol by volume, like wine and beer, is morally permissible, since it can enliven certain social situations. Kant wrote, however, that "[t]he use of opium and distilled spirits for enjoyment is closer to baseness than the use of wine because the former, with the dreamy euphoria they produce, make one taciturn, withdrawn, and uncommunicative. Therefore, they are permitted only as medicines" (Kant as quoted in Richards 1982: 173). In other words, stronger liquors and all of the stronger drugs are simply too stupefying to be consumed by Rational Beings; only beer, wine, and tobacco are OK. The stronger spirits should, Kant argued, be used only as medication, as was the custom in his time.

Socrates (left) and
Alcibiades, the first
drug-dealer.
It is difficult to say what Aristotle would think about recreational drug use in the modern context. As we learned in the first Sidebar, Aristotle himself did participate in events where drugs were used—albeit as part of a religious ceremony. We also know that Alcibiades, a prominent Athenian statesman, orator, and general, also took part in the Eleusinian Mysteries—making it clear that you could be a successful statesman and still use and deal drugs (see footnote 2). What is more evident is that falling into addiction is definitely a vice. Perhaps Aristotle's Golden Mean points us towards not being too much of a teetotaler (someone who practices complete personal abstinence from alcoholic beverages and other substances) but also not being completely beholden to psychoactive molecules. Perhaps that is the mean between two vices(?).
Lastly, as we will learn in the second Sidebar, culture does appear to play a role in our response to certain substances (Sapolsky 2017: 134; Bushman 1993; Forsyth 2017: 2-3). As such, a cultural relativist might use the different cultural responses to controlled substances to make a relativist argument about the permissibility of recreational drug use. For example, in a culture where alcohol reliably tends to make people more violent and is sanctioned accordingly, alcohol consumption is not permissible in that society. In other societies, however, alcohol consumption might be more innocuous.
Prescription opiates
In Dopesick, Beth Macy details the prescription opiate epidemic that began in the Appalachian mountain region and spread to the suburbs. As was discussed in the Cognitive Bias of the Day, there are several factors that conspired to cause the epidemic. For example, OxyContin was released into the market along with a tsunami of medical salespeople who focused on branding and extolling how non-addictive their product was. Moreover, around this time, there was a movement to count pain as a fifth vital sign, which Purdue Pharma (the creator of OxyContin) capitalized on by distributing signs that said as much, so that doctors could display them in their offices. Additionally, the medical insurance industry inadvertently incentivized doctors to prescribe painkillers by enabling the notion of healthcare as a form of commodity consumption. In other words, patients were encouraged to "rate" their healthcare services, as if they were visits to the mechanic. In an effort to ensure they were getting reimbursed, doctors would give patients what they wanted—the prospect of zero pain—rather than focus on functionality and commonsense solutions that would take effort on the part of the patient.3

As mentioned, the Appalachian mountain region was hit first. As Macy discusses, this had much to do with the economic effects of NAFTA on Virginia and similar regions, which used to be hubs for industry and manufacturing. Although Bill Clinton had predicted that NAFTA would benefit American workers, since industries would be able to sell to the growing consumer class in China and displaced American workers would get a stipend to learn a new industry, this was not the case. Once the jobs left, even those who retrained were being paid less than they had been prior to NAFTA. Crime soared. In some regions, unemployment went as high as 20%, food stamp claims more than tripled, and disability rates went up more than 60%. Purdue targeted these regions due to the high percentage of disability claims, whose recipients qualified for pain-relief medication. And so addiction skyrocketed and drug-related crime increased, as addicted persons stole goods to sell so they could acquire their next dose and avoid the dreaded dopesickness.
In the end, it was only when the opiate epidemic hit white suburbs that conservatives relaxed their tough-on-crime stance and replaced it with a smart-on-crime stance that favored treating addiction rather than criminalizing it, even though it had always been more fiscally sensible to do the former. Purdue Pharma was taken to court, and the defendants (three Purdue executives) pleaded guilty to orchestrating Purdue’s dangerous promotional campaign. They accepted a plea deal: no jail time and a $34.5 million fine. Some think this might be insufficient. For reference, by 2007, Purdue had earned over $2.8 billion from OxyContin—$595 million in 2006 alone.4
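To put those figures side by side, here is a rough back-of-the-envelope comparison using only the numbers just cited (since the revenue figure is "over" $2.8 billion, the first ratio is an upper bound):

$$\frac{\$34.5\ \text{million (fine)}}{\$2{,}800\ \text{million (OxyContin revenue through 2007)}} \approx 1.2\%, \qquad \frac{\$34.5\ \text{million}}{\$595\ \text{million (2006 revenue alone)}} \approx 5.8\%$$

In other words, the penalty amounted to roughly a cent on every dollar OxyContin had earned by 2007.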

Bertha Alvarez Manninen.
So we can conclude that the prescription-opiate epidemic was sort of a perfect storm of factors. Nonetheless, at least some people did specifically ask for anti-depressants when they very likely didn't need them. As such, Bertha Alvarez Manninen (2006) argues, from a Kantian perspective, against the use of antidepressants by “people who simply wish to feel better quickly when faced with the commonplace problems” (as opposed to those who “really” need it). In other words, a Kantian (says Manninen) would be against resorting to antidepressants for "small problems"; only those who can verifiably be said to have clinical depression should make use of these.
A utilitarian might actually agree with the Kantian here. This is because needless use of prescription opiates leads to many negative consequences down the line. For example, OxyContin’s high potency made it highly addictive. So, as prescriptions ran out, and to avoid dopesickness, addicts began stealing in order to buy their next fix on the black market. Some turned to heroin. Ultimately, compounded by the fact that politicians largely ignored it, the problem spread like a vector-borne disease, which is why it is labeled an epidemic.5
Performance-enhancing drugs

Alex Rodriguez,
user of PEDs.
There's also the question of performance-enhancing drugs (PEDs), like anabolic steroids and drugs that can increase one's cognitive capacity, a.k.a. nootropics. Unsurprisingly, there are good arguments on both sides of the issue. For example, Savulescu and colleagues (2004) argue that anabolic steroid use is already happening, and so we might as well make it safe, i.e., minimize the negative effects. Moreover, allowing some PEDs would actually level the playing field, both in genetic and financial terms, i.e., create some positive effects (for disenfranchised athletes) that wouldn’t otherwise be present. An opposing perspective is that use of PEDs is wrong because “it reduces athletic competition to contests between mechanized bodies rather than total thinking, feeling, willing, and acting persons. It dehumanizes by not respecting the status of athletes as persons” (Fraleigh 1985: 25). Much the same kinds of arguments could be made about nootropics (Bostrom and Sandberg 2009). In short, it's consequentialist versus deontological thinking.
Husak & Sher 2003
Connections
There is an interesting link between opiates (a very potent, highly addictive class of drugs) and the topic of The Troublesome Transition. It was, in fact, Catholic hospitals that pioneered the use of opiates as part of a death-with-dignity mentality. The Catholic hospitals, it should be clear, were not on the side of active euthanasia (a.k.a. physician-assisted suicide), but they sought to minimize pain while disease and treatment took their course, making the terminal phase of disease more tolerable. As you will see, they argued for the permissibility of using opiates via something called the doctrine of double effect. Eventually, however, this idea grew into considering opiates as painkillers for non-terminal cases, first for the recovery phase after operations. This grew into the "pain as a fifth vital sign" and patients’ rights movements of the 1980s and 90s, when opiates began to be used to treat injuries and eventually depression. This coincided with medical insurance companies encouraging patients to see healthcare as a commodity and giving them the ability to rate their doctors. Defanged by becoming vulnerable to a bad rating on a patient satisfaction report, doctors wouldn’t stress, say, eating better and exercising to reach health goals over prescribing a medication. Moreover, treating medical care like a consumer service led to a speed-up of the whole process, resulting in less time for patients to explain their symptoms to their doctor—perhaps the most useful part of a doctor's visit. It was this environment that allowed Purdue Pharma to enact its radically irresponsible advertising campaign and emphasis on sales.
The ethics of drug use can be broken down into several different topics. In this lesson we covered the question of whether drug addiction is a moral failure, the question of whether recreational drug use is morally permissible, and the question of whether the drug war (as it is currently being waged) is morally justifiable.
There are various ethical perspectives on the use of recreational drugs. Two of the views we covered can be summarized as follows. Utilitarians might be disposed to allow recreational drug use if there are no negative consequences. Kant himself opined that some intoxicants (like fermented beverages) are permissible, but that higher-potency liquors and drugs are not.
With regards to whether or not the drug war is justifiable, we also saw a variety of perspectives. Both Husak and Sher argue in terms of negative consequences. In fact, they mostly disagree as to which would have worse consequences: ending the drug war or continuing it. As such, their moral reasoning is consequentialist. In The New Jim Crow, Michelle Alexander makes the case that the drug war is not being waged impartially. This sounds like a Kantian concern, although Kantianism allegedly doesn’t look at consequences when deriving moral value. Lastly, in The Rise of the Warrior Cop, Balko argues that the drug war has eroded our civil rights and militarized our police, leading to all sorts of negative consequences.
FYI
Suggested Reading:
Douglas Husak, Four points about drug decriminalization
George Sher, On the decriminalization of drugs
Related Material—
Video: Democracy Now, Interview with Johann Hari
Audio: NPR's Fresh Air, Interview with Michelle Alexander
Reading: Alia Wong, History Class and the Fictions About Race in America
Movie Trailer: Kill the Messenger (2014)
Here’s a link to the book.
Advanced Material—
Reading: B. A. Manninen, Medicating the Mind: A Kantian Analysis of Overprescribing Psychoactive Drugs
Reading: Rob Lovering, On Moral Arguments Against Recreational Drug Use
Reading: David J Nutt, Leslie A King, and Lawrence D Phillips, Drug harms in the UK: a multicriteria decision analysis
Reading: Nick Bostrom & Anders Sandberg, Cognitive Enhancement: Methods, Ethics, Regulatory Challenges
Related Link: Nick Bostrom’s Home Page
Note: The interested student can find several of Bostrom’s publications and working papers here. Of interest may be his Vulnerable World Hypothesis.
Footnotes
1. Not everyone thought that drug addiction was caused by a moral failure. In Anslinger's own time, some argued that drug addiction was a medical problem. For example, Henry Smith Williams, whose doctor brother was targeted by Anslinger for (legally) prescribing opiates to addicts in order to curb or control their addiction, argued this way. In fact, Smith Williams developed a conspiracy theory that Anslinger actually worked for the mafia. The theory went something like this. The dynamics of drug criminalization led to two crime waves. First, addicts sought their drugs through non-legal channels, thereby giving rise to a black market controlled by the mafia. Second, the mafia price-gouged, and so addicts had to resort to stealing in order to have enough money for their fix. This all seemed to Smith Williams to implicate Anslinger as a stooge of the mafia. Smith Williams was ultimately wrong, though; no links between the mafia and Anslinger have ever been found (Hari 2015, chapter 2). A second example of a historical figure who saw drug addiction as a medical problem was Leopoldo Salazar Viniegra, head of the Secretaría de Salud Pública under President Lázaro Cárdenas. However, Anslinger targeted him and pushed for his removal from office (see Hari 2015, chapter 10).
2. The substances utilized at the Eleusinian Mysteries were kept a highly-guarded secret of the cult of Eleusis. However, Alcibiades—a prominent Athenian statesman, orator, and general—apparently stole some of the substance and gave it to his friends. This, in a sense, makes him the first recorded drug dealer (see Hari 2015, chapter 11).
3. There is evidence that Purdue knew how addictive its product was. The company kept meticulous records of doctors and the populations they served in order to devise strategies for meeting quarterly sales goals. One could easily note that some small towns were getting several times more deliveries than larger towns with bigger populations. This should've tipped off Purdue that something was happening (see Macy 2018, chapter 2).
4. Mary Jo White was one of the lawyers for Purdue. In 2013, Barack Obama nominated her to the Chair of the US Securities and Exchange Commission.
5. Part of the reason why it spread so stealthily, Macy argues, is that it started in the Appalachian Mountain region, a region most politicians ignore. Another factor is that, once it hit the suburbs, the epidemic remained stealthy, since wealthier people could keep their addiction under wraps: they had more disposable income and didn’t have to resort to stealing to get their fix. Moreover, race might've played a role. Racial bias actually spared black and brown people from the worst of the opioid epidemic. Since doctors didn’t trust black and brown people not to misuse painkillers, they wouldn’t prescribe them to these patients. This is why addiction rates tripled for whites while staying roughly the same for black and brown people. It's even the case that the Xalisco Boys, a decentralized network of heroin dealers, actually targeted whites due to their assumption that whites had more money and due to their prejudice against blacks (see Quinones 2015).
The Troublesome Transition
Life is pleasant.
Death is peaceful.
It's the transition that's troublesome.
~Isaac Asimov
Death with Dignity
It appears that disputes over the moral status of suicide can be found as far back as the First Intermediate Period of Egypt, circa 2181-2055 BCE (see Battin 2010: 674). Although that is a topic of importance and interest, in this lesson we will be focusing primarily on the moral status of assisted-dying in its various forms, glancing at issues surrounding suicide itself only briefly in the latter half of the lesson. Before looking at topics in the assisted-dying debate, though, let me say two things.
First off, it should be said that this is an extremely sensitive topic and that you should know that I will take precautions in presenting the material. Having said that, in essence, what we are discussing is ultimately suicide. This is clearly an important issue. So, if you are having any sort of suicidal thoughts, please reach out for help. One place to start is the Student Health Center. At the Student Health Center, the psychological services staff is available to talk confidentially with any student (with current ID) about the challenges with which they are faced. They do their work with the utmost respect for those who go to see them, without judgement. They will try to offer options and solutions that perhaps had not been considered. They will listen carefully to a student's problems and try to help find solutions to the everyday dilemmas that students confront. They are painfully aware that it can take a lot of courage to walk through the door of their facility to discuss personal issues. Click here for the ECC Psychological Services webpage.

Yersinia pestis,
the bacterium that
causes plague.
Second, the fact that a debate surrounding assisted-dying even exists points to the immense progress that humankind has made in the past few centuries. For the majority of humans who have ever existed, death has been caused by factors such as parasitic and infectious diseases. For example, Winegard (2019: 2) provides the following statistic: “The mosquito has killed more people than any other cause of death in human history. Statistical extrapolation situated mosquito-inflicted deaths approaching half of all humans that have ever lived.” In chapter 2 of The Fate of Rome, Kyle Harper reminds us that in the Roman Empire life expectancy was about 30 years. He also notes that the fertility rate had to be very high, since otherwise the population could not be replenished generation after generation. In fact, a high fertility rate was so pivotal that the state actually penalized low fertility. Women had six children on average, although Mary Beard (2016) hypothesizes that the number might've been as high as nine. To add to this horror, it appears that people knew there was a season for death. This is because death came in waves throughout the year, depending on when the climate was most suited for disease ecology and mosquito-breeding (mosquitoes being the carriers of malaria). Even the great physician of antiquity Galen notes that most deaths came in the Fall.
Today, however, we are much more likely to die of degenerative diseases, such as heart disease and cancer—we now live in what Olshansky and Ault (1987) call the Age of Delayed Degenerative Diseases. Improvements in sanitation, antibiotics, immunization, and other breakthroughs in medical science have allowed humans to lengthen their lifespan substantially. In many developed countries, life expectancy is around 80 years. In other words, through medical science, humans have produced what we can call the terminal phase of dying. It is this stage of the life cycle that gives rise to questions surrounding the moral status of assisted-dying.
“On average, people die at older ages and in slower, far more predictable ways, and it is this new situation in the history of the world that gives rise to the assisted-dying issues to be explored here” (Battin 2010: 674).
Storytime!
Death with Dignity Laws in California
Important Concepts
Two arguments in favor of physician-assisted suicide
The argument from autonomy

Right-to-die advocate
Brittany Maynard (1984-2014).
A person, it seems, has a right to determine as much as possible the course of his or her own life. For example, in most Western-style democracies you are free to choose your career, what level of education you wish to complete, and who you want to engage with romantically—generally speaking.1 Since death is a natural part of the life cycle, it stands to reason that a person also has the right to determine as much as possible the course of his or her own dying. As such, if a person wishes to avoid the more painful stages of the terminal phase of dying, then they should have the choice for aid in dying such that it is safe (i.e., without unnecessary pain) and comforting (see Battin 2010: 676-77).
Although one might object that depression and other psychiatric disturbances might affect one's judgment near the end of life (especially if one has a terminal illness), those who advance the argument from autonomy argue that "rational suicide" is possible. As is stipulated in California law, two doctors would first have to determine that the patient is mentally competent and has no more than six months to live. If the aforementioned criteria are met and it can be reasonably assured that a certain period of the terminal phase will be extremely painful, such as in the terminal phase of stomach cancer, then it seems perfectly rational to choose to avoid this phase.
The argument from the relief of pain and suffering
No person should have to endure pointless terminal suffering. If the physician is unable to relieve the patient's suffering in other ways and the only way to avoid such suffering is by death, then, as a matter of mercy, death may be brought about. Of course, there are such cases in the medical literature. As such, in those cases, physician-assisted suicide is morally permissible (see Battin 2010: 686-690).
A famous example of this form of argumentation comes from Rachels (1975). Rachels argues that current practices—that is to say current in 1975—which do not allow active euthanasia are based on a problematic doctrine, which Rachels calls the conventional doctrine. We should allow active euthanasia, Rachels argues, because this would decrease the hardship of those with terminal diseases.
Three arguments against physician-assisted suicide
The argument from the intrinsic wrongness of killing
The main premise in this argument is that the taking of human life is simply wrong. Suicide is the taking of one's own life. Hence, suicide is wrong. Moreover, assisting someone in suicide is assisting someone in taking a human life. Hence, assisting someone in suicide is also wrong (see Battin 2010: 678-681).
A reliable proponent of this argument throughout history has been the Roman Catholic Church. In the fifth century, Saint Augustine was the first to interpret the biblical commandment "Thou shalt not kill" as expressing a prohibition on suicide. Several centuries later, in the thirteenth century, Saint Thomas Aquinas developed an even more rigorous sanction against suicide. He argued that everything loves itself and seeks to remain in being, which renders suicide wholly unnatural. Moreover, Aquinas argued, suicide harms the community and is a rejection of God's gift of life. For all these reasons, suicide is wrong.

In 1958, Pope Pius XII issued a statement to anesthesiologists called "The Prolongation of Life" in which he utilized the doctrine of double effect to make the case that physicians could use opiates for the control of pain—even if the use of these will cause an earlier death. The general idea behind the doctrine of double effect is that two or more effects might arise from a given action: the intended effect and a foreseen but unintended effect. In general, the doctrine is said to be applicable when:
- the action is not intrinsically wrong (relieving pain, for instance, is not intrinsically wrong);
- the agent intends only the good effect, not the bad one (as when an anesthesiologist intends only to minimize pain and not to bring about an early death);
- the bad effect is not the means of achieving the good effect (the anesthesiologist uses opiates, not death itself, as the way to minimize pain); and
- the good effect is proportional to the bad one, meaning the bad effect does not outweigh the good effect.
The argument from the integrity of the profession

Fragment of the
Hippocratic Oath.
Another argument against physician-assisted suicide comes from reflecting on the nature of the medical profession. It is said that doctors should simply not kill; it is prohibited by the Hippocratic Oath. In other words, the physician is only allowed to save life, not take it (see Battin 2010: 681-82).
There is a common objection to this viewpoint, however. As it turns out, the Hippocratic Oath contained much more than the injunction not to take life. It also prohibits, for example, performing surgery and taking fees for teaching medicine. It would be inconsistent, one might argue, to insist that the injunction against taking life is legitimate, but that the parts about not performing surgery and not taking fees can be safely ignored. Perhaps it could be argued that, as the practice of medicine was professionalized and became more strenuous to master, fees became reasonable. After all, there has to be an incentive for mastering the art of modern medicine. It could also be said that surgery was a bad idea before germ theory. However, the response from the proponent of physician-assisted suicide would be the following: if the Oath can be modified to permit these new practices (i.e., surgery and fees), why not also permit assistance in suicide, in particular in those cases where the patient would simply suffer needlessly without the doctor's help?
The argument from potential abuse
Perhaps the best argument against physician-assisted suicide is the argument from potential abuse (Battin 2010: 682-86). It is telling that this argument is also known as the slippery-slope argument against euthanasia. Here is the gist. Permitting physicians to assist in suicide, even in those cases where it would greatly minimize pointless pain, may lead to situations in which patients are killed against their will. Once it is within the realm of possibilities for a doctor to terminate a patient's life, then either malicious agents or poor judgment might lead to a patient being unjustifiably killed. For example, perhaps a greedy relative might convince the terminal patient that it would simply be easier if they were to choose suicide. Or perhaps a terminal patient doesn't want to burden their family with more hospital costs or more emotional energy invested in a dying relative. Perhaps a doctor will, via unconscious racial prejudice, disproportionately advocate end-of-life assistance to certain races and not others. And, of course, there are countless other possible scenarios, all of which suggest that allowing for physician-assisted suicide is a slippery slope that will eventually include some unjustifiable deaths.
The advocate of physician-assisted suicide could respond that there should be a basis for these ominous predictions before such an argument can hold any water. Having said that, there are some bases for these predictions. Medical care in the United States is, I would say, unreasonably costly. Cost pressures alone might lead to some of the aforementioned situations. Even if cost could be somehow ruled out, we know that greed, laziness, insensitivity, and prejudice all exist. In other words, these are genuine risks that we must be candid about.
Food for thought...

Ultimately, a key issue behind the debates surrounding physician-assisted suicide is the permissibility of suicide itself, independent of whether it is assisted by a physician or not. Here's an important question to ponder. Proponents of assisted suicide assert that autonomy, i.e., one's right to determine as much as possible how the course of one's life is to proceed, is a fundamental good that must be protected. However, they advocate an act that extinguishes the basis of autonomy. In other words, to choose to end your life is a choice that ends the capacity to make any future choices. Is this inconsistent? Do we really have a right to choose to end our ability to choose?
John Stuart Mill might've said that the advocates of physician-assisted suicide are being inconsistent. Again, this debate did not arise until the 20th century, so Mill did not directly comment on the issue. However, some commentators believe he would've sided against physician-assisted suicide. After all, we do know that Mill made a case against voluntary slavery, i.e., giving yourself over to a master in order to acquire food and shelter. Perhaps this is analogous to physician-assisted suicide?
“The same conundrum prompted John Stuart Mill, a stalwart champion of individual liberty, to favor legal proscription [i.e., banning] of voluntary slavery. Mill claimed that an individual cannot freely renounce his freedom without violating that good. Similarly, autonomous acts of assisted suicide annihilate the basis of autonomy and thereby undermine the very ground of their justification” (Safranek 1998: 33; interpolations are mine).
Kant is a little easier to figure out. Kant is definitely against suicide.
“When discussing how the formula of humanity entails the perfect duty to refrain from suicide, Kant writes: [T]he man who contemplates suicide will ask himself whether his action can be consistent with the idea of humanity as an end in itself. If he destroys himself in order to escape from a difficult situation, then he is making use of his person merely as a means so as to maintain a tolerable condition in life. Man, however, is not a thing and hence is not something to be used merely as a means” (Manninen 2006: 102).
Lastly, David Hume thought otherwise. Although Hume argues that it is not permissible to leave behind any dependents in a vulnerable state, he generally thinks that “A man who retires from life does no harm to society: he only ceases to do good, which, if it is an injury, is of the lowest kind.” He does stress, however, that “small motives” are not sufficient reason for someone to “throw away their life.” In other words, it is wrong to commit suicide if you are leaving family members and other dependents behind with no one else to care for them, and trivial reasons will not do; but, if neither of these conditions applies, Hume finds suicide generally acceptable (if it is freely chosen). Hume's views will be covered in the next unit.
Dying well
Perhaps it's fitting to end here with a discussion of a much neglected (and even actively avoided) topic: the need for learning how to die well. Considered one way, we can see life as a series of skills that must be mastered. As a toddler, you learn to walk and talk. As a child, you learn basic skills and your role in the family structure. As a teenager, you begin to discover more about yourself and (hopefully) you skillfully discern your different strengths and weaknesses, acknowledging each in a truthful way. As a young adult, you learn your craft, the skill that will earn you your livelihood. Later, you may become a parent and you must learn the skill of parenting. Some of us, as our parents age, must learn the skill of caretaking. If you get married, you must learn the skill of becoming a good partner. As you age, you must learn to let go of some habits that are no longer suited for you. And ultimately, of course, you must prepare for death—the last skill you must learn.
Mortality is something we must all face, and on this topic we can glean great insight from past moralists. In his inspiring How to Think Like a Roman Emperor, Robertson (2019) surveys the Stoic practice of Marcus Aurelius. Although most Stoic writings are lost, we do understand their basic view: you must live in accordance with nature. This was synonymous with living wisely and virtuously. As we know, the Greek word arete is actually best translated not as “virtue” but as “excellence of character.” Something excels when it performs its function well. And so, humans excel, the Stoics argued, when they think clearly and reason well about their lives.

This is easier said than done, of course. There are many factors that might cloud our judgment and not allow us to think well. The Stoics had methods for countering these obstacles. For example, Robertson gives an excellent summary of Stoic views on language. The Stoics thought that language was important not only in how you portray the world to others, but also in how you portray the world to yourself. You should develop a rich vocabulary, and you should always strive for clarity. Importantly, Stoics acknowledged that humans tended to exaggerate and/or paint events and interactions with others through a moral lens. Stoics argued that we should strive as much as possible to portray events in a purely descriptive sense, without exaggeration and without loading our phrases with value judgments. In other words, instead of framing an argument you had with a loved one in a way that's loaded with moral terms like "manipulative" and "unforgivable", as well as exaggerations like "You always...", you should try to describe the event to yourself (and others) with only the facts. Present to yourself reality without making yourself a victim (or a villain!). Just state the facts, and this will allow you to get a better handle on the situation. Words, especially the words you use in framing a situation, do matter.
The most helpful Stoic teaching, in my opinion, is the Stoic ideal of cognitive distancing. Re-frame your situation to lessen its intensity. If you find yourself in a disagreement with a friend, try to take their perspective. If you find yourself riddled with anxiety, focus only on those issues that you have control over (instead of worrying about things over which you have no control), make a reasonable plan of action and execute it. Accept your troubles as an opportunity to practice your Stoic ideals.
With regards to mortality, the Stoics had a very important practice, one that is enshrined in a phrase: memento mori. This phrase, which is translated as "Remember that you are mortal", was a part of daily Stoic practice. Stoic practitioners saw death as a natural part of the life cycle and, as such, tried to prepare themselves for it by reminding themselves daily of their mortality. By doing this, they reminded themselves that time could not be wasted. We must prioritize our actions. We must engage in those activities that are truly important to us, even if they are difficult. It's highly unlikely that, on their deathbeds, most people think to themselves that they should've watched more Netflix. Instead, they probably wish they would've spent more time with their loved ones; they wish they would've taken that trip, hugged their spouse more often, or finished that degree. They regret not apologizing more and they regret not being more forgiving. They may even regret not having enough time to prepare for a good death. Memento mori.
Barking (Jim Harrison)
The moon comes up.
The moon goes down.
This is to inform you
that I didn’t die young.
Age swept past me
but I caught up.
Spring has begun here and each day
brings new birds up from Mexico.
Yesterday I got a call from the outside world
but I said no in thunder.
I was a dog on a short chain
and now there’s no chain.
The debate surrounding physician-assisted suicide arose only in the 20th century, as degenerative disease became the predominant cause of death in the developed world.
Two arguments for the permissibility of physician-assisted suicide are the autonomy argument (which states that we have a right to determine the course of our lives, including our death) and the argument from the relief of pain and suffering (which states that physician-assisted suicide is permissible since it minimizes pain/suffering).
Three arguments against the permissibility of physician-assisted suicide are the argument from the intrinsic wrongness of killing (which states that killing, including killing yourself, is always wrong), the argument from the integrity of the profession (which states that suicide is not in the purview of the medical field), and the argument from potential abuse (which states that allowing physician-assisted suicide might have negative consequences down the line).
The forms of moral reasoning involved are as follows:
- The fundamental motivation behind the argument from the relief of pain and suffering, as well as Rachels (1975), was the minimization of suffering. This is consequentialist reasoning. In fact, some utilitarians go as far as permitting non-voluntary active euthanasia if it will decrease overall suffering (see Singer 1993). Interestingly enough, the argument from potential abuse (a.k.a. the slippery slope argument) is also a form of consequentialist reasoning, albeit on a wider scope.
- Concerns over terminally ill patients being used as a means to an end (as in manipulation and coercion) and lack of universalizability (as where the costs of end-of-life medication might be prohibitive for some) are deontological concerns. In other words, these are violations of Kant’s categorical imperative.
- Concerns over violating the function of a doctor are Aristotelian concerns. In other words, bringing a life to an end is not conventionally seen as an action flowing from a good doctor.
FYI
Suggested Reading: James Rachels, Active and Passive Euthanasia
Supplementary Material—
- Video: Crash Course, Assisted Death & the Value of Life
Related Material—
- Video: The Last Word with Lawrence O'Donnell, Brittany Maynard's death with dignity
- Book Chapter: Donald Robertson, The Contemplation of Death
Advanced Material—
- Reading: Peter Singer, Taking Life: Humans
- Reading: Franklin G. Miller and Howard Brody, Professional Integrity and Physician-Assisted Death
- Reading: John Safranek, Autonomy and Assisted Suicide: The Execution of Freedom
Footnotes
1. An interesting exception to the first two is the three-tiered educational system in Germany.
The Fall of the Prince
So long as you promote their advantage, they are all yours, as I said before, and will offer you their blood, their goods, their lives, and their children when the need for these is remote. When the need arises, however, they will turn against you. The prince who bases his security upon their word, lacking other provision, is doomed… Men are less concerned about offending someone they have cause to love than someone they have to fear. Love endures by a bond which men, being scoundrels, may break whenever it serves their advantage to do so; but fear is supported by the dread of pain, which is ever present.
~Niccolò Machiavelli
Validation

As we transition into the last unit of this course, we move towards assessing whether or not our ethical theories have any empirical validation (i.e., support from the social sciences), at least with regards to the empirical (or testable) claims that they make. It must be clear, of course, that science cannot directly address questions of morality. In other words, there is no experiment (or set of experiments) that can be performed that would arrive at a conclusion about which actions are morally right and which actions are morally wrong. This is because the property of moral wrongness is, according to most moral objectivists, non-physical, and hence can't be tested for by science. Perhaps some utilitarians would argue that moral goodness just is the mental state of pleasure, but we cannot assume that hypothesis from the start. As G.E. Moore argued, there is still an open question on the matter. It's also the case that moral nativists would have us assume that moral judgments come from our innate morality module. Even if we accept this, however, we would only be able to study the workings of the morality module, not moral properties themselves. Recall that moral nativism pairs well with moral skepticism; if the skeptic is right, there are no objective moral properties to be studied (since they don't exist). In short, science can't help us discover what is moral and what is immoral (unless you assume hedonism); science can, however, help us discover how we make moral judgments, as well as assess the empirical claims of ethical theories.1
Despite the limitations of science in directly addressing moral questions, it appears that various ethicists of the past made non-moral empirical claims while theorizing about their moral views. This gives us an opportunity to collect the empirical claims made by ethical theorists and check whether they are true using the latest science. Although this will not resolve the fundamental normative question of what is morally right and morally wrong, it will let us assess each theory more holistically, to see if it has any empirical weak points. This can at least partially inform us as to whether the theory has any merit, generally speaking.
But of course, many of these theories were written a long time ago, without the benefit of modern science. So we must be careful (and perhaps charitable) in our re-interpretations. Experts on textual interpretation (e.g., Ricoeur 2007) argue that in order to understand a text you must take into consideration: the historical background in which the text was written, how this historical background affected the author, the historical background of the reader (i.e., you), and how your historical background affects you. Although there are many differences between our historical context and that of the various ethicists we've covered, perhaps the most relevant one scientifically is the dawn of Darwinian thinking in the social sciences. As such, we will take care to try to incorporate, whenever possible, Darwinian processes into ethical theories of the past—a controversial approach for some, I'm sure. For more information on Darwinian thinking, see this lecture by Daniel Dennett.
"Whenever Darwinism is the topic, the temperature rises, because more is at stake than just the empirical facts about how life on Earth evolved, or the correct logic of the theory that accounts for those facts. One of the precious things that is at stake is a vision of what it means to ask, and answer, the question 'Why?'. Darwin's new perspective turns several traditional assumptions upside down, undermining our standard ideas about what ought to count as satisfying answers to this ancient and inescapable question. Here science and philosophy get completely intertwined" (Dennett 1996: 21).

Although I don't expect a ton of outrage against this approach from college students, let me just make the following points before beginning. As previously stated, looking at the relevant data from the cognitive sciences will not help us answer what we may call the fundamental normative question; in other words, we won't get an answer to the question about what is morally right and what is morally wrong. But following a trend in Philosophy that began in the middle of the last century, we can attempt to "naturalize" ethics. That is, we can assume that what we should do (a normative matter) in some way has something to do with what we can do (a descriptive matter). In other words, if science can help us understand what we're really like, then maybe it can help us realize what we can and should do.2
Relatedly, I should tell you that some philosophers do not tire of reminding us that there is a difference between is and ought, between how things are and how they should be. They do not believe that science can in any way help advance the field of ethics. To settle this dispute fully would require a whole other course, but we can say this much here: ethicists (who allegedly focus only on the normative) make a large number of empirical claims. They make so many empirical claims, in fact, that literally the rest of this course will be devoted to pointing them out and assessing whether they are true or not. In short, I think looking at the relevant science is worth our time.
Important Concepts
The Prince

We will begin this unit in the same way that the course itself started: with the question of psychological egoism. As you recall, we covered two ethical theories borne out of psychological egoism. One is ethical egoism—a simple theory with not much substance and hardly any applicability, as we saw in Unit II. Its cousin, Hobbes' social contract theory, is a much more formidable foe. In fact, there are many social contract theories, but Hobbes' version (or something very similar) has been alluring to some thinkers for centuries. If we allow into the analysis the precursors of Hobbes' theory, then we can say that this view has seemed intuitively true to some thinkers for millennia. The Greek Protagoras perhaps held this view over two thousand years ago, and, around the same time, Plato felt it was enough of a threat to his own position that he dedicated most of The Republic to addressing it, with Glaucon featured as the view's representative.
At the heart of both of these ethical theories is an empirical hypothesis: psychological egoism. This is the view that all human actions are driven by self-interest. Why is it empirical? Because in order to know if it's true or not, it is not enough to just think. We must see what humans are like. We must systematically analyze them. We need to study their motivational states and their behaviors, and we need to determine whether behind every behavior there is in fact some self-interested motivational state. In fact, even one example of non-self-interested behavior will undermine psychological egoism. For millennia, we have not had the experimental tools to peer into this mystery of human motivation. But some researchers think we now do.
All the while, psychological egoism has continued to flourish in some disciplines, although it now goes by the name of rational choice theory, as noted in the Important Concepts. For example, in the 1940s, John von Neumann made rational choice theory an axiom of the new field of game theory. It is also an axiom of the neoclassical approach to economics, which—as of this writing—is still the mainstream approach in the field. And, of course, rational choice theory is assumed in some political theorizing: think of Glaucon, Hobbes, and, famously, Niccolò Machiavelli, who defended the view in The Prince in the 16th century (see also the epigraph at the beginning of this lesson). So it appears that if we show psychological egoism (a.k.a. rational choice theory) to be false, not only are ethical egoism and Hobbes' social contract theory shown to lack empirical validation, but a fundamental axiom of some disciplines may be called into question.
The death of an idea
According to some theorists, what sets science apart from other forms of inquiry is its embrace of failure. For example, Firestein (2015), who is himself a biologist, wrote Failure: Why Science Is So Successful to dispel a common misconception among non-scientists: the belief that failure in science is bad. Failure is a necessary ingredient in science. More than that, people have a misguided conception of the actual process of science; they think that you just use reason, think, and then experiment. In reality, however, playfulness, serendipity, intuition, and accidents all play a role in real science.

I say this here for two reasons. First, Firestein's book really is a great read. Second, if it turns out that rational choice theory is false, then this will be a good thing. Rational choice theory, as we've seen, is pervasive throughout history. It can be found as an axiom of several different academic disciplines. If it were proven wrong, sure, there'd be a period in which some disciplines would have to reorganize, but the overall state of our knowledge would improve. This is because rooting out false ideas creates openings for new ideas to be tested. This is, in fact, the main message of my Introduction to Philosophy course: some ideas must die. Which ideas should die? I'd start with the ones that are demonstrably false (so we can make some room for new ideas!).
What will it take to show that psychological egoism is false? Since the hypothesis in question states that all human actions are rooted in self-interest, it only takes one non-self-interested action to prove the theory wrong. In other words, if there is evidence of any altruistic behavior, i.e., behavior for the sake of another, then this is evidence against psychological egoism. So, assessing the truth of psychological egoism is really the search for altruism.
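To make the logical structure of this point explicit, here is a small formalization in standard predicate-logic notation (the predicate letter S is introduced here purely for illustration):

$$\text{Psychological egoism:}\quad \forall a\, S(a), \qquad \text{where } S(a) \text{ means that action } a \text{ is driven by self-interest.}$$

$$\neg\,\forall a\, S(a) \;\Longleftrightarrow\; \exists a\, \neg S(a)$$

Because the thesis is a universal generalization, a single well-documented action for which S fails is enough to falsify it; no tally of selfish versus unselfish actions is required.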
Assessment of the empirical claims
The State of Nature Hypothesis
Question: Has the centralization of authority reduced interpersonal violence, as Hobbes argued?
This is actually a tough question to answer even with the plethora of available evidence. This is because we must first decide what "war" means. If by "war" we mean organized warfare with a clear hierarchy and a national banner, then obviously "war" is a very recent invention. However, if by "war" we mean organized violence more generally, then it goes far back into our prehistory. Recall that one theory about the downfall of the Neanderthals suggests that organized violence is sapiens' lethal custom. But perhaps that's still not what Hobbes meant. Perhaps "war" should be taken to mean any kind of interpersonal violence, such as assault, homicide, and the like. This debate is thorny, so here's some Food for Thought...
The Hobbesian Social Contract
Question: Did humans submit to a central authority out of self-interest?
To be fair to Hobbes, this is a historical question, and Hobbes did not have a mature field of history to draw information from. In other words, we have the benefit of a modern science of history, while Hobbes didn't. However, it is still an empirical question, so we need not shy away from searching for an answer; we just won't heap scorn on Hobbes if he turns out to be wrong. Put simply, either self-interest drove our semi-settled ancestors to centralize power or something else led to the centralization of power.
We have, in fact, already addressed this question in two past lessons. In Eyes in the Sky we were introduced to the work of James C. Scott (2018), who works on the deep history of the earliest states. As we saw, the states in his dataset reliably used coercion to fill and re-fill their pool of subjects. In other words, "citizens" did not "contract" with a central authority out of their own self-interest; they were captured, enslaved, conquered, colonized, or otherwise coerced into the state. On this point, it seems that Hobbes was wrong.
“If the formation of the earliest states were shown to be largely a coercive enterprise, the vision of the state, one dear to the heart of such social contract theorists as Hobbes and Locke, as a magnet of civil peace, social order, and freedom from fear, drawing people in by its charisma, would have to be re-examined. The early state, in fact, as we shall see, often failed to hold its population. It was exceptionally fragile epidemiologically, ecologically, and politically, and prone to collapse or fragmentation. If, however, the State often broke up, it was not for lack of exercising whatever coercive powers it could muster. Evidence for the extensive use of unfree labor, war captives, indentured servitude, temple slavery, slave markets, forced resettlement in labor colonies, convict labor, and communal slavery (for example, Sparta’s helots) is overwhelming” (Scott 2018: 26-9).
Case closed? Not quite. Just like industrialization happened in different places at different times and in different ways, the settling and centralizing of authority in different polities might have different storylines. In Endless Night (Pt. II) we covered the work of Dartnell (2019), who seeks to understand how the Earth's natural history has shaped human history. One of the many examples that he covers in his Origins is that of naturally-occurring cyclical rapid warming and cooling of the planet. This may have been what caused settling in the area around modern-day Egypt. Briefly, due to rapid desertification, climate refugees crowded around the Nile Valley. This was beyond the ecological carrying capacity of the region and so it was a very unstable arrangement. As such, individuals relinquished their autonomy to centralized bureaucracies which coordinated agricultural practices and thus fed the population. Thus the bureaucracies gained legitimacy and the wheels of statecraft were set in motion. As such, this may be a case of self-interested centralization.
Was Hobbes right on this matter? If we consider this an all-or-nothing affair, then certainly not. If we look at this in terms of degrees, it looks like he was more wrong than right—since cases like that of Egypt appear to be the exception rather than the rule. Nonetheless, he wasn't completely wrong. How much credit should he get? Maybe he just lucked out? I'll let you make up your own mind. Let's say that the question remains open.
The Question of Psychological Egoism
Question: Are all human actions driven by self-interest?
Batson's experiments
In addition to Batson's experiments, several disciplines have something to say on this topic. Please enjoy the slideshow below:
Farewell...
“I’m reminded of the last lines of John Milton’s Paradise Lost. The empirical evidence in preceding chapters has impelled us, with some wistfulness, to leave the Eden of Egoism. We find ourselves in a less secure, more complex world. Like Milton’s couple, we need to reassess what it means to be human” (Batson 2019: 252).
With hindsight, maybe you knew all along that psychological egoism was false. Or maybe you still don't buy that it's false. If that's the case, the ball is now in your court: you must respond to volumes of research in multiple fields to try to rescue this view. The rest of us will side with Darwin, who saw that cooperation beats self-interest in the setting of natural selection.
“When two tribes of primeval man, living in the same country, came into competition, if (other circumstances being equal) the one tribe included a great number of courageous, sympathetic and faithful members, who were always ready to warn each other of danger, to aid and defend each other, this tribe would succeed better and conquer the other... Selfish and contentious people will not cohere, and without coherence nothing can be effected. A tribe rich in the above qualities would spread and be victorious over other tribes... Thus the social and moral qualities would tend slowly to advance and be diffused throughout the world” (Darwin, The Descent of Man, 87-88).
I leave you with the words of Peter Turchin... “Machiavelli was wrong” (Turchin 2007: 123).
Although reviewing the social scientific aspects of ethical theories will not resolve the fundamental normative question of what is morally right and morally wrong, it will let us assess each of the theories more holistically, to see if they have any empirical weak points. This approach is inspired by a trend in Philosophy that began in the middle of the last century: an attempt to "naturalize" our philosophical theories.
Taken together, Hobbes' social contract theory (SCT) and ethical egoism (EE) make various empirical claims. Psychological egoism is assumed by both. SCT goes further, claiming that the centralization of authority has reduced interpersonal violence and that rational actors submit to a central authority out of their own self-interest.
There is an active debate about whether the centralization of authority and the rise of nation-states has led to more or less violence, with Pinker and Keeley arguing that Hobbes was right and Ferguson (and others) rejecting Pinker and Keeley's data.
There is also some difficulty in establishing whether or not, historically speaking, rational actors submitted to a central authority out of their own self-interest. This appears to have happened at least once, but the great majority of early states used coercion to reinvigorate their pool of subjects.
With regards to the question of psychological egoism, it appears that multiple disciplines, including social psychology, linguistics, and sociology, have either rejected the view outright or are at least moving away from it.
FYI
Suggested Reading: Batson et al., Is Empathic Emotion a Source of Altruistic Motivation?
Supplemental Material—
Video: C. Daniel Batson, Empathy Induced Altruism
Video: Steven Pinker, A History of Violence
Related Material—
Video: Crash Course, Behavioral Economics
Advanced Material—
Reading: Gregory Wheeler, Stanford Encyclopedia of Philosophy Entry on Bounded Rationality
Footnotes
1. As alluded to above, moral skepticism pairs well with moral nativism. An invaluable source for understanding these complexities is Richard Joyce's Entry on Moral Anti-Realism in the Stanford Encyclopedia of Philosophy.
2. The push for approaching a normative field like Ethics in a more empirically-informed manner is referred to as "naturalizing" the field. This trend is the legacy of one of the greatest philosophers of the 20th century: W. V. O. Quine. He pushed for a "naturalized" epistemology. The basic idea is that if epistemology is concerned with developing ideal knowledge-forming practices, then we should take into account (in some way) how actual knowledge-forming practices work. In other words, the prescriptive (ethics, epistemology, etc.) needs to pay attention in some way to the descriptive (cognitive science, social sciences, etc.). The Stanford Encyclopedia Entry on Willard Van Orman Quine is very helpful for understanding Quine's ideas.
Death in the Clouds (Pt. I)
Religion has actually convinced people that there's an invisible man living in the sky who watches everything you do, every minute of every day. And the invisible man has a special list of ten things he does not want you to do. And if you do any of these ten things, he has a special place, full of fire and smoke and burning and torture and anguish, where he will send you to live and suffer and burn and choke and scream and cry forever and ever 'til the end of time! ...But He loves you!
~George Carlin
Debemos arrojar a los océanos del tiempo una botella de náufragos siderales, para que el universo sepa de nosotros lo que no han de contar las cucarachas que nos sobrevivirán: que aquí existió un mundo donde prevaleció el sufrimiento y la injusticia, pero donde conocimos el amor y donde fuimos capaces de imaginar la felicidad.
[We should throw, into the ocean of time, a bottle full of the tales of our cosmic shipwreck, so that the universe will know of us what the roaches that survive us won't be able to say: that this was a world where suffering and injustice prevailed, but where we nevertheless knew love and were capable of imagining what happiness might be like.]
~Gabriel García Márquez1
Important Concepts
Back to Genesis
Today we reassess divine command theory (DCT), an approach to ethics that, like social contract theory, goes back millennia. Assessing this theory is complicated, however, in that DCT makes metaphysical claims (i.e., claims that go beyond the scope of science). Thankfully, a field that we were introduced to in Endless Night (Pt. II), cognitive science, has a theoretical/conceptual branch: philosophy. So today we will look at the conceptual issues that surround DCT.
We should also remind ourselves that DCT in this course, because one of our desiderata for an ethical theory was that it solve the puzzle of human collective action, has always included a purely descriptive, atheist option. That is to say, we can accept DCT along with the existence of God, as a theoretical framework that explains morality through the supernatural. Call this the DCT (theist version). Or we can accept that belief in the supernatural plays a role in explaining morality while denying that supernatural beings actually exist. Call this the DCT (atheist version).2
Empirical and Ontological

The empirical claims of the two versions of DCT are numerous and often at odds with each other. Obviously, an atheist is going to reject various empirical claims made in Christian sacred scriptures that do not cohere with her worldview. Similarly, there is one very large ontological claim that theists make and atheists don't: the assumption of God's existence. In this lesson we will study both the tension between the atheist and theist camps of DCT and what the two camps agree on, namely that the rise of civilizational complexity is correlated with, and perhaps caused by, the rise of Big Gods. To be sure, debates between atheists and theists are best studied in a course on Philosophy of Religion, so I'll keep my comments on this tension brief in the first half of this lesson. In the second half we'll study the debate between divine command theorists of either camp (that is, over the claims that both the atheist and theist versions accept) and those who argue that divine command theory is false in all its guises.
So what do both atheist and theist advocates of DCT agree on? After "revelation", human societies increased in complexity as religious devotion to God spread. Let's call this the Big Gods hypothesis. The second claim they might agree on is the following. Watched people are well-behaved people. Let's call this the social monitoring hypothesis. We'll refer to the question regarding God's existence as the ontological question. Theists answer the ontological question in the affirmative; atheists obviously answer it in the negative.3✝
The Ontological Question
Philosopher David Chalmers.
Let's begin with the question of God's existence. This is not the course for a thorough review of all the arguments for and against God's existence. What can be said is that no argument for God's existence receives universal support and/or proceeds without criticism. To be honest, if you look at the field of philosophy as a whole, theism is losing. Seventy-three percent of philosophers are atheists, per a recent survey (see Bourget and Chalmers 2014).
The question of theism is only really considered within philosophy of religion and theology, with most other academic disciplines either not addressing metaphysical questions like this (since they are outside the purview of science) or addressing the issue in a way that does not take a position on the actual ontological question (for example see Ehrman 2014). Since theology explicitly assumes that God exists, this won't actually help us answer the question of whether God exists or not. So we'll have to look at philosophy of religion.
This does not mean that philosophers of religion are an unbiased group. Per the survey cited earlier, philosophers who are theists tend to cluster in philosophy of religion. This is to say that theists are disproportionately represented in philosophy of religion (see section 3.3 of Bourget and Chalmers 2014). Nonetheless, those atheists who actively work in the philosophy of religion meticulously critique every argument for God's existence that gets churned out, as we'll see. Let's take a look at one argument for God's existence, as well as the atheists' responses.
The Teleological Argument
In the early 19th century, there was an approach to theology called natural theology. Natural theologians assumed God's existence and sought to discover the grandeur of God's handiwork by studying the natural world. Its best-known advocate was William Paley, who promoted natural theology as a method of discovering (and praising) God's work; it was perceived as an act of devotion. In fact, this is why Charles Darwin's father, realizing his son's waning interest in medicine (his first career choice), recommended that Charles take up theology instead. While studying theology, Charles fell in love with the study of nature. (Ironic, isn't it?) See chapter 1 of Wright's (2010) The Moral Animal for a brief biography of Darwin.
Let's take a closer look at natural theology. One of Paley's arguments is well-known and goes by many names. We'll call it the teleological argument. It can be summarized as follows:
- A watch displays order for a purpose.
- We correctly conclude that such order was created by a maker.
- The universe also displays order for a purpose.
- Therefore, we should likewise conclude that it was created by a maker.
Paley's argument has the structure of an analogy. Just as a watch displays order for a purpose, so too, Paley claims, does the universe. And since we rightly infer that a watch has a maker, we should likewise infer that the universe has a maker. Thus, God's existence is established.
Objections
Breaking the analogy
The Great Infidel, philosopher David Hume, made a comment relevant to this argument (although Hume had died before the publication of Paley's work). Hume made the point that in order for an analogical argument to work, you have to know the two things you are comparing. That is to say if you are comparing, let's just say, life to a box of chocolates, in order for the comparison to work, you'd have to know both things fairly well. We are, of course, alive. And most of us have had experience with those boxes of assorted chocolates, where some items are very tasty but some are filled with some kind of gross red goo. The box of chocolates takes you by surprise sometimes, just like life. The analogy works because you know both things.
So here's the problem that this poses for the teleological argument: maybe you've seen a watch get made, but you've never seen a universe get created. You're comparing a thing you know and can even learn how to make (a watch) to a thing you don't understand fully and whose genesis even theoretical physicists are unsure of (the universe). So, Hume would say, the analogy doesn't work.
Denying premise #3
Another strategy is to deny that premise #3 is true. If successful, this objection would undermine the soundness of the whole argument. The argument might go like this. First off, we can say that the universe does not display purpose. Even though there are some regularities in the universe (like stable galactic formations and solar systems), none of these have any obvious purpose. What is the purpose of the universe? What is it for? These appear to be questions without answers, at least not definitive ones.

Some atheists (e.g., Firestone 2020) go further and attempt to dispel any notion that the universe might be well-ordered in any way. Firestone argues that the so-called regularities we do observe in the universe only appear to be regularities from our perspective. For example, we know that the early universe, soon after the Big Bang, was very chaotic (Stenger 2008: 121). Further, some parts of the universe are still chaotic (galaxies crashing into each other, black holes swallowing entire solar systems). We couldn't see much of that during Paley's time, and so Paley might be forgiven, but to continue to argue that the universe is well-ordered and displays function seems to be anachronistic (or out of sync with the times).
Some theists might respond to the objections above by arguing that some of the universe does have a function. Perhaps the function of our part of the universe is to harbor human life. If this is the argument, then there is a glaring problem with it. We must remind ourselves that human life on this planet is only temporary. Complex life on Earth will become impossible somewhere between 0.9 and 1.5 billion years from now, as the sun grows steadily hotter and brighter and heats up our planet (see Bostrom and Cirkovic 2008: 34). Much later, the sun will enter its red giant phase and expand, and it might consume the planet outright. In either case, harboring human life would not be a lasting function of Earth.
Lastly, even if we agree that there is some kind of order to the universe, this is not the same kind of order that is seen in a watch. Rather, it is merely the sort of pattern you would find in any complex system. That is to say, any sufficiently complex system gives rise to perceived regularities. This is usually referred to as Ramseyian Order (see Graham & Spencer 1990). In other words, Paley is guilty of an informal fallacy: he used the word "order" with two different meanings (see Firestone 2020).

Equivocation is a fallacy in which an arguer uses a word with a particular meaning in one premise, and then uses the same word with a different meaning in another premise. For example:
- Man is the only rational animal.
- No woman is a man.
- Therefore, no woman is rational.
In this argument, the word "man" is used with two senses in mind. In the first premise, "man" (sexist as this may be) refers to the human species. In the second premise, "man" refers to the male gender. If you were to replace the word "man" in each premise with their definitions, the argument would no longer flow, and it would be clearly invalid. Try it.
Firestone argues that this is what Paley did in the teleological argument. In other words, the word "order" has one meaning in premise 1 (functional order) and a different meaning in premise 3 (Ramseyian order). Hence, the argument is invalid. And so, one of the most well-known arguments for God's existence fails.
Worse problems
The Problem of Evil

John Mackie, atheist and moral error theorist.
To have no uncontested arguments for answering the ontological question in the affirmative is very problematic for the theist version of DCT. Even worse is when atheists go on the offensive (e.g., see Mackie's Evil and Omnipotence).
There is a famous argument against the existence of God. We will refer to it as The Problem of Evil. Briefly, it states that the world as we know it, a world full of suffering, is incompatible with the existence of an all-powerful, benevolent God. That is to say, if God really is all-powerful and all-loving, he would stop all the unnecessary suffering in our world, everything from natural disasters (like earthquakes) to tiny bacteria and viruses less than a micrometer wide (like Yersinia pestis, which caused the Black Death, or the H1N1 virus that caused the 1918 influenza pandemic). That is because these disasters do nothing by way of "teaching" us anything; in fact, throughout most of human history, their causes were completely unknown to us. The suffering they cause appears to be "unnecessary" in the sense that nothing is gained from it. Would an all-powerful, all-loving God really allow this?4
To take another example, re-consider our deadliest predator: the mosquito—which we first discussed in Playing God. Winegard (2019: 2) provides the following statistic: “The mosquito has killed more people than any other cause of death in human history. Statistical extrapolation situated mosquito-inflicted deaths approaching half of all humans that have ever lived.” Why does the mosquito even exist? Elsewhere, Winegard makes the point that, as far as we can tell, mosquitoes appear to only serve two functions: to kill humans and to make more mosquitoes. Does this look like the world that an all-powerful, all-loving God would create?
Here is the Problem of Evil in argument form:
- If God exists, then there would be no unnecessary suffering.
- But there exists unnecessary suffering.
- Therefore, God does not exist.
To be clear, the Problem of Evil isn't blaming suffering on God, an interpretation that some theists sometimes mistakenly make. The Problem of Evil is instead making the claim that—if we assume God as having the traits of being all-powerful, all-knowing, and all-loving—God's existence is impossible given the type of world that we live in. This is not the place for a careful analysis. But it's important to note that this is a powerful argument against theism; one that theists have to respond to satisfactorily if we are to accept the existence of God...
Debunking religion
A more ambitious strategy, from the atheist perspective, is to try to explain how religion arose as itself a product of natural selection. As we saw last lesson, Wilson begins his Darwin's Cathedral with a summary of Darwinism and a defense of the theory of group selection, where entire groups are adaptive units amenable to natural selection. In chapter 3, Wilson then begins to make the case that religious groups in particular might be adaptive units. First, he dispels the notion that belief in the supernatural is irrational.
“Suppose that behavior 1 is adaptive in situation A but has no effect on fitness in situations B through Z. Why should the afflictions caused by B-Z lead to the abandonment of behavior 1? If the behaviors… benefit the group in many respects, why should they be abandoned because of plagues, droughts, invading armies, and other afflictions beyond the group’s control? The ability of a belief system to survive these shocks is to be admired from an adaptationist perspective, not ridiculed” (Wilson 2003: 102).
What Wilson is trying to say is that it may very well be the case that religion has been harmful or has led to some erroneous beliefs. But(!) the important thing from the adaptationist perspective is that, if religion has proven to be adaptive (i.e., to help society function as a well-adapted social organism), then this is enough to establish that the hypothesis of the religious group as an adaptive unit is a viable one.
Wilson argues that religious systems like, say, Calvinism, actually do help society overcome many of its social obstacles. For example, Calvinism is internalized from a young age, since children memorize all the rules as part of their upbringing, an internalization that continues through prayer. “When it comes to turning a group into a social organism, scarcely a word of Calvin’s catechism is out of place” (Wilson 2003: 105). This internalization creates a community mindset. In addition to the belief system, a well-tailored social apparatus is prescribed by Calvinism. Strict protocols for the selection of pastors, deacons, elders, and members of the city government were codified. Moreover, everyone’s role was monitored regularly, as with the pastors’ weekly meetings to ensure purity of doctrine and the quarterly meetings whose express purpose was for the pastors to criticize each other (see p. 106). This ensured uniformity of belief, further strengthening the social unit. There was a meticulous procedure detailed for deviant citizens, and it appears that justice was meted out completely impartially, with even the sons of prominent families being incarcerated if they broke a rule. Families were also visited once a year to have their spiritual health examined (see p. 111), and church attendance was required. These are all ways that Calvinism turns the community into a social organism, capable of adapting to selection pressures such as internal tension, external threats, production of essential goods and services, etc. There is plenty more support for this view, Wilson argues. For example, Geneva clearly thrived during this time period, when Calvinism was the law of the city. Schools, hospitals, and welfare systems were initiated. Their Geneva Academy eventually attracted students from throughout Europe.6
If religiosity itself can be explained as the product of natural selection, then there'd be little reason to suppose God actually exists. Instead, we're just predisposed to believing in God, and this is because religious belief systems have been adaptive (to groups) in the past.
But...
I won't pretend that the ontological question is resolved definitively here. We will leave the question of God's existence as an open question, at least for our purposes.5 This means that, in the second half of this lesson, we can unify the atheist and theist versions of DCT and assess only their shared empirical claims: the social monitoring hypothesis and the Big Gods hypothesis.
To be continued...
FYI
Suggested Reading: John Mackie, Evil and Omnipotence
Supplemental Material—
Video: Crash Course, The Problem of Evil
Video: Bart Ehrman, God and the Problem of Suffering
Video: Crash Course, Aquinas and the Cosmological Argument
Advanced Material—
Reading: Randy Firestone, Paley’s version of the Teleological Argument is Based on an Equivocation Fallacy: There is No Order in the Universe Which Resembles the Order of a Watch
Footnotes
1. Translation by the instructor, R.C.M. García—no relation to García Márquez.
2. Recall also that we are only superficially privileging Christianity over other religions when we conceive of DCT. It just happens to be the case that most students in Southern California are more familiar with Christianity than, say, Sikhism. It's a good time to remind students that we can conceive of a DCT based on Islam, Sikhism, Hinduism, etc. We are covering the Christian (in particular Catholic) version primarily out of convenience.
3. It should be noted that I was inspired to include DCT (atheist version) by the work of Ara Norenzayan, particularly his book Big Gods. To be clear, then, when we refer to the Big Gods hypothesis in this class, we'll mean the hypothesis that human societies increased in complexity as religious devotion to Big Gods spread. It should be added that Norenzayan's own work, which could just as easily be called the "Big Gods hypothesis", makes many more claims that we are not covering. The interested student can refer to his work, which is fascinating.
4. It was only in the 20th century that tectonic theory was accepted and that germ theory began to proliferate in the medical sciences.
5. Various philosophers in the 20th century (e.g., Rudolf Carnap) argued that some questions (like questions over the existence of God) are pseudo-problems, unsolvable squabbles that don’t deserve to be pondered.
6. In the same chapter (chapter 4), Wilson also muses that the Protestant Reformation can be regarded as a large number of social experiments in different adaptive landscapes with many failures for each success—like Darwin machines in action. As you may or may not know, different versions of Protestantism proliferated after Luther's protest. In some cities, some version of Protestantism stabilized (i.e., adapted) and was adopted by most of the members of the community (i.e., created offspring); in other cities, notably in Münster, an unstable form of Protestantism tried to take hold and the result was disastrous (i.e., extinction).
✝ I can imagine someone being upset at the word revelation being in scare quotes. It's just for the sake of being inclusive to the atheists.
Death in the Clouds (Pt. II)
The social sciences and cognitive science, like all empirical disciplines, cannot directly address the question of God's existence, which appears to be a non-empirical matter, i.e., no experiment or observation could either prove or disprove God's existence.
Much ink has been spilled by atheists in giving their numerous objections and responses to any conceivable argument for God's existence, e.g., Paley's watch analogy.
With regards to arguing against God's existence, a common argument from the atheist camp is known as the Problem of Evil. This argument states that the existence of an all-powerful, all-knowing, all-loving God is incompatible with the unnecessary suffering that we see in the world around us. Since it is unreasonable to deny that unnecessary suffering exists, the atheists argue, the only rational response is to abandon belief in an all-powerful, all-knowing, all-loving God.
In this class, we are leaving the dispute about God's existence as an open question. This allows us to drop the ontological question and unify DCT (atheist version) with DCT (theist version). Moreover, although much data from social psychology suggests that the social monitoring hypothesis could be true, there are stunning disconfirmations of the hypothesis too, e.g., Batson's Good Samaritan experiments. Thus, we are labeling the social monitoring hypothesis an open question. The Big Gods hypothesis fares no better. Turchin and colleagues have recently launched a massive attack on the view and we are awaiting a response from Norenzayan. For now, we will also label it an open question.
FYI
Suggested Viewing: Closer to Truth, Justin Barrett - Does Evolutionary Psychology Undermine Religion?
Supplemental Material—
Video: Andy Luttrell, Good Samaritan Study
Related Material—
Reading: Don Marquis, Why Abortion is Immoral
Reading: Mary Anne Warren, On the Moral and Legal Status of Abortion
Reading: Judith Jarvis Thomson, A Defense of Abortion
Advanced Material—
Reading: Harvey Whitehouse et al., Complex societies precede moralizing gods throughout world history
When in Rome...
How are you going to teach virtue if you teach the relativity of all ethical ideas? Virtue, if it implies anything at all, implies an ethical absolute. A person whose idea of what is proper varies from day to day can be admired for his broadmindedness, but not for his virtue.
~Robert M. Pirsig
Logical Analysis
Refresher
Today we reassess cultural relativism. Relativism was alluring to many intellectuals who were disenchanted with Western values after witnessing the pointless brutality of World War I (Brown 2008: 364-5). It appears that once you've been wholly alienated from your own culture's values, you are more capable of seeing other cultures in a new light, with fewer reservations and with a renewed commitment to assess these alien cultures by their own internal logic. Students today are also attracted to the view because it makes the somewhat obvious point that cultural evolution plays a dominant role in why we see so much variance in the world's moral codes. Moreover, prior to the introduction of radical moral skepticism into the fray, anyone that denied that moral properties are mind-independent (i.e., moral non-objectivism) could only choose between relativism, Hobbes' social contract theory, and ethical egoism. And, to be honest, cultural relativism does seem like the most palatable of these. However, some theorists argue that, not only is relativism false, it's dangerous as well (see the next section).

But before we begin our reassessment, let's remind ourselves of some important features of classical cultural relativism. Relativism, at least in its classical form (which is the one we are covering), accepts the notion of relative truth. That is to say that cultural relativists believe that some things can be true for some people and not for others. This is an epistemic position, a philosophical position about the nature of truth. These kinds of philosophical positions cannot be demonstrated through empirical experimentation; rather, they have to be argued for. However, cultural relativism does make an empirical claim—the second thing we have to remind ourselves of. The relativist claims that there are major differences in the moralities that people accept but that these differences do not seem to rest on actual differences in situation or disagreements about the facts. In other words, in different cultures, people generally agree on the facts and on the similarity of their situations, and yet they still have different values.
Important Concepts
Before reassessing classical cultural relativism, let's also perform a logical analysis of the view itself as well as the argument put forward for the view. To this end, we will need to learn some new logical concepts. We'll learn those in the Important Concepts below.
The quest for soundness
When first learning the concepts of validity and soundness, students often fail to recognize that validity is independent of the actual truth of the premises. Validity merely means that if the premises were true, the conclusion would have to be true. Determining validity is a necessary first step in assessing an argument; once you've decided that an argument is valid, you proceed to assess each individual premise for truth. If all the premises are true, then we can further brand the argument as sound. If an argument has achieved this status, then a rational person would accept the conclusion.1
Let's take a look at some examples. Here's an argument:
- Every painting ever made is in The Library of Babel.
- “La Persistencia de la Memoria” is a painting by Salvador Dalí.
- Therefore, “La Persistencia de la Memoria” is in The Library of Babel.

Jorge Luis Borges (1899-1986).
At first glance, some people immediately sense something wrong about this argument, but it is important to specify what is amiss. Let's first assess for validity. If the premises are true, does the conclusion have to be true? Think about it. The answer is yes. If every painting ever made is in this library and "La Persistencia de la Memoria" is a painting, then this painting must be housed in this library. So the argument is valid.
But validity is cheap. Anyone who can arrange sentences in the right way can engineer a valid argument. Soundness is what counts. Now that we've assessed the argument as valid, let's assess it for soundness. Are the premises actually true? The answer is no. The second premise is true (see image below). However, there is no such thing as The Library of Babel; it is a fiction invented by the Argentine writer Jorge Luis Borges. So the argument is not sound, and you are not rationally required to accept its conclusion.
Here's one more:
- All lawyers are liars.
- Jim is a lawyer.
- Therefore Jim is a liar.
You try it!
Cultural relativism seen anew
With all the moving parts in place, we can now rebuild our argument for cultural relativism. It is as follows:
- There are major differences in the moralities that cultures accept.
- If different cultures accept different moralities, then each culture’s morality is true for them. (This is the missing premise.)
- Therefore, each culture’s morality is true for them.
The argument appears to be valid. In fact, it is in a pattern known as modus ponens, which is a pattern of reasoning that is always valid. With validity established, we can move on to assess the argument for soundness, i.e., to check whether the premises are actually true. Premise 1 appears to be true. In fact, it is obviously true: there are widely different sets of accepted customs and practices. The question, though, is whether or not premise 2 is true. Accepting this premise as true is to implicitly accept the notion of relative truth. Thus, we move on to that topic next.
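If you'd like to see for yourself why modus ponens can never take you from true premises to a false conclusion, here is a minimal sketch in Python (purely illustrative, not part of the course readings) that brute-forces the truth table for the pattern. The comments map the schematic letters onto the relativism argument above; the check itself is just the definition of validity: no assignment of truth values makes every premise true while the conclusion is false.

```python
from itertools import product

def implies(p, q):
    # Material conditional: "if p then q" is false only when p is true and q is false.
    return (not p) or q

# p: "Different cultures accept different moralities."  (premise 1)
# q: "Each culture's morality is true for them."        (conclusion)
counterexamples = []
for p, q in product([True, False], repeat=2):
    premise_1 = p
    premise_2 = implies(p, q)  # "If different cultures accept different moralities,
                               #  then each culture's morality is true for them."
    conclusion = q
    if premise_1 and premise_2 and not conclusion:
        counterexamples.append((p, q))

# No row makes both premises true and the conclusion false, so the FORM is valid.
print("valid" if not counterexamples else f"invalid: {counterexamples}")
```

Note that this little check settles only validity. Whether the premises are actually true, and hence whether the argument is sound, is a separate question, and premise 2 is exactly where the trouble lies.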
Alethic relativism
We've seen that, for the argument for classical cultural relativism to be sound, the notion of relative truth must itself be true. This "notion of relative truth" actually has a name, which I've avoided stating until now. It is called alethic relativism: the claim that what is true for one individual or social group may not be true for another. It appears that alethic relativism is essential to all forms of relativism (including our classical moral relativism), since all (or most) other forms of relativism are, in principle, reducible to alethic relativism (see §3.4 of the Stanford Encyclopedia of Philosophy's entry on Relativism).
So the view about truth needed to make the argument for cultural relativism sound has a name, but it also has many critics. The most common charge against the alethic relativist, it seems, is the accusation of self-refutation. In other words, critics accuse alethic relativism of the most loathsome trait that can be found in a philosophical theory: refuting itself. Consider this. By its own logic, the truth of alethic relativism would have to be relative. So if you believe that truth is relative, as a relativist, you can only really say that that's the truth for you. In other words, if you are an alethic relativist and I am not, there seems to be no way that you can convince me that your view is true; by your own logic, you can only state that my view is true for me and your view is true for you. A version of this argument goes back to Plato (see Plato's Theaetetus: 171a–c):
- Most people believe that Protagoras’s doctrine is false.
- Protagoras, on the other hand, believes his doctrine to be true.
- By his own doctrine, Protagoras must believe that his opponents’ view is true.
- Therefore, Protagoras must believe that his own doctrine is false.

Donald Davidson (1917-2003).
Of course, this relativizing of truth appears to be either absolutely absurd (collapsing the distinction between truth and falsity) or at least terribly misleading. According to Donald Davidson, if alethic relativism were true, translation between different languages would be impossible: assuming that other people (in general) speak truly is a prerequisite of all interpretation (ibid.). So, if people from different cultures went around using their own personal notions of truth, two people from different cultures would never be able to communicate. Obviously, communication between cultures does occur—even if there are occasional hiccups. Alethic relativism, then, appears to be self-defeating and entirely at odds with our commonsense notions of translation and the distinction between truth and falsity.
The debate over alethic relativism is ongoing, however. Thus, the best we can do in this course is allow the matter of the truth of alethic relativism to remain an open question. Nonetheless, we can at this point say this: the argument for classical cultural relativism explored in this course cannot be shown to be sound, since its premises are not all known to be true. This is not to say that it is unsound; we simply have to leave the matter there.
Food for thought
Other conceptual considerations
Rachels (1986) argues that accepting cultural relativism has some counterintuitive implications, such as:
- We could no longer say that the customs of other societies are morally inferior to our own. But clearly bride kidnapping is wrong.
- We could decide whether actions are right or wrong just by consulting the standards of our society. But clearly if we were to have lived during segregation, consulting the standards of our society would've resulted in believing that segregation is morally permissible (and that's obviously false).
- The idea of moral progress is called into doubt. For example, if we are relativists, then we could not say that the end of the Saudi ban on women driving is moral progress (but it clearly is).

Damnatio ad bestias.
Let's take a specific example to make this more clear. If cultural relativism is true, then we cannot even morally judge some ancient Roman practices. Romans and their subject peoples would gather to watch various events that today we would find morally reprehensible. For example, public executions were held during lunch time at their many colosseums. These executions were sometimes by way of damnatio ad bestias (Latin for "condemnation to beasts"). In these grisly executions, criminals, runaway slaves, and Christians were killed by animals such as tigers, lions, and bears. Floggings were regular, as was gladiatorial combat, which sometimes (but not always) would result in death. There was animal-baiting and there were animal battles, such as when Pompey staged a fight between heavily armed gladiators and 18 elephants. Personally, though, the spectacle I find the most gruesome is the fatal charades. These were plays in which the plot included the death of a character. And so these stories were acted out by those condemned to die, and the deaths were real (see Fagan 2011 and Kyle 2001).
The practices of the Romans are questionable even when we look past the Roman games. For example, they appear to have come to sound moral conclusions but for the wrong reasons. Fagan describes an instance of this:
“One of the chief arguments against maltreatment of slaves in the ancient sources is not the immorality of handling a fellow human being harshly but the deleterious effect such behavior had on the psyche of the owner and the loss of dignity inherent in losing one’s temper” (Fagan 2011: 24-5).
Clearly, though, the reason why we shouldn't treat fellow human beings harshly is that they have dignity and equal moral worth; it has nothing to do with how undignified it is to lose one's cool. This would be like saying that someone who punches their child in the face in public is doing something wrong because losing one's cool like that is best done in the home. Clearly, the wrongness comes from what is being done to the child. But the Romans seem not to have recognized this, at least when it came to slaves. That seems morally condemnable.
Summary of conceptual issues regarding cultural relativism
In short, classical cultural relativism not only needs alethic relativism for a proper defense, which entails (among other things) collapsing the distinction between truth and falsity and doing away with the possibility of translation, but it also does away with intuitive notions like moral progress. So it seems that, on conceptual grounds alone, classical cultural relativism is untenable, i.e., not capable of being defended. But we still have to assess its empirical claims. We move to that task next.
When in Rome: Moral Relativism, Reconsidered
Too absolutist(?)

La Piedra del Sol.
Students who are sympathetic to cultural relativism have brought up in class that the denial of cultural relativism might be morally disastrous in its own way. For example, if cultural relativism isn't widely believed, then people will think it is acceptable to invade other cultures in order to ameliorate their questionable moral practices. Under this way of thinking, perhaps the conquest of the Mexica people by the Spaniards is morally justified in that at least the Spaniards ended the practice of human sacrifice. This, goes the objection, is not okay, since cultures should be preserved but are instead being erased by this moral absolutism.2
It is true that some counterintuitive cultural practices do stabilize in a given region and become a cultural representation. Adam Smith himself reflected on this. In his Theory of Moral Sentiments (part 5, chapter 2), Smith first argues that the moral sentiments are not as flexible as the sentiments associated with beauty. In other words, it seems like norms regarding beauty vary much more widely than do norms regarding what is virtuous behavior. For example, no set of customs will make the behavior of Nero agreeable—behaviors which included the murder of his mother and engaging in lots of sexual debauchery. Smith also mentions that some practices, such as infanticide, have stabilized in certain societies because they were first practical. Only after this stage of practicality were they preserved by custom and habit. He makes the case that “barbarous” peoples are sometimes in a perpetual state of want, never being able to secure enough sustenance for themselves. It is understandable, Smith argues, that in these communities, infants should be exposed to the elements, since to not do so would only produce a dead parent and child, rather than just a dead child. Smith ultimately disapproves of this practice, of course, but he is making a case for how this practice can become part of a culture.3

The 'No female genital mutilation' symbol.
In any case, the point of this digression is to suggest that the rejection of cultural relativism does not necessarily require the acceptance of moral absolutism. In theory, they could both be wrong. One could, of course, opt for something like moral sentimentalism or radical moral skepticism. Despite the label "radical", radical moral skepticism is actually far more amenable to a sort of "middle position". A skeptic might say the following. It is a good principle to respect the cultural practices of others, if there is no fundamental disagreement on the facts. But too many cultures believe blatantly false propositions, for example that having sex with a virgin can cure HIV or that women are not competent to represent their own interests. These beliefs are factually wrong, and the practices built on them are morally wrong. But the invasion of other countries to ameliorate unjustified practices might also be morally wrong. In other words, we can say no to both. The main point here is that several morally abhorrent practices are protected by the invisible shield of cultural relativism, and that protection is ludicrous. It would be better to explain the phenomenon through a careful analysis, like that of Smith, and try to figure out how to update the beliefs of people so that these unfair, unreasonable practices can be stopped. Steven Pinker expresses these sentiments well:
“If only one person in the world held down a terrified, struggling screaming little girl, cut off her genitals with a septic blade, and sewed her back up, leaving only a tiny hole for urine and menstrual flow, the only question would be how severely that person should be punished and whether the death penalty would be a sufficiently severe sanction. But when millions of people do this, instead of the enormity being magnified millions fold, suddenly it becomes culture and thereby magically becomes less rather than more horrible and is even defended by some Western moral thinkers including feminists” (Pinker 2003: 273).
In this lesson, we focused on two aspects of classical cultural relativism: the conceptual matter of the truth of alethic relativism, the claim that what is true for one individual or social group may not be true for another, and the empirical claim that disagreements in value do not stem from disagreements about the facts; we also reformulated our argument for classical cultural relativism—filling in the missing premise—and briefly looked at other conceptual issues with relativism.
With regards to alethic relativism, it is certainly not obvious that it is true. In fact, it is not clear how relativists can overcome the charge of self-refutation.
Moreover, it appears that moral relativism is incompatible with the notion of moral progress. However, it appears that moral progress does exist. So, we must conclude that classical cultural relativism is conceptually untenable.
Lastly, the one empirical claim made by classical cultural relativists appears to be false. In fact, cultures with widely-divergent practices routinely disagree on the facts.
FYI
Suggested Reading: James Rachels, The Challenge of Cultural Relativism
Supplemental Material—
Video: BBC Ideas (A-Z of ISMs Episode 18), Relativism: Is it wrong to judge other cultures?
Reading: Maria Baghramian and J. Adam Carter, Stanford Encyclopedia of Philosophy entry on Relativism
Advanced Material—
Reading: Michael F. Brown, Cultural Relativism 2.0
- Note: Brown abandons classical cultural relativism, and instead promotes his own brand.
Footnotes
1. Another common mistake that students make is that they think arguments can only have two premises. That's usually just a simplification that we perform in introductory courses. Arguments can have as many premises as the arguer needs.
2. James Maffie, in his chapter of Latin American and Latinx Philosophy: A Collaborative Introduction, argues that the Mexica did not practice human sacrifice, properly speaking. This is because sacrifice entails the process of making sacred, which can be seen from the etymology of the word (sacer meaning "holy" or "sacred" and facere meaning "to make"). However, for the Mexica, everything was already sacred. Rather, the Mexica engaged in reciprocal gift-giving with the gods. To be sure, there is plenty of evidence that the "obligation repayments" of the Mexica to the gods did include human blood and hearts; however, according to Mexica logic, these wouldn't really be sacrifices, per se (see Sanchez 2019: 13-35).
3. Recall that John Miller and Scott Page note that Smith’s work is an early example of complexity studies. Smith also seems to be discussing moral norms as the products of social dynamics.
The Enigma of Reason
Whereas reason is commonly viewed as the use of logic, or at least some system of rules to expand and improve our knowledge and our decisions, we argue that reason is much more opportunistic and eclectic and is not bound to formal norms. The main role of logic in reasoning, we suggest, may well be a rhetorical one: logic helps simplify and schematize intuitive arguments, highlighting and often exaggerating their force. So, why did reason evolve? What does it provide, over and above what is provided by more ordinary forms of inference, that could have been of special value to humans and to humans alone? To answer, we adopt a much broader perspective. Reason, we argue, has two main functions: that of producing reasons for justifying oneself, and that of producing arguments to convince others.
~Dan Sperber & Hugo Mercier
Historical trends and forces

Just as cultural relativism was forged during and shaped by important historical events, such is the case with the remaining theories left in this unit. Vernant (1965) reminds us that the era in which Aristotle was operating was one in which the Greeks were moving towards rationalism and away from myth. He writes that Greeks were no longer satisfied with supernatural explanations for natural phenomena. "[T]he powers that make up the universe and whose interplay must explain its current organization are no longer primeval beings or the traditional gods. Order cannot be the result of sexual unions and sacred childbirth” (Vernant 2006: 219). Intellectuals, as they were moving away from myth, put all their stock in the power of reason. And so Aristotle dared to reduce morality to being realized only in human dispositions and behaviors. The same is true for utilitarianism and Kantianism. Alasdair MacIntyre (1981) reminds us that utilitarianism and Kantianism were a result of what he calls the hubris of the Enlightenment. That is to say that these theories were developed during a time when faith in reason was at a high point. Anthropologist Robin Fox, in fact, thinks academics are still too enamored with reason (see Fox 1989: 233-4). And it was out of this arrogance that came the attempt to ground morality itself in reason. Hubris indeed. Last but not least, moral skeptics are heavily under the influence of a revolution in science that is still taking place: the Darwinian inversion of reason.
Today we are reassessing virtue ethics and Kantianism, jointly. This is because in both theories reason plays a central role...
Virtue ethics
Aristotle, of course, believed that we can use reason to tease out what virtues we ought to strive for. By studying vices, which are either deficiencies or excesses of a given trait, we can surmise the mean (the middle). In other words, we can see that cowardice is a vice, but we can also see that at the other end of the spectrum is rashness, which is also a vice. Neither is a desirable trait. But through reason we can infer that between cowardice and rashness lies courage, and so reason can help us discover that courage is a virtue.
Aristotle goes further. He believed that we could use our intellectual capacities to train ourselves to be virtuous. Through practice, we can make ourselves disposed to performing the right actions without any internal conflict. But practice, of course, has to be deliberate. And so reason again plays a vital role: reason helps us acquire virtues. This is indeed a lot of faith in reason and its capacity to acquire accurate beliefs. Aristotle is implicitly making the claim that unaided reason can lead us to accurate beliefs about virtues and about how to best train ourselves to embody those virtues. Let's call this the intellectual function hypothesis.
Another empirical claim that Aristotle made was the following. Aristotle believed that once we've developed the right virtues, the right actions will flow out of us when we are put in certain situations. This can be tested, in theory. Find a virtuous person, put them in a situation that challenges their integrity, and see how they perform. On balance, a virtuous person should do the right thing more often than a non-virtuous person. Let's call this the good-character-good-behavior hypothesis.
One last point before moving on... There are other traditions of virtue ethics. We covered, for example, the ethics of care, a Buddhist virtue ethics, and Nietzsche's views as virtue ethics. But these all pull their theoretical power from Aristotle's basic reasoning. So, for our purposes, we will treat them as a group. This means that if Aristotle falls, they all fall.
Kantianism
Recall that Kant believed that human reason (i.e., the capacity to make inferences) is the source of moral law. In fact, Kant believed that reason is the basis not only of our moral code but also of our belief in God, free will, and even the immortality of the soul. He argued for this through his transcendental deduction (see Rohl 2020). Although duty, as opposed to virtue, is central to Kant's moral theory, Kant still believes that human reason is a very powerful faculty. As such, we will also count him as positing the intellectual function hypothesis.1
The intellectual function hypothesis

A crow using a tool.
What is the evolutionary function of reason? In other words, what did reason evolve to do? The question might seem strange at first. We are not used to thinking about our ability to reason, or to make inferences, as a biological trait that evolved. After a little reflection, however, we realize that the capacity to reason is indeed an evolved trait that we (humans) have but that, say, a caterpillar does not. We now know that various animals perform what appear to be intelligent behaviors through instinct. But we think, we learn, and we make inferences to a degree not matched in the animal world. Sure, some crows and non-human primates display an excellent ability with tools, but no crow or non-human primate could learn how to write a computer program or work out a calculus problem, no matter how long we trained them. So this capacity must've evolved at some point, for some reason.
Aristotle and Kant, although they never explicitly said this (since they were around before the theory of Darwinian evolution came about), would claim that reason evolved so that we could form better, more accurate beliefs. Reason is for understanding the world, so that we may better respond to it, and for understanding our lives, so that we may better live them. This is what we are calling the intellectual function hypothesis. And so far it has looked pretty good for them. Even Darwin himself seems to have thought this was the likely reason for why reason evolved.
“Darwin was content to explain the acquisition of our species’ cognitive abilities as a result of the pressure of natural selection on our precursors over long periods of time. And most scientists today, it would seem, concur with him... Wallace [co-inventor of the notion of evolution by natural selection], however, simply could not see how natural selection could have bridged the gap between the human cognitive state and that of all other life forms. What he did see was the breadth and depth of the discontinuity between symbolic and nonsymbolic cognitive states... As far as we know, modern human anatomy was in place well before Homo sapiens began behaving in the ways that are familiar today” (Tattersall 2008: 101-2; emphasis added).
Nonetheless, in just the last decade or so, a new type of theory about the origins of reasoning has emerged. In fact, many cognitive scientists are moving towards the view that our capacity to reason has a social-communicative origin. This means that reason did not specifically evolve for truth-tracking (the intellectual function). Instead, reason appears to have originated to help us solve the social coordination problems that arise from living collaboratively, a way of living we've seen is part of our evolutionary history.
For example, Tomasello (2018) argues that we developed our capacity to reason in order to better communicate reasons for plans of action to the group. Because collaborating effectively was a matter of life or death, tribes with members who were better able to justify their proposed plans of action were more likely to outcompete tribes that lacked such members, and their members were more likely to survive and pass on their genes. Slowly, tribes with members able to give reasons for their actions dominated, and so these genes became dominant in the gene pool.
Another theory, this time by Mercier and Sperber (2017), makes the case that we developed reason in order to win arguments. This is because, for most of our evolutionary history, to be without a tribe was certain death. So, if you did something against some member of your tribe, especially during trying times, your only hope was to explain yourself (in the hope of being forgiven). Those persons with the capacity to give reasons for their actions and views were more likely to survive and hence pass on their genes. Here is Hugo Mercier explaining his view:

Note that these theories aren't entirely incompatible. Moreover, they explain why our reasoning capacity seems to have a built-in confirmation bias. Think about it: you are naturally disposed to favor your own view over that of someone else. Why would this be? One good reason might be that reason evolved to defend your view, to explain your actions, to argue for your plan. This line of thinking has become so convincing that, at the end of his A Natural History of Human Thinking, Tomasello makes the case that some social-communicative theory must be true.
What does this mean for us? It means two things. First off, if reason has a built-in confirmation bias, then there is a risk that reason does not help us arrive at virtues that allow for human flourishing (eudaimonia) or at objective moral values, but instead merely defends our pre-existing biases about how we think we should live. I mean, isn't it convenient that when Aristotle thought about the way one should live, the virtues he came up with were those of an aristocrat, right around the time he was tutoring Alexander the Great? That smells like confirmation bias. Just like in the Necker cube pictured, you see what you want to see.2
Second, if a social-communicative type of theory is true, then that means that an intellectual function theory is not true. In other words, reason doesn't do what Aristotle and Kant thought that it did. A swing and a miss...
Food for Thought
Good character, good behavior?
Batson comes back
Now we must address the good-character-good-behavior hypothesis. We begin with a return to the work of Daniel Batson. Batson (2016: 42-3) reviews the data from the Character Education Inquiry, a massive, longitudinal study into schoolchildren’s honesty and generosity. The results are not good for Aristotle… “Rather than a general trait (i.e., virtue) of honesty, many children seemed to have more nuanced standards tuned to specific circumstances. Instead of ‘Thou shalt not cheat,’ the behavior of many was more consistent with, ‘Thou shalt not cheat unless you need to in order to succeed and can be certain you won’t get caught.’” So, at least in children, good character did not necessarily result in good behavior. They seemed to primarily follow very self-serving rules, once again showing how dominant confirmation bias is in our cognition. But it gets worse...
Doris' empirical assault

The findings of social psychology that started trickling out in the 20th century are fascinating, and much ink has been spilled about what they tell us about human nature. John Doris saw something very particular in this data: the possibility of refuting virtue ethics. In Lack of Character, Doris (2010) goes on an all-out empirical assault on the theory.
Let's begin with Milgram's famous experiments on obedience. I'm sure many of you have heard of Milgram's experiments, so I will not recount the entire setup here (but you can see the video below). The finding of interest for us is that a majority of subjects, when coaxed by a researcher (just a man in a lab coat), gave the maximum-voltage shock to the learner despite the learner's protestations, complaints of heart trouble, screams, and eventual silence. All in all, 65% of subjects administered all the shocks, despite the nervous laughter exhibited by many of them (who presumably knew they were doing something wrong). These findings were replicated many times by Milgram (and partial replications were done by others, e.g., Burger 2009). One replication even involved real shocks delivered to puppies (Sheridan and King 1972), during which many of the subjects were openly crying.
Another experiment of relevance was performed by Philip Zimbardo, who, coincidentally, went to high school with Stanley Milgram. This has come to be known as the Stanford Prison Experiment. Since many of you also know the background of this experiment, I won't recount it. The finding of interest, however, is that young male Stanford students randomly assigned to be guards took on the role and began to behave in increasingly sadistic ways towards the prisoners. This escalated until the experiment had to be terminated. Below you can see the preview for a film dramatization of the experiment.

Dirty tissues.
To these very famous experiments we can add more mundane but equally perplexing findings. For example, we make harsher value judgments when we breathe in foul air (Schnall et al. 2008) or have recently had bitter (as opposed to sweet) drinks (Eskine, Kacinik, and Prinz 2011). Good smells appear to promote prosocial behavior (Liljenquist, Zhong, and Galinsky 2010). Apparently, washing your hands before filling out questionnaires makes you more moralistic (Zhong, Strejcek, and Sivanathan 2010). In fact, merely answering a questionnaire near a hand-sanitizer dispenser makes you temporarily more conservative (Helzer and Pizarro 2011). A person’s tendency to act dishonestly can be enhanced by wearing sunglasses or being placed in a dimly lit room (Zhong et al. 2010). Lastly, moral opinions can be made harsher if there is a dirty tissue nearby (Schnall, Benton, et al. 2008).
This is only the tip of the iceberg with regards to the so-called situationist challenge. Doris fills his book with such experiments. But what do we learn from this? Doris puts it this way:
“Social psychologists have repeatedly found that the difference between good conduct and bad appears to reside in the situation more than in the person” (Doris and Stich et al. 2006; emphasis added).
A fellow critic of virtue ethics, Gilbert Harman, goes further and claims that we must now be skeptical of the very existence of robust character traits in humans. He puts his tentative conclusion this way:
“I do not think that social psychology demonstrates there are no character traits [or virtues]... But I do think that results in social psychology undermine one’s confidence that it is obvious there are such traits” (Harman 2008: 12; interpolation is mine).
In short, the social situation you are in appears to dominate your behavior, moral or otherwise—this is a basic tenet of situationism. If you want to guarantee good behavior, you are apparently better off controlling your social environment than developing some set of virtues. Big miss for Aristotle.
Taking stock...
In this lesson we addressed the intellectual function hypothesis, the view that reason is for acquiring more accurate beliefs about ourselves and the world—a view that both Aristotle and Kant seem to accept. We also considered the good-character-good-behavior hypothesis, the view that if one develops robust virtues then one will be inclined to behave in the right ways when situations that call for virtue arise—a hypothesis advanced by Aristotle.
With regards to the intellectual function hypothesis, we can say that theories regarding the origins of our capacity to reason are veering towards a social-communicative origin—not an intellectualist one. Moreover, social-communicative theories about the origins of reason explain why we seem to have a built-in confirmation bias—which appears to be a feature of our capacity to reason rather than a bug.
With regards to the good-character-good-behavior hypothesis, the situationist challenge embodied by experiments like the Stanford Prison Experiment and Milgram's obedience studies seems to radically undermine our confidence that character traits (like virtues) are more predictive of behavior than the social situation one finds oneself in.
Although the good-character-good-behavior hypothesis was labeled as not true, even if we soften that verdict to an open question, the situation is not good for Aristotle, at least as far as the social science is concerned. Aristotle is dismissed from further analysis, while Kant lives to fight another day—against the utilitarians.
FYI
Suggested Reading: John Doris and Stephen Stich, et al., Virtue Ethics and Skepticism About Character
TL;DR: Crash Course, Social Influence
Supplemental Material—
- Video: Philip Zimbardo, Power of the Situation
- Video: VSauce, The Future of Reasoning
- Audio: Philosophy Bites Podcast, Dan Sperber on The Enigma of Reason
  - Note: Both audio and links to Sperber's work are found in this link.
- Video: The You Are Not So Smart Podcast, Why do humans reason? Arguments for an argumentative theory
  - Note: Both audio and a transcript of the podcast are found in this link (near the bottom).
Advanced Material—
- Reading: Hugo Mercier and Dan Sperber, Why do humans reason? Arguments for an argumentative theory
- Reading: John Doris, Persons, Situations, and Virtue Ethics
Footnotes
1. Kant in fact makes other empirical claims. For example, as part of his transcendental idealism, he argued that time is not objectively real—a view that some contemporary physicists are backing (see Rovelli 2018). Kant also made accurate astronomical predictions, such as the view that other planets must exist outside of our solar system, and he proposed the nebular hypothesis (the hypothesis that the Solar System formed from a cloud of gas and dust). He also made ghastly "anthropological" claims (as was discussed in The Tree of Knowledge...) which are not worthy of further mention.
2. The work of Thomas Gilovich (1991) is an excellent primer on motivated reasoning and confirmation bias. Basically, Gilovich finds that when people have a pre-existing bias to believe something, they look at the available evidence and ask themselves, “Can I still believe the thing I want to believe?”, and they look for ways to make that possible. When they want to disbelieve something, they look at the available evidence and ask, “Do I have to believe the thing I don’t want to believe?”, and they look for ways to avoid believing it.
The Trolley
(Pt. II)
Although many philosophers used to dismiss the relevance of neuroscience on grounds that what mattered was “the software, not the hardware”, increasingly philosophers have come to recognize that understanding how the brain works is essential to understanding the mind.
~Patricia Churchland
Bad blood
As we've seen, Kantian deontological reasoning routinely clashes with utilitarian consequentialist reasoning. The two camps have differed on how to assign moral personhood (whether through sentience or through the Rational Being criterion), on what our basic approach to sex should be (more liberal or more conservative), on the justification behind capital punishment (deterrence or retribution), on the morality of animal use, on whether we must earn-to-give or simply give what we feel is right (since charity is only an imperfect duty for Kant), on the permissibility of recreational drug use, and on the morality of physician-assisted suicide. Today we let these two have one last battle.1

A dopamine molecule.
Today we are first covering the only clearly empirical claim that utilitarians unambiguously make, and it happens to be a pillar of their view: hedonism. Recall that this is the view that the only thing humans intrinsically value is pleasure/happiness. This is not to be confused with a related claim: moral naturalism. This second claim asserts that moral properties just are physical properties. It is the combination of these views that gives utilitarianism its argumentative force. The strategy for the utilitarian is to first get you to agree that all humans intrinsically desire is pleasure (hedonism) and then to agree that this physical property of pleasure just is the moral property of goodness (naturalism). This makes you realize that what all humans really want is the good. Throw in some collectivist consequentialism, and the result is that what is morally right is what maximizes happiness for all sentient beings involved.
Utilitarians also make non-empirical claims, some of which we just mentioned. Let's list them off. Consequentialism is the claim that questions of moral rightness or wrongness depend on the consequences of the act in question; this, utilitarians claim, is an intuitive truth—something that just seems intuitively true. Most utilitarians also endorse empiricism: the view that the best method for learning about the world around you is through the senses and with the aid of observation and experimentation. In a word, science is the best way to discover what the world is like. Empiricism comes in handy for utilitarians when discovering which actions do in fact produce the most utility. Collectivism (as opposed to egoism) is the view that when making moral judgments you should take into account the well-being of all persons who will be affected, not just yourself. And of course there's moral naturalism: moral properties are actually physical properties.2
What does this mean for us? It means two things. First, empirically speaking, we can only really assess the truth of hedonism. The other tenets require philosophical argumentation, not empirical validation. Second, it turns out that hedonism and naturalism are orthogonal, or philosophically independent. If hedonism is true, this doesn't mean that naturalism is true by default. We still need an argument for naturalism. In fact, as we've seen, moral naturalism has always been the weak link for utilitarianism. G.E. Moore argued against moral naturalism with his Open Question Argument. Even the moral skeptics, who are all about "biologicizing" the study of ethics, are not impressed by naturalism in ethics (see Joyce 2016: 6-7 or review the lesson titled The Trolley (Pt. I)). Nonetheless, if hedonism turns out to be false, things look pretty bad for utilitarianism, since its one empirical claim would have failed. Even DCT did better than that. So, is hedonism true? Let's review the relevant empirical data now.
Stumbling on Happiness
Positive psychology is a growing field with many interesting viewpoints and insights for how we should live our lives. That field, however, is best studied in a psychology course. Here I will only bring up a topic that is relevant to our undertaking: affective forecasting errors. Let me introduce this topic with a question. Do you know what will make you happy? Per the work of Daniel Gilbert, synthesized in his 2007 bestselling book Stumbling on Happiness, the answer is simple: No. You reliably make incorrect predictions about how you will feel in the future, and hence you reliably make mistakes about what will make you happy (see also Wilson and Gilbert 2003). Let's look at some examples.

In one famous study, subjects were put into two groups. Both groups were asked to rate two posters and were gifted the one they liked most. One group, however, was asked to give reasons for their preferences, while the other group didn't have to. In other words, the first group had to think about why they liked the poster they chose; they had to ask themselves, "Why does this poster make me happy?" Contacted three weeks later, those who gave reasons for their preference were less satisfied than those who didn't. In other words, if you have to think about why you like something, you are likely to get it wrong; you're better off going with your gut (Wilson et al. 1993).
I could go on with studies like this forever... We consistently overestimate how happy we'll be on our birthdays (Wilson et al. 1989). We consistently underestimate how happy we'll be on Mondays (Stone et al. 2012). We expect dramatic events to negatively affect us for much longer than they actually do, and subtle events to hardly affect us at all (Wilson & Gilbert 2003), such as when female subjects overestimate the negative affective impact of hostile sexism and underestimate the negative impact of more subtle sexism (Bosson 2010). Most people expect to regret foolish actions more than foolish inactions (Kahneman & Tversky 1982), yet people are more likely to actually regret the things they didn't do than the things they did (Gilovich & Medvec 1995). Although parents often report that their children are their biggest joy in life, having children, on average, leads to a deterioration in relationship quality (Doss 2009), and relationship quality is the greatest predictor of overall life satisfaction (Diener 1999). Gym-goers, meanwhile, rated the hunger and thirst of a hypothetical hiker lost in the woods as more unpleasant when surveyed at the end of their workout than at the beginning of it (Loewenstein 2005). In other words, they confused how they actually felt in that moment with how they would feel in the scenario they were asked to imagine. In sum, humans confuse how they feel right now with how they feel about life in general (Schwarz & Clore 2003). Affective forecasting errors are very real. Makes you think twice about those life decisions you're making right now...
Kant vs. Utilitarianism
What does this data mean for us? At first glance, it looks bad for Kantianism. When we prospect (i.e., when we think or reason about the future), we reliably make mistakes about how we will feel. This implies that when we attempt to universalize a maxim in our minds, we may well be making errors as to whether that maxim should be universalized. Although Kant claims that we don't need to think about actual consequences to learn whether an action is right or wrong, this appears to be false. Clearly, when considering whether a maxim is universalizable, we are thinking of possible consequences, and this mental process seems especially susceptible to affective forecasting errors.

A roborat.
With regards to the utilitarians, this data does not, it appears, disprove hedonism. It only shows that we make mistakes about what will make us happy, not that we don't want happiness. As a matter of fact, neuroscience has repeatedly shown that human behavior is deeply intertwined with the workings of the reward system in the brain (Berridge & Kringelbach 2008). In fact, data that we've seen previously (in The Jungle) might back up utilitarianism. If you recall, we covered roborats—rats with neurological implants that allowed researchers to control them remotely. Where were the implants connected? To the rats' reward center. In other words, if you want to make a rat do something, you control the input to its reward center, also known as the pleasure center.
Is it the case, however, that pleasure is the only thing that humans value? Moreover, how should we understand the word value? Do we mean that pleasure is the only thing that humans want? Or do we mean that pleasure is the only thing that humans find to be good? Notice, please, that these are not the same question. If you believe they are, then you have already accepted moral naturalism. But recall that we still need an argument for moral naturalism—one that hasn't yet arrived. So what can we do? Perhaps we must leave this as an open question.
Food for Thought
Trolleyology
The feud ends...
Shattered dreams
Does this mean that radical moral skepticism takes the title? To be sure, radical moral skepticism is not an ethical theory; it is a combination of an ethical theory (moral sentimentalism) and various meta-ethical positions (non-cognitivism, justification skepticism, and moral error theory). In addition to these, recall, a moral skeptic might also subscribe to moral nativism. In this final unit, however, we are only assessing ethical theories against the available empirical data. Thus, it behooves us to take a final look at moral sentimentalism and see if it survives our empirical filter.
Moral sentimentalism is the view that emotions and desires, as opposed to reason, play a leading role in the anatomy of morality. This is to say that the dominant role in moral judgment is played by the emotional parts of the brain, with empathy being pivotal. As it turns out, under experimental conditions, emotion does appear to be pervasive in moral decision-making (Prinz & Nichols 2012). In fact, we've already reviewed some of this data:
- We make harsher value judgments when we breathe in foul air (Schnall et al. 2008).
- We make harsher value judgments when we have recently had bitter (as opposed to sweet) drinks (Eskine, Kacinik, and Prinz 2011).
- Good smells appear to promote prosocial behavior (Liljenquist, Zhong, and Galinsky, 2010).
- Washing your hands before filling out questionnaires causes you to be more moralistic (Zhong, Strejcek, and Sivanathan 2010).
- Answering a moral questionnaire near a hand-sanitizer dispenser makes you temporarily more conservative (Helzer and Pizarro 2011).
- A person’s tendency to act dishonestly can be enhanced by their wearing sunglasses or being placed in a dimly lit room (Zhong et al. 2010).
- Moral opinions can be made more harsh if there is a dirty tissue nearby (Schnall, Benton, et al. 2008).
In all of these examples, the subjects' feelings were manipulated: subjects were made to feel more (or less) disgust, cleanliness, or anonymity. In each experiment, it was the manipulation of subjects' feelings that had the greatest predictive effect on the kinds of moral judgments they would make. The verdict is in: there's something right about moral sentimentalism.

Or perhaps we should say that Hume and Smith were right about some things. Hume, for example, wrote that reason is (and ought to be) only the slave of the passions. In other words, our emotions/feelings are what drive us; reason just figures out the directions. Hume not only thought this was descriptively true, i.e., that's how it actually happens, but also that this is how it should be. Of course, during his lifetime he was seen as an infidel and a skeptic, and this particular view did not mesh well with the Enlightenment ethos of his day. But now the mind sciences seem to be vindicating Hume.
Although the distinction between reason and emotion has been a truism from Plato to Descartes—and then some—recent research in neuroscience substantially undermines it. For example, Antonio Damasio (2006) hypothesizes that, as you consider a complex decision, there are gut feelings, however fleeting, associated with the different plans of action you are reasoning about. Damasio calls these somatic markers. Some of these somatic markers serve to eliminate certain trajectories from the analysis. Put plainly, these gut feelings make inferences possible by limiting the range of options that must be calculated, thereby enabling the information-processing required for our everyday existence to happen in a reasonable amount of time—time being of the essence for much of our evolutionary history. So feelings play a pivotal role in information-processing. Not only is there no hard distinction between affect (which is what neuroscientists call feelings/emotion) and reason, but it turns out that both are required for normal cognitive functioning. Kant's Pure Reason is probably a fantasy; meanwhile, Hume and Smith were on to something. Affect is in charge.
“Automatic processes run the human mind, just as they have been running animal minds for 500 million years, so they're very good at what they do, like software that has been improved through thousands of product cycles. When human beings evolved the capacity for language and reasoning at some point in the last million years, the brain did not rewire itself to hand over the reins to a new inexperienced charioteer. Rather, the rider (language-based reasoning) evolved because it did something useful for the elephant [the automatic, affect-based system]” (Haidt 2012: 45-46).
Ah, but if only it were so simple. To explain moral judgment in terms of emotions, as the moral sentimentalists do, is of course only to kick the can down the road. That is, if moral judgments are fueled by moral emotions (like disgust and anger), then to truly understand them we must know the evolutionary rationale for those emotions. In other words, we have to get a solid grasp on why our particular emotional profiles evolved, as well as on the mechanics behind moral emotions. A discussion of the evolution of emotions is not, however, how I want to end this course—although I do recommend Patricia Churchland's recent Conscience: The Origins of Moral Intuition. What I'd like to do instead is cast doubt on the traditional conception of emotions.

There is, it appears, a quiet revolution occurring in the science of emotion. It is led by Lisa Feldman Barrett, one of the most highly cited psychologists working today. In her How Emotions Are Made (2017), Barrett calls into question the classical view of emotion—that humans are pre-loaded with innate emotions and that each emotion has a distinct pattern of physical changes in the face, body, and brain. On this admittedly intuitive conception, our emotions and feelings are a response to the external world. The human mind has a built-in assortment of mental faculties, each with its own separate function, and the capacity for a range of emotions is one of these faculties. Throughout history, this emotion faculty has been referred to in different ways (e.g., the passions), but it has generally been considered separate from the reasoning faculty. The theory assumes that as we process input from the external world, our emotion faculty gives rise to a particular feeling or emotion that is realized in the brain through a particular activation of neurons and throughout the body as a predictable collection of physiological changes.
Barrett’s theory of constructed emotion, however, seeks to refute the classical view on emotions. According to Barrett’s view, emotions are categories. These categories are not innate; they are learned statistical averages of the coupling of social environments and information from the interoceptive system, the system which provides a sense of the internal state of the body. Rather than being a response to the world, as in the classical view, emotional categories are actually our constructions of the world. In other words, by cognizing these emotional categories, we can make sense of the world. Counterfactually, if not for the cognizing of emotional categories, we would be experientially blind in many social contexts; i.e., we would not have the requisite concepts to understand our sensory input.
Obviously, a complete account and defense of Barrett's arguments and evidence cannot be given here. Those arguments and that evidence are for another class and perhaps another teacher. Suffice it to say that if Barrett is right, Hume and Smith's account of moral emotions is in deep trouble.
We can go no further. We must console ourselves merely with the knowledge that our survey of the empirical literature is over. Our result is that no ethical theory remains unscathed by the tsunami of empirical data available today. It is important to also note, though, that it is not only philosophy that leaves us with more questions than answers. The social sciences and the mind sciences seem to also leave us begging for more information, more experiments, and more data. The world, as it turns out, is difficult to grasp and more complex than we could've ever imagined. Thankfully, evolution has fashioned us with curious brains. Thus, I fully expect that we will continue to endeavor to understand reality as it truly is, in the hopes that one day we will understand ourselves, the world, and our place in it.
Utilitarianism makes a series of conceptual claims (e.g., moral naturalism, consequentialism, etc.) but only one unambiguous empirical claim: that pleasure is the only thing that humans intrinsically value.
Affective forecasting errors, which are errors in our predictions about how we will feel in the future, appear to cast doubt on Kant's project, but not necessarily on utilitarianism.
The fMRI studies conducted by Joshua Greene and his colleagues lend credibility to the view that moral judgments are by-products of evolved cognitive mechanisms that help us navigate the social environment. Per Greene, this debunking renders both theories untenable.
Moral sentimentalism also faces empirical problems. In particular, Lisa Feldman Barrett and her colleagues argue that emotions are not what David Hume and Adam Smith thought they were. On Barrett's view, emotions are not built-in but are learned statistical averages—a view which (if true) upends moral sentimentalism.
FYI
Suggested Reading: Joshua Greene, From neural ‘is’ to moral ‘ought’: what are the moral implications of neuroscientific moral psychology?
Supplemental Material—
- Video: Talks at Google, Joshua Greene
- Audio: Radiolab, Chimp Fights and Trolley Rides
Advanced Material—
- Reading: John Doris, Moral psychology: Empirical approaches
Footnotes
1. Admittedly, Kant comes into this battle severely weakened. While some of the empirical claims that he made were accurate, in particular some regarding the nature of the universe and the solar system, other empirical claims have been shown to be false (see Footnote 1 from The Enigma of Reason).
2. Recall from the lesson titled Playing God that there are various ways of being a non-egoist. One can subscribe to extensionism (e.g., Peter Singer extends the moral community to all sentient beings), biocentrism (e.g., Kenneth Goodpaster extends the moral community so as to include even plants), or ecocentrism (e.g., Rod Nash argues that even rivers and land ought to have moral—and legal—rights).