...Of Good and Evil

 

 

It has often and confidently been asserted, that man's origin can never be known: but ignorance more frequently begets confidence than does knowledge: it is those who know little, and not those who know much, who so positively assert that this or that problem will never be solved by science.

~Charles Darwin

Back to the beginning...

Sapiens, by Yuval Noah Harari

Let's turn the clock back—way back... A genus is a biological classification of living and fossil organisms. It lies above the level of species but below that of family. Thus, a genus is composed of various species, while a family is composed of various genera (the plural of genus). The genus Homo, of which our species is a member, has been around for about 2 million years. During that time there have been various species of Homo (e.g., Homo habilis, Homo erectus, Homo neanderthalensis), and their existences overlapped, contrary to what you might've thought. Just as today there are various species of ants, bears, and pigs all existing simultaneously, H. erectus, H. neanderthalensis, and H. sapiens all existed concurrently, at least briefly. Now, of course, they are all extinct save one: sapiens (see Harari 2015, chapter 1).

The species Homo sapiens emerged only between 300,000 and 200,000 years ago, relatively late in the 2-million-year history of the genus. By about 150,000 years ago, however, sapiens had already populated eastern Africa. About 100,000 years ago, Harari tells us, some sapiens attempted to migrate north into the Eurasian continent, but they were beaten back by the Neanderthals already occupying the region. This has led some researchers to believe that the neural structure of those sapiens (circa 100,000 years ago) wasn't quite like ours yet. One possible theory, reports Harari, is that those sapiens were not as cohesive and did not display the kind of social solidarity required to band together and collectively overcome other species, such as H. neanderthalensis. But we know the story doesn't end there.

Around 70,000 years ago, sapiens migrated out of Africa again, and this time they beat out the Neanderthals. Something had changed. Something allowed them to outcompete other Homo species. What was it? Well, here's one theory. It was this time period, from about 70,000 to 40,000 years ago, that constitutes what some theorists call the cognitive revolution—although other theorists (e.g., von Petzinger 2017) push the start date back as far as 120,000 years ago.1 Regardless of the start date, it is tempting to suggest that it was the acquisition of advanced communication skills and the capacity for abstract thinking and symbolism, evolved during this period, that allowed sapiens to build more robust social groups, via the use of social constructs, and dominate their environment, to the detriment of other Homo species (see Harari 2015, chapter 2). In short, sapiens grew better at working together, collaboratively and with a joint goal.2

The idea that sapiens acquired new cognitive capacities that allowed them to work together more efficiently is fascinating. It is so tempting to see these new capacities as a sort of social glue that allowed sapiens to outcompete, say, H. neanderthalensis. As anyone who has played organized sports knows: teams that work well together are teams that win. What makes this idea even more tempting is that this leap in the capacity for large-scale cooperation happened again. Between 15,000 and 12,000 years ago (the so-called Neolithic), sapiens' capacity for collective action increased dramatically once more, this time giving rise to the earliest states and empires. These were multi-ethnic social experiments with massive social inequalities that somehow stabilized and stayed together—at least sometimes. What is this social glue that allows for the sort of collectivism displayed by sapiens?

Two puzzles arise:

1. What happened ~100,000 years ago that allowed the successful migration of sapiens?

2. What happened ~15,000 years ago that allowed sapiens to once again scale up in complexity?

 

Darwin

Perhaps evolutionary theory has the answer. Although many find it counterintuitive, the forces of natural selection have not stopped affecting Homo sapiens. Even though sapiens today are more or less anatomically indistinguishable from the way they were 200,000 years ago, there have been other changes under the hood, so to speak. In fact, drawing on genomic surveys, Hawks et al. (2007) calculate that over the last 40,000 years our species has evolved at a rate roughly 100 times faster than before. Homo sapiens has been undergoing dramatic changes in its recent history.

It was, in fact, the father of evolutionary theory, Charles Darwin (1809-1882), who first suggested that it was an adaptation, an addition to our cognitive toolkit, that allowed sapiens to work together more collaboratively and with more complex relationships. He tended to describe this new capacity as making those "tribes" who have it more "well-endowed", giving them "a standard of morality", and, interestingly, he also posited that this wasn't an adaptation that occurred at the level of the individual but rather at the level of the group.3 Here is an important passage from Darwin's 1874 The Descent of Man:

“It must not be forgotten that although a high standard of morality gives but a slight or no advantage to each individual man and his children over the other men of the same tribe, yet that an increase in the number of well-endowed men and an advancement in the standard of morality will certainly give an immense advantage to one tribe over another… and this would be natural selection. At all times throughout the world tribes have supplanted other tribes; and as morality is one important element in their success, the standard of morality and the number of well-endowed men will thus everywhere tend to rise and increase” (Darwin 1874 as quoted in Wilson 2003: 9).

And so, the intellectual heirs of Darwin's conjecture (e.g., Haidt 2012, Wilson 2003) suggest that these cognitive revolutions are likely responsible for what I'll be referring to as increases in civilizational complexity. For our purposes, civilizational complexity will refer to (a) an increased division of labor in a given society and (b) growing differences in power relations between members of that society (hierarchy/inequality), despite (c) a constant or perhaps even increased level of social solidarity (cohesiveness). With this definition in place, we can see that hunter-gatherer societies were low in civilizational complexity (very little division of labor, very egalitarian) and a modern-day state, say, Canada, is high in civilizational complexity (a basically uncountable number of different forms of labor, various social classes). Intuitively, the more egalitarian societies would seem more stable, but, with our new cognitive additions, sapiens can find social order in high-diversity, massively populated, and massively unequal societies.

Does this solve our puzzles? Quite the contrary. Many more questions now arise. Why did sapiens develop this new capacity for complex communication? What kinds of communication and what kinds of ideas are available to sapiens now that weren't available to them before the cognitive revolution? What specific ideas led to the growth in civilizational complexity? Are there any pitfalls that accompany our new cognitive toolkit? Obviously, we're just getting started.

 

Cave art from Chauvet Cave in southeastern France

 

What's ethics got to do with it?

To some theorists, the puzzles above, which I hope you find engaging, are clearly related to the phenomenon of ethics. In a tribe, a team, or a society, there are right and wrong ways to behave, and the more people behave in the right ways, the more stable that group will be. Others argue that the evolutionary story of our capacity for competitive, complex societies, although interesting, is unrelated to ethics. I guess that ultimately depends on what you mean by ethics. For starters, many people assume a distinction between ethics and morals. A simple internet search will yield funny little distinctions between these two. For example, one website4 claims that ethics is the code that a society or business might endorse for its members to follow, whereas morals are an individual's own moral compass. But this distinction already assumes so much. We'll be covering a theory in this class that tells you that the moral code society endorses is the only thing you have to abide by; this renders "your own morals" superfluous. We'll also look at a view that argues that the only thing that matters is what you think is right for you, so what society claims doesn't matter. We'll even cover a view that argues that both what society endorses and what you think is right are irrelevant: morality comes from reason. So this distinction is useless for us, since it assumes that we've already identified the correct ethical theory.

MacIntyre's A Short History of Ethics

So then what is the study of ethics? Lamentably, there is no easy answer to this question. For many philosophers, to consider the origin of our cognitive capacities and how they allow us to work together more effectively is completely irrelevant to the field of ethics. For them, ethics is the study of universal moral maxims, the search for the answer to the question, "What is good?". For Darwin, as we've seen, it was perfectly sensible to call sapiens' capacity for collective action a 'moral' capacity. And, unfortunately, there are even more potential answers to the question "What is ethics?"

Thankfully, I have it on good authority that we don't need to neatly cordon off just what ethics is at the start. One of the most influential moral philosophers of the 20th century, Alasdair MacIntyre, begins his A Short History of Ethics by making the case that there is in reality a wide variety of moral discourses; different thinkers at different times have conceived of moral concepts—indeed the very goal of ethics—in radically different ways. It is not as if when Plato asked himself “What is justice?” he was attempting to find the same sort of answer that Hobbes was looking for when reflecting on the same topic. So, it is not only pointless but counterproductive to try to delimit the field of inquiry at the outset. In other words, we cannot begin by drawing strict demarcation lines around what is ethics and what is not.

If MacIntyre is correct, then the right approach is a historical one. We must place moral discourse in its historical context to understand it correctly. Moreover, this will allow us to see the continuity of moral discourse. It is the case, as you shall see, that one generation of ethicists influences the generation after them, and so one can see an "evolution" of moral discourse. This is the approach we'll take in this course. For now, then, when we use the word ethics, we'll be referring to the subfield of philosophy that addresses questions of right and wrong, the subfield that attempts to answer questions about morality.

 


 

You might be wondering what the field of philosophy is all about. I have a whole course that tries to answer that question. Let me give you my two-sentence summary. There are two approaches to doing philosophy: the philosophy-first approach (which seeks fundamental truths about reality and nature independently and without the help of science) and the science-first approach (which uses the findings of science to help steer and guide its inquiries). The philosophy-first approach (Philosophy with a capital "P") was dominant for centuries but fell into disrepute with the more recent success of the natural sciences, thereby making way for the science-first approach (the philosophy with a lowercase "p" that I engage in)—although many philosophy-first philosophers still haven't gotten the memo that science-first is in. Obviously the preceding two sentences are a cartoonish summary of the history of philosophy, a history that I think is very instructive. The interested student should take my PHIL 101 course.

 


 

 

 

Important Concepts

 

Course Basics

Theories

What we're looking for in this course is a theory, one that (loosely speaking) explains the phenomenon of ethics/morality.5 Now it's far too early to describe all the different aspects of ethics/morality that we'd like our theory to explain, so let's begin by just wrapping our minds around what a theory is. My favorite little explanation of what a theory is comes from psychologist Angela Duckworth:

“A theory is an explanation. A theory takes a blizzard of facts and observations and explains, in the most basic terms, what the heck is going on. By necessity, a theory is incomplete: it oversimplifies. But in doing so, it helps us understand” (Duckworth 2016: 31).

There are some basic requirements of a theory, however; these are sometimes referred to as the virtues of theories. What is it that all good theories have in common? Keas (2017) summarizes for us:

“There are at least twelve major virtues of good theories: evidential accuracy, causal adequacy, explanatory depth, internal consistency, internal coherence, universal coherence, beauty, simplicity, unification, durability, fruitfulness, and applicability” (Keas 2017).

These theoretic virtues are best learned during the process of learning how to engage in science—a process I suspect that you are beginning since you are taking a class in my division, Behavioral and Social Sciences. However, we can highlight a few important theoretic virtues right now.

Symbol for the Illuminati, protagonists in several preposterous grand conspiracies.

For a theory to have explanatory depth means that it describes more of the chain of causation around a phenomenon. In other words, it explains not only the phenomenon in question but also various nearby phenomena that are relevant to that explanation. Another important theoretic virtue is simplicity: explaining the same facts as rival theories, but with less theoretical content. In other words, if two theories explain some phenomenon but one theory assumes, say, secret cabals intent on one-world government while the second does not, you should go with the second, simpler theory. (Why assume crazy conspiracies when human incompetence can explain the facts just as well?!) This principle, by the way, is also known as Ockham's razor. Lastly, a good theory should have durability; i.e., it should be able to survive testing and perhaps even accommodate new, unanticipated data.

Can we find a theory that explains the phenomenon of ethics/morality and has the abovementioned theoretic virtues? We'll see...

Philosophical jargon

For better or worse, some of the first inquiries into ethics/morality came from the field of philosophy. So, in order to learn about these first attempts to grapple with ethics, we'll have to learn some philosophical jargon. Thankfully, the theoretic virtues of philosophical theories often overlap with the theoretic virtues of social scientific theories. As such, let's begin with a theoretic virtue that is a fundamental requirement of any theory: logical consistency—or, as Keas puts it, internal consistency. As you learned in the Important Concepts, logical consistency just means that the sentences in the set you are considering can all be true at the same time. In other words, none of the sentences contradict or undermine each other. This seems simple enough when it comes to theories that are only a few sentences long. But we'll be looking at theories with lots of moving parts, and it won't be obvious which theories have parts that conflict with each other. Thus, we'll have to be very careful when assessing ethical theories for logical consistency.
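To make the idea concrete, here is a minimal sketch in the Lean proof language (the proposition name P is just an illustrative placeholder, not anything from the readings): an inconsistent set of sentences is one from which a contradiction, and hence anything at all, can be derived.

  -- A minimal sketch of inconsistency (illustrative only).
  -- From the pair {P, ¬P} we can derive False, the canonical contradiction;
  -- a logically consistent set of sentences never lets us do this.
  example (P : Prop) (h1 : P) (h2 : ¬P) : False :=
    h2 h1  -- ¬P behaves like a function from P to False, so applying it to h1 yields False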

One more thing about logical consistency: you care about it. I guarantee you that you care about it. If you've ever seen a movie with a plot hole and it bothered you, then you care about logical consistency. You were bothered that one part of the movie conflicted with another part. This is being bothered by inconsistency! Or else think about when one of your friends contradicts him/herself during conversation. If this bothers you as much as it bothers me, we can agree on one thing: consistency matters.

How do we know if one philosophical theory is better than another? For this, we'll have to look at one of the main forms of currency in philosophy: arguments. In philosophy, an argument is just a set of sentences given in support of some other sentence, i.e., the conclusion. Put another way, it is a way of organizing our evidence so as to see whether it necessarily leads to a given conclusion. You'll become very familiar with arguments as we progress through this course. For now, take a look at this example, made up of two premises (1 & 2) and the conclusion (3); a more formal rendering of the same argument appears just after it. If you believe 1 & 2 (the evidence), then you have to believe 3 (the conclusion).

  1. All men are mortal.
  2. Socrates is a man.
  3. Therefore, Socrates is mortal.
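For those curious, here is a rough sketch of how this classic syllogism can be checked formally in Lean. The names Human, Mortal, and socrates are stand-ins I've chosen for illustration; the point is only that once the premises are granted, the conclusion follows of necessity.

  -- Premise 1: every human is mortal (a universally quantified claim).
  -- Premise 2: Socrates is a human (modeled as a term of type Human).
  -- Conclusion: Socrates is mortal. The proof simply applies premise 1 to premise 2.
  example (Human : Type) (Mortal : Human → Prop)
      (premise1 : ∀ x : Human, Mortal x) (socrates : Human) :
      Mortal socrates :=
    premise1 socrates

That this tiny proof checks out is the formal analogue of the argument being valid: anyone who accepts the premises must accept the conclusion.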

 

Food for Thought...

 

The Death of Socrates, by Jacques-Louis David

 

 

Roadblocks: Cognitive Biases

For evolutionary reasons, our cognition has built-in cognitive biases (see Mercier and Sperber 2017). These biases are wide-ranging and can affect our information processing in many ways. Most relevant to our task is the confirmation bias: our tendency to seek, interpret, or selectively recall information in a way that confirms our existing beliefs (see Nickerson 1998). Relatedly, the belief bias is the tendency to rate the strength of an argument on the basis of whether or not we agree with its conclusion. We will see various cases of these in this course.

I'll give you two examples of how this might arise. Although this happened long ago, it still stands out in my memory: one student felt strongly about a particular ethical theory. This person would get agitated when we would critique the view, and we couldn't have a reasonable class discussion about the topic while this person was in the room. I later found out that the theorist associated with that theory worked in the discipline of anthropology, the same major that the student in question had declared. But the fact that a theory's main proponent works in your field is not a good argument for the theory. In fact, I can cite various anthropologists who don't side with the theory in question. As a second example, take the countless debates I've had with vegans about the argument from the Food for Thought section. There is an objection to that example every time I present it. Again, this is not to say that veganism is false or that animals don't have rights, or anything of the sort(!). But we have to be able to call bad arguments bad. And that is a bad argument. I'll give you good arguments for veganism and animal rights. Stay tuned.

As an exercise, try to see why the following are instances of confirmation bias:

  • Volunteers given praise by a supervisor were more likely to read information praising the supervisor’s ability than information to the contrary (Holton & Pyszczynski 1989).
  • Kitchen appliances seem more valuable once you buy them (Brehm 1956).
  • Jobs seem more appealing once you’ve accepted the position (Lawler et al. 1975).
  • High school students rate colleges as more adequate once they’ve been accepted into them (Lyubomirsky and Ross 1999).

By way of closing this section, let me fill you in on the aspect of confirmation bias that makes it a truly worrisome phenomenon. What's particularly worrisome, at least to me, is that confirmation bias and high knowledge are intertwined—and not in the way you might think. In their 2006 study, Taber and Lodge gave participants a variety of arguments on controversial issues, such as gun control. They divided the participants into two groups: those with low and those with high knowledge of political issues. The low-knowledge group exhibited a solid confirmation bias: they listed twice as many thoughts supporting their side of the issue as thoughts going the other way. This might be expected. Here's the interesting (and worrisome) finding. How did the participants in the high-knowledge group do? They produced plenty of thoughts supporting their favorite position and none at all going the other way. The conclusion is inescapable. Being more informed—i.e., more knowledgeable in a given domain—appears only to amplify our confirmation bias (Mercier and Sperber 2017: 214).

Ethical theory

The first step in this journey is to look at various ethical theories. In this first unit, we will be focusing on seven classical ethical theories from the field of philosophy. Given how influential they are in contemporary ethical theory, you will likely feel some affinity for aspects of these theories. In fact, you might be convinced that each theory we cover is the right one, at least while we are covering it. This might be even more so the case with the first one, an ambitious type of theory that attempts to bridge politics and ethics. Moreover, central to this view is the notion that humans naturally and instinctively behave in a purely self-interested way, a view that many find to be intuitively true.

It's time to take a look at the pieces of the puzzle...

 

 

FYI


Related Material—

  • Video: TEDTalk, Stuart Firestein: The pursuit of ignorance
  • Podcast: You Are Not So Smart Podcast, Interview with Hugo Mercier
    • Note: Transcript included.

     

Footnotes

1. Von Petzinger (2017) makes the case that studying the evolution of symbolism and the capacity for abstract thinking in human cognition can be furthered by her field of paleoanthropology. Throughout her book, she details how, contrary to early paleoanthropological theories, the capacity for symbolism didn't start around 40,000 years ago but much earlier. Her work, as well as that of others, shows that consistently utilized, non-utilitarian abstract geometric patterns have been in use since at least about 100,000 years ago, and perhaps as far back as 120,000 years ago(!), in Africa. Her argument is that there was a surprising degree of conformity and continuity in the drawing of different signs across these time periods and across vast geographic distances. It's even the case that some patterns waxed and waned in popularity. This shows that sapiens were already cognitively modern.

2. What brought about the cognitive revolution is actually hotly disputed (see Previc 2009). In fact, some theorists argue that it doesn't even, strictly speaking, exist (see Ramachandran 2000).

3. The theory of group selection is far beyond the scope of this course. However, the interested student can refer to Wilson (2003) for a defense of it, as well as an application of the theory to the question of the evolutionary origins of religion.

4. The website in question is diffen.com. If the page I visited is representative of the quality of distinctions it makes, then you should not trust this website at all.

5. In chapter 4 of Failure: Why Science Is So Successful, Firestein makes the case that the concepts of hypothesis and theory are outdated and not actively used by any scientists he knows—he is a biologist himself. Instead, they've replaced the words hypothesis and theory with the concept of a model, which has less of an air of finality; it's more of a work in progress. This is because the process of forming a model is more in line with how science is actually done(!), as opposed to the "scientific method"—which Firestein rails against.