The Distance of the Planets
[E]quality is not the empirical claim that all groups of humans are interchangeable; it is the moral principle that individuals should not be judged or constrained by the average properties of their group. ...If we recognize this principle, no one has to spin myths about the indistinguishability of the sexes to justify equality.
~Steven Pinker
Is and Ought
Today we begin to explore an argument that will take several lessons to truly grasp. In fact, I will only hint at it at the very end of this lesson. To begin working our way towards this conclusion, however, let's take a look at an issue that seems to be wholly unrelated: coming to moral conclusions from purely empirical premises. Don't worry. I'll explain what that means. This will be the order of events. First, we'll look at what this is/ought distinction is all about. Then we'll look at the reading for today, where the is/ought distinction is relevant but never explicitly mentioned. In the next section, we'll return to is/ought with a contemporary example. Lastly, we'll discuss why it's a good idea to always take note when you are coming to a moral conclusion in one of your arguments, since that is where you are psychologically vulnerable and might produce an invalid argument.
The is/ought distinction is the thesis that one cannot come to a moral conclusion (i.e., a conclusion which states that you ought or ought not engage in some particular action or have some particular moral belief) from purely empirical premises. Put differently, facts alone cannot get you to a moral conclusion; you need a moral premise somewhere in your argument. Let me give you an example:
1. It is the case that Nicole borrowed $100 from Jac.
2. It is the case that Nicole promised to pay back Jac.
3. It is the case that Nicole has $100 to spare right now.
4. It is the case that Nicole will see Jac later today.
...
n. Therefore, Nicole should pay back Jac.
First off, if you look closely at the argument, the conclusion does not necessarily follow from the premises. Of course, this means this argument is not valid. It goes without saying that it's also not sound. In other words, even if you believed all the premises and they were all true, you wouldn't have to believe the conclusion—at least rationally speaking. Now, before you accuse me of being a psychopath, let me explain what's going on here. Let me first show you what the argument would look like if it were valid. That might help.
1. It is the case that Nicole borrowed $100 from Jac.
2. It is the case that Nicole promised to pay back Jac.
3. It is the case that Nicole has $100 to spare right now.
4. It is the case that Nicole will see Jac later today.
5. If you borrow money, you should pay it back.
6. Therefore, Nicole should pay back Jac.
I hope that now you can see that the conclusion does(!) necessarily follow from the premises in this one. The way I sometimes like to put it is this. Validity has to do with the logical relationship between the premises and the conclusion. In particular, the path from the premises to the conclusion has to be foolproof; it has to be a watertight connection. If you know about computer programming, validity is a lot like writing a computer program. The code you type in is the premises, and the conclusion is the output of your program. As you know if you've tried to write computer code yourself, you have to get EVERY SINGLE LITTLE DETAIL RIGHT in order for the program to work, i.e., for the conclusion to follow. This is because, on one way of looking at them, computers are extremely literal-minded and, well, dumb. Their competence comes only through the accumulation of hundreds of thousands of easy tasks, like distinguishing between true and false conjunctions (i.e., and-statements), until they can finally perform very difficult tasks, like learning from data (see Dennett 2014: 109-150). So, in your role as a programmer, you have to write computer programs for these (sorta) dumb machines. That's what validity is like. You have to build your case so that there's zero room for doubt, and it's all or nothing.2
Let's go back now to the first argument. Let's be honest. Something fishy is going on here. If you're anything like me, when you read the first argument, it does feel like the conclusion follows. In other words, you (like me) read the first argument and say, "Yeah, Nicole should pay back Jac." In fact, even after credible authorities (e.g., a college professor) pointed out to me that such arguments are incomplete, I still didn't believe them. Why is this the case? Well, that's the power of moral arguments. Moral arguments have a way of swaying us in an automatic, non-conscious way. Our brains naturally fill in the premises so that the argument is valid, especially because paying back our debts is something that (I hope) most of us already believe in. And we don't even notice it. Most people would never notice, to be honest. As far as we can tell, no one had noticed the is/ought distinction prior to its articulation in the 18th century (see Footnote 1). Moreover, even once you learn about it, you have to really think about what validity means before you're able to convince yourself that the first argument isn't valid and that only the second one is. Try to convince yourself of that now.
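If the programming analogy speaks to you, here's a minimal sketch of how a computer could check validity the brute-force way. Everything here is my own toy encoding, not anything from the reading: each letter stands for one claim, and an argument counts as valid only if no assignment of true/false to the letters makes all the premises true while the conclusion is false.

```python
from itertools import product

# Toy labels (mine, for illustration):
#   B = Nicole borrowed $100 from Jac
#   P = Nicole promised to pay back Jac
#   M = Nicole has $100 to spare right now
#   S = Nicole will see Jac later today
#   O = Nicole should pay back Jac (the conclusion)

def is_valid(premises, conclusion, n_vars=5):
    """Valid iff no true/false assignment makes every premise true
    while the conclusion is false."""
    for vals in product([True, False], repeat=n_vars):
        if all(p(*vals) for p in premises) and not conclusion(*vals):
            return False  # counterexample: premises true, conclusion false
    return True

# First argument: only the four factual premises.
facts = [
    lambda B, P, M, S, O: B,
    lambda B, P, M, S, O: P,
    lambda B, P, M, S, O: M,
    lambda B, P, M, S, O: S,
]
owes = lambda B, P, M, S, O: O

print(is_valid(facts, owes))             # False -- not valid

# Second argument: add premise 5 ("if you borrow, you should pay it back"),
# encoded here, for Nicole's case, as the conditional B -> O.
bridge = lambda B, P, M, S, O: (not B) or O
print(is_valid(facts + [bridge], owes))  # True -- valid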
Why do we "fill in" the missing premises for arguments whose moral conclusions we agree with? There are honestly too many theories to give an even-handed summary of them here. I will tell you about only two. The first is the pretty standard account in psychology today, since many people believe in the dual-process model described in the lesson called Fragility (but see Mercier 2020 for an interesting rebuttal). On this model, we have a confirmation bias, and we selectively seek and interpret information in ways that simply reinforce our pre-existing beliefs. Thus, once we see an argument with a moral conclusion we agree with, we naturally fill in the blanks and make the argument valid. It helps, I might add, following MacIntyre (2003, 2013), that the terms "right" and "wrong" essentially mean nothing—a sort of linguistic anarchy. As such, since you have some leeway with regard to the meaning of "right", the mind has an easy path towards imagining the argument is valid when it's not—confirmation bias at its finest.
The other theory I will mention goes something like this. Our brain is in the business of making predictions; that's what it evolved for (Clark 2015). These predictions will hopefully keep you alive long enough to find a mate, reproduce, and pass on your genes—good ol' fashioned evolutionary fitness. Of course, making predictions is hard work! There's a lot of information to process! So, your brain takes as many shortcuts as it possibly can; this makes evolutionary sense. Remember, through most of our evolutionary history, if you were to sit there and think out a problem in its entirety, you would've been some predator's lunch. So shortcuts are a must. This, by the way, is referred to as a bounded rationality strategy: you don't process all the information available to you, just the info that, in general, tends to keep you out of some predator's digestive tract. What kind of shortcuts does the brain take? Well, according to Dutton (2020), we evolved three different filters: the fight versus flight filter (where you process just the info that helps you decide whether you should take off or take a stand), the us versus them filter (where you process just the info that helps you decide whether someone is one of your own "tribe" or not), and the right versus wrong filter (where you process the info that gives you cues as to whether some action will be socially acceptable or not). Under this view, then, you have something like a mental module that helps you categorize some actions as permissible and others as not permissible. Now here's where the analogy with a shortcut breaks down. When you take a shortcut to some destination, you're typically aware of what you're doing. However, these cognitive shortcuts are automatic and under the radar. You won't know that you're not processing all the relevant information. You'll just feel yourself arrive at a conclusion—without noticing that you got there through a questionable process of mentally "filling in" missing facts/claims. Oh, Darwin!
If you're getting the feeling that I'm arguing you can't blindly follow your intuitions, then you're getting it. And if you haven't noticed, I'm throwing a ton of evidence your way (see my Works Cited page). The mind is a tricky thing. Watch yourself.
Argument Extraction
Demonic Males
As we learned in the previous section, there is a distinction that goes back to the 18th century between empirical claims and moral conclusions. In particular, the claim is that no matter how many empirical claims you pile up, you cannot build a bridge to a moral conclusion. Although this might seem completely unrelated, there are some happy consequences of this distinction that might be of interest to you. Let me begin with a story.
In the summer of 1975, the distinguished Harvard entomologist Edward O. Wilson, considered the foremost authority on the study of ants, published Sociobiology: The New Synthesis. In it he made the radical(?) proposal that social behavior has a biological basis, and hence there can be a biological science of social behavior—the science he was attempting to usher in with his new book. In this book, however, he explicitly included Homo sapiens as one of the species that could be studied through sociobiology, suggesting that human sex role divisions, aggressiveness, religious beliefs, and much else ultimately all have a genetic basis, even if genes don't tell the whole story. And so, after the publication of this book, the mild-mannered Wilson, who preferred to spend his time studying ants, came under intense criticism. There were vitriolic articles written against him, and he was called a racist and a sexist—despite not making any explicitly racist or sexist claims. Then, some three years after he published his book, Wilson was about to speak at a symposium sponsored by the American Association for the Advancement of Science when he had a jug of water poured over his head by a group of hecklers, later discovered to be associated with the Marxist Progressive Labor Party. For the full story of the sociobiology controversy, see Segerstråle (2000).
Why would leftists pour a pitcher of water over a biologist's head at a (probably pretty boring) academic conference? Some thinkers (e.g., Pinker 2003, Segerstråle 2000) who have studied Wilson's sociobiology controversy and other similar incidents think they know what was going on in the minds of these leftists. The leftist idea, although expressed differently by different thinkers, seems to be as follows. If there are biological explanations for any kind of social inequality (e.g., gender inequality, racial inequality, etc.), then this legitimizes it. In other words, if one is ideologically committed to the view that the status quo is full of injustice, then any attempt to account for social inequality on a biological basis is an attempt to justify the status quo (see Pinker 2003, chapters 6 & 7). As it turns out, many participants in the sociobiology controversy were explicit Marxists whose criticisms of Wilson's work demonstrated that they had something like this in mind (Segerstråle 2000: 199-213).
Were these leftists right? With the benefit of hindsight, we can see that there are many aspects of our cognition and social behavior that are influenced by genes, suggesting that the leftists were, if not wrong, then at least wrongheaded in their criticism. For example, per Haier (2016, chapter 2), it appears that genes account for a major share, more than 50%, of the variance in intelligence. Moreover, the genes that play a role in intelligence are great in number; it is not a matter of an isolated few, as some erroneously think. Importantly, environment does play a role in early cognitive development, but(!) its effect is almost negligible by the teenage years. So, for those who think the nature versus nurture debate is still raging, let's put it bluntly: both genes and environment affect intelligence levels, although their effects fluctuate throughout the life cycle.
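To make the "more than 50% of the variance" talk concrete, behavioral geneticists typically express heritability as the share of total trait variance attributable to genetic variance. This is the standard textbook decomposition (ignoring gene-environment interplay), not a formula taken from Haier's chapter specifically:

$$ h^2 = \frac{\sigma^2_{\text{genetic}}}{\sigma^2_{\text{genetic}} + \sigma^2_{\text{environmental}}} $$

So a heritability above 0.5 means that, within the population studied, genetic differences account for more than half of the observed differences in measured intelligence. It is a claim about variation across people, not a percentage of any one person's IQ.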
How does this affect the social sphere? Well, there is a strong correlation between measures of intelligence and job performance (Schmidt & Hunter 2004). In other words, the higher you score on measures of intelligence, the more likely you are to perform well at work. You don't need to be an economist to see how this might lead to greater lifetime earnings for those with high intelligence. There's more. There are three longitudinal studies that establish a correlation between mental abilities and life success: Lewis Terman's project, Julian Stanley's Study of Mathematically Precocious Youth at Johns Hopkins, and the Scottish Mental Survey. These studies show a correlation between high mental abilities and physical development and emotional maturity—thankfully dispelling the stereotype of nerds as puny and socially maladjusted. Stanley's study in particular demonstrated the predictive power of a single(!) math-competency test taken at age 13 on lifelong earnings (see Haier 2016: 29-30). The Scottish Mental Survey even found a correlation between mental ability and longevity. So, genes definitely matter for lifetime economic success and even length of life!3
Here's another example of how genes affect our social lives. In Predisposed, the authors claim that our political perspectives are driven in large part by genetics. As the authors admit, we like to pretend that our political perspectives are rationally derived. But by now you should be disabusing yourself of this delusion. Here are the authors:
“Many pretend that politics is a product of citizens taking their civic obligations seriously, sifting through political messages and information, and then carefully and deliberately considering the candidates and issue positions before making a consciously informed decision. Doubtful. In truth, people’s political judgments are affected by all kinds of factors they assume to be wholly irrelevant” (Hibbing, Smith & Alford 2014: 31).
The authors then move on to survey all the ways in which genes affect our worldviews and politics. For example, there is evidence that conservatives feel a greater affective magnitude when evaluating positive and negative stimuli. In other words, for conservatives, the highs are higher and the lows are lower. Interestingly, this affects how liberals and conservatives learn. Liberals take more chances, even if they suffer occasional negative consequences. Conservatives are more cautious, even if it means limiting the amount of new information they acquire. Nonetheless, both perform about the same in school. (Relax, conservatives. Settle down, liberals.) This is because liberal students may take on more new information than they can actually process.
In all, conservatives are more likely to:
- pay attention to stimuli that signal potential threats, e.g., angry faces;
- follow instructions (unless the source is obviously bogus);
- avoid unfamiliar objects and experiences;
- and in general keep it simple, basic, clear, and decisive.
Liberals are more likely to:
- seek out new information (even if they might not like it);
- follow instructions only when there is no other choice;
- embrace complexity;
- and engage in new experiences even if they entail some risk.4
As you can see, liberals and conservatives don't just differ in who they vote for. They differ, from cradle to grave, in their entire worldview and approach to life. This is because, the authors of Predisposed argue, there is individual variation in humans. We vary in how open we are to new information, how prone we are to focusing on the negative, and how much complexity we enjoy. Importantly, these are all influenced by genetic factors. Of course, the authors clarify that their findings are all probabilistic. They admit that there simply is no certainty in the social sciences, and so they trade in words like “determine” for “influences”, “affects”, etc. Nonetheless, the authors make a convincing case that you might've been predisposed to some of your political views before you were even born.
My favorite examples of how things that are out of our control influence behavior and society come from neuroscientist David Eagleman. In chapter 6 of Incognito, Eagleman discusses how changes to the brain produce changes in behavioral dispositions. For example, Charles Whitman (the infamous mass shooter known as the "Texas Tower Sniper") had a brain tumor putting pressure on his amygdala, a condition which might have made him more prone to violence. It's also the case that frontotemporal dementia patients are prone to a variety of socially unacceptable behaviors, like stealing and stripping in public. It's even the case that pedophilia can be brought on by a brain tumor and that Parkinson's patients can develop gambling addictions due to their dopamine medications.
Most tellingly, Eagleman reminds us of a particular set of genes that really make a difference. If you are a carrier of this particular set of genes, then you are 882% more likely to commit violent crimes. In particular, you’re eight times more likely to commit aggravated assault, ten times more likely to commit murder, thirteen times more likely to commit armed robbery, and 44 times more likely to commit sexual assault. Concerningly, about one half of the human population carries these genes, to the detriment of the other half. As it turns out, the overwhelming majority of convicted prisoners carry these genes, as do 98.4% of those on death row. As Eagleman states, it seems pretty clear that the carriers of these genes are strongly predisposed towards deviance.
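A quick bit of arithmetic makes the headline figure concrete. "882% more likely" means the carriers' rate is the non-carriers' rate plus 8.82 times that rate; the baseline number below is purely hypothetical, chosen only to illustrate the multiplier:

$$ \text{carrier rate} = (1 + 8.82) \times \text{baseline rate} \approx 9.8 \times \text{baseline rate} $$

So if, say, 1 in 100 non-carriers committed a violent crime in a given period, roughly 10 in 100 carriers would.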
Who carries these genes? Have you guessed yet? It's men. Men are much more likely than women to commit violent crimes. It's not even a contest. Moreover, according to the authors of Demonic Males, the male tendency towards violence is not a product of the West, or of patriarchy (like some feminists contend), or settled life (as opposed to some mythical egalitarianism in hunter-gatherer societies); male violence is in our genes.5 In particular, the authors make the case that human males are predisposed to male coalitional violence, i.e., group violence. It is, they say, like a curse.
“Why demonic? In other words, why are human males given to vicious, lethal aggression? Thinking only of war, putting aside for the moment rape and battering and murder, the curse stems from our species’ own special party-gang traits: coalitionary bonds among males, male dominion over an expandable territory, and variable party size. The combination of these traits means that killing a neighboring male is usually worthwhile, and can often be done safely... Why males? Because males coalesce in parties to defend the territory. It might have been different... Hyenas show us that human male violence doesn’t stem merely from maleness [since in groups of hyenas it is the females that dominate the males]” (Wrangham and Peterson 1996: 167).
How does this all relate to the is/ought distinction? Well, it appears that the leftists we started this section with (and some modern ones too) are making an error in reasoning. They seem to believe the following argument is valid.
1. There are innate differences between genders, ethnic groups, those with different political affiliations, etc.
2. Social differences between these groups are naturally occurring.
3. Therefore, we ought not try to alter this status quo.
Can you see the error? One cannot go from these two empirical premises to the moral conclusion; there's a missing premise. So, if we take the is/ought distinction seriously, the logical move for the leftists would've been to deny that the argument is valid. Instead, they seem to accept the argument as valid but merely deny the truth of the premises (i.e., deny the soundness of the argument). However, as we can see with the benefit of hindsight, this amounts to science denialism—denying the truth of scientific evidence merely because it conflicts with one's political or religious worldview. (Don't get excited, conservatives. There's plenty of science denialism in your camp too; see Washburn and Skitka 2018.) It goes without saying that science denialism is never a sign of critical thinking.
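If you want to see the gap the same way we did with Nicole and Jac, here's the same kind of brute-force check run on this argument. The helper is the same as in the earlier sketch, repeated so the snippet stands on its own, and the propositional labels are again my own:

```python
from itertools import product

def is_valid(premises, conclusion, n_vars):
    """Valid iff no true/false assignment makes every premise true
    while the conclusion is false (same helper as the earlier sketch)."""
    return not any(
        all(p(*vals) for p in premises) and not conclusion(*vals)
        for vals in product([True, False], repeat=n_vars)
    )

# Toy labels (mine):
#   D = there are innate differences between the groups
#   N = social differences between the groups are naturally occurring
#   A = we ought to try to alter the status quo
empirical = [
    lambda D, N, A: D,
    lambda D, N, A: N,
]
no_change = lambda D, N, A: not A

print(is_valid(empirical, no_change, 3))             # False -- not valid as it stands

# Add the (contestable) moral bridge "if it's natural, we ought not alter it",
# encoded as N -> not A:
bridge = lambda D, N, A: (not N) or (not A)
print(is_valid(empirical + [bridge], no_change, 3))  # True -- now valid
```

The point is not that the bridge premise is true; the point is that it has to be put on the table at all, and that is where the real moral disagreement lives.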
So what's the point of this whole lesson? It's basically this: if we treat political ideology like a religion, then, as Caplan argued, it obscures our capacity to think rationally. It can even make us unable to recognize that some arguments are invalid. What is this costing us? Are there ideas that certain leftists haven't even been able to listen to because their political dogmas prevented them from doing so? Pinker (2003) and others (e.g., Segerstråle 2000, McWhorter 2021) think so. Are you ready to hear some of these ideas?
Finally, what's the argument I'm working towards? I'll be arguing that political parties should be abolished. Stay tuned.
- Read from 449a-460a (p. 136-149) of Republic.
To be continued...
FYI
Suggested Reading: Bruce Goldman, Two minds: The cognitive differences between men and women
TL;DR: TEDTalks, The Differences Between Men and Women: Paul Zak
Supplemental Material—
- Video: BBC Radio 4, The is/ought problem
- Reading: Texas State Department of Philosophy, Is Ought
- Video: The Cogito, The Is-Ought distinction in Meta-Ethics
Related Material—
- Podcast: Making Sense, The New Religion Of Anti-Racism: A Conversation with John McWhorter
- Reading: Richard Wrangham and Dale Peterson, Chapter 1 of Demonic Males
Advanced Material—
- Video: Clinton School Speakers, Predisposed: Liberals, Conservatives, and the Biology of Political Differences
Footnotes
1. To be sure, the is/ought distinction does not originate with me. Rather, it goes back to the 18th century, to the Scottish Enlightenment thinker David Hume. For more information about Hume (and his best friend Adam Smith), see Endless Night (Pt. I).
2. By the way, the connection between validity and computers is not incidental. Developments in logic directly led to the digital revolution (Shenefelt and White 2013). This is why computer science departments still require that their students take an introductory course in logic, and it's helpful if the instructor knows a thing or two about computer science. Might I recommend the PHIL 106 course taught by R.C.M. García?
3. Interestingly, an estimated 84%-95% of the variance in the mortality/IQ correlation may be due to genes (Arden et al., 2016), once again showing that genes play a massive role.
4. One of the most interesting theories in Predisposed is with regards to the evolutionary origins of conservatism and liberalism. The authors speculate that conservative tribalism, with a prominent negativity bias, a fear of the novel, and a penchant for tradition, was actively selected for early on in sapiens' history. As the species transitioned to sedentism (i.e., settled life), and after a long learning curve, violence eventually began to decrease. At that point, liberalism came into the picture. So, liberalism is an evolutionary luxury that came about when negative stimuli became less prevalent and less deadly. The authors also speculate as to whether diversity in political dispositions makes the meta-group (i.e., the group composed of both conservatives and liberals) stronger, but they make clear that the jury is still out.
5. In chapter 6 of Demonic Males, the authors respond to the thesis held by some feminists and cultural determinists: that patriarchy, the sexual subjugation of women, and imperial tendencies are inventions of the West. The authors clarify that these things are found nearly universally (i.e., not just in the West), and then they proceed to give a catalogue of horrors perpetrated by groups from China, Japan, India, Polynesia, Mesoamerica, the Arab world, Africa, and various aboriginal tribes. In other words, males are violent across time and space. The authors also retell Friedrich Engels' theory that, prior to civilization, humans lived in communal bliss with equality of the sexes—a view Engels put forward in The Origin of the Family. In that work, Engels theorized that it was only after the invention of animal husbandry, and thus private property, that social relations became unequal and men began to control their wives. Once private property existed, in other words, men wanted to ensure their heir was actually their offspring. To this the authors respond by reviewing tales of the subjugation of women in tribes that are allegedly egalitarian. The authors conclude that patriarchy is worldwide and history-wide, and that it is enforced through male violence.