Fragility
Don’t think outside the box.
Find the box.
~Andrew Hunt and David Thomas
Lazy thinking
The topic for discussion in this first half of the lesson is lazy thinking. This isn't exactly a technical term that you'll find in the literature of the mind sciences, but it's a helpful label that we will use in this course. Before explaining what it is, here's a little context. On occasion, we are faced with a problem and we want to find a solution. So we let the stuff between our ears get to work. Here's the problem, though: the cognitive capacities of the mind sometimes seem to be at odds with each other, as Plato notes in today's reading. There is one part of the mind that looks for quick and easy answers but isn't terribly concerned about the quality of those answers. There is another part of the mind that can do some serious thinking, though, and this part helps keep you out of the traps that the first part easily falls into. Unfortunately, this more rigorous part of the mind is extremely lazy; it won't work unless you really force it to (see Kahneman 2011).1
Usually you won't notice when the different parts of your mind come to different conclusions, compete with each other, and ultimately resolve their dispute and engage in some particular course of action, since this is all happening under the hood and outside of your subjective experience (Nisbett and Wilson 1977). Typically, only the course of action that is decided on by your non-conscious mental processes is presented to consciousness (Wegner 2018). If you do feel anything at all, it'll be a feeling of certainty, a feeling that you know what to do now—as opposed to the feeling of not knowing what to do (Burton 2009). This is all very abstract so I'll give you an example from Nobel prize-winning psychologist Daniel Kahneman. As you read the example, try to actually find the answer—really, though.
A bat and ball together cost $1.10. The bat costs $1.00 more than the ball.
How much does the ball cost?
What did you guess? Did you say ten cents? Well, that can't be right! If the ball costs ten cents and the bat costs a dollar more than the ball, then the bat alone costs $1.10(!). So, together the bat and the ball would cost $1.20, which is not $1.10. Do the math this time. What's the real price of the ball? Convince yourself that the ball really costs five cents: if the ball costs $0.05 and the bat costs a dollar more than the ball, then the bat costs $1.05. And, of course, $1.05 plus $0.05 is $1.10(!).
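If you'd like the algebra spelled out, here is a minimal sketch in Python (the variable names are my own, purely for illustration):

```python
# Bat-and-ball problem: bat + ball = 1.10 and bat = ball + 1.00.
# Substituting: (ball + 1.00) + ball = 1.10, so 2 * ball = 0.10, so ball = 0.05.
total = 1.10
difference = 1.00

ball = (total - difference) / 2   # 0.05
bat = ball + difference           # 1.05

assert abs((bat + ball) - total) < 1e-9   # the two prices really do sum to $1.10
print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")
```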
If you are like most people, not only did you get the question wrong at first, but it took you a minute to convince yourself that the real price is five cents. This is because that one part of your mind, the one that looks for quick and easy answers, worked out the problem quickly (although incorrectly) and presented its answer to your consciousness—your subjective experience. Once it's there in your consciousness, it's hard to break free from believing that it is the correct answer—confirmation bias. The more rigorous part of your mind eventually did get the right answer. But, because it's lazy, you really had to think about it before you could activate this part of the mind and put it to work on finding the right answer. As you can tell, lots of information-processing (including the kind that comes to inaccurate conclusions) happens outside of conscious experience, and you're none the wiser (Nisbett and Wilson 1977). What came to you first wasn't any of the non-conscious mental processing but only the (erroneous) conclusion (Wegner 2018). Afterward, when you actually worked out the problem, you finally felt a real feeling of certainty (Burton 2009). The mind is a tricky thing.
And so, with all that setup out of the way, this is what lazy thinking is: being satisfied with the quick and easy solution without making sure you've actually engaged the more rigorous information-processing parts of your mind. Now you don't need me to tell you that lazy thinking is rampant. You know it! Although there's no way to actually check this, I'm ok with wagering that most people, most of the time, go with the easy and quick conclusions that come to mind (as opposed to actually engaging in rigorous, time-consuming, cognitively-demanding information-processing). I'll go further. I feel that people go out of their way so that they can keep their quick and easy solution. They'll literally push away information that gets in the way of them being able to keep that precious first conclusion that came to mind. (God forbid they actually have to think!) I'll give you an example. It has to do with the Informal Fallacy of the Day: the strawman fallacy, which you commit when you respond not to your opponent's actual argument but to a distorted, weaker version of it.
So strawman arguments are not the way you want to win arguments, because you're not even addressing your opponent's actual line of reasoning—only a distorted pseudo-argument. (This does you no favors, obviously.) However, in some settings it is really hard to engage the more rigorous part of your mind. One such setting is the politically charged conversation, especially in the hyper-polarized American political environment (Mason 2018). Emotions run too high for you to wait calmly while the lazy (but rigorous) part of your mind figures out what the real argument being given is. This is what I've noticed (in the past) when discussing the topics of last lesson: extreme wealth/income inequality and identity politics. These topics were hand-picked (by yours truly) to go against the grain of deeply-held convictions of conservatives and liberals, respectively. As we learned in the Food for thought in The One Great Thing (Pt. II), 55% of Republicans blame the poor for their poverty (Loewen 2007: 209). Similarly, "identity politics" (as Murray 2019 conceives of it) is primarily on the liberal side of things. So, by discussing these two issues, I hope I was able to rev up your feelings of outrage and hence keep you from thinking straight about them. I did this because there's an important lesson in falling for one of my traps.
On the conservative side of things, I've heard arguments against universal basic income and the social investment stipend which focus on how giving away money would be "undeserved". On the liberal side of things, I've heard arguments for the continued use of identity politics, since only this approach to politics will make sure that society addresses past and current injustices done to specific interest groups (e.g., African Americans, the LGBTQ+ community, etc.). However(!), Plato's characters seem to be discussing what will lead to the greatest good for the city as a whole; they're neither discussing what's best for individuals or groups within the city nor whether stipends are "deserved". In other words, Plato is thinking about what will lead to stability and, if possible, what would make society anti-fragile. Put differently, Plato appears to be thinking at the level of systems (Miller and Page 2009). He's thinking about how to make society adaptive and robust, not likely to fall apart. Thinking about the other issues is fine, since they certainly relate to how adaptive a society can be. But be careful, since conclusions born of lazy thinking lurk nearby.
I hope we agree that the topic here is how to make society cohesive. However, some lazy thinkers attempt to steer the conversation toward whatever pet topic they like to discuss—rather than engaging with the actual topic of discussion. Let me put it bluntly. Just saying "If Plato's view implies (insert pet topic) is false, then Plato must be wrong", i.e., the quick and easy solution, won't work here, because it's a form of lazy thinking. To actually respond to Plato's challenge, you'd need something like an explanation of, and evidence for, why your pet topic wouldn't interfere with the city's stability. Even better, you could make the case that your pet topic would improve the city's stability! But the point here is that the level of analysis must be the same level Plato is thinking at: the systems level. Remember that Plato doesn't know anything about your pet topic. Plato's dead. So all we can do is see if there's any wisdom in what he wrote (and many people seem to think there is). And when engaging with his work, we actually have to engage with his work.
Lazy thinking, as I mentioned previously, is all over the place. Heck, I'm sure I engage in it all the time. That's why I'm continually updating my views and lessons to try to eradicate any remnants of lazy thinking. (I'm trying!) Other groups and career-types have been guilty of lazy thinking as well—not just you and me. For example, in a recent interview, psychologist Gordon Pennycook made the case that it isn't ideology that drives conspiratorial thinking (i.e., believing strongly in conspiracy theories such that they inform the way you live your life), but rather a lack of cognitive reflection (i.e., lazy thinking). What's causing this lazy thinking? Pennycook argues that the media environment, which we've seen is less and less informative, has raised uncertainty to a degree where many feel (non-consciously) that it is ok to process information in a lazy way.
There may also be some lazy thinking in academia. In The Entrepreneurial State, Mariana Mazzucato argues against the view that the state should not interfere with market processes since it is incompetent and only messes things up—a view endorsed by many mainstream economists. She argues instead that the state has played the main role in the development of various technologies that define the modern era: the internet, touch-screen technology, and GPS. It has also granted loans to important companies such as Tesla and Intel. Moreover, the state takes on risks in domains that are wholly novel and in which private interests are not active, as in space exploration in the 1960s. It is a major player on both the demand side and the supply side. And it also creates the conditions that allow the market to function, as in the building of roads during the motor vehicle revolution. In short, the state is entrepreneurial and very good at it. Her explanation, by the way, also comes at the level of the system. As such, it is difficult to grasp at first. Relative to her view, the view that the state is incompetent is simplistic. Choosing the simplistic view over the complicated one may be yet another form of lazy thinking: willfully refusing to understand a newer, more complicated view so that you can keep your prior beliefs.
Lazy thinking takes place in history too. In The House of Wisdom (2011, chapter 15), Al-Khalili makes the case for a long decline in Arab science, as opposed to a sudden collapse. But in order to make his argument, he first dispels "lazy" explanations of this decline, such as the claim that it was caused by the Mongol conquest of Baghdad (an explanation which implies that Baghdad was the only intellectual hub in the region). Instead, he shows that intellectual work was being done well into the 14th century and beyond. For example:
- Ibn al-Nafis (1213–1288) developed his theories on the pulmonary transit of blood, which were improvements on those of Galen.
- Ibn Khaldun (1332–1406), whose ideas included:
- the necessity and virtue of a division of labor (before Adam Smith),
- the principle of labor value (before David Ricardo),
- a theory of population (before Thomas Malthus),
- the role of the state in the economy (before John Maynard Keynes), and
- the first work of sociology.
- Jamshid al-Kashi (1380–1429) was a great mathematician who advanced the use of decimal notation.
So historians had engaged in lazy thinking, telling a simplistic story about Arab decline and ignoring the important scientific achievements made by Arabs that conflicted with that simplistic narrative. What is the explanation given by the more rigorous part of the mind? Well, Al-Khalili notes that one important reason for the decline in science was the lack of enthusiasm for the printing press among Arab states. In short, Arabic script simply didn't lend itself to mechanization—something that is necessary for the operation of a printing press. But, of course, the printing press is essential for the vast and fast transmission of ideas. Any state without widespread use of the printing press is necessarily going to progress more slowly than one that has it. (Another explanation at the level of systems!) As you can see, the better explanation is much harder to arrive at—certainly harder than "it's because they lost a war".2
As you can see, lazy thinking can come in many guises. Watch out. Always ask yourself the following questions when studying an argument: (a) What does the author really mean by this? (b) What evidence is being given? I think this will help stave off lazy thinking. For some other tools for thinking, see Dennett (2014).
Argument Extraction
The value alignment problem
Anyone who's spent more than half an hour with me knows of my deep fascination with and interest in artificial intelligence (AI). In fact, in another class (PHIL 101), I give my views on some possible scenarios we might find ourselves in as AI becomes even more ubiquitous in society.3 In this portion of the lecture, however, I'd like to link up once more with what we discussed in the last lesson. In particular, I'd like to discuss how, if we aren't careful, rolling out AI across more and more domains of society will increase various forms of inequality, in particular the inequality between the white majority and historically disenfranchised groups.
First off, we must acknowledge that there have been historic wrongs of epic proportions in American history. For example, in The Half Has Never Been Told, historian Edward E. Baptist gives the economic history of slavery and the form of capitalism that it gave rise to. Here are some highlights. In chapter 4, Baptist takes on various issues regarding plantation slave labor during the early 19th century. First, slave labor became increasingly torturous during this period in the Deep South; slaveholders endeavored to devise new ways to extract more and more output from the slaves. They were successful. The push and quota system, where slaves were whipped if they didn't reach their daily goal and the goal was progressively increased as time passed, was the "management development" of this time period—a system that is, in a less brutal and modified form, still used today.4 In fact, Baptist uses this increase in production through coerced labor as an empirical datum against the economists' view that free and voluntary labor is more productive than coerced labor. In other words, the free-market fundamentalist's claim that a system of rational agents acting out of self-interest is the most efficient system is false, according to Baptist; you can be even more efficient by inflicting unimaginable brutality on workers. Moreover, since cotton was the world's most coveted commodity in that century, slave labor made plantation owners lavishly rich and made the American empire possible (Beckert 2015, Beckert and Rockman 2016).5
All this is to say that some minority groups (e.g., African Americans) have suffered more than their share of injustice. It seems sensible, then, to attempt to avoid doing any further harm to these disenfranchised groups—one would think. However, the rolling out of AI might be yet another harm inflicted on disenfranchised groups. I'll explain.
Today, we are in the age of machine learning. Machine learning (ML) is an approach to artificial intelligence in which the machine is allowed to develop its own algorithm through the use of training data, as opposed to being explicitly coded by a computer programmer. Per Sejnowski (2018), ML became the dominant paradigm in AI research in 2000. Since 2012, the dominant method within ML has been deep learning, whose most distinctive feature is its use of artificial neural networks (see the FYI section for more info). Sejnowski claims that any breakthroughs in AI will come as a result of research into deep learning.
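To make the contrast with hand-coded programs concrete, here is a toy sketch of my own (not from Sejnowski; it uses the scikit-learn library, and the data is invented) showing a small artificial neural network developing its own decision rule from training data:

```python
# A toy illustration of machine learning: instead of writing the rules by hand,
# we give the machine labeled examples and let it fit its own decision rule.
from sklearn.neural_network import MLPClassifier

# Invented training data: [hours studied, hours slept] -> passed (1) or failed (0).
X_train = [[8, 7], [7, 8], [9, 6], [2, 4], [1, 6], [3, 3]]
y_train = [1, 1, 1, 0, 0, 0]

# A small artificial neural network with one hidden layer of 8 units.
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X_train, y_train)  # the "algorithm" emerges here, from the data

print(model.predict([[6, 7], [2, 5]]))  # predictions for two unseen students
```

Nobody wrote an if-then rule about studying or sleeping; whatever "rule" the model now uses lives implicitly in its trained weights. Keep that in mind for the discussion of opacity below.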
That's all well and good. If you're like me, you're excited about new developments in technology. However, it's not all as awesome as it might appear at first. In the opening chapter of The Alignment Problem, AI researcher Brian Christian discusses just what this alignment problem is. In short, it is research into ensuring that AI systems are designed so that their goals and behaviors reliably align with human values. But, Christian points out, we are a long way off the mark at this point. For example, machine learning and deep learning algorithms, due to the biased data sets on which they are trained, have built-in biases that may negatively affect historically disenfranchised groups. Case in point: early ML algorithms did very poorly at recognizing the faces of minorities and women. In fact, Christian points out that the bias goes far back in history, all the way back to the mechanisms within cameras themselves, which were tuned on light-skinned models(!). (No wonder our data sets are biased!) It was apparently chocolate manufacturers(!) that demanded the improvement of camera tuning so as to show the details of their product. The lack of training sets containing black faces even led Google's image-classification algorithms to label people of African descent with non-human categories.
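To see how a skewed training set produces skewed performance, here is a rough sketch of my own (everything in it, including the two "groups," is synthetic data; this is not Christian's example): train one classifier on data dominated by group A, then measure its accuracy separately for each group.

```python
# Sketch: a model trained mostly on group A performs worse on group B.
# All data is synthetic and two-dimensional, purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n_per_class, shift):
    """Two classes whose feature distributions are offset by `shift` for this group."""
    X0 = rng.normal(loc=0.0 + shift, scale=1.0, size=(n_per_class, 2))
    X1 = rng.normal(loc=1.5 + shift, scale=1.0, size=(n_per_class, 2))
    return np.vstack([X0, X1]), np.array([0] * n_per_class + [1] * n_per_class)

# Group A dominates the training set; group B is barely represented.
XA, yA = make_group(500, shift=0.0)
XB, yB = make_group(10, shift=2.0)
model = LogisticRegression().fit(np.vstack([XA, XB]), np.concatenate([yA, yB]))

# Evaluate on balanced held-out sets for each group.
XA_test, yA_test = make_group(500, shift=0.0)
XB_test, yB_test = make_group(500, shift=2.0)
print("accuracy on group A:", accuracy_score(yA_test, model.predict(XA_test)))
print("accuracy on group B:", accuracy_score(yB_test, model.predict(XB_test)))
```

With this setup, accuracy on the underrepresented group comes out markedly worse, even though nobody intended any unequal treatment.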
Machine learning algorithms also show bias when it comes to word associations. For example, some algorithms were designed to be able to "compute" combinations of words. The results were pleasing at first, but then things turned sour. One such case: when researchers computed "doctor" - "man" + "woman", the output was "nurse", as if women can't be doctors.
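The "arithmetic" here is vector arithmetic over word embeddings. Here is a minimal sketch (the tiny three-dimensional vectors are invented just to show the mechanics; real embeddings, such as word2vec, have hundreds of dimensions learned from large text corpora):

```python
import numpy as np

# Invented 3-dimensional "embeddings"; real systems learn these vectors from text.
vectors = {
    "man":    np.array([ 1.0,  0.2, 0.1]),
    "woman":  np.array([-1.0,  0.2, 0.1]),
    "doctor": np.array([ 0.9,  0.8, 0.9]),
    "nurse":  np.array([-0.9,  0.7, 0.8]),
    "banana": np.array([ 0.0, -0.9, 0.1]),
}

query = vectors["doctor"] - vectors["man"] + vectors["woman"]

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Return the word closest to the query vector, excluding the input words.
candidates = {w: cosine(query, v) for w, v in vectors.items()
              if w not in ("doctor", "man", "woman")}
print(max(candidates, key=candidates.get))  # -> "nurse" with these made-up vectors
```

The analogy falls out of statistical patterns in the training text; nobody programmed the association, which is exactly why a bias like this is easy to miss.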
There are more problems. There is an opacity to neural nets. What this means is that, since they learn at the sub-symbolic level, programmers do not actually know what the algorithm has "learned" to do. Here's one example, as told by AI researcher Eliezer Yudkowsky:
“Once upon a time, the US Army wanted to use neural networks to automatically detect camouflaged enemy tanks. The researchers trained a neural net on 50 photos of camouflaged tanks in trees, and 50 photos of trees without tanks. Using standard techniques for supervised learning, the researchers trained the neural network. Wisely, the researchers had originally taken 200 photos, 100 photos of tanks and 100 photos of trees. They had used only 50 of each for the training set. The researchers ran the neural network on the remaining 100 photos, and without further training the neural network classified all remaining photos correctly. Success confirmed! The researchers handed the finished work to the Pentagon, which soon handed it back... It turned out that in the researchers’ dataset, photos of camouflaged tanks had been taken on cloudy days, while photos of plain forest had been taken on sunny days. The neural network had learned to distinguish cloudy days from sunny days [not tanks from trees]” (Yudkowsky 2008: 321).
This opacity of neural networks makes them dangerous in, say, medical settings, where they are increasingly being used. If they learn an erroneous pattern, it will be difficult to discover this by looking at the model; again, the learning happens at the sub-symbolic level, which humans can't interpret. It would obviously be easier to spot mistakes in rule-based models, like the hard-coded programs that AI researchers used prior to the ML revolution. In fact, this has been tested. In one trial, both a rule-based model and a neural net learned a false rule: that asthmatics tend to survive pneumonia. (This is definitely not true, as you probably know if you know anyone with asthma.) The models learned this because asthmatics, due to their underlying condition, are typically sent straight to the ICU if they get pneumonia. This makes their death rate artificially low on account of the care they receive as a matter of course: asthmatics tend not to die of pneumonia in the hospital precisely because there is a streamlined process to give them the treatment without which they would very likely die. Both the neural net and the rule-based model, though, suggested that asthmatics with pneumonia be sent home—a result of erroneously "thinking" that asthmatics don't tend to die from pneumonia. This very bad advice was easily spotted in the rule-based model, since its rules are written in (more or less) plain English—any programmer can read them. However, this would have been impossible to recognize at the sub-symbolic level of the neural net. If a hospital were using just neural nets to make diagnoses or treatment recommendations, it would have no easy way of telling how poorly calibrated its neural network is.6
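To see why the faulty rule is visible in one kind of model and invisible in the other, here is a rough sketch of my own (a toy illustration, not the actual system Christian describes):

```python
# A rule-based model wears its logic on its sleeve: the false rule is written
# out in (more or less) plain code, so a human reviewer can spot and remove it.
def pneumonia_risk_rule_based(patient: dict) -> str:
    if patient["has_asthma"]:
        return "low risk"   # <-- the learned-but-false rule, in plain sight
    if patient["age"] > 65:
        return "high risk"
    return "medium risk"

# A neural network encodes whatever it learned only as arrays of numbers.
# Nothing in these (made-up) weights announces "asthma lowers risk"; you would
# have to probe the model's behavior to discover the same faulty pattern.
learned_weights = [
    [ 0.73, -1.12,  0.05],
    [-0.41,  0.88,  1.96],
]
```

Nothing here reproduces the actual models involved; the point is only that a reviewer can audit the first and not the second.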
So our neural nets might end up being racially biased and sexist, and we would be none the wiser. This is obviously not good. For years I've been harping on about the potential social disruptions that AI might cause, in particular if we roll out this technology without really understanding what it's doing. However, it seems that the American legislature is composed primarily of people with insufficient computer literacy to understand the looming threats. The way I see it, especially if you consider the other threats coming from AI (see The Chinese Room and Turing's Test), we are sitting around with the Sword of Damocles dangling over our heads. Perhaps we can do better in Ninewells?
- Read from 430d–437e (pp. 116–125) of Republic.
Lazy thinking occurs when you accept the intuitive answer that comes to mind without rigorously checking the line of reasoning and the quality of the evidence that led to that conclusion.
In today's reading of Republic, the characters note that their kallipolis has temperance in the sense that there is unanimity about who should rule, i.e., those that are most skilled at ruling are the only ones who should rule. The characters also decide that justice is found in the city so long as everyone performs their assigned duty.
The alignment problem is an area of research in artificial intelligence that attempts to ensure that our AI systems will actually comport with human values and desires—something that is not currently happening.
FYI
Suggested Reading: Eliezer Yudkowsky, AI Alignment: Why It’s Hard, and Where to Start
TL;DR: UCL Centre for Artificial Intelligence, The Alignment Problem: Brian Christian
Supplementary Material—
- Video: Inc. Magazine, Daniel Kahneman: Thinking Fast vs. Thinking Slow
- Podcast: You Are Not So Smart Podcast, YANSS 198 – The psychological mechanisms that led to the storming of the Capitol, an event that sprang from a widespread belief in a conspiracy theory that, even weeks later, still persists among millions
- Video: 3Blue1Brown, But what is a neural network?
Related Material—
- Video: Talks at Google, Intuition Pumps and Other Tools for Thinking | Daniel Dennett
- Video: TEDTalk, The wonderful and terrifying implications of computers that can learn | Jeremy Howard
- Video: TEDTalk, What happens when our computers get smarter than we are? | Nick Bostrom
- Video: TEDTalks, Mariana Mazzucato - The Entrepreneurial State
Footnotes
1. The psychological model endorsed by Nobel laureate Daniel Kahneman, known as dual-process theory, gives a very helpful metaphor for understanding these different parts of the mind (which are sometimes at odds with each other in their conclusions). Although I cannot give a proper summary of his view here, the gist is this. We have two mental systems that operate in concert: a fast, automatic one (System 1) and a slow one that requires cognitive effort to use (System 2). Most of the time, System 1 is in control. You go about your day making rapid, automatic inferences about social behavior, small talk, and the like. System 2 operates in the domain of doubt and uncertainty. You'll know System 2 is activated when you are exerting cognitive effort. This is the type of deliberate reasoning that occurs when you are learning a new skill, doing a complicated math problem, making difficult life choices, etc. The interested student should consult Kahneman's 2011 Thinking, fast and slow. I might add that although I think the theory is very elegant and gives an extremely helpful metaphor for thinking of the mind, I do not think it is ultimately accurate. If you'd like to know my views on the mind, you'll have to buy me a cup of coffee and sit patiently as I explain the predictive coding hypothesis to you. As an alternative, see Clark (2015).
2. Obviously the Mongol conquest did play a major role in the decline of Arab civilization, but it doesn't tell the whole story—especially with regards to Arabic scientific achievements. The interested student should refer to Al-Khalili (2011) and Mackintosh-Smith (2019).
3. Here are four possible scenarios, from best to worst. Scenario 1: We achieve a technological breakthrough. We develop helpful superintelligent AI that solves all human organizational, societal, and production problems. However, we've lost our sense of identity and meaning and now feel obsolete (see Danaher 2019). Scenario 2: We avoid the full automation of Scenario 1, but we still have to deal with partial automation. In particular, job-types like management roles are automated such that you are working jobs where you are micro-managed by a supercomputer—a technologically updated version of the push and quota system (see Guendelsberger 2019). Scenario 3: Nanotechnology and machine learning converge, leading to a more war-prone world (see Phoenix and Treder 2012). Scenario 4: Total annihilation of the human race (see Bostrom 2017).
4. Being a picker at an Amazon warehouse is a modern-day job which uses the push and quota system; see Footnote 3.
5. I wish that was the end of it. But Baptist goes on to survey a catalogue of horrors and injustices. In chapter 5, he discusses how slave extemporaneous musicality developed, i.e., the capacity to come up with and improvise on a musical theme, and how this was usurped (i.e., taken) by whites. In chapter 6, Baptist discusses how slave spirituality developed and was ultimately suppressed by whites, all in the land of religious freedom. The apparent justification was fear of slave insurrections that would lead to the death of whites (which did happen at least once). In chapter 7, Baptist discusses the sexualization of black women. In chapter 8, Baptist discusses how slavery influenced foreign policy. One example of this was what Ulysses S. Grant called the "most wicked" war in American history: the Mexican-American War. As Baptist tells it, slave owners and pro-slavery politicians sought to expand so as to increase the available area for the cotton industry (see also Greenberg 2012). Not to leave out the North, chapter 9 is about how integral slavery was to the rise of industry in the North. By the way, Baptist tells us that after US forces captured the capital of Mexico, Mexico City, Congress debated whether or not to annex the whole territory of Mexico (i.e., the entire country). Congress opted not to, however, because they saw themselves as a government of the White race. See Baptist 2016 for still more.
6. Neural networks are also used in predictive policing, and this also has some troubling biases built into it; see the section titled "Predict and Surveil" from my PHIL 103 lesson titled Seeing Justice Done.