Some Reasons People Aren’t Rational

Precis on Thinking, Fast and Slow by Daniel Kahneman

Chuk Moran
24 min read · Feb 13, 2020

This is a huge book with lots of great examples.

  • Most people are willing to bet that they are superior to others on most traits!
  • Investors tend to sell winning stocks, to cash out on their winnings, even though winning stocks are more likely to keep winning in the future!
  • If it’s easy for you to think of examples of something, you will think this something is common, even if it’s not! Shark attacks are high profile and memorable events, but actually not that common.
  • Radiologists usually don’t know the long term outcomes of the patients they diagnose, so it’s hard for them to make better readings with practice.
  • If you ask spouses “how large was your personal contribution to keeping the place tidy, in percentages?” the sum of their answers is over 100!

So, what’s the point? People are stupid?

Not exactly. Kahneman’s goal is to improve your ability to talk about decision making, rationality, and the kinds of bad guesses that are most common. He imagines that richer conversations than “people are stupid” are possible. Such richer conversations will help you, and the people you talk with most often, do better.

In particular, Kahneman wants to dive into the duplicity of the human mind, charting two large rifts. The first gap is between the slower moving analytical part of the mind and the hasty intuitive side. The second gap is between the experiencing self that enjoys a nice dinner and the remembering self that enjoys the memory of that dinner (or forgets about it entirely!)

To be honest, psychology seems to offer a lot of dualistic models of human thought, whether ego/id, neocortex/rest, conscious/unconscious, or just angel/devil; I don’t really see the appeal. But I suspect it’s mostly because they don’t take their meta-theory that seriously, and mostly want to focus on specific examples and relate them together in a broad, gestural way. Anyway.

I’ll review these two distinctions, most of the major phenomena he covers, and also poke holes in his concept of rationality as we go.

Almost every claim in this book is tied directly to psychology studies, which are mostly conducted on college students and some of which have failed to replicate in the last few years. Many were conducted by Kahneman himself, so this book is kind of a review of his work! So, if you’re in doubt at some point, you might want to double check the facts and see if the reported notion is still considered valid! One of my favorite ideas in here (priming people with thoughts of money makes them selfish) has been disproven by more research. Still, the claims here have a much firmer basis in science than most of the crap you’ll hear about who people are or how thinking works. So, enjoy!

System 1 and System 2

Kahneman uses these names to distinguish the quick, associative, somewhat primal mind (“System 1”) from the cautious, skeptical, easily-tired analytical mind (“System 2”).

Whenever a person encounters new stimuli, System 1 will be first to act. Consider opening a letter. System 1 will skip past difficult questions, such as “what does this letter say,” and answer easier questions, such as “is this junk, a bill, or a check?” System 1 does not look elsewhere for evidence, but instead follows a principle Kahneman calls “What You See Is All There Is.” It just looks at what is right in front of it and tries to judge by that evidence. System 1 tends to offer emotional reactions very quickly, even when they are inappropriate, such as when seeing a deformed face and panicking or seeing handwriting on an envelope and believing the letter is from a personal friend. The action of System 1 is not always voluntary, and it may jump way ahead, reading more words than you wanted to or responding to details you’d rather ignore. System 1 is a beast, really, and can multi-task, scanning different pieces of mail while also coordinating your movement and perhaps engaging someone else in conversation at the same time. System 1 does not often notice alternatives, and tends to forget them easily, seeking evidence for its hunches and rationalizations for its preferences.

When System 1 hits a snag, like interpreting a peculiar letter, System 2 fires up, picking up where System 1 left off. Ideally, System 2 would be skeptical of System 1 and do a great job questioning assumptions, considering alternatives, and doing all that rational-sounding stuff System 1 just barreled past. However, System 2 is expensive to run. Kahneman clarifies that “when you are actively involved in difficult cognitive reasoning or engaged in a task that requires self-control, your blood glucose level drops” and you can actually bring it back up and fuel System 2 by consuming sweet drinks (p. 43)! Splenda does not work, of course. Gatorade should, though, and I’ve long used it for just this purpose!

When System 2 is running, it is experienced as voluntary, doing only what is asked of it. It is not really capable of multi-tasking and will store partial results in short term memory or pass tasks back to System 1 if there are lots of jobs in the queue. When System 2 is running, a person’s pupils dilate and heart rate goes up. System 2 is good at doing arithmetic problems that are somewhat challenging, playing little logic games, or seeing things as specific cases of general ideas. However, System 2 is far from perfect and Kahneman has a lot to say about our poor statistical intuition that seems to be a criticism of System 2. (System 1 has its own “associatively coherent” estimates, which tend to be calibrated to emotional rewards, but have little basis in actual statistics.)

When System 2 fatigues, a person will run short on self-control. In these cases, it’s common for people to choose less healthy food, use sexist language, and be selfish (p. 41–42). Generally, humans can conserve System 2 expenditure by creating conditions for cognitive ease. Familiar situations, plain writing, easy to pronounce words, good mood, and many other factors set the stage for cognitive ease. However, cognitive ease may make it so easy to think about things that System 2 never gets involved at all! If things seem clear, obvious, and easy, there won’t be a need to think challenging thoughts at all! Kahneman reports that cognitive ease is great for creativity, though I would point out that this is only true for some aspects of creativity. The “Yes, and” structure of improv comedy is a good example of creating cognitive ease for everyone, making it easier to follow your instincts somewhere interesting.

Cognitive ease is a state where you won’t need System 2 much. However, this might make you sloppy as you get overconfident.

System 1 seems preoccupied with causality, looking for it everywhere and hallucinating causation even where wildly inappropriate. Religious thinking sometimes satisfies this desire by suggesting there are unseen causes and divine influences everywhere. System 1 is a sloppy thinker, very inclined to find people guilty by association, to fall for a “halo effect” in which one good thing has many (or all) good properties, etc. Kahneman doesn’t make this leap, but I imagine that almost everything called a “fallacy” is a bad habit of System 1! Ad hominem, slippery slope, equivocation, appeal to authority, genetic fallacy. System 1 is fallacious.

Kahneman begins to associate System 1 with the ego per se (p. 199), which I can’t help but understand in a Freudian way as the liar-face that tries to explain everything that’s happening in a familiar and coherent narrative even though life is just a bunch of insanity. System 1 gets overconfident when the story it’s spinning sounds reasonable. It therefore enjoys having less evidence, because then its outlook is clearer!

However, don’t get dismissive of System 1. It’s probably more important in our human world than System 2, and you should learn to live with the System 1 in yourself and in others. System 1 learns more from a specific story than from a statistical summary (p. 174). System 1 understands averages better than sums (p. 93).

Next Kahneman uses the general distinction between System 1 and System 2 to explain a few nifty fallacies. What’s cool here is that Kahneman is about to explain a bunch of types of mistakes that you probably didn’t realize are commonly made in decision making and thought!

Law of Small Numbers

A study of new diagnoses of kidney cancer in the 3,141 counties of the United States reveals a remarkable pattern. The counties in which the incidence of kidney cancer is lowest are mostly rural, sparsely populated, and located in traditionally Republican states in the Midwest, the South, and the West. What do you make of this? (p. 110)

Although there are a lot of hypotheses you might offer about clean country air or traditional lifestyles, there’s something you should know first. The counties where incidence of kidney cancer is highest are also rural, sparsely populated, and located in traditionally Republican states in the Midwest, the South, and the West. Wtf?

The trick here is that these counties have so few people living in them that they are more likely to depart from national averages of anything! It’s the flip side of the law of large numbers: repeat a random process many times and the observed rate converges on the true probability. So if there are very few cases of a random outcome (few people who might get kidney cancer or not), the group’s overall rate can easily land far from the average.
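
Here is a minimal simulation of the idea, mine rather than the book’s: every county below shares the exact same made-up underlying cancer rate, and the only difference is population size. Both the highest and the lowest observed rates come from the small counties.

```python
import numpy as np

rng = np.random.default_rng(0)
base_rate = 1e-4                  # made-up nationwide incidence, purely illustrative
small_pop, large_pop = 1_000, 1_000_000

# Draw cancer counts for 1,000 imaginary counties of each size, all sharing the same true rate.
small_rates = rng.binomial(small_pop, base_rate, size=1_000) / small_pop
large_rates = rng.binomial(large_pop, base_rate, size=1_000) / large_pop

print("small counties:", small_rates.min(), "to", small_rates.max())  # swings from 0 to several times the base rate
print("large counties:", large_rates.min(), "to", large_rates.max())  # hugs 0.0001
```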

The fallacy here is that people tend to jump on a surprising finding without considering whether it is meaningful, forgetting that small sample sizes yield atypical results more easily. Kahneman describes this as System 1 jumping to a conclusion based on What You See Is All There Is, and System 2 can (ideally) balance this with skepticism or by simply thinking the opposite of System 1. I’m a big fan of pursuing the line of thought implied by “or not” and have found this a really helpful way to respond to arguments I don’t find compelling. What if I say “or not” and then try to think some thoughts in that direction? There are two basic systems in the mind … or not!

Availability Heuristic

This is a strange one. Because System 1 thinks in terms of examples, we tend to treat those examples as an evidentiary basis for estimates. If I ask if you are often generous, and you can think of a few examples easily, you will probably say that you are! But if I ask you to think of 10 examples, that would be hard and you may conclude you are not generous. Really, your ability to recall cases where you were generous is not a good measure of generosity at all.

I mentioned before that spouses overestimate their contributions to the tidiness of a home, and the same thing happens on larger teams where each person (who is very familiar with their own contributions) thinks they do all the work. So funny!

The needed corrective here is to gather evidence in a more systematic way, so the sample corresponds better to the population. (That is, so the specific cases you review as evidence represent, or ideally are exactly, the overall set of cases.)

When people see a birdie covered in sad gloopy oil, they are willing to forego reason and freak out. Really, birds covered in oil are not a big part of the damage of an oil spill. But, look at this freaking bird! Don’t you want to worry about it! Not very rational, Kahneman would say.

Risk Evaluation

This one is a bit dark, but a great section of the book. The trouble is that System 1 sees something it doesn’t like and decides to prioritize the issue. That’s how it defines and prioritizes risk. I see this all the time around me, and it’s so annoying. Internet privacy is a good example: you see the lack of privacy and so assume it’s a big deal, but have you considered all the other impacts of the internet and then tried to prioritize privacy concerns relative to those? Probably not, if that’s not your job.

Technical specialists in an area of risk (e.g. environmental impact or internet policy) tend to assess things in much broader, more rational terms. However, it’s tough to convince the public that these rational terms are meaningful. Kahneman gives the example of body count: how many people die with one policy on air pollution versus another. Experts consider that a really important number. The public, however, doesn’t feel that way and may not believe that deaths from particulate pollution are really equivalent to other kinds of deaths. Indeed, I think this has been a major change of attitude around smoking from “your body your call” to “that’s a terrible way to die, you poor addict.” Skiing deaths are somehow fine, but motorcycle deaths are not. Poor brown people dying is not a big deal, but white Americans…

Anyhow, Kahneman concludes that the practical way most societies (and organizations) handle risk is by “availability cascades.” What happens is that there are some examples, someone notices them and shows them to everyone they can with maximum emotional appeal, until a public decides this issue is a big deal and then ignores evidence to the contrary.

To me, the public rejection of nuclear power is a great example of this. It’s actually a very safe technology relative to power output, but coal miners dying of black lung is not interesting and nuclear meltdown is. So all the good people attending to the news accept that nuclear power is unacceptable. For now, those same people accept Liquefied Natural Gas, even though it’s known to be risky. Once a ship of LNG explodes, the fuel will face political obstacles. Coal is, of course, quite unsafe, but its impact is so boring the public doesn’t care. You could think through the same logic in the case of bicycle travel vs car vs airplane, where the dangerous one (cars) is almost always proposed as the alternative to the safer ones. (Buses are also really safe, btw.)

Kahneman’s advice: System 2 needs to check the evidence for a risk early, and reject it then if the evidence is weak, before System 1 gets carried away and contrary evidence becomes unwelcome.

Conjunction Fallacy

I hate this one, but it’s true.

Kahneman first sketches Linda: thirty-one, single, outspoken, very bright, a philosophy major who as a student was deeply concerned with discrimination and social justice. Then he asks which alternative is more probable:
Linda is a bank teller.
Linda is a bank teller in the feminist movement.

A huge majority of people, especially educated people, choose the second option. However, they’re all wrong, because the chance that Linda is a bank teller (who may or may not be in the feminist movement) is necessarily higher than the chance that Linda is a bank teller AND in the feminist movement. It’s quite simple: there’s some chance Linda is a bank teller (~5% maybe?) and then, given that, some chance she’s also in the feminist movement (<80% right?). Multiply the two together and the chance of both (4% at most) is definitely lower than the chance of the first alone!

Yet no one feels that way. Including me! If I’m going to imagine she’s a bank teller, I’m going to imagine she’s a feminist too, handle that! Anyway, here I am being wrong.

Ultimately, predictions lose accuracy as they gain detail. So if someone predicts “it will rain tomorrow” that’s more likely to be true than “it will rain tomorrow with a big downpour around 3pm.” I know you’ll ignore this advice, but remember that it’s true. “We’ll have the project done by Monday and the documents sent out by Tuesday.” “I can get you a great discount and it will be on an excellent product.” “I will be yours forever and always love you.”

So, remember: Linda might be a misogynist jerkface. Pretty common, really.

Kahneman throws in here that people are more realistic about chances if you present them as “per one thousand people” rather than percent. So, which is greater: the number of women named Linda who are bank tellers (per thousand), or the number of women named Linda who are bank tellers and in the feminist movement (per thousand)? I guess 3 and 1. So, yeah, OK, definitely the first one.
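
To make the arithmetic concrete, here is the same reasoning in a few lines, using the made-up 5% and 80% guesses from above rather than any real statistics:

```python
# Made-up guesses from the discussion above, not real numbers.
p_teller = 0.05                  # chance Linda is a bank teller
p_feminist_given_teller = 0.80   # chance she's in the feminist movement, given she's a teller

p_both = p_teller * p_feminist_given_teller
print(p_both)                    # 0.04: the conjunction can never beat either part alone

# The "per one thousand women like Linda" framing that people find easier to reason about:
print(round(1000 * p_teller))    # 50 are bank tellers
print(round(1000 * p_both))      # 40 are bank tellers AND in the feminist movement
```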

To Kahneman, this is what a probability means. However, I still don’t really believe that “per one thousand people” is the same as “percent chance” in most non-academic situations. The truth is that both measures are wrong in normal life, where most things happen exactly once with no reliable way to tell in advance. So Kahneman’s definition of rationality around statistics is rather tightly correlated to processes that repeat thousands of times and are basically interchangeable. Further, in the statistical abstract there is no actor who has to stake their name on one side of a bet or face consequences of the outcome after. In my world, most people are trying to make good guesses and deal with the outcomes, so we’d rather venture that Linda’s a feminist than not. Kahneman calls this irrational.

Regression to the Mean

Surely the best content of the entire book (p. 175–184). Why do very intelligent women tend to marry men who are less intelligent than them? There are a lot of ways you can explain this, but Kahneman insists the best explanation is that most men are less intelligent than a very intelligent person, so that’s just what the field is going to look like for a very intelligent woman! The chance of rolling extremely high twice is low, and unless the very intelligent woman is only looking for intelligence, the outcome is likely to be a combination of something rare and something more common. (Even if she insists on intelligence, what is she likely to end up with? The rarer her own intelligence, the more likely she is to be the more intelligent one in the couple. Good news: she is also unlikely to end up with someone extremely unintelligent!)

Kahneman gives a fantastic example from the air force, where instructors say they yell at a pilot after a terrible performance and praise the pilot after a great performance, but what they tend to get in subsequent performances is something closer to average. To them, this proves that yelling works and praising doesn’t!

But, an exceptionally bad run will usually be followed by a better one, regardless of the yelling. And an exceptionally good run will be followed by a less exceptional one.
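
A tiny simulation, mine rather than Kahneman’s, makes the flight-instructor story vivid: every cadet has a fixed skill plus random luck on each flight, nobody gets yelled at or praised, and the extreme performances still drift back toward average on the next attempt.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cadets = 10_000

skill = rng.normal(0, 1, n_cadets)                         # each cadet's stable ability
runs = skill[:, None] + rng.normal(0, 1, (n_cadets, 2))    # two flights: ability plus luck

worst = runs[:, 0] < np.percentile(runs[:, 0], 5)   # bottom 5% on the first flight
best = runs[:, 0] > np.percentile(runs[:, 0], 95)   # top 5% on the first flight

# Nobody here was yelled at or praised, yet the extremes regress toward the mean.
print((runs[worst, 1] > runs[worst, 0]).mean())     # around 0.9: most terrible flights are followed by a better one
print((runs[best, 1] < runs[best, 0]).mean())       # around 0.9: most great flights are followed by a worse one
```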

Again, Kahneman’s basic point is that we don’t understand the world that well and should be more realistic about our ignorance. Reasonable. Especially if it involves less yelling at people, I’m into it.

Hindsight

This one is disturbing. It’s very common that after someone changes their mind about something, they forget why they held the old belief at all! The mind doesn’t hold onto the old beliefs very well and has trouble unseeing the new perspective or defending the old idea. Kahneman points out that after an event turns out a certain way, it’s very common for people to exaggerate the certainty they had before. (So true.) Usually they’ll point out some reason why the outcome should have been considered likely the whole time, even if the outcome really was unlikely!

The most useful point of the whole book for me: humans have a major outcome bias that makes them think the outcome of a process is a great summary of the process (p. 204). In business this is basically a law. If someone has great outcomes listed on a resume, they are great at attaining outcomes. If they have poor outcomes, then it must be their fault. This leads to a form of lying in business that I find really gross, ignorant, and pervasive. Reality, Kahneman says, is regression to the mean. An especially successful outcome will be followed by a worse, more common one. A very bad outcome will probably be followed by a better, more common one.

After reading Kahneman I read a business book on the employer-employee relationship someone gave me a few years ago: it was full of outcome bias! The book always showed “what works” at companies that are currently considered to be doing well, and only showed “what doesn’t work” at companies that no longer exist! In the delusional world of business writing, people are talented because they claim success; people who work on a project that’s canceled, or who have whatever bad luck, are not talented! But, for real, you should assume this is how reality works when you talk about business stuff.

Kahneman especially loves to pick on traders in finance, who mostly make bad decisions picking what will be valuable in the future (p. 213). His knock on them is that their picks are often wrong and they would do better to select stocks at random, which sounds like a fun basis for a mutual fund! However, the professional culture validates traders despite this, weaving whatever stories they need to infer causality and prop up these sad soldiers (p. 217).

I believe this is another case where Kahneman’s flavor of rationality breaks down, because there are cultural norms that imbue random processes with an essence of success despite largely unpredictable outcomes. You can see this also in discussions of communism as a “proven failure” when the outcomes were really subject to so many other factors, including competition from more affluent and bellicose capitalist societies! This idea is also called “survivor bias,” and really it’s quite unfortunate that Kahneman’s rationality would define many occupations (including salespeople) as random monkeys at the typewriter. It’s nicer to imagine that some of them are better than others, even if it is mostly luck!

Expertise

Yeah, Kahneman is against expertise. His take is that experts suck at accurate prediction, particularly the specialists and wonks who have an answer to everything on the TV box. Like, a specialist talking about whether Iran will declare war over so-and-so, that dude is probably wrong. Kahneman cites Philip Tetlock’s extensive research that shows the economic and political forecasts of experts are usually wrong (p. 219).

Instead of expert opinion, Kahneman endorses Paul Meehl’s suggestion that we use simple formulas to consistently evaluate situations and accept fairly low accuracy for our predictions. So, when asking “will the price of bitcoin go up or down next month?” you could come up with a simple formula and just keep using it, so you can get better at interpreting its results. The accuracy will be low, but it will still be higher than if you guess with your gut each time.

I find this notion cheerful and fun: just make a simple formula and stick to it! For example, Kahneman suggests that a good relationship is just the number of good encounters divided by the number of bad ones. (Five is a good target!) A good sexual relationship is the number of sex-sessions minus the number of quarrels. The Apgar test for newborn babies is a very successful version of this, where you just ask a few simple questions about the baby. Is the baby at all bluish? (Scored 0, 1, or 2.) Is there a slow pulse? How easy is it to make the baby grimace? Is the baby flexing its muscles? Is the baby breathing much? Sum these and you know if the baby needs immediate attention or not.

In a bid to give you a useful tool in your own work, Kahneman advocates for extremely simple formulas (rather than complex regressions or big mathematical models). Pick six factors you are confident matter and that are not directly related. Use a small Likert scale for each factor. Sum them. That’s how you should evaluate whether students will succeed later in life, if a job is going to be a great fit for you, or if a proposal is worth pursuing! Sounds great. The accuracy is not great, but he claims it will be better than doing all your homework and thinking a lot. Homework and thinking take lots of time, so I find this idea appealing.
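
Here is what that recipe might look like in code, as a sketch only. The factors and the 1–5 scale below are my invented example for sizing up a job offer; the part that is Kahneman’s is the discipline of picking a handful of loosely related factors, rating each one simply, and just summing.

```python
# Hypothetical factors for judging a job offer; the names and the 1-5 scale are made up.
FACTORS = ["manager quality", "team fit", "growth", "pay", "commute", "stability"]

def score(ratings: dict) -> int:
    """Sum of 1-5 ratings, one per factor. Higher is better; no clever weighting allowed."""
    assert set(ratings) == set(FACTORS), "rate every factor, and nothing else"
    assert all(1 <= r <= 5 for r in ratings.values()), "stick to the 1-5 scale"
    return sum(ratings.values())

offer = {"manager quality": 4, "team fit": 3, "growth": 5, "pay": 3, "commute": 2, "stability": 4}
print(score(offer))   # 21 out of a possible 30
```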

I think I should say something here about machine learning, which mostly aims to slightly increase accuracy by making a super complicated arbitrary formula that uses many more inputs. Machine learning can boost accuracy by a few percentage points, which can be millions of dollars, but until it’s easily accessible, formulas and arithmetic sound good.

Kahneman does want to give some credit to expert judgement. Specifically, intuition can be trusted when the environment where you apply expert judgement is very regular and the expert gets to see outcomes and make judgements often. So, stock traders can’t have expertise because the stock market is chaotic, but fire fighters can become experts because burning structures are similar and they get to make decisions about them a lot, seeing all relevant outcomes. Still, Kahneman wants to point out that experience can teach you the wrong lessons and that you may never get to see some of the long term outcomes of your decisions. Yeah, so experts are irrational too, great.

Planning Fallacy

This is another great one. But it’s kind of a no-brainer. The issue is that those who are planning something tend to be unrealistic about the plan because they are too excited about it working.

Kahneman offers a great story here, where he and a team he really respects work on a new curriculum for a public school system. They all agree their plan is great and they are going to succeed. They try to be realistic by padding their “best case scenario” estimates a bit and think they’re totally on track. Then they ask someone with experience to evaluate how long other teams took to develop a new curriculum and how strong their teams were relative to this one. Sadly, the experienced homie has to tell them that most projects like this take a very long time or fail, and that this team is slightly below average relative to all such teams he has seen (p. 245–247). Bummer. 😑

The basic fallacy is that those planning something are focused on what they have going for them and are trying to remain optimistic to keep going. But they ignore “unknown unknowns,” such as divorce and disease, that quite often push projects off track or ruin them completely. They tend to work from an “inside perspective,” letting their intentions stand in as predictions, when they should really step back and use an “outside perspective,” letting the outcomes of other, similar projects stand in as predictions. That “outside perspective” is also called “reference-class forecasting” (p. 251).
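
Reduced to a few lines, the outside view can look like this. The reference class below is made up, standing in for the records the experienced adviser had in his head.

```python
# Made-up reference class: how long curriculum projects like ours actually took, in years.
# None means the team gave up before finishing.
past_projects = [7, 8, 8, 9, 10, None, None]

finished = sorted(d for d in past_projects if d is not None)

# The outside view: forecast from the record of teams like us, not from our own plans.
print("completion rate:", len(finished) / len(past_projects))      # ~0.71
print("typical duration:", finished[len(finished) // 2], "years")  # 8 years (the median)
```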

In this case, Kahneman notes that being too realistic about a project early on might cause problems such as:

  • you’re not a very effective cheerleader when everyone knows how hard the project is going to be
  • the fact that your project will only be a marginal victory is apparent early on and so is harder to sell
  • you reserve extra resources for the project, beyond what is clearly needed at the outset, and the greedy contractors find ways to spend the money early; you still need more resources later

So perhaps we could append to Kahneman’s rationality concepts a set of duplicitous norms for how to represent your clear, rational understanding to others. In this case, figure out your reference-class forecast and then distort those numbers for various audiences.

Optimism

Yeah, optimism is a fallacy. In Kahneman’s terms, an optimist is someone who thinks they are improbably likely to succeed without clear reason. Optimism is partly hereditary and generally beneficial, improving life outcomes and health and so forth (p. 255). However, optimists are usually proven wrong (by definition) and take big risks that they should not. Often they want support from the government and their friends, but the support does not yield good results.

[Image: quote from a 2015 diss piece on Donald Trump]

Entrepreneurs think they will succeed despite the very low rate of success for new businesses. Overconfident CEOs (who express their overconfidence by owning lots of stock in their own company) tend to make worse deals and drive companies to lower stock values.

There are social rewards for overconfidence: it helps you make faster decisions, it helps you look more expert and authoritative, it helps you defend your self-image, and it helps you keep going despite setbacks (p. 263).

Kahneman offers another cute business school idea here: you should do a “premortem” for a project before you start to check overconfidence. Ask members of the team to imagine it is now one year in the future and the project has been a huge failure. Ask them to each take 5–10 minutes to write the history of this failure.

Risk Aversion

In addition to being too bold, it’s common to be wrong by being too timid! From Kahneman’s perspective, losses and gains are both normal, and all that should matter is the state you end up in, not how it relates to the present one.

However, risk aversion is real and very common, though it does vary (optimists may have less of it). People tend to consider gains and losses relative to their current situation, rather than comparing the overall state they are in now to the state they would be in then. Paying an extra $5 fee for something sounds bad, but missing out on a $5 savings seems somehow less bad. The pain of losing something is about twice as strong as the pleasure of gaining the same thing (roughly 2:1), and this biases people against taking bets that they actually should take.

Psychological value does not quite equal dollar value (p. 283). Does this mean we “should” experience psychological value equal to dollars? Or does it mean we should never assume humans equate dollars with psychological value? Idk, it’s still a good chart and a peculiar kind of human thinking.
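
The chart in question is a value function with exactly this asymmetry. Here is a rough sketch of its shape, using a standard parametrization from later Kahneman-Tversky work; my choice of illustration, not something the book gives as code.

```python
def value(x: float, alpha: float = 0.88, lam: float = 2.25) -> float:
    """Prospect-theory-style psychological value of a gain or loss x, relative to where you are now.

    alpha < 1 gives diminishing sensitivity (the curve flattens out as amounts grow);
    lam > 1 makes losses loom larger than equal gains. These particular numbers are
    later Kahneman-Tversky estimates, used here only to illustrate the shape.
    """
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

print(value(5.0), value(-5.0))   # roughly 4.1 and -9.3: losing $5 hurts about twice as much as gaining $5 feels good
```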

A common case of risk aversion can be found in civil suits. Consider a business owner who is sued for $1M by an employee for harassment that never took place. The defendant (business owner) may learn from their lawyer that the chance of paying out $1M is 5%. In theory, this should be a $50k problem for the business. Now, it’s quite common for another company to come along and take the financial risk for 10% of the total, $100k. In that deal, if the plaintiff won, the other company would pay the $1M. If the defendant won, they would still have to pay $100k to the 3rd party. Well, if the risk is really 5%, the defendant should refuse this offer and go to trial. Own the risk! However, many defendants in this position will happily pay to make the problem go away and sell the risk. Because they are risk averse.
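
Spelled out, the expected-value arithmetic the risk-averse owner is declining to act on looks like this:

```python
# The lawsuit numbers from the example above.
claim, p_lose, buyout_fee = 1_000_000, 0.05, 100_000

expected_cost_of_trial = p_lose * claim       # $50,000 on average
print(expected_cost_of_trial)                 # 50000.0
print(expected_cost_of_trial < buyout_fee)    # True: on expected value alone, refuse the deal and go to trial
```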

One root of this aversion is that the human mind gives priority to bad news (“negativity dominance”), which sounds like System 1. Fun fact: animals are similar and fight harder to avoid losses to their territory than to establish gains!

[Image: the first pair of eyes commands your attention, even though it is not more important.]

Kahneman considers loss aversion to be the gravitational force that keeps lives stable. If leaving a marriage or moving to a new home sounds painful, we don’t do it, even if there is potentially much more to gain in a new arrangement.

I’ll just throw it out there that, in most cases, there are considerable transaction costs which make risk aversion more understandable, such as the bother of moving or pain and social ramifications of breaking up with someone. The benefit of something “twice as good” seems largely theoretical, since the experienced benefit to a person is rarely “doubled,” if that even makes sense to say about human experience at all. So, relative to experts trying to manipulate quantities, most people are risk averse. But I’m not sure that it’s “irrational” for regular folks trying to get by. It’s hard to predict the total cost of a change, and people may be traumatized by similar changes in the past and so have a strong preference to avoid that. Is this “rational?” I don’t know.

Experiencing versus Remembering

In this much shorter section of the book, Kahneman explores a second fundamental split within human behavior. On the one hand, a person exists in each moment and enjoys or suffers in that moment. On the other hand, a person exists as a set of recollections of times and places and things that happen, and may remember enjoyment or suffering quite differently from how it felt at the time. Legitimately, there is no correct way to summarize the pleasures of a thousand moments, because not only do humans not sum up their feelings, no one else is counting them up for us.

In particular, the experiencing self can evaluate each moment. But the remembering self does not care much about duration. If an experience was great for 1 hour or 3 hours, the duration won’t matter much later. Even stranger, the peak enjoyment or enjoyment in the last moment tends to be more important than average enjoyment. The same is true for suffering, and so a very long experience with no peak suffering is more acceptable than a shorter one with a high intensity or a particularly bad ending.

Kahneman shows minute-by-minute pain reports for two patients undergoing the same kind of medical procedure, and asks who had a worse time. Patient A actually reported it to be much worse than Patient B. This is because Patient A had a very high peak of pain, and a peak right at the end. Patient B had a much longer procedure, but the remembering self doesn’t seem to care much about duration, so the gentler ending made this procedure less painful, in memory (p. 379).
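
Here is the peak-end idea as a toy calculation. The pain numbers are invented, and the “average of peak and end” summary is just the book’s rule of thumb, not a claim about how memory literally computes.

```python
# Made-up minute-by-minute pain reports (0-10) for two hypothetical patients, shaped like
# the pair Kahneman describes: A is short and ends at its worst, B is longer but tapers off.
patient_a = [2, 4, 6, 8, 8, 7, 8]
patient_b = [2, 4, 6, 8, 8, 7, 8, 6, 5, 4, 3, 2, 1]

def total_pain(report):
    """What the experiencing self lived through: every painful minute counts."""
    return sum(report)

def remembered_pain(report):
    """Crude peak-end summary: average the worst moment and the final moment, ignore duration."""
    return (max(report) + report[-1]) / 2

print(total_pain(patient_a), total_pain(patient_b))            # 43 vs 64: B suffered more overall
print(remembered_pain(patient_a), remembered_pain(patient_b))  # 8.0 vs 4.5: yet A remembers it as worse
```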

Kahneman reviews how basic socioeconomic factors influence happiness (remembered and experienced), sharing a few awesome findings. In poverty, one’s suffering-in-the-moment is likely to be higher simply because one is less insulated from pain. A headache, for example, slightly increased suffering-in-the-moment for many people, but greatly increased suffering-in-the-moment for the poor (p. 396). (Beyond household income of $75k, everyone seemed about equally insulated from everyday suffering.) At the opposite extreme, the very wealthy were not much happier moment to moment than those in the middle, but they did have greater life satisfaction when asked about their goals, hopes, dreams and how fulfilled they were with these. Kahneman offers some hypotheses about why the wealthy aren’t actually happier each day than anyone else, but my guess is that human bodies have an upper limit to how much they can enjoy themselves.

Independent of wealth, it seems that a small percentage of the population may do most of the suffering. If you ask 100 people how unpleasant their day was, a few will say it was much worse than the average. These people will tend to have the same outcome on most days (p. 394)! Indeed, such temperament seems to be inherited!

Overall, most forms of hardship produce some amount of suffering for a while, but it dwindles as the person gets used to it. The exceptions to this are chronic pain, exposure to loud noises, and severe depression (p. 405).

People will rate how satisfied they are with their life more and more highly right around a wedding, but a few years later, marriage doesn’t help life satisfaction scores at all (p. 398).

Focusing Illusion

The simplest and clearest of Kahneman’s fallacies, the focusing illusion: “nothing in life is as important as you think it is when you are thinking about it” (p. 402).

Some Conclusions

This book is full of fun, iconoclastic ideas that helped me notice how the business of regular thought actually gets done. It’s a very helpful contrast with more idealistic accounts of human thought and I’ve learned to accept the fallacious thinking of others as a baseline for how to interact with humans, even though I hate it.

On the other hand, as an overall thesis about rationality in thought, the book is very weak. The definition of rationality is unclear and always in the background, so that Kahneman can easily declare what counts as rational and what doesn’t. The real purpose of the rationality card is to offer moral correction to others, so they can act in a way more in keeping with the nascent institutional orthodoxy of behavioral economics that Kahneman endorses. I find this bid for ethical leadership unconvincing because it shows little virtue or consideration of other ways of knowing, and instead relies on conventions of psychological research that have more to do with establishing psychology as a hard science than with exploring human minds and what they can really do. But that’s just me, and I don’t think anyone was really reading this book as a contribution to the understanding of human thought anyway.

It’s just a nice summary of a lot of cool research findings from a very interesting research career with some more general theme used for scaffolding. And it’s still a richer account of human intelligence than most! Good book, though really long. The first 100 pages are probably the best.
