“Thinking, Fast and Slow” by Daniel Kahneman (Chapter Summaries)

Chapter 10: The Law of Small Numbers

  • Extreme outcomes (high and low) are more likely to be found in small samples, with no causal explanation behind this (see the simulation sketch after this list)
  • Artifacts: observations produced entirely by some aspect of method of research (for example, differences in sample size)
  • Psychologists traditionally use their (flawed) judgment to decide on sample size, as opposed to calculating it (due to prevalent intuitive misconceptions of extent of sampling variation)
  • “Belief in the law of small numbers”: intuitions about random sampling appear to satisfy law of small numbers, which asserts that law of large numbers applies to small numbers as well (bias that favors certainty over doubt)
  • We tend to focus on the story rather than the reliability of the results, unless reliability is obviously low, in which case the message is discredited
  • System 1 not prone to doubt—suppresses ambiguity, spontaneously constructs stories that are as coherent as possible (unless message immediately negated, spreading activation will evoke associations as if they’re true)
  • System 2 can doubt because it can maintain incompatible possibilities at the same time
  • Halo effect—we’re prone to exaggerate consistency and coherence of what we see (producing representation of reality that makes too much sense)
  • Our associative machinery seeks causes—causal explanations of chance events inevitably wrong
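
A minimal simulation sketch of the small-sample effect; the sample sizes and the 75% “extreme” cutoff below are arbitrary illustrations, not figures from the book:

```python
import random

def extreme_rate(sample_size: int, trials: int = 100_000, cutoff: float = 0.75) -> float:
    """Fraction of samples from a fair 50/50 process whose observed
    proportion of 'heads' is extreme (>= cutoff or <= 1 - cutoff)."""
    extreme = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(sample_size))
        share = heads / sample_size
        if share >= cutoff or share <= 1 - cutoff:
            extreme += 1
    return extreme / trials

# Small samples yield extreme proportions far more often than large ones,
# purely through sampling variation -- there is nothing causal to explain.
print(f"n=4:   {extreme_rate(4):.3f}")    # ~0.62
print(f"n=100: {extreme_rate(100):.5f}")  # ~0.00000 (practically never)
```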

Chapter 11: Anchors

  • Anchoring as adjustment (System 2) and as a priming effect (System 1)
  • Anchoring index can be calculated: the difference between the mean estimates of the high- and low-anchor groups as a fraction of the difference between the anchors (see the sketch after this list)
  • Strong with money, estimates, willingness to pay
  • Random anchors just as effective as informative ones
  • Anchoring results from associative activation (our reliance on the “story”)
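
A sketch of the anchoring-index calculation, using the redwood-height figures reported in the chapter (anchors of 1,200 ft and 180 ft produced mean estimates of 844 ft and 282 ft):

```python
def anchoring_index(high_anchor: float, low_anchor: float,
                    mean_high_est: float, mean_low_est: float) -> float:
    """Difference between the two groups' mean estimates as a fraction of
    the difference between the anchors: 0 means no anchoring at all,
    1 means the estimates track the anchors completely."""
    return (mean_high_est - mean_low_est) / (high_anchor - low_anchor)

# Redwood question: index = (844 - 282) / (1200 - 180)
print(f"{anchoring_index(1200, 180, 844, 282):.0%}")  # -> 55%
```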

Chapter 12: The Science of Availability

  • Based on how easy and fluent retrieval is (assertiveness experiment)
  • No specific number of instances is needed to get an impression of ease of retrieval
  • Availability heuristic substitutes one question for another (inevitably produces systematic errors)
  • People are less confident that an event was avoidable after listing more ways it could’ve been avoided
  • When a drop in fluency is explained by another factor (e.g., background music), the paradoxical results don’t apply—it has to do with surprise (the “unexplained unavailability” heuristic; System 2 can easily reset the expectations of System 1)
  • If personally involved, more likely to go by number of instances than fluency

Chapter 13: Availability, Emotion, and Risk

  • Our expectations about the frequency of events are distorted by the prevalence and emotional intensity of the messages to which we’re exposed (e.g., media coverage, associative memory)
  • Affect heuristic: people make judgments and decisions by consulting their emotions (substitution)
  • Associative coherence and consistent affect, “the emotional tail wags the rational dog” (Jonathan Haidt)
  • We perceive good technologies as having few costs and bad ones no benefits (affect heuristic simplifies our lives by creating world much tidier than reality)
  • Expert vs public opinion: is there such a thing as objective risk? (“risk” depends on measurement we choose)
  • Availability cascade: self-sustaining chain of events through which biases flow into public policy (we tend to either ignore small risks altogether or give them far too much weight, “probability neglect”)

Chapter 14: Tom W’s Specialty

  • Predicting by representativeness, ignoring base rates and veracity of information (substitution occurs, we look to stereotypes)
  • Confusion between probability and likelihood
  • Intuitive impressions produced by representativeness are often more accurate than chance guesses, though (there’s some truth to stereotypes)
  • Sin: excessive willingness to predict occurrence of unlikely/low base-rate events
  • Enhanced activation of System 2 (by frowning) improves predictive accuracy (reduces overconfidence and reliance on intuition)
  • Sin: insensitivity to quality of evidence (unless immediately rejected, System 1 processes evidence as true)
  • Bayesian statistics: how prior beliefs should be combined with the diagnosticity of evidence, the degree to which it favors the hypothesis over the alternative (see the sketch after this list)
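
A sketch of the Bayesian combination in odds form; the 3% base rate and the 4:1 likelihood ratio are hypothetical numbers chosen only to show how a low prior tames diagnostic-looking evidence:

```python
def posterior_probability(base_rate: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds x likelihood ratio.
    The likelihood ratio (diagnosticity) is P(evidence | hypothesis)
    divided by P(evidence | alternative)."""
    prior_odds = base_rate / (1 - base_rate)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# A 3% base rate combined with evidence 4x more likely under the
# hypothesis still yields only about an 11% posterior -- base rates matter.
print(f"{posterior_probability(0.03, 4):.1%}")  # -> 11.0%
```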

Chapter 15: Linda: Less is More

  • Pitting logic against representativeness (in the absence of a competing intuition, i.e. plausibility and coherence, logic prevails)
  • Conjunction fallacy: judging a conjunction of two events to be more probable than one of the events in a direct comparison (see the arithmetic sketch after this list)
  • The most coherent stories are not necessarily the most probable, but they are plausible, and the notions of coherence, plausibility, and probability are easily confused by the unwary
  • Joint versus single evaluations: larger sets valued more than smaller ones in joint evaluation but less in single evaluation (logic versus intuition, though this didn’t apply to the Linda problem)
  • “How many” questions make us think of individuals, while “what percentage” does not
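
A short arithmetic sketch of why the conjunction fallacy is a logical error; the probabilities are hypothetical:

```python
# Every outcome satisfying "A and B" also satisfies "A" alone,
# so P(A and B) can never exceed P(A).
p_bank_teller = 0.05             # hypothetical P(Linda is a bank teller)
p_feminist_given_teller = 0.30   # hypothetical P(feminist | bank teller)

p_both = p_bank_teller * p_feminist_given_teller
assert p_both <= p_bank_teller   # holds for any probabilities in [0, 1]
print(f"{p_bank_teller:.3f} vs {p_both:.3f}")  # 0.050 vs 0.015
```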

Chapter 16: Causes Trump Statistics

  • Causal stereotypes, statistical base rates and causal base rates (latter is more easily combined with other case-specific information while former is generally underweighted or neglected altogether when specific information is available)
  • The helping experiment (bystander effect, diffusion of responsibility)
  • To teach students, you must surprise them: subjects’ unwillingness to deduce the particular from the general was matched only by their willingness to infer the general from the particular (Nisbett and Borgida)—surprising individual cases have powerful impact and are more effective for teaching because the incongruity must be resolved and embedded in a causal story (cognitive dissonance)

Chapter 17: Regression to the Mean

  • If correlation between two measures is imperfect, always expect regression to the mean (see the simulation sketch after this list)
  • Talent versus luck
  • Regression effects are ubiquitous and easily misread through causal stories; regression has an explanation but no cause (though associative memory always looks for one)
  • Confusing correlation with causation (only experiments with control groups can tell us about cause)
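
A minimal simulation of the talent-plus-luck view, with purely illustrative parameters: because the luck component is redrawn each day, the day-to-day correlation is imperfect, and the top day-1 performers regress on day 2 with no cause to explain:

```python
import random

def two_day_scores(n: int = 100_000) -> list[tuple[float, float]]:
    """score = talent + luck: talent persists across days, luck is redrawn,
    so the correlation between the two days is imperfect (about 0.5 here)."""
    scores = []
    for _ in range(n):
        talent = random.gauss(0, 1)
        day1 = talent + random.gauss(0, 1)
        day2 = talent + random.gauss(0, 1)
        scores.append((day1, day2))
    return scores

stars = [(d1, d2) for d1, d2 in two_day_scores() if d1 > 2.0]  # day-1 top scorers
day1_mean = sum(d1 for d1, _ in stars) / len(stars)
day2_mean = sum(d2 for _, d2 in stars) / len(stars)
print(f"day-1 mean among top scorers: {day1_mean:.2f}")    # ~2.6
print(f"day-2 mean for the same people: {day2_mean:.2f}")  # ~1.3 (regression)
```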

Chapter 18: Taming Intuitive Predictions

  • Process of spreading activation that is initially prompted by the evidence and the question, feeds back upon itself, and eventually settles on the most coherent solution possible (System 1)
  • Intuitive predictions tend to be overconfident and overly extreme
  • Correction (for predicting quantitative variables): start from the baseline, form your intuitive prediction (your evaluation of the evidence), then move from the baseline toward the intuition; the distance allowed depends on your estimate of the correlation, so you end up with a prediction influenced by intuition but far more moderate (intuitive predictions need to be corrected because they’re not regressive and are therefore biased; see the sketch after this list)
  • System 1 naturally matches the extremeness of predictions to the perceived extremeness of the evidence on which they’re based (associative memory)—substitution effect in predicting rare events from weak evidence
  • Overconfidence occurs from coherence of best story you can tell from present evidence
  • System 2 has difficulty understanding regression to mean
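
A sketch of the correction recipe for quantitative predictions; the GPA numbers are hypothetical stand-ins (class-average baseline of 3.0, intuitive estimate of 3.8, assumed correlation of 0.3):

```python
def corrected_prediction(baseline: float, intuitive: float,
                         correlation: float) -> float:
    """Move from the baseline toward the intuitive prediction, but only as
    far as the correlation between evidence and outcome justifies:
    correlation = 1 keeps the intuition, 0 falls back to the baseline."""
    return baseline + correlation * (intuitive - baseline)

# Hypothetical GPA prediction: baseline 3.0, intuition says 3.8,
# evidence correlates ~0.3 with the outcome.
print(f"{corrected_prediction(3.0, 3.8, 0.3):.2f}")  # -> 3.24
```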
