Wednesday, December 7, 2016

Thinking, Fast and Slow

_Thinking, Fast and Slow_ by Daniel Kahneman
NY:  Farrar, Straus and Giroux, 2011
ISBN 978-0-374-27563-1

(6)  The pleasure we [Kahneman and Amos Tversky] found in working together made us exceptionally patient;  it is much easier to strive for perfection when you are never bored.

(9)  A recurrent theme of this book is that luck plays a large role in every story of success;  it is almost always easy to identify a small change in the story that would have turned a remarkable achievement into a mediocre outcome.

(10)  Five years after the _Science_ article, we published "Prospect Theory:  An Analysis of Decision Under Risk," a theory of choice that is by some counts more influential than our work on judgment, and is one of the foundations of behavioral economics.

(24)  [Invisible gorilla]  It is the counting task - and especially the instruction to ignore one of the teams - that causes the blindness….  The gorilla study illustrates two important facts about our minds:  we can be blind to the obvious, and we are also blind to our blindness.
NB:  asking to ignore something means we continue to ignore other things too

(28)  Many years later I learned that the teacher had warned us against psychopathic charm, and the leading authority in the study of psychopathy confirmed that the teacher's advice was sound….  We were told that a strong attraction to a patient with a repeated history of failed treatment is a danger sign…

(35)  In the economy of action, effort is a cost, and the acquisition of skill is driven by the balance of benefits and costs.  Laziness is built deep into our nature.

(36)  System 2 is the only one that can follow rules, compare objects on several attributes, and make deliberate choices between options.  The automatic System 1 does not have these capabilities.  System 1 detects simple relations ("they are all alike," "the son is much taller than the father") and excels at integrating information about one thing, but it does not deal with multiple distinct topics at once, nor is it adept at using purely statistical information.

(37)  One of the significant discoveries of cognitive psychologists in recent decades is that switching from one task to another is effortful, especially under time pressure… 
NB:  multitasking? 

Time pressure is another driver of effort.

(41)  A series of surprising experiments by the psychologist Roy Baumeister and his colleagues has shown conclusively that all variants of voluntary effort - cognitive, emotional, or physical - draw at least partly on a shared pool of mental energy.  Their experiments involve successive rather than simultaneous tasks.

Baumeister's group has repeatedly found that an effort of will or self-control is tiring;  if you have had to force yourself to do something, you are less willing or less able to exert self-control when the next challenge comes around.  The phenomenon has been named _ego depletion_.

(43)  The most surprising discovery made by Baumeister's group shows, as he puts it, that the idea of mental energy is more than a mere metaphor.  The nervous system consumes more glucose than most other parts of the body, and effortful mental activity appears to be especially expensive in the currency of glucose.  When you are actively involved in difficult cognitive reasoning or engaged in a task that requires self-control, your blood glucose level drops.  The effect is analogous to a runner who draws down glucose stored in her muscles during a sprint.  The bold implication of this idea is that the effects of ego depletion could be undone by ingesting glucose, and Baumeister and his colleagues have confirmed this hypothesis in several experiments.

(45)  The bat-and-ball problem is our first encounter with an observation that will be a recurrent theme of this book:  many people are overconfident, prone to place too much faith in their intuitions.    They apparently find cognitive effort at least mildly unpleasant and avoid it as much as possible….

It suggests that when people believe a conclusion is true, they are also very likely to believe arguments that appear to support it, even when these arguments are unsound.  If System 1 is involved, the conclusion comes first and the arguments follow.

(51)  The events that took place as a result of your seeing the words ["bananas", "vomit"] happened by a process called associative activation:  ideas that have been evoked trigger many other ideas, in a spreading cascade of activity in your brain.  The essential feature of this complex set of mental events is its coherence.  Each element is connected, and each supports and strengthens the others…

As cognitive scientists have emphasized in recent years, cognition is embodied;  you think with your body, not only with your brain.

(53)  This remarkable priming phenomenon - the influencing of an action by the idea - is known as the ideomotor effect.  Although you surely were not aware of it, reading this paragraph primed you as well.

(55)  Reminders of money produce some troubling effects.  Participants in one experiment were shown a list of five words from which they were required to construct a four-word phrase that had a money theme ("high a salary desk paying" became "a high-paying salary").  Other primes were much more subtle, including the presence of an irrelevant money-related object in the background, such as a stack of Monopoly money on a table, or a computer with a screen saver of dollar bills floating in water.

Money-primed people become more independent than they would be without the associative trigger.  They persevered almost twice as long in trying to solve a very difficult problem before they asked the experimenter for help, a crisp demonstration of increased self-reliance.  Money-primed people are also more selfish:  they were much less willing to spend time helping another student who pretended to be confused about an experimental task.  When an experimenter clumsily dropped a bunch of pencils on the floor, the participants with money (unconsciously) on their mind picked up fewer pencils.  In another experiment in the series, participants were told that they would shortly have a get-acquainted conversation with another person and were asked to set up two chairs while the experimenter left to retrieve that person.  Participants primed by money chose to stay much farther apart than their nonprimed peers (118 vs. 80 centimeters).  Money-primed undergraduates also showed a greater preference for being alone.

The general theme of these findings is that the idea of money primes individualism:  a reluctance to be involved with others, to depend on others, or to accept demands from others.  The psychologist who has done this remarkable research, Kathleen Vohs, has been laudably restrained in discussing the implications of her findings, leaving the task to her readers.

(56)  The evidence of priming studies suggests that reminding people of their mortality increases the appeal of authoritarian ideas, which may become reassuring in the context of the terror of death.

(62)  Anything that makes it easier for the associative machine to run smoothly will also bias beliefs.  A reliable way to make people believe in falsehoods is frequent repetition, because familiarity is not easily distinguished from truth.  Authoritarian institutions and marketers have always known this fact.  But it was psychologists who discovered that you do not have to repeat the entire statement of a fact or idea to make it appear true.
NB:  "familiarity is not easily distinguished from truth" "you do not have to repeat the entire statement of a fact or idea to make it appear true"

(63)  More advice:  if your message is to be printed, use high-quality paper to maximize the contrast between characters and their background.  If you use color, you are more likely to be believed if your text is printed in bright blue or red than in middling shades of green, yellow, or pale blue.

(67)  Around 1960, a young psychologist named Sarnoff Mednick thought he had identified the essence of creativity.  His idea was as simple as it was powerful:  creativity is associative memory that works exceptionally well.

(76)  We are evidently ready from birth to have _impressions_ of causality, which do not depend on reasoning about patterns of causation.  They are products of System 1….

Your mind is ready and even eager to identify agents, assign them personality traits and specific intentions, and view their actions as expressing individual propensities.

(77)  The psychologist Paul Bloom, writing in _The Atlantic_ in 2005, presented the provocative claim that our inborn readiness to separate physical and intentional causality explains the near universality of religious beliefs.  He observes that "we perceive the world of objects as essentially separate from the world of minds, making it possible for us to envision soulless bodies and bodiless souls."  The two modes of causation that we are set to perceive make it natural for us to accept the two central beliefs of many religions:  an immaterial divinity is the ultimate cause of the physical world, and immortal souls temporarily control our bodies while we live and leave them behind as we die.  In Bloom's view, the two concepts of causality were shaped separately by evolutionary forces, building the origins of religion into the structure of System 1.

(80)  System 1 does not keep track of alternatives that it rejects, or even of the fact that there were alternatives.  Conscious doubt is not in the repertoire of System 1;  it requires maintaining incompatible interpretations in mind at the same time, which demands mental effort.  Uncertainty and doubt are the domain of System 2.

(82)  If you like the president's politics, you probably like his voice and his appearance as well.  The tendency to like (or dislike) everything about a person - including things you have not observed - is known as the halo effect.  The term has been in use in psychology for a century, but it has not come into wide use in everyday language.  This is a pity, because the halo effect is a good name for a common bias that plays a large role in shaping our view of people and situations….

The halo effect is also an example of suppressed ambiguity:  like the word _bank_, the adjective _stubborn_ is ambiguous and will be interpreted in a way that makes it coherent with the context.

(85)  A simple rule [for meetings] can help:  before an issue is discussed, all members of the committee should be asked to write a very brief summary of their position.  This procedure makes good use of the value of the diversity of knowledge and opinion in the group.  The standard practice of open discussion gives too much weight to the opinions of those who speak early and assertively, causing others to line up behind them.

(87)  The participants were fully aware of the setup [given one-sided evidence and knowing it], and those who heard only one side could easily have generated the argument for the other side.  Nevertheless, the presentation of one-sided evidence had a very pronounced effect on judgments.  Furthermore, participants who saw one-sided evidence were more confident of their judgments than those who saw both sides.  This is just what you would expect if the confidence that people experience is determined by the coherence of the story they manage to construct from available information.  It is the consistency of the information that matters for a good story, not its completeness.  Indeed, you will often find that knowing little makes it easier to fit everything you know into a coherent pattern….

Overconfidence:  As the WYSIATI [What You See Is All There Is] rule implies, neither the quantity nor the quality of the evidence counts for much in subjective confidence.  The confidence that individuals have in their beliefs depends mostly on the quality of the story they can tell about what they see, even if they see little.

(88)  Framing effects:  Different ways of presenting the same information often evoke different emotions….

Base-rate neglect:  …The personality description is salient and vivid, and  although you surely know that there are more male farmers than male librarians, that statistical fact almost certainly did not come to your mind when you first considered the question.

(91)  Surprisingly (at least to me), ratings of competence were far more predictive of voting outcomes in Todorov's study than ratings of likability.

Todorov has found that people judge competence by combining the two dimensions of strength and trustworthiness.  The faces that exude competence combine a strong chin with a slight confident-appearing smile.  There is no evidence that these facial features actually predict how well politicians will perform in office.

(104)  Characteristics of System 1
generates impressions, feelings, and inclinations;  when endorsed by System 2 these become beliefs, attitudes, and intentions
operates automatically and quickly, with little or no effort, and no sense of voluntary control
can be programmed by System 2 to mobilize attention when a particular pattern is detected (search)
executes skilled responses and generates skilled intuitions, after adequate training
creates a coherent pattern of activated ideas in associative memory
links a sense of cognitive ease to illusions of truth, pleasant feelings, and reduced vigilance
distinguishes the surprising from the normal
infers and invents causes and intentions
neglects ambiguity and suppresses doubt
is biased to believe and confirm
exaggerates emotional consistency (halo effect)
focuses on existing evidence and ignores absent evidence (WYSIATI)
generates a limited set of basic assessments
represents sets by norms and prototypes, does not integrate
matches intensities across scales (e.g. size to loudness)
computes more than intended (mental shotgun)
sometimes substitutes an easier question for a difficult one (heuristics)
is more sensitive to changes than to states (prospect theory)
overweights low probabilities
shows diminishing sensitivity to quantity (psychophysics)
responds more strongly to losses than to gains (loss aversion)
frames decision problems narrowly, in isolation from one another
NB:  Prospect theory and climate change?

(110)  [The Law of Small Numbers]  If you summarize the results, you will find that the outcome "2 red, 2 white" occurs (almost exactly) 6 times as often as the outcome "4 red" or "4 white."  This relationship is a mathematical fact….

Again, no hammer, no causation, but a mathematical fact:  samples of 4 marbles yield extreme results more often than samples of 7 marbles do.
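
To make the arithmetic concrete, here is a minimal sketch in Python, assuming an urn with equal numbers of red and white marbles so that each draw is red with probability 1/2:

    from math import comb

    # "2 red, 2 white" vs. "4 red" in a sample of 4 marbles:
    p_2red_2white = comb(4, 2) * 0.5 ** 4   # 6/16
    p_4red = comb(4, 4) * 0.5 ** 4          # 1/16
    print(p_2red_2white / p_4red)           # 6.0 -- six times as often

    # Extreme (single-color) samples are far more common in small samples:
    for n in (4, 7):
        print(n, 2 * 0.5 ** n)              # n=4: 0.125;  n=7: ~0.0156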

(111)  The explanation I offered is statistical:  extreme outcomes (both high and low) are more likely to be found in small than in large samples.  This explanation is not causal.  The small population of a county neither causes nor prevents cancer;  it merely allows the incidence of cancer to be much higher (or much lower) than it is in the larger population.  The deeper truth is that there is nothing to explain.  The incidence of cancer is not truly lower or higher than normal in a county with a small population, it just appears to be so in a particular year because of an accident of sampling.  If we repeat the analysis next year, we will observe the same general pattern of extreme results in the small samples, but the counties where cancer was common last year will not necessarily have a high incidence this year.  If this is the case, the differences between dense and rural counties do not really count as facts:  they are what scientists call artifacts, observations that are produced entirely by some aspect of the method of research - in this case, by differences in sample size.
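
A quick simulation of this sampling artifact, with invented numbers (the same 1% underlying incidence everywhere; small counties of 1,000 people, large counties of 100,000):

    import numpy as np

    rng = np.random.default_rng(0)
    TRUE_RATE = 0.01  # identical underlying incidence in every county (assumption)

    # 1,000 small counties (pop. 1,000) and 1,000 large ones (pop. 100,000):
    small = rng.binomial(1_000, TRUE_RATE, size=1_000) / 1_000
    large = rng.binomial(100_000, TRUE_RATE, size=1_000) / 100_000

    # The highest and lowest observed rates both come from the small counties,
    # though nothing causal distinguishes any county from any other.
    print(small.min(), small.max())   # roughly 0.001 .. 0.02
    print(large.min(), large.max())   # roughly 0.009 .. 0.011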

(114)  The strong bias toward believing that small samples closely resemble the population from which they are drawn is also part of a larger story:  we are prone to exaggerate the consistency and coherence of what we see.  The exaggerated faith of researchers in what can be learned from a few observations is closely related to the halo effect, the sense we often get that we know and understand a person about whom we actually know very little.  System 1 runs ahead of the facts in constructing a rich image on the basis of scraps of evidence.
NB:  What's the sample size that makes sense in each situation?  What's the proper base rate?

(114-115)  The associative machinery seeks causes.  The difficulty we have with statistical regularities is that they call for a different approach.  Instead of focusing on how the event at hand came to be, the statistical view relates it to what could have happened instead.  Nothing in particular caused it to be what it is - chance selected it from among its alternatives.

(118)  The exaggerated faith in small samples is only one example of a more general illusion - we pay more attention to the content of messages than to information about their reliability, and as a result end up with a view of the world around us that is simpler and more coherent than the data justify.  Jumping to conclusions is a safer sport in the world of our imagination than it is in reality.

Statistics produce many observations that appear to beg for causal explanations but do not lend themselves to such explanations.  Many facts of the world are due to chance, including accidents of sampling.  Causal explanations of chance events are inevitably wrong.

(119)  The phenomenon we were studying is so common and so important in the everyday world that you should know its name:  it is an _anchoring effect_.  It occurs when people consider a particular value for an unknown quantity before estimating that quantity.  What happens is one of the most reliable and robust results of experimental psychology:  the estimates stay close to the number that people  considered - hence the image of an anchor.

(126)  As you may have experienced when negotiating for the first time in a bazaar, the initial anchor has a powerful effect.  My advice to students when I taught negotiations was that if you think the other side has made an outrageous proposal, you should not come back with an equally outrageous counteroffer, creating a gap that will be difficult to bridge in further negotiations.  Instead you should make a scene, storm out or threaten to do so, and make it clear - to yourself as well as to the other side -  that you will not continue the negotiation with that number on the table.

(127)  For example, the anchoring effect is reduced or eliminated when the second mover focuses his attention on the minimal offer that the opponent would accept, or on the costs to the opponent of failing to reach an agreement.  In general, a strategy of deliberately "thinking the opposite" may be a good defense against anchoring effects, because it negates the biased recruitment of thoughts that produces these effects.

Finally, try your hand at working out the effect of anchoring on a problem of public policy:  the size of damages in personal injury cases.  These awards are sometimes very large.  Businesses that are frequent targets of such lawsuits, such as hospitals and chemical companies, have lobbied to set a cap on the awards.  Before you read this chapter you might have thought that capping awards is certainly good for potential defendants, but now you should not be so sure.  Consider the effect of capping awards at $1 million.  This rule would eliminate all larger awards, but the anchor would also pull up the size of many awards that would otherwise be much smaller.  It would almost certainly benefit serious offenders and large firms much more than small ones.
NB:  Repug policies as psychological tricks?

… we saw in the discussion of the law of small numbers that a message, unless it is immediately rejected as a lie, will have the same effect on the associative system regardless of its reliability.

(131)  I am generally not optimistic about the potential for personal control of biases, but this is an exception.  The opportunity for successful debiasing exists because the circumstances in which issues of credit allocation come up are easy to identify, the more so because tensions often arise when several people at once feel that their efforts are not adequately recognized.  The mere observation that there is usually more than 100% credit to go around is sometimes sufficient to defuse the situation.  In any event, it is a good thing for every individual to remember.  You will occasionally do more than your share, but it is useful to know that you are likely to have that feeling even when each member of the team feels the same way.

(135)  The following are some conditions in which people "go with the flow" and are affected more strongly by ease of retrieval than by the content they retrieved:
when they are engaged in another effortful task at the same time
when they are in a good mood because they just thought of a happy episode in their life
if they score low on a depression scale
if they are knowledgeable novices on the topic of the task, in contrast to true experts
when they score high on a scale of faith in intuition
if they are (or are made to feel) powerful

I find the last finding particularly intriguing.  The authors introduce their article with a famous quote:  "I don't spend a lot of time taking polls around the world to tell me what I think is the right way to act.  I've just got to know how I feel"  (George W. Bush, November 2002).  They go on to show that reliance on intuition is only in part a personality trait.  Merely reminding people of a time when they had power increases their apparent trust in their own intuition.

(139)  An inability to be guided by a "healthy fear" of bad consequences is a disastrous flaw.

(141)  His [Paul Slovic's] point is that the evaluation of the risk depends on the choice of a measure - with the obvious possibility that the choice may have been guided by a preference for one outcome or another.  He goes on to conclude that "defining risk is thus an exercise in power."

(142)  An availability cascade is a self-sustaining chain of events, which may start from media reports of a relatively minor event and lead up to public panic and large-scale government action.  On some occasions, a media story about a risk catches the attention of a segment of the public, which becomes aroused and worried.  This emotional reaction becomes a story in itself, prompting additional coverage in the media, which in turn produces greater concern and involvement.

(143)  The Alar tale illustrates a basic limitation in the ability of our mind to deal with small risks:  we either ignore them altogether or give them far too much weight - nothing in between.

(149-150)  The question about probability (likelihood) was difficult, but the question about similarity was easier, and it was answered instead.  This is a serious mistake, because judgments of similarity and probability are not constrained by the same logical rules.  It is entirely acceptable for judgments of similarity to be unaffected by base rates and also by the possibility that the description was inaccurate, but anyone who ignores base rates and the quality of evidence in probability assessments will certainly make mistakes.

(152)  People without training in statistics are quite capable of using base rates in predictions under some conditions.  In the first version of the Tom W problem, which provides no details about him, it is obvious to everyone that the probability of Tom W's being in a particular field is simply the base-rate frequency of enrollment in that field.  However, concern for base rates evidently disappears as soon as Tom W's personality is described….

Frowning, as we have seen, generally increases the vigilance of System 2 and reduces both overconfidence and the reliance on intuition.  

(155)  The essential keys to disciplined Bayesian reasoning can be simply summarized:
Anchor your judgment of the probability of an outcome on a plausible base rate
Question the diagnosticity of your evidence
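
In the odds form of Bayes's rule, those two keys become the prior odds and the likelihood ratio. A sketch with invented numbers:

    def posterior(base_rate, likelihood_ratio):
        # Posterior odds = prior odds x likelihood ratio (the diagnosticity
        # of the evidence), then convert back to a probability.
        prior_odds = base_rate / (1 - base_rate)
        post_odds = prior_odds * likelihood_ratio
        return post_odds / (1 + post_odds)

    # Hypothetical: a 3% base rate, and evidence four times more likely
    # under the hypothesis than under its alternative.
    print(posterior(0.03, 4))    # ~0.11 -- the base rate still dominates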

(159)  The most coherent stories are not necessarily the most probable, but they are _plausible_, and the notions of coherence, plausibility, and probability are easily confused by the unwary.

(168)  The two types of base-rate information are treated differently:
Statistical base rates are generally underweighted, and sometimes neglected altogether, when specific information about the case at hand is available.
Causal base rates are treated as information about the individual case and are easily combined with other case-specific information.

(171)  The experiment shows that individuals feel relieved of responsibility when they know that others have heard the same request for help.

Did the results surprise you?  Very probably.  Most of us think of ourselves as decent people who would rush to help in such a situation, and we expect other decent people to do the same.  The point of the experiment, of course, was to show that this expectation is wrong.  Even normal, decent people do not rush to help when they expect others to take on the unpleasantness of dealing with a seizure.  And that means you, too.

(174)  People who are taught surprising statistical facts about human behavior may be impressed to the point of telling their friends about what they have heard, but this does not mean that their understanding of the world has really changed….  On the other hand, surprising individual cases have a powerful impact and are a more effective tool for teaching psychology because the incongruity must be resolved and embedded in a causal story.

(173)  …rewards for improved performance work better than punishment of mistakes.

(176)  [Regression to the mean]  The instructor had attached a causal interpretation to the inevitable fluctuations of a random process….  I pointed out to the instructors that what they saw on the board coincided with what we had heard about the performance of aerobatic maneuvers on successive attempts:  poor performance was typically followed by improvement and good performance by deterioration, without any help from either praise or punishment….

Because we tend to be nice to other people when they please us and nasty when they do not, we are statistically punished for being nice and rewarded for being nasty.

(182)  The observed regression to the mean cannot be more interesting or more explainable than the imperfect correlation….

Indeed, the statistician David Freedman used to say that if the topic of regression comes up in a criminal or civil trial, the side that must explain regression to the jury will lose the case.  Why is it so hard?  The main reason for the difficulty is the recurrent theme of this book:  our mind is strongly biased toward causal explanations and does not deal well with "mere statistics"…  Causal explanations will be evoked when regression is detected, but they will be wrong because the truth is that regression to the mean has an explanation but does not have a cause.

(183)  The control group is expected to improve by regression alone, and the aim of the experiment is to determine whether the treated patients improve more than regression can explain….

Max Bazerman's _Judgment in Managerial Decision Making_

(188)  This is perhaps the best evidence we have for the role of substitution.  People are asked for a prediction but they substitute an evaluation of the evidence, without noticing that the question they answer is not the one they were asked.  This process is guaranteed to generate predictions that are systematically biased;  they completely ignore regression to the mean.

(190)  Step 1 gets you the baseline, the GPA you would have predicted if you were told nothing about Julie beyond the fact that she is a graduating senior.  In the absence of information, you would have predicted the average.  (This is similar to assigning the base-rate probability of business administration graduates when you are told nothing about Tom W.)  Step 2 is your intuitive prediction, which matches your evaluation of the evidence.  Step 3 moves you from the baseline toward your intuition, but the distance you are allowed to move depends on your estimate of the correlation.  You end up at Step 4, with a prediction that is influenced by your intuition but is far more moderate.

This approach to prediction is general.  You can apply it whenever you need to predict a quantitative variable, such as GPA, profit from an investment, or the growth of a company.  The approach builds on your intuition, but it moderates it, regresses it toward the mean.  When you have good reasons to trust the accuracy of your intuitive prediction - a strong correlation between the evidence and the prediction - the adjustment will be small.
NB:  1) estimate average;  2) determine your impression;  3) estimate correlation between average (base rate) and your impression;  4) adjust your impression
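
Numerically, the four steps collapse into one line: regress the intuitive estimate toward the baseline by the estimated correlation. A sketch with hypothetical GPA figures:

    def corrected_prediction(baseline, intuitive, correlation):
        # Step 4: move from the baseline toward the intuition, but only
        # as far as the evidence-outcome correlation justifies.
        return baseline + correlation * (intuitive - baseline)

    # Hypothetical Julie-style numbers: average GPA 3.1, intuition 3.9,
    # and a weak correlation (r ~ 0.3) between the evidence and GPA.
    print(corrected_prediction(3.1, 3.9, 0.3))   # 3.34 -- moderated
    print(corrected_prediction(3.1, 3.9, 0.0))   # 3.1  -- no evidence: stay at baseline
    print(corrected_prediction(3.1, 3.9, 1.0))   # 3.9  -- keep intuition only if r = 1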

(191-192)  The biases we find in predictions that are expressed on a scale, such as GPA or the revenue of a firm, are similar to the biases observed in judging the probabilities of outcomes. 

The corrective procedures are also similar:

Both contain a baseline prediction, which you would make if you knew nothing about the case at hand.  In the categorical case, it was the base rate.  In the numerical case, it is the average outcome in the relevant category.

Both contain an intuitive prediction, which expresses the number that comes to your mind, whether it is a probability or a GPA.

In both cases, you aim for a prediction that is intermediate between the baseline and your intuitive response.

In the default case of no useful evidence, you stay with the baseline.

At the other extreme, you also stay with your initial prediction.  This will happen, of course, only if you remain completely confident in your initial prediction after a critical review of the evidence that supports it.

In most cases you will find some reason to doubt that the correlation between your intuitive judgment and the truth is perfect, and you will end up somewhere between the two poles.

(201)  Our comforting conviction that the world makes sense rests on a secure foundation:  our almost unlimited ability to ignore our ignorance.

(203)  We are prone to blame decision makers for good decisions that worked out badly and to give them too little credit for successful moves that appear obvious only after the fact.  There is a clear _outcome bias_.  When the outcomes are bad, the clients often blame their agents for not seeing the handwriting on the wall - forgetting that it was written in invisible ink that became legible only afterward.  Actions that seemed prudent in foresight can look irresponsibly negligent in hindsight.

(207)  In the presence of randomness, regular patterns can only be mirages.

(216)  Our message to the executives [of an investment company] was that, at least when it came to building portfolios, the firm was rewarding luck as if it were skill…

The illusion of skill is not only an individual aberration;  it is deeply ingrained in the culture of the [investment] industry.  Facts that challenge such basic assumptions - and thereby threaten people's livelihood and self-esteem - are simply not absorbed.  The mind does not digest them.  This is particularly true of statistical studies of performance, which provide base-rate information that people generally ignore when it clashes with their personal impressions from experience.

(219)  The results were devastating.  The experts performed worse than they would have if they had simply assigned equal probabilities to each of the three potential outcomes.  In other words, people who spend their time, and earn their living, studying a particular topic produce poorer predictions than dart-throwing monkeys who would have distributed their choices evenly over the options.  Even in the region they knew best, experts were not significantly better than nonspecialists.

Those who know more forecast very slightly better than those who know less.  But those with the most knowledge are often less reliable.  The reason is that the person who acquires more knowledge develops an enhanced illusion of her skill and becomes unrealistically overconfident.

(224)  Why are experts inferior to algorithms?  One reason, which Meehl suspected, is that experts try to be clever, think outside the box, and consider complex combinations of features in making their predictions…

Another reason for the inferiority of expert judgment is that humans are incorrigibly inconsistent in making summary judgments of complex information.  When asked to evaluate the same information twice, they frequently give different answers.

(225)  The research suggests a surprising conclusion:  to maximize predictive accuracy, final decisions should be left to formulas, especially in low-validity environments.

(229)  Their rational argument is compelling, but it runs against a stubborn psychological reality:  for most people, the cause of a mistake matters.

(240)  … the confidence that people have in their intuitions is not a reliable guide to their validity.  In other words, do not trust anyone - including yourself - to tell you how much you should trust their judgment….

When do judgments reflect true expertise?  When do they display an illusion of validity?  The answer comes from the two basic conditions for acquiring a skill:
an environment that is sufficiently regular to be predictable
an opportunity to learn these regularities through prolonged practice

When both these conditions are satisfied, intuitions are likely to be skilled.

(241)  Remember this rule:  intuition cannot be trusted in the absence of stable regularities in the environment.

(243)  As in the judgment of whether a work of art is genuine or a fake, you will usually do better by focusing on its provenance than by looking at the piece itself.  If the environment is sufficiently regular and if the judge has had a chance to learn its regularities, the associative machinery will recognize situations and generate quick and accurate predictions and decisions.  You can trust someone's intuitions if these conditions are met.

(247)  The _inside view_ is the one that all of us, including Seymour [Fox], spontaneously adopted to assess the future of our project.  We focused on our specific circumstances and searched for evidence in our own experiences. 

(248)  The second question I asked Seymour directed his attention away from us and toward a class of similar cases.  Seymour estimated the base rate of success in that reference class:  40% failure and seven to ten years for completion….

The spectacular accuracy of the outside-view forecast in our problem was surely a fluke and should not count as evidence for the validity of the _outside view_.  The argument for the outside view should be made on general grounds:  if the reference class is properly chosen, the outside view will give an indication of where the ballpark is, and it may suggest, as it did in our case, that the inside-view forecasts are not even close to it.

(250)  Amos and I coined the term _planning fallacy_ to describe plans and forecasts that
are unrealistically close to best-case scenarios
could be improved by consulting the statistics of similar cases

(251)  The treatment for the planning fallacy has now acquired a technical name, _reference class forecasting_, and [Bent] Flyvbjerg has applied it to transportation projects in several countries.

(251-252)  The forecasting method that Flyvbjerg applies is similar to the practices recommended for overcoming base-rate neglect:
1.  Identify an appropriate reference class (kitchen renovations, large railway projects, etc.).
2.  Obtain the statistics of the reference class (in terms of cost per mile of railway, or of the percentage by which expenditures exceeded budget).  Use the statistics to generate a baseline prediction.
3.  Use specific information about the case to adjust the baseline prediction, if there are particular reasons to expect the optimistic bias to be more or less pronounced in this project than in others of the same type.
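
A minimal sketch of those three steps, with an invented reference-class statistic (a 60% average cost overrun) standing in for real data:

    # Hypothetical reference class: past kitchen renovations ran,
    # on average, 60% over their initial budget.
    REFERENCE_OVERRUN = 0.60

    def reference_class_forecast(own_estimate, case_adjustment=0.0):
        # Step 2: the baseline prediction comes from the class, not the case.
        baseline = own_estimate * (1 + REFERENCE_OVERRUN)
        # Step 3: adjust only for specific reasons to expect more or less
        # optimism than is typical for projects of this type.
        return baseline * (1 + case_adjustment)

    print(reference_class_forecast(50_000))                        # 80,000.0
    print(reference_class_forecast(50_000, case_adjustment=-0.1))  # 72,000.0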

(255)  Most of us view the world as more benign than it really is, our own attributes as more favorable than they truly are, and the goals we adopt as more achievable than they are likely to be.  We also tend to exaggerate our ability to forecast the future, which fosters optimistic overconfidence.  In terms of its consequences for decisions, the optimistic bias may well be the most significant of the cognitive biases.  Because optimistic bias can be both a blessing and a risk, you should be both happy and wary if you are temperamentally optimistic….

An optimistic attitude is largely inherited, and it is part of a general disposition for well-being, which may also include a preference for seeing the bright side of everything.

(256)  Their self-confidence [the optimistic ones] is reinforced by the admiration of others.  This reasoning leads to a hypothesis:  the people who have the greatest influence on the lives of others are likely to be optimistic and overconfident, and to take more risks than they realize.

(257)  One of the benefits of an optimistic temperament is that it encourages persistence in the face of obstacles.  But persistence can be costly…

More generally, the financial benefits of self-employment are mediocre:  given the same qualifications, people achieve higher average returns by selling their skills to employers than by setting out on their own.  The evidence suggests that optimism is widespread, stubborn, and costly.

(259)  Cognitive biases play an important role, notably the System 1 feature WYSIATI [what you see is all there is].

We focus on our goal, anchor on our plan, and neglect base rates, exposing ourselves to the planning fallacy.

We focus on what we want to do and can do, neglecting the plans and skills of others.

Both in explaining the past and in predicting the future, we focus on the causal role of skill and neglect the role of luck.  We are therefore prone to an _illusion of control_.

We focus on what we know and neglect what we do not know, which makes us overly confident in our beliefs.

(260)  The upshot is that people tend to be overly optimistic about their relative standing on any activity in which they do moderately well.

(263)  Philip Tetlock observed that the most overconfident experts were the most likely to be invited to strut their stuff in news shows….

The main benefit of optimism is resilience in the face of setbacks…

In essence, the optimistic style involves taking credit for successes but little blame for failures.  This style can be taught, at least to some extent…

(264)  The main obstacle is that subjective confidence is determined by the coherence of the story one has constructed, not by the quality and amount of the information that supports it.

(264-265)  The procedure is simple:  when the organization has almost come to an important decision but has not formally committed itself, [Gary] Klein proposes gathering for a brief session a group of individuals who are knowledgeable about the decision.  The premise of the session is a short speech:  "Imagine that we are a year into the future.  We implemented the plan as it now exists.  The outcome was a disaster.  Please take 5 to 10 minutes to write a brief history of that disaster."

Gary Klein's idea of the premortem usually evokes immediate enthusiasm.  After I described it casually at a session in Davos, someone behind me muttered, "It was worth coming to Davos just for this!"  (I later noticed that the speaker was the CEO of a major international corporation.)  The premortem has two main advantages:  it overcomes the groupthink that affects many teams once a decision appears to have been made, and it unleashes the imagination of knowledgeable individuals in a much-needed direction.

As a team converges on a decision - and especially when the leader tips her hand - public doubts about the wisdom of the planned move are gradually suppressed and eventually come to be treated as evidence of flawed loyalty to the team and its leaders.  The suppression of doubt contributes to overconfidence in a group where only supporters of the decision have a voice.  The main virtue of the premortem is that it legitimizes doubts.  Furthermore, it encourages even supporters of the decision to search for possible threats that they had not considered earlier.  The premortem is not a panacea and does not provide complete protection against nasty surprises, but it goes some way toward reducing the damage of plans that are subject to the biases of WYSIATI and uncritical optimism.

(269)  Bruno Frey:  "The agent of economic theory is rational, selfish, and his tastes do not change."

…To a psychologist, it is self-evident that people are neither fully rational nor completely selfish, and that their tastes are anything but stable.  Our two disciplines [economics and psychology] seemed to be studying different species, which the behavioral economist Richard Thaler later dubbed Econs and Humans.

(272)  Two years later, we published in _Science_ an account of framing effects:  the large changes of preferences that are sometimes caused by inconsequential variations in the wording of a choice problem.

(274)  His [Bernoulli's] utility function explained why poor people buy insurance and why richer people sell it to them.  As you can see in the table, the loss of 1 million causes a loss of 4 points of utility (from 100 to 96) to someone who has 10 million and a much larger loss of 18 points (from 48 to 30) to someone who starts off with 3 million.  The poorer man will happily pay a premium to transfer the risk to the richer one, which is what insurance is about.
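
The insurance logic follows directly from the utility points quoted above; a sketch using only those four values:

    # The four utility points quoted above (wealth in millions -> utility):
    utility = {2: 30, 3: 48, 9: 96, 10: 100}

    print(utility[10] - utility[9])   # 4  -- a 1-million loss costs the rich man little
    print(utility[3] - utility[2])    # 18 -- the same loss costs the poorer man far more

    # The poorer man will accept any premium whose utility cost to him is
    # below 18 points; the richer insurer bears an expected utility cost of
    # only about 4 points, so there is room for a trade that benefits both.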

(275)  The happiness that Jack and Jill experience is determined by the recent _change_ in their wealth, relative to the different states of wealth that define their reference points (1 million for Jack, 9 million for Jill).  This reference dependence is ubiquitous in sensation and perception.  The same sound will be experienced as very loud or quite faint, depending on whether it was preceded by a whisper or by a roar.  To predict the subjective experience of loudness, it is not enough to know its absolute energy;  you also need to know the reference sound to which it is automatically compared.  Similarly, you need to know about the background before you can predict whether a gray patch on a page will appear dark or light.  And you need to know the reference before you can predict the utility of an amount of wealth.

(276)  Because Bernoulli's model lacks the idea of a reference point, expected utility theory does not represent the obvious fact that the outcome that is good for Anthony is bad for Betty.  His model could explain Anthony's risk aversion, but it cannot explain Betty's risk-seeking preference for the gamble, a behavior that is often observed in entrepreneurs in general when all options are bad.

(277)  I call it theory-induced blindness:  once you have accepted a theory and used it as a tool in your thinking, it is extraordinarily difficult to notice its flaws.

(279)  You know you have made a theoretical advance when you can no longer reconstruct why you failed for so long to see the obvious.  Still, it took us years to explore the implications of thinking about outcomes as gains and losses.

(280)  We were not the first to notice that people become risk seeking when all their options are bad, but theory-induced blindness had prevailed.

(281)  You know something about your preferences that utility theorists do not - that your attitudes to risk would not be different if your net worth were higher or lower by a few thousand dollars (unless you are abjectly poor).  And you also know that your attitudes to gains and losses are not derived from your evaluation of your wealth.  The reason you like the idea of gaining $100 and dislike the idea of losing $100 is not that these amounts change your wealth.  You just like winning and dislike losing - and you almost certainly dislike losing more than you like winning…

The missing variable [in Bernoulli's utility theorem] is the _reference point_, the earlier state relative to which gains and losses are evaluated.  In Bernoulli's theory you need to know only the state of wealth to determine its utility, but in prospect theory you also need to know the reference state.  Prospect theory is therefore more complex than utility theory.  

(282)  Evaluation is relative to a neutral reference point, which is sometimes referred to as an "adaptation level"…

A principle of diminishing sensitivity applies to both sensory dimensions and the evaluation of changes of wealth…

The third principle is loss aversion.  When directly compared or weighted against each other, losses loom larger than gains.   

(284)  The "loss aversion ratio" has been estimated in several experiments and is usually in the range of 1.5 to 2.5.  This is an average, of course;  some people are much more loss averse than others…

All bets are off, of course, if the possible loss is potentially ruinous, or if your lifestyle is threatened.  The loss aversion coefficient is very large in such cases and may even be infinite - there are risks that you will not accept, regardless of how many millions you might stand to win if you are lucky.
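
These three principles pin down the S-shaped value function of prospect theory. A sketch using the parameter estimates from Tversky and Kahneman's 1992 paper (curvature 0.88 and a loss-aversion coefficient of 2.25, squarely in the 1.5-2.5 range quoted above):

    ALPHA = 0.88    # curvature: diminishing sensitivity to gains and losses
    LAMBDA = 2.25   # loss-aversion coefficient

    def value(x):
        # Value of a change relative to the reference point, not of wealth.
        if x >= 0:
            return x ** ALPHA
        return -LAMBDA * (-x) ** ALPHA

    print(value(100))                # ~57.5
    print(value(-100))               # ~-129.4 -- the loss looms ~2.25x larger
    print(value(200) - value(100))   # ~48.4 -- a second $100 adds less than the first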

(285)  In mixed gambles, where both a gain and a loss are possible, loss aversion causes extremely risk-averse choices.

In bad choices, where a sure loss is compared to a larger loss that is merely probable, diminishing sensitivity causes risk seeking.

(287)  Like a salary increase that has been promised informally, the high probability of winning the large sum sets up a tentative new reference point.  Relative to your expectations, winning nothing will be experienced as a large loss.  Prospect theory cannot cope with this fact, because it does not allow the value of an outcome (in this case, winning nothing) to change when it is highly unlikely, or when the alternative is very valuable.  In simple words, prospect theory cannot deal with disappointment.  Disappointment and the anticipation of disappointment are real, however, and the failure to acknowledge them is as obvious a flaw as the counterexamples that I invoked to criticize Bernoulli's theory.

Prospect theory and utility theory also fail to allow for regret.  The two theories share the assumption that available options in a choice are evaluated separately and independently, and that the option with the highest value is selected.  This assumption is certainly wrong...
NB:  Prospect theory and 2008 crash

(292)  This example [vacation days versus raise in salary] highlights two aspects of choice that the standard model of indifference curves does not predict.  First, tastes are not fixed;  they vary with the reference point.  Second, the disadvantages of a change loom larger than its advantages, inducing a bias that favors the status quo.  Of course, loss aversion does not imply that you never prefer to change your situation;  the benefits of an opportunity may exceed even overweighted losses.  Loss aversion implies only that choices are strongly biased in favor of the reference situation (and generally biased to favor small rather than large changes)….

Richard Thaler - father of behavioral economics

(293)  The just-acceptable selling price and the just-acceptable buying price should have been identical, but in fact the minimum price to sell ($100) was much higher than the maximum buying price of $35.  Owning the good appeared to increase its value.

Richard Thaler found many examples of what he called the _endowment effect_, especially for goods that are not regularly traded.  You can easily imagine yourself in a similar situation. 

(294)  [Jack] Knetsch, Thaler, and I set out to design an experiment that would highlight the contrast between goods that are held for use and for exchange.

(295)  The results were dramatic:  the average selling price was about double the average buying price, and the estimated number of trades was less than half of the number predicted by standard theory.  The magic of the market did not work for a good that the owners expected to use.

(296)  Evidence from brain imaging confirms the difference.  Selling goods that one would normally use activates regions of the brain that are associated with disgust and pain.  Buying also activates these areas, but only when the prices are perceived as too high - when you feel that a seller is taking money that exceeds the exchange value.  Brain recordings also indicate that buying at especially low prices is a pleasurable event...

As economists would predict, customers tend to increase their purchases of eggs, orange juice, or fish when prices drop and to reduce their purchases when prices rise;  however, in contrast to the predictions of economic theory, the effect of price increases (losses relative to the reference price) is about twice as large as the effect of gains.

(297)  The fundamental ideas of prospect theory are that reference points exist, and that losses loom larger than corresponding gains.  Observations in real markets collected over the years illustrate the power of these concepts….

The experimental economist John List, who has studied trading at baseball card conventions, found that novice traders were reluctant to part with the cards they owned, but that this reluctance eventually disappeared with trading experience.  More surprisingly, List found a large effect of trading experience on the endowment effect for new goods.

(298)  Veteran traders have apparently learned to ask the correct question, which is "how much do I want to _have_ that mug, compared with other things I could have instead?"  This is the question that Econs ask, and with that question there is no endowment effect, because the asymmetry between the pleasure of getting and the pain of giving up is irrelevant.

Recent studies of the psychology of "decision making under poverty" suggest that the poor are another group in which we do not expect to find the endowment effect.  Being poor, in prospect theory, is living below one's reference point.  There are goods that the poor need and cannot afford, so they are always "in the losses."  Small amounts of money that they receive are therefore perceived as a reduced loss, not as a gain.  The money helps one climb a little toward the reference point, but the poor always remain on the steep limb of the value function.

People who are poor think like traders, but the dynamics are quite different.  Unlike traders, the poor are not indifferent to the differences between gaining and giving up.  Their problem is that all their choices are between losses.  Money that is spent on one good is the loss of another good that could have been purchased instead.  For the poor, costs are losses.

(300)  In fact, however, we know more than our grandmothers did and can now embed loss aversion in the context of a broader two-systems model of the mind, and specifically a biological and psychological view in which negativity and escape dominate positivity and approach.

(301)  The brains of humans and other animals contain a mechanism that is designed to give priority to bad news.

(302)  Other scholars, in a paper titled "Bad Is Stronger Than Good," summarized the evidence as follows:  "Bad emotions, bad parents, and bad feedback have more impact than good ones, and bad information is processed more thoroughly than good.  The self is more motivated to avoid bad self-definitions than to pursue good ones.  Bad impressions and bad stereotypes are quicker to form and more resistant to disconfirmation than good ones."  They cite John Gottman, the well-known expert in marital relations, who observed that the long-term success of a relationship depends far more on avoiding the negative than on seeking the positive.  Gottman estimated that a stable relationship requires that good interactions outnumber bad interactions by at least 5 to 1.  Other asymmetries in the social domain are even more striking.  We all know that a friendship that may take years to develop can be ruined by a single action….

(302-303)  Loss aversion refers to the relative strength of two motives:  we are driven more strongly to avoid losses than to achieve gains.  A reference point is sometimes the status quo, but it can also be a goal in the future:  not achieving a goal is a loss, exceeding the goal is a gain.  As we might expect from negativity dominance, the two motives are not equally powerful.  The aversion to the failure of not reaching the goal is much stronger than the desire to exceed it.

(304)  Loss aversion creates an asymmetry that makes agreements difficult to reach.  The concessions you make to me are my gains, but they are your losses;  they cause you much more pain than they give me pleasure.  Inevitably, you will place a higher value on them than I do.
NB:  Unless I also take pleasure in your loss

(305)  As initially conceived, plans for reform almost always produce many winners and some losers while achieving an overall improvement.  If the affected parties have any political influence, however, potential losers will be more active and determined than potential winners;  the outcome will be biased in their favor and inevitably more expensive and less effective than initially planned.  Reforms commonly include grandfather clauses that protect current stakeholders - for example, when the existing workforce is reduced by attrition rather than by dismissals, or when cuts in salaries and benefits apply only to future workers.  Loss aversion is a powerful conservative force that favors minimal changes from the status quo in the lives of both institutions and individuals.  This conservatism helps keep us stable in our neighborhood, our marriage, and our job;  it is the gravitational force that holds our life together near the reference point.

(306)  A basic rule of fairness, we found, is that the exploitation of market power to impose losses on others is unacceptable.  

(307)  The important task for students of economic fairness is not to identify ideal behavior but to find the line that separates acceptable conduct from actions that invite opprobrium and punishment.

(308)  Neuroeconomists (scientists who combine economics with brain research) have used MRI machines to examine the brains of people who are engaged in punishing one stranger for behaving unfairly to another stranger.  Remarkably, altruistic punishment is accompanied by increased activity in the "pleasure centers" of the brain.  It appears that maintaining the social order and the rules of fairness in this fashion is its own reward.  Altruistic punishment could well be the glue that holds societies together.  However, our brains are not designed to reward generosity as reliably as they punish meanness.  Here again, we find a marked asymmetry between losses and gains.

(311)  The large impact of 0 -> 5% illustrates the _possibility effect_, which causes highly unlikely outcomes to be weighted disproportionately more than they "deserve."

…The improvement from 95% to 100% is another qualitative change that has a large impact, the _certainty effect_.  Outcomes that are almost certain are given less weight than their probability justifies.

(315)  Probability (%) to Decision Weight
0 to 0
1 to 5.5
2 to 8.1
5 to 13.2
10 to 18.6
20 to 26.1
50 to 42.1
80 to 60.1
90 to 71.2
95 to 79.3
98 to 87.1
99 to 91.2
100 to 100
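
The table is closely approximated by the inverse-S weighting function Tversky and Kahneman estimated in their 1992 paper; a sketch (gamma = 0.61 is their estimate for gains):

    GAMMA = 0.61   # Tversky & Kahneman's 1992 estimate for gains

    def decision_weight(p):
        # Inverse-S curve: overweights small probabilities,
        # underweights large ones.
        num = p ** GAMMA
        return num / (num + (1 - p) ** GAMMA) ** (1 / GAMMA)

    for p in (0.01, 0.05, 0.50, 0.90, 0.99):
        print(f"{p:.0%} -> {100 * decision_weight(p):.1f}")
    # 1% -> 5.5, 5% -> 13.2, 50% -> 42.1, 90% -> 71.2, 99% -> 91.2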

(316)  When you pay attention to a threat, you worry - and the decision weights reflect how much you worry.  Because of the possibility effect, the worry is not proportional to the probability of the threat.  Reducing or mitigating the risk is not adequate;  to eliminate the worry, the probability must be brought down to zero.

(317)  Fourfold Pattern

                          GAINS                            LOSSES
HIGH PROBABILITY          95% chance to win $10,000        95% chance to lose $10,000
Certainty Effect          Fear of disappointment           Hope to avoid loss
                          RISK AVERSE                      RISK SEEKING
                          Accept unfavorable settlement    Reject favorable settlement

LOW PROBABILITY           5% chance to win $10,000         5% chance to lose $10,000
Possibility Effect        Hope of large gain               Fear of large loss
                          RISK SEEKING                     RISK AVERSE
                          Reject favorable settlement      Accept unfavorable settlement
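
NB:  The whole pattern falls out of combining the value function with the decision weights above.  A sketch using the 1992 parameters (value exponent 0.88, loss-aversion coefficient 2.25 are Tversky and Kahneman's estimates;  the comparison against a sure amount equal to the expected value is my framing):

```python
# The fourfold pattern from prospect theory's components:
# v(x) = x^0.88 for gains, -2.25 * (-x)^0.88 for losses, plus w(p) above.
def w(p, g=0.61):
    return p ** g / (p ** g + (1 - p) ** g) ** (1 / g)

def v(x, a=0.88, lam=2.25):
    return x ** a if x >= 0 else -lam * (-x) ** a

for p, x in [(0.95, 10_000), (0.05, 10_000), (0.95, -10_000), (0.05, -10_000)]:
    gamble = w(p) * v(x)     # weighted value of the gamble
    sure = v(p * x)          # value of receiving its expected value for sure
    attitude = "RISK SEEKING" if gamble > sure else "RISK AVERSE"
    print(f"{p:.0%} chance of {x:+,}: {attitude}")
# Prints averse / seeking / seeking / averse - the four cells of the table.
```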

(318-319)  Many unfortunate human situations unfold in the top right cell.  This is where people who face very bad options take desperate gambles, accepting a high probability of making things worse in exchange for a small hope of avoiding a large loss.  Risk taking of this kind often turns manageable failures into disasters.  The thought of accepting the large sure loss is too painful, and the hope of complete relief too enticing, to make the sensible decision that it is time to cut one's losses.  This is where businesses that are losing ground to a superior technology waste their remaining assets in futile attempts to catch up.  Because defeat is so difficult to accept, the losing side in wars often fights long past the point at which the victory of the other side is certain, and only a matter of time.

(322)  [After a bus bombing in Israel] What drove me was the experience of the moment:  being next to a bus made me think of bombs, and these thoughts were unpleasant.  I was avoiding buses because I wanted to think of something else.

My experience illustrates how terrorism works and why it is so effective:  it induces an availability cascade.  

(323)  Many stores in New York City sell lottery tickets, and business is good.  The psychology of high-prize lotteries is similar to the psychology of terrorism.

(324)  People overestimate the probabilities of unlikely events.
People overweight unlikely events in their decisions.

Although overestimation and overweighting are distinct phenomena, the same psychological mechanisms are involved in both:  focused attention, confirmation bias, and cognitive ease.

(325)  The probability of a rare event is most likely to be overestimated when the alternative is not fully specified.  My favorite example comes from a study that the psychologist Craig Fox conducted while he was Amos's student.  Fox recruited fans of professional basketball and elicited several judgments and decisions concerning the winner of the NBA playoffs.  In particular, he asked them to estimate the probability that each of the participating teams would win the playoff;  the victory of each team in turn was the focal event….

The eight best professional basketball teams in the United States are all very good, and it is possible to imagine even a relatively weak team among them emerging as champion.  The result:  the probability judgments generated successively for the eight teams added up to 240%!  This pattern is absurd, of course, because the sum of the chances of the eight events _must_ add up to 100%.  The absurdity disappeared when the same judges were asked whether the winner would be from the Eastern or the Western conference.  The focal event and its alternative were equally specific in that question, and the judgments of their probabilities added up to 100%.

(328)  The idea that fluency, vividness, and the ease of imagining contribute to decision weights gains support from many other observations.  

(329)  …_denominator neglect_.  If your attention is drawn to the winning marbles, you do not assess the number of nonwinning marbles with the same care.  (In the urn experiment discussed there, many people prefer 8 winning marbles out of 100 to 1 winning marble out of 10, even though 8% is the lower probability.)

(331)  As expected from prospect theory, choice from description yields a possibility effect - rare outcomes are overweighted relative to their probability.  In sharp contrast, overweighting is never observed in choice from experience, and underweighting is common.

(332)  …there is general agreement on one major cause of underweighting of rare events, both in experiments and in the real world:  many participants never experienced a major earthquake, and in 2007 no banker had personally experienced a devastating financial crisis.  Ralph Hertwig and Ido Erev note that "chances of rare events (such as the burst of housing bubbles) receive less impact than they deserve according to their objective probabilities."  They point to the public's tepid response to long-term environmental threats as an example.
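
NB:  The mechanism is easy to simulate.  A toy sketch (the 5% event, the 20-draw samples, and the population size are my assumptions, not Hertwig and Erev's numbers):

```python
import random

# Choice from experience: with small samples, a rare event is often
# never encountered at all, so experience teaches that it cannot happen.
random.seed(1)
p_rare, n_draws, n_people = 0.05, 20, 10_000

never_saw_it = sum(
    all(random.random() >= p_rare for _ in range(n_draws))
    for _ in range(n_people)
)
print(f"Never experienced the rare event: {never_saw_it / n_people:.0%}")
# Expect about 0.95**20 = 36% - for them the event carries zero weight.
```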

(333)  The probability of a rare event will (often, not always) be overestimated, because of the confirmatory bias of memory.  Thinking about that event, you try to make it true in your mind.  A rare event will be overweighted if it specifically attracts attention.  Separate attention is effectively guaranteed when prospects are described explicitly ("99% chance to win $1,000, and 1% chance to win nothing").   Obsessive concerns (the bus in Jerusalem [which blew up]), vivid images (the roses), concrete representations (1 of 1,000), and explicit reminders (as in choice from description) all contribute to overweighting.  And when there is no overweighting, there will be neglect.  When it comes to rare probabilities, our mind is not designed to get things quite right.  For the residents of a planet that may be exposed to events no one has yet experienced, this is not good news….

"It's a familiar disaster cycle.  Begun by exaggeration and overweighting, then neglect sets in."

"We shouldn't focus on a single scenario, or we will overestimate its probability.  Let's set up specific alternatives and make the probabilities add up to 100%."
NB:  No Elijah seat

(336)  Broad framing was obviously superior in this case.  Indeed, it will be superior (or at least not inferior) in every case in which several decisions are to be contemplated together….

A rational agent will of course engage in broad framing, but Humans are by nature narrow framers.

(340)  Decision makers who are prone to narrow framing construct a preference every time they face a risky choice.  They would do better by having a risk _policy_ that they routinely apply whenever a relevant problem arises.  [take highest deductible, don't buy extended warranties…]…  A risk policy is a broad frame…  A risk policy that aggregates decisions is analogous to the outside view of planning problems that I discussed earlier.  The outside view shifts the focus from the specifics of the current situation to the statistics of outcomes in similar situations….  The outside view and the risk policy are remedies against two distinct biases that affect many decisions:  the exaggerated optimism of the planning fallacy and the exaggerated caution induced by loss aversion.  The two biases oppose each other…  There is no guarantee, of course, that the biases cancel out in every situation.

(342)  Except for the very poor, for whom income coincides with survival, the main motivators of money-seeking are not necessarily economic. For the billionaire looking for the extra billion, and indeed for the participant in an experimental economics project looking for the extra dollar, money is a proxy for points on a scale of self-regard and achievement.  These rewards and punishments, promises and threats, are all in our heads.  We carefully keep score of them…

The ultimate currency that rewards or punishes is often emotional, a form of mental self-dealing that inevitably creates conflicts of interest when the individual acts as an agent on behalf of an organization.

(343)  The emotions that people attach to the state of their mental accounts are not acknowledged in standard economic theory.

(344)  As might be expected, finance research has documented a massive preference for selling winners rather than losers - a bias that has been given an opaque label:  the _disposition effect_.

(345)  The decision to invest additional resources in a losing account, when better investments are available, is known as the _sunk-cost fallacy_, a costly mistake that is observed in decisions large and small.  

(346)  Regret is an emotion, and it is also a punishment that we administer to ourselves.  The fear of regret is a factor in many of the decisions that people make ("Don't do this, you will regret it" is a common warning), and the actual experience of regret is familiar.

(347)  Regret and blame are both evoked by a comparison to a norm, but the relevant norms are different.

(348)  … people expect to have stronger emotional reactions (including regret) to an outcome that is produced by action than to the same outcome when it is produced by inaction.  This has been verified in the context of gambling:  people expect to be happier if they gamble and win than if they refrain from gambling and get the same amount.  The asymmetry is at least as strong for losses, and it applies to blame as well as to regret.  The key is not the difference between commission and omission but the distinction between default options and actions that deviate from the default.

(349)  The asymmetry in the risk of regret favors conventional and risk-averse choices…  The physician who prescribes the unusual treatment faces a substantial risk of regret, blame, and perhaps litigation.  In hindsight, it will be easier to imagine the normal choice;  the abnormal choice will be easy to undo.  True, a good outcome will contribute to the reputation of the physician who dared, but the potential benefit is smaller than the potential cost because success is generally a more normal outcome than is failure.

(350)  The parents were asked for the discount that would induce them to switch to the less expensive (and less safe) product [insecticide].  More than two-thirds of the parents in the survey responded that they would not purchase the new product at any price!  They were evidently revolted by the very idea of trading the safety of their child for money.  The minority who found a discount they could accept demanded an amount that was significantly higher than the amount they were willing to pay for a far larger improvement in the safety of the product.

 (351)  The _taboo tradeoff_ against accepting any increase in risk is not an efficient way to use the safety budget.  In fact, the resistance may be motivated by a selfish fear of regret more than by a wish to optimize the child's safety.  The what-if? thought….

In the regulatory context, the precautionary principle imposes the entire burden of proving safety on anyone who undertakes actions that might harm people or the environment.  Multiple international bodies have specified that the absence of scientific evidence of potential damage is not sufficient justification for taking risks.  As the jurist Cass Sunstein points out, the precautionary principle is costly, and when interpreted strictly it can be paralyzing.  He mentions an impressive list of innovations that would not have passed the test, including "airplanes, air conditioning, antibiotics, automobiles, chlorine, the measles vaccine, open-heart surgery, radio, refrigeration, smallpox vaccine, and X-rays."  The strong version of the precautionary principle is obviously untenable.  But _enhanced loss aversion_ is embedded in a strong and widely shared moral intuition;  it originates in System 1.  The dilemma between intensely loss-averse moral attitudes and efficient risk management does not have a simple and compelling solution.

(352)  If you can remember when things go badly that you considered the possibility of regret carefully before deciding, you are likely to experience less of it.  You should also know that regret and hindsight bias will come together, so anything you can do to preclude hindsight is likely to be helpful.  My personal hindsight-avoiding policy is to be either very thorough or completely casual when making a decision with long-term consequences.  Hindsight is worse when you think a little, just enough to tell yourself later, "I almost made a better choice."

Daniel Gilbert and his colleagues provocatively claim that people generally anticipate more regret than they will actually experience, because they underestimate the efficacy of the psychological defenses they will deploy - which they label the "psychological immune system."  Their recommendation is that you should not put too much weight on regret;  even if you have some, it will hurt less than you now think.

(355)  Bet A:  11/36 to win $160, 25/36 to lose $15
Bet B:  35/36 to win $40, 1/36 to lose $10

...Remember that you are not negotiating with anyone - your task is to determine the lowest price at which you would truly be willing to give up the bet.  Try it.  You may find that the prize that can be won is salient in this task, and that your evaluation of what the bet is worth is anchored in that value.  The results support this conjecture, and the selling price is higher for bet A than for bet B.  This is a preference reversal:  people choose B over A, but if they imagine owning only one of them, they set a higher value on A than on B.  As in the burglary scenarios, the preference reversal occurs because joint evaluation focuses attention on an aspect of the situation - the fact that bet A is much less safe than bet B - which was less salient in single evaluation.  The features that caused the difference between the judgments of the options in single evaluation - the poignancy of the victim being in the wrong grocery store and the anchoring on the prize - are suppressed or irrelevant when the options are evaluated jointly.  The emotional reactions of System 1 are much more likely to determine single evaluation;  the comparison that occurs in joint evaluation always involves a more careful and effortful assessment, which calls for System 2.
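
NB:  The reversal is striking because the two bets are nearly identical in expected value, so neither choosing B nor pricing A higher can be defended on EV grounds.  A quick check:

```python
from fractions import Fraction as F

ev_a = F(11, 36) * 160 - F(25, 36) * 15   # 11/36 win $160, 25/36 lose $15
ev_b = F(35, 36) * 40 - F(1, 36) * 10     # 35/36 win $40, 1/36 lose $10
print(f"Bet A: ${float(ev_a):.2f}   Bet B: ${float(ev_b):.2f}")
# Bet A: $38.47   Bet B: $38.61 - yet people choose B and price A higher,
# anchored on A's salient $160 prize.
```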

(356)  …"it allows individual choice to depend on the context in which the choices are made" a clear violation of coherence doctrine.  

(357)  Judgments and preferences are coherent within categories but potentially incoherent when the objects that are evaluated belong to different categories.

(360)  …_evaluability hypothesis_:  The number of entries is given no weight in single evaluation, because the numbers are not "evaluable" on their own.

(361)  The legal system, contrary to psychological common sense, favors single evaluation.

(363)  "Italy won."  "France lost."  Do these statements have the same meaning?  The answer depends entirely on what you mean by _meaning_.

….As philosophers say, their truth conditions are identical:  if one of these sentences is true, then the other is true as well.  This is how Econs understand things.  Their beliefs and preferences are reality-bound.  In particular, the objects of their choices are states of the world, which are not affected by the words chosen to describe them.

There is another sense of _meaning_, in which "Italy won" and "France lost" do not have the same meaning at all.  In this sense, the meaning of a sentence is what happens in your associative machinery while you understand it…  The fact that logically equivalent statements evoke different reactions makes it impossible for Humans to be as reliably rational as Econs.

(364)  The problem we constructed was influenced by what we had learned from Richard Thaler, who told us that when he was a graduate student he had pinned on his board a card that said COSTS ARE NOT LOSSES.  In his early essay on consumer behavior, Thaler described the debate about whether gas stations would be allowed to charge different prices for purchases paid with cash or on credit.  The credit-card lobby pushed hard to make differential pricing illegal, but it had a fallback position:  the difference, if allowed, would be labeled a cash discount, not a credit surcharge.  Their psychology was sound:  people will more readily forgo a discount than pay a surcharge.  The two may be economically equivalent, but they are not emotionally equivalent.

(365)  … but we already know that the Human mind is not bound to reality….

[20 subjects in KEEP/LOSE frame experiment ranking system] rationality index

(367)  Reframing is effortful and System 2 is normally lazy.  Unless there is an obvious reason to do otherwise, most of us passively accept decision problems as they are framed and therefore rarely have an opportunity to discover the extent to which our preferences are _frame-bound_ rather than _reality-bound_.

(368)  The different choices in the two frames fit prospect theory, in which choices between gambles and sure things are resolved differently, depending on whether the outcomes are good or bad.  Decision makers tend to prefer the sure thing over the gamble (they are risk averse) when the outcomes are good.  They tend to reject the sure thing and accept the gamble (they are risk seeking) when both outcomes are negative.  These conclusions were well established for choices about gambles and sure things in the domain of money.  The disease problem shows that the same rule applies when the outcomes are measured in lives saved or lost.  In this context, as well, the framing experiment reveals that risk-averse and risk-seeking preferences are not reality-bound.  Preferences between the same objective outcomes reverse with different formulations.

(369-370)  Thomas Schelling, _Choice and Consequence_:  example of framing effect:
Should the child exemption be larger for the rich than for the poor?
Should the childless poor pay as large a surcharge as the childless rich?

(370)  We can recognize System 1 at work.  It delivers an immediate response to any question about rich and poor:  when in doubt, favor the poor.
NB:  a kick the poors response for some of the rich?

(371)  It is a better frame because the loss, even if the tickets were lost, is "sunk," and sunk costs should be ignored.  History is irrelevant and the only issue that matters is the set of options the theater patron has now and their likely consequences.

(372)  Broader frames and inclusive accounts generally lead to more rational decisions.

…..The mpg frame is wrong, and it should be replaced by the gallons-per-mile frame (or liters-per-100 kilometers, which is used in most other countries).  As Larrick and Soll point out, the misleading intuitions fostered by the mpg frame are likely to mislead policy makers as well as car buyers.
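
NB:  Easy to verify:  over a fixed distance, fuel used is miles divided by mpg, so improvements at the low end of the mpg scale dominate.  (The 12->14 and 30->40 comparisons echo Larrick and Soll's example;  the 10,000-mile distance is just for illustration.)

```python
# Gallons-per-mile framing: gas used = miles / mpg.
miles = 10_000
for old, new in [(12, 14), (30, 40)]:
    saved = miles / old - miles / new
    print(f"{old} -> {new} mpg saves {saved:.0f} gallons per {miles:,} miles")
# 12 -> 14 saves ~119 gallons; 30 -> 40 saves only ~83,
# despite the much larger mpg gain.
```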

(373)  An article published in 2003 noted that the rate of organ donation was close to 100% in Austria but only 12% in Germany, 86% in Sweden but only 4% in Denmark.

These enormous differences are a framing effect, which is caused by the format of the critical question.  The high-donation countries have an opt-out form, where individuals who wish not to donate must check an appropriate box.  Unless they take this simple action, they are considered willing donors.  The low-contribution countries have an opt-in form:  you must check a box to become a donor.  That is all.  The best single predictor of whether or not people will donate their organs is the designation of the default option that will be adopted without having to check a box.

Unlike other framing effects that have been traced to features of System 1, the organ donation effect is best explained by the laziness of System 2.

(380)  Peak-end rule:  The global retrospective rating was well predicted by the average of the level of pain reported at the worst moment of the experience and at its end.
Duration neglect:  The duration of the procedure had no effect whatsoever on the ratings of total pain.
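
NB:  Both rules are simple enough to state in code.  A minimal sketch (the pain profiles are hypothetical, in the spirit of the cold-hand experiment):

```python
# Peak-end rule: memory keeps the average of the worst and final moments;
# total pain ("area under the curve") keeps the sum.
def remembered(pain):
    return (max(pain) + pain[-1]) / 2

short_episode = [4, 6, 8, 8]          # ends at its worst moment
long_episode  = [4, 6, 8, 8, 5, 3]    # same episode plus a milder tail

for name, ep in [("short", short_episode), ("long", long_episode)]:
    print(f"{name}: total = {sum(ep)}, remembered = {remembered(ep)}")
# The long episode contains MORE total pain (34 vs 26) yet leaves a
# BETTER memory (5.5 vs 8.0): duration neglect plus the peak-end rule.
```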

(381)  The _experiencing self_ is the one that answers the question:  "Does it hurt now?"  The _remembering self_ is the one that answers the question:  "How was it, on the whole?"  Memories are all we get to keep from our experience of living, and the only perspective that we can adopt as we think about our lives is therefore that of the remembering self…

Does the actual experience count for nothing?

Confusing experience with the memory of it is a compelling cognitive illusion - and it is the substitution that makes us believe a past experience can be ruined.   The experiencing self does not have a voice.  The remembering self is sometimes wrong, but it is the one that keeps score and governs what we learn from living, and it is the one that makes decisions.  What we learn from the past is to maximize the qualities of our future memories, not necessarily of our future experience.  This is the tyranny of the remembering self.
NB:  Kierkegaard's we live life forward and remember it backward.

(383)  The same operating feature of System 1 accounts for all three situations:  System 1 represents sets by averages, norms, and prototypes, not by sums.  Each cold-hand episode is a set of moments, which the remembering self stores as a prototypical moment.  This leads to a conflict.  For an objective observer evaluating the episode from the reports of the experiencing self, what counts is the "area under the curve" that integrates pain over time;  it has the nature of a sum.  The memory that the remembering self keeps, in contrast, is a representative moment, strongly influenced by the peak and the end.

(387)  A story is about significant events and memorable moments, not about time passing.  Duration neglect is normal in a story, and the ending often defines its character…

Caring for people often takes the form of concern for the quality of their stories, not for their feelings.

(390)  Odd as it may seem, I am my remembering self, and the experiencing self, who does my living, is like a stranger to me.

(394)  The mood of the moment depends primarily on the current situation.  Mood at work, for example, is largely unaffected by the factors that influence general job satisfaction, including benefits and status.  More important are situational factors such as an opportunity to socialize with coworkers, exposure to loud noise, time pressure (a significant source of negative affect), and the immediate presence of a boss (in our first study, the only thing that was worse than being alone).  Attention is key.  Our emotional state is largely determined by what we attend to, and we are normally focused on our current activity and immediate environment.

(395)  From the social perspective, improved transportation for the labor force, availability of child care for working women, and improved socializing opportunities for the elderly may be relatively efficient ways to reduce the U-index of society - even a reduction by 1% would be a significant achievement, amounting to millions of hours of avoided suffering.

(397)  The satiation level beyond which experienced well-being no longer increases was a household income of about $75,000 in high-cost areas (it could be less in areas where the cost of living is lower).  The average increase of experienced well-being associated with incomes beyond that level was precisely zero.  This is surprising because higher income undoubtedly permits the purchase of many pleasures, including vacations in interesting places and opera tickets, as well as an improved living environment.  Why do these added pleasures not show up in reports of emotional experience?  A plausible interpretation is that higher income is associated with a reduced ability to enjoy the small pleasures of life.  There is suggestive evidence in favor of this idea:  priming students with the idea of wealth reduces the pleasure their face expresses as they eat a bar of chocolate!

……."The easiest way to increase happiness is to control your use of time.  Can you find more time to do the things you enjoy doing?"

(400)  Even when it is not influenced by completely irrelevant accidents such as the coin on the machine, the score that you quickly assign to your life is determined by a small sample of highly available ideas, not by a careful weighting of the domains of your life.

(402)  Any aspect of life to which attention is directed will loom large in a global evaluation.  This is the essence of the _focusing illusion_, which can be described in a single sentence:
Nothing in life is as important as you think it is when you are thinking about it.

(406)  Daniel Gilbert and Timothy Wilson introduced the word _miswanting_ to describe bad choices that arise from errors of affective forecasting.  This word deserves to be in everyday language.  The focusing illusion (which Gilbert and Wilson call focalism) is a rich source of miswanting.  In particular, it makes us prone to exaggerate the effect of significant purchases or changed circumstances on our future well-being….

The focusing illusion creates a bias in favor of goods and experiences that are initially exciting, even if they will eventually lose their appeal.  Time is neglected, causing experiences that will retain their attention value in the long term to be appreciated less than they deserve to be.

(407)  The mistake that people make in the focusing illusion involves attention to selected moments and neglect of what happens at other times.  The mind is good with stories, but it does not appear to be well designed for the processing of time.

(408)  I began this book by introducing two fictitious characters, spent some time discussing two species, and ended with two selves.  The two characters were the intuitive System 1, which does the fast thinking, and the effortful and slower System 2, which does the slow thinking, monitors System 1, and maintains control as best it can within its limited resources.  The two species were the fictitious Econs, who live in the land of theory, and the Humans, who act in the real world.  The two selves are the experiencing self, which does the living, and the remembering self, which keeps score and makes the choices.

(409)  We believe that duration is important, but our memory tells us it is not.  The rules that govern the evaluation of the past are poor guides for decision making, because time does matter.  The central fact of our existence is that time is the ultimate finite resource, but the remembering self ignores that reality.  The neglect of duration combined with the peak-end rule causes a bias that favors a short period of intense joy over a long period of moderate happiness.  The mirror image of the same bias makes us fear a short period of intense but tolerable suffering more than we fear a much longer period of moderate pain.

(412)  The decision of whether or not to protect individuals against their mistakes therefore presents a dilemma for behavioral economists.  The economists of the Chicago school do not face that problem, because rational agents do not make mistakes.  For adherents of this school, freedom is free of charge.

(413)  Thaler and Sunstein [Nudge] advocate a position of libertarian paternalism, in which the state and other institutions are allowed to _nudge_ people to make decisions that serve their own long-term interests.  The designation of joining a pension plan as the default option is an example:  people are automatically enrolled in the plan and merely have to check a box to opt out.  As we saw earlier, the framing of the individual's decision - Thaler and Sunstein call it choice architecture - has a huge effect on the outcome.  The nudge is based on sound psychology, which I described earlier.  The default option is naturally perceived as the normal choice.

(417)  Its operative features, which include WYSIATI, intensity matching, and associative coherence, among others, give rise to predictable biases and to cognitive illusions such as anchoring, nonregressive predictions, overconfidence, and numerous others.

(418)  There is much to be done to improve decision making.  One example out of many is the remarkable absence of systematic training for the essential skill of conducting efficient meetings.

(424)  In social interaction, as well as in training, rewards are typically administered when performance is good, and punishments are typically administered when performance is poor.  By regression alone, therefore, behavior is most likely to improve after punishment and most likely to deteriorate after reward.  Consequently, the human condition is such that, by chance alone, one is most often rewarded for punishing others and most often punished for rewarding them.  People are generally not aware of this contingency.  In fact, the elusive role of regression in determining the apparent consequences of reward and punishment seems to have escaped the notice of students of this area.
NB:  Time lag and our inability to recognize it:  we always want a proximate cause.

(428)  Studies of choice among gambles and of judgments of probability indicate that people tend to overestimate the probability of conjunctive events and to underestimate the probability of disjunctive events.  These biases are readily explained as effects of anchoring….

Because of anchoring, people will tend to underestimate the probabilities of failure in complex systems.  Thus, the direction of the anchoring bias can sometimes be inferred from the structure of the event.  The chain-like structure of conjunctions leads to overestimation, the funnel-like structure of disjunctions leads to underestimation.
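
NB:  Concretely (the component probabilities here are illustrative):

```python
# Conjunctions of likely steps are less probable than any single step;
# disjunctions of unlikely failures are more probable than any single one.
p_step, n = 0.9, 7                 # 7 steps, each 90% reliable
p_all_succeed = p_step ** n        # conjunction
p_any_fails = 1 - p_all_succeed    # disjunction of the failure modes

print(f"All {n} steps succeed: {p_all_succeed:.0%}")   # ~48%
print(f"At least one fails:    {p_any_fails:.0%}")     # ~52%
# Both are far from the 90% / 10% single-component anchors
# intuition starts from - hence over- and underestimation.
```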

(431)  This article described three heuristics that are employed in making judgments under uncertainty: (i) representativeness, which is usually employed when people are asked to judge the probability that an object or event A belongs to class or process B;  (ii) availability of instances or scenarios, which is often employed when people are asked to assess the frequency of a class or the plausibility of a particular development;  and (iii) adjustment from an anchor, which is usually employed in numerical prediction when a relevant value is available.  These heuristics are highly economical and unusually effective, but they lead to systematic and predictable errors.  A better understanding of these heuristics and of the biases to which they lead could improve judgments and decisions in situations of uncertainty.

(435)  For example, most respondents in a sample of undergraduates refused to stake $10 on the toss of a coin if they stood to win less than $30.
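
NB:  That threshold pins down a loss-aversion coefficient.  Assuming a linear value function (my simplification), a 50-50 bet to win G or lose L is acceptable only when G >= lambda * L, so refusing anything under $30 against a $10 stake implies lambda of about 3:

```python
# Minimum acceptable win for a 50-50 bet, given loss aversion lambda:
# 0.5 * G - 0.5 * lam * L >= 0  =>  G >= lam * L.
def min_acceptable_win(loss, lam):
    return lam * loss

for lam in (1.5, 2.0, 2.5, 3.0):
    print(f"lambda = {lam}: must stand to win ${min_acceptable_win(10, lam):.0f}")
```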

(438)  These considerations suggest a category-boundary effect:  A change from impossibility to possibility or from possibility to certainty has a bigger impact than a comparable change in the middle of the scale.
NB:  Hence the resistance to switching from a problem mindset to a solutions mindset.

(445-446)  The preceding analysis implies that an individual's subjective state can be improved by framing negative outcomes as costs rather than as losses.  The possibility of such psychological manipulations may explain a paradoxical form of behavior that could be labeled the dead-loss effect.  Thaler (1980) discussed the example of a man who develops tennis elbow soon after paying the membership fee in a tennis club and continues to play in agony to avoid wasting his investment.  Assuming that the individual would not play if he had not paid the membership fee, the question arises:  How can playing in agony improve the individual's lot?  Playing in pain, we suggest, maintains the evaluation of the membership fee as a cost.  If the individual were to stop playing, he would be forced to recognize the fee as a dead loss, which may be more aversive than playing in pain.
