Friday, 11 March 2016

Are Disagreements Honest? by Tyler Cowen and Robin Hanson

  • ABSTRACT. We review literatures on agreeing to disagree and on the rationality of differing priors, in order to evaluate the honesty of typical disagreements. A robust result is that honest truth-seeking agents with common priors should not knowingly disagree. Typical disagreement seems explainable by a combination of random belief influences and by priors that tell each person that he reasons better than others. When criticizing others, however, people seem to uphold rationality standards that disapprove of such self-favoring priors. This suggests that typical disagreements are dishonest. We conclude by briefly considering how one might try to become more honest when disagreeing.
  • I. Introduction
  • SUBJECTS. politics, morality, religion, and relative abilities.
  • OBJECTIVITY. they often consider their disagreements to be about what is objectively true, rather than about how they each feel or use words.
  • people often consider their disagreements to be honest...Yet according to well-known theory, such honest disagreement is impossible.
  • Robert Aumann (1976) first developed general results about the irrationality of “agreeing to disagree.” He showed that if two or more Bayesians would believe the same thing given the same information (i.e., have “common priors”), and if they are mutually aware of each other's opinions (i.e., have “common knowledge”), then those individuals cannot knowingly disagree. Merely knowing someone else’s opinion provides a powerful summary of everything that person knows, powerful enough to eliminate any differences of opinion due to differing information.
  • One of Aumann’s assumptions, however, does make a big difference. This is the assumption of common priors... While some people do take the extreme position that priors must be common to be rational, others take the opposite extreme position, that any possible prior is rational.
  • TESI. We will tentatively conclude that typical disagreements are best explained by postulating that people have self-favoring priors, even though they disapprove of such priors, and that self-deception usually prevents them from seeing this fact.
  • II. The Phenomena of Disagreement
  • Disagreements do not typically embarrass us.
  • and high-IQ individuals seem no less likely to disagree than others. Not only are disagreements not embarrassing; more social shame often falls on those who agree too easily, and so lack “the courage of their convictions.”
  • Psychologists suggest that human disagreements typically depend heavily on each person believing that he or she is better than others
  • People are usually more eager to speak than they are to listen,
  • For example, most people, especially men, estimate themselves to be more able than others
  • Gilovich (1991, p.77) cites a survey of university professors, which found that 94% thought they were better at their jobs than their average colleagues.
  • III. The Basic Theory of Agreeing to Disagree
  • Imagine that John hears a noise, looks out his window and sees a car speeding away. Mary also hears the same noise, looks out a nearby window, and sees the same car.... John and Mary’s immediate impressions about the car will differ, due both to differences in what they saw and how they interpreted their sense impressions. John’s first impression is that the car was an old tan Ford, and he tells Mary this. Mary’s first impression is that the car was a newer brown Chevy, but she updates her beliefs upon hearing from John. Upon hearing Mary’s opinion, John also updates his beliefs. They then continue back and forth, trading their opinions about the likelihood of various possible car features... If Mary sees John as an honest truth-seeker who would believe the same things as Mary given the same information... then Mary should treat John’s differing opinion as indicating things that he knows but she does not. Mary should realize that they are both capable of mistaken first impressions. If her goal is to predict the truth, she has no good reason to give her own observation greater weight, simply because it was hers. Of course, if Mary has 20/20 eyesight, while John is nearsighted, then Mary might reasonably give more weight to her own observation. But then John should give her observation greater weight as well.
  • If John and Mary repeatedly exchange their opinions with each other, their opinions should eventually stop changing, at which point they should become mutually aware (i.e., have “common knowledge”) of their opinions
  • A more detailed analysis says not only that people must ultimately agree, but also that the discussion path of their alternating expressed opinions must follow a random walk.
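The exchange process John and Mary follow can be simulated directly. The sketch below uses a standard toy illustration from the agreement-theorem literature (the nine-state space, partitions, and event are illustrative assumptions, not taken from the paper): two agents share a uniform common prior but hold different information partitions, and they alternate announcing posteriors for an event. Each announcement publicly rules out states, and the announced opinions provably converge.

```python
import itertools
from fractions import Fraction

STATES = set(range(1, 10))
EVENT = {3, 4}                                  # the event being estimated
PART_JOHN = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]   # John's information partition
PART_MARY = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]   # Mary's information partition

def cell(partition, state):
    """The cell of the partition containing `state`."""
    return next(c for c in partition if state in c)

def posterior(possible):
    """P(EVENT | possible) under the uniform common prior."""
    return Fraction(len(possible & EVENT), len(possible))

def exchange(true_state):
    """Alternate public posterior announcements until opinions agree."""
    public = set(STATES)    # states consistent with all announcements so far
    history = []
    for part in itertools.cycle([PART_JOHN, PART_MARY]):
        p = posterior(cell(part, true_state) & public)
        history.append(p)
        # Everyone learns: the true state must be one at which the
        # speaker would have announced exactly p.
        public = {s for s in public if posterior(cell(part, s) & public) == p}
        if len(history) >= 2 and history[-1] == history[-2]:
            return history  # opinions are common knowledge and agree

print(exchange(1))  # [Fraction(1, 3), Fraction(1, 2), Fraction(1, 3), Fraction(1, 3)]
```

At the true state 1 the announced opinions disagree at first (1/3 versus 1/2) but converge once the announcements have conveyed enough information, just as the theory says they must.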
  • Yet in ordinary practice, as well as in controlled laboratory experiments (Hanson and Nelson 2004), we know that disagreement is persistent.
  • IV. Generalizations of the Basic Theory
  • Thus John and Mary need not be absolutely sure that they are both honest, that they heard each other correctly, or that they interpret language the same way.
  • The beliefs of real people usually depend not only on their information about the problem at hand, but also on their mental context. John should pay attention to Mary's opinion not only because it may embody information that John does not have, but also because it is the product of a different mental context, and John should want to average over as many mental contexts as he can.
  • V. Comparing Theory and Phenomena
  • The stylized facts of human disagreement are in conflict with the above theory of disagreement.
  • The theory above implicitly assumed that people say what they believe.
  • people usually have the strong impression that they are not lying, and it is hard to see how people could be so mistaken about this.... People sometimes accuse their opponents of insincerity, but rarely accept this same label as a self-description.
  • RATIONALITY AS HONESTY. Bayesians can easily disagree due to differing priors, regardless of whether or not they have differing information, mental contexts, or anything else. Does this allow typical human disagreement to be rational? ... we would also have to decide if these prior differences are rational. And this last topic turns out to be very controversial.
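A minimal numerical sketch of this point (the priors and data below are invented for illustration, not from the paper): two Bayesians with different Beta priors over a coin's bias can observe the identical flips, condition correctly on them, and still end with different posteriors.

```python
from fractions import Fraction

def posterior_mean(a, b, heads, tails):
    """Posterior mean of a coin's heads-probability under a Beta(a, b)
    prior after observing `heads` heads and `tails` tails:
    (a + heads) / (a + b + heads + tails)."""
    return Fraction(a + heads, a + b + heads + tails)

data = (7, 3)                        # both agents see the same 10 flips
john = posterior_mean(1, 1, *data)   # uniform Beta(1, 1) prior
mary = posterior_mean(4, 1, *data)   # optimistic Beta(4, 1) prior

print(john, mary)  # 2/3 11/15 -- full common information, persistent disagreement
```

Both updates are impeccably Bayesian; the disagreement survives because it was built into the priors, which is why the rationality of prior differences becomes the central question.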
  • To evaluate the honesty of disagreement, we do not need to know what sorts of differing priors are actually rational, but only what sorts of differences people think are rational.
  • VI. Proposed Rationality Constraints On Priors
  • In general, Bayesian agents can have beliefs not only about the world, but also about the beliefs of other agents, about other agents’ beliefs about other agents, and so on.
  • Priors are common knowledge among Bayesians. Thus when priors differ, all agents know those differences, know that they all know them, and so on. So while agents with differing priors can agree to disagree, they must anticipate such disagreements.
  • But how different can rational priors be? One extreme position is that no differences are rational (Harsanyi 1983, Aumann 1998). The most common argument given for this common prior position is that differences in beliefs should depend only on differences in information.
  • Another extreme position is that a prior is much like a utility function: an ex post reconstruction of what happens, rather than a real entity subject to independent scrutiny.
  • one prior is no more rational than another, any more than one utility function is more rational than another.
  • A consequence of this is that if there are no constraints on which priors are rational, there are almost no constraints on which beliefs are rational. People who think that some beliefs are irrational are thus forced to impose constraints on what priors are rational.
  • it is common to require Bayesians to change their beliefs by conditioning when they learn (or forget) information.
  • Finally, some theorists use considerations of the causal origins of priors to argue that certain prior differences are irrational.
  • In summary, prior-based disagreements should be fully anticipated, and there are many possible positions on the question of when differing priors are rational. Some say no differences are rational, while others say all differences are rational.
  • VII. Commonly Upheld Rationality Standards
  • Most people have not directly declared a position on the subject of what kinds of prior differences are rational.
  • people who feel free to criticize consistently complain when they notice someone making a sequence of statements that is inconsistent or incoherent. They also complain when they notice that someone’s opinion does not change in response to relevant information.
  • Perhaps even more frequently, people criticize others when their opinions appear to have self-serving biases.
  • Though critics acknowledge that self-favoring belief is a natural tendency, such critics do not seem to endorse those beliefs as accurate or reliable.
  • These common criticisms suggest that most people implicitly uphold rationality standards that disapprove of self-favoring priors,
  • VIII. Truth-Seeking and Self-Deception
  • Non-truth-seeking and self-deception offer two complementary explanations for this difference in behavior. First, believing in yourself can be more functional than believing in logical contradictions. Second, while it is hard to deny that you have stated a logical contradiction, once the contradiction is pointed out, it is much easier to deny that a disagreement is due to your having a self-favoring prior.
  • Scientists with unreasonably optimistic beliefs about their research projects may work harder and thus better advance scientific knowledge (Everett 2001; Kitcher 1990).
  • Self-favoring priors can thus be “rational” in the sense of helping one to achieve familiar goals, even if they are not “rational” in the sense of helping one to achieve the best possible estimate of the true situation (Caplan 2000).
  • Evolutionary arguments have even been offered for why we might have evolved to be biased and self-deceived.
  • This story is also commonly told in literature. For example, the concluding dream in Fyodor Dostoevsky's (1994 [1866]) Crime and Punishment seems to describe disagreement as the original sin, from which arises all other sins. In contrast, the description of the Houyhnhnms in Jonathan Swift's (1962 [1726]) Gulliver’s Travels can be considered a critique showing how creatures (intelligent horses in this case) that agree too much lose their “humanity.”
  • IX. How Few Meta-Rationals?
  • We can call someone a truth-seeker if, given his information and level of effort on a topic, he chooses his beliefs to be as close as possible to the truth. A non-truth-seeker will, in contrast, also put substantial weight on other goals when choosing his beliefs.
  • Let us also call someone meta-rational if he is an honest truth-seeker who chooses his opinions as if he understands the basic theory of disagreement... and abides by the rationality standards that most people uphold, which seem to preclude self-favoring priors.
  • TESI. Our working hypothesis for explaining the ubiquity of persistent disagreement is that people are not usually meta-rational.
  • How many meta-rational people can there be?
  • If meta-rational people were common, and able to distinguish one another, then we should see many pairs of people who have almost no dishonest disagreements with each other.
  • Yet it seems that meta-rational people should be discernible via their conversation style.
  • two meta-rational people should be able to discern one another via a long enough conversation. And once they discern one another, two meta-rational people should no longer have dishonest disagreements.
  • unless meta-rationals simply cannot distinguish each other, only a tiny nondescript percentage of the population, or of academics, can be meta-rational. Either few people have truth-seeking rational cores, and those that do cannot be readily distinguished, or most people have such cores but they are in control infrequently and unpredictably.
  • X. Personal Policy Implications
  • Let us assume, however, that you, the reader, are trying to be one of those rare meta-rational souls in the world,
  • One approach would be to try to never assume that you are more meta-rational than anyone else.
  • Alternatively, you could adopt a "middle" opinion. There are, however, many ways to define middle, and people can disagree about which middle is best
  • psychologists have found numerous correlates of self-deception. Self-deception is harder regarding one’s overt behaviors: there is less self-deception in a galvanic skin response (as used in lie detector tests) than in speech; the right brain hemisphere tends to be more honest; evaluations of actions are less honest after those actions are chosen than before (Trivers 2000); self-deceivers have more self-esteem and less psychopathology, especially less depression (Paulhus 1986); and older children are better than younger ones at hiding their self-deception from others (Feldman & Custrini 1988). Each correlate implies a corresponding sign of self-deception. Other commonly suggested signs of self-deception include idiocy, self-interest, emotional arousal, informality of analysis, an inability...
  • While we have identified some considerations to keep in mind, were one trying to be one of those rare meta-rational souls, we have no general recipe for how to proceed. Perhaps recognizing the difficulty of this problem can at least make us a bit more wary of our own judgments when we disagree.
  • XI. Conclusion
  • We have therefore hypothesized that most disagreement is due to most people not being meta-rational, i.e., honest truth-seekers who understand disagreement theory and abide by the rationality standards that most people uphold. We have suggested that this is at root due to people fundamentally not being truth-seeking. This in turn suggests that most disagreement is dishonest.