So, are we rational? Or does our very desire to impose “rationality” on our observations of the world, even when we cannot know enough to discern a pattern, make us fundamentally irrational?
When I think about it, the rarity, the oddball, the one who is at least vaguely disliked and distrusted, is the one who has no superstitions, who acts “rationally” in all circumstances. We don’t like the idea of talking to a psychiatrist at a cocktail party, for fear they will see our basic irrationality.
Is human rationality the exception? Are we only capable of applying it in limited circumstances? What’s the answer to the good professor’s question?
One of my favorite quotes comes from, of all places, one of Harry Harrison’s Stainless Steel Rat books: the man who trains the Stainless Steel Rat observes that “Man is not a rational animal, but a rationalizing one.” I think there is no doubt that we construct causal structures out of our environment on less than adequate evidence, but unlike the article quoted above, I don’t think this is a survival technique that was valuable only in the time of cavemen and tigers, or for small babies.
If you think about it, the scientific method itself relies on creating structure before you have evidence to support it absolutely: observation, hypothesis, testing. In practice we recognize that attempting to gather information without some sort of ad-hoc framework is inefficient. The question of why this is so is probably the real meat of this debate.

Yes, it is an important insight that we create these ad-hoc frameworks before we have enough facts to really support them, and we must remember at all times that our brain tends to forget the ad-hoc nature of that framework and shows a distinct bias towards it. That’s why it is good to use all sorts of artificial safeguards (such as carefully documented testing methods, peer review, and even just the recognition and discussion of the existence of confirmation bias) to try to limit this unfortunate side effect. But we have to remember that it is an unfortunate side effect of a critical process, and it makes no sense to deride ourselves for being irrational when that very irrationality is what gives us a place to start.
I would guess that since we invented the idea of rationality, we must be rational to some degree. We use it because rationality is a human trait, and/or because we’ve exhibited rational behavior in determining reality.
I don’t think we have to be completely one way or the other. It’s rational to be irrational sometimes. Conversely, it’s irrational to be rational all of the time, since this is not in our best interest.
I hate to say it, but yes, I think rational behavior (or at least what I think of as rational behavior) is pretty rare on a day to day basis.
Sometimes. The role of coincidence is interesting. Everything is coincidental, I think. The probability that anything at all happened exactly the way it did is near zero. I think everyone is impressed by unusually striking coincidences to some extent. Some people are impressed very little, while to others almost everything is amazing.
Well, I think rationality is something we have the capacity for. We’d still have the capacity, on an individual level, to accept or reject rationality–the little gaps we see in coincidence that the good professor speaks of.
I mean, how else to explain disagreement in the scientific community, or on these boards? The same patterns in a contested issue are before us, but only some people accept them and others reject them. Either one side is imposing patterns where they don’t exist in a vain attempt to create them, or the other is ignoring the patterns in a vain attempt to reject them.
…I have no idea what I’m talking about. I’m feeling fundamentally irrational myself.
If rationality is defined as acting in pursuit of self-interest, then it’s tautological. Otherwise, rational behavior must necessarily be measured against an external cultural or societal standard. So perspective’s kinda right: collectively, we determine what rational behavior is and isn’t. There’s no single discernible rational path without this extrinsic determination.
(n.b. It’s interesting that you should start this thread now, Sua. The fellowship I mentioned to you this evening involves writing a paper on the nature of rationality–specifically, the degrees to which external and internal countervailing forces (for example, advertising and cognitive heuristics, respectively) can affect an individual’s perception of his/her own dispositive interest such that “rational behavior” is less easily predictable. Interesting stuff.)
I get a real rush of pleasure from identifying connections where there at first appeared to be none. Call it satisfaction with learning how to figure something out, call it imposing a human-friendly framework of thought where one didn’t exist before, call it an allusional point of view. Some kind of biochemical response seems to result. This seems like a sub-rational spur to rationality.
(Yikes, that sounded kinda Freudian–the rational superego understanding and overseeing the biochemical and emotional id and ego, I suppose.)
In terms of an economist’s conception of rationality, globally rational man could not have evolved. Brain-power is too expensive. It is likely - particularly given that we live in very different circumstances to those to which we are adapted - that we differ significantly from rationality. Certainly it is possible in the lab to elicit violations of just about all the axioms of consumer theory, and you don’t even need to try to get people to violate the von Neumann-Morgenstern axioms. (In other words, Gaderene’s statement that
But less than perfect rationality doesn’t mean irrationality, just bounded rationality.
If humans were rational, we would all exercise regularly, drive the speed limit, take local public transportation, eat our veggies, actively participate in debates about public affairs and public policy so that public moneys would be spent on things besides sports stadiums, and would come up with a system for electing president of the United States that produces candidates besides Bush or Gore. That evidently not being the case, we ain’t rational.
Rational, reasonable, reason… three concepts which are interrelated but subtly different.
What any society or man determines to be reasonable behavior need not be objectively derived from reason, though it may yet be a rational decision (that is, one in which behavior is decided upon in order to reach a definite conclusion or goal, though the goal need not be reasoned or reasonable).
I think man is a rational animal. Man is very goal-oriented in the general case. The goals he sets are often not reasonable, or reasoned over, but felt [instinctively] and acted upon.
I think it is fair to say man is rational, but not always reasonable.
You assume too much. This is a valid theory if and only if these actions promote, more than they stifle, the goals of the individual. Since an individual can have non-exclusive goals for which any given action may stifle one or promote the other, you cannot draw a reasonable conclusion that man is irrational without proving these actions promote people’s goals without stifling other goals, and that people do not, in fact, already do them.
hawthorne: You’re quite learned in the ways of economics, and I respect you and your posts immensely. So could you explain to me–in smaller words, maybe–why I’m wrong about rational choice theory being, essentially, a tautology? Thanks.
OK, Gaderene, see if this answers your question.
It’s true that most of the time economists don’t examine the question of rationality - we just go ahead and impose it in some form on the data. It’s also true that some economists regard rationality as untouchable and just won’t talk about challenges to it or alternatives to it. It’s certainly true that in most professional work rationality is not being tested.
But as long as you put specific measurable things in the utility function[sup]1[/sup], it’s open to challenge. The thing about putting specific things in the utility function is important because it means that there are some choices that a person could make (in principle or in the lab) that you would have to call irrational. You can set up experiments where if an individual prefers A to B they must prefer C to D if they are rational. If a significant number of test subjects don’t choose this way then rationality is called into question. You can set up experiments with payoffs structured in such a way as to lead a rational individual to behave in a certain way (iterated Prisoner’s Dilemma games, contribution to public goods games or the Dictator game) and see whether people behave rationally.
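A minimal sketch of the kind of consistency check such experiments rely on (the choice data and function name here are hypothetical illustrations, not taken from any actual study):

```python
def is_transitive(prefs):
    """Check a set of revealed pairwise preferences for transitivity.

    prefs is a set of (a, b) pairs, each meaning 'a was chosen over b'.
    A subject whose choices contain a cycle a > b, b > c, c > a has
    revealed intransitive (on this definition, irrational) preferences.
    """
    for (a, b) in prefs:
        for (x, c) in prefs:
            if x == b and (c, a) in prefs:
                return False
    return True

# A consistent subject: prefers A to B, B to C, and A to C.
print(is_transitive({("A", "B"), ("B", "C"), ("A", "C")}))   # True

# An inconsistent subject: prefers A to B and B to C, yet C to A.
print(is_transitive({("A", "B"), ("B", "C"), ("C", "A")}))   # False
```

Real experiments are subtler (budgets and prices rather than bare pairs), but the logic is the same: some patterns of observed choice are flatly incompatible with any consistent preference ordering.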
There is a big literature on this - it is possible to induce most people to violate the axioms of consumer theory, and expected utility theory is just plain embarrassing. Cooperation sometimes occurs in finitely iterated PDs. People contribute to public goods at a greater than Nash level. People reject non-zero offers in the Ultimatum game. Questions framed differently elicit preference reversals, etc. What this means is of course open to dispute.
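The “greater than Nash level” point can be illustrated with a toy linear public goods game (the endowment and return rate below are made-up illustrative parameters):

```python
def payoff(own, others_total, endowment=20, mpcr=0.4):
    """Linear public goods payoff: you keep whatever you don't contribute,
    plus mpcr (the marginal per-capita return) times total group giving."""
    return (endowment - own) + mpcr * (own + others_total)

# Whatever the rest of the group gives, each extra unit you contribute
# costs you 1 and returns only mpcr < 1, so a rational player free-rides.
others = 30
payoffs = {c: payoff(c, others) for c in (0, 5, 10, 15, 20)}
best = max(payoffs, key=payoffs.get)
print(best)  # 0 -- the Nash prediction; real subjects typically give more
```

The tension these experiments exploit is that when mpcr times group size exceeds 1, full contribution maximizes the group’s total payoff even though zero contribution is each individual’s dominant strategy.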
But what you say about self-interest makes me think that you are getting at another question. Most of the time economists implicitly assume that people are selfish: that their utility depends only on their own welfare. But this (confused as it is in the literature) is not required by utility theory and is not the same as saying that people are self-interested. A self-interested person may maximise a utility function which contains as arguments the welfare of other people. All that rationality requires is that you pursue a goal (which is really a weighting of numerous sub-goals) consistently. It does not require that all you care about is effects on yourself, just that it is your valuation of states of the world. Genuinely caring about others is not inconsistent with self-interest in this sense. If you are interested in this line of thinking go and check out Chapter 10 of Colander’s edited volume Complexity and the History of Economic Thought (2000), which grinds through this argument pretty thoroughly.[sup]2[/sup] In addition it provides back-up to my statement earlier in this thread that globally rational man could not have evolved.
What you often find is that certain economists weasel on this point, switching between the strong claim that people are selfish and the vacuous (within a utility maximising framework) claim that they are self interested.[sup]3[/sup]
[sub]I’m using the economists’ toy the utility function interchangeably with rational actor theory here. Not much violence in that. A person with preferences that are complete, reflexive, transitive and locally non-satiated can be represented as having a utility function except if their preferences are lexicographic. [/sub]
[sub]It’s called something like “Competition, Rationality and Complexity in Economics and Biology”. No royalties for me in it.[/sub]
[sub]This is a paraphrase of someone, but I’m at home at the moment and can’t remember the source.[/sub]