Call for Constructive Responses To the Jobless Economy

Great replies, all. I was gone for the weekend, hence my silence.

No point in replying to everything, so here are some highlights. Kimstu, your understanding is entirely correct, despite my wilting prose.

A quick reply to Voodoochile’s assault on the discipline of economics:

This is nonsense. The purpose of a model is to pare down a strategic economic interaction to its bare essentials in order to underscore a basic decision-making process and to transform a large problem into a form that is intellectually tractable. The usefulness of models is evident both in policymaking and in daily business practice, arenas where models are relied on frequently. The assumptions we make are typically for tractability. Only in a poor model do they actually drive the results. We often assume that utility functions, for example, are concave, additively separable, and twice-differentiable, regardless of the specific functional form of the utility. These assumptions may not be strictly true, but it really doesn’t matter. We aren’t trying to predict individual behavior so much as reveal a rational process.

The rest of your objections to the model are irrelevant. They are akin to the fourth grade scenario in which a student objects to one of the “a train leaves Seattle going 60 miles per hour” questions by quibbling over the speed of the wind or whether the conductor had a good breakfast.

This is, of course, pure assertion that opens itself up to empirical counterexamples. Far stronger are analytical proofs, which the models I cited above offer. More importantly, analytical proofs can tell us exactly where the cut points are. Arguing that people don’t want to kick in more than they get out doesn’t really tell us anything about how to form optimal institutions or how to set an optimal tax rate. The math, however, does.

pervert:

Rational is indeed the correct word here. The assumptions about individual behavior required for “rationality” are very, very thin.

[ul]
[li]Individuals are goal-directed[/li]
[li]Individuals maximize their utility with respect to these goals[/li]
[li]Individuals have preferences over outcomes that are reflexive, transitive, and acyclic[/li]
[/ul]

That’s all. This is a standard definition in “rational choice” disciplines like economics and political science. Suffice it to say, it can get us into a lot of trouble with certain people, who believe they can undermine our paradigms with the simple objection that “people aren’t always rational.” People don’t always make good decisions, absolutely. We make no assumptions about the outcomes of individual or aggregate decision-making. We just assume that people tend to think their decisions through in certain basic ways, which strikes me as totally noncontroversial.
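If it helps to see those three conditions in concrete terms, here is a tiny sketch of my own (just an illustration in Python, not anything from the literature) that checks a finite strict-preference relation for transitivity and acyclicity:

[code]
# Illustrative sketch (mine): check a finite strict-preference relation,
# given as a set of (a, b) pairs meaning "a is strictly preferred to b".

def is_transitive(prefers):
    """If a > b and b > c, then a > c must also be in the relation."""
    return all((a, c) in prefers
               for (a, b) in prefers
               for (b2, c) in prefers
               if b == b2)

def is_acyclic(prefers):
    """No chain of strict preferences ever leads back to its starting point."""
    items = {x for pair in prefers for x in pair}
    def reachable(start, goal, seen=frozenset()):
        return any(b == goal or (b not in seen and reachable(b, goal, seen | {b}))
                   for (a, b) in prefers if a == start)
    return not any(reachable(x, x) for x in items)

# apple > banana > cherry, plus apple > cherry: transitive and acyclic.
prefs = {("apple", "banana"), ("banana", "cherry"), ("apple", "cherry")}
print(is_transitive(prefs), is_acyclic(prefs))   # True True
[/code]

The point is only that these are mechanical properties of an ordering. They say nothing about what anyone’s preferences ought to be.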

Bingo. Hence people have the incentive to misrepresent their preferences in order to shift the burden of cost to others. Hence the existence of coercive institutions like taxation.

I’ll try to wade through some of the other posts if no one would mind my further hijacking…

But in the example you are talking about, why is there such a short-term value assumed for the “goals”? It seems that the goals are unnecessarily short-term.

But what I am trying to say is that this is only true because you (not necessarily you individually) have stacked the deck by defining the goals to produce this result. Notice that in the model there is no incentive for participants to be independently self-sufficient. This can, however, have real benefits in the real world. I realize that may be one of the variables not represented in the particular model, but it also seems that your (again, not necessarily you individually) definition of rationality seems to preclude the variable in any model of this type. On the contrary, it seems that the use of rational in this case requires people to act in more dependent ways.

Do you have the names of any papers which discuss where the definition of rational came from? Or perhaps on how it is used?

Thanks again for the expert information, BTW.

I am not sure I understand what you mean. The goal is very general: to consume the most for the least cost. The model doesn’t deal explicitly with the dynamics, but that is certainly possible. For example, take an infinite-generation model. To solve the game for a player’s long-term utility, simply take the integral of each period’s utility for that player, weighted by a discount factor that shrinks over time so that later periods count for less. In discrete time, you take the sum rather than the integral. The game solves the same way.
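If you want the discrete-time bookkeeping spelled out, here is a toy sketch of my own (not the paper’s model; I am assuming the standard convention where period t gets weight delta^t for some delta between 0 and 1):

[code]
# Toy illustration (mine, not the paper's): a player's long-run payoff in
# discrete time is the discounted sum of per-period utilities.
def long_run_utility(period_utilities, delta=0.95):
    """Sum of u_t * delta**t; delta in (0, 1) makes later periods count less."""
    return sum(u * delta**t for t, u in enumerate(period_utilities))

# Constant utility of 1 per period, truncated at 200 periods, is already
# close to the infinite-horizon closed form 1 / (1 - delta) = 20.
print(long_run_utility([1.0] * 200))   # about 19.999
print(1 / (1 - 0.95))                  # 20.0
[/code]

With a constant per-period payoff the infinite sum converges to u / (1 - delta), which is why the discount factor has to be strictly less than one.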

Of course. In this model, as in many of this type, all individuals are almost totally homogeneous. Individual endowments are not modeled. In the “real world,” as it were, they are certainly not irrelevant. However, increasing the complexity of the model does not shed any new intuition on the essential reasoning process individuals use to decide whether or not to fund public goods voluntarily. All the added complexity does is change the cut points and make the math really, really ugly. If you’ve taken a look at the paper, you will see that the math is tricky enough. The authors have to use several tricks to make it tractable in the first place.

I am not certain that I understand your argument here.

That’s a big literature. Here are some classics:

Jon Elster, ed., Rational Choice. Oxford: Blackwell, 1986. (Read at least the introduction and the article by Gary Becker.)

Kenneth Arrow. Mathematical Models in the Social Sciences. (Despite its title, it is very non-formal. Arrow is one of the premier rational choice theorists in the business.)

I also found this neat little overview on the internet. I haven’t read the entire thing, but my skim of the beginning gives me reason to believe that it is both thorough and accurate.

By way of full disclosure, I am not a professional economist and thus do not mean to speak with authority from on high, despite my occasional tone of frustration. :slight_smile: I am a grad student in political science. My program has been entirely colonized by rational choice theory, so I am familiar with all of the literature I cite. I am not, however, a published expert in the field. I will always try to cite lots of sources so you never have to take my word for anything I say.

No, the goal is to get the most coffee with the least economic output. That is, the goal seems to be phrased to imply tricking, using, or leeching off of the coworkers in question. Take that goal in isolation, and it is true that such behavior is rational. Take that goal in the context of living in the world, however, and I’m not sure that rational is the correct word any longer. Even assuming that nothing visible results from drinking more coffee than you pay for, such an individual still has to live with himself. I’m not really suggesting that this is a flaw in the model, more like an anomaly in the method used to create the model. That is, it seems to me that the way the model is constructed (with the definition of rationality you gave) precludes behavior that we can see in the world around us.

If I actually give as much or more to the coffee fund than I drink, do you call my behavior irrational? Possibly, but only if you consider the goal of drinking as much free coffee as possible in a vacuum. If you consider that goal as a subgoal of the many that make up my life, it might actually be irrational to pay for less than the coffee I drink.

I guess what I am saying is that another word might be more appropriate. I’m not really sure what it would be. Perhaps something like “model-rationality,” something to convey that the behavior being described is “rational within the model” but may not be directly transferable to the real world.

Not really trying to make an argument. I’m just trying to understand the models you have brought up. It seems to me that we have a model of economic interaction and a model of behavioral motivation. However, it also seems to me that we have constructed them in order to get the expected result. That is, I’m not sure that results based on the goal of maximum consumption of coffee with minimal expenditure of cash can be scaled up to social security and national health care. For instance, what if the goal of the coffee fund model were changed such that each actor wanted to ensure that coffee was always readily available? Not that they wanted to consume more than they paid for, but that they wanted to ensure that a supply of coffee was always there. It might be possible to assign a different value to coffee paid for but not consumed.

As before, thanks very much for these pointers. Your experience is most appreciated.

In the link you provided there is a discussion of something like what I was talking about. Section 6. A. Definition of Rationality, puts it this way,

And

The rest of that section goes into far more detail than I can summarize. Perhaps looking through it will help you understand my questions.

Thanks again for the link.

BTW, “Simon (1987, p. 5) uses the term bounded rationality to designate rational choice that takes into account the cognitive limitations of the decision-maker: limitations of both knowledge and computational capacity.” Perhaps some sort of term like this would be appropriate.

Or perhaps the word “logical” would be more appropriate. It seems that what the method is really describing is the logical functioning of certain mathematical models. Given the premises and data of the model, the activity of the modeled individual could be said to be “logical”. Rational (in its broader sense) seems not quite to fit.

This quote does point to a huge problem in using economic models for dealing with social and political issues. In fact, it’s the very problem that I think doomed Communism as a social system. Marx did a powerful analysis of economics and class conflicts in Das Kapital, but he never properly addressed the issue of “What will keep the new proletarian rulers from being the same bunch of greedy pigs that the capitalists and plebeians were?”

Answer, as indicated by the Soviet and Chinese oligarchies: nothing. Not a damn thing.

This is also what bothers me about free marketers’ claims about the supremacy of the Invisible Hand. Under ANY kind of economic system, including free market capitalism, there will be an inevitable tendency for the people at the top to change the rules and game the system so that they have a considerable advantage over others in accumulating wealth.

It’s happening in America, home of the free market, right now, to wit:

So my feeling is that no matter what your economic model is, the fundamental problem of human greed will eventually swamp it if not countered by social/political ethics/rules/what-have-you.

The reason why the government has become the landlord of last resort is NOT because the housing market cannot work. Rather, it is because of government interference in this market. It is simply NOT profitable to be a landlord in many American cities…the fact is that the housing market is highly regulated. Take Boston, for example:
-high rents, yet low profits
-blighted public housing, huge taxes
-a totally corrupt housing inspection system
To build new housing in Boston requires 27 separate permits. In theory, this is to “protect” the tenants. In practice, it amounts to a bribe to numerous city agencies. That is why slum housing exists and is able to charge large rents for inferior housing. If the housing market were open, private investors would deliver quality housing in very little time. But no, because of the predatory regulation, private investors have been chased away, so the government builds its own slums.
The assertion that the market has FAILED to deliver housing is mendacious; the city government is responsible for the bad conditions.
A few years ago, the state police RAIDED the Boston inspectional department. They found a department where most employees had other jobs…very little time was spent inspecting housing! :smiley:

No. I do not think that your understanding is entirely consistent with the argument of decision-making models in general.

The “goal” is to maximize your utility. Depending on your preference profile (which I will deal with below), the maximization of your utility may very well be coughing up a dollar to buy a new machine. The argument of the model, however, is that it is highly unlikely that a decentralized public goods provision structure will result in an optimal outcome, that is, with the right number of people paying a buck for the new machine.

It certainly is. I think that your conception of the word “rational” implies far more normative overtones than I (or any economist, for that matter) would be willing to be responsible for. There is no normative content whatsoever in behaving “rationally.” Rationality does not mean goodness, morality, ethics, etc.

This is not really relevant. This “living with oneself” afterwards is already taken into account by the individual’s utility function. If he knows that he will feel lousy afterwards for not kicking in even if he is not terribly big on coffee, then this is already part of his decision-making calculus, and consequently, he very well might contribute voluntarily.

Absolutely. And all of this can be modeled. Your decision to contribute or not to contribute can easily be endogenized. The question then becomes, is endogenization worth the effort? Does it lead to new intuitions? Is the analysis tractable?

If you read the introduction to the article whose model I have been shamelessly pirating, you will see that the real world is littered with examples of just this kind of rationality.

Absolutely. Like I said, this can all be modeled. The question formal theorists are always asking themselves is whether additional layers of complexity yield any actual benefits. In practice, sometimes they do, sometimes they don’t.

There appears to be some confusion about the idea of preferences and rationality.

From the linked document:

This is a very misleading statement. The fact is, we usually don’t try to portray people’s tastes. We make as few assumptions as we possibly can about the preferences of individuals. Preferences are private information, and they can almost never be observed. Inducing them from outcomes is tautological, assuming them a priori is problematic, and deducing them from prior theory is useful but very difficult. We sometimes assume that preferences over a particular good follow a certain probability distribution, often the uniform distribution, because we don’t have to assume a mean and a variance parameter and because the mathematics of uniform distributions is really easy. In other words, we assume that preferences for a particular good lie on a continuum, but that we can never truly know the preference of any individual.

Consequently, the pursuit of individual objectives can take many different forms, which can encompass just about all possible behavioral outcomes. People who pay for the coffee pot aren’t behaving irrationally, they are making the decision that the sum of their discounted utility for having coffee is greater than the cost of buying it, regardless of the likelihood that others are free riding.

The point of the model is that, when preferences are distributed probabilistically, it is highly unlikely that exactly the correct number of people will pay to cover the cost of the pot without going under or over. The fact that it is difficult to reach optimal outcomes implies that institutions are needed.
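Just to give a feel for the probabilities involved, here is a crude toy simulation of my own devising (not the authors’ model): twenty coworkers, a pot that needs exactly ten one-dollar contributions, valuations drawn uniformly, and each person chips in only if the coffee is privately worth more than a dollar to them.

[code]
# Crude toy simulation (mine, not the authors' model): how often do voluntary
# contributions exactly cover the cost of the pot?
import random

def one_office(n=20, needed=10):
    """Each of n coworkers contributes $1 only if their private valuation,
    drawn uniformly from [0, 2], exceeds the dollar asked of them."""
    contributions = sum(1 for _ in range(n) if random.uniform(0, 2) > 1)
    return contributions == needed   # exact coverage: no shortfall, no surplus

trials = 100_000
hits = sum(one_office() for _ in range(trials))
print(hits / trials)   # roughly 0.18
[/code]

Even in this best case, with nobody misrepresenting anything, exact coverage happens less than a fifth of the time.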

There is something to be said about the functional form of utility. A bad model’s results will be driven by its functional form. This is something that is apparent to experienced consumers and producers of mathematical models. We try to be as general as possible. For example, imagine a single-dimensional issue space. There is a number line, say, from 0 to 1, which represents everyone’s preferences over a particular policy. Suppose each person has an ideal point, and that these ideal points are distributed along the line probabilistically. A decent formal specification of a utility function would simply decrease in the distance, or perhaps the square of the distance, between a particular policy and an individual’s ideal point. Sure, distance can be measured in many different ways. But assuming a well-behaved functional form is not really a heroic assumption in a good model.
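For instance, a bog-standard quadratic-loss specification (my own sketch, not tied to any particular paper) looks like this:

[code]
# Sketch (mine) of a standard quadratic-loss spatial utility: the farther a
# policy on [0, 1] sits from a voter's ideal point, the worse off the voter is.
def spatial_utility(policy, ideal_point):
    return -(policy - ideal_point) ** 2

print(spatial_utility(0.5, 0.5))   #  0.0  (the voter's ideal outcome)
print(spatial_utility(0.9, 0.5))   # -0.16 (worse the farther away)
[/code]

Swapping in absolute distance rather than squared distance would change the algebra but, in a good model, not the qualitative story.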

Hope this clears up some of the confusion.

Many communities’ zoning laws will not allow for pre-fab housing or modular homes. Nor can you have a “basement house”.

When I was a kid back in the 50s and 60s, I knew kids who lived in “basement houses”. Evidently, it was once legal to build just the foundation of a house, put utilities in it, and a tarpaper roof on it, and live in it that way until you could afford to build the actual house over the top. Can’t do that now–gotta have enough money up-front to build all at once or leave a perfectly good foundation uninhabited until you can build the “real” house.

A completed house with grandfathered-in 60 amp service unsafely “buddy-boxed” to beyond reason and/or worn-out plumbing and/or vermin is rated as superior to a new “godawful- tacky, redneck-looking trailer house” straight from the factory because slumlords help elect Councilmen and Councilmen appoint zoning boards.

Zoning boards rank the desires of the developers, builders, slumlords, and snobs who think that any unconventional solution means a diminution of their precious property values above the needs of the poor.

But in this case, how are these two things different? Is it not true that in the model we are discussing, rational is defined as acquiring more coffee than you pay for?

But again, this depends on your definition of optimal. If we collect money for a new machine and any extra is spent on coffee for the coming weeks, then the only non-optimal outcome is when not enough money is collected. It seems to me that the model has so narrowly defined the optimal outcome as to preclude any but the expected outcome.

I realize that some of the model’s assumptions are taken to make the math easier. But those sorts of assumptions should be taken into account before a pronouncement against private charity’s usefulness is made.

We are going around in circles. I am getting the impression that what I am saying is not really helping your understanding of these issues.

No. I made it very clear above what the definition of rationality is, that is, goal-directed, utility-maximizing behavior with reflexive, transitive, and acyclic preferences. Under these constraints, buying the whole damned machine yourself could be rational behavior. I make no rationality assumptions with respect to the outcome, merely the decision-making process. This is a pretty common stumbling block a lot of people have with rational choice theory. We do not place any normative value on any particular outcome. We only constrain actors to act in ways that they believe maximize their goals. If flushing your money down the toilet maximizes your utility, then knock yourself out. To rational choice theorists, this behavior is perfectly rational.

The authors solve this problem as well, and the results are largely the same. Adding these layers of supposedly realistic complexity does not introduce any new intuition.

But if you don’t take my word for it, do the reasoning for yourself. If you believe that the end result will be overprovision, you are more likely to decline to contribute. Only the people with really strong preferences over the good will contribute, and probabilistically speaking, you are unlikely to find an entire office full of people with preferences strong enough to resist the incentive to misrepresent. Is it possible? Sure. Is there huge selection bias in this case? Definitely. People who work 9-5 office jobs probably tend to have homogeneous preferences over this public good.

I made no pronouncements about the utility of private charity. I do contend that the analytical and the experimental literature makes abundantly clear that decentralized, voluntary public goods provision does not lead to optimal outcomes. Optimal in this case is typically defined by Pareto optimality, an extremely weak condition: a situation is Pareto optimal if it is impossible to make any individual in the system better off without making someone else worse off. That’s it. A very, very limited definition of optimality. We impose as few conditions on outcomes as possible.
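To see just how weak the condition is, here is a little sketch of my own (purely illustrative) that checks Pareto optimality over a finite set of outcomes, each written as a tuple of utilities, one per person:

[code]
# Illustrative sketch (mine): Pareto optimality over a finite set of outcomes.
def pareto_dominates(a, b):
    """a dominates b if nobody is worse off under a and somebody is better off."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_optimal(outcome, feasible):
    return not any(pareto_dominates(other, outcome) for other in feasible)

outcomes = [(3, 1), (2, 2), (1, 3), (1, 1)]
print([o for o in outcomes if pareto_optimal(o, outcomes)])
# [(3, 1), (2, 2), (1, 3)]: only (1, 1) can be improved on for free
[/code]

Three of the four outcomes survive, including the lopsided ones. Pareto optimality says nothing at all about fairness.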

Good models are not driven by peculiar assumptions or the a priori ideological convictions of the authors. The great thing about modeling is that you can get some real traction on big, complicated issues with clean logic, transparent assumptions, and easily testable hypotheses. Without this apparatus, the risk of tautology is enormous. Most of this thread is a testament to the dangers of this kind of analysis. People view some phenomena and jump to all sorts of post hoc conclusions and employ all sorts of wacky tautological causal stories based on their ideology du jour. While modeling is far from perfect and is often only a mediocre predictor of individual human behavior, its ability to explain aggregation is considerable.

We don’t take this stuff on faith. We test it. A typical work in the literature proposes a model, explores its results analytically, generates hypotheses, gathers data, and performs statistical testing to accept or reject the hypotheses. While the articles and books in the literature aren’t always of even quality, the good ones are damned good.

Ha! You need to learn that debating/arguing with Pervert is no different than talking to a wall.

I’m sorry for that. I am learning quite a bit. I think the problem may be that you are misunderstanding my question.

I understand this. I really do. It would be silly to put expected outcomes into the math of the model. The question I am having has more to do with how the probabilistic values are assigned to the various choices the individuals might make. I understood your definition of rational. I’m simply not sure how the “goal oriented” part of it is translated into math.

This is a perfect example of what I am questioning. I agree that if flushing your money down the toilet maximizes your utility, then such an action is perfectly logical. But only if you accept that premise in a total vacuum. While accepting such a premise might shed light on some arcane type of economic activity, I’d argue that you should be very careful before scaling the results of any model based on them back into the real world.

But this is only if you accept certain things about my goals. They may be reasonable, and they may not. I’m only questioning how such assumptions are turned into math.

Like this one.

It sounds reasonable, but is it mathematically true?

I agree completely. I hope I have not given the impression that I disapprove of economic models or am arguing against their utility.

For the most part, they’re not. They are only “assigned,” as it were, if we need to generate numerical examples to illustrate how a model works. I will try to explain how the process of goal orientation is expressed symbolically. This will be mildly technical. At this point, if you are seriously interested in this sort of stuff, I suggest you check out some of the books I mentioned above.

There are two basic steps one has to take to figure out how individuals behave to achieve their goals. There are often many, many intervening steps depending on the complexity of the model. The first step is calculating the individual’s expected utility; the second is solving the individual’s optimization problem.

Expected Utility Calculations

This calculation makes explicit what most people do fairly intuitively, if frequently incorrectly. They multiply the probability that an event will occur by its payoff, and add that to the probability that an event will not occur times its payoff. These calculations can become incredibly complicated.

Here is a trivial but illustrative example. Suppose you and your buddy found a dollar. Neither of you has change, and you both want it. You decide that the fairest way to split it is to flip a fair coin. If it lands heads, you keep the dollar. Your expected utility can be expressed thus:

U_i = Prob(Coin lands heads) * 1 + Prob(Coin lands tails) * 0
U_i = .5 * 1 + .5 * 0
U_i = .5

Your expected utility for finding a dollar in this scenario is fifty cents.

The probability in this case is obvious. It is pretty easy to calculate the probability of a fair coin landing heads or tails. However, solving for these probabilities when all you know (or assume) is their distributions is challenging. Sometimes you don’t even make any assumptions about the probability distributions. This is really hard.
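In code, the bookkeeping is trivial; here is my own toy version of the coin-flip calculation:

[code]
# Toy illustration (mine): expected utility as a probability-weighted sum of
# payoffs over mutually exclusive outcomes.
def expected_utility(outcomes):
    """outcomes is a list of (probability, payoff) pairs whose probabilities sum to 1."""
    return sum(p * payoff for p, payoff in outcomes)

coin_flip = [(0.5, 1.0),   # heads: you keep the dollar
             (0.5, 0.0)]   # tails: your buddy keeps it
print(expected_utility(coin_flip))   # 0.5
[/code]

The hard part, as noted above, is knowing what probabilities to plug in when all you have is an assumed distribution.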

The Optimization Problem

Once we have calculated the expected utility of the ith individual, we can figure out what he would do when confronted with the problem. Since we assume that people in general are rational actors, they act in ways that maximize their utility. In other words, they are goal-oriented. In a teeny tiny nutshell, you solve the optimization problem by taking the partial derivative of the expected utility function with respect to the choice variable, setting the partial derivative equal to zero, and solving. This produces the equilibrium strategy, a best response to the anticipated best responses of the other players in the game, from which no player has any reason to deviate unilaterally.

This is the kernel of formal modeling in this context. You specify a problem, calculate expected utilities, and look for an equilibrium. This is not easy to do.
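As a toy illustration of that last step (my own example, with a made-up concave payoff, not anything from the paper), a computer algebra system will grind through the first-order condition mechanically:

[code]
# Toy illustration (mine): take the derivative of a made-up concave expected
# utility, set it to zero, and solve for the optimal contribution.
import sympy as sp

c, v = sp.symbols("c v", positive=True)         # c: contribution, v: valuation
expected_u = v * c - sp.Rational(1, 2) * c**2   # benefit minus convex cost
first_order = sp.diff(expected_u, c)            # dU/dc = v - c
best_response = sp.solve(sp.Eq(first_order, 0), c)
print(best_response)   # [v]: contribute up to your valuation
[/code]

In a real model the expected utility is far messier than this, which is where the difficulty comes from.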

This is why I spend as much time on econometrics as I do on formal modeling. Half the battle is collecting and analyzing the data to see if the model is contradicted by the hard facts from the real world. If so, then there is probably something wrong with either the model or the data analysis, or even both. Back to the drawing board.

We make no assumptions about the goals of specific individuals. All we have to assume is that they are somehow distributed. We normalize frequently for convenience, but this does not drive the conclusions of the model.

Things aren’t “mathematically true” in any meaningful way in this context. Math is a convenient way to express arguments and to deduce results. I am not sure what you are getting at here.

Once again, I hope this helps.

[John Cleese]

I would attack the unemployed, first by bombarding their homes, and then, when they run out into the streets, mowing them down with machine-gun fire. And then, releasing the vultures.

I realize these views are unpopular, but I never court popularity.

[/John Cleese]

I suppose that’s one way to solve the dynamic optimization problem.

But wouldn’t we each have to have a chunk of vulture meat to make it optimal? :wink: