Is a real predictive social/political science possible?

My brother the engineer tells me that engineering, in practice, is largely a matter of policy decisions. E.g., the Navy wants a fighter jet that can go twice as fast as anything they’ve got now. The engineers answer: Sure, we can do that, no big conceptual breakthroughs are necessary, we’ll just make the engine bigger. But that will also make it heavier. We’ll need to reconfigure the landing gear. It will also reduce the fuel efficiency, which will reduce the craft’s range, the number of air miles it can fly before refueling. We can make the fuel tanks bigger, but that means even more weight . . . So just what other advantages are you willing to trade off for all that speed? Policy decisions.

What struck me about that illustration is that the engineers, generally, know what they’re doing. They don’t have to build the plane half a dozen different ways and test each prototype. They know (almost) exactly what will happen if they change a given design feature. It’s all subject to precise calculations.

Whereas when politicians, the public policy makers for society, make “policy decisions,” they not only have to wrangle over basic disagreements about values and principles and justice, they really have no such certain way of knowing what the effects of any particular policy will be. Too many x-factors. “Law of unintended consequences.”

Will it ever be possible to develop political science and sociology to the point where policy formers will be able to rely on them the way engineers can rely on physics and chemistry?

Will we ever have something like Isaac Asimov’s “psychohistory” that can predict the course of human history in its broad outlines? http://en.wikipedia.org/wiki/Psychohistory_(fictional)

Marxists used to think they had something like that – that was one of the driving factors behind socialist politics for the past 150 years and more, the idea that socialism had the wind of history in its sails, that socialism was inevitable, because Marx had worked out on paper how fully developed capitalism was doomed to self-destruct through its internal contradictions. That shows the folly of basing a "scientific" system on something as nonscientific as Hegelian idealism.

But then, just because alchemy was mostly bullshit does not mean chemistry is invalid. Can we ever get a real, predictive science of society?

I don't mean something that replaces or rules out free will. Example: In S.M. Stirling's "The General" SF series – http://en.wikipedia.org/wiki/The_General_series – the hero, General Raj Whitehall, is in permanent contact with an unimaginably powerful computer (originally a traffic-control computer!) left over from the ruined interstellar civilization Whitehall is trying to revive. Central sees through Whitehall's eyes and hears through his ears. At any point where Whitehall needs to make a tactical or strategic or even a political decision, Central can tell/show him exactly what the results will most likely be if he does A as opposed to the results if he does B, and can state precise odds with a specified margin of error. No outcome is 100% certain, of course. Having been so informed, Whitehall is still free to make what might seem, in light of that knowledge, to be a bad choice, and accept the consequences (but he never does so, WRT anything important).

Something like that. But without the black-box technology.

There was a book, actually a series of books, on this. Basically, societal trends could be predicted exactly, unless acted on by a totally unpredictable event (and you guessed it, they had one of those events). The title was "(something which I forget) and Empire". The something, I think, is along the lines of "institution", but that's not it; some alternate name for a scientific institution.

I don't know if you're joking or not, but that sounds like Foundation and Empire, one of the books the OP referred to, i.e. Isaac Asimov's "psychohistory". It's called the Foundation series.

The biggest problem with political decisions is that they are always done from scratch.
People really don’t like to consider what’s already been worked out elsewhere.
And I've seen this happen on the most local levels. One city councilman will say "Why not collect garbage every two weeks instead of every week?" Then everyone will chime in with opinions about whether that's even possible, and how it would create endless problems and disease and rats, all the while ignoring the 28 towns within 30 miles that have varying collection schedules. What I'm saying is that in politics facts do not accumulate like in science: people always want to start over, assuming there can't be a known answer you could just look up.

While your brother's description of engineering is fairly accurate, there are many big differences between that and predicting human behavior. Let's use your plane example and roll back the date to the late 1930s or early 1940s. Your engineer can mix and match various components and come up with a plane design. What he can't do is say that in 5 years we will have these jet engines that can easily meet and surpass your requirements. Almost all the decisions and predictions your engineer makes are based on current understanding, not on what we will know in the next 5 years. If we were trying to predict human behavior in the United States in mid-2001, all our predictions would be completely wrong, because they were based on the facts prior to 2001.

The next biggest difference in these cases is that engineers are able to do repeatable experiments. We can set up an experiment to, say, study airflow around a wing, and we can repeat it without significantly changing the wing. Thus we can accurately measure and determine the performance of this wing under different conditions. This allows us to make predictions of performance based on given conditions. We are unable to do this with humans, because any experiment changes whomever we are studying. Let's say we are trying to figure out what effect a horrific accident has on a person. Once we expose that person to one horrific accident, our test subject is irrevocably changed. We can't set up an experiment to, say, see whether losing a brother is worse than losing a parent. All we can rely upon is studies, and those do not provide us with the same level of information as experiments do.

Even if we had perfect information, we would still have to deal with the enormity and complexity of the factors that go into determining human behavior. In the airplane example there are only a few factors that go into determining aircraft performance (drag, lift, engine power, weight, and so on), and a small error in one of those, while it does bring error into our other calculations, has only a minor effect. Human behavior is controlled by probably millions and millions of factors. Think of how much of an effect a mis-set alarm could have on someone's life. It could cost someone a job, and at that job they might have met their future spouse, or gotten a promotion, or been injured in an accident, or any of a million other things. That's just one small factor in everyday life. It could be an alarm, it could be an animal making noise, it could be slipping on ice on the sidewalk, or anything else you can imagine. Now, that's just one person. There are 6 billion people in the world, and this effect holds true for each of them.

The last thing you have to take into account is that these predictions will affect human behavior. Let's say we run a simulation that shows I will get into a car accident and die on my way to work this morning. But wait, you say, we can include that knowledge in our prediction. However, you still have to arrive at a final prediction at some point, and that prediction will affect the world. The prediction is inherently inaccurate because what it predicts affects reality.
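That self-undermining loop is easy to see in a toy model. The sketch below is purely my own illustration (the driver count, the 600-car congestion threshold, and the "half the drivers believe the forecast and switch routes" rule are all invented numbers): once the published congestion forecast changes how many drivers take route A, the forecast keeps chasing an outcome it is itself moving, and never settles.

#include <stdio.h>

/* Toy illustration of a published prediction undermining itself.
   Assume 1000 drivers; route A is congested if more than 600 take it.
   Drivers hear the published congestion forecast for A and switch away
   in proportion to it. All numbers here are invented for illustration. */
int main(void)
{
    double drivers  = 1000.0;
    double forecast = 0.9;   /* published: "90% chance route A is congested" */

    for (int round = 0; round < 5; round++) {
        double on_a   = drivers * (1.0 - 0.5 * forecast); /* some drivers switch away */
        int congested = on_a > 600.0;                     /* did congestion actually happen? */
        printf("round %d: forecast %.2f, drivers on A %.0f, congested: %s\n",
               round, forecast, on_a, congested ? "yes" : "no");
        forecast = congested ? 1.0 : 0.0;  /* next forecast reacts to what happened */
    }
    return 0;
}

Run it and the forecast just oscillates: predicting congestion empties the route, predicting a clear road refills it, which is the poster's point in miniature.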

Even in engineering, I would have to assume it is not so clear-cut. I can tell you that, for certain implementations of a program, one way will run faster but use more memory, or be friendlier to later additions but run like a pig. But still, every program I ever make is going to depend on how many exceptions I can foresee and guard against. That is going to be a product of the skill of the engineer, his knowledge of the system he is working on, how much time he is given, how many test hardware systems he can receive or make to verify everything, etc.
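For what it's worth, here is a minimal sketch of that speed-versus-memory trade-off (the choice of Fibonacci and all the names are my own illustration, not anything from a real project): the same quantity computed once with no extra storage, and once with a lookup table that costs memory but makes the call nearly instant.

#include <stdio.h>

/* Slow: recomputes every subproblem, uses almost no memory. */
static unsigned long long fib_slow(int n)
{
    return n < 2 ? (unsigned long long)n : fib_slow(n - 1) + fib_slow(n - 2);
}

/* Fast: trades memory (a memo table) for speed. */
static unsigned long long fib_fast(int n)
{
    static unsigned long long memo[94];  /* fib(93) is the largest that fits */
    if (n < 2)
        return (unsigned long long)n;
    if (memo[n] == 0)
        memo[n] = fib_fast(n - 1) + fib_fast(n - 2);
    return memo[n];
}

int main(void)
{
    printf("%llu\n", fib_slow(40));  /* noticeably slower: hundreds of millions of calls */
    printf("%llu\n", fib_fast(90));  /* effectively instant, but carries a table around */
    return 0;
}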

We may have a better understanding of what happens to light entering a black hole than we do of what's going to happen if you raise interest rates at the Federal Reserve – but we're getting better at it.

You may not be able to predict exactly what is going to happen, but the more forethought you put into something, the better you can make it at dealing with a larger array of unforeseen eventualities.

For instance, I had a ginormous database application + interface I had to build. One of the things I put into it was that every hour the database would be scanned and any impossible entries would be flagged for deletion. If everything was coded correctly, this code would never delete any entries in the database, but the thing needed to be able to run for 20 years without error, and over that time frame there is plenty of chance for junk data to collect and eventually fill the hard drive. And since it was only a flag, which didn't cause actual deletion for two weeks, and the directions were to back up the database once a week, if data suddenly disappeared you could go in and manually fix the DB.
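A rough sketch of that flag-then-sweep pattern, in plain C rather than whatever that application actually used (the struct, the field names, the "negative value is impossible" check, and the two-week constant are all my own illustrative assumptions): a periodic pass flags suspect entries with a timestamp, and a later pass deletes only entries that have stayed flagged past the grace period, which leaves time to restore from the weekly backup if the sanity check itself turns out to be wrong.

#include <stdio.h>
#include <time.h>

#define GRACE_PERIOD (14 * 24 * 60 * 60)  /* two weeks, in seconds */

struct entry {
    int    value;       /* some field with a known valid range */
    time_t flagged_at;  /* 0 = not flagged */
    int    deleted;     /* soft-delete marker */
};

/* Hypothetical sanity check: here, a negative value is "impossible". */
static int is_impossible(const struct entry *e)
{
    return e->value < 0;
}

/* Run periodically (e.g., hourly): flag bad entries now, and delete
   only the ones that have stayed flagged for the whole grace period. */
static void sweep(struct entry *entries, size_t n, time_t now)
{
    for (size_t i = 0; i < n; i++) {
        struct entry *e = &entries[i];
        if (e->deleted)
            continue;
        if (e->flagged_at == 0 && is_impossible(e))
            e->flagged_at = now;                          /* flag, don't delete yet */
        else if (e->flagged_at != 0 && now - e->flagged_at >= GRACE_PERIOD)
            e->deleted = 1;                               /* grace period expired */
    }
}

int main(void)
{
    struct entry db[3] = { { 42, 0, 0 }, { -1, 0, 0 }, { 7, 0, 0 } };
    time_t now = time(NULL);

    sweep(db, 3, now);                     /* first pass: flags db[1] */
    sweep(db, 3, now + GRACE_PERIOD + 1);  /* two weeks later: deletes it */

    for (int i = 0; i < 3; i++)
        printf("entry %d: value=%d flagged=%ld deleted=%d\n",
               i, db[i].value, (long)db[i].flagged_at, db[i].deleted);
    return 0;
}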

Law and economics may not be so cut and dried, but we certainly aren’t shooting in the dark. The real question is just whether or not our policy-makers are putting in triple-redundancy when they do their thing.


int main(void) {  /* The only program with no bugs */
        return 0;
}

I always called it the Foundation Trilogy. And you forgot to add that it's fiction! :slight_smile:

You’re starting from the assumption that programming is engineering, which is an assertion that many take issue with for the exact reasons you just listed.

I would personally vote that engineering is very similar, just that the physical nature of it makes it less arcane to deal with. You want to build a bridge and you need to know length, height, gravity, stresses, max windspeed, material expansion, etc. And once you’ve got those all figured out, you’re probably good to go.

But there is a bug.

God, the hacker, cracks into the system and doubles gravity (the bastard!), and your bridge collapses. You have totally not taken that eventuality into account, have you? Ergo, you have a bug.

Every possible insane thing you could ever do to something that is beyond the expectations of the builder is a bug. This is the same in engineering as in programming. But computers are a much more variable platform, and their users are more keen on seeing them undone. Plus, whereas in building a bridge, if I've got a long hunk of steel I've got a pretty good building unit that is simple, direct, and easy to put to use, programmers just have words. Try describing, in text and off the top of your head, every gear, tube, material, spring, and doohickey that would need to be made, plus how to assemble them all, to result in a BMW M5 – at a level of detail where no illustrations are needed.

Simply put, the medium is a lot more difficult, so obviously there is going to be a greater chance of errors. But if you make a very complex machine, say a space shuttle, that is so complex you must split the design and creation among small groups of developers, and that needs to be so well made as to withstand everything God could throw at it – well, even engineering will start to show bugs that are a lot more likely to come into effect than gravity doubling under a certain bridge.

Humans interact. Humans respond to perceptions. Humans try to gain advantage.

In other words, the system adapts to changes. Any predictive social theory will have to account for the assimilation of itself, among the actors.

What Gyan said.

In an engineering problem the rules of physics and properties of materials don’t alter themselves to take advantage of and spite the engineers, no matter how much it may seem otherwise.

In other words, sensitive dependence on initial conditions.

This might preclude the predictive powers of sociology from ever becoming as accurate as aerospace design, but does NOT preclude it from eventually becoming as accurately predictive as meteorology is today.
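For anyone who hasn't seen "sensitive dependence" in action, here is a toy demonstration of my own (the logistic map is a standard textbook chaotic system; nothing here comes from the thread): two runs that start one part in a billion apart agree for a few dozen steps and then have nothing to do with each other, which is essentially why forecasts of chaotic systems degrade past a short horizon.

#include <stdio.h>

/* Logistic map x -> r*x*(1-x) in its chaotic regime (r = 4.0). */
static double step(double x)
{
    return 4.0 * x * (1.0 - x);
}

int main(void)
{
    double a = 0.300000000;   /* the "true" initial condition            */
    double b = 0.300000001;   /* measurement off by one part in a billion */

    for (int i = 0; i <= 50; i += 10) {
        printf("step %2d: a=%.6f b=%.6f diff=%.6f\n", i, a, b, a - b);
        for (int j = 0; j < 10; j++) {  /* advance both trajectories 10 steps */
            a = step(a);
            b = step(b);
        }
    }
    return 0;
}

By around step 30 or 40 the tiny initial error has grown to the full size of the variable itself, and the two trajectories are effectively unrelated.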

*"Engineering is the art of modelling materials we do not wholly understand into shapes we cannot precisely analyse to resist loads we cannot properly assess in such a way that the community has no reason to suspect the extent of our ignorance."*

  • President, Scottish Branch - Institute of Engineering UK, 1946

Don't let 'em kid you… we know a LOT, but we don't know everything :wink:

In Asimov's Foundation series, Hari Seldon got around that by stipulating that the population whose behavior is being modeled must remain ignorant of the psychohistorical predictions – http://en.wikipedia.org/wiki/Psychohistory_(fictional)

Which I think is a cheat, and an impractical assumption besides. Secrets cannot really be kept, especially in science. Any predictive theory would have to account for how people will act even if the prediction is publicized.

AIUI, meteorologists can make accurate predictions – but only a few days in advance; beyond that you run up against the effects of that ol’ chaos butterfly. Too many unpredictable factors. A predictive social science, to be useful, would have to be applicable over a somewhat longer time-scale.

That brings up another point: One of the most important factors in human events is technological progress. In fact, it can change the fundamental conditions of human existence, and often has. As Marx would have been the first to agree – technological progress was necessary to the industrial revolution and the emergence of industrial capitalism – none of which the wisest savant of the Sixteenth Century could have predicted. Technological progress is unpredictable, even to engineers and scientists. We can speculate (and science-fiction writers do) about how a computer-brain interface, nanotechnology, or strong artificial intelligence will change society – but not only do we have no way of knowing how accurately predictive our speculations are, we have no way of knowing whether those technologies ever will be developed to the point of practical application. Would Marx have made different predictions about the future of capitalism, if he could have foreseen the emergence of electric power, telephones, radio, automobiles, airplanes, penicillin, birth-control pills, personal computers and the Internet? Maybe; but there is simply no way he or anyone of his time could have foreseen them, except on a pure guesswork/wishful-thinking level.

I wonder if the pioneers of social engineering are currently in marketing rather than in psychology or sociology. Those in marketing have an awesome ability to measure and manipulate the masses.

Imagine if a cadre of engineers infiltrated Madison Ave. for a five-year study, then came back and refined marketing practices into an engineering discipline. Why, they'd rule the world!

Perhaps social engineering models can measure the forces that make such technological breakthroughs marketing gold mines. In other words, conditions are right for such and such a need, there is an x percent chance that some companies will realize that and spend resources to meet the need, with a y percent chance of success over z years (a toy sketch of that calculation follows below). The chaos of all this compounded uncertainty may be reined in using probability wave functions lifted from quantum mechanics.

They do teach quantum theory in Marketing 101, no?
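Quantum mechanics aside, the x/y/z idea sketched above is easy to turn into a back-of-the-envelope Monte Carlo estimate. The figures here (30% chance per year the need is spotted, 50% chance of success, five-year horizon) are invented purely for illustration: simulate many possible futures and count in how many of them the need gets met within the horizon.

#include <stdio.h>
#include <stdlib.h>

#define TRIALS 100000

int main(void)
{
    double x = 0.30;  /* chance per year that some company spots the need */
    double y = 0.50;  /* chance a company that tries actually succeeds    */
    int    z = 5;     /* time horizon in years                            */
    int    met = 0;

    srand(42);
    for (int t = 0; t < TRIALS; t++) {
        for (int year = 0; year < z; year++) {
            double spot = (double)rand() / RAND_MAX;
            double win  = (double)rand() / RAND_MAX;
            if (spot < x && win < y) { met++; break; }  /* need met this year */
        }
    }
    printf("Estimated chance the need is met within %d years: %.1f%%\n",
           z, 100.0 * met / TRIALS);
    return 0;
}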

Ask an MBA. Market analysts can predict a market for a given new technology, certainly. What they cannot predict is whether a given new technology is possible. And, in many cases, neither can engineers.

Cultural geographers (and related academics like anthropologists) have tried various ways of predicting human behavior over the years. Often they will use ecology's ecosystem model (populations, energy flows, resilience to disturbances, etc.) as a rough analogue - an approach many call "cultural ecology". Especially during the "quantitative revolution" of the 1960s and 70s, many turned to systems theory and Monte Carlo-type simulations. Now, few geographers think that such models have much explanatory (let alone predictive) power, but they still think in terms of linkages and multi-scale systems, if not as quantitatively. An exception would be those working with "agent-based modeling", which tries to predict land-use change in rural areas based on each farmer's institutional and economic linkages, etc. (a toy sketch of the general idea follows below). But most are going in a different direction, exploring how, say, local village leaders can actively (and unpredictably) subvert these structures.
The short answer would be that prediction is too strong a word, but that cultures are often usefully analyzed in ways that borrow ideas (occasionally inappropriately) from more predictive sciences.
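To give a flavor of what "agent-based modeling" means in practice, here is a deliberately toy sketch (the ring of 20 farmers, the rising price signal, the peer-influence weight, and the thresholds are all my own made-up simplifications, not any model geographers actually use): each farmer decides each year whether to convert their land based on a price signal, what their neighbors did last year, and some individual randomness, and conversion spreads through the population over time.

#include <stdio.h>
#include <stdlib.h>

#define N_FARMERS 20
#define YEARS     10

/* Each farmer's land use: 0 = traditional crop, 1 = converted to cash crop. */
int main(void)
{
    int use[N_FARMERS] = {0};
    int next[N_FARMERS];

    srand(7);
    use[0] = 1;  /* one early adopter */

    for (int year = 1; year <= YEARS; year++) {
        double price = 0.5 + 0.1 * year;   /* made-up rising price signal */
        for (int i = 0; i < N_FARMERS; i++) {
            /* Count converted neighbors on a ring (wrap at the edges). */
            int left  = use[(i + N_FARMERS - 1) % N_FARMERS];
            int right = use[(i + 1) % N_FARMERS];
            double social = 0.2 * (left + right);        /* peer influence        */
            double noise  = (double)rand() / RAND_MAX;   /* idiosyncratic factors */
            /* Convert when the combined pull beats this farmer's own threshold. */
            next[i] = use[i] || (price + social > 0.8 + 0.4 * noise);
        }
        int converted = 0;
        for (int i = 0; i < N_FARMERS; i++) {
            use[i] = next[i];
            converted += use[i];
        }
        printf("year %2d: %2d of %d farmers converted\n",
               year, converted, N_FARMERS);
    }
    return 0;
}

Real agent-based models wire in far richer institutional and economic linkages, but the structure is the same: individual decision rules, aggregated up to a landscape-level pattern that nobody coded in directly.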