My brother the engineer tells me that engineering, in practice, is largely a matter of policy decisions. E.g., the Navy wants a fighter jet that can go twice as fast as anything they’ve got now. The engineers answer: Sure, we can do that, no big conceptual breakthroughs are necessary, we’ll just make the engine bigger. But that will also make it heavier. We’ll need to reconfigure the landing gear. It will also reduce the fuel efficiency, which will reduce the craft’s range, the number of air miles it can fly before refueling. We can make the fuel tanks bigger, but that means even more weight . . . So just what other advantages are you willing to trade off for all that speed? Policy decisions.
What struck me about that illustration is that the engineers, generally, know what they’re doing. They don’t have to build the plane half a dozen different ways and test each prototype. They know (almost) exactly what will happen if they change a given design feature. It’s all subject to precise calculations.
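To make that concrete: aircraft range really is something you can compute on paper, via the (real) Breguet range equation. Here's a toy sketch in Python; the numbers are entirely made up, and a working engineer would use far more detailed models, but it shows how speed, fuel consumption, and weight trade off against range before a single prototype gets built:

```python
import math

def breguet_range_km(velocity_kmh, tsfc_per_h, lift_to_drag, w_start, w_end):
    """Breguet range equation for a jet in cruise:
    R = (V / c) * (L/D) * ln(W_start / W_end)
    V: cruise speed, c: thrust-specific fuel consumption,
    L/D: lift-to-drag ratio, W: gross weight at start/end of cruise."""
    return (velocity_kmh / tsfc_per_h) * lift_to_drag * math.log(w_start / w_end)

# Invented numbers, purely to illustrate the trade-off:
base = breguet_range_km(900, 0.8, 15, 20000, 14000)

# Double the speed with a bigger, thirstier engine: higher V, but worse
# fuel consumption, worse L/D, and extra structure/engine weight eating
# into the usable fuel fraction.
fast = breguet_range_km(1800, 1.4, 12, 22000, 16500)

print(f"baseline range: {base:.0f} km")
print(f"hot-rod range:  {fast:.0f} km")  # faster jet, noticeably shorter legs
```

Turn one knob and the others move, predictably. That predictability is the whole point.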
Whereas when politicians, the public policy makers for society, make “policy decisions,” they not only have to wrangle over basic disagreements about values and principles and justice, they really have no such certain way of knowing what the effects of any particular policy will be. Too many x-factors. “Law of unintended consequences.”
Will it ever be possible to develop political science and sociology to the point where policy makers will be able to rely on them the way engineers can rely on physics and chemistry?
Will we ever have something like Isaac Asimov’s “psychohistory” that can predict the course of human history in its broad outlines? http://en.wikipedia.org/wiki/Psychohistory_(fictional)
Marxists used to think they had something like that – that was one of the driving factors behind socialist politics for the past 150 years and more, the idea that socialism had the wind of history in its sails, that socialism was inevitable, because Marx had worked out on paper how fully developed capitalism was doomed to self-destruct through its internal contradictions. That shows the folly of basing a “scientific” system on something as nonscientific as Hegelian idealism.
But then, just because alchemy was mostly bullshit does not mean chemistry is invalid. Can we ever get a real, predictive science of society?
I don’t mean something that replaces or rules out free will. Example: In S.M. Stirling’s “The General” SF series – http://en.wikipedia.org/wiki/The_General_series – the hero, General Raj Whitehall, is in permanent contact with an unimaginably powerful computer (originally a traffic-control computer!) left over from the ruined interstellar civilization Whitehall is trying to revive. Center sees through Whitehall’s eyes and hears through his ears. At any point where Whitehall needs to make a tactical or strategic or even a political decision, Center can tell/show him exactly what the results will most likely be if he does A as opposed to the results if he does B, and can state precise odds with a specified margin of error. No outcome is 100% certain, of course. Having been so informed, Whitehall is still free to make what might seem, in light of that knowledge, to be a bad choice, and accept the consequences (but he never does so, WRT anything important).
Something like that. But without the black-box technology.
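Stripped of the fiction, what the computer is doing is roughly what forecasters call Monte Carlo simulation: run a model of the situation many times with random variation, then report the odds for each option along with a margin of error. Here's a minimal sketch; the attrition "model" here is a crude stand-in I invented for illustration, and the hard part in real life is having a model of society worth simulating at all:

```python
import random

def battle_model(our_troops, enemy_troops, our_skill, enemy_skill):
    """One randomized run of a crude attrition model (a stand-in for
    whatever model the computer would actually have)."""
    ours, theirs = our_troops, enemy_troops
    while ours > 0 and theirs > 0:
        theirs -= random.random() * our_skill * ours * 0.01
        ours -= random.random() * enemy_skill * theirs * 0.01
    return ours > 0  # did we win this run?

def odds(params, trials=5000):
    """Estimate win probability plus a 95% margin of error by brute
    repetition (normal approximation for a proportion)."""
    wins = sum(battle_model(*params) for _ in range(trials))
    p = wins / trials
    moe = 1.96 * (p * (1 - p) / trials) ** 0.5
    return p, moe

# Invented scenarios: A = attack now; B = wait (enemy reinforces, we drill)
for name, params in [("A: attack now", (1000, 900, 1.0, 1.0)),
                     ("B: wait",       (1000, 1100, 1.3, 1.0))]:
    p, moe = odds(params)
    print(f"{name}: {p:.1%} chance of victory, +/- {moe:.1%}")
```

The margin of error on the odds is the easy part – you get that for free from repetition. The model itself is the part nobody knows how to build for whole societies, which is the question.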