I’m mostly referring to the segment of the book (where you can click to look inside, I can’t link the passage exactly but it’s a quick find) on Epistemology. The short answer is that all knowledge is fallible and that nothing can be known with true certainty, and then it goes on to try and apply this to economics.
Sounds like old hat to me, but I wondered how people respond to the claim that “we can be wrong”. I mean, science is pretty much based on such a thing, right? But does that invite people to just say anything, if we (and our methods) aren’t perfect?
No, you have to wean yourself off dichotomies (black-and-white thinking). The fact that there are no absolutes doesn’t mean that nothing matters. There is no absolute highest altitude on earth (as you can keep going further into space), but that doesn’t mean that some altitudes aren’t higher than others. There is no perfect-tasting or perfectly healthy food, but evidently some foods are better than others, and which ones varies by individual.
Similarly for knowledge, the issue is how much (and what quality of) evidence there is to support a statement. A lot of what we know actually rests on simply relying on others, but in general that is not a bad policy either. It just shows that you should always be open to the possibility that a statement is false. People who claim the absolute truth of a statement and draw far-reaching conclusions from it without any proviso or doubt are usually bad scientists or unscientific. Good scientists know that everything is based on certain hypotheses and assumptions.
The example I usually give is the Newtonian laws, which are strictly speaking false, even though there was abundant evidence for their truth for a long time. Under relativistic conditions they give incorrect results. For normal day-to-day interactions, however, they are a sufficient approximation, and so they are used all the time by engineers and scientists (except where relativity matters). So we may use ‘false’ statements because they are true as far as daily life is concerned: they approximate the truth, and the deviation is uninteresting in most cases.
If people would be intransigent towards false statements, we would be forced to work with relativistic formulae all the time, which would needlessly complicate our work. Truth is not an absolute good.
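A back-of-the-envelope sketch of how small that deviation actually is (plain Python; the speeds are my own illustrative picks):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_gamma(v):
    """Relativistic correction factor; gamma == 1 means Newton is exact."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# Highway speed: the relativistic correction is utterly negligible.
print(lorentz_gamma(30.0) - 1.0)   # on the order of 1e-15

# 90% of light speed: Newtonian mechanics is badly wrong.
print(lorentz_gamma(0.9 * C))      # about 2.29
```

At everyday speeds the ‘false’ Newtonian answer differs from the relativistic one by parts in a quadrillion, which is why engineers happily ignore the correction.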
There’s a bait-and-switch happening here - selling the premise “knowledge might be wrong” and delivering the conclusion “therefore all claims are equally uncertain”.
Mathematical statements are absolute truths. Unless they aren’t. There are two possibilities for their being false. There might be an error in the proof. It happens. The first proof of the four color theorem stood for over a decade in the late 19th century before a gap was found. Or the axioms might be inconsistent. Russell’s paradox found such an inconsistency in Cantor’s set theory. But in well over 100 years no further one has been found.
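For fun, Russell’s paradox can be mimicked in a few lines of Python (a toy illustration only, not a formal treatment):

```python
# Russell's predicate as a function: R(f) asks
# "is it NOT the case that f applies to itself?"
R = lambda f: not f(f)

# Asking whether R applies to itself can settle on neither True nor
# False; Python just recurses until the interpreter gives up.
try:
    R(R)
    print("got an answer")
except RecursionError:
    print("no consistent answer")
```

The set-theoretic version is the same loop: if the set of all sets that don’t contain themselves contains itself, it doesn’t, and vice versa.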
Makes me think of Isaac Asimov’s “The Relativity of Wrong.”
Some ideas are “more wrong” than others. The idea that the world is flat is “more wrong” than the idea that the earth is a perfect sphere. Both are wrong, but one is way wrong and the other is fairly close to true.
Some things in science – special relativity or the evolution of species – are so close to “absolutely certain” that no meaningful disagreement exists.
There is a level of confidence in knowledge, which can be expressed as a kind of probability. If I drop a pen, there is a very high probability it will fall. If I say the coin I toss will wind up heads, there is only roughly a 50% probability I am right. Saying imperfect knowledge is the same as highly uncertain knowledge is confusing these two probabilities.
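The difference between the two kinds of probability shows up even in a toy simulation (plain Python; the numbers are just for illustration):

```python
import random

random.seed(0)
trials = 100_000

# "My dropped pen will fall": near-certain knowledge; effectively p = 1.
# "My tossed coin will land heads": genuinely 50/50 uncertainty.
heads = sum(random.random() < 0.5 for _ in range(trials))
print(heads / trials)   # hovers around 0.5, nowhere near 1.0
```

Imperfect knowledge about the pen still leaves you with near-100% confidence; the coin leaves you at about 50%. Collapsing the first into the second is the confusion.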
Back 45 years ago there was a lot of work on proving programs correct. Then there was a famous paper examining proofs in the literature (of really complex programs like GCD finders) that found that most of the proofs were faulty.
One of the points was that proofs in math are social activities (both during construction and when the proof is reviewed). Social processes leave lots of room for errors.
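A toy example of the flavor (my own, not from the paper): even a three-line Euclid routine carries an unstated precondition that an informal proof can quietly assume away.

```python
def gcd(a, b):
    # Textbook Euclid -- easy to "prove" correct if you silently
    # assume both inputs are non-negative.
    while b:
        a, b = b, a % b
    return a

print(gcd(12, 18))   # 6, as expected
print(gcd(4, -6))    # -2: the hidden precondition bites
```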
There is usually more than one way to accomplish a task. However, there are still only so many ways to accomplish that task successfully.
There are an infinite number of ways to do the task unsuccessfully.
Say, as an example, that you want to drive a nail into a piece of wood. You can apply force to it in a direction perpendicular to the wood, using a variety of implements like hammers, rocks, nail guns, etc.; in theory, you could use matter transmission to “beam” it into the wood; you could whack the wood onto the nail, reversing the process; and so on for a small, finite, discrete list of options.
But for doing it wrong, you can throw the wood away, you can set a timer of 10 billion years before beginning, you can wish the nail away, you can try to hammer the nail away from the wood, and so on. There’s no end to the list of actions you could take that have no possible chance of succeeding.
(And, yeah, truth is finite but error is infinite. There are infinitely many wrong answers to “What is two plus two?” but only one correct answer. I tried to explain that to my old math teacher, but she still gave me a ‘C’ on the midterm.)
That’s really interesting. Do you happen to know the author or title or some pointer on how to find it? I remember these discussions from when I studied IT in the eighties.
I also recall an article consisting of the proof of correctness of a simple program, and afterwards it was found that the proof itself contained multiple errors. Unfortunately I haven’t been able to find that article either.
That’s what I thought at first, that while it might not be perfect it’s better than before. Like when people thought gods controlled natural phenomena but now we know that’s not the case. But I don’t think that’s what the book (and the area I was talking about) is getting at. He cites Gödel’s incompleteness theorems as an example, I believe.
It is “Social Processes and Proofs of Theorems and Programs,” by De Millo, Lipton and Perlis, CACM, May 1979.
This paper doesn’t seem to be mentioned in the various lectures on program correctness I came up with. I don’t know if that is because automated program provers are so good now or because people have mostly given up.
Smart people are wary of sophistry. It’s one thing to listen to Zeno’s (and Parmenides’) arguments, and it’s quite another to refuse to get out of bed in the morning because you don’t believe in motion anymore.
I find the distinction between the weak and strong conceptions of knowledge interesting in this context: The Value of Knowledge (Stanford Encyclopedia of Philosophy).
Thanks! This is what makes the Dope so good, so many experts being around.
FWIW my professors in the eighties gave the impression that automated proof was a dead end, or at least that was what I took away from my studies. The issue was discussed but they did not research it themselves.
But I believe the idea has seen renewed interest recently. In particular, I believe editors may check for common kinds of errors (which is of course far less ambitious than a complete proof of correctness), and program validation might also help counteract hacking that exploits certain coding errors.
Sorry for the tangent, this topic might merit a thread of its own.
There is a lot of work on coming close to proving programs. I heard a talk from someone at Microsoft about this a while ago.
There is a lot of production hardware verification, especially in areas like equivalence checking - making sure two hardware implementations are equivalent. Hardware is somewhat more constrained than software in terms of environment. Though these days things like power-supply or switching noise can make logically correct circuits fail. I’m reviewing a book on this right now.
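For what it’s worth, the core idea of equivalence checking fits in a few lines if you brute-force it (real tools use BDDs or SAT solvers rather than enumeration; these two “implementations” are made up for illustration):

```python
from itertools import product

# Two candidate implementations of the same 3-input logic function.
def impl_a(a, b, c):
    return (a and b) or (a and c)

def impl_b(a, b, c):
    return a and (b or c)   # distributed form

# Exhaustive check over all input combinations.
equivalent = all(impl_a(*bits) == impl_b(*bits)
                 for bits in product([False, True], repeat=3))
print(equivalent)   # True
```

Exhaustive enumeration only works for tiny circuits, of course, which is exactly why the production tools are nontrivial.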
But to get back to the topic of the thread, proofs have all sorts of built-in assumptions, often not obvious, which make them incorrect.
I was mostly getting at the sample chapter in the link which I think gets into the matter enough.
It does seem to follow the less wrong model though. That we could be wrong and that we can’t even prove that a material universe exists but we carry on as though it does because…well what else is there?
A scientist should have the mental attitude that “based on the available evidence, it seems that x is true”, realizing that our knowledge is almost always imperfect. From that foundational position, the scientific community gathers more information and modifies its understanding of “x”. As this continues, factual truth is refined and built upon. So the newest scientific knowledge is normally the least certain, while older knowledge is more secure, having been examined and refined as new evidence accumulated.
However, a cautionary note: humans have a tendency to build larger ideas and social structures on what they think, or want, to be true. This can lead to disastrous results. An example is racism - the idea that one “race” of people is superior to other “races” and should therefore derive benefits from its superiority. My personal hero, Jacob Bronowski, in his Ascent of Man book and BBC series, presented the Nazis’ treatment of the Jews as an extreme result of racism, with the warning that it could happen anywhere if we apply what we believe to be true inappropriately.