So here’s what bugs me about the scientific method. In the most recent AI results, the best outcomes came from artificial systems that don’t start with pre-existing, human-supplied assumptions.
To me, “proposing a hypothesis” looks like injecting noisy, statistically-likely-to-be-false information into the process.
Suppose you are investigating an animal in a box. You can only measure parameters about it indirectly.
Eventually you determine that it’s a large land animal. Someone has felt a tiny patch of skin, but they don’t know where on the animal it was.
Scientist 1 : “My hypothesis is that it’s an elephant!”
Scientist 2 : “My hypothesis is that it’s a hippopotamus!”
And now you have a senseless ego match, and breathless headlines announce that an elephant was found in the box, when it will later turn out to have been a tiger.
The alternative approach I am suggesting is to treat each measurement as a constraint on the probability distribution over what could be in the box. Each subsequent measurement narrows the set of what it could be.
So a scientific paper could be “based on the evidence, the likely remaining possibilities are an elephant, a tiger, a hippopotamus, or a very short giraffe in that box. This set of experiments is designed to reduce the set of possibilities to 2 by determining if the enclosed animal has fur…”
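The constraint-narrowing idea above can be sketched in a few lines of Python. This is a toy illustration, not a real inference method: the candidate animals, the “measurements,” and which animals satisfy them are all made up for the example.

```python
# Toy sketch of measurement-as-constraint reasoning. Each measurement
# is a predicate; observing its result prunes the candidate set.

candidates = {"elephant", "tiger", "hippopotamus", "short giraffe"}

# Hypothetical facts for illustration: which candidates would be
# consistent with a positive result for each measurement.
has_fur = {"tiger", "short giraffe"}

def constrain(possibilities, consistent_with, observed):
    """Keep only candidates consistent with the observed result."""
    if observed:
        return possibilities & consistent_with
    return possibilities - consistent_with

# The fur experiment from the example paper: a positive result
# reduces the four possibilities to two.
remaining = constrain(candidates, has_fur, observed=True)
print(sorted(remaining))  # ['short giraffe', 'tiger']
```

Each experiment is then chosen not to confirm a favorite candidate, but to cut the remaining set as much as possible, which is the point of the hypothetical paper above.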
I thought of this when hearing about “dark matter.” The evidence does not in any remote way establish that the phenomenon causing the observations is in fact a form of matter. Right now it could be a large set of things. Advancing the hypothesis that the culprit is a form of matter with never-before-seen properties is just hubris.