Will science ever reach a “saturation point”?

I didn’t say we can understand everything. I said that we can understand an infinite number of things. But that is still infinitely less than “everything”.

I know there’s an infinite number of worthwhile things to discover because that’s the nature of discovery. Once you discover something new, that discovery leads to further discoveries. Plus, you get more and more ways to combine the discovery with earlier discoveries. The number of things to discover only goes up.

Consider poems or novels. Do you think that there will be a certain point where all possible novels have already been written, and it is no longer possible to write a new novel?

Why is the inability to have two human players execute the absolute most flawlessly perfect game of chess a failure, in your mind? Isn’t knowing the rules completely sufficient to lay claim to “understanding” the game? If I were someone who knew nothing about chess, but was able to discover the rules through observation and experimentation (say, I had a computer chess game that gave me no feedback beyond whether I had made a legal or illegal move), could I not say I know all there is to know about chess fundamentals? Isn’t, then, the process of becoming a skilled chess player just applying those basic rules to ever more complex situations? And is that not what we do all the time when we discover, and then attempt to practically apply, the rules of nature? A reductionist approach, as I said before, is not about discovering everything there is to know. It’s the opposite, in fact. It’s an approach taken with the hope of finding relatively simple explanations for a complex array of seemingly disparate phenomena. So what if that can only get you “so far”? Again, this line of reasoning seems to be founded on an assumption that we must know everything in order to understand. I simply cannot agree with that position without some demonstration of why it’s necessary, or even desirable, in the pursuit of science (vs., say, the practical application of scientific principles in the development of technology).

Because a lot of what science wants to solve ultimately boils down to “What makes a good game of chess?” Fluid dynamics, cognition, weather, social networks, visualisation, economic modelling, etc.

And you feel these problems are necessarily intractable due to complexity?

To expand, do you think approximate tractability is no substitute for god-like precision?

I’m saying that we can perfectly know the rules that a system operates under, yet still not be able to extract certain properties of the system to our satisfaction. We know, to a high degree of accuracy, exactly how the neurons in our brain work, yet nobody has come close to gaining an understanding of how vision works, for example.

We know exactly how fluids behave, yet that hasn’t helped much in building more efficient airplane wings or submarines. The problem is not that a reductionist approach can only get you “so far” and “so far” is good enough; it’s that the reductionist approach, in many instances, gets us nowhere. What the reductionist approach fails to capture is that systems have emergent properties that only arise from the interaction between smaller subsystems.

To be a good chess player, it’s not sufficient to simply learn the rules of chess; it requires an enormous amount of effort to be able to see how the rules interact with each other in complex situations and what the consequences of your actions are. Similarly, to be a good biologist, it’s not sufficient to simply understand the basic rules of genetics and adaptation.

To make this more relevant, let me give you an example from my current research. Assume you take a photograph of a scene and you know that certain points on your photograph correspond to certain 3D locations in the scene. Based on this information, it should be possible to determine where your camera is located through simple geometry. However, once you assume realistic noise models and so forth, nobody has yet figured out a really good way to determine your camera position under a suitably diverse set of circumstances. And this problem has been worked on, on and off, for 150 years.
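
For the noise-free case, the “simple geometry” really is simple: it’s the classical Direct Linear Transform. Here’s a minimal Python sketch, with a made-up synthetic camera and point set, just to show the shape of the computation; the hard part is what happens once real noise enters:

```python
import numpy as np

def dlt_pose(points_3d, points_2d):
    """Estimate the 3x4 camera projection matrix from >= 6 noise-free
    2D-3D point correspondences via the Direct Linear Transform."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        Xh = [X, Y, Z, 1.0]
        # Each correspondence contributes two linear equations in the
        # 12 entries of the projection matrix P.
        rows.append([0.0] * 4 + [-c for c in Xh] + [v * c for c in Xh])
        rows.append(Xh + [0.0] * 4 + [-u * c for c in Xh])
    # P is the right singular vector with the smallest singular value,
    # reshaped to 3x4 (it is only defined up to scale).
    _, _, Vt = np.linalg.svd(np.array(rows))
    return Vt[-1].reshape(3, 4)

def project(P, point_3d):
    """Project a 3D point through P and return image coordinates."""
    x = P @ np.append(point_3d, 1.0)
    return x[:2] / x[2]
```

With exact correspondences this recovers the camera up to scale; with noisy ones you need least-squares refinement, robust estimators such as RANSAC, and so on, which is where the 150 years of work come in.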

Another problem I’m looking at is how to recognise the same object under different lighting. It turns out to be an incredibly difficult problem, since the raw colour values change dramatically. We understand the physics of materials and how they are affected by light, but we still haven’t begun to solve this problem anywhere near adequately.
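
A toy diagonal-illuminant model makes it easy to see why the raw values shift so much, and why simple corrections only work in toy settings. A Python sketch, with invented reflectance and illuminant numbers:

```python
import numpy as np

# Toy model: a pixel's raw RGB is the surface reflectance scaled
# per-channel by the illuminant (a diagonal, "von Kries" model).
reflectances = np.array([[0.8, 0.2, 0.1],   # reddish patch
                         [0.1, 0.7, 0.3],   # greenish patch
                         [0.3, 0.3, 0.9]])  # bluish patch

def render(reflectances, illuminant):
    """Raw camera values for the patches under a given illuminant."""
    return reflectances * illuminant

def grey_world_correct(image):
    """Grey-world correction: assume the average scene colour is
    achromatic, so dividing by per-channel means cancels the gains."""
    return image / image.mean(axis=0)
```

Under this toy model, `grey_world_correct(render(R, L))` comes out the same for every illuminant `L`, so recognition becomes trivial. Real scenes violate both the diagonal model and the grey-world assumption, which is why the problem stays hard.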

I back Shalmanese’s overall approach here. That there is a perfect chess game (or games) is a mathematical fact, but we are incapable of knowing what it is; nor could a digital computer (even if all particles in the universe were converted to processors and allowed to churn until the end of time) conceivably calculate it. Maybe a quantum computer will one day do it, but for now it is simply unknowable.

We know that certain things are unknowable. We may one day find that these pockets of ignorance prevent us from making further leaps in scientific progress.

That’s one way we could reach “saturation.” The other way is the situation in which we are able to gain knowledge, make leaps, but simply are unable to coordinate and manage all our knowledge.

Both of these have been mentioned, of course, but I think they are two distinct problems. I’d like to talk more about the latter problem.

As has been mentioned, the issues of theory and of the results of applying theory are two separate issues. Thankfully, in many cases the totality of theory is small and manageable. Mathematics would seem to be an example in which it definitely is not. Already, math is like a rocket taking off from a planet with limited fuel. In order to make progress, one must get further and further out into space, but the lifetime of the mathematician is limited, and each new one must start from the same planet.

So another issue is whether progress depends on climbing higher and higher up a stack of knowledge. You cannot zoom out to the nth digit of pi and calculate it. You have to start with the last known digit.

A counter-example would be biology: after learning a chunk of basics, one can zoom directly out to an area of interest and study it in depth. If someone wants to devote his/her life to the powder on a moth’s wings, then we will end up knowing about that powder.

A branching question about the results of theory is whether old results can be superseded by new ones or not. This is often the case in applied science, in which once a better method is found the old may safely be forgotten (exceptions apply).

Here would be a crude outline of the issue. I’m sure many other branches are possible!

I. Graspability
A. Comprehensibility
B. Knowability

II. Manageability
A. Stackedness of theory
B. Supercession of results

A stunningly bad example. Bailey, Borwein and Plouffe famously showed how it is possible to calculate the ten billionth hexadecimal digit of pi without going through the lower ones. Whether or not it’s possible to do the same in other bases is very much an open question.
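
The BBP trick fits in a few lines of Python. This is a sketch of the standard digit-extraction scheme (double-precision floats limit it to modest positions, and it works only because the formula happens to be a base-16 series):

```python
def pi_hex_digit(n):
    """Hex digit of pi at position n after the point (n=1 -> 2, since
    pi = 3.243F6A88... in base 16), via the BBP formula."""
    def series(j):
        # Fractional part of sum_k 16^(n-1-k) / (8k + j).
        s = 0.0
        for k in range(n):
            # Three-argument pow does modular exponentiation, which
            # throws away the integer part of each term cheaply.
            s = (s + pow(16, n - 1 - k, 8 * k + j) / (8 * k + j)) % 1.0
        for k in range(n, n + 8):   # a few rapidly vanishing tail terms
            s += 16.0 ** (n - 1 - k) / (8 * k + j)
        return s % 1.0
    x = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
    return int(16 * x)
```

So `pi_hex_digit(4)` returns 15 (hex F) without ever touching digits 1 through 3.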

A lot of these “It’s too complex! We’ll never make it!” arguments appear to reference many tasks the human brain does easily. It seems natural to simplify the argument: “We’ll never match the computational capabilities of the human brain through research, so I predict science will ultimately fail.” Is that a fair summary of the position? Otherwise, it just seems like we’re arguing from reams of examples of complexity that has thus far proven difficult to manage, predicting from the intractability of complexity that we’ll somehow converge on Saturation, and concluding, apparently, that approximate tractability is a useless approach to analysis.

Without going into a lot of technical philosophical jargon, there’s a big gap between being able to do something and understanding how it works. In fact, a recent booming area in computer science is so-called “soft computing”, where the computer is left to its own devices to try and learn a solution. Crucially, however, once it knows the solution, it’s impossible to figure out why it’s such a good solution, which means it might be engineering but it isn’t really science.

My examples dealt with human cognition and vision because that’s the field I’m working in at the moment, but a lot of other complex phenomena are also similarly emergent and seem to be similarly intractable. Dust, for example, is something physicists still don’t understand. They can figure out how a gas works and how a solid works, but a large number of solids suspended in a gas is vastly more complicated than either of them.

I read an article, which I now can’t find, about digital evolution. The experimenters hypothesized that they could take a simulation of a simple digital signal processing chip with reconfigurable circuits and randomize those circuits (mutation) such that the chip’s ability to transform one waveform into another would converge, over some number of iterations, on the optimal desired output (selection). A very simple evolutionary protocol, and the result was completely inscrutable. The chip configuration that was ultimately evolved did exactly what they wanted, and they had no idea why. They concluded that stochastic methods may be superior to rational design for certain computational applications.

This is not science?

The way I heard it, some researchers wanted to make a circuit that could distinguish between a 10 kHz and a 100 kHz signal without using timers or clocks. Something no human knows how to do.

Anyway, they evolved a circuit with 1000* transistors which eventually did it. In the final product, they found that there were 300 transistors that were electrically isolated from the circuit and did not affect its behaviour when taken out. However, they found another 30 or so transistors that were also completely electrically isolated from the circuit but would make it stop working if taken out. I don’t think anybody’s quite figured out exactly what was going on in that circuit, but it probably had something to do with capacitance and back EMF subtly slowing down signal transmissions and the like.

*All numbers completely picked out of my ass to make a point.
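
For what it’s worth, the evolutionary loop itself is almost embarrassingly simple, which makes the inscrutability of its products all the more striking. A toy Python sketch; the bit-string “circuit”, the fitness function and every parameter here are invented purely for illustration:

```python
import random

def evolve(target, generations=5000, seed=0):
    """Minimal (1+1) evolutionary loop: mutate the current design and
    keep the child whenever it scores at least as well. The 'circuit'
    is just a bit string; fitness counts bits matching the target, a
    toy stand-in for the researchers' waveform test."""
    rng = random.Random(seed)
    n = len(target)

    def fitness(bits):
        return sum(b == t for b, t in zip(bits, target))

    parent = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(generations):
        # Mutation: flip each bit independently with probability 1/n.
        child = [b ^ (rng.random() < 1.0 / n) for b in parent]
        if fitness(child) >= fitness(parent):   # selection
            parent = child
    return parent, fitness(parent)
```

The loop reliably finds the target, but nothing in it explains *why* any particular intermediate design scored well, and in the real experiment the fitness test (measuring a physical circuit) rewarded tricks no one had designed for.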

I still fail to see The End of Science in all this.

I was responding to this point. You can’t just wave a magic “reductionism” wand and make all the hard problems go away. There are some systems which are provably irreducible. That is, to verify properties about them, they need to be simulated completely. I hear that Wolfram’s much-hyped “A New Kind of Science” claimed to prove this conclusively, although I haven’t read it.

Sure, a lot of things can be reduced, simplified or approximated. But a lot of science is and will remain tedious grunt work at the borders of knowledge, where things are the way they are just because that’s the way they are, not because of some elegant system of equations.

Wolfram may have strayed into crackpotism (as many appear to claim), but I thought the whole point of Wolfram’s “research” was to prove that cellular automata are the fundamental unit of everything, and that you can describe almost anything by modeling with the right cellular automata. That’s about as reductionist as it gets, or so I thought.

The whole point of his “research” (which had largely been thought up by other people some years back) was that, under certain conditions, certain complex systems can only be represented as cellular automata. That is, if you want to know the condition of the system at time t, f(t), then the only way to find out is to determine f(t-1), which must in turn be determined from f(t-2). In short, he is saying the only way to figure out how something works is to simulate it.

This is markedly different from standard physics, where formulas are generally time-independent. So, if I wanted to know the position of the Earth 1 million years in the future, I wouldn’t need to compute its position 999,999 years in the future first.
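
Elementary cellular automata are the poster children for this. A small Python sketch (rule 30 on a tiny ring of cells; sizes picked arbitrarily) shows what “the only way to find f(t) is to go through f(t-1)” looks like in practice:

```python
def step(cells, rule=30):
    """One update of an elementary cellular automaton on a ring: each
    cell's next state depends only on itself and its two neighbours,
    looked up in the 8-bit rule number."""
    n = len(cells)
    return [(rule >> (4 * cells[(i - 1) % n]
                      + 2 * cells[i]
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]

def state_at(cells, t, rule=30):
    """State after t steps. No closed-form shortcut is known for rules
    like 30: unlike a planetary orbit formula, you must pass through
    every intermediate state."""
    for _ in range(t):
        cells = step(cells, rule)
    return cells
```

Compare this with the orbit example: for the Earth you plug t into a formula once; here the cost of knowing f(t) grows with t no matter how clever you are.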

What this also means is that there is a theoretical lower bound on the information necessary to describe such systems, which in turn implies a theoretical lower bound on how much information someone needs to absorb in order to understand the problem. For example, to determine the position of a body in a 2-body system, you need about a dozen different pieces of information, or variables, that you feed into an equation. If the number of variables in your equation goes above a certain limit, then it’s going to be impossible for a human to fully comprehend any of it.

And “feed it to a computer” doesn’t help either. You still need to be able to digest the information the computer is spitting out, and the minimum amount of information it can spit out that fully encapsulates a problem is still those n bits.

However, I concede that this has strayed largely into rather theoretical territory. Most likely, the number of problems that lie between already solved and provably unsolvable is going to be pretty small.

The issues with reductionism have more to do with the fact that we have no effective techniques for simplifying emergent properties than with any theoretical lower bound. An analogy might be the speed of light and space travel. It might be nice that we can prove a theoretical upper bound on speed, but if our best ships can only go at a few thousandths of that speed, then any discussion of the bound is going to be rather theoretical in nature.