I have read that we are doubling and tripling our knowledge every so many years. If knowledge is limited to what we can learn, and what we can learn is limited, then is there a limit to knowledge? Would we ever reach a point where we knew everything knowable? How would that change history, religion, economics, etc.?
Infinite knowledge is probably impossible (simply because knowing completely about things has a tendency to change them, and there are some potential problems with recursion in knowing the very medium you are storing the information in).
As for general knowledge, that has been argued quite a bit. Some claim we are near the end of it already.
I would recommend these two opposing views which will also allow CSICOP to make a few cents to continue their noble work.
The End of Science:
http://www.amazon.com/exec/obidos/ASIN/0553061747/csicop/002-4670515-6068832
and (written as a direct rebuttal)
What Remains to Be Discovered:
http://www.amazon.com/exec/obidos/ASIN/068482292X/qid=961115932/sr=1-1/103-7556043-8832623
Where do you get the notion that what we can learn is limited? Limited by what? There is no limit to knowledge. The more answers we find, the more questions that arise. There is also the matter of direction. There is knowledge that has been lost to history. I don’t even understand what infinite knowledge could possibly mean.
What the “noble” people at CSICOP really do:
with some info on the history of American “skepticism.”
Lots of info and links.
Obligatory amazon.com links:
CSICOP [Committee for the Scientific Investigation of Claims of the Paranormal] is imperfect. It’s hostile to every new idea… will go to absurd lengths in its knee-jerk debunking, is a vigilante organization, a New Inquisition.
—Carl Sagan, in The Demon-Haunted World: Science as a Candle in the Dark
Moderator’s note: I shortened the links to prevent screen scrolling -manhattan
Knowledge of the position and velocity of every subatomic particle that ever was, is, or will be in every universe that was, is, or will be? And knowing what all that adds up to and how it all works. egad!
I read “The End of Science,” and I can tell you, this topic is definitely a Great Debate.
Yeah, this one is almost certainly headed over there.
But first, I wanna see one of our relativity/universe experts explain exactly how much information there actually is in existence, how much information is required to store one other piece of info, and therefore the theoretical upper limit on what is knowable in megabytes.
You know, so I can decide whether to go with DSL or wait till I can afford the T-1.
Even if you “know” everything about every subatomic particle that ever existed or ever will exist, you can never know all the digits of pi, or any other infinite string of digits. Knowledge seems “boundless” to me.
manhattan:
I am expert in neither relativity nor the universe (nor for that matter in information theory, so prepare to see me over my head soon) but here’s some of what I’ve read.
John Casti in Searching for Certainty, pages 356-9, discusses Chaitin’s theorem and, using some estimates of Rudy Rucker, how its implications limit our ability to find or create scientific theories.
Here’s an overview of what he says. To avoid infinite regression, definitions are minimal.
Chaitin’s theorem says, if we have some program, there always exists a finite number T such that T is the most complex number our program can generate.
If K represents ‘our best present-day knowledge’ about math, physics, chemistry, biology, and all the other sciences, and M ‘denotes a universal Turing machine whose reasoning powers equal those of the smartest and cleverest of human beings’, then an estimate of T in Chaitin’s theorem is
T = complexity(K) + complexity(M) + 1,000,000,
where the last term is thrown in for program overhead.
Casti then, citing Rucker, estimates that 1000 books of 8,000,000 bits each suffice to encode K.
Similarly, 1000 books could hold everything we know about M.
So T is less than 16,000,000,000 bits, or 16 billion, or 16G.
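For anyone who wants to check the arithmetic, here is a minimal sketch in Python of the Casti/Rucker estimate above. The book counts, bits-per-book figure, and overhead term are theirs; the variable names are just for illustration.
[code]
# Casti/Rucker back-of-the-envelope estimate of Chaitin's bound T,
# using the figures quoted above (a sketch, not Casti's own code).
BITS_PER_BOOK = 8_000_000   # Rucker's figure: ~8 million bits per book
BOOKS_FOR_K = 1_000         # books to encode current scientific knowledge K
BOOKS_FOR_M = 1_000         # books to encode the "cleverest human" machine M
OVERHEAD = 1_000_000        # program-overhead term in Casti's formula

complexity_K = BOOKS_FOR_K * BITS_PER_BOOK
complexity_M = BOOKS_FOR_M * BITS_PER_BOOK
T = complexity_K + complexity_M + OVERHEAD

print(f"T = {T:,} bits")    # 16,001,000,000 bits -- roughly 16G
[/code]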
Quoting Casti one last time:
‘The bottom line then is that if any worldly phenomenon generates observational data having complexity greater than around 16G, no such machine M (read: human) will be able to prove that there is some short program (i.e., theory) explaining that phenomenon. Thus, … Chaitin’s work says that our scientific theories are basically powerless to say anything about phenomena whose complexity is much greater than 16G. But note that Chaitin’s theorem also says that the machine will never tell us that there does not exist a simple explanation for these phenomena, either. Rather, it says that if this “simple” explanation exists, we will never understand it–it’s too complex for us! Complexity 16G represents the outer limits to the powers of human reasoning; beyond that we enter the “twilight zone”, where reason and systematic analysis give way to intuition, insight, feelings, hunches, and just plain dumb luck.’
It’s certainly enough to make you think. But Chaitin/Casti says ultimately that won’t do you any good.
I’m not disputing the claim about CSICOP (although it doesn’t really jibe with my experience with CSICOP), but that alternative science site is a bit shrill itself. See
this page: http://www.alternativescience.com/speciation.htm
It says
Please point me in the direction of any Darwinist who is claiming that breeds of dog are separate species, and I’ll help wield the raw noodles myself.
Moderator’s note: I shortened the link here, too. -manhattan
OK, manhattan, if we assume that the smallest region of space that can store a bit of information is a Planck volume (approximately a cube 10[sup]-35[/sup] meters on an edge), that the (observable portion of the) Universe is a sphere with a radius of 15 Gly, and that each Planck volume of that space stores one bit (even the empty space; you’ve got to account for virtual particles), then that gives the observable Universe a data capacity of approximately 10[sup]186[/sup] bits. Converting those to bytes, and using the largest defined metric prefix, that’s still 10[sup]161[/sup] yottabytes. Put yet another way, assuming that data capacity continues to double every 18 months (a ludicrously fast rate of growth, which can’t last indefinitely), it’d take approximately 500 years before we have hard drives with that capacity.
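Here is a minimal sketch of that back-of-the-envelope calculation in Python. The constants (Planck length, meters per light-year) are assumptions on my part, and the exponent shifts by an order of magnitude or two depending on exactly what you plug in, which at this scale is neither here nor there.
[code]
import math

# One bit per Planck volume, over a 15 Gly-radius sphere (assumed constants).
PLANCK_LENGTH_M = 1e-35              # cube edge, per the post above
METERS_PER_LIGHT_YEAR = 9.461e15
radius_m = 15e9 * METERS_PER_LIGHT_YEAR

planck_volume = PLANCK_LENGTH_M ** 3
universe_volume = (4.0 / 3.0) * math.pi * radius_m ** 3

bits = universe_volume / planck_volume
yottabytes = bits / 8 / 1e24         # 1 yottabyte = 10^24 bytes

print(f"~10^{math.log10(bits):.0f} bits, ~10^{math.log10(yottabytes):.0f} yottabytes")
[/code]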
Better get the T-1.
I don’t know about infinite knowledge, but I have definitely met people with finite knowledge.
I think I’ve got you beat, Chronos. I don’t see why a single Planck volume should only be able to hold one bit. And there’s no reason we couldn’t just wait for the Universe to expand more and add more bits. But I have an idea that I think is much simpler. We simply convert the entire mass of the Universe to photons of certain wavelengths, and then use these to store the information. We’ll assume that we can measure their wavelengths as accurately as the Uncertainty Principle will allow, with a delta-t of, oh, one second. My initial estimates say that this will give us around 10[sup]4000 googol[/sup] bits.
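Just to put a number on the first step of that scheme, here is a rough sketch under some loudly assumed figures (a value for ħ, a ~10[sup]53[/sup] kg mass for the ordinary matter in the Universe, and a one-second measurement). It only counts distinguishable energy levels if a single photon carried all the energy; the 10[sup]4000 googol[/sup] figure would come from the combinatorics of spreading that energy over enormous numbers of photons, which this does not attempt.
[code]
import math

# Energy resolution allowed by delta_E * delta_t >= hbar/2 with delta_t = 1 s,
# applied to the Universe's whole (assumed) mass-energy converted to photons.
HBAR = 1.055e-34            # J*s  (assumed value)
C = 3.0e8                   # m/s
MASS_UNIVERSE_KG = 1e53     # rough assumed figure for ordinary matter

delta_t = 1.0                             # one second of measurement
delta_E = HBAR / (2 * delta_t)            # ~5e-35 J energy resolution

total_energy = MASS_UNIVERSE_KG * C ** 2  # E = mc^2
levels = total_energy / delta_E           # distinguishable energy bins

print(f"~10^{math.log10(levels):.0f} levels, "
      f"~{math.log2(levels):.0f} bits if one photon carried it all")
[/code]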
You know you’re doing Astrophysics when the error of your calculations is given in terms of the logarithm of the logarithm of your answer.
ROFLMAO! Can I use that quote?
Wow, you’d do that for me? Thanks. Just don’t use it on too many people who wouldn’t get it.
A related book. (I didn’t find the argument persuasive, but I have to withhold criticism because members of the Max Planck Institute for Molecular Genetics outrank guys like me whose scientific training ended with high-school lab courses and a lone seminar on the history of science.)
Stent, Gunther S. Paradoxes of Progress (New York: W. H. Freeman & Co., 1978) - (out of print)
I believe the argument is mainly that technological progress is a self-limiting enterprise, but part of his presumption is that “scientific knowledge, like our universe, must be finite.”
I will leave it to you all to debate, and time to decide, whether Mr. Stent has given up on us too early. My prejudice would be that erring on the side of caution means assuming that no limits are impenetrable.
An interesting link from another thread by PapaSmurf:
I think that the main reason for the huge increase in knowledge is the ease with which it can be stored. It’s not so much that we are creating knowledge faster, but that a much smaller percentage of it is being discarded. Once that percentage approaches zero, the increase in knowledge will level off (not the knowledge itself, but the rate of increase).
1000 books? The Library of Congress has quite a bit more than that.
Just a gigabyte or so would be enough to simulate a human mind? Gee, AI must be just around the corner!