Will there ever come a time where it is impractical to teach people all the requisite knowledge needed for them to make a significant advance in science?
As little as a century ago it was fairly common for scientists to be jacks of all trades. Numerous discoveries and advancements in one field were made by people who were experts in another. Now, this no longer seems to be the case.
Will this “saturation point” ever happen? If so, when?
And these should give you the grounding you’ll need in thermodynamics, hypermathematics, and of course microcalifragilistics. Moodavit!
Will there be another Thomas Edison? Who knows. I think there is more of an emphasis on teamwork and collaboration because in the future only teams will be able to make certain technological advancements. It’s a matter of man hours. And I hope that people will generally live longer because we will need to have twice, maybe three times the background in certain fields as today.
I agree that teamwork is the way of the future. In addition to the reasons already mentioned, an ever-increasing percentage of scientific research depends on computer simulations to approach problems that are intractable by any other means. A good computer simulation often requires input from experts in several different fields: some people to do the programming, others to handle high-level details, others to analyze the data and verify its accuracy.
Probably not. I’m looking into a chemistry research position at the university soon and the people are very very specialized.
It used to be you were a ‘scientist’, you knew the sciences (math, physics, biology, chemistry, anatomy, geography, etc). Ie the renaissance men or the Greek scientists.
Today you are an expert in one area of a subgroup of a subgroup of a subfield of a field. The people I am looking at work in a handful of areas. One of my old professors is working on Electron Trafficking between Metal Clusters and Non-innocent Ligand LCSs. This is a branch of inorganic chemistry; inorganic chemistry is a branch of chemistry. There are several branches of inorganic chemistry (MO theory, electron theory, metallo-ligand bonding), and his work falls in just one of them: metallo-ligand bonding.
If you want a bachelor’s degree at IU, chemistry is just one required class (in any field, you need at least one semester of chemistry to graduate), and stuff like metallo-ligand bonding or electron trafficking will never be touched on. Inorganic chemistry is one class you need if you are doing a chemistry degree, and this stuff will be lightly touched on there. If you do a doctorate in inorganic chemistry, an entire class will be taught on electron trafficking or metallo-ligand bonding. After you graduate, instead of one class, you may devote 10-15 years of research to metallo-ligand bonding as it relates to a handful of the millions of possible scenarios.
I suspect the future will just have more and more branches, and this is just my understanding as an undergrad on the outside looking in.
I’ve no answer to this question. I’ve wondered the same. Some reasons:
-The “pool” of potentially talented but uneducated people is shrinking. You can’t count on the number of scientists increasing just because more gifted people have access to a high level of education. Also, the world population is apparently soon going to stagnate and even diminish. It means that at some point in the future, we’ll reach a ceiling, while until now, the number of people involved in the sciences grew significantly with each generation.
-As you mentioned, the accumulated knowledge is so enormous that scientists have to be more and more specialized. So, you need ever more of them to cover all the fields of knowledge, and also, they can be unaware of useful information that belongs to another specialized field.
-The cost of the equipment/experiments needed in a number of fields is becoming enormous. So much so that it sometimes has to be paid for by pooling the resources of several (wealthy) countries. For instance, the cost of the space telescope, of tokamaks, of particle accelerators. What will happen when physicists tell us: “we’ve an interesting theory to test. We just need a particle accelerator the size of the earth’s orbit/of the Milky Way” or “it will just take a couple thousand years to get the results”?
On the other hand, we could be able to rely more and more on computers. Who knows? They might someday be able to discover things all by themselves. It could also happen that with new knowledge, techniques, machines,… our productivity could become so enormous that cost won’t be an issue.
I would finally mention another related point: maybe we could run into another limit: the inability of the human brain to understand beyond a certain point.
Already, there are fields of knowledge we can’t “grasp”, like quantum mechanics, for instance. But still we can (or more exactly some people can) understand the theory behind them, and go on making new hypotheses/discoveries.
But I’ve personally no reason to assume that we’re able, as mere gifted apes, to even theoretically understand everything. There might be things in the universe that are beyond human comprehension. In this case, we might someday have discovered everything we can discover and be unable to make any new progress.
I have thought about this exact problem, and I wondered if it will require help from computers and AI to bridge the different disciplines quickly so science can advance efficiently.
Not in the foreseeable future. Why? Because the perspective to adopt is to realise that the accelerating growth of the literature is nothing new. Moreover people have been worrying about its consequences and finding ways of dealing with it since the start of modern science. The current situation is the way it’s always been.
This perspective is not new itself - it’s even old hat in sociology of science circles - but it is still often rather counterintuitive. Scientists continually worry about how the literature they might theoretically read is getting ever larger and assert that it can’t have been like this in the old days. They point to things like particle physics papers with hundreds of authors or the Human Genome Project as examples of a Big Science that’s somehow essentially different from what people used to have to deal with.
That view, in which there’s some sort of discontinuity of size in the recentish past, was challenged by Derek de Solla Price, most notably in his book Little Science, Big Science (1963). Price’s argument was that there’s a striking continuity in the growth of science in general that spanned from about 1660 to when he was writing. And this growth had been inexorably exponential. By any measure - the total number of papers, the number of scientists, the size of journals, the number of journals, etc. - he found that science had doubled within each human generation. Whether the doubling time was 10 or 30 years slightly depended on what you were looking at, but the subject had still doubled at least once within the career of any lifelong scientist. The only events that had even dented this growth over three centuries were the two world wars, and their effect was merely transient.
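Price's doubling claim is just compound growth, and it's easy to see what it implies over a single career. A minimal sketch (the 15-year doubling time and 40-year career below are illustrative round numbers, not Price's exact figures):

```python
# Exponential growth of the scientific literature, per Price's model:
# size(t) = size(0) * 2 ** (t / doubling_time)

def literature_size(initial, years, doubling_time):
    """Size of the literature after `years`, given a fixed doubling time."""
    return initial * 2 ** (years / doubling_time)

# Over a 40-year career with a 15-year doubling time, the literature
# grows by a factor of 2^(40/15), i.e. roughly 6.3x: it more than
# doubles twice within one working lifetime.
growth = literature_size(1.0, 40, 15)
print(round(growth, 2))  # → 6.35
```

The exact factor depends on which doubling time you pick, but any value in Price's 10-to-30-year range gives at least one doubling per career, which is his point.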
Now suppose you were an active member of the Royal Society in the 1670s. Keeping up with things might involve attending the Society one night a week, gossiping with about two dozen people (the usual estimate is that there’d have been about 20 other active members at that time, in the sense that they were seriously engaged in research), catching up with what Oldenburg’s correspondents were telling him and subscribing to both Philosophical Transactions and Acta eruditorum. That and reading some new books as they came out would probably keep you abreast of everything important.
By contrast, a modern researcher probably attends a few seminars a week at most, interacts with a few dozen researchers, reads a few general science magazines (say New Scientist weekly and Physics Today and Physics World monthly; this will vary by field and inclination) and reads the relevant literature. The latter’s one big difference, but they can deal with it differently. You can skim abstracts, read review papers, do online literature searches etc, etc.
But the big difference - which has already been noted several times in this thread - is specialisation. In the 1670s, you were keeping up with everything, today you’re keeping up with just a fraction. Still, it’s hopefully the fraction you need.
Part of Price’s point was that this process of adaptation to the expansion of the literature goes all the way back. By the early 1700s you’ve already got journals devoted to summarising books and papers, so that researchers can skim those and decide what has to be read in more detail. More rigorous forms of editorial review develop as a form of quality control, so that readers can assume that they’re not wasting their time on rubbish. Ultimately that becomes the peer review process. Subfields of research emerge and start publishing their own journals. The review article is invented. One gets journals devoted to reprinting abstracts. Journals split into sub-issues. People worry about fragmentation and so sponsor interdisciplinary conferences. The Science Citation Index comes along as a tool for allowing researchers to find what they want in the past literature. In the last decade, computer searches have replaced that.
Each of these developments was in response to what those at the time saw as a problem. But it was always the same problem: there was too much literature to master. In a sense, one might even say that science has had a permanent literature crisis for three centuries. People have coped in the past by inventing new tools and schemes and there’s no immediate reason to believe that that part of the pattern won’t also continue.
Back in 1963, Price predicted that it’d be this that’d break the pattern of exponential growth by the end of the century: we’d simply run out of people who could become scientists. While it no doubt has been done, I haven’t seen any analysis (and ain’t that an arguably self-referential example) of whether his indices leveled off before then. My impression is that, while the number of professional scientists in industrial countries almost certainly leveled off, the other growth measures he examined have barrelled on regardless. Efficiencies due to computer technology are indeed one obvious possible factor involved.
My opinion would be the exact opposite of clairobscur’s.
There is a vast untapped “pool” of potentially talented but uneducated people in the world. Look at the developing countries. There are many more people living there than there are in developed countries. And a far smaller proportion of them have higher education. Also, just a tiny fraction of people living even in developed countries are scientists. Imagine a distant future where all routine labor (farming, manufacturing) will be handled by robots. “Everybody” could be a scientist. Kind of like in Star Trek.
The accumulated knowledge of humankind at the moment represents just the tiniest fraction of a tiny fraction of all that is knowable. Talk to any scientist you know and he/she will explain how, even in just his own highly specialized field, there is an infinite amount of stuff that hasn’t been researched yet and an infinite amount of ideas they could/would like to explore. Multiply that by the number of highly specialized fields that exist, plus the number of fields that haven’t even been discovered yet, and you see the picture.
Many unexplored fields can be explored by using just simple, cheap, widely available instruments. Talk to any scientist you know and they will lament how society spends $10zillion on large particle accelerators, even though just a tiny sum of money invested in their own, largely unexplored field could yield far greater progress and far more science for the price. One scientist I know studies just plain reflection of light off of materials and surfaces. The amount of stuff that is still unknown and unexplored just in that tiny subject is really staggering.
The ability to “grasp” things determines the rate of progress but does not limit the ultimate amount of progress. What makes you say that “we” can’t grasp quantum mechanics? Maybe you can’t, maybe I can’t, but eventually somebody will come along and he/she can. And for the next generation of scientists, it’ll be boring routine, physics 101. I would think that quantum mechanics (of the 1920s and 1930s kind) is thoroughly grasped by now.
As you can probably guess, my answer to this question is a resounding no. We have the same brains that we had 100,000 years ago. If there is any inherent limitation in our brain design that would make it impractical to teach people all the requisite knowledge needed for them to make a significant advance in science, we would have passed that point many millennia ago. Instead, our brain design includes just the right amount of individual variation and adaptability to earlier progress that will provide the capability to keep science advancing just like it is now, for as long as we can foresee.
As a research student in the field of Computer Science right now, I can say there definitely is still plenty of room for generalists. It would be uncommon, but by no means unheard of, for someone doing a 4th-year undergrad project to make a significant contribution to the state of the art. It’s pretty much expected that a PhD will make a significant contribution.
This seems to be quite different from the physical sciences, where I understand that PhD work is a lot more about contract labour for your supervisor and the chances of you contributing significantly to the field are rather low. It’s not until post-doc work or later that you really start getting into the productive phase of your career.
Science fiction writer John Brunner posited that a specialised field of “synthesists” would arise, whose job was to be professional generalists. They would try to read as broadly as possible and then recommend unlikely relationships between disparate research fields. It certainly is an intriguing idea and it seems like it might have some merit.
The field of social development may have major implications for all of this, which is one reason I despise the current method of teaching across the world. It’s dedicated to pedantic idiocy, and quite possibly the worst designable system of teaching.
On the other hand, if we can develop and devise advanced methods of teaching, we may be able to teach our descendants far more knowledge.
Nah. Education will adapt by focusing increasingly on connecting principles rather than reams of facts, especially in the biological sciences, where the number of facts you can know appears to be endless, but the number of principles you need to know to tie all those facts together is quite manageable. It wasn’t that long ago that biology was pretty much all about memorizing reams of facts, observing things, and then recording still more reams of facts. Now with the complementary edifices of Neo-Darwinian evolution and biochemistry (I’ll lump molecular and cell bio. into the latter), everything you absolutely must know (that isn’t pure chemistry or physics) can actually fit in a single textbook, and easily be comprehended by any undergraduate student.

Virtually everything beyond that is specialization, where having a firm grasp of some subset of the endless reams of facts is necessary. Really, for someone like me, good core courses in evolution, molecular, biochem, orgo, calc. and 1st-year college physics, and you’re good to go. Stats is always nice, but I rely heavily on software and simple guidebooks for that. Some more p-chem, biochem & enzymology, stuff on particular signaling pathways (involving G protein-coupled receptors, voltage-gated receptors and other electrophysiology, tyrosine kinase receptors, ligand-activated nuclear receptors, growth factor families and superfamilies, and so on), the elements of molecular bio (gene structure, mRNA processing and structure, translation, nucleic acid polymerases and other modifying enzymes, core transcriptional complexes and associated factors, chromatin structure and modification, transposable elements, etc., etc.), all these round out the basic picture. After that it’s hyperspecialization, during which you struggle not to forget a lot of the core stuff, largely because you never use it in a blatantly obvious way.
It would be to no good purpose to have people trying to know all there is to know. In bio, we’ve got the data hose aimed straight at us and turned on full blast as it is. A tenth of a percent would keep a person going full time for his or her entire life. It’ll take twenty years just to get it all cataloged, characterized, and subdivided in a manner that is intelligible. That’s all bioinformatics, and really, what’s wrong with relying completely on computers to handle all that data and the vast webs of associations? I think it’s fair to say it’s physically impossible for the human mind to memorize the contents of the human mind, or even understand most of what it does without some major help. You can’t be saturated so long as you’ve got good databases, and good tools to search those databases and derive relevant connections. No matter how vast those databases become, the core principles that any generalist with a modicum of brains can apprehend and appreciate are all you need to competently put all that fact-based knowledge to good use. It’s the ability to form good hypotheses and test them that you want, not an encyclopaedia+genbank in your head.
I took into account the developing countries. Education in these countries has recently made significant progress. A number of former developing countries now have a high rate of literacy, are able to provide a good level of education, and are already able to “produce” scientists. If the trend continues (and we must hope it will), this pool, which is already being tapped and is diminishing, will be fully used.
Well…no. Many people just don’t have the ability to become scientists, and many don’t have the inclination.
And in any case, you will necessarily eventually reach a ceiling. Assuming that everybody on earth becomes a scientist, you’ll get 10 billion scientists (assuming that there are 10 billion people when the population begins to stagnate). It’s still a limit. And you’ll never get a human population entirely made of scientists.
Which is precisely the problem. The more things there are left to discover, the more likely it is that we won’t be able to discover them all, for lack of resources (human or material).
Yes. And many can’t any more. And more and more of them can’t, even if it’s not on the scale of a particle accelerator. There are precious few fields of research where you could make a major breakthrough working in your kitchen with a saucepan.
Then he will be a mutant. I don’t know of any human being that can grasp the concept of more than 3 dimensions, for instance. Most of quantum mechanics runs contrary to common sense and can’t be “grasped”, either. We’re just not equipped to do so.
Which is also, in my opinion, precisely the problem. Our brain evolved to make us efficient hunters/gatherers/scavengers, not to allow us to understand the workings of the universe. In order to believe that we’re potentially able to discover/understand everything, I would have to believe that we were designed for such an endeavour. It might be possible for a theist to believe so, but not for this atheist.
I live in Kansas. It seems there does come a time where it becomes impossible to teach people more science but that has nothing to do with any “saturation point.”
I think one can ignore design for the purposes of this discussion and simply focus on adaptability. You seem to assume not only that humans must know all there is to know to continue to advance science, but also that humans cannot adapt to augment their own brains with engineering and the help of artificial brains like computers and other stuff we haven’t invented yet.
If science has taught us anything, it’s that there’s a lot of stuff out there, but the number of principles that tie all that stuff together is vastly smaller than the numbers of phenomena those principles can be applied to. All the major equations one needs to know currently in physics could probably fit on a few pages, and all the other equations one could apply to specialized phenomena could be derived from them. If there is a Theory of Everything, that list of equations you need to know, vs. equations you can derive from the former, will shrink even further. The problem isn’t so much knowing the core principles as applying them to ever more complex phenomena. Complexity itself is a burgeoning field, but understanding the general mechanics of complex systems is apparently not a hopelessly intractable problem.
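As a toy illustration of the derive-it-from-the-core-principles point (this is standard first-year kinematics, not anything specific to this thread): from Newton's second law alone, with the force constant, the specialized constant-acceleration formulas follow just by integrating twice:

```latex
% Constant force F on mass m gives constant acceleration a = F/m.
% Integrating once gives velocity; integrating again gives position.
a = \frac{F}{m}, \qquad
v(t) = v_0 + a t, \qquad
x(t) = x_0 + v_0 t + \tfrac{1}{2} a t^2
```

Nobody needs to memorize the last two as separate laws; they are consequences of the first, which is the sense in which the "few pages of equations" carry the rest.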
The purpose of science isn’t to know everything. Those who feel reductionism is a useful approach understand science as the effort to tie it all together with the least amount of fuss. The trend supports this approach. If we discover in the future that we truly must know everything to understand how the world works, then we can give up on reductionism; but until then, I see no good reason to assume we’ll never be able to grasp the deep fundamentals and apply them to whatever problem presents itself. As long as we can keep developing and building newer and more wonderful tools, I don’t see a “saturation” point in our future for a very long time.
“knowing everything” was just a way of expressing it.
I mentioned computers in my very first post as a con argument.
But still, we’re the ones making the computers. If there’s something we just can’t conceive, because we aren’t hardwired to conceive it, and it would be necessary to understand a fundamental element of the workings of the universe, we won’t be able to program a computer to conceive it, either, in all likelihood.
And how would you know that formulating such a “theory of everything” is not beyond our abilities?
Our ability to study the universe is merely a by-product of the capacities we developed to survive as a species. Being able to formulate a “theory of everything” isn’t part of the skills necessary to escape a hungry lion’s paws.
Apes are fairly bright; they can even use primitive tools, learn by example, develop new behaviors on their own. Still, they won’t ever understand the theory of gravity. They just can’t. We can understand it, but it doesn’t mean that we can understand everything. We might lack the “swrtch” brain structure that would allow us to conceive something directly relevant to the “theory of everything”. It’s possible also that the universe might not be describable/understandable with the scientific methods we use. There might not be any “theory of everything” at all, because the concept doesn’t apply to the universe. And being limited to following some specific logic paths by our hardwiring, the concept that would apply could be beyond our reach. The word “concept” might even be meaningless and irrelevant in this case.
I mentioned our inability to “grasp” intuitively some scientific concepts as an example showing that we have limitations. These limitations, in this case, only apply to our intuition, to our sense of logic, or to our capacity to create mental representations. We still can investigate further using our intellect. But why wouldn’t our capacity for abstraction be limited in the same way our mental representations are? If our brains are obviously and blatantly limited in some domains, why assume they aren’t in others? Also, why would I assume that there aren’t any abilities that we’re completely, totally lacking, in the same way an ant lacks the ability we have to formulate abstract thoughts?
I personally think that the idea that humans (assuming they had unlimited resources) could understand everything there is to understand is completely arbitrary and in essence similar to thinking that we are so special that the universe must revolve around us. Just wishful thinking and pride, IMO.
I guess part of my optimism is fed by the progress so far, and by the consistently disproven notion that we’re reaching beyond our grasp. As Einstein said, one of the most incomprehensible things about the world is that it is comprehensible. I see no reason to assume, based on current progress, that the putative Final Theory should be so incomprehensible to the human mind that it will forever elude us. Why things are comprehensible to the human mind, as Einstein may have alluded to, is truly incomprehensible, because it’s one of those questions science may not be able to address, no matter how smart we become. I don’t think it’s complete hubris to assume that nothing short of that is beyond our capability, and I see no reason why we can’t create computers that can evolve on their own once our capacity for rational design is exhausted. Those computers may even become integral parts of ourselves, and we may evolve with them. Or, they might destroy us. Either way, there’s no reason why the scientific method can’t be utilized by a creature with greater intellectual prowess than we currently possess. So again, it seems to me the “saturation” point is probably a long, long way off.
Sorry. Not only do I disagree with you there, but this is also a pet peeve of mine. People dismissing well-understood scientific theories as “ungraspable”.
Look at this PDF file. It’s a dissertation about quantum mechanics. Tell me if you really think that guy doesn’t “grasp” quantum mechanics. It’s the opposite. That guy makes it sound positively easy. Read how easily he improves on the past proofs of fundamental QM results and how easily he interprets them. Swimming like a fish in the water.
Oh, I agree completely that our brains are obviously and blatantly limited and totally lacking. Actually that’s another pet peeve. There’s much more that we can’t know than we can know.
Thing is, I don’t think that limits the amount of science that we can do. We can’t do the things that we can’t do. We will never know what it is that we can’t do. However, that still leaves us with an infinite amount of things that we can know and an infinite amount of progress that we can achieve.
I’ve no intent to read a 66-page document that I probably wouldn’t understand, anyway.
Now, if there are people around able to grasp this kind of thing, could they tell me what their mental image of an atom is? What does an electron look like? Tell me about some specific electron: where is it situated and at what speed is it moving? What was there before the big bang and what caused it? Is a photon a wave or a particle? What is a wave, anyway? A quark has a spin, but is it spinning clockwise or counter-clockwise? Where are the fourth and tenth dimensions? What is there beyond the limits of the universe? Is the cat fine? Please provide explanations that are both accurate and that I can “grasp” without resorting to abstract concepts, and that won’t run contrary to common sense.
Or maybe you didn’t understand what I meant by “grasp”. I didn’t mean understanding intellectually or being able to manipulate the concepts. It was intended to show our brain’s limitations, since, as you yourself mentioned, we’ll be unable to know what we can’t know, so I can’t show examples of our inability to understand something.
Finally, an “infinite” amount of things we can achieve seems both presumptuous and a gratuitous assumption (I’m assuming here some kind of worthwhile knowledge… knowing exactly where every grain of sand on the planet is situated and following its movements can be a nearly infinite, but pointless, task).
And even if we could understand everything, how could you know there’s an infinite number of (worthwhile) things left to be discovered?
The fact that until now (for a very short period of time) we have made constant progress doesn’t mean that it will last forever. If I walk westward from here, I can go on for several days without any significant problem. Can I state, on the basis of this previous experience, that I can walk all the way to Canada, since, I’m told, it’s situated somewhere to the west?
I think we must agree to disagree. Once again barring any evidence that we were designed to understand everything, I’m going to keep believing that we’re limited creatures, and as a result that the knowledge we’ll be able to accumulate will be limited as well.
I’m not sure how well I qualify, but I can give this a shot. My mental picture is of a fuzzy, roughly circular object (the shape depends on where the electrons are located). We do have some images of atoms taken with a scanning tunneling microscope.
I think an electron is too small for it to look like anything. I believe that at some level, photons interfere with subatomic particles too much for us to be able to picture it properly.
One of the interesting things about language is that we can make nonsensical statements with it, or statements that have no basis in physical fact. For example, describe an object that has the property of being both mass-less and having mass. I believe that your question is one of these. We can know either of these quantities, but not both with certainty: the better we know one, the worse we know the other.
I can try to give one (rather simple) explanation for knowing the position but not the velocity. Imagine a baseball flying through the air. You want to know where the baseball is, but for some reason you cannot look at it. What you can do, however, is throw other baseballs at it. One could, in theory, observe the path of the projectiles to determine the position of the original baseball. However, the collision is going to alter the velocity of the original ball significantly, and thus we cannot know the final (or initial) velocities of the original baseball. This analogy is not perfect, because in the macro world we could throw pebbles or something else small and light at the baseball to determine both, which doesn’t work in the quantum world (for reasons having to do with the wave nature of matter), but it’s something to think about.
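For readers who want numbers, the baseball analogy maps onto the uncertainty relation Δx·Δp ≥ ħ/2: rearranged, the smallest possible velocity uncertainty is ħ/(2·m·Δx). A quick sketch showing why this only bites at quantum scales (the masses and confinement distances are illustrative choices):

```python
# Minimum velocity uncertainty from Heisenberg: dx * dp >= hbar / 2,
# so dv >= hbar / (2 * m * dx).

HBAR = 1.054e-34  # reduced Planck constant, in J*s

def min_velocity_uncertainty(mass_kg, dx_m):
    """Smallest velocity uncertainty allowed, given mass and position spread."""
    return HBAR / (2 * mass_kg * dx_m)

# Electron confined to atomic size (~1e-10 m): huge velocity uncertainty,
# on the order of 1e6 m/s.
electron = min_velocity_uncertainty(9.11e-31, 1e-10)

# Baseball (~0.145 kg) located to within a millimeter: the uncertainty
# is around 1e-31 m/s, utterly unmeasurable.
baseball = min_velocity_uncertainty(0.145, 1e-3)

print(f"electron: {electron:.1e} m/s, baseball: {baseball:.1e} m/s")
```

That factor-of-10^36 gap between the two cases is why throwing pebbles works on baseballs but nothing analogous works on electrons.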
We don’t know. But this does not make it unknowable. There are theories about before the big bang, but we have no hard evidence at this point.
A little bit of both. It depends on how you look at it really.
Anything that can be expressed as a function of (x - vt).
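To unpack that a bit: any smooth function of (x - vt) satisfies the one-dimensional wave equation f_tt = v² f_xx, and this is easy to check numerically. A sketch with an arbitrary Gaussian pulse (the speed, pulse shape, and sample point are all arbitrary choices):

```python
# Numerically verify that f(x, t) = g(x - v*t) satisfies the wave
# equation f_tt = v**2 * f_xx, for an arbitrary smooth profile g.
import math

V = 3.0                          # wave speed (arbitrary)
g = lambda u: math.exp(-u * u)   # arbitrary smooth pulse shape
f = lambda x, t: g(x - V * t)    # candidate traveling-wave solution

def second_deriv(func, h=1e-4):
    """Central-difference second derivative of a one-argument function."""
    return lambda z: (func(z + h) - 2 * func(z) + func(z - h)) / h ** 2

x0, t0 = 0.7, 0.2
f_xx = second_deriv(lambda x: f(x, t0))(x0)   # spatial second derivative
f_tt = second_deriv(lambda t: f(x0, t))(t0)   # temporal second derivative

# The two sides of the wave equation agree to numerical precision.
print(abs(f_tt - V ** 2 * f_xx) < 1e-4)  # → True
```

Swapping in any other smooth g (a sine, a square pulse smoothed at the edges) gives the same result, which is the content of the "anything of the form (x - vt)" answer.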
Spin is one of those weird science terms that are invented before we really know what’s going on. Quarks and electrons and such don’t really spin, they just have properties that are similar to spinning in the macro world.
I believe the fourth dimension is time. Don’t know what the 10th is, ask a string theorist.
We don’t know for sure. There are hypotheses, but this is beyond our current level of understanding
We don’t know until we look.
I think you’re underestimating what we are potentially capable of. You bring up that we were not designed to understand everything, but, outside of human engineering, nothing was designed to do anything, at least not originally. That doesn’t keep a huge number of objects from accomplishing their purpose extraordinarily well. Our brains evolved in the wild, but they have been exapted for many other purposes, and are incredibly versatile. Other than computational limits (we can only handle so much information at a time) and physical laws (we can likely never know how to break the light barrier), I don’t see any potential limit on what we are able to understand. Can the scientific discoveries of the future be expressed linguistically, mathematically, or (perhaps) algorithmically? Then we should be capable of understanding them (provided they’re not too large to be really understood). Unless our future breakthroughs cannot be expressed by any of these methods, I see nothing to stop our progress.
Reductionism only gets you so far. It might be nice to know the very basic rules underpinning everything but that doesn’t automatically mean that we know everything.
We know every single rule that governs a game of chess. Yet we still have no idea how to play a good chess game. And we don’t seem to have developed any mechanisms to make this bit of it easier.
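The gap between knowing the rules and mastering the game can be quantified with Claude Shannon's classic back-of-the-envelope estimate of the chess game tree (the 35 and 80 below are his rough averages, not exact figures):

```python
import math

# Shannon's estimate: ~35 legal moves per position (the branching
# factor), and ~80 plies (half-moves) in a typical game.
BRANCHING, PLIES = 35, 80

game_tree = BRANCHING ** PLIES                 # ~10**123 possible games
digits = math.floor(math.log10(game_tree)) + 1

print(digits)  # → 124, i.e. on the order of 10**123 games
```

Complete knowledge of the rules fits on one page; exhaustively exploring their consequences would take more games than there are atoms in the observable universe, which is exactly the point about reductionism.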