Not really, but let me be more specific than I was earlier.
Yes, of course, since all scientific research (however we define that) is carried out by human beings, certain aspects of scientific research can never be completely separated from the state of “being human”. That point, and which aspects these might be, are some of the very issues under discussion here. However, there are certain, specific aspects of human behavior which are generally held to have no place in scientific research or in the application of the scientific method – knee-jerk reactions, emotional or irrational responses to criticism or failure, and so on. There is nothing wrong with feeling this way, but there is nothing scientific about reacting from those feelings, either, especially when they threaten to cloud the logical, rational or communicative faculties.
As with anyone who works long and hard on a particular project or area of research (in chemistry, history, art, cooking…), a scientist may develop a certain personal attachment to his/her original ideas, work and accomplishments, an emotional investment, a sense of those things as an extension of the self. Scientists are just as capable of forming emotional attachments to their ideas as anyone else! This is natural, human behavior; and sometimes these are the very passions which lead a person to give his/her life to an area of research in whatever field. Likewise, the same scientist may very well feel something when his/her ideas are called into question. It may be difficult to let go of personal attachments and accept that the near-and-dear theory is flawed. This, too, is natural, human behavior. The higher the stakes, the greater the personal attachment or the more visible or significant the theory/idea/work, the greater the potential for the scientist to respond from an emotional basis, partially if not exclusively, because that is part of human nature.
This being the case, it’s not surprising that scientists have often been unwilling to let go of a theory which is clearly flawed or incomplete, until another, better one is proposed. (Since I am not interested in re-inventing the wheel I will take it for granted that the history of science demonstrates that this is what happens – at least where the “big”, visible, earth-shattering theories are concerned.) Human beings are often uncomfortable letting go of an idea/principle/world-view without having something which is demonstrably better, but that doesn’t mean we are incapable of doing so. When, as you so rightly point out, there are several competing and equally compelling theories about the same body of knowledge floating about, the reluctance to let go of cherished ideas can turn scientific debate into something akin to a blood-sport. My point is that neither the scientific method nor the principle of falsification requires a better theory to be proposed before an existing one can be discarded. It is rather human emotional nature which is satisfied by this requirement – in my opinion, at least, which is what I am attempting to express.
For instance, here is a small-scale, real-world situation which illustrates a lot (not all) of my thinking in this thread: I do not work on the level of grandiose, cutting-edge theories, but I do work on the level of confirming or falsifying smaller, lesser hypotheses. A chemist wants to make molecule A. She hypothesizes (on the basis of her experience in organic chemistry) that if she reacts molecules B and C under conditions D, she will produce A. Because the discovery aspect of pharmaceutical research involves the synthesis of novel molecules (that is, never knowingly made before and reported), she usually does not know with 100% certainty that B+C under D will yield A. So, she runs her reaction, isolates the product, and brings it to me (the analytical chemist) to help determine what she has made. She hopes I will confirm her original hypothesis (an expectation to which eris referred very early in this thread), but by the same token scientific rigor for lab notebooks, patents and journal articles always requires post-reaction analyses on the chance that the hypothesis was wrong. Thus am I gainfully employed; potential falsification gives me a job! I make various tests on the material she has brought. The results of these tests are not consistent with the molecule she would have liked to make; she made molecule ‘X’ (for unknown). On the basis of this evidence, she is forced to conclude that her hypothesis was wrong. Note that often I cannot suggest what she did make or how it was produced (a better hypothesis); all I can do is tell her that her first hypothesis was flawed in some way. She will go back to the lab, and consider whether to use molecule E instead of B, try conditions F instead of D, and so on. She might even receive results, from another lab conducting a different type of test, which seem to confirm her hypothesis, and so come back and question whether it is my test results which are flawed.
As carefully as I conducted my tests, I still must be open to the possibility that something could have gone wrong – that the data are, in fact, “bad” and representative of nothing. Back and forth we go. But at each stage, the scientist has to be open to the possibility that s/he is wrong – and take care to differentiate between logic/scientific rationale and emotional/personal attachment. Because in the end, neither molecule A nor X care what she thinks.
The world goes as it will, not as you or I would have it. – Darkovan proverb, courtesy of Marion Zimmer Bradley
That’s sometimes a tough thing for anyone, scientist included, to accept.
Mr. Svinlesha, may I ask what is your motivation to “define” science in a manner which would be rejected by the vast majority of both scientists and non-scientists? Do you have a problem with a world that has both theoretical scientists and experimental scientists?
emphasis is mine
Color me confused. I thought that Einstein was specifically concerned with observation and reality. You claim that Einstein just came up with some interesting math that “luckily” happened to fit later experimental tests? This is disturbing to me, because I have seen this attitude before. “Einstein didn’t predict anything, everything is discovered by trial and error and Einstein’s math just happened to fit the data”.
The muon, tau lepton, and upsilon were discovered by experimenters so that counts as science in your view. But the positron, pion, antiproton, and neutrino weren’t a part of science because they were predicted by theorists? Or they only became “scientific” after they were confirmed by experiment? The Higgs boson may or may not exist, but does a theory which predicts it not count as science?
(On preview I see that DSeid has expressed my sentiments much better than I.)
Mr S
Cross-purposes we may be, but only because you seem to want science to be only the end-product of a “mature” technological society. I disagree. Science is a human activity which has been engaged in for millennia. It does not require either white lab coats or calibrated instruments to do science. It requires a methodological approach, a spirit of investigation, and a willingness to adjust one’s ideas to match observed results. Apparently, you would like to declare that DNA mapping has removed all early naturalists from the class “scientist”. I object.
On other examples:[ul][li]The change in color of a litmus strip is a direct perception by human eyes of a chemical reaction. The history of chemistry is full of examples of human senses gathering data directly about reagents (color, smell, even taste). Were the early chemists not scientists because they lacked spectrometers?[/li][li]Magnitude is a measure of brightness. Classification of stars by magnitude originated with a Greek astronomer (whose name I forget) before the birth of Christ. Is the observation scientific only if one converts it into a logarithmic scale?[/li][li]I’m sure your book on chaos theory is full of graphs. The hallmark of chaos theory is the detection of “similar” patterns in graphs (and sounds and other sensory inputs). The recognitions which underlie chaos theory are inseparable from the subjective human experience of pattern.[/li][/ul]
Agreed, so long as the word “instrument” is taken as inclusive of human sensory organs.
Really? You don’t think people agree that white is a brighter color than black? You don’t think people agree that whippoorwills have a similar call to chuck-will’s-widows? You don’t think that people agree that burning wood generates both heat and light?
More to the point, what exactly do you think instruments are calibrated against, ultimately? Remove human sensory impressions from the equation and explain to me how the Kelvin scale developed.
Lib
The hijack I meant was solely the issue of epistemological “grounding” for science. I don’t think anybody was upset by it, and I certainly meant no offense.
As to “falsifiability”, am I correct that you simply mean that the process by which Popper determined that falsifiability was a distinctive element of scientific hypothesis was inductive? If so–sure, I think it was. I see nothing “philosophically problematic” about that. Had he declared his vision of science as a priori knowledge it would make no difference. Only if Popper claimed that his demarcation of ideas was itself a scientific hypothesis would we enter the muddy waters.
Thank you, Spiritus Mundi. I believe you have offered examples which support my view that quantitative assessments – whether by something as complex as electronic instrumentation or as simple as a yardstick – are not necessary for scientific investigation. Instruments? As long as we include human senses, yes. Quantitative measurements? No.
You are correct in your interpretation of my remarks about Popper inducing falsifiability. I agree with your point about the principle itself not being a scientific hypothesis, but I didn’t say it is. Clearly it isn’t, because it has no testable implications. I liked your references to early science because, in a sense, science has returned to its roots: trial and error.
I don’t mean that there is any philosophical problem with Popper per se, but rather with the more general public, some of whom are scientists, who do not understand that the demarcations are not scientific. So many people believe that the scientific method is undeniably valid above and beyond any other method of inquiry, ascribing to it a nearly mystical power, solely because they do not realize that it is, in fact, philosophically derived. So long as anyone speaks of falsifiability as indisputably validating scientific inquiry, I will be there to point out that falsifiability itself is not validated.
And for what it’s worth, I knew you meant no offense.
Well, I really really wanted my quantifiability, but I am forced to admit that in areas where it is not germane, still one can follow the scientific method, and make useful observations, deduce probable interactions, and make predictions which provide falsifiability. But without requiring falsifiability, how do you keep the Astrologers out of the NAAS, or the Palmists for that matter?
Is Astrology a Science? I don’t think it is. Whether it is an ancient art, filled with arcane wisdom, or pointless clap-trap, intended to separate fools from funds, is not my question. Is it science, and if not, why not? Palmistry makes predictions, and while they are seldom specific, if I don’t require specific standards of falsifiability, it really seems that that is just another case where quantification is not desirable. (At least not desired by the Palmists, and their clientele.)
Now, I don’t think that every theory statement is going to be falsifiable. I don’t think it is unscientific to examine the theoretical interactions of unobserved forces predicted by currently unproven theories. But as soon as you do get a prediction, and find an observation, whether quantified or not, which contradicts your prediction, you have to change your theory. Even failing to find what your predictions say you should find might not require that you declare your theory wrong, but you have to consider that it ain’t hunkered down on all four legs.
So how does this fit into definition?
In mathematics you have conjectures: formal statements which seem as though they ought to be true, but which have been neither proven nor disproven (and which, in some cases, cannot be proven either way). Science deals with such things all the time, as well, but how does it differ from reading the entrails of a goat to determine whether the conjecture is true? I think it differs, but just what defines reading entrails and conducting scientific research to be fundamentally different? Repeatability? I got a whole herd of goats, here. Quantifiability? I only need one goat, and science may or may not need to count or measure, by our current level of agreement.
If we don’t include the requirement that science is a method that always accepts falsification by observation as sufficient evidence for rejection of a theory when such falsification can be demonstrated, then it seems to me that I will have to add a department of Arcane Arts to the curriculum for a BS.
Bachelor of the Lesser Mysteries, Master of the Hidden Wisdoms, Doctor of Unrevealed Arcana
I have to admit that when I originally opened this thread, I figured it would sink like a stone and just disappear. I never expected so much spirited (and high-quality) dispute!
There are far too many details here for me to address in one post, but I’ll try to get to as many as I can. Please accept my apologies if I miss something you think is important, remind me, and I’ll try to get back to it.
First off, I’m beginning to notice a growing consensus from a number of posters who seem to feel that my definition of science fails to address the central point of the entire scientific endeavor: namely, that the act of science involves theorizing, in some sense. I think starryspice made this point originally, on the previous page, but it has also been brought up (twice now) by DSeid, once by Jarevan, and maybe by others as well.
Lest I seem too pig-headed, just let me state at the outset that I regard this accusation as one of the most serious leveled against my definition thus far (along with the question regarding taxonomy, I think). I am in fact of two minds about the matter, but, just for the sake of playing devil’s advocate :D
To begin with, I’ve never meant to imply that the construction of theories was not an essential aspect of scientific work. Of course it is, and only a fool would claim otherwise. Thus, when DSeid points out, “But the act of instrumental measurement is not science,” well, the fact of the matter is, I agree. Random, unsystematic collections of data from some kind of recording device, assembled just for the sake of collecting, do not a science make. That is why I included the word “interrogation” in my definition. I chose this particular word because I felt it connoted the act of – oh, shit. I just realized that I fucked up the OP. Ah, never mind. I meant to define science as “the interrogation of Nature by means of instrumental measurement…”
Uh….can we start over?
…but never mind, investigation will do as well. Investigation also should at least imply some sort of systematic, directed way of asking questions of Nature. (I really do prefer the word “interrogation” here, which brings to mind the image of a policeman or detective “cross-examining” a potential criminal in a directed, systematic manner, with the intent of extracting some kind of information from him/her. Nature, in this case, might be considered the criminal, or potential criminal.)
But there is a reason why I have not placed emphasis on this particular aspect of scientific work, which I hinted at as well in my OP. It is because I reject the falsification criterion. I do not think that one can successfully differentiate between science and non-science by claiming that scientific theories are always falsifiable, and I would like to point out that this is without doubt the consensus view of practically all modern historians, sociologists, and philosophers of science. (I will return to this point, hopefully, at a later date, because as I noted on the previous page, it is one of the more common criticisms of my position).
In addition, I submit that if one does reject the falsification criterion as a demarcation between science and non-science, one lands smack dab into a real can of worms. The problem is that it is difficult, if not impossible, to locate the formal elements of a statement that allow us to classify it as “scientific,” or “non-scientific.” And this in turn means that we risk not being able to differentiate “science” from “non-science” at all. In fact, in professional circles, the attempt to find a criterion for this differentiation has completely foundered, and so I assume that my definition wouldn’t hold water in such a context (although I’m not sure why it wouldn’t – so I decided to start this thread).
Perhaps a counter-example will help clarify my point. If we reject Popper’s criterion (as I will later argue we have good reason to jettison), how are we to differentiate between astronomy, on the one hand, and astrology, on the other? I’m certain that there are very many unfalsifiable theories in astronomy, and very many falsifiable observations in astrology – so Popper is no help to us here. Without falsification as a crutch, how are we to justify our exclusion of astrology from the field of Natural Science?
Lookit: the point is that many extremely non-scientific modes of inquiry, or methods of knowledge production, are also directed towards, in DSeid’s words, “…the development of the best possible models, or metaphors, about the world.” I work in such a field myself (psychotherapy). I also feel that many of these non-scientific fields of study (although not all), with methodologies different from that of “normal science,” produce reliable information about the world around us. So this isn’t a question about the validity of knowledge produced by an “alternative” methodology, such as, let us say, praying to Jesus. I suspect that Lib, for example, has acquired a great deal of knowledge by this method, which is subjective and free of any kind of measurement. But it ain’t exactly scientific knowledge, now, is it? The fact is, using DSeid’s definition above, we cannot separate Lib’s spirituality from Jarevan’s chemical experiments – yet the two seem to be very different ways of acquiring knowledge to me, and to most others as well.
Okay, if ya’ll are still following my argument thus far, and I haven’t bored ya to tears or anything, here’s where I stand: Human beings are constantly investigating (or interrogating) Nature in various ways. We consider some of these ways to be “scientific,” whereas we consider others “unscientific” (or, maybe, “nonscientific”). So – what specific characteristic can we point to that separates science from other forms of investigation (if it ain’t falsification)? Well, one thing for sure – scientists sure use a lot of devices, yardsticks, clocks, apparatuses, etc., when they perform their investigations. Ipso facto….
Having written all of the above, I want to reiterate that I am still of two minds about this point. Perhaps some sort of modification of the definition might be in order (if we don’t reject it out of hand). Maybe….hmmm….
Natural Science: the systematic interrogation of Nature by means of instrumental measurement.
No?
Okay, on to more specific points:
DSeid:
That’s what I’m getting at. Can we define “instrument” as a specific kind of man-made tool?
This is where we disagree, sorta. I would argue that the tools we use to extend our perceptions also have a profound effect on those perceptions as well (although not, perhaps, in all cases). Our “picture” of the universe is in fact shaped by the tools we use to perceive it. This is also an important idea I’m trying to get across, which is kind of embedded in my definition, but you are quite eloquent in providing examples of the way these tools structure our “scientific” world picture:
Jarevan:
Well argued, and compelling. Will consider relinquishing my definition completely on the basis of your post, but I’m not quite ready to give up the ghost yet. So, while I’m still breathing, and can play DA: naturally, all the work you do in the lab involves various measurements, yes? With weights, scales, litmus papers, graduated beakers, and so forth? If so, your example at least does not refute my definition directly. If not, how do you know if your chemist has produced A, as opposed to X? (This is not a rhetorical question. I know next to nothing about analytic chemistry, and am genuinely curious.)
Regarding your example of falsification: let us say that, rather than producing substance X, your chemist does indeed produce molecule A by reacting molecules B and C under conditions D. Trick question: would you consider that the chemist has confirmed her hypothesis in that instance?
rsa:
The devil made me do it.
Well, that might have come off sounding somewhat more glib than I intended. To be more specific, I’ll turn again to Popper. Sir Karl claims that if one accepts his demarcation criterion, certain other “conditions” are “implied” by it. Among these conditions is priority in theory choice: Popper claims (rather absurdly, in my opinion) that theories (or more correctly, hypotheses) are to be preferred on the basis of their ability to generate falsifiable observation statements. In other words, the more observation statements the theory generates, the “better” it is as an object of scientific inquiry. My point was that Einstein’s theory did not, in any obvious sense, immediately generate any observation statements; that, in fact, despite the fact that it was almost immediately accepted by scientists, it took them a while to figure out a way to test it.
Regarding this:
I have no idea what a “Higgs boson” might be, but…yeah, if one accepts my definition, any physics theory that cannot be empirically tested falls outside the circle of science. Either that, or one must give up the claim that science is empirical: and was it not you who, in an earlier post, suggested that I add the word “empirical” to my definition? As I see it, theories are neither scientific nor non-scientific. They’re just speculations, or metaphors, or narratives, about the world around us. I think.
Did I just hear something snap?
Spiritus:
[Paleolithic scientist]
Og see fire.
Og think, “Fire COLD!”
Og think, “I make falsification test.”
Og put hand in fire.
Fire HOT!!!
OG SMASH!
[/Paleolithic scientist]
Sorry. I just couldn’t help myself.
Anyway, briefly: no, it is not my intention to imply that scientific research is the “end product of a mature technological society” – whatever that might be. A sextant, for example, or a yardstick, or a pair of scales, would qualify as instruments of measurement, and they don’t require a lab coat.
Having said that, it is my impression that there exists among historians of science a general consensus that “science,” as we know it, really got started around the 1600s, in tandem with the popularization of Bacon’s, and perhaps Galileo’s, work, along with the establishment of “scientific communities” (such as the Royal Society, established in, I think, 1660). Most scholars, as far as I know, relegate work done previous to this watershed to categories such as “magic,” “alchemy,” “crafts,” or maybe as some kind of “proto-science.” You can redefine science to mean something along the lines of “rational, open, skeptical inquiry” if you wish, but I fear that we lose something with that definition. If you claim that my definition is too narrow (and it may well be), then I warn that your definition risks becoming too broad.
Your post deserves a more detailed response, but after all of the above I’m spent and so I must close for now. I will try to get back to you (and that ever-daunting second point) as soon as I get a chance. I would like to put this question to you, however: assuming you reject Popper, then how would you characterize the scientific endeavor, and draw a line between it and, let us say, the subjective spiritual experiences of a devotee of Kali?
I will look forward to an expansion on this point, since I have already submitted that the idea in question is the potential falsification of theories, not the actual falsification. Maybe it’s a nitpick but I do think there is a difference. I shall refrain from further comment on this point until you have had the chance to say more. For what it’s worth, though, I think that your logical extrapolation from the lack of a principle of (potential) falsification, which follows the above quote, is compelling.
And I would submit to you that the tools themselves are also shaped by our perceptions. Consider what Spiritus Mundi last said:
The point is that we design and construct instruments which, to a large degree, confirm and conform to our sensory perceptions because that is the only way through which the vast majority of us can imagine the universe.
My real-world example was not intended to refute your definition, but rather to illustrate my own thinking and experiences about how potential falsifiability, “good data” vs. “bad data”, good theory vs. flawed theory, and the “human element” come into play. Yes, as an analytical chemist – my actual field of specialization is mass spectrometry – my work relies heavily on electronic instrumentation to “extend” my sense of sight, if you will, to the molecular and atomic levels. Mass spectrometry can yield information about molecular weight, molecular structure and the elemental composition of a material. Notice, however, that I have not said that measurement by instrumentation is never important to science; I have only maintained that it is not essential.
It is a trick question – you devil – but the answer is “no”. The best we can ever say, and the way I and my group do say it, is: the results are consistent with the proposed molecule. This is not just a CYA-type statement. It reminds the chemist that no amount of supporting data will prove her hypothesis absolutely. Recall what I said to eris early on: I only need one piece of good data to prove a theory/hypothesis wrong. But no amount of good data will prove it right 100%.
Now, you may ask: so what keeps the chemist from being paralyzed in her tracks, if she can’t know anything with 100% certainty? Well, in practical terms if all of the analytical data she has are consistent with her predicted result, then it is reasonable for her to assume (key word) that the hypothesis was very likely sound, and go on to the next step – which may be doing a reaction on molecule A to produce G, then H, etc. all the way to target drug candidate Z. Or perhaps molecule A is the proposed drug, in which case it is sent for biological assay to determine whether it has any efficacy in vitro against the target disease or condition. One does reach a point, in practice, where the accumulation of “consistent” results effectively amounts to confirmation – because the likelihood of the chemist having n flawed hypotheses in series (that is, each presuming the correctness of the last, as I describe) and still achieving “consistent” results in the final step becomes quite low as n increases. This does not preclude the possibility, even then, that she has been wrong all along. But it is extremely unlikely.
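The compounding-unlikelihood point above can be put in rough numbers. Here is a toy sketch (my own illustration, not anything from the lab: the probability p that a single *wrong* hypothesis would nevertheless happen to yield “consistent” analytical results is a made-up assumption, as is the independence of the steps):

```python
def prob_undetected_error_chain(p: float, n: int) -> float:
    """Probability that n serially flawed hypotheses ALL slip past analysis,
    assuming each wrong hypothesis independently escapes detection with
    probability p. Both p and independence are illustrative assumptions."""
    return p ** n

# Even a generous 50% chance of a wrong step looking "consistent" shrinks
# fast over a ten-step synthesis route:
for n in (1, 5, 10):
    print(n, prob_undetected_error_chain(0.5, n))
```

The design point is simply that the failure probability multiplies across chained steps, which is why an accumulation of “consistent” results effectively amounts to confirmation in practice even though no single step is ever proven absolutely.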
So, say she goes through all this chemistry, hands it off to the biologists, pharmacists, etc. and we have a drug which makes it all the way to market. Am I saying that we never really know for sure that the molecule is what we think it is? Yes, I am, because nothing ever constitutes 100%, absolute proof. Does it matter? I’m not sure. If the universe behaves as if that were the right molecule – the person with the disease who takes the pill gets better – no, I’m not sure that it matters whether the model by which we got there represents “reality” or, like Camelot, “it’s just a model”.
That’s okay, so long as you are not actually deluding yourself that prehistoric human beings were less intelligent, inquisitive, and concerned with the workings of nature than we enlightened children of modern science.
Then I would submit that these “scholars of science” are guilty of monumental perceptual bias–perhaps even cultural hubris. Most scholars that I know group mathematics, particularly theoretical mathematics, as a science. No scholar that I know believes that before 1600 only “proto-mathematics” existed. Scientific inquiry did not begin with the European Renaissance, despite the towering figures of Bacon and Galileo.[ul]
[li]When Eratosthenes calculated the circumference of a spherical Earth based upon observation of shadow length–he was doing science.[/li][li]When Ptolemy constructed his model of the geocentric solar system, with those lovely epicycles and concentric celestial spheres–he was doing science.[/li][li]When Archimedes formulated his Law of Buoyancy–he was doing science.[/li][li]Jabir ibn-Hayyan must have done something back in the 8[sup]th[/sup] century to earn the nickname “father of chemistry”. You would have liked him; he invented lots of instruments to assist/improve his experiments–oh, and he was doing science.[/li][/ul]
I have studiously resisted setting forth any personal definition of science, but if yours eliminates the works above, along with the works of the early taxonomists, the observations of Tycho Brahe, theoretical physics, et al.–then you are most definitely losing more than I am willing to let go.
An interesting warning, considering that I haven’t given any definition. I have simply pointed out elements which I consider “science” but that your definition eliminates.
Not to be glib, but that would depend upon the specific nature of the subjective spiritual experience. Our little thuggee might be inspired to genetic engineering.
But that was my point. The “Higgs Boson” is subject to empirical tests. Do you remember that big hole that was dug south of Dallas? The Superconducting Super Collider’s primary purpose was to find the Higgs. Now other labs are working on the problem.
[sub]You should know what a Higgs Boson is. I recommend The God Particle by Leon Lederman. Buy it for the humor if nothing else. The guy should have been a comedian.[/sub]
Oh keerist no, that was why those little quotation marks were there. I consider the social sciences to be “sciences”, merely ones that are vulnerable to the retention of competing paradigms instead of the ruthless elimination of theories and fairly robust refinement of the hegemonic paradigm that exist in the “hard sciences” (and to a lesser and more problematic extent in economics). A human mind is, ironically, in many respects a much tougher system to understand than the ones that allow it to exist. As for trying to understand multitudes of them? Egads, I’m amazed we’ve even got this far. There’s a reason Psychohistory was such a far-out idea.
Jerevan, meet Thomas Kuhn. Thomas, meet Jerevan.
Observations which science can’t adequately model? Kuhn called them “anomalies” and said they were what prompted paradigm shifts. Enough anomalies and people start hunting around for a new paradigm. If it’s found, the community starts switching over en masse. The WAY in which he said the community switched over was somewhat controversial; he believed it required old scientists dying and young ones choosing the new theories, because people aren’t willing to change their minds. Being an optimist, I tend to believe that paradigm shifts can be willing. After all, the community itself risks credibility if too many inexplicable anomalies start showing up.
Svin: Ok, just to be clear on your position: if it doesn’t have numbers, it ain’t science. Yup or nope?
If nope, I ain’t buying it, simply because language (the basis of qualitative research) can and does serve as a useful, perhaps necessary tool for studying human behavior. Trying to translate human behavior into neat numerical patterns (outside of perhaps economics) introduces more problems of validity than it solves a lot of the time. Witness the enduring difficulty in the process of writing surveys. How do you translate “happy” and “honour” into numbers without introducing so many basic assumptions that the entire enterprise risks falling like a house of cards? I rather hope that you don’t actually think that “humans can’t be studied scientifically”; that just gets right back to that Aristotle guy.
I also think that you’re underplaying the importance of deductive work in science as opposed to inductive. Einstein didn’t determine the effects of near-light speed travel by watching starships zip by; he did it (IIRC) by deducing it from the then-accepted predictions of the interaction of light, mass, and energy. If you can prove that 2+2=4 by counting rocks and defining what “2”, “+”, “4”, and “=” mean, then you can figure out that 4+4=8 (assuming you’ve defined “8” as well). You’ve already inductively proven the former, so you can prove the latter deductively.
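If it helps, that inductive-then-deductive move can be caricatured in a few lines of Python. The tally representation below is just my own toy illustration, not anything formally rigorous:

```python
# A toy illustration of the inductive/deductive split in arithmetic.
# Numbers are represented as tallies (tuples of marks), the way you
# might "count rocks"; addition is defined purely by a rule.

def number(n):
    """Build the tally representation of n: a tuple of n marks."""
    return ('|',) * n

def add(a, b):
    """Addition defined as concatenation of tallies -- a rule, not an observation."""
    return a + b

# "Inductive" step: check the rule against a directly countable case.
assert add(number(2), number(2)) == number(4)

# "Deductive" step: having accepted the rule and the definitions,
# 4 + 4 = 8 follows without counting any new rocks.
assert add(number(4), number(4)) == number(8)
print("4 + 4 =", len(add(number(4), number(4))))  # -> 4 + 4 = 8
```

The point being: once the definitions have been grounded in one countable case, the later result is derived from the rule, not from a fresh observation.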
(I’m probably restating the opinions of others that just went far over my head, but what the heck, if they’re willing to give me the posting space…)
As for falsifiability, I believe it’s important on simple utilitarian grounds. Without attempts to disprove (as opposed to prove) theories, you invite satisficing. Only by the ruthless and constant attack on theories can you be sure that a theory is not just one of many possible correct theories, but the correct theory. Even if it isn’t a “definition of science”, it’s so bloody useful that it’s a good idea anyway.
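That “ruthless attack” can be caricatured as eliminative testing. Here’s a toy Python sketch, with made-up rival “theories”, just to show why merely confirming existing data invites satisficing:

```python
# A caricature of eliminative testing: several rival "theories" all fit the
# data seen so far; only attempted disproof (seeking new, discriminating
# observations) narrows the field. Everything here is a toy example.

candidates = {
    "n -> 2n":    lambda n: 2 * n,
    "n -> n + 2": lambda n: n + 2,
    "n -> n**2":  lambda n: n ** 2,
}

# All three agree on the data collected so far (input 2 gives 4), so merely
# re-confirming with the same input would be satisficing.
observations = [(2, 4)]

# Attacking the theories means hunting for inputs where they disagree.
observations.append((3, 6))   # a new, discriminating observation

survivors = {name for name, f in candidates.items()
             if all(f(x) == y for x, y in observations)}
print(survivors)   # only "n -> 2n" survives the attack
```

The confirming observation left three “correct” theories standing; the attempt at disproof left one.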
I would say that if I learned from Jesus that photons are emitted by electrons collapsing from one level to another, it would be no different than you learning it by moving your eyes in horizontal patterns across the pages of a book. You seem to be saying that scientific knowledge cannot be mediated. If that’s the case, then Einstein wasn’t a scientist.
I’ll just start with your next-to-last post, and work my way down. However, before beginning, I’d like to point out what I believe to be a factual error on your part:
On the contrary. Did you not claim, “Science is what scientists do,” on the previous page? (You thought I missed that one, didn’t you?) Anyway, what I derived from your next-to-last post was a group of characteristics that you seem to think typify the nature of scientific inquiry:
That passage seemed to me to be somewhat in the nature of a definition, but if I’ve jumped the gun here, please accept my apologies. You might also want to present your definition of science, if you have one, so that I know where you stand.
Anyway, let’s go back to square 1. As I pointed out in my last post: I have nothing in particular against what I perceive to be your strictly phenomenological approach to this question, nor would I deny that all knowledge is mediated, ultimately, by the subject. Actually, that claim is something of an understatement. In fact, I emphatically agree with you on this point, assuming that we are on the same note. But the examples you post here seem to indicate, again, that I’m not getting my message across. So I’m going to try a little schematic:
Assuming realism, we can conceptualize a straightforward empirical investigation of Nature in the following form:
Subject --------> Object
Now, in this case, the Object of study is apprehended directly with the senses. Positivists such as Comte, and especially Mach, argued that such a straightforward investigatory program is in itself unsatisfactory, because the reports of the Object brought back by different subjects tended to vary wildly. They therefore wanted to control for the variations caused by the “equivocality” of the individual subject by the techniques of “intersubjective sense certainty,” on the one hand, and “methodological certainty” (basically, everyone uses the same methods) on the other. In this case, the Object was to be reduced to only those characteristics that a majority of subjects agreed it exhibited, but this is a bit of a side issue. However, regardless of one’s opinion on that, it is certainly true that scientists, in order to produce as accurate a picture of the Object as possible, are anxious to eliminate any distortions that might be caused by the “subjectivity,” as it were, of the observer. One way to do this is to insert, between subject and Object, some sort of measuring device that all subjects can utilize as a means of standardizing their observations. There are in my opinion two variations on this method:
Subject ------> Instrument -------> Object; or
2) Subject ------> Instrument
   Subject ------> Object
With regard to method 1), which I argue is the preferred method of scientific inquiry: Sometimes, there are several stages intervening between Object and subject, especially in modern physics research. Take, for example, the experiment designed by Raymond Davis to observe solar neutrinos. To measure neutrinos, which are chargeless and (as then thought) massless particles, Davis created a sophisticated measuring instrument: an olympic-sized swimming pool filled with dry-cleaning fluid (C[sub]2[/sub]Cl[sub]4[/sub]). This fluid, in turn, contains a specific isotope (Cl[sup]37[/sup]) that, theoretically, reacts with neutrinos. This reaction, in turn, creates a radioactive isotope of argon (Ar[sup]37[/sup]). Thus, the presence of Ar[sup]37[/sup] in the swimming pool was understood as evidence of the passage of neutrinos. (Since this same reaction is also produced by the passage of cosmic radiation through the solution, the tank had to be isolated from such radiation – so it was placed at the bottom of a mineshaft, a mile deep into the earth.) To measure the presence of Ar[sup]37[/sup], the tank was “swept” with helium gas once a month, the argon extracted from the gas and collected onto a “supercooled charcoal trap,” and then the infinitesimally small emission of “Auger electrons” was recorded by a tiny Geiger counter. In order to separate this specific “Auger” signal from other sorts of background radiation, the counter’s record was subjected to a sophisticated data analysis, and finally, the presence of Ar[sup]37[/sup] was plotted on a graph. Technically, this graph was the phenomenon, if one will, that presented itself to the senses of the investigating subject. (I derive this summary from Pinch, Trevor: “Towards an Analysis of Scientific Observation: The Externality and Evidential Significance of Observational Reports in Physics,” Social Studies of Science, vol. 15, 3-37.)
How many stages is that? Well, let’s see: because it was theoretically postulated that these neutrinos are produced by fusion within the interior of stars, this would mean that in this case our object of study would be the core of the Sun (the experiment was, in fact, an attempt to falsify an astrophysical theory about fusion as the primary solar energy source). And the pattern would look something like this:
Subject —> Splotches on a graph —> computer analysis —> Geiger counter reading —> Ar[sup]37[/sup] atoms —> “solar neutrinos” —> fusion occurring in the Sun’s core.
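For what it’s worth, that chain of mediation can be sketched as composed stages in Python. Every stage, name, and number below is invented purely for illustration, and is not drawn from Davis’s actual figures:

```python
# A hypothetical sketch of an instrument chain: each stage transforms the
# previous stage's output, and the subject only ever sees the final graph.
# All stages, efficiencies, and numbers here are invented for illustration.

def fusion_in_core(seconds):
    # stand-in for the unobservable Object: neutrinos produced per second
    return 100 * seconds

def chlorine_tank(neutrinos, capture_efficiency=1e-6):
    # only a tiny fraction of passing neutrinos convert Cl-37 to Ar-37
    return int(neutrinos * capture_efficiency)

def geiger_counter(argon_atoms, counts_per_atom=0.9):
    # the counter registers most, but not all, of the decays
    return int(argon_atoms * counts_per_atom)

def data_analysis(counts, background=2):
    # subtract an assumed background signal before plotting
    return max(counts - background, 0)

# The "observation" is the end of the chain, not the Sun itself.
month = 30 * 24 * 3600
report = data_analysis(geiger_counter(chlorine_tank(fusion_in_core(month))))
print("points plotted on the graph:", report)
```

What the subject finally sees (the plotted points) is several functions removed from the Object, and every stage embeds theoretical assumptions of its own.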
So, to return to my argument: what I see here is a tendency, in science, towards greater and greater instrumentation. Certainly, if we lack instruments, we will continue to stumble (or, in some cases, dance gracefully) along, as best we can, with our investigations – until such time as we can develop some sort of measuring device that will aid our work. We then automatically prefer to use such a device, because of the various advantages it provides – such as were listed by DSeid, above. There may be cases in which it is difficult to pass judgement on the state of a field of inquiry, such as taxonomy; but the general thrust is towards the development of reliable, quantitative instruments of measurement. In taxonomy, the minute such an instrument had been developed, the direct, subject-to-Object model of research was abandoned.
I think that I want to call theories related to these kinds of measurements scientific, and other theories non-scientific – with the caveat that non-scientific theories can, of course, be valid, intelligently designed, and systematic.
Now, as to your examples:[ul][li]The change in the color of litmus paper is of course a chemical reaction observed by the human eye. But here that reaction is, as it were, taken for granted. Your object of study, in this case, would be the pH value of a given solution, and again, you’ve inserted an instrument, a given means of quantifying the acidity of a solution, between you and the Object.[/li][QUOTE]
** Were the early chemists not scientists because they lacked spectrometers?**
[/QUOTE]
I don’t know about that, but I would say that anyone whose basic mode of inquiry into the chemical nature of the Universe did not involve measuring and quantifying it in some way would instead be an alchemist, rather than a scientist. Or do you regard alchemy as a natural science?
Uhh…pending clarification of the question, since I’m no expert on astronomy, I’ll go ahead and say yes to this one. The observation becomes scientific once it is converted to scale. If I go out at night, sit down in a pasture, and count stars, I do not think that falls under the category of scientific activity. If I try to identify and organize stars according to how bright they are, that might be a kind of proto-scientific activity. When I start measuring them, with, let us say, a sextant, then I have become an astronomer.
Undoubtedly true. But these patterns had to be measured against some sort of stable background, did they not? Thought experiment: put 50 researchers in 50 laboratories, each with one of those “chemical clocks” that got Prigogine thinking in terms of chaos. Ask them to repeatedly plot the “bifurcation point” – when the clock changes color – against time, so as to ascertain whether or not the pattern is random. But – do not give them watches or any other method of measuring time in a standard manner. Then ask them to compare their data.[/ul]
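A quick Python sketch of that thought experiment (all numbers invented): every researcher sees a perfectly regular pattern, yet without a shared standard of time their reports cannot be compared.

```python
# Simulating the thought experiment: 50 researchers time the same true
# sequence of color changes, but each researcher's private "clock" runs
# at its own unknown rate. All numbers are invented for illustration.
import random

random.seed(0)
true_intervals = [5.0, 5.0, 5.0, 5.0]   # the chemical clock really is periodic

reports = []
for researcher in range(50):
    rate = random.uniform(0.5, 2.0)      # each private clock's unknown speed
    reports.append([round(t * rate, 2) for t in true_intervals])

# Each researcher individually sees a perfectly regular pattern...
assert all(len(set(r)) == 1 for r in reports)
# ...but the researchers cannot agree on what the interval actually is.
print("min reported interval:", min(r[0] for r in reports))
print("max reported interval:", max(r[0] for r in reports))
```

Every individual data set is internally consistent; it is only the attempt to compare across subjects that exposes the need for a standardized instrument.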
Of course I do. But these observations do not bear upon scientific investigation except in the most tangential manner – except, arguably, the bird calls. Broad agreement that light is light and dark is dark is not what this is about. This is about getting into the details of inspecting Nature at the root; and as we know, the devil resides in the details. When we begin inspecting these details, we find major variations in the reports presented by different subjects. Two people, listening to a bird call at night, can identify it as originating from different species. How do we decide which report is accurate?
I have not, I repeat, “removed human sensory impressions from the equation.” Having said that, the development of the Kelvin scale was not as straightforward as one might think: refer back to my quote from Kuhn, above. Human subjects had to negotiate about what the scale actually measured, for a while, because it did not seem to jibe with their sensory impressions, and produced curious results. Eventually they came to accept it, and to replace their direct sensory impressions of the Object with thermometric readings. (Davis’s measuring tank, by the way, had to go through a similar process.)
Back later, with more (I hope. Girlfriend is beginning to think I spend too much time here.).
Let me see if I’ve got this right. The key question here seems to be -
What is the difference between “science” and other forms of rational, skeptical inquiry?
Is it falsifiability? Clearly Popper is mistaken. Falsifiability is desired but not required in scientific inquiry. Every science, every epistemology, ultimately rests on some postulates that are not falsifiable within itself. There are things that are true that are not provable. Science accepts these truths … but always with some small, albeit sometimes infinitesimal, doubt. Moreover, sometimes paradigms shift NOT because one model was falsified, but because another model explains more or explains better. But I’ve said this before.
Is it that science is focused on the continual betterment of its models?
That a system of inquiry is “scientific” when it approaches improvement of its models by the following methods:
-reducing doubt with collection of more data points that are consistent with it, and
-reserving doubt, even a small amount of it, as to whether or not the model has it quite right. Being willing to discard even basic postulates in the face of sufficient evidence or of a “better” model.
So that skepticism is important, but only insofar as it serves the purpose of model improvement.
So what happens, practically? We see data. We suspect a pattern, usually by matching against other kinds of patterns we’ve seen before, maybe in entirely different domains. We apply that pattern as metaphor to the data at hand and make predictions about what other data points there will be if that metaphor is apt, and we are primed to look for those points. It guides our search for additional data. We may then find more data consistent with our suspected pattern, which provides positive feedback, strengthening our belief that this is indeed the correct pattern and further guiding the hunt for more data. Or we may find inconsistent data, providing negative feedback to that suspected pattern, perhaps resulting in another pattern winning out in competition, or merely causing us to discard the current pattern and “reset”, starting the search for a pattern match from scratch.
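The loop just described can be sketched in a few lines of Python; the thresholds and update rules below are made up for illustration, and are not part of any real adaptive-resonance model:

```python
# A minimal sketch of the feedback loop described above: a candidate pattern
# gains confidence from consistent data, loses it from inconsistent data, and
# is discarded ("reset") if confidence falls too low. All thresholds invented.

def run_inquiry(pattern, data, gain=0.2, loss=0.4, floor=0.1):
    confidence = 0.5
    for point in data:
        if pattern(point):                           # point matches the prediction
            confidence = min(1.0, confidence + gain) # positive feedback
        else:
            confidence = max(0.0, confidence - loss) # negative feedback
        if confidence < floor:
            return "reset"                           # discard the pattern, start over
    return f"resonance at confidence {confidence:.1f}"

is_even = lambda n: n % 2 == 0
print(run_inquiry(is_even, [2, 4, 6, 8]))   # consistent data reinforces the pattern
print(run_inquiry(is_even, [2, 3, 5, 7]))   # inconsistent data forces a reset
```

Note the asymmetry between gain and loss: making disconfirming evidence count more heavily than confirming evidence is one crude way of building skepticism into the loop.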
Science is looking for a resonance between the data and the ideas, a feedback loop. And science is devoted to the search for those additional data points to either reinforce or weaken the current resonance. (This, BTW, is a bastardized meta-application of a neural network theory called “adaptive resonance theory.”) Any inquiry that is not devoted to the analysis of data to the end of providing either positive or negative reinforcement to the currently “resonant” pattern, to the end of altering our amount of doubt (see how well I avoid the f-word ;)) is not scientific.
Measurement, in some form, subjective or objective, is thus required to get that data. If we define science as a group activity, then the more objective the better.
Spiritus will, of course, have to speak to this directly, but I took the quoted remark as tongue-in-cheek. A humorous way of stating, as I also did, that in constructing a definition of science, one has to consider first who is allowed to participate in constructing that definition to avoid a tautology.
This is an excellent example of the difficulties – what I earlier called “grunt work” – of scientific experimentation: conceiving of and considering all possible variables which may contribute to the results in a manner unconnected to the hypothesis in question; controlling or eliminating those extraneous variables in the experimental parameters; and interpreting the data in light of the idea that the variables contained in the hypothesis itself (quantitative or otherwise) are the only ones which will dictate the outcome of the experiment. It’s not easy, especially when one can only make reasonable assumptions about the significance of the variables which need to be controlled/eliminated – or when one considers that there may well be variables at work which one has not been able to imagine.
But, like your point about flawed theories being discarded only when a better one has been proposed, this involves a regressive assimilation (of history) into the definition of science. The fact that, historically, many areas of inquiry have tended towards a greater use of/dependence upon instrumentation to advance the frontiers of knowledge does not mean that a working definition of science must incorporate this history.
To do so would be like making the following argument (a rather extreme example, I admit): the New Testament is the basic document from which definitions of Christianity are drawn. Nothing in it requires believers to force the conversions of non-believers; certainly nothing indicates that burning non-believers at the stake is a defining act of Christianity. However, at various points in the history of the Roman Catholic Church, non-believers were burned at the stake and otherwise persecuted. Since that is a historical fact, does that mean that a working definition of modern Catholicism must include some aspect of the Church’s past violent treatment of non-believers?
If a definition of science must somehow incorporate its own history, then that means we must constantly update our definition of science to reflect historical trends. This is certainly one useful way of looking at it, but it comes very, very close to agreeing with Spiritus’s “science is what scientists do” quip – bending the definition back on itself to make a circular argument.
In spite of some of its more mystical aspects, alchemy did lay some of the concrete foundations which led to the development of modern chemistry. I wouldn’t be so quick to dismiss alchemy et al., in its entirety, as devoid of modern approaches to scientific inquiry.
Clarification: a logarithmic scale of measurement is one in which a step of 1 on the scale corresponds to multiplying the property being measured by a fixed factor. Earthquake magnitudes are measured this way: on the Richter scale, an increase of 1 represents a ten-fold increase in measured amplitude. Star magnitudes are also logarithmic, with two twists: the scale runs backwards (smaller magnitudes are brighter), and the factor per step is the fifth root of 100, about 2.512, so that a difference of 5 magnitudes corresponds to exactly a factor of 100 in brightness. Thus, a star of magnitude 3 is about 2.5 times brighter than a star of magnitude 4, and 100 times brighter than one of magnitude 8. In the Star Trek series, “warp factors” are also a nonlinear scale, though (I think) a cubic rather than a logarithmic one in the original series: warp factor w corresponded to a velocity of w[sup]3[/sup] times the speed of light.
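For the curious, the magnitude arithmetic fits in a couple of lines of Python, using the standard convention that 5 magnitudes equal a factor of 100:

```python
# Brightness ratio implied by a difference in astronomical magnitudes,
# using the modern convention: 5 magnitudes = a factor of 100, and
# *smaller* magnitudes are brighter.

def brightness_ratio(m_fainter, m_brighter):
    """How many times brighter the second star is than the first."""
    return 100 ** ((m_fainter - m_brighter) / 5)

# One magnitude step is about a 2.512x change in brightness, not 10x.
print(round(brightness_ratio(4, 3), 3))   # -> 2.512
print(round(brightness_ratio(5, 0), 1))   # -> 100.0
```
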
On the issue of (proto-)science: so if I sit out under the stars with only my eyes to gauge relative magnitudes, and theorize that the apparent brightness of a star is some combined function of its distance and intrinsic brightness (*i.e.* at its surface) – without any tools to measure or quantitate those things – that is not scientific? What about the nature of the observations and the theory is fundamentally changed (from non- or proto-scientific to scientific) by measuring those distances and brightnesses with instruments?
Here is the danger I see in the concept of proto-science, as introduced in your last two posts. If I understand, you are defining “proto-science” roughly as “a mode of investigation which precedes the existence of useful instrumentation”. In doing so, you are obviously aware that certain areas of inquiry into the natural world throughout our history are thereby dismissed as not being scientific – even when their conclusions withstand modern scrutiny. Again, I think this becomes another regressive assimilation verging on a circular argument: “science is what scientists do now, not then”. It draws a line (maybe a circle?) in the historical sand which, I think, cannot and ought not really be drawn – and conceivably subjects our current “scientific” efforts to a similar future dismissal.
OK, so I think this answers my question: no, you don’t consider psychology and the like to be “sciences” in the same manner as chemistry and the like. Fair enough… neither do I, frankly.
Actually, my colleague’s remark was part of a discussion in our reading of Rene Descartes (I think :D), but I always like meeting new people.
Yes, I’m not quite sure now what I meant by that question apart from the way you took it. I had an idea which I did not express very well and now… well, now it might be gone…
Me too. As I said earlier, just because we tend towards a certain behavior doesn’t mean we are incapable of a different one.
Am I to understand you both then to be saying that the distinction between true science, or “hard” science, if you will, and nonscience, is that “science” is intolerant of simultaneously entertaining multiple models that explain the same data set? Or just that science is more discomfited by having to entertain them and strives for the hegemony of one?
I’m beginning to wonder if perhaps the label “science” is often arbitrary. Do we, in fact, constantly use “regressive assimilation” in attaching the identity of “science” to any particular form of inquiry? (In fact, in attaching an “identity” to any phenomenon of our experience?)
“Science” becomes the midpoint of all of the things that we have seen or heard of “scientists” doing, which of course is heavily weighted in the “now”, and we, after the fact of forming that conceptual prototype, are trying to attach a definition to it? The more similar a particular example is to that prototype, the more we feel it is “science”?
Is the study of history “science”? By most of the definitions proffered or implied in this thread one should say, “yes”. But most of us feel that history should not be contained as a subgroup of science. Is this anything other than arbitrary? We’ve never heard history called science, so we don’t want to accept it as one? Mr. S.'s experience is that modern scientists use technology and attempt to reduce subjectivity to a minimum, and thus objective (instrumental/technologic) measurements are a key component to what he calls science. So much so that he was willing to leave out the whole point of the activity from the definition. And while many of us object to that definition, none of us, myself included, have offered an alternative that encompasses what we tend to call science and distinguishes it from some forms of inquiry that we don’t “feel” should have the same label.